Ultraproduct
The ultraproduct is a mathematical construction that appears mainly in abstract algebra and mathematical logic, in particular in model theory and set theory. An ultraproduct is a quotient of the direct product of a family of structures. All factors need to have the same signature. The ultrapower is the special case of this construction in which all factors are equal.
For example, ultrapowers can be used to construct new fields from given ones. The hyperreal numbers, an ultrapower of the real numbers, are a special case of this.
Some striking applications of ultraproducts include very elegant proofs of the compactness theorem and the completeness theorem, Keisler's ultrapower theorem, which gives an algebraic characterization of the semantic notion of elementary equivalence, and the Robinson–Zakon presentation of the use of superstructures and their monomorphisms to construct nonstandard models of analysis, leading to the growth of the area of nonstandard analysis, which was pioneered (as an application of the compactness theorem) by Abraham Robinson.
Definition
The general method for getting ultraproducts uses an index set $I,$ a structure $M_i$ (assumed to be non-empty in this article) for each element $i \in I$ (all of the same signature), and an ultrafilter $\mathcal{U}$ on $I.$
For any two elements $a_\bullet = \left(a_i\right)_{i \in I}$ and $b_\bullet = \left(b_i\right)_{i \in I}$ of the Cartesian product $\prod_{i \in I} M_i,$
declare them to be $\mathcal{U}$-equivalent, written $a_\bullet \sim b_\bullet$ or $a_\bullet =_{\mathcal{U}} b_\bullet,$ if and only if the set of indices on which they agree is an element of $\mathcal{U};$ in symbols,
$$\left\{i \in I : a_i = b_i\right\} \in \mathcal{U},$$
which compares components only relative to the ultrafilter $\mathcal{U}.$
This binary relation $\sim$ is an equivalence relation on the Cartesian product $\prod_{i \in I} M_i.$
The ultraproduct of $M_\bullet = \left(M_i\right)_{i \in I}$ modulo $\mathcal{U}$ is the quotient set of $\prod_{i \in I} M_i$ with respect to $\sim$ and is therefore sometimes denoted by
$$\prod_{i \in I} M_i \,/\, \mathcal{U} \qquad \text{or} \qquad \prod_{\mathcal{U}} M_\bullet.$$
Explicitly, if the $\mathcal{U}$-equivalence class of an element $a \in \prod_{i \in I} M_i$ is denoted by
$$a_{\mathcal{U}} := \left\{x \in \prod_{i \in I} M_i : x \sim a\right\},$$
then the ultraproduct is the set of all $\mathcal{U}$-equivalence classes
$$\prod_{\mathcal{U}} M_\bullet := \left\{a_{\mathcal{U}} : a \in \prod_{i \in I} M_i\right\}.$$
Although $\mathcal{U}$ was assumed to be an ultrafilter, the construction above can be carried out more generally whenever $\mathcal{U}$ is merely a filter on $I,$ in which case the resulting quotient set $\prod_{i \in I} M_i / \mathcal{U}$ is called a reduced product.
When $\mathcal{U}$ is a principal ultrafilter (which happens if and only if $\mathcal{U}$ contains its kernel $\ker \mathcal{U} := \bigcap_{B \in \mathcal{U}} B$) then the ultraproduct is isomorphic to one of the factors.
And so usually, $\mathcal{U}$ is not a principal ultrafilter, which happens if and only if $\mathcal{U}$ is free (meaning $\bigcap_{B \in \mathcal{U}} B = \varnothing$), or equivalently, if every cofinite subset of $I$ is an element of $\mathcal{U}.$
Since every ultrafilter on a finite set is principal, the index set $I$ is consequently also usually infinite.
The ultraproduct acts as a filter product space: two elements are identified whenever they agree on a set of components that belongs to the filter, and the components outside such a set are ignored under the equivalence.
One may define a finitely additive measure $m$ on the index set $I$ by saying $m(A) = 1$ if $A \in \mathcal{U}$ and $m(A) = 0$ otherwise. Then two members of the Cartesian product are equivalent precisely if they are equal almost everywhere on the index set. The ultraproduct is the set of equivalence classes thus generated.
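The almost-everywhere equivalence just described can be made concrete in code for the one fully computable case, a principal ultrafilter on a finite index set. The helper names below are invented for this sketch; by the earlier remark, the quotient in this degenerate case just recovers the factor at the principal index.

```python
# Illustrative sketch (not part of the article): the U-equivalence from the
# definition above, for a *principal* ultrafilter on a finite index set.
# All names (principal_ultrafilter, measure, is_equivalent) are made up here.

def principal_ultrafilter(index_set, k):
    """The principal ultrafilter at k: all subsets of index_set containing k."""
    return lambda A: k in A and A <= index_set

def measure(A, in_U):
    """The finitely additive 0-1 measure induced by the ultrafilter."""
    return 1 if in_U(A) else 0

def is_equivalent(a, b, index_set, in_U):
    """Two tuples are U-equivalent iff their agreement set lies in U."""
    agreement = {i for i in index_set if a[i] == b[i]}
    return in_U(agreement)

I = {0, 1, 2, 3}
U = principal_ultrafilter(I, 2)          # kernel {2}, so U is principal

a = {0: 10, 1: 11, 2: 12, 3: 13}
b = {0: 99, 1: 98, 2: 12, 3: 97}         # agrees with a only at index 2

# The equivalence only sees the components the ultrafilter "charges":
print(is_equivalent(a, b, I, U))         # True: a and b agree at index 2
print(measure({2, 3}, U), measure({0, 1}, U))
```

Since each class modulo a principal ultrafilter at $k$ is determined by its $k$-th component, this also illustrates why such an ultraproduct is isomorphic to the factor $M_k$; free ultrafilters, by contrast, cannot be exhibited explicitly.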
Finitary operations on the Cartesian product $\prod_{i \in I} M_i$ are defined pointwise (for example, if $+$ is a binary function then $a_i + b_i = (a + b)_i$).
Other relations can be extended the same way:
$$R^{\prod_{\mathcal{U}} M_\bullet}\left(a^1_{\mathcal{U}}, \ldots, a^n_{\mathcal{U}}\right) \iff \left\{i \in I : R^{M_i}\left(a^1_i, \ldots, a^n_i\right)\right\} \in \mathcal{U},$$
where $a_{\mathcal{U}}$ denotes the $\mathcal{U}$-equivalence class of $a$ with respect to $\sim.$
In particular, if every $M_i$ is an ordered field then so is the ultraproduct.
Ultrapower
An ultrapower is an ultraproduct for which all the factors are equal.
Explicitly, the ultrapower of $M$ modulo $\mathcal{U}$ is the ultraproduct of the indexed family $M_\bullet := \left(M_i\right)_{i \in I}$ defined by $M_i := M$ for every index $i \in I.$
The ultrapower may be denoted by $\prod_{\mathcal{U}} M_\bullet$ or (since $\prod_{i \in I} M$ is often denoted by $M^I$) by
$$M^I / \mathcal{U}.$$
For every $m \in M,$ let $(m)_{i \in I}$ denote the constant map $I \to M$ that is identically equal to $m.$ This constant map/tuple is an element of the Cartesian product $M^I$ and so the assignment $m \mapsto (m)_{i \in I}$ defines a map $M \to M^I.$
The natural embedding of $M$ into $\prod_{\mathcal{U}} M_\bullet$ is the map $M \to \prod_{\mathcal{U}} M_\bullet$ that sends an element $m \in M$ to the $\mathcal{U}$-equivalence class of the constant tuple $(m)_{i \in I}.$
Examples
The hyperreal numbers are the ultraproduct of one copy of the real numbers for every natural number, with regard to an ultrafilter over the natural numbers containing all cofinite sets. Their order is the extension of the order of the real numbers. For example, the sequence $\omega$ given by $\omega_i = i$ defines an equivalence class representing a hyperreal number that is greater than any real number.
Analogously, one can define nonstandard integers, nonstandard complex numbers, etc., by taking the ultraproduct of copies of the corresponding structures.
As an example of the carrying over of relations into the ultraproduct, consider the sequence $\psi$ defined by $\psi_i = 2i.$ Because $\psi_i = 2i > i = \omega_i$ for all $i,$ it follows that the equivalence class of $\psi$ is greater than the equivalence class of $\omega,$ so that it can be interpreted as an infinite number which is greater than the one originally constructed. However, let $\chi_i = i$ for $i$ not equal to $7$ but $\chi_7 = 8.$ The set of indices on which $\omega$ and $\chi$ agree is cofinite, hence a member of any such ultrafilter (because $\omega$ and $\chi$ agree almost everywhere), so $\omega$ and $\chi$ belong to the same equivalence class.
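The $\omega,$ $\psi,$ $\chi$ example above can be checked numerically. A free ultrafilter is non-constructive, so the sketch below only inspects a finite window of indices; since every cofinite set belongs to the ultrafilter used for the hyperreals, a finite disagreement set is what places two sequences in the same class.

```python
# Numerical sketch of the ω, ψ, χ example (window size N is arbitrary).
# A free ultrafilter cannot be written down, but cofinite agreement sets
# always belong to it, which is all the example needs.

N = 10_000                             # finite window of the index set

omega = lambda i: i                    # ω_i = i
psi   = lambda i: 2 * i                # ψ_i = 2i
chi   = lambda i: 8 if i == 7 else i   # χ agrees with ω except at i = 7

disagreements = {i for i in range(N) if omega(i) != chi(i)}
print(disagreements)                   # {7}: finite, so the agreement set is
                                       # cofinite and [χ] = [ω] in the ultrapower

# ψ dominates ω at every positive index, so [ψ] > [ω]:
print(all(psi(i) > omega(i) for i in range(1, N)))   # True
```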
In the theory of large cardinals, a standard construction is to take the ultraproduct of the whole set-theoretic universe with respect to some carefully chosen ultrafilter $\mathcal{U}.$ Properties of this ultrafilter $\mathcal{U}$ have a strong influence on (higher order) properties of the ultraproduct; for example, if $\mathcal{U}$ is $\sigma$-complete, then the ultraproduct will again be well-founded. (See measurable cardinal for the prototypical example.)
Łoś's theorem
Łoś's theorem, also called the fundamental theorem of ultraproducts, is due to Jerzy Łoś (the surname is pronounced approximately "wash"). It states that any first-order formula is true in the ultraproduct if and only if the set of indices $i$ such that the formula is true in $M_i$ is a member of $\mathcal{U}.$ More precisely:
Let $\sigma$ be a signature, $\mathcal{U}$ an ultrafilter over a set $I,$ and for each $i \in I$ let $M_i$ be a $\sigma$-structure.
Let $\prod_{\mathcal{U}} M_\bullet$ or $\prod_{i \in I} M_i / \mathcal{U}$ be the ultraproduct of the $M_i$ with respect to $\mathcal{U}.$
Then, for each $a^1, \ldots, a^n \in \prod_{i \in I} M_i,$ where $a^k = \left(a^k_i\right)_{i \in I},$ and for every $\sigma$-formula $\phi,$
$$\prod_{\mathcal{U}} M_\bullet \models \phi\left[a^1_{\mathcal{U}}, \ldots, a^n_{\mathcal{U}}\right] \iff \left\{i \in I : M_i \models \phi\left[a^1_i, \ldots, a^n_i\right]\right\} \in \mathcal{U}.$$
The theorem is proved by induction on the complexity of the formula $\phi.$ The fact that $\mathcal{U}$ is an ultrafilter (and not just a filter) is used in the negation clause, and the axiom of choice is needed at the existential quantifier step. As an application, one obtains the transfer theorem for hyperreal fields.
Examples
Let $R$ be a unary relation in the structure $M,$ and form the ultrapower of $M.$ Then the set $S = \{x \in M : Rx\}$ has an analog ${}^*\!S$ in the ultrapower, and first-order formulas involving $S$ are also valid for ${}^*\!S.$ For example, let $M$ be the reals, and let $Rx$ hold if $x$ is a rational number. Then in $M$ we can say that for any pair of rationals $x$ and $y,$ there exists another number $z$ such that $z$ is not rational, and $x < z < y.$ Since this can be translated into a first-order logical formula in the relevant formal language, Łoś's theorem implies that ${}^*\!S$ has the same property. That is, we can define a notion of the hyperrational numbers, which are a subset of the hyperreals, and they have the same first-order properties as the rationals.
Consider, however, the Archimedean property of the reals, which states that there is no real number $x$ such that $x > 1,\; x > 1 + 1,\; x > 1 + 1 + 1,\; \ldots$ for every inequality in the infinite list. Łoś's theorem does not apply to the Archimedean property, because the Archimedean property cannot be stated in first-order logic. In fact, the Archimedean property is false for the hyperreals, as shown by the construction of the hyperreal number $\omega$ above.
Direct limits of ultrapowers (ultralimits)
In model theory and set theory, the direct limit of a sequence of ultrapowers is often considered. In model theory, this construction can be referred to as an ultralimit or limiting ultrapower.
Beginning with a structure, $A_0,$ and an ultrafilter, $D_0,$ form an ultrapower, $A_1.$ Then repeat the process to form $A_2,$ and so forth. For each $n$ there is a canonical diagonal embedding $A_n \to A_{n+1}.$ At limit stages, such as $A_\omega,$ form the direct limit of earlier stages. One may continue into the transfinite.
Ultraproduct monad
The ultrafilter monad is the codensity monad of the inclusion of the category of finite sets into the category of all sets.
Similarly, the ultraproduct monad is the codensity monad of the inclusion of the category $\mathbf{FinFam}$ of finitely-indexed families of sets into the category $\mathbf{Fam}$ of all indexed families of sets. So in this sense, ultraproducts are categorically inevitable.
Explicitly, an object of $\mathbf{Fam}$ consists of a non-empty index set $I$ and an $I$-indexed family $\left(M_i\right)_{i \in I}$ of sets.
A morphism $\left(N_j\right)_{j \in J} \to \left(M_i\right)_{i \in I}$ between two objects consists of a function $\phi : I \to J$ between the index sets and an $I$-indexed family of functions $\left(\phi_i : N_{\phi(i)} \to M_i\right)_{i \in I}.$
The category $\mathbf{FinFam}$ is the full subcategory of $\mathbf{Fam}$ consisting of all objects whose index set $I$ is finite.
The codensity monad of the inclusion map $\mathbf{FinFam} \hookrightarrow \mathbf{Fam}$ is then, in essence, given by sending a family $\left(M_i\right)_{i \in I}$ to the family of its ultraproducts $\prod_{\mathcal{U}} M_\bullet,$ indexed by the ultrafilters $\mathcal{U}$ on $I.$
Little egret
The little egret (Egretta garzetta) is a species of small heron in the family Ardeidae. It is a white bird with a slender black beak, long black legs and, in the western race, yellow feet. As an aquatic bird, it feeds in shallow water and on land, consuming a variety of small creatures. It breeds colonially, often with other species of water birds, making a platform nest of sticks in a tree, bush or reed bed. A clutch of three to five bluish-green eggs is laid and incubated by both parents for about three weeks. The young fledge at about six weeks of age.
Its breeding distribution is in wetlands in warm temperate to tropical parts of Asia, Africa, Australia, and Europe. A successful colonist, its range has gradually expanded north, with stable and self-sustaining populations now present in the United Kingdom.
In warmer locations, most birds are permanent residents; northern populations, including many European birds, migrate to Africa and southern Asia to over-winter there. The birds may also wander north in late summer after the breeding season, and their tendency to disperse may have assisted in the recent expansion of the bird's range. At one time common in Western Europe, it was hunted extensively in the 19th century to provide plumes for the decoration of hats and became locally extinct in northwestern Europe and scarce in the south. Around 1950, conservation laws were introduced in southern Europe to protect the species and their numbers began to increase. By the beginning of the 21st century the bird was breeding again in France, the Netherlands, Ireland and Britain. Its range is continuing to expand westward, and the species has begun to colonise the New World; it was first seen in Barbados in 1954 and first bred there in 1994. The International Union for Conservation of Nature has assessed the bird's global conservation status as being of "least concern".
Taxonomy
The little egret was formally described by the Swedish naturalist Carl Linnaeus in 1766 in the twelfth edition of his Systema Naturae under the binomial name Ardea garzetta. It is now placed with 12 other species in the genus Egretta that was introduced in 1817 by the German naturalist Johann Reinhold Forster with the little egret as the type species. The genus name comes from the Provençal French Aigrette, "egret", a diminutive of Aigron, "heron". The species epithet garzetta is from the Italian name for this bird, garzetta or sgarzetta.
Two subspecies are recognised:
E. g. garzetta (Linnaeus, 1766) – nominate, found in Europe, Africa, and most of Asia except the south-east
E. g. nigripes (Temminck, 1840) – found in the Sunda Islands, Australia and New Zealand
Three other egret taxa have at times been classified as subspecies of the little egret in the past but are now regarded as two separate species. These are the western reef heron Egretta gularis which occurs on the coastline of West Africa (Egretta gularis gularis) and from the Red Sea to India (Egretta gularis schistacea), and the dimorphic egret (Egretta dimorpha), found in East Africa, Madagascar, the Comoros and the Aldabra Islands.
Description
The adult little egret is 55–65 cm long with an 88–106 cm wingspan, and weighs 350–550 g. Its plumage is normally entirely white, although there are dark forms with largely bluish-grey plumage. In the breeding season, the adult has two long plumes on the nape that form a crest. These plumes are about 150 mm long and are pointed and very narrow. There are similar feathers on the breast, but the barbs are more widely spread. There are also several elongated scapular feathers that have long loose barbs and may be 200 mm long. During the winter the plumage is similar but the scapulars are shorter and more normal in appearance. The bill is long and slender and it and the lores are black. There is an area of greenish-grey bare skin at the base of the lower mandible and around the eye which has a yellow iris. The legs are black and the feet yellow. Juveniles are similar to non-breeding adults but have greenish-black legs and duller yellow feet, and may have a certain proportion of greyish or brownish feathers. The subspecies nigripes differs in having yellow skin between the bill and eye, and blackish feet. During the height of courtship, the lores turn red and the feet of the yellow-footed races turn red.
Little egrets are mostly silent but make various croaking and bubbling calls at their breeding colonies and produce a harsh alarm call when disturbed. To the human ear, the sounds are indistinguishable from the black-crowned night heron (Nycticorax nycticorax) and the cattle egret (Bubulcus ibis) with which it sometimes associates.
Distribution and habitat
The breeding range of the western race (E. g. garzetta) includes southern Europe, the Middle East, much of Africa and southern Asia. Northern European populations are migratory, mostly travelling to Africa although some remain in southern Europe, while some Asian populations migrate to the Philippines. The eastern race, E. g. nigripes, is resident in Indonesia and New Guinea, while the population sometimes separated as E. g. immaculata inhabits Australia and New Zealand, but does not breed in the latter. During the late twentieth century, the range of the little egret expanded northwards in Europe and into the New World, where a breeding population was established on Barbados in 1994. The birds have since spread elsewhere in the Caribbean region and on the Atlantic coast of the United States.
The little egret's habitat varies widely, and includes the shores of lakes, rivers, canals, ponds, lagoons, marshes and flooded land, the bird preferring open locations to dense cover. On the coast it inhabits mangrove areas, swamps, mudflats, sandy beaches and reefs. Rice fields are an important habitat in Italy, and coastal and mangrove areas are important in Africa. The bird often moves about among cattle or other hoofed mammals.
Behaviour
Little egrets are sociable birds and are often seen in small flocks. Nevertheless, individual birds do not tolerate others coming too close to their chosen feeding site, though this depends on the abundance of prey.
Food and feeding
They use a variety of methods to procure their food; they stalk their prey in shallow water, often running with raised wings or shuffling their feet to disturb small fish, or may stand still and wait to ambush prey. They make use of opportunities provided by cormorants disturbing fish or humans attracting fish by throwing bread into water. On land they walk or run while chasing their prey, feed on creatures disturbed by grazing livestock and ticks on the livestock, and even scavenge. Their diet is mainly fish, but amphibians, small reptiles, mammals and birds are also eaten, as well as crustaceans, molluscs, insects, spiders and worms.
Breeding
Little egrets nest in colonies, often with other wading birds. On the coasts of western India these colonies may be in urban areas, and associated birds include cattle egrets (Bubulcus ibis), black-crowned night herons (Nycticorax nycticorax) and black-headed ibises (Threskiornis melanocephalus). In Europe, associated species may be squacco herons (Ardeola ralloides), cattle egrets, black-crowned night herons and glossy ibises (Plegadis falcinellus). The nests are usually platforms of sticks built in trees or shrubs, or in reed beds or bamboo groves. In some locations such as the Cape Verde Islands, the birds nest on cliffs. Pairs defend a small breeding territory, usually extending only a short distance around the nest. The three to five eggs are incubated by both adults for 21 to 25 days before hatching. They are oval in shape and have a pale, non-glossy, blue-green shell colour. The young birds are covered in white down feathers, are cared for by both parents and fledge after 40 to 45 days.
Conservation
Globally, the little egret is not listed as a threatened species and has in fact expanded its range over the last few decades. The International Union for Conservation of Nature states that the species' wide distribution and large total population mean that it is assessed as a species of "least concern".
Status in northwestern Europe
Historical research has shown that the little egret was once present, and probably common, in Ireland and Great Britain, but became extinct there through a combination of over-hunting in the late medieval period and climate change at the start of the Little Ice Age. The inclusion of 1,000 egrets (among numerous other birds) in the banquet to celebrate the enthronement of George Neville as Archbishop of York at Cawood Castle in 1465 indicates the presence of a sizable population in northern England at the time, and they are also listed in the coronation feast of King Henry VI in 1429. They had become scarce by the mid-16th century, when William Gowreley, "yeoman purveyor to the Kinges mowthe", "had to send further south" for egrets. In 1804 Thomas Bewick commented that if it were the same bird as listed in Neville's bill of fare "No wonder this species has become nearly extinct in this country!"
Further declines occurred throughout Europe as the plumes of the little egret and other egrets were in demand for decorating hats. They had been used in the plume trade since at least the 17th century but in the 19th century it became a major craze and the number of egret skins passing through dealers reached into the millions. Complete statistics do not exist, but in the first three months of 1885, 750,000 egret skins were sold in London, while in 1887 one London dealer sold 2 million egret skins. Egret farms were set up where the birds could be plucked without being killed but most of the supply of so-called "Osprey plumes" was obtained by hunting, which reduced the population of the species to dangerously low levels and stimulated the establishment of Britain's Royal Society for the Protection of Birds in 1889.
By the 1950s, the little egret had become restricted to southern Europe, and conservation laws protecting the species were introduced. This allowed the population to rebound strongly; over the next few decades it became increasingly common in western France and later on the north coast. It bred in the Netherlands in 1979 with further breeding from the 1990s onward. About 22,700 pairs are thought to breed in Europe, with populations stable or increasing in Spain, France and Italy but decreasing in Greece.
In Britain it was a rare vagrant from its 16th-century disappearance until the late 20th century, and did not breed. It has however recently become a regular breeding species and is commonly present, often in large numbers, at favoured coastal sites. The first recent breeding record in England was on Brownsea Island in Dorset in 1996, and the species bred in Wales for the first time in 2002. The population increase has been rapid subsequently, with over 750 pairs breeding in nearly 70 colonies in 2008, and a post-breeding total of 4,540 birds in September 2008. The first record of breeding in Scotland happened in 2020 in Dumfries & Galloway. In Ireland, the species bred for the first time in 1997 at a site in County Cork and the population has also expanded rapidly since, breeding in most Irish counties by 2010. Severe winter weather in 2010–2012 proved to be only a temporary setback, and the species continues to spread.
Status in Australia
In Australia, its status varies from state to state. It is listed as "Threatened" on the Victorian Flora and Fauna Guarantee Act 1988. Under this act, an Action Statement for the recovery and future management of this species has been prepared. On the 2007 advisory list of threatened vertebrate fauna in Victoria, the little egret is listed as endangered.
Colonisation of the New World
With its range continuing to expand, the little egret has now started to colonise the New World. The first record there was on Barbados in April 1954. The bird began breeding on the island in 1994 and now also breeds in the Bahamas. They may have made the crossing from Western Africa. Ringed birds from Spain provide a clue to the birds' origin. The birds are very similar in appearance to the snowy egret and share colonial nesting sites with these birds in Barbados, where they are both recent arrivals. The little egrets are larger, have more varied foraging strategies and exert dominance over feeding sites.
Little egrets are seen with increasing regularity over a wider area and have been observed from Suriname and Brazil in the south to Newfoundland, Quebec and Ontario in the north. Birds on the east coast of North America are thought to have moved north with snowy egrets from the Caribbean. In June 2011, a little egret was spotted in Maine, in the Scarborough Marsh, near the Audubon Center.
Astronomical spectroscopy
Astronomical spectroscopy is the study of astronomy using the techniques of spectroscopy to measure the spectrum of electromagnetic radiation, including visible light, ultraviolet, X-ray, infrared and radio waves that radiate from stars and other celestial objects. A stellar spectrum can reveal many properties of stars, such as their chemical composition, temperature, density, mass, distance and luminosity. Spectroscopy can show the velocity of motion towards or away from the observer by measuring the Doppler shift. Spectroscopy is also used to study the physical properties of many other types of celestial objects such as planets, nebulae, galaxies, and active galactic nuclei.
Background
Astronomical spectroscopy is used to measure three major bands of radiation in the electromagnetic spectrum: visible light, radio waves, and X-rays. While all spectroscopy looks at specific bands of the spectrum, different methods are required to acquire the signal depending on the frequency. Ozone (O3) and molecular oxygen (O2) absorb light with wavelengths under 300 nm, meaning that X-ray and ultraviolet spectroscopy require the use of a satellite telescope or rocket-mounted detectors. Radio signals have much longer wavelengths than optical signals, and require the use of antennas or radio dishes. Infrared light is absorbed by atmospheric water and carbon dioxide, so while the equipment is similar to that used in optical spectroscopy, satellites are required to record much of the infrared spectrum.
Optical spectroscopy
Physicists have been looking at the solar spectrum since Isaac Newton first used a simple prism to observe the refractive properties of light. In the early 1800s Joseph von Fraunhofer used his skills as a glassmaker to create very pure prisms, which allowed him to observe 574 dark lines in a seemingly continuous spectrum. Soon after this, he combined telescope and prism to observe the spectrum of Venus, the Moon, Mars, and various stars such as Betelgeuse; his company continued to manufacture and sell high-quality refracting telescopes based on his original designs until its closure in 1884.
The resolution of a prism is limited by its size; a larger prism will provide a more detailed spectrum, but the increase in mass makes it unsuitable for highly detailed work. This issue was resolved in the early 1900s with the development of high-quality reflection gratings by J.S. Plaskett at the Dominion Observatory in Ottawa, Canada. Light striking a mirror will reflect at the same angle; however, a small portion of the light will be refracted at a different angle, dependent upon the indices of refraction of the materials and the wavelength of the light. By creating a "blazed" grating which utilizes a large number of parallel mirrors, the small portion of light can be focused and visualized. These new spectroscopes were more detailed than a prism, required less light, and could be focused on a specific region of the spectrum by tilting the grating.
The limitation to a blazed grating is the width of the mirrors, which can only be ground a finite amount before focus is lost; the maximum is around 1000 lines/mm. In order to overcome this limitation holographic gratings were developed. Volume phase holographic gratings use a thin film of dichromated gelatin on a glass surface, which is subsequently exposed to a wave pattern created by an interferometer. This wave pattern sets up a reflection pattern similar to the blazed gratings but utilizing Bragg diffraction, a process where the angle of reflection is dependent on the arrangement of the atoms in the gelatin. The holographic gratings can have up to 6000 lines/mm and can be up to twice as efficient in collecting light as blazed gratings. Because they are sealed between two sheets of glass, the holographic gratings are very versatile, potentially lasting decades before needing replacement.
Light dispersed by the grating or prism in a spectrograph can be recorded by a detector. Historically, photographic plates were widely used to record spectra until electronic detectors were developed, and today optical spectrographs most often employ charge-coupled devices (CCDs). The wavelength scale of a spectrum can be calibrated by observing the spectrum of emission lines of known wavelength from a gas-discharge lamp. The flux scale of a spectrum can be calibrated as a function of wavelength by comparison with an observation of a standard star with corrections for atmospheric absorption of light; this is known as spectrophotometry.
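The wavelength-calibration step just described, matching lamp emission lines of known wavelength to their measured pixel positions and fitting a smooth pixel-to-wavelength mapping, can be sketched as follows. The line list and pixel centres below are invented for the illustration (here a detector with an exactly linear dispersion of 0.1 nm per pixel):

```python
import numpy as np

# Hypothetical calibration data: lamp lines (nm) and their measured CCD
# pixel centres, constructed so that λ = 400 + 0.1 · pixel exactly.

lambdas = np.array([404.66, 435.83, 546.07, 576.96, 579.07])   # invented lines
pixels  = np.array([46.6, 358.3, 1460.7, 1769.6, 1790.7])      # invented centres

# Fit a low-order polynomial mapping pixel -> wavelength:
coeffs = np.polyfit(pixels, lambdas, deg=2)
pixel_to_wavelength = np.poly1d(coeffs)

# Any pixel on the detector can now be assigned a wavelength:
print(np.round(pixel_to_wavelength(1000.0), 2))   # 500.0 for this linear setup
```

In practice the fit residuals against the known lamp wavelengths are checked before the solution is applied to a science spectrum.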
Radio spectroscopy
Radio astronomy was founded with the work of Karl Jansky in the early 1930s, while working for Bell Labs. He built a radio antenna to look at potential sources of interference for transatlantic radio transmissions. One of the sources of noise discovered came not from Earth, but from the center of the Milky Way, in the constellation Sagittarius. In 1942, J. S. Hey detected radio emission from the Sun using military radar receivers. Radio spectroscopy started with the discovery of the 21-centimeter H I line in 1951.
Radio interferometry
Radio interferometry was pioneered in 1946, when Joseph Lade Pawsey, Ruby Payne-Scott and Lindsay McCready used a single antenna atop a sea cliff to observe 200 MHz solar radiation. Two incident beams, one directly from the sun and the other reflected from the sea surface, generated the necessary interference. The first multi-receiver interferometer was built in the same year by Martin Ryle and Vonberg. In 1960, Ryle and Antony Hewish published the technique of aperture synthesis to analyze interferometer data. The aperture synthesis process, which involves autocorrelating and discrete Fourier transforming the incoming signal, recovers both the spatial and frequency variation in flux. The result is a 3D image whose third axis is frequency. For this work, Ryle and Hewish were jointly awarded the 1974 Nobel Prize in Physics.
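The "autocorrelating and discrete Fourier transforming" step above rests on the Wiener–Khinchin theorem: the Fourier transform of a signal's autocorrelation is its power spectrum, which is how correlating the incoming voltage stream recovers flux as a function of frequency. A minimal numpy sketch, using synthetic noise in place of a real signal:

```python
import numpy as np

# Wiener–Khinchin sketch: the DFT of a signal's circular autocorrelation
# equals its power spectrum. The "signal" is synthetic Gaussian noise,
# purely for illustration.

rng = np.random.default_rng(0)
signal = rng.normal(size=256)

power = np.abs(np.fft.fft(signal)) ** 2     # power spectrum of the signal
autocorr = np.fft.ifft(power).real          # circular autocorrelation

# Transforming the autocorrelation returns the same power spectrum:
recovered = np.fft.fft(autocorr).real
print(np.allclose(recovered, power))        # True
```

In an interferometer the correlation is between antenna pairs rather than a signal with itself, but the same transform relation is what turns measured correlations into a spectrum.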
X-ray spectroscopy
Stars and their properties
Chemical properties
Newton used a prism to split white light into a spectrum of color, and Fraunhofer's high-quality prisms allowed scientists to see dark lines of an unknown origin. In the 1850s, Gustav Kirchhoff and Robert Bunsen described the phenomena behind these dark lines. Hot solid objects produce light with a continuous spectrum, hot gases emit light at specific wavelengths, and hot solid objects surrounded by cooler gases show a near-continuous spectrum with dark lines corresponding to the emission lines of the gases. By comparing the absorption lines of the Sun with emission spectra of known gases, the chemical composition of stars can be determined.
The major Fraunhofer lines, and the elements with which they are associated, appear in the following table. Designations from the early Balmer Series are shown in parentheses.
Not all of the elements in the Sun were immediately identified. Two examples are listed below:
In 1868 Norman Lockyer and Pierre Janssen independently observed a line next to the sodium doublet (D1 and D2) which Lockyer determined to be caused by a new element. He named it helium, but it was not until 1895 that the element was found on Earth.
In 1869 the astronomers Charles Augustus Young and William Harkness independently observed a novel green emission line in the Sun's corona during an eclipse. This "new" element was incorrectly named coronium, as it was only found in the corona. It was not until the 1930s that Walter Grotrian and Bengt Edlén discovered that the spectral line at 530.3 nm was due to highly ionized iron (Fe13+). Other unusual lines in the coronal spectrum are also caused by highly charged ions, such as nickel and calcium, the high ionization being due to the extreme temperature of the solar corona.
To date more than 20 000 absorption lines have been listed for the Sun between 293.5 and 877.0 nm, yet only approximately 75% of these lines have been linked to elemental absorption.
By analyzing the equivalent width of each spectral line in an emission spectrum, both the elements present in a star and their relative abundances can be determined. Using this information stars can be categorized into stellar populations; Population I stars are the youngest stars and have the highest metal content (the Sun is a Pop I star), while Population III stars are the oldest stars with a very low metal content.
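The equivalent width mentioned above is the width of a rectangle, at the continuum level, enclosing the same area as the spectral line; for a normalized spectrum it is $W = \int \left(1 - F/F_c\right)\, d\lambda.$ A sketch with a synthetic Gaussian line (all line parameters invented) checks this against the closed-form Gaussian area:

```python
import numpy as np

# Equivalent-width sketch: numerically integrate a synthetic, normalized
# absorption line and compare with the analytic Gaussian result
# W = depth · σ · sqrt(2π). All parameters below are invented.

wavelength = np.linspace(650.0, 662.0, 2001)       # nm grid
continuum = 1.0                                     # normalized continuum
depth, center, sigma = 0.8, 656.3, 0.05             # synthetic line parameters
flux = continuum - depth * np.exp(-0.5 * ((wavelength - center) / sigma) ** 2)

step = wavelength[1] - wavelength[0]
W = np.sum(1.0 - flux / continuum) * step           # Riemann-sum integral

print(round(float(W), 4), round(depth * sigma * np.sqrt(2 * np.pi), 4))
```

A deeper or broader line has a larger equivalent width, which is what ties the measurement to elemental abundance.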
Temperature and size
In 1860 Gustav Kirchhoff proposed the idea of a black body, a material that emits electromagnetic radiation at all wavelengths. In 1894 Wilhelm Wien derived an expression relating the temperature (T) of a black body to its peak emission wavelength (λmax):
$$\lambda_{\max} = \frac{b}{T}.$$
b is a constant of proportionality called Wien's displacement constant, equal to $2.898 \times 10^{-3}\ \mathrm{m{\cdot}K}.$ This equation is called Wien's Law. By measuring the peak wavelength of a star, the surface temperature can be determined. For example, if the peak wavelength of a star is 502 nm the corresponding temperature will be 5772 kelvins.
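The worked figure above follows directly from Wien's law:

```python
# Wien's law as used in the text: T = b / λ_max.

b = 2.897771955e-3          # m·K, Wien's displacement constant
lam_max = 502e-9            # m, the peak wavelength from the example above

T = b / lam_max
print(round(T))             # 5772, matching the quoted surface temperature
```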
The luminosity of a star is a measure of the electromagnetic energy output in a given amount of time. Luminosity (L) can be related to the temperature (T) of a star by:
$$L = 4 \pi R^2 \sigma T^4,$$
where R is the radius of the star and σ is the Stefan–Boltzmann constant, with a value of $5.670 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}.$ Thus, when both luminosity and temperature are known (via direct measurement and calculation) the radius of a star can be determined.
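Solving that relation for the radius, with the Sun's standard reference luminosity and effective temperature plugged in for illustration, recovers the solar radius:

```python
import math

# Solving L = 4πR²σT⁴ for R. L and T are standard solar reference values,
# used here only to illustrate the method described above.

sigma = 5.670374419e-8      # W m⁻² K⁻⁴, Stefan–Boltzmann constant
L = 3.828e26                # W, nominal solar luminosity
T = 5772.0                  # K, solar effective temperature

R = math.sqrt(L / (4 * math.pi * sigma * T ** 4))
print(f"{R:.3e}")           # about 6.957e+08 m, the solar radius
```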
Galaxies
The spectra of galaxies look similar to stellar spectra, as they consist of the combined light of billions of stars.
Doppler shift studies of galaxy clusters by Fritz Zwicky in 1937 found that the galaxies in a cluster were moving much faster than seemed to be possible from the mass of the cluster inferred from the visible light. Zwicky hypothesized that there must be a great deal of non-luminous matter in the galaxy clusters, which became known as dark matter. Since his discovery, astronomers have determined that a large portion of galaxies (and most of the universe) is made up of dark matter. In 2003, however, four galaxies (NGC 821, NGC 3379, NGC 4494, and NGC 4697) were found to have little to no dark matter influencing the motion of the stars contained within them; the reason behind the lack of dark matter is unknown.
In the 1950s, strong radio sources were found to be associated with very dim, very red objects. When the first spectrum of one of these objects was taken there were absorption lines at wavelengths where none were expected. It was soon realised that what was observed was a normal galactic spectrum, but highly red shifted. These were named quasi-stellar radio sources, or quasars, by Hong-Yee Chiu in 1964. Quasars are now thought to be galaxies formed in the early years of our universe, with their extreme energy output powered by super-massive black holes.
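The "highly red shifted" spectra that gave quasars away are quantified by the redshift $z = (\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}})/\lambda_{\mathrm{emit}}.$ A small sketch using the standard hydrogen Lyman-α rest wavelength (the observed wavelength is invented for the illustration):

```python
# Redshift of a shifted spectral line: z = (λ_obs − λ_emit) / λ_emit.
# Lyman-α's rest wavelength is a standard value; the observed position
# below is hypothetical.

lam_emit = 121.6            # nm, hydrogen Lyman-α rest wavelength
lam_obs = 364.8             # nm, hypothetical observed position of the line

z = (lam_obs - lam_emit) / lam_emit
print(round(z, 6))          # 2.0: the line arrives at three times its rest wavelength
```

At such redshifts a line emitted in the far ultraviolet lands in the visible band, which is why the first quasar spectra showed lines at wholly unexpected wavelengths.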
The properties of a galaxy can also be determined by analyzing the stars found within them. NGC 4550, a galaxy in the Virgo Cluster, has a large portion of its stars rotating in the opposite direction as the other portion. It is believed that the galaxy is the combination of two smaller galaxies that were rotating in opposite directions to each other. Bright stars in galaxies can also help determine the distance to a galaxy, which may be a more accurate method than parallax or standard candles.
Interstellar medium
The interstellar medium is matter that occupies the space between star systems in a galaxy. 99% of this matter is gaseous – hydrogen, helium, and smaller quantities of other ionized elements such as oxygen. The other 1% is dust particles, thought to be mainly graphite, silicates, and ices. Clouds of the dust and gas are referred to as nebulae.
There are three main types of nebula: absorption, reflection, and emission nebulae. Absorption (or dark) nebulae are made of dust and gas in such quantities that they obscure the starlight behind them, making photometry difficult. Reflection nebulae, as their name suggests, reflect the light of nearby stars. Their spectra are the same as the stars surrounding them, though the light is bluer, because shorter wavelengths scatter more efficiently than longer ones. Emission nebulae emit light at specific wavelengths depending on their chemical composition.
Gaseous emission nebulae
In the early years of astronomical spectroscopy, scientists were puzzled by the spectrum of gaseous nebulae. In 1864 William Huggins noticed that many nebulae showed only emission lines rather than a full spectrum like stars. From the work of Kirchhoff, he concluded that nebulae must contain "enormous masses of luminous gas or vapour." However, there were several emission lines that could not be linked to any terrestrial element, brightest among them lines at 495.9 nm and 500.7 nm. These lines were attributed to a new element, nebulium, until Ira Bowen determined in 1927 that the emission lines were from doubly ionised oxygen (O²⁺). These emission lines could not be replicated in a laboratory because they are forbidden lines; the low density of a nebula (one atom per cubic centimetre) allows for metastable ions to decay via forbidden line emission rather than collisions with other atoms.
Not all emission nebulae are found around or near stars, where stellar radiation causes ionisation. The majority of gaseous emission nebulae are formed of neutral hydrogen. In the ground state neutral hydrogen has two possible spin states: the electron has either the same spin or the opposite spin of the proton. When the atom transitions between these two states, it produces an emission or absorption line at a wavelength of 21 cm. This line is within the radio range and allows for very precise measurements:
Velocity of the cloud can be measured via Doppler shift
The intensity of the 21 cm line gives the density and number of atoms in the cloud
The temperature of the cloud can be calculated
Using this information, the shape of the Milky Way has been determined to be a spiral galaxy, though the exact number and position of the spiral arms is the subject of ongoing research.
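The first of these measurements, velocity via Doppler shift of the 21 cm line, can be sketched directly; the example observed wavelength is an assumed illustrative value:

```python
# Radial velocity of a hydrogen cloud from the shift of the 21 cm line,
# using the non-relativistic Doppler approximation (valid for v << c).
C = 299_792_458.0        # speed of light, m/s
REST_21CM = 0.21106114   # rest wavelength of the neutral-hydrogen line, m

def radial_velocity(observed_wavelength_m: float) -> float:
    """Positive result = receding (redshifted); negative = approaching."""
    return C * (observed_wavelength_m - REST_21CM) / REST_21CM

# A cloud observed at 21.113 cm is receding at roughly 100 km/s.
v = radial_velocity(0.21113)
```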
Complex molecules
Dust and molecules in the interstellar medium not only obscure photometry, but also cause absorption lines in spectroscopy. Their spectral features are generated by transitions of component electrons between different energy levels, or by rotational or vibrational spectra. Detection usually occurs in radio, microwave, or infrared portions of the spectrum. The chemical reactions that form these molecules can happen in cold, diffuse clouds or in dense regions illuminated with ultraviolet light. Most known compounds in space are organic, ranging from small molecules, e.g. acetylene (C2H2) and acetone ((CH3)2CO); to entire classes of large molecules, e.g. fullerenes and polycyclic aromatic hydrocarbons; to solids, such as graphite or other sooty material.
Motion in the universe
Stars and interstellar gas are bound by gravity to form galaxies, and groups of galaxies can be bound by gravity in galaxy clusters. With the exception of stars in the Milky Way and the galaxies in the Local Group, almost all galaxies are moving away from Earth due to the expansion of the universe.
Doppler effect and redshift
The motion of stellar objects can be determined by looking at their spectrum. Because of the Doppler effect, objects moving towards someone are blueshifted, and objects moving away are redshifted. The wavelength of redshifted light is longer, appearing redder than the source. Conversely, the wavelength of blueshifted light is shorter, appearing bluer than the source light:
λ = λ₀(1 + v/c),
where λ₀ is the emitted wavelength, v is the velocity of the object, c is the speed of light, and λ is the observed wavelength. Note that v < 0 corresponds to λ < λ₀, a blueshifted wavelength. A redshifted absorption or emission line will appear more towards the red end of the spectrum than a stationary line. In 1913 Vesto Slipher determined the Andromeda Galaxy was blueshifted, meaning it was moving towards the Milky Way. He recorded the spectra of 20 other galaxies — all but four of which were redshifted — and was able to calculate their velocities relative to the Earth. Edwin Hubble would later use this information, as well as his own observations, to define Hubble's law: The further a galaxy is from the Earth, the faster it is moving away. Hubble's law can be generalised to:
v = H₀D,
where v is the velocity (or Hubble flow), H₀ is the Hubble constant, and D is the distance from Earth.
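Hubble's law is a one-line calculation once a value of the Hubble constant is chosen; 70 km/s/Mpc below is an assumed, commonly quoted figure (the precise value is still debated):

```python
# Hubble's law v = H0 * D, sketched with an assumed illustrative H0.
H0 = 70.0  # Hubble constant, km/s per megaparsec (assumed value)

def recession_velocity(distance_mpc: float, h0: float = H0) -> float:
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return h0 * distance_mpc

# A galaxy 100 Mpc away recedes at 7,000 km/s under this H0.
v = recession_velocity(100.0)
```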
Redshift (z) can be expressed by the following equations:
z = (λ_observed − λ_rest) / λ_rest = (f_rest − f_observed) / f_observed
In these equations, frequency is denoted by f and wavelength by λ. The larger the value of z, the more redshifted the light and the farther away the object is from the Earth. As of January 2013, the largest galaxy redshift of z~12 was found using the Hubble Ultra-Deep Field, corresponding to an age of over 13 billion years (the universe is approximately 13.82 billion years old).
The Doppler effect and Hubble's law can be combined to form the equation
z ≈ H₀D / c,
where c is the speed of light.
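Inverting this relation turns a measured redshift into a rough distance estimate; the sketch below again assumes H₀ = 70 km/s/Mpc, and the z = 0.023 input is an illustrative value:

```python
# Distance from redshift via z ~ H0 * D / c, valid only for z << 1.
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc

def distance_from_redshift(z: float, h0: float = H0) -> float:
    """Approximate distance in Mpc; only trustworthy for small redshifts."""
    return C_KM_S * z / h0

# A galaxy at z = 0.023 lies roughly 100 Mpc away under these assumptions.
d = distance_from_redshift(0.023)
```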
Peculiar motion
Objects that are gravitationally bound will rotate around a common center of mass. For stellar bodies, this motion is known as peculiar velocity and can alter the Hubble flow. Thus, an extra term for the peculiar motion needs to be added to Hubble's law:
v = H₀D + v_pec
This motion can cause confusion when looking at a solar or galactic spectrum, because the expected redshift based on the simple Hubble law will be obscured by the peculiar motion. For example, the shape and size of the Virgo Cluster has been a matter of great scientific scrutiny due to the very large peculiar velocities of the galaxies in the cluster.
Binary stars
Just as planets can be gravitationally bound to stars, pairs of stars can orbit each other. Some binary stars are visual binaries, meaning they can be observed orbiting each other through a telescope. Some binary stars, however, are too close together to be resolved. These two stars, when viewed through a spectrometer, will show a composite spectrum: the spectrum of each star will be added together. This composite spectrum becomes easier to detect when the stars are of similar luminosity and of different spectral class.
Spectroscopic binaries can be also detected due to their radial velocity; as they orbit around each other one star may be moving towards the Earth whilst the other moves away, causing a Doppler shift in the composite spectrum. The orbital plane of the system determines the magnitude of the observed shift: if the observer is looking perpendicular to the orbital plane there will be no observed radial velocity. For example, a person looking at a carousel from the side will see the animals moving toward and away from them, whereas someone looking from directly above will see the animals moving only in the horizontal plane.
Planets, asteroids, and comets
Planets, asteroids, and comets all reflect light from their parent stars and emit their own light. For cooler objects, including Solar System planets and asteroids, most of the emission is at infrared wavelengths we cannot see, but that are routinely measured with spectrometers. For objects surrounded by gas, such as comets and planets with atmospheres, further emission and absorption happens at specific wavelengths in the gas, imprinting the spectrum of the gas on that of the solid object. In the case of worlds with thick atmospheres or complete cloud or haze cover (such as the four giant planets, Venus, and Saturn's satellite Titan), the spectrum is mostly or completely due to the atmosphere alone.
Planets
The reflected light of a planet contains absorption bands due to minerals in the rocks present for rocky bodies, or due to the elements and molecules present in the atmosphere. To date over 3,500 exoplanets have been discovered. These include so-called Hot Jupiters, as well as Earth-like planets. Using spectroscopy, compounds such as alkali metals, water vapor, carbon monoxide, carbon dioxide, and methane have all been discovered.
Asteroids
Asteroids can be classified into three major types according to their spectra. The original categories were created by Clark R. Chapman, David Morrison, and Ben Zellner in 1975, and further expanded by David J. Tholen in 1984. In what is now known as the Tholen classification, the C-types are made of carbonaceous material, S-types consist mainly of silicates, and X-types are 'metallic'. There are other classifications for unusual asteroids. C- and S-type asteroids are the most common asteroids. In 2002 the Tholen classification was further "evolved" into the SMASS classification, expanding the number of categories from 14 to 26 to account for more precise spectroscopic analysis of the asteroids.
Comets
The spectra of comets consist of a reflected solar spectrum from the dusty clouds surrounding the comet, as well as emission lines from gaseous atoms and molecules excited to fluorescence by sunlight and/or chemical reactions. For example, the chemical composition of Comet ISON was determined by spectroscopy due to the prominent emission lines of cyanogen (CN), as well as two- and three-carbon atoms (C2 and C3). Nearby comets can even be seen in X-ray as solar wind ions flying to the coma are neutralized. The cometary X-ray spectra therefore reflect the state of the solar wind rather than that of the comet.
Pallas's cat
The Pallas's cat (Otocolobus manul), also known as the manul, is a small wild cat with long and dense light grey fur, and rounded ears set low on the sides of the head. Its head-and-body length ranges from with a long bushy tail. It is well camouflaged and adapted to the cold continental climate in its native range, which receives little rainfall and experiences a wide range of temperatures.
The Pallas's cat was first described in 1776 by Peter Simon Pallas, who observed it in the vicinity of Lake Baikal. Since then, it has been recorded across a large region in Central Asia, albeit in widely spaced sites from the Caucasus, Iranian Plateau, Hindu Kush, parts of the Himalayas, Tibetan Plateau to the Altai-Sayan region and South Siberian Mountains. It inhabits rocky montane grasslands and shrublands, where the snow cover is below . It finds shelter in rock crevices and burrows, and preys foremost on lagomorphs and rodents. The female gives birth to between two and six kittens in spring.
Due to its widespread range and assumed large population, the Pallas's cat has been listed as Least Concern on the IUCN Red List since 2020. Some population units are threatened by poaching, prey base decline due to rodent control programs, and habitat fragmentation as a result of mining and infrastructure projects.
The Pallas's cat has been kept in zoos since the early 1950s. Sixty zoos in Europe, Russia, North America and Japan participate in Pallas's cat captive breeding programs.
Taxonomy
Felis manul was the scientific name used by Peter Simon Pallas in 1776, who first described a Pallas's cat that he had encountered near the Dzhida River southeast of Lake Baikal.
Several Pallas's cat zoological specimens were subsequently described:
Felis nigripectus proposed by Brian Houghton Hodgson in 1842 was based on three specimens from Tibet.
Otocolobus manul ferrugineus proposed by Sergey Ognev in 1928 was an erythristic specimen from the Kopet Dag mountains.
Otocolobus was proposed by Johann Friedrich von Brandt in 1842 as a generic name. Reginald Innes Pocock recognized the taxonomic rank of Otocolobus in 1907, described several Pallas's cat skulls in detail and considered the Pallas's cat an aberrant form of Felis.
In 1951, John Ellerman and Terence Morrison-Scott considered
the nominate subspecies Felis manul manul to be distributed from Russian Turkestan to Transbaikalia;
F. m. nigripecta to be distributed in Tibet and Kashmir;
F. m. ferruginea occurring from southwestern Turkestan and the Kopet Dag mountains to Afghanistan and Balochistan.
Since 2017, the Cat Classification Task Force of the Cat Specialist Group recognises only two subspecies as valid taxa, namely:
O. m. manul syn. O. m. ferrugineus in the western and northern part of Central Asia from Iran to Mongolia;
O. m. nigripectus in the Himalayas from Kashmir to Bhutan.
Phylogeny
Phylogenetic analysis of the nuclear DNA in tissue samples from all Felidae species revealed that the evolutionary radiation of the Felidae began in Asia during the late Miocene around . Analysis of mitochondrial DNA of all Felidae species indicates a radiation at around .
The Pallas's cat is estimated to have genetically diverged from a common ancestor with the genus Prionailurus between based on analysis of nuclear DNA. Based on analysis of mitochondrial DNA, it diverged from a common ancestor with Felis.
Characteristics
The Pallas's cat's fur is light grey with pale yellowish-ochre or pale yellowish-reddish hues. Some hair tips are white and some blackish. Its fur is greyer and denser with fewer markings visible in winter than in the summer. The forehead and top of the head are light grey with small black spots. It has two black zigzag lines on the cheeks running from the corner of the eyes to the jaw joints. Its chin, whiskers, lower and upper lips are white.
It has narrow black stripes on the back, consisting of five to seven dark transversal lines across the lower back. Its grey tail has seven narrow black rings and a black tip. The underfur is long and 19 μm thick, and the guard hairs up to long and 93 μm thick on the back. Its fur is soft and dense with up to .
The Pallas's cat's ears are grey with a yellowish tinge on the back and a darker rim, but with whitish hair in front and in the ear pinnae. Its rounded ears are set low on the side, such that it can peer over an object and show only a relatively small part of the head above the eyes without depressing the ears. This can give its face a look of ferocity and unrest. Its eyes are encircled by white. The iris is yellowish, and its pupils contract to small circular disks in sunlight. Among the Felinae, it shares this trait of round pupils with Puma, Herpailurus and Acinonyx species.
The Pallas's cat is about the size of a domestic cat (Felis catus). Its stocky posture with the long and dense fur make it appear stout and plush. Its head-to-body is long with a long tail. It weighs .
Its body is stout, and its skull is rounded with a short nasal bone, an enlarged cranial part and rounded zygomatic arches. Its orbits are large and directed forward. Its legs are short with short and sharp retractile claws.
The skull of males is long and wide at the base. Females have a long and wide skull. The lower carnassial teeth are powerful, and the upper carnassials are short and massive. The first pair of upper premolars is absent. The dental formula is . It has a bite force at the canine tip of 155.4 newtons and a bite force quotient at the canine tip of 113.8.
The mitochondrial genome of the Pallas's cat consists of 16,672 base pairs containing 13 protein-coding, 22 transfer RNA and two ribosomal RNA genes and one non-coding RNA control region.
Distribution and habitat
The Pallas's cat's range extends from the Caucasus eastward to Central Asia, Mongolia and adjacent parts of Dzungaria and the Tibetan Plateau. It inhabits montane shrublands and grasslands, rocky outcrops, scree slopes and ravines in areas, where the continuous snow cover is below .
In the southwestern part of its range, the habitat of the Pallas's cat is affected by cold and dry winters, and moderate to low rainfall in warm summers. The typical vegetation in this part consists of small shrubs, sagebrush (Artemisia), Festuca and Stipa grasses.
In the central part of its range, it inhabits hilly landscapes, high plateaus and intermontane valleys that are covered by dry steppe or semi-desert vegetation, such as low shrubs and xerophytic grasses. The continental climate in this region exhibits a range of between the highest and lowest air temperatures, dropping to in winter.
The Greater Caucasus region is considered climatically suitable for the Pallas's cat. In Armenia, an individual was killed near Vedi in the mountains of Ararat Province in the late 1920s. In January 2020, an individual was sighted about farther north in Tavush Province; the habitat at this location transitions from semi-desert to montane steppe at an elevation of about . Records in Azerbaijan are limited to a Pallas's cat skin found in Karabakh and a sighting of an individual in Julfa District, both in the late 20th century.
On the Iranian Plateau, two Pallas's cats were encountered near the Aras River in northwestern Iran before the 1970s. In the area, an individual was captured at an elevation of about near Azarshahr in East Azerbaijan Province in 2008. In the same year, a camera trap recorded a Pallas's cat on the southern slopes of the central Alborz Mountains in Khojir National Park shortly after heavy snowfall. Farther east in the Alborz Mountains, an individual was recorded among rocks at an elevation of in 2016. In the Aladagh and Kopet Dag Mountains, the Pallas's cat was recorded inside and in the vicinity of protected areas. In the south of the Zagros Mountains, an individual was caught in a corral used by transhumant pastoralists in Abadeh County in 2012. The surrounding area consists of rocky steppe habitat dominated by mountain almond (Prunus scoparia), Astragalus and Artemisia.
In the Hindu Kush, a Pallas's cat was observed sunbathing at the fringe of a rocky high-elevation plain near Dasht-e Nawar in Afghanistan's Koh-i-Baba range in April 2007. The Pallas's cat was also photographed multiple times in Bamyan Province between 2015 and 2017.
In Pakistan's Qurumber National Park in Gilgit-Baltistan, an individual was recorded on a ridge in a juniper dominated forest at in July 2012.
In the Transcaspian Region, its presence was first reported in the Kopet Dag mountains and in the vicinity of the Tedzhen and Murghab Rivers in the late 19th century. In Turkmenistan's Sünt-Hasardag Nature Reserve, a camera trap recorded an individual in 2019. The Pallas's cat is allegedly also present in Köpetdag Nature Reserve.
Historical records of the Pallas's cat are known in the Surxondaryo Region and Gissar Range along the border of Tajikistan and Uzbekistan. In Kyrgyzstan, it is present at high elevations of Sarychat-Ertash State Nature Reserve and in the foothills of the Alay Range. In 2013, a dead female was found in a valley near Engilchek, Kyrgyzstan. In Kazakhstan, it inhabits the highlands and steppes of central and east Kazakhstan Region, the periphery of the Betpak-Dala Desert, the northern Balkhash District and the Tarbagatai Mountains.
In the South Siberian Mountains, it inhabits grasslands on the Ukok Plateau and in the Altai, Kuray and Saylyugem Mountains. It is also present in Chagan-Uzun and Argut river basins, Mongun-Taiga, Uvs Lake Basin, Sayano-Shushenski Nature Reserve, Tunkinsky National Park, Lake Gusinoye basin and in the interfluves of the Selenga, Chikoy and Khilok rivers. In the eastern Sayan Mountains, its presence was documented for the first time in 1997. In Transbaikal, it inhabits montane steppes at elevations of , where annual rainfall ranges from . In 2013, an individual was observed on the Vitim Plateau.
The Pallas's cat inhabits the semi-desert steppe of Ikh Nartiin Chuluu Nature Reserve in Mongolia. In Khustain Nuruu National Park and Gobi Gurvansaikhan National Park, it prefers rocky and rugged habitats that provide cover and camouflage.
On the Tibetan plateau, two Pallas's cats were observed in undulating alpine meadow amidst plateau pika (Ochotona curzoniae) colonies at in western China's Qumarlêb County in 2001. One of them swam across an irrigation channel. In Gêrzê County, an individual was sighted in desert steppe habitat at an elevation of in 2005. In 2011, the Pallas's cat was photographed in an alpine meadow in the core area of Sanjiangyuan National Nature Reserve. In Ruoergai, it was observed at several places in habitat that was frequented by pastoralists and their livestock herds.
The presence of the Pallas's cat in the Himalayas was first reported in Ladakh's Indus valley in 1991. In Changthang Wildlife Sanctuary, Pallas's cats were sighted close by riverbanks at elevations of in 2013 and 2015. In Gangotri National Park, a Pallas's cat was photographed in rocky alpine scrub at in 2019. In Sikkim, an individual was observed on a rocky slope at an elevation of in the vicinity of Tso Lhamo Lake in 2007.
In December 2012, the Pallas's cat was recorded for the first time in the Nepal Himalayas. It was photographed in the upper Marshyangdi river valley in alpine pastures at elevations of and in Annapurna Conservation Area. In Shey-Phoksundo National Park, Pallas's cat scat was detected at in 2016, the globally highest record to date.
In January 2012, it was recorded for the first time in Bhutan, namely in rolling hills dominated by glacial outwash and alpine steppe vegetation in Wangchuck Centennial National Park. In autumn 2012, it was also photographed at an elevation of in Jigme Dorji National Park. In 2019, scat samples of two individuals were found in Sagarmatha National Park, providing the first genetic evidence of the cat's presence in the eastern Himalayas.
Behaviour and ecology
The Pallas's cat is solitary. Of nine Pallas's cat kittens observed in captivity, only the two males scent marked by spraying urine.
The Pallas's cat uses caves, rock crevices and marmot burrows as shelter. In central Mongolia, 29 Pallas's cats were fitted with radio collars between June 2005 and October 2007. They used 101 dens during this time, including 39 winter dens, 42 summer dens and 20 dens for raising kittens. The summer and winter dens usually had one entrance with a diameter of . They resided in the summer dens for 2–21 days, and in the winter dens for 2–28 days. Summer and maternal dens were close to rocky habitats with little direct sunlight, whereas winter dens were closer to ravines. The home ranges of 16 females varied from . The home ranges of nine males varied from and overlapped those of one to four females and partly also those of other males. The sizes of their home ranges decreased in winter.
In an unprotected area in central Mongolia, Pallas's cats were mainly crepuscular between May and August, but active by day from September to November. Pallas's cats recorded in four study areas in the western Mongolian Altai mountains were also active during the day, but with a lower frequency at sites where livestock was present.
Hunting and diet
The Pallas's cat is a highly specialised predator of small mammals, which it catches by stalking or ambushing near exits of burrows. It also pulls out rodents with its paws from shallow burrows. In the Altai Mountains, remains of long-tailed ground squirrel (Urocitellus undulatus), flat-skulled shrew (Sorex roboratus), Pallas's pika (Ochotona pallasi) and bird feathers were found near breeding burrows of Pallas's cats. In Transbaikal, it preys on Daurian pika (Ochotona dauurica), steppe pika (O. pusilla), Daurian ground squirrel (Spermophilus dauricus) and young of red-billed chough (Pyrrhocorax pyrrhocorax).
Scat samples of the Pallas's cat collected in the buffer zone of Khustain Nuruu National Park in central Mongolia contained foremost remains of Daurian pika, Mongolian gerbil (Meriones unguiculatus), Mongolian silver vole (Alticola semicanus) and remains of passerine birds, beetles and grasshoppers. Brandt's vole (Lasiopodomys brandtii) dominated in the diet of Pallas's cats in Mongolia's Sükhbaatar Province after the irruptive growth of this vole population during 2017 to 2020.
Scat found in Shey-Phoksundo National Park contained remains of pika species and of woolly hare (Lepus oiostolus). Remains of a cypriniform fish were found in Pallas's cat scat in Gongga Mountain Nature Reserve.
Reproduction and life cycle
The female is sexually mature at the age of about one year. She is in estrus for 26 to 42 hours. Gestation lasts 66 to 75 days.
A captive male Pallas's cat housed under natural lighting conditions showed increased aggressive and territorial behaviour at the onset of the breeding season, lasting from September to December. Its blood contained three times more testosterone than in the non-breeding season, and its ejaculate was more concentrated with more normal sperm forms and a higher motility of sperm.
In the wild, the female gives birth to a litter of two to six kittens between the end of April and late May. The newborn kittens' fur is fuzzy, and their eyes are closed until the age of about two weeks. A newborn male kitten born in a zoo weighed , measured and had a long tail.
In central Mongolia, seven females with kittens were observed using 20 dens for 4–60 days. Their maternal dens were either among rocks, or in former burrows of the Tarbagan marmot (Marmota sibirica), and had at least two entrances. In Iran, a Pallas's cat was observed using cavities of aged Greek juniper (Juniperus excelsa) as breeding dens for a litter of four kittens.
Two-month-old kittens weigh , and their fur gradually grows longer. They start hunting at the age of about five months and reach adult size by the age of six to seven months.
Threats
In China, Mongolia and Russia, the Pallas's cat was once hunted for its fur in large numbers of more than 10,000 skins annually. In China and the former Soviet Union, hunting of the Pallas's cat decreased in the 1970s when it became legally protected. Mongolia exported 9,185 skins in 1987, but international trade has ceased since 1988. However, domestic trade of its skins and body parts for medicinal purposes continues in the country, and it may be hunted throughout the year.
Cases of herding dogs killing Pallas's cats were reported in Iran, Kazakhstan and the Altai Republic.
Pallas's cats have also fallen victim in traps set for small mammals in Kazakhstan and in the Altai Republic. In Transbaikal, the Pallas's cat is threatened by poaching. In Mongolia, the use of the rodenticide bromadiolone in the frame of rodent control measures in the early 21st century poisoned the prey base of carnivores and raptors.
In the Sanjiangyuan region of the Tibetan Plateau, of grassland was poisoned between 2005 and 2009, leading to an estimated loss of of pika biomass.
The Pallas's cat may be negatively affected by habitat fragmentation due to mining and infrastructure projects.
Conservation
On the IUCN Red List, the Pallas's cat has been classified as Least Concern since 2020 because of its wide-spread range and assumed large global population. It is listed in CITES Appendix II. Hunting it is prohibited in all range countries except Mongolia. Since 2009, it has been legally protected in Afghanistan, where all hunting and trade with its body parts is banned.
On the Mongolian Red List of Mammals, it has been listed as Near Threatened since 2006. In China, it is listed as Endangered. In Turkmenistan, it is proposed to be listed as Critically Endangered due to the scarcity of contemporary records.
In captivity
Between 1951 and 1979, the Beijing Zoo kept 16 Pallas's cats, but they lived for less than three years. In 1984, the Pallas's cat was designated as a priority species for captive breeding of the American Association of Zoos and Aquariums's Species Survival Plan. Almost half of the kittens born in member zoos died within the first 30 days, reaching the highest mortality rate in captivity of any small wild cat.
Zoos in the former Soviet Union received most of the wild-caught Pallas's cats from the Transbaikal region and a few from Mongolia. Moscow Zoo initiated a studbook for the Pallas's cat in 1997. Since 2004, the Pallas's cat international studbook has been managed by the Royal Zoological Society of Scotland, which also coordinates the captive breeding program for the Pallas's cat within the European Endangered Species Programme. As of 2018, 177 Pallas's cats were kept in 60 zoos in Europe, Russia, North America and Japan.
In 2011, a female Pallas's cat was artificially inseminated for the first time with semen from the male at the Cincinnati Zoo. After 69 days, she gave birth to four kittens, of which one was stillborn.
Etymology
'Manul' is the Pallas's cat's name in the Mongolian language. It is called 'manol' in the Kyrgyz language. The common name 'Pallas's cat' was coined by William Thomas Blanford in honour of Peter Simon Pallas.
In popular culture
The Pallas's cat is featured in a Russian Internet meme known as "Pet the cat" introduced in 2008; the meme is a picture of a Pallas's cat that invites the reader to pet it in the image's caption. In 2012, the Pallas's cat became the mascot of Moscow Zoo.
Bandage
A bandage is a piece of material used either to support a medical device such as a dressing or splint, or on its own to provide support for the movement of a part of the body. When used with a dressing, the dressing is applied directly on a wound, and a bandage is used to hold the dressing in place. Other bandages are used without dressings, such as elastic bandages that are used to reduce swelling or provide support to a sprained ankle. Tight bandages can be used to slow blood flow to an extremity, such as when a leg or arm is bleeding heavily.
Bandages are available in a wide range of types, from generic cloth strips to specialized shaped bandages designed for a specific limb or part of the body. Bandages can often be improvised as the situation demands, using clothing, blankets or other material. In American English, the word bandage is often used to indicate a small gauze dressing attached to an adhesive bandage.
Types
Gauze bandage (common gauze roller bandage)
The most common type of bandage is the gauze bandage, a woven strip of material with a Telfa absorbent barrier to prevent adhering to wounds. A gauze bandage can come in any number of widths and lengths and can be used for almost any bandage application, including holding a dressing in place.
Adhesive bandage
Liquid bandage
Compression bandage
The term 'compression bandage' describes a wide variety of bandages with many different applications.
Short stretch compression bandages are applied to a limb (usually for treatment of lymphedema or venous ulcers). This type of bandage is capable of shortening around the limb after application and is therefore not exerting ever-increasing pressure during inactivity. This dynamic is called resting pressure and is considered safe and comfortable for long-term treatment. Conversely, the stability of the bandage creates a very high resistance to stretch when pressure is applied through internal muscle contraction and joint movement. This force is called working pressure.
Long stretch compression bandages have long stretch properties, meaning their high compressive power can be easily adjusted. However, they also have a very high resting pressure and must be removed at night or if the patient is in a resting position.
Triangular bandage
Also known as a cravat bandage, a triangular bandage is a piece of cloth cut into a right-angled triangle, often provided with safety pins to secure it in place. It can be used fully unrolled as a sling, folded as a normal bandage, or for specialized applications, such as on the head. One advantage of this type of bandage is that it can be improvised from a fabric scrap or a piece of clothing. The Boy Scouts popularized the use of this bandage in many of their first aid lessons, as part of the uniform is a "neckerchief" that can easily be folded to form a cravat.
Tube bandage
A tube bandage is woven in a continuous circle and applied using an applicator. It is used to hold dressings or splints onto limbs, or to provide support to sprains and strains and to help control bleeding.
Kirigami bandage
A new type of bandage was invented in 2016; inspired by the art of kirigami, it uses parallel slits to better fit areas of the body that bend. The bandages have been produced with 3D-printed molds.
Stack (abstract data type)
In computer science, a stack is an abstract data type that serves as a collection of elements with two main operations:
Push, which adds an element to the collection, and
Pop, which removes the most recently added element.
Additionally, a peek operation can, without modifying the stack, return the value of the last element added. The name stack is an analogy to a set of physical items stacked one atop another, such as a stack of plates.
The order in which elements are added to or removed from a stack is described as last in, first out, referred to by the acronym LIFO. As with a stack of physical objects, this structure makes it easy to take an item off the top of the stack, but accessing a datum deeper in the stack may require removing multiple other items first.
Considered a sequential collection, a stack has one end, the top, which is the only position at which the push and pop operations may occur; the other end, the bottom, is fixed. A stack may be implemented as, for example, a singly linked list with a pointer to the top element.
A stack may be implemented to have a bounded capacity. If the stack is full and does not contain enough space to accept another element, the stack is in a state of stack overflow.
A stack is needed to implement depth-first search.
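The push, pop, and peek operations described above can be sketched with a Python list, whose append and pop methods happen to give LIFO behavior (an illustrative sketch, not tied to any particular implementation discussed later):

```python
# A Python list used as a stack: append pushes, pop removes the
# most recently added element (LIFO order).
stack = []
stack.append('a')   # push 'a'
stack.append('b')   # push 'b'
stack.append('c')   # push 'c'

top = stack[-1]     # peek: inspect the top without modifying the stack
assert top == 'c'

assert stack.pop() == 'c'   # pop returns the most recently added element
assert stack.pop() == 'b'
assert stack == ['a']       # 'a', pushed first, would come out last
```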
History
Stacks entered the computer science literature in 1946, when Alan Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines. Subroutines and a two-level stack had already been implemented in Konrad Zuse's Z4 in 1945.
Klaus Samelson and Friedrich L. Bauer of Technical University Munich proposed the idea of a stack called Operationskeller ("operational cellar") in 1955 and filed a patent in 1957. In March 1988, by which time Samelson was deceased, Bauer received the IEEE Computer Pioneer Award for the invention of the stack principle. Similar concepts were independently developed by Charles Leonard Hamblin in the first half of 1954 and by Wilhelm Kämmerer with his automatisches Gedächtnis ("automatic memory") in 1958.
Stacks are often described using the analogy of a spring-loaded stack of plates in a cafeteria. Clean plates are placed on top of the stack, pushing down any plates already there. When the top plate is removed from the stack, the one below it is elevated to become the new top plate.
Non-essential operations
In many implementations, a stack has more operations than the essential "push" and "pop" operations. An example of a non-essential operation is "top of stack", or "peek", which observes the top element without removing it from the stack. Since this can be broken down into a "pop" followed by a "push" to return the same data to the stack, it is not considered an essential operation. If the stack is empty, an underflow condition will occur upon execution of either the "stack top" or "pop" operations. Additionally, many implementations provide a check if the stack is empty and an operation that returns its size.
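The reduction of "peek" to a "pop" followed by a "push" can be shown directly (a minimal Python sketch; the function name is illustrative):

```python
def peek(stack):
    """Return the top element without net modification of the stack.

    Implemented, as described above, as a "pop" followed by a "push"
    that returns the same data to the stack.
    """
    if not stack:
        raise IndexError("stack underflow")  # underflow on an empty stack
    top = stack.pop()    # pop the top element...
    stack.append(top)    # ...and push it straight back
    return top

s = [1, 2, 3]
assert peek(s) == 3
assert s == [1, 2, 3]  # the stack is unchanged
```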
Software stacks
Implementation
A stack can be easily implemented either through an array or a linked list, as it is merely a special case of a list. In either case, what identifies the data structure as a stack is not the implementation but the interface: the user is only allowed to pop or push items onto the array or linked list, with few other helper operations. The following will demonstrate both implementations using pseudocode.
Array
An array can be used to implement a (bounded) stack, as follows. The first element, usually at the zero offset, is the bottom, resulting in array[0] being the first element pushed onto the stack and the last element popped off. The program must keep track of the size (length) of the stack, using a variable top that records the number of items pushed so far, therefore pointing to the place in the array where the next element is to be inserted (assuming a zero-based index convention). Thus, the stack itself can be effectively implemented as a three-element structure:
structure stack:
maxsize : integer
top : integer
items : array of item
procedure initialize(stk : stack, size : integer):
stk.items ← new array of size items, initially empty
stk.maxsize ← size
stk.top ← 0
The push operation adds an element and increments the top index, after checking for overflow:
procedure push(stk : stack, x : item):
if stk.top = stk.maxsize:
report overflow error
else:
stk.items[stk.top] ← x
stk.top ← stk.top + 1
Similarly, pop decrements the top index after checking for underflow, and returns the item that was previously the top one:
procedure pop(stk : stack):
if stk.top = 0:
report underflow error
else:
stk.top ← stk.top − 1
r ← stk.items[stk.top]
return r
Using a dynamic array, it is possible to implement a stack that can grow or shrink as much as needed. The size of the stack is simply the size of the dynamic array, which is a very efficient implementation of a stack since adding items to or removing items from the end of a dynamic array requires amortized O(1) time.
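A dynamic-array stack along these lines can be sketched in Python, where the built-in list is itself a dynamic array, so no maxsize field or overflow check is needed (class and method names are illustrative):

```python
class ArrayStack:
    """Unbounded stack backed by a dynamic array (a Python list).

    append/pop at the end of a dynamic array run in amortized O(1),
    so the stack grows and shrinks as needed.
    """
    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)      # grows the underlying array as needed

    def pop(self):
        if not self.items:
            raise IndexError("stack underflow")
        return self.items.pop()   # removes and returns the former top

    def __len__(self):
        return len(self.items)    # the stack's size is the array's size

stk = ArrayStack()
stk.push(1); stk.push(2); stk.push(3)
assert stk.pop() == 3
assert len(stk) == 2
```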
Linked list
Another option for implementing stacks is to use a singly linked list. A stack is then a pointer to the "head" of the list, with perhaps a counter to keep track of the size of the list:
structure frame:
data : item
next : frame or nil
structure stack:
head : frame or nil
size : integer
procedure initialize(stk : stack):
stk.head ← nil
stk.size ← 0
Pushing and popping items happens at the head of the list; overflow is not possible in this implementation (unless memory is exhausted):
procedure push(stk : stack, x : item):
newhead ← new frame
newhead.data ← x
newhead.next ← stk.head
stk.head ← newhead
stk.size ← stk.size + 1
procedure pop(stk : stack):
if stk.head = nil:
report underflow error
r ← stk.head.data
stk.head ← stk.head.next
stk.size ← stk.size - 1
return r
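The linked-list pseudocode above translates almost line for line into Python (class names mirror the pseudocode's frame/stack structures):

```python
class Frame:
    """One node of the singly linked list: the 'frame' structure above."""
    __slots__ = ("data", "next")

    def __init__(self, data, next):
        self.data = data
        self.next = next   # next frame, or None (nil) at the bottom

class LinkedStack:
    """A pointer to the head of the list, plus a counter for its size."""
    def __init__(self):
        self.head = None   # nil: the empty stack
        self.size = 0

    def push(self, x):
        self.head = Frame(x, self.head)  # the new frame becomes the head
        self.size += 1

    def pop(self):
        if self.head is None:
            raise IndexError("stack underflow")
        r = self.head.data
        self.head = self.head.next       # unlink the old head
        self.size -= 1
        return r

stk = LinkedStack()
for v in "abc":
    stk.push(v)
assert stk.pop() == "c"
assert stk.size == 2
```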
Stacks and programming languages
Some languages, such as Perl, LISP, JavaScript and Python, make the stack operations push and pop available on their standard list/array types. Some languages, notably those in the Forth family (including PostScript), are designed around language-defined stacks that are directly visible to and manipulated by the programmer.
The following is an example of manipulating a stack in Common Lisp (">" is the Lisp interpreter's prompt; lines not starting with ">" are the interpreter's responses to expressions):
> (setf stack (list 'a 'b 'c)) ;; set the variable "stack"
(A B C)
> (pop stack) ;; get top (leftmost) element, should modify the stack
A
> stack ;; check the value of stack
(B C)
> (push 'new stack) ;; push a new top onto the stack
(NEW B C)
Several of the C++ Standard Library container types have push_back and pop_back operations with LIFO semantics; additionally, the std::stack template class adapts existing containers to provide a restricted API with only push/pop operations. PHP has an SplStack class. Java's library contains a Stack class that is a specialization of Vector. Following is an example program in Java, using that class.
import java.util.Stack;
class StackDemo {
public static void main(String[]args) {
Stack<String> stack = new Stack<String>();
stack.push("A"); // Insert "A" in the stack
stack.push("B"); // Insert "B" in the stack
stack.push("C"); // Insert "C" in the stack
stack.push("D"); // Insert "D" in the stack
System.out.println(stack.peek()); // Prints the top of the stack ("D")
stack.pop(); // removing the top ("D")
stack.pop(); // removing the next top ("C")
}
}
Hardware stack
A common use of stacks at the architecture level is as a means of allocating and accessing memory.
Basic architecture of a stack
A typical stack is an area of computer memory with a fixed origin and a variable size. Initially the size of the stack is zero. A stack pointer (usually in the form of a processor register) points to the most recently referenced location on the stack; when the stack has a size of zero, the stack pointer points to the origin of the stack.
The two operations applicable to all stacks are:
A push operation: the address in the stack pointer is adjusted by the size of the data item and a data item is written at the location to which the stack pointer points.
A pop or pull operation: a data item at the current location to which the stack pointer points is read, and the stack pointer is moved by a distance corresponding to the size of that data item.
There are many variations on the basic principle of stack operations. Every stack has a fixed location in memory at which it begins. As data items are added to the stack, the stack pointer is displaced to indicate the current extent of the stack, which expands away from the origin.
Stack pointers may point to the origin of a stack or to a limited range of addresses above or below the origin (depending on the direction in which the stack grows); however, the stack pointer cannot cross the origin of the stack. In other words, if the origin of the stack is at address 1000 and the stack grows downwards (towards addresses 999, 998, and so on), the stack pointer must never be incremented beyond 1000 (to 1001 or beyond). If a pop operation on the stack causes the stack pointer to move past the origin of the stack, a stack underflow occurs. If a push operation causes the stack pointer to increment or decrement beyond the maximum extent of the stack, a stack overflow occurs.
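The push and pop operations of a downward-growing stack with a fixed origin, including the overflow and underflow checks just described, can be simulated in a short Python model (the addresses, sizes, and class name are illustrative assumptions, not any real CPU's layout):

```python
class HardwareStack:
    """Toy model of a downward-growing stack in a flat memory array.

    The origin is the highest address the stack may use; push decrements
    the stack pointer before writing, pop reads and then increments it.
    """
    def __init__(self, memory_size=16, origin=15, limit=8):
        self.mem = [0] * memory_size
        self.origin = origin   # fixed start of the stack
        self.limit = limit     # lowest address the stack may occupy
        self.sp = origin       # empty stack: pointer sits at the origin

    def push(self, value):
        if self.sp <= self.limit:
            raise OverflowError("stack overflow")   # past maximum extent
        self.sp -= 1              # adjust pointer by the item size (1 cell)
        self.mem[self.sp] = value # write at the new top

    def pop(self):
        if self.sp >= self.origin:
            raise IndexError("stack underflow")     # cannot cross the origin
        value = self.mem[self.sp]  # read the current top
        self.sp += 1               # move the pointer back toward the origin
        return value

hs = HardwareStack()
hs.push(10); hs.push(20)
assert hs.pop() == 20
assert hs.pop() == 10
```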
Some environments that rely heavily on stacks may provide additional operations, for example:
Duplicate: the top item is popped and then pushed twice, such that two copies of the former top item now lie at the top.
Peek: the topmost item is inspected (or returned), but the stack pointer and stack size do not change (meaning the item remains on the stack). This can also be called the top operation.
Swap or exchange: the two topmost items on the stack exchange places.
Rotate (or Roll): the topmost n items are moved on the stack in a rotating fashion. For example, if n = 3, items 1, 2, and 3 on the stack are moved to positions 2, 3, and 1 on the stack, respectively. Many variants of this operation are possible, with the most common being called left rotate and right rotate.
Stacks are often visualized growing from the bottom up (like real-world stacks). They may also be visualized growing from left to right, where the top is on the far right, or even growing from top to bottom. The important feature is for the bottom of the stack to be in a fixed position. The illustration in this section is an example of a top-to-bottom growth visualization: the top (28) is the stack "bottom", since the stack "top" (9) is where items are pushed or popped from.
A right rotate will move the first element to the third position, the second to the first and the third to the second. Here are two equivalent visualizations of this process:
apple banana
banana ===right rotate==> cucumber
cucumber apple
cucumber apple
banana ===left rotate==> cucumber
apple banana
A stack is usually represented in computers by a block of memory cells, with the "bottom" at a fixed location, and the stack pointer holding the address of the current "top" cell in the stack. The "top" and "bottom" nomenclature is used irrespective of whether the stack actually grows towards higher memory addresses.
Pushing an item on to the stack adjusts the stack pointer by the size of the item (either decrementing or incrementing, depending on the direction in which the stack grows in memory), pointing it to the next cell, and copies the new top item to the stack area. Depending again on the exact implementation, at the end of a push operation, the stack pointer may point to the next unused location in the stack, or it may point to the topmost item in the stack. If the stack points to the current topmost item, the stack pointer will be updated before a new item is pushed onto the stack; if it points to the next available location in the stack, it will be updated after the new item is pushed onto the stack.
Popping the stack is simply the inverse of pushing. The topmost item in the stack is removed and the stack pointer is updated, in the opposite order of that used in the push operation.
Stack in main memory
Many CISC-type CPU designs, including the x86, Z80 and 6502, have a dedicated register for use as the call stack's stack pointer, with dedicated call, return, push, and pop instructions that implicitly update the dedicated register, thus increasing code density. Some CISC processors, like the PDP-11 and the 68000, also have special addressing modes for implementation of stacks, typically with a semi-dedicated stack pointer as well (such as A7 in the 68000). In contrast, most RISC CPU designs do not have dedicated stack instructions and therefore most, if not all, registers may be used as stack pointers as needed.
Stack in registers or dedicated memory
Some machines use a stack for arithmetic and logical operations; operands are pushed onto the stack, and arithmetic and logical operations act on the top one or more items on the stack, popping them off the stack and pushing the result onto the stack. Machines that function in this fashion are called stack machines.
A number of mainframes and minicomputers were stack machines, the most famous being the Burroughs large systems. Other examples include the CISC HP 3000 machines and the CISC machines from Tandem Computers.
The x87 floating point architecture is an example of a set of registers organised as a stack where direct access to individual registers (relative to the current top) is also possible.
Having the top-of-stack as an implicit argument allows for a small machine code footprint with a good usage of bus bandwidth and code caches, but it also prevents some types of optimizations possible on processors permitting random access to the register file for all (two or three) operands. A stack structure also makes superscalar implementations with register renaming (for speculative execution) somewhat more complex to implement, although it is still feasible, as exemplified by modern x87 implementations.
Sun SPARC, AMD Am29000, and Intel i960 are all examples of architectures that use register windows within a register-stack as another strategy to avoid the use of slow main memory for function arguments and return values.
There is also a number of small microprocessors that implement a stack directly in hardware, and some microcontrollers have a fixed-depth stack that is not directly accessible. Examples are the PIC microcontrollers, the Computer Cowboys MuP21, the Harris RTX line, and the Novix NC4016. At least one microcontroller family, the COP400, implements a stack either directly in hardware or in RAM via a stack pointer, depending on the device. Many stack-based microprocessors were used to implement the programming language Forth at the microcode level.
Applications of stacks
Expression evaluation and syntax parsing
Calculators that employ reverse Polish notation use a stack structure to hold values. Expressions can be represented in prefix, postfix or infix notations and conversion from one form to another may be accomplished using a stack. Many compilers use a stack to parse syntax before translation into low-level code. Most programming languages are context-free languages, allowing them to be parsed with stack-based machines.
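A postfix (reverse Polish) expression can be evaluated with a single stack: operands are pushed, and each operator pops its two operands and pushes the result. A minimal sketch in Python (function name and the token format are assumptions for illustration):

```python
def eval_postfix(tokens):
    """Evaluate a postfix (reverse Polish) expression using a stack."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # the right operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))  # push the result back
        else:
            stack.append(float(tok))      # operand: push its value
    return stack.pop()

# "3 4 + 2 *" is postfix for (3 + 4) * 2
assert eval_postfix("3 4 + 2 *".split()) == 14.0
```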
Backtracking
Another important application of stacks is backtracking. An illustration of this is the simple example of finding the correct path in a maze that contains a series of points, a starting point, several paths and a destination. If random paths must be chosen, then after following an incorrect path, there must be a method by which to return to the beginning of that path. This can be achieved through the use of stacks, as a last correct point can be pushed onto the stack, and popped from the stack in case of an incorrect path.
The prototypical example of a backtracking algorithm is depth-first search, which finds all vertices of a graph that can be reached from a specified starting vertex. Other applications of backtracking involve searching through spaces that represent potential solutions to an optimization problem. Branch and bound is a technique for performing such backtracking searches without exhaustively searching all of the potential solutions in such a space.
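Depth-first search can be written iteratively with an explicit stack of pending vertices; popping the stack is exactly the backtracking step. A sketch in Python (the adjacency-list representation and function name are assumptions):

```python
def depth_first_order(graph, start):
    """Iterative depth-first search using an explicit stack.

    `graph` maps each vertex to a list of neighbours; returns the
    vertices reachable from `start` in the order first visited.
    """
    visited, order = set(), []
    stack = [start]
    while stack:
        v = stack.pop()              # backtrack to the most recent choice
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # Push neighbours; the last one pushed is explored first.
        # Reversing keeps the visit order matching the adjacency lists.
        for w in reversed(graph.get(v, [])):
            stack.append(w)
    return order

g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
assert depth_first_order(g, "a") == ["a", "b", "d", "c"]
```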
Compile-time memory management
A number of programming languages are stack-oriented, meaning they define most basic operations (adding two numbers, printing a character) as taking their arguments from the stack, and placing any return values back on the stack. For example, PostScript has a return stack and an operand stack, and also has a graphics state stack and a dictionary stack. Many virtual machines are also stack-oriented, including the p-code machine and the Java Virtual Machine.
Almost all calling conventions (the ways in which subroutines receive their parameters and return results) use a special stack (the "call stack") to hold information about procedure/function calling and nesting, in order to switch to the context of the called function and restore the caller's context when the call finishes. Caller and callee follow a runtime protocol to save arguments and the return value on the stack. Stacks are an important way of supporting nested or recursive function calls. This type of stack is used implicitly by the compiler to support CALL and RETURN statements (or their equivalents) and is not manipulated directly by the programmer.
Some programming languages use the stack to store data that is local to a procedure. Space for local data items is allocated from the stack when the procedure is entered, and is deallocated when the procedure exits. The C programming language is typically implemented in this way. Using the same stack for both data and procedure calls has important security implications (see below) of which a programmer must be aware in order to avoid introducing serious security bugs into a program.
Efficient algorithms
Several algorithms use a stack (separate from the usual function call stack of most programming languages) as the principal data structure with which they organize their information. These include:
Graham scan, an algorithm for the convex hull of a two-dimensional system of points. A convex hull of a subset of the input is maintained in a stack, which is used to find and remove concavities in the boundary when a new point is added to the hull.
Part of the SMAWK algorithm for finding the row minima of a monotone matrix uses stacks in a similar way to Graham scan.
All nearest smaller values, the problem of finding, for each number in an array, the closest preceding number that is smaller than it. One algorithm for this problem uses a stack to maintain a collection of candidates for the nearest smaller value. For each position in the array, the stack is popped until a smaller value is found on its top, and then the value in the new position is pushed onto the stack.
The nearest-neighbor chain algorithm, a method for agglomerative hierarchical clustering based on maintaining a stack of clusters, each of which is the nearest neighbor of its predecessor on the stack. When this method finds a pair of clusters that are mutual nearest neighbors, they are popped and merged.
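Of the algorithms above, all nearest smaller values has a particularly compact stack formulation: the stack holds candidate values in increasing order, and each element is pushed and popped at most once, giving linear time overall. A sketch in Python (the function name and the use of None for "no smaller value" are assumptions):

```python
def nearest_smaller_values(a):
    """For each element of `a`, the closest preceding element that is
    smaller than it (or None), maintained with a stack of candidates."""
    result, stack = [], []
    for x in a:
        # Pop candidates that are not smaller than the current value;
        # they can never be the answer for any later position either.
        while stack and stack[-1] >= x:
            stack.pop()
        result.append(stack[-1] if stack else None)
        stack.append(x)   # x becomes a candidate for later positions
    return result

assert nearest_smaller_values([0, 8, 4, 12, 2]) == [None, 0, 0, 4, 0]
```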
Security
Some computing environments use stacks in ways that may make them vulnerable to security breaches and attacks. Programmers working in such environments must take special care to avoid such pitfalls in these implementations.
As an example, some programming languages use a common stack to store both data local to a called procedure and the linking information that allows the procedure to return to its caller. This means that the program moves data into and out of the same stack that contains critical return addresses for the procedure calls. If data is moved to the wrong location on the stack, or an oversized data item is moved to a stack location that is not large enough to contain it, return information for procedure calls may be corrupted, causing the program to fail.
Malicious parties may attempt a stack smashing attack that takes advantage of this type of implementation by providing oversized data input to a program that does not check the length of input. Such a program may copy the data in its entirety to a location on the stack, and in doing so, it may change the return addresses for procedures that have called it. An attacker can experiment to find a specific type of data that can be provided to such a program such that the return address of the current procedure is reset to point to an area within the stack itself (and within the data provided by the attacker), which in turn contains instructions that carry out unauthorized operations.
This type of attack is a variation on the buffer overflow attack and is an extremely frequent source of security breaches in software, mainly because some of the most popular compilers use a shared stack for both data and procedure calls, and do not verify the length of data items. Frequently, programmers do not write code to verify the size of data items, either, and when an oversized or undersized data item is copied to the stack, a security breach may occur.
Stack (geology)
A stack or sea stack is a geological landform consisting of a steep and often vertical column or columns of rock in the sea near a coast, formed by wave erosion. Stacks are formed over time by wind and water, processes of coastal geomorphology. They are formed when part of a headland is eroded by hydraulic action, which is the force of the sea or water crashing against the rock. The force of the water weakens cracks in the headland, causing them to later collapse, forming free-standing stacks and even a small island. Without the constant presence of water, stacks also form when a natural arch collapses under gravity, due to sub-aerial processes like wind erosion. Erosion causes the arch to collapse, leaving the pillar of hard rock standing away from the coast, the stack. Eventually, erosion will cause the stack to collapse, leaving a stump. Stacks can provide important nesting locations for seabirds, and many are popular for rock climbing.
Isolated steep-sided, rocky oceanic islets typically of volcanic origin, are also loosely called "stacks" or "volcanic stacks".
Formation
Stacks typically form in horizontally bedded sedimentary or volcanic rocks, particularly on limestone cliffs. The medium hardness of these rocks means medium resistance to abrasive and attritive erosion. A more resistant layer may form a capstone. (Cliffs with weaker rock, such as claystone or highly jointed rock, tend to slump and erode too quickly to form stacks, while harder rocks such as granite erode in different ways.)
The formation process usually begins when the sea attacks lines of weakness, such as steep joints or small fault zones in a cliff face. These cracks then gradually get larger and turn into caves. If a cave wears through a headland, an arch forms. Further erosion causes the arch to collapse, leaving the pillar of hard rock standing away from the coast, the stack. Eventually, erosion will cause the stack to collapse, leaving a stump. This stump usually forms a small rock island, low enough for a high tide to submerge.
Lockheed Constellation
The Lockheed Constellation ("Connie") is a propeller-driven, four-engined airliner built by Lockheed Corporation starting in 1943. The Constellation series was the first civil airliner family to enter widespread use equipped with a pressurized cabin, enabling it to fly well above most bad weather, thus significantly improving the general safety and ease of commercial passenger air travel.
Several different models of the Constellation series were produced, although they all featured the distinctive triple-tail and dolphin-shaped fuselage. Most were powered by four 18-cylinder Wright R-3350 Duplex-Cyclones. In total, 856 were produced between 1943 and 1958 at Lockheed's plant in Burbank, California, and used as both a civil airliner and as a military and civilian cargo transport. Among their famous uses was during the Berlin and the Biafran airlifts. Three served as the presidential aircraft for Dwight D. Eisenhower, one of which is at the National Museum of the United States Air Force.
Design and development
Initial studies
Lockheed had been working on the L-044 Excalibur, a four-engined, pressurized airliner, since 1937. In 1939, Transcontinental and Western Airlines (TWA), at the instigation of major stockholder Howard Hughes, requested a 40-passenger transcontinental airliner with a range of 3,500 mi (5,600 km), well beyond the capabilities of the Excalibur design. TWA's requirements led to the L-049 Constellation, designed by Lockheed engineers, including Kelly Johnson and Hall Hibbard. Willis Hawkins, another Lockheed engineer, maintains that the Excalibur program was purely a cover for the Constellation.
Development of the Constellation
The Constellation's wing design was close to that of the Lockheed P-38 Lightning, differing mostly in size. The triple tail allowed the aircraft to fit into existing hangars, while features included hydraulically boosted controls and a deicing system used on wing and tail leading edges. The aircraft had a maximum speed over 340 mph (550 km/h), faster than that of a Japanese Zero fighter, a cruise speed of 300 mph (480 km/h), and a service ceiling of 24,000 ft (7,300 m).
According to Anthony Sampson in Empires of the Sky, Lockheed may have undertaken the intricate design, but Hughes's intercession in the design process drove the concept, shape, capabilities, appearance, and ethos. These rumors were discredited by Johnson. Howard Hughes and Jack Frye confirmed that the rumors were false in a letter dated November 1941.
Operational history
World War II
With the onset of World War II, the TWA aircraft entering production were converted to an order for C-69 Constellation military transport aircraft, with 202 aircraft intended for the United States Army Air Forces (USAAF). The first prototype (civil registration NX25600) flew on January 9, 1943, a short ferry hop from Burbank to Muroc Field for testing. Edmund T. "Eddie" Allen, on loan from Boeing, flew left seat, with Lockheed's own Milo Burcham as copilot. Rudy Thoren and Kelly Johnson were also aboard.
Lockheed proposed the model L-249 as a long-range bomber. It received the military designation XB-30, but the aircraft was not developed. A plan for a very long-range troop transport, the C-69B (L-349, ordered by Pan Am in 1940 as the L-149), was cancelled. A single C-69C (L-549), a 43-seat VIP transport, was built in 1945 at the Lockheed-Burbank plant.
The C-69 was mostly used as a high-speed, long-distance troop transport during the war. In total, 22 C-69s were built before the end of hostilities, but seven of these never entered military service, as they were converted to civilian L-049s on the assembly line. The USAAF cancelled the remainder of the order in 1945. Some aircraft remained in USAF service into the 1960s, serving as passenger ferries for the airline that relocated military personnel, wearing the livery of the Military Air Transport Service.
Postwar use
After World War II, the Constellation came into its own as a fast civilian airliner. Aircraft already in production for the USAAF as C-69 transports were finished as civilian airliners, with TWA receiving the first on 1 October 1945. TWA's first transatlantic proving flight departed Washington, D.C., on December 3, 1945, arriving in Paris on December 4 via Gander and Shannon.
TWA transatlantic service started on February 6, 1946, with a New York-Paris flight in a Constellation. On June 17, 1947, Pan American World Airways (Pan Am) opened the first-ever scheduled round-the-world service with its L-749 Clipper America. The famous flight "Pan Am 1" operated until 1982.
As the first pressurized airliner in widespread use, the Constellation helped establish affordable and comfortable air travel. Operators of Constellations included TWA, Eastern Air Lines, Pan Am, Air France, BOAC, KLM, Qantas, Lufthansa, Iberia Airlines, Panair do Brasil, TAP Portugal, Trans-Canada Air Lines (later renamed Air Canada), Aer Lingus, VARIG, Cubana de Aviación, Línea Aeropostal Venezolana, Northwest Airlines, and Avianca, the national airline of Colombia.
Records
Sleek and powerful, Constellations set many records. On April 17, 1944, the second production C-69, piloted by Howard Hughes and TWA president Jack Frye, flew from Burbank, California, to Washington, D.C., in 6 hours and 57 minutes, at an average of about 331 mph (533 km/h). On the return trip, the aircraft stopped at Wright Field in Ohio to give Orville Wright his last flight, more than 40 years after his historic first flight near Kitty Hawk, North Carolina. He commented that the Constellation's wingspan was longer than the distance of his first flight.
On September 29, 1957, a TWA L-1649A flew from Los Angeles to London in 18 hours and 32 minutes. The L-1649A holds the record for the longest-duration, nonstop passenger flight aboard a piston-powered airliner. On TWA's first London-to-San Francisco flight on October 1–2, 1957, the aircraft stayed aloft for 23 hours and 19 minutes.
Obsolescence
Jet airliners such as the de Havilland Comet, Boeing 707, Douglas DC-8, Convair 880, and Sud Aviation Caravelle rendered the Constellation obsolete. The first routes lost to jets were the long overseas routes, but Constellations continued to fly domestic routes. The last scheduled passenger flight of a Constellation in the contiguous United States was made by a TWA L749 on May 11, 1967, from Philadelphia to Kansas City, Missouri; the last scheduled passenger flight in North America was by Western Airlines' N86525 in Alaska, Anchorage to Yakutat to Juneau on 26 November 1968.
Constellations carried freight in later years, and were used on backup sections of Eastern Airlines' shuttle service between New York, Washington, and Boston until 1968. Propeller airliners were used on overnight freight runs into the 1990s, as their low speed was not an impediment. An Eastern Air Lines Connie holds the record for a New York-to-Washington flight, just over 30 minutes from takeoff to touchdown. The record was set before the Federal Aviation Administration (FAA) imposed speed restrictions at low altitude.
One of the reasons for the elegance of the aircraft was the dolphin-shaped fuselage, a continuously variable profile with no two bulkheads the same shape and a skin formed into compound curves, which was expensive to build. Manufacturers have since favored tube-shaped fuselages in airliner designs, as the cylindrical cross-section is more resistant to pressurization changes and less expensive to build.
After ending Constellation production, Lockheed chose not to develop a first-generation jetliner, sticking to its military business and production of the turboprop Lockheed L-188 Electra. Lockheed did not build a large passenger aircraft again until its L-1011 Tristar debuted in 1972. While a technological marvel, the L-1011 was a commercial failure, and Lockheed left the commercial airliner business permanently in 1983.
Variants
The initial military versions carried the Lockheed designation of L-049; as World War II came to a close, some were completed as civilian L-049 Constellations, followed by the L-149 (an L-049 modified to carry more fuel tanks).
The first purpose-built passenger Constellations were the more powerful L-649 and L-749 (which had more fuel in the outer wings), followed by the L-849 (an unbuilt model to use the R-3350 turbo-compound engines adopted for the L-1049) and the L-949 (an unbuilt, high-density seating-cum-freighter type of what would come to be called a "combi aircraft").
These were followed by the L-1049 Super Constellation (with a longer fuselage), the L-1149 (a proposal to use Allison turbine engines), the L-1249 (similar to the L-1149, built as the R7V-2/YC-121F), the L-1449 (an unbuilt proposal for a stretched L-1049G with a new wing and turbines), and the L-1549 (an unbuilt project to stretch the L-1449).
The final civilian variant was the L-1649 Starliner (an all-new wing with the L-1049G fuselage).
Military versions included the C-69 and C-121 for the Army Air Forces/Air Force, and the R7O/R7V-1 (L-1049B), WV-1 (L-749A), and WV-2 (L-1049B, widely known as the Willie Victor), along with many variant EC-121 designations, for the Navy.
Operators
After TWA's initial order was filled following World War II, customers rapidly accumulated, with over 800 aircraft built. In military service, the U.S. Navy and Air Force operated the EC-121 Warning Star variant until 1978, nearly 40 years after work on the L-049 began. Cubana de Aviación was the first airline in Latin America to operate Super Constellations.
Appearances in film
A TWA-liveried Connie appears in the 1957 film Funny Face, starring Audrey Hepburn and Fred Astaire. The footage shows takeoff, a brief shot in flight over Paris, and landing. Finally, it is visible in the background, parked on the tarmac alongside 1950s-era mobile passenger stairs. The footage begins at film time stamp 32:40. Additionally, a TWA-liveried Connie appears in the 1953 film How to Marry a Millionaire, featuring Lauren Bacall, Marilyn Monroe and Betty Grable.
Surviving aircraft
Commercial
On display
L-049
N90831 – on display at the Pima Air & Space Museum in Tucson, Arizona. This is a former C-69 transport, s/n 42-94549, that was converted for civilian service, and was one of the first TWA aircraft.
N86533 – on display at the TAM Museum, located in São Carlos, Brazil. Previously, it served as a children's attraction at the entrance of Silvio Pettirossi International Airport in Asunción, Paraguay. It is painted in the markings of Panair do Brasil.
N9412H – parked adjacent to a flight school and cafe at Greenwood Lake Airport in West Milford, New Jersey. It was delivered as Air France's first Constellation in June 1946 as L-049 F-BAZA, before being sold to Frank Lembo Enterprises in May 1976 for $45,000 for use as a restaurant and lounge. It was flown to the airport in July 1977, and, along with the airport, was sold to the State of New Jersey in 2000. In 2005, the interior was refurbished for use as a flight school office.
N2520B – on display in Aerosur livery, on the first ring road in Santa Cruz de la Sierra, Bolivia. It is known as El Avión Pirata.
L-749
F-ZVMV – on display at the Musée de l'Air et de l'Espace (The Museum of Air and Space) located at Paris-Le Bourget Airport near Le Bourget, France, 10 km north of Paris. It initially served with Pan American Airways, before being transferred to Air France, with which it served until 1960. Afterwards, it was used by the Compagnie Générale des Turbo-Machines (General Company of Turbomachinery) as an engine testbed until December 1974.
N749NL – on display at the Aviodrome at Lelystad Airport, the Netherlands (https://www.aviodrome.nl/en/collection/lockheed-constellation).
L-1049 Super Constellation
CF-TGE – on display at the Museum of Flight in Seattle, Washington. It is painted in the markings it carried during its service with Trans-Canada Air Lines from 1954 to the 1960s. After TCA service, it was sold to World Wide Airways and retired in Montreal by 1965; it was renovated as a restaurant and bar in and around the Montreal area, then sold and moved to Toronto, where it was used as a convention facility by the Regal Constellation Hotel. It was sold again and stored at Toronto Pearson International Airport. Finally, it was sold to the Museum of Flight, restored in Rome, New York, and shipped to Seattle for display.
44-0315 – on display at the Air Mobility Command Museum at Dover Air Force Base in Dover, Delaware. Last registered N1005C, it is painted to represent a USAF C-121C, but was never actually delivered to the Air Force.
D-ALIN – on display at the Flugausstellung Hermeskeil, near Hermeskeil, Germany. It is a former Lufthansa Super Constellation, and was the actual aircraft that Konrad Adenauer flew into Moscow in 1955, when he negotiated the release of German POWs.
D-ALEM – on display near Munich International Airport at Munich, Germany. Last registered F-BHML, it is painted to represent Super Constellation D-ALEM, Lufthansa's first long-haul aircraft of 1955.
IN315 – on display at the Naval Aviation Museum at Dabolim in Goa, India. This aircraft is a former Air India Super Constellation (VT-DHM Rani of Ellora) that was later transferred to the Indian Navy.
L-1649 Starliner
N974R – on display in front of the Fantasy of Flight attraction in Lakeland, Florida.
ZS-DVJ – on display at Rand Airport in Germiston, South Africa, in Trek Airways colours. It was previously kept at OR Tambo International Airport in the South African Airways Technical area. The aircraft is owned by the South African Airways Museum Society.
Under restoration or in storage
L-049
N7777G – painted in TWA colors (although this aircraft never flew for TWA), it is stored at the Large Item Storage facility for the UK Science Museum at Wroughton, near Swindon. This aircraft was used by the Rolling Stones to transport equipment during their 1973 Australian tour. It is the only Constellation in the United Kingdom.
L-1049 Super Constellation
F-BRAD – under restoration for display by the Amicale du Super Constellation at Nantes Airport in Nantes, France. It was delivered to Air France on November 2, 1953, and was upgraded to an L-1049G in 1956, serving until August 8, 1967, having totaled 24,284 hours in Air France's colors. After retirement, it was sent to Spain, where it was registered EC-BEN and briefly flew humanitarian and medevac missions in Biafra. Aero Fret bought it in 1968, brought it back to France, registered it as F-BRAD, and operated it on cargo hauls until 1974. When the Constellation landed in Nantes one last time to be scrapped, it was saved by Mr. Gaborit, who revamped it by his own modest means and parked it near the terminal, where it was accessible to visitors for a few years. The Chamber of Commerce and Industry of the Nantes-Atlantique Airport then bought it and contracted the Amicale du Super Constellation to carry out a complete restoration of the aircraft.
HI-542CT City of Miami – parked on an unused runway at the Rafael Hernández Airport in Aguadilla, Puerto Rico. It was struck by a runaway DC-4 on February 3, 1992, resulting in damage to the right wing and main spar.
N6937C Star of America – under restoration to airworthiness by the National Airline History Museum in Kansas City, Missouri. This aircraft was originally built in 1957, stored for several years, and then delivered to cargo carrier Slick Airways. It was restored in 1986 by the Save-a-Connie, Inc. organization, later renamed the National Airline History Museum. It was originally painted in red and white Save-a-Connie colors, but was later repainted in the 1950s livery of TWA to resemble TWA's original Star of America Constellation. The aircraft appeared at New York's John F. Kennedy International Airport at the original TWA terminal designed by Eero Saarinen to commemorate the 75th anniversary of the airline, with the paint scheme donated by TWA in Kansas City for the occasion. The Star of America has appeared at many airshows and was used in The Aviator, the 2004 film depicting the life of TWA's one-time owner Howard Hughes, the man often credited with helping design and develop the original Constellation series.
L-1649 Starliner
N7316C – under restoration by Lufthansa Technik North America in Auburn, Maine. This aircraft was purchased at auction in 2007, along with C/N 1038, by the Deutsche Lufthansa Berlin Foundation. Lufthansa has built a hangar at the airport, which allows the aircraft to be restored indoors. Lufthansa announced in March 2018 that it would be transported back to Germany and that further restoration decisions would be made after it arrived. As of the end of 2019, the plan was to restore the aircraft for static display in a museum. According to reports from the US, the aircraft was dismantled (as apparently was the Ju-52 D-AQUI) without the requisite documentation that would have allowed the return-to-flight work to continue.
N8083H – This aircraft was purchased at auction in 2007, along with C/N 1018, by the Deutsche Lufthansa Berlin Foundation, and stripped of all usable spares to support the restoration of C/N 1018. The aircraft was subsequently sold and transported to JFK International Airport to become a cocktail bar in the TWA Hotel, a retro-aviation themed hotel built on the former TWA Flight Center.
Military
Airworthy
C-121C
S/N 54-0156 – Flies with the Super Constellation Flyers Association out of Basel, as the Breitling Super Constellation. Its restoration was sponsored by Swiss watch manufacturer Breitling, and is now registered in the Swiss Aircraft registry as HB-RSC. This Constellation is one of two flying in the world.
S/N 54-0157 – Flies with the Historical Aircraft Restoration Society (HARS) out of Shellharbour Airport near Wollongong, Australia. Following its restoration, it was painted in pseudo-Qantas livery, including the Qantas logo on the tail, (with the usual Qantas lettering along the fuselage and on the wing-end fuel tanks replaced with the word "CONNIE") and registered as VH-EAG. This Constellation is the other of two flying in the world.
S/N 48-0613 Bataan – Restored to airworthiness by Lewis Air Legends in San Antonio, Texas. This aircraft was used as a personal transport by General Douglas MacArthur during the Korean War, and later by other Army general officers until 1966, when it was transferred to NASA. Following its permanent retirement in 1970, it was placed on display at a museum at Fort Rucker near Daleville, Alabama. It was acquired by the Planes of Fame Air Museum at Chino, California, in 1992, and overhauled into airworthy condition for a flight to Dothan, Alabama, where it received additional work. After a thorough restoration back to its original configuration with a "VIP interior", it was placed on display at the Planes of Fame secondary location in Valle, Arizona. Then, in 2015, it was sold to Lewis Air Legends, and prepped for a ferry flight to Chino, arriving there on January 14, 2016. On June 20, 2023, the Air Legends Foundation's Lockheed VC-121A Constellation took off on its first post-restoration flight from Chino Airport, and flew to EAA AirVenture Oshkosh 2023 in Oshkosh, Wisconsin.
On display
VC-121A
S/N 48-0609 – on display at Jeongseok Airport on Jeju Island, South Korea. It was donated to Korean Air in 2005, and restored to airworthy condition at Tucson, Arizona. It was then ferried to South Korea, where it made its final flight, under its own power, from Seoul to its current location for static display. It has been repainted in 1950s Korean Air colors, and rendered unable to fly by the presence of unserviceable engines.
S/N 48-0612 – on display at the Dutch National Aviation Museum Aviodrome. It was restored to airworthy condition and ferried from Tucson, Arizona, to the Netherlands, where restoration continued. It is now painted in the KLM livery of the 1950s, depicting a KLM Lockheed L-749A. Renamed Flevoland, this was the only airworthy example of the "short" version of the Constellation until an engine failure grounded the aircraft.
S/N 48-0614 Columbine – on display at the Pima Air and Space Museum in Tucson, Arizona. This aircraft was used by Dwight D. Eisenhower during his role as Supreme Headquarters Allied Powers Europe commander before he became president. It is on loan from the National Museum of the U.S. Air Force.
VC-121E
S/N 53-7885 Columbine III – on display at the National Museum of the United States Air Force at Wright-Patterson Air Force Base near Dayton, Ohio. Columbine III was used as Dwight D. Eisenhower's presidential aircraft, and was eventually retired to the museum in 1966, where it is now displayed in the museum's Presidential Gallery (Building 4). The interior of the aircraft is open to the public.
C-121C
S/N 54-0155 – on display at Lackland Air Force Base near San Antonio, Texas
S/N 54-0177 – on display at the National Air and Space Museum, Udvar-Hazy Center located at Dulles Airport in Virginia.
S/N 54-0180 – on display at Charleston Air Force Base near North Charleston, South Carolina.
C-121J
BuNo 131643 – From March 2020 onwards, the aircraft has been on static display at the Qantas Founders Outback Museum. It was formerly stored in derelict condition at Ninoy Aquino International Airport in Manila, Philippines, and was impounded at the airport from June 1988 to September 2014, when it was secured for removal and static preservation by the Qantas Founders Outback Museum, Longreach.
EC-121K
BuNo 137890 – on display at Tinker Air Force Base near Oklahoma City, Oklahoma.
BuNo 141297 – on display at the Museum of Aviation at Robins Air Force Base near Warner Robins, Georgia.
BuNo 141309 – on display at the Aerospace Museum of California at the former McClellan Air Force Base in North Highlands, California. This aircraft is a former navy aircraft on loan from the National Museum of the United States Air Force. It is painted in the markings of a USAF EC-121 Warning Star.
BuNo 141311 – on display at the Chanute Aerospace Museum at the former Chanute AFB in Rantoul, Illinois.
BuNo 143221 – on display at the National Museum of Naval Aviation at NAS Pensacola near Pensacola, Florida.
EC-121T
S/N 52-3418 – on display at the Combat Air Museum in Topeka, Kansas. This aircraft was delivered to the Air Force in October 1954. It served an additional 22 years, until it was retired and flown to Davis Monthan AFB for storage on April 7, 1976. In June 1981, it was ferried to Topeka, Kansas, with Frank Lang in command.
S/N 52-3425 – on display at the Peterson Air and Space Museum at Peterson AFB in Colorado Springs, Colorado. Previously assigned to the 966th AEWCS at McCoy AFB, Florida, and then the 79th AEWCS at Homestead AFB, Florida. It was the last operational EC-121 and was deployed by the 79th AEWCS to NAS Keflavik, Iceland. It was delivered to Peterson AFB in October 1978.
S/N 53-0548 – on display at the Yanks Air Museum in Chino, California. Stored at Camarillo Airport, from 2000 to 2012, this aircraft made its final flight, to Chino, on January 14, 2012.
S/N 53-0554 – on display at the Pima Air & Space Museum in Tucson, Arizona. It is undergoing restoration of its radome.
S/N 53-0555 – on display at the National Museum of the United States Air Force at Wright-Patterson Air Force Base near Dayton, Ohio, in the museum's Southeast Asia Gallery (Building 2).
Under restoration or in storage
WV-1
BuNo 124438 – under restoration to airworthiness by Gordon Cole at Salina, Kansas. This aircraft was the first of two WV-1s delivered to the U.S. Navy in 1949. Essentially, it was a prototype for the EC-121 Warning Star that followed. Retired from the Navy in 1957, it served the FAA from 1958 to 1966, before being flown to Salina in 1967 for retirement. It remains parked there, and was last flown in 1992.
VC-121A
S/N 48-0610 Columbine II – under restoration to airworthiness by Dynamic Aviation in Bridgewater, Virginia. This aircraft served as the first Air Force One during the presidency of Dwight D. Eisenhower, before it was replaced by Columbine III as Eisenhower's primary presidential aircraft in 1954. After a long period of storage at Marana Regional Airport near Tucson, Arizona, this aircraft made its first flight since 2003 in March 2016, when it was ferried to Bridgewater for additional restoration.
EC-121T
S/N 51-3417 – in storage at Helena Regional Airport in Helena, Montana. It was acquired by the Castle Air Museum of Atwater, California, in 2014.
Specifications (L-1049G Super Constellation)
Accidents and incidents
Genetic linkage

Genetic linkage is the tendency of DNA sequences that are close together on a chromosome to be inherited together during the meiosis phase of sexual reproduction. Two genetic markers that are physically near to each other are unlikely to be separated onto different chromatids during chromosomal crossover, and are therefore said to be more linked than markers that are far apart. In other words, the nearer two genes are on a chromosome, the lower the chance of recombination between them, and the more likely they are to be inherited together. Markers on different chromosomes are perfectly unlinked, although the penetrance of a potentially deleterious allele may be influenced by other alleles, including alleles located on chromosomes other than the one carrying that allele.
Genetic linkage is the most prominent exception to Gregor Mendel's Law of Independent Assortment. The first experiment to demonstrate linkage was carried out in 1905. At the time, the reason why certain traits tend to be inherited together was unknown. Later work revealed that genes are physical structures related by physical distance.
The typical unit of genetic linkage is the centimorgan (cM). A distance of 1 cM between two markers means that the markers are separated by recombination on average once per 100 meiotic products, thus once per 50 meioses.
Discovery
Gregor Mendel's Law of Independent Assortment states that every trait is inherited independently of every other trait. But shortly after Mendel's work was rediscovered, exceptions to this rule were found. In 1905, the British geneticists William Bateson, Edith Rebecca Saunders and Reginald Punnett cross-bred pea plants in experiments similar to Mendel's. They were interested in trait inheritance in the sweet pea and were studying two genes—the gene for flower colour (P, purple, and p, red) and the gene affecting the shape of pollen grains (L, long, and l, round). They crossed the pure lines PPLL and ppll and then self-crossed the resulting PpLl lines.
According to Mendelian genetics, the expected phenotypes would occur in a 9:3:3:1 ratio of PL:Pl:pL:pl. To their surprise, they observed an increased frequency of PL and pl and a decreased frequency of Pl and pL:
Their experiment revealed linkage between the P and L alleles and the p and l alleles. The frequency of P occurring together with L and of p occurring together with l is greater than that of the recombinant Pl and pL. The recombination frequency is more difficult to compute in an F2 cross than a backcross, but the lack of fit between observed and expected numbers of progeny in the above table indicates it is less than 50%. This indicated that two factors interacted in some way to create this difference by masking the appearance of the other two phenotypes. This led to the conclusion that some traits are related to each other because of their close proximity to each other on a chromosome.
The understanding of linkage was expanded by the work of Thomas Hunt Morgan. Morgan's observation that the amount of crossing over between linked genes differs led to the idea that crossover frequency might indicate the distance separating genes on the chromosome. The centimorgan, which expresses the frequency of crossing over, is named in his honour.
Linkage map
A linkage map (also known as a genetic map) is a table for a species or experimental population that shows the position of its known genes or genetic markers relative to each other in terms of recombination frequency, rather than a specific physical distance along each chromosome. Linkage maps were first developed by Alfred Sturtevant, a student of Thomas Hunt Morgan.
A linkage map is a map based on the frequencies of recombination between markers during crossover of homologous chromosomes. The greater the frequency of recombination (segregation) between two genetic markers, the further apart they are assumed to be. Conversely, the lower the frequency of recombination between the markers, the smaller the physical distance between them. Historically, the markers originally used were detectable phenotypes (enzyme production, eye colour) derived from coding DNA sequences; eventually, confirmed or assumed noncoding DNA sequences such as microsatellites or those generating restriction fragment length polymorphisms (RFLPs) have been used.
Linkage maps help researchers to locate other markers, such as other genes, by testing for genetic linkage with already known markers. In the early stages of developing a linkage map, the data are used to assemble linkage groups, sets of genes which are known to be linked. As knowledge advances, more markers can be added to a group, until the group covers an entire chromosome. For well-studied organisms the linkage groups correspond one-to-one with the chromosomes.
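As a minimal illustration of how pairwise recombination frequencies constrain marker order within a linkage group, consider a three-point sketch in Python (the marker names, frequencies, and the helper `order_three_markers` are invented for this example, not part of any standard package):

```python
def order_three_markers(pairwise_rf):
    """Infer the linear order of three linked markers from pairwise
    recombination frequencies: the pair with the largest frequency
    lies at the two ends, and the remaining marker sits between them.

    pairwise_rf: dict mapping frozenset({m1, m2}) -> recombination frequency
    """
    pairs = list(pairwise_rf)
    ends = max(pairs, key=lambda p: pairwise_rf[p])          # most distant pair
    middle = (set().union(*pairs) - ends).pop()              # the remaining marker
    left, right = sorted(ends)
    return (left, middle, right)

# Invented example frequencies: A-B 8%, B-C 12%, A-C 18%
rf = {frozenset("AB"): 0.08, frozenset("BC"): 0.12, frozenset("AC"): 0.18}
print(order_three_markers(rf))  # ('A', 'B', 'C')
```

Real map construction extends this idea to many markers at once, but the principle is the same: larger recombination frequencies imply greater map separation.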
A linkage map is not a physical map (such as a radiation reduced hybrid map) or gene map.
Linkage analysis
Linkage analysis is a genetic method that searches for chromosomal segments that cosegregate with the ailment phenotype through families. It can be used to map genes for both binary and quantitative traits. Linkage analysis may be either parametric (if we know the relationship between phenotypic and genetic similarity) or non-parametric. Parametric linkage analysis is the traditional approach, whereby the probability that a gene important for a disease is linked to a genetic marker is studied through the LOD score, which assesses the probability that a given pedigree, where the disease and the marker are cosegregating, is due to the existence of linkage (with a given linkage value) or to chance. Non-parametric linkage analysis, in turn, studies the probability of an allele being identical by descent with itself.
Parametric linkage analysis
The LOD score (logarithm (base 10) of odds), developed by Newton Morton, is a statistical test often used for linkage analysis in human, animal, and plant populations. The LOD score compares the likelihood of obtaining the test data if the two loci are indeed linked, to the likelihood of observing the same data purely by chance. Positive LOD scores favour the presence of linkage, whereas negative LOD scores indicate that linkage is less likely. Computerised LOD score analysis is a simple way to analyse complex family pedigrees in order to determine the linkage between Mendelian traits (or between a trait and a marker, or two markers).
The method is described in greater detail by Strachan and Read. Briefly, it works as follows:
Establish a pedigree
Make a number of estimates of recombination frequency
Calculate a LOD score for each estimate
The estimate with the highest LOD score will be considered the best estimate
The LOD score is calculated as follows:

LOD = log10 [ ((1 − θ)^NR × θ^R) / 0.5^(NR + R) ]

NR denotes the number of non-recombinant offspring, and R denotes the number of recombinant offspring. The reason 0.5 is used in the denominator is that any alleles that are completely unlinked (e.g. alleles on separate chromosomes) have a 50% chance of recombination, due to independent assortment. θ is the recombinant fraction, i.e. the fraction of births in which recombination has happened between the studied genetic marker and the putative gene associated with the disease. Thus, it is equal to R / (NR + R).
By convention, a LOD score greater than 3.0 is considered evidence for linkage, as it indicates 1000 to 1 odds that the linkage being observed did not occur by chance. On the other hand, a LOD score less than −2.0 is considered evidence to exclude linkage. Although it is very unlikely that a LOD score of 3 would be obtained from a single pedigree, the mathematical properties of the test allow data from a number of pedigrees to be combined by summing their LOD scores. A LOD score of 3 translates to a p-value of approximately 0.05, and no multiple testing correction (e.g. Bonferroni correction) is required.
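The calculation just described can be sketched in Python (a minimal illustration for a single pedigree; the function names `lod_score` and `best_estimate` are invented here, and real pedigree-analysis software does far more):

```python
import math

def lod_score(nr, r, theta):
    """LOD score for nr non-recombinant and r recombinant offspring at
    recombination fraction theta: log10 of the likelihood of the data
    under linkage versus under free recombination (theta = 0.5)."""
    linked = (1 - theta) ** nr * theta ** r
    unlinked = 0.5 ** (nr + r)
    return math.log10(linked / unlinked)

def best_estimate(nr, r):
    """Steps 2-4 of the procedure above: try a grid of theta values
    and keep the one with the highest LOD score."""
    grid = [i / 100 for i in range(1, 50)]  # theta from 0.01 to 0.49
    return max(grid, key=lambda t: lod_score(nr, r, t))
```

For example, with 8 non-recombinant and 2 recombinant offspring, the score peaks at θ = R/(NR + R) = 0.2, and because LOD scores are log-likelihood ratios, scores from independent pedigrees can simply be summed.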
Limitations
Linkage analysis has a number of methodological and theoretical limitations that can significantly increase the type-1 error rate and reduce the power to map human quantitative trait loci (QTL). While linkage analysis was successfully used to identify genetic variants that contribute to rare disorders such as Huntington disease, it did not perform that well when applied to more common disorders such as heart disease or different forms of cancer. An explanation for this is that the genetic mechanisms affecting common disorders are different from those causing some rare disorders.
Recombination frequency
Recombination frequency is a measure of genetic linkage and is used in the creation of a genetic linkage map. Recombination frequency (θ) is the frequency with which a single chromosomal crossover will take place between two genes during meiosis. A centimorgan (cM) is a unit that describes a recombination frequency of 1%. In this way we can measure the genetic distance between two loci, based upon their recombination frequency. This is a good estimate of the real distance. However, a double crossover between two loci appears as no recombination, so in that case we cannot tell that crossovers took place. If the loci being analysed are very close (less than 7 cM), a double crossover is very unlikely. As distances become greater, the likelihood of a double crossover increases, and one will systematically underestimate the genetic distance between the two loci unless an appropriate mathematical model is used.
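The standard mathematical models for this correction are mapping functions, which can be sketched in Python (an illustrative sketch with invented function names, not a genetics package; Haldane's function assumes crossovers occur independently with no interference, while Kosambi's allows partial interference):

```python
import math

def haldane_cm(theta):
    """Map distance in centimorgans under Haldane's mapping function,
    which assumes crossovers occur independently (no interference)."""
    return -50.0 * math.log(1.0 - 2.0 * theta)

def kosambi_cm(theta):
    """Map distance in centimorgans under Kosambi's mapping function,
    which models partial crossover interference."""
    return 25.0 * math.log((1.0 + 2.0 * theta) / (1.0 - 2.0 * theta))
```

For small recombination fractions both reduce to 100·θ cM (θ = 0.01 gives roughly 1 cM); for larger fractions the corrected distances exceed the raw percentage, reflecting the hidden double crossovers.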
Double crossover is more of a historical concern for plants; in animals, double crossovers happen rarely. In humans, for example, a chromosome has on average two crossovers during meiosis. Furthermore, modern geneticists have mapped enough genes that only nearby genes need to be analysed for linkage, unlike the early days when only a few genes were known.
During meiosis, chromosomes assort randomly into gametes, such that the segregation of alleles of one gene is independent of alleles of another gene. This is stated in Mendel's Second Law and is known as the law of independent assortment. The law of independent assortment always holds true for genes that are located on different chromosomes, but for genes that are on the same chromosome, it does not always hold true.
As an example of independent assortment, consider the crossing of the pure-bred homozygote parental strain with genotype AABB with a different pure-bred strain with genotype aabb. A and a and B and b represent the alleles of genes A and B. Crossing these homozygous parental strains will result in F1 generation offspring that are double heterozygotes with genotype AaBb. The F1 offspring AaBb produces gametes that are AB, Ab, aB, and ab with equal frequencies (25%) because the alleles of gene A assort independently of the alleles for gene B during meiosis. Note that 2 of the 4 gametes (50%)—Ab and aB—were not present in the parental generation. These gametes represent recombinant gametes. Recombinant gametes are those gametes that differ from both of the haploid gametes that made up the original diploid cell. In this example, the recombination frequency is 50% since 2 of the 4 gametes were recombinant gametes.
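The counting in this example can be written as a short Python sketch (gamete labels follow the AB/ab notation above; the counts and the helper `recombination_frequency` are illustrative, not from any standard library):

```python
def recombination_frequency(gamete_counts, parental_types):
    """Fraction of observed gametes that are recombinant, i.e. whose
    allele combination matches neither parental gamete type."""
    total = sum(gamete_counts.values())
    recombinant = sum(count for gamete, count in gamete_counts.items()
                      if gamete not in parental_types)
    return recombinant / total

# The AaBb example above: four gamete types at equal frequency.
counts = {"AB": 25, "Ab": 25, "aB": 25, "ab": 25}
print(recombination_frequency(counts, parental_types={"AB", "ab"}))  # prints 0.5
```

For linked genes the recombinant classes would be observed less often, and the same calculation would return a value below 0.5.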
The recombination frequency will be 50% when two genes are located on different chromosomes or when they are widely separated on the same chromosome. This is a consequence of independent assortment.
When two genes are close together on the same chromosome, they do not assort independently and are said to be linked. Whereas genes located on different chromosomes assort independently and have a recombination frequency of 50%, linked genes have a recombination frequency that is less than 50%.
As an example of linkage, consider the classic experiment by William Bateson and Reginald Punnett. They were interested in trait inheritance in the sweet pea and were studying two genes—the gene for flower colour (P, purple, and p, red) and the gene affecting the shape of pollen grains (L, long, and l, round). They crossed the pure lines PPLL and ppll and then self-crossed the resulting PpLl lines. According to Mendelian genetics, the expected phenotypes would occur in a 9:3:3:1 ratio of PL:Pl:pL:pl. To their surprise, they observed an increased frequency of PL and pl and a decreased frequency of Pl and pL (see table below).
Their experiment revealed linkage between the P and L alleles and the p and l alleles. The frequency of P occurring together with L and of p occurring together with l is greater than that of the recombinant Pl and pL. The recombination frequency is more difficult to compute in an F2 cross than a backcross, but the lack of fit between observed and expected numbers of progeny in the above table indicates it is less than 50%.
The progeny in this case received two dominant alleles linked on one chromosome (referred to as coupling or cis arrangement). However, after crossover, some progeny could have received one parental chromosome with a dominant allele for one trait (e.g. Purple) linked to a recessive allele for a second trait (e.g. round) with the opposite being true for the other parental chromosome (e.g. red and Long). This is referred to as repulsion or a trans arrangement. The phenotype here would still be purple and long but a test cross of this individual with the recessive parent would produce progeny with much greater proportion of the two crossover phenotypes. While such a problem may not seem likely from this example, unfavourable repulsion linkages do appear when breeding for disease resistance in some crops.
The two possible arrangements, cis and trans, of alleles in a double heterozygote are referred to as gametic phases, and phasing is the process of determining which of the two is present in a given individual.
When two genes are located on the same chromosome, the chance of a crossover producing recombination between the genes is related to the distance between the two genes. Thus, the use of recombination frequencies has been used to develop linkage maps or genetic maps.
However, it is important to note that recombination frequency tends to underestimate the distance between two linked genes. This is because as the two genes are located farther apart, the chance of a double or even number of crossovers between them also increases. A double or even number of crossovers between the two genes results in the alleles being cosegregated to the same gamete, yielding parental progeny instead of the expected recombinant progeny. Mapping functions such as the Kosambi and Haldane transformations attempt to correct for multiple crossovers.
Linkage of genetic sites within a gene
In the early 1950s the prevailing view was that the genes in a chromosome are discrete entities, indivisible by genetic recombination and arranged like beads on a string. During 1955 to 1959, Benzer performed genetic recombination experiments using rII mutants of bacteriophage T4. He found that, on the basis of recombination tests, the sites of mutation could be mapped in a linear order. This result provided evidence for the key idea that the gene has a linear structure equivalent to a length of DNA with many sites that can independently mutate.
Edgar et al. performed mapping experiments with r mutants of bacteriophage T4 showing that recombination frequencies between rII mutants are not strictly additive. The recombination frequency from a cross of two rII mutants (a x d) is usually less than the sum of recombination frequencies for adjacent internal sub-intervals (a x b) + (b x c) + (c x d). Although not strictly additive, a systematic relationship was observed that likely reflects the underlying molecular mechanism of genetic recombination.
Variation of recombination frequency
While recombination of chromosomes is an essential process during meiosis, crossover frequencies vary widely across organisms and within species. Sexually dimorphic rates of recombination are termed heterochiasmy, and are observed more often than a common rate between males and females. In mammals, females often have a higher rate of recombination than males. It is theorised that unique selection pressures or meiotic drivers influence the difference in rates. The difference may also reflect the vastly different environments and conditions of meiosis in oogenesis and spermatogenesis.
Genes affecting recombination frequency
Mutations in genes that encode proteins involved in the processing of DNA often affect recombination frequency. In bacteriophage T4, mutations that reduce expression of the replicative DNA polymerase [gene product 43 (gp43)] increase recombination (decrease linkage) several fold. The increase in recombination may be due to replication errors by the defective DNA polymerase that are themselves recombination events such as template switches, i.e. copy choice recombination events. Recombination is also increased by mutations that reduce the expression of DNA ligase (gp30) and dCMP hydroxymethylase (gp42), two enzymes employed in DNA synthesis.
Recombination is reduced (linkage increased) by mutations in genes that encode proteins with nuclease functions (gp46 and gp47) and a DNA-binding protein (gp32). Mutation in the bacteriophage uvsX gene also substantially reduces recombination. The uvsX gene is analogous to the well-studied recA gene of Escherichia coli, which plays a central role in recombination.
Meiosis indicators
With very large pedigrees or with very dense genetic marker data, such as from whole-genome sequencing, it is possible to precisely locate recombinations. With this type of genetic analysis, a meiosis indicator is assigned to each position of the genome for each meiosis in a pedigree. The indicator records which copy of the parental chromosome contributed to the transmitted gamete at that position. For example, if the allele from the 'first' copy of the parental chromosome is transmitted, a '0' might be assigned to that meiosis. If the allele from the 'second' copy of the parental chromosome is transmitted, a '1' would be assigned to that meiosis. The two alleles in the parent came, one each, from two grandparents. These indicators are then used to determine identical-by-descent (IBD) states or inheritance states, which are in turn used to identify genes responsible for diseases.
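The indicator assignment described above can be sketched as a toy example. The marker alleles and haplotypes below are invented for illustration; real pipelines must also handle homozygous and ambiguous sites, which this sketch assumes away:

```python
# Phased parental haplotypes at five heterozygous marker positions
# (copy 0 inherited from one grandparent, copy 1 from the other).
parent_hap0 = ["A", "C", "G", "T", "A"]
parent_hap1 = ["G", "T", "C", "C", "G"]

# Alleles observed in the transmitted gamete at the same positions.
gamete = ["A", "C", "C", "C", "G"]

def meiosis_indicators(hap0, hap1, gamete):
    """Assign 0 or 1 per position: which parental copy was transmitted.
    Assumes the parent is heterozygous at every marker shown."""
    indicators = []
    for a0, a1, g in zip(hap0, hap1, gamete):
        if g == a0:
            indicators.append(0)
        elif g == a1:
            indicators.append(1)
        else:
            raise ValueError("gamete allele matches neither parental copy")
    return indicators

ind = meiosis_indicators(parent_hap0, parent_hap1, gamete)
print(ind)  # [0, 0, 1, 1, 1]

# A switch in the indicator between adjacent markers localizes a crossover.
crossovers = sum(ind[i] != ind[i + 1] for i in range(len(ind) - 1))
print(crossovers)  # 1
```

Runs of identical indicators across individuals are what define the shared IBD segments used in downstream disease-gene mapping.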
| Biology and health sciences | Genetics | Biology |
274227 | https://en.wikipedia.org/wiki/Sunbird | Sunbird | Sunbirds and spiderhunters make up the family Nectariniidae of passerine birds. They are small, slender passerines from the Old World, usually with downward-curved bills. Many are brightly coloured, often with iridescent feathers, particularly in the males. Many species also have especially long tail feathers. Their range extends through most of Africa to the Middle East, South Asia, South-east Asia and southern China, to Indonesia, New Guinea and northern Australia. Species diversity is highest in equatorial regions.
There are 151 species in 16 genera. The family name reflects the fact that most sunbirds feed largely on nectar, though they will also catch insects and spiders, especially when feeding their young. Flowers that prevent access to their nectar because of their shape (for example, very long and narrow flowers) are simply punctured at the base near the nectaries, from which the birds sip the nectar. Fruit is also part of the diet of some species. Their flight is fast and direct, thanks to their short wings.
The sunbirds have counterparts in two very distantly related groups: the hummingbirds of the Americas and the honeyeaters of Australia. The resemblances are due to convergent evolution brought about by a similar nectar-feeding lifestyle. Some sunbird species can take nectar by hovering like a hummingbird, but they usually perch to feed.
Description
The family ranges in size from the 5-gram black-bellied sunbird to the spectacled spiderhunter, at about 45 grams. Like the hummingbirds, sunbirds are strongly sexually dimorphic, with the males usually brilliantly plumaged in iridescent colours. In addition to this the tails of many species are longer in the males, and overall the males are larger. Sunbirds have long thin down-curved bills and brush-tipped tubular tongues, both adaptations to their nectar feeding. The spiderhunters, of the genus Arachnothera, are distinct in appearance from the other members of the family. They are typically larger than the other sunbirds, with drab brown plumage that is the same for both sexes, and long, down-curved beaks.
In metabolic behaviour similar to that of Andean hummingbirds, species of sunbirds that live at high altitudes or latitudes will enter torpor while roosting at night, lowering their body temperature and entering a state of low activity and responsiveness.
The moulting regimes of sunbirds are complex, differing between species. Many species have no eclipse plumage, but do have juvenile plumage. Some species show duller plumage in the off-season. In the dry months of June–August, male copper sunbirds and variable sunbirds lose much of their metallic sheen. In some instances, different populations of the same species display different moulting regimes.
Distribution and habitat
Sunbirds are a tropical Old World family, with representatives in Africa, Asia and Australasia. In Africa they are found mostly in sub-Saharan Africa and Madagascar but are also distributed in Egypt. In Asia the group occurs along the coasts of the Red Sea as far north as Israel, and along the Mediterranean as far north as Beirut, with a gap in their distribution across inland Syria and Iraq, and resuming in Iran, from where the group occurs continuously as far as southern China and Indonesia. In Australasia the family occurs in New Guinea, north eastern Australia and the Solomon Islands. They are generally not found on oceanic islands, with the exception of the Seychelles. The greatest variety of species is found in Africa, where the group probably arose. Most species are sedentary or short-distance seasonal migrants. Sunbirds occur over the entire family's range, whereas the spiderhunters are restricted to Asia.
The sunbirds and spiderhunters occupy a wide range of habitats. A majority of species are found in primary rainforest, but other habitats used by the family include disturbed secondary forest, open woodland, open scrub and savannah, coastal scrub and alpine forest. Some species have readily adapted to human-modified landscapes such as plantations, gardens and agricultural land. Many species are able to occupy a wide range of habitats from sea level to 4900 m.
Behaviour and ecology
Sunbirds are active diurnal birds that generally occur in pairs or occasionally in small family groups. A few species occasionally gather in larger groups, and sunbirds will join with other birds to mob potential predators, although sunbirds will also aggressively target other species, even non-predators, when defending their territories.
Breeding
Sunbirds that breed outside of the equatorial regions are mostly seasonal breeders, with the majority of them breeding in the wet season. This timing reflects the increased availability of insect prey for the growing young. Where species, like the buff-throated sunbird, breed in the dry season, it is thought to be associated with the flowering of favoured food plants. Species of sunbird in the equatorial areas breed throughout the year. They are generally monogamous and often territorial, although a few species of sunbirds have lekking behaviour.
The nests of sunbirds are generally purse-shaped, enclosed, suspended from thin branches with generous use of spiderweb. The nests of the spiderhunters are different, both from the sunbirds and in some cases from each other. Some, like the little spiderhunter, are small woven cups attached to the underside of large leaves; that of the yellow-eared spiderhunter is similarly attached but is a long tube. The nests of spiderhunters are inconspicuous, in contrast to those of the other sunbirds which are more visible. In most species the female alone constructs the nest. Up to four eggs are laid. The female builds the nest and incubates the eggs alone, although the male assists in rearing the nestlings. In the spiderhunters both sexes help to incubate the eggs. The nests of sunbirds and spiderhunters are often targeted by brood parasites such as cuckoos and honeyguides.
Pollination
As nectar is a primary food source for sunbirds, they are important pollinators in African ecosystems. Sunbird-pollinated flowers are typically long, tubular, and red-to-orange in colour, showing convergent evolution with many hummingbird-pollinated flowers in the Americas. A key difference is that sunbirds cannot hover, so sunbird-pollinated flowers and inflorescences are typically sturdier than hummingbird-pollinated flowers, with an appropriate landing spot from which the bird can feed. Sunbirds are critical pollinators for many iconic African plants, including proteas, aloes, Erica, Erythrina coral trees, and bird-of-paradise flowers. Specialization on sunbirds vs other pollinators is thought to have contributed to plant speciation, including the exceptionally high floral diversity in southern Africa.
Relationship with humans
Overall the family has fared better than many others, with only seven species considered to be threatened with extinction. Most species are fairly resistant to changes in habitat, and while attractive the family is not sought after by the cagebird trade, as they have what is considered an unpleasant song and are tricky to keep alive. Sunbirds are considered attractive birds and readily enter gardens where flowering plants are planted to attract them. There are a few negative interactions, for example the scarlet-chested sunbird is considered a pest in cocoa plantations as it spreads parasitic mistletoes.
List of genera
The family contains 151 species divided into 16 genera. For more detail, see the list of sunbird species.
| Biology and health sciences | Passerida | null |
274231 | https://en.wikipedia.org/wiki/Cepheus%20%28constellation%29 | Cepheus (constellation) | Cepheus is a constellation in the deep northern sky, named after Cepheus, a king of Aethiopia in Greek mythology. It is one of the 48 constellations listed by the second-century astronomer Ptolemy, and it remains one of the 88 modern constellations.
The constellation's brightest star is Alderamin (Alpha Cephei), with an apparent magnitude of 2.5. Delta Cephei is the prototype of an important class of star known as a Cepheid variable. RW Cephei, an orange hypergiant, and the red supergiants Mu Cephei, MY Cephei, VV Cephei, V381 Cephei, and V354 Cephei are among the largest stars known. In addition, Cepheus also has the hyperluminous quasar S5 0014+81, which hosts an ultramassive black hole in its core, reported at 40 billion solar masses, about 10,000 times more massive than the central black hole of the Milky Way, making it among the most massive black holes currently known.
History and mythology
Cepheus was the King of Aethiopia. He was married to Cassiopeia and was the father of Andromeda, both of whom are immortalized as modern day constellations along with Cepheus.
Features
Alderamin, also known as Alpha Cephei, is the brightest star in the constellation, with an apparent magnitude of 2.51. Gamma Cephei, also known as Errai, is the second-brightest star in the constellation, with an apparent magnitude of 3.21. It is a binary star, made up of an orange giant or subgiant and a red dwarf. The primary component hosts one exoplanet, Gamma Cephei Ab (Tadmor). Delta Cephei is a yellow-hued supergiant star 980 light-years from Earth and the prototype of the class of the Cepheid variables. It was discovered to be variable by John Goodricke in 1784. It varies between 3.5m and 4.4m over a period of 5 days and 9 hours. The Cepheids are a class of pulsating variable stars; Delta Cephei has a minimum size of 40 solar diameters and a maximum size of 46 solar diameters. It is also a double star; the primary star has a wide-set blue-hued companion of magnitude 6.3.
There are four red supergiants in the constellation that are visible to the naked eye. Mu Cephei is also known as Herschel's Garnet Star due to its deep red colour. It is a semiregular variable star with a minimum magnitude of 5.1 and a maximum magnitude of 3.4. Its period is approximately 2 years. The star's radius has been estimated to be from to . If it were placed at the center of the Solar System, it would likely extend past the orbit of Jupiter. The second, VV Cephei A, is a semiregular variable star, located approximately 5,000 light-years from Earth. It has a minimum magnitude of 5.4 and a maximum magnitude of 4.8, and is paired with a blue main sequence star called VV Cephei B. The red supergiant primary is around 1,050 times larger than the Sun. VV Cephei is also an unusually long-period eclipsing binary, but the eclipses, which occur every 20.3 years, are too faint to be observed with the unaided eye. The third, Zeta Cephei, is not as large as Mu Cephei and VV Cephei A with a diameter less than 200 times that of the Sun; however, its surface would lie between the orbits of Venus and Earth if placed at the center of the Solar System. Zeta Cephei has an apparent magnitude of 3.35, being the fourth-brightest star in the constellation. The last and faintest is V381 Cephei Aa with a maximum magnitude of 5.5. It is part of a triple star system similar to VV Cephei, and has a diameter 980 times that of the Sun. All four stars have initial masses more than eight times that of the Sun and are accepted core-collapse supernova candidates.
Nu Cephei is a blue supergiant similar to Deneb with an initial mass of over 20 solar masses. It belongs to the Cepheus OB2 stellar association along with Mu Cephei and VV Cephei, which have similar initial masses.
There are several prominent double stars and binary stars in Cepheus. Omicron Cephei is a binary star with a period of 800 years. The system, 211 light-years from Earth, consists of an orange-hued giant primary of magnitude 4.9 and a secondary of magnitude 7.1. Xi Cephei is another binary star, 102 light-years from Earth, with a period of 4,000 years. It has a blue-white primary of magnitude 4.4 and a yellow secondary of magnitude 6.5.
Krüger 60 is an 11th-magnitude binary star consisting of two red dwarfs. The star system is one of the nearest, being only 13 light-years away from Earth. It was once proposed as a possible home system for 2I/Borisov, the first accepted interstellar comet, but this was later rejected.
Deep-sky objects
NGC 188 is an open cluster that has the distinction of being the closest open cluster to the north celestial pole, as well as one of the oldest-known open clusters.
NGC 6946 is a spiral galaxy in which ten supernovae have been observed, more than in any other galaxy. It is sometimes called the Fireworks Galaxy.
IC 469 is another spiral galaxy, characterized by a compact nucleus, of oval shape, with perceptible side arms.
The nebula NGC 7538 is home to the largest-yet-discovered protostar.
NGC 7023 is a reflection nebula with an associated star cluster (Collinder 429); it has an overall magnitude of 7.7 and is 1,400 light-years from Earth. The nebula and cluster are located near Beta Cephei and T Cephei.
S 155, also known as the Cave Nebula, is a dim and very diffuse bright nebula within a larger nebula complex containing emission, reflection, and dark nebulosity.
The quasar 6C B0014+8120 is one of the most powerful objects in the universe, powered by a supermassive black hole which is as massive as 40 billion Suns.
Visualizations
Cepheus is most commonly depicted as holding his arms aloft, praying for the deities to spare the life of Andromeda. He also is depicted as a more regal monarch sitting on his throne.
Equivalents
In Chinese astronomy, the stars of the constellation Cepheus are found in two areas: the Purple Forbidden enclosure (紫微垣, Zǐ Wēi Yuán) and the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ).
Namesakes
USS Cepheus (AKA-18) and USS Cepheus (AK-265), United States Navy ships.
Update 3.4 "Cepheus" of the video game Stellaris
| Physical sciences | Other | Astronomy |
274471 | https://en.wikipedia.org/wiki/Gunboat | Gunboat | A gunboat is a naval watercraft designed for the express purpose of carrying one or more guns to bombard coastal targets, as opposed to those military craft designed for naval warfare, or for ferrying troops or supplies.
History
Pre-steam era
In the age of sail, a gunboat was usually a small undecked vessel carrying a single smoothbore cannon in the bow, or just two or three such cannons. A gunboat could carry one or two masts or be oar-powered only, but the single-masted version of about length was most typical. Some types of gunboats carried two cannon, or else mounted a number of swivel guns on the railings.
The small gunboat had advantages: if it only carried a single cannon, the boat could manoeuvre in shallow or restricted areas – such as rivers or lakes – where larger ships could sail only with difficulty. The gun that such boats carried could be quite heavy; a 32-pounder for instance. As such boats were cheap and quick to build, naval forces favoured swarm tactics: while a single hit from a frigate's broadside would destroy a gunboat, a frigate facing a large squadron of gunboats could suffer serious damage before it could manage to sink them all. For example: during the 1808 Battle of Alvøen of the Gunboat War, five Dano-Norwegian gunboats disabled the British frigate . Gunboats used in the Battle of Valcour Island (1776) on Lake Champlain during the American Revolutionary War were mostly built on the spot, attesting to the speed of their construction.
Spanish admiral Antonio Barceló, experienced in the usage of small vessels in the conflicts against Barbary pirates, unveiled in 1781 a kind of small armored gunboat equipped with a heavy, long-range artillery piece. This originated the Spanish Royal Armada's doctrine of fuerzas sutiles, which emphasized the usage of ships equipped with significant firepower but small enough to be difficult to hit back. His gunboats were first employed during the Great Siege of Gibraltar, obtaining great success in the otherwise failed siege, after which they were adopted by the Royal Armada.
All navies of the sailing era kept a number of gunboats on hand. Gunboats saw extensive use in the Baltic Sea during the late 18th century as they were well-suited for the extensive coastal skerries and archipelagoes of Sweden, Finland and Russia. The rivalry between Sweden and Russia, in particular, led to an intense expansion of gunboat fleets and the development of new gunboat types. The two countries clashed during the Russo-Swedish war of 1788–1790, a conflict that culminated in the massive Battle of Svensksund in 1790, in which over 30,000 men and hundreds of gunboats, galleys and other oared craft took part. The majority of these were vessels developed from the 1770s and onwards by the naval architect Fredrik Henrik af Chapman for the Swedish archipelago fleet. The designs, copied and refined by the rival Danish and Russian navies, spread to the Mediterranean and to the Black Sea.
Two variants occurred most commonly:
a larger "gun sloop" (from the Swedish kanonslup) with two 24-pounder cannon, one in the stern and one in the bow
a smaller "gun yawl" (kanonjolle) with a single 24-pounder cannon
Many of the Baltic navies kept gunboats in service well into the second half of the 19th century. British ships engaged larger Russian gunboats off Turku in southeast Finland in 1854 during the Crimean War. The Russian vessels had the distinction of being the last oared vessels of war in history to fire their guns in anger.
Gunboats played a key role in Napoleon Bonaparte's plan for the invasion of England in 1804. Denmark-Norway used them heavily in the Gunboat War. Between 1803 and 1812 the United States Navy had a policy of basing its navy on coastal gunboats, experimenting with a variety of designs. President Thomas Jefferson (in office: 1801–1809) and his Democratic-Republican Party opposed a strong navy, regarding gunboats as adequate to defend the United States' major harbors. They proved useless against the British blockade during the War of 1812.
Steam era
With the introduction of steam power in the early 19th century, the Royal Navy and other navies built considerable numbers of small vessels propelled by side paddles and later by screws. Initially, these vessels retained full sailing rigs and used steam engines for auxiliary propulsion.
The British Royal Navy deployed two wooden paddle-gunboats in the Lower Great Lakes and St. Lawrence River during the Rebellions of 1837 in Upper and Lower Canada. The United States Navy deployed an iron-hulled paddle gunboat, , to the Great Lakes in 1844.
became the first propeller-driven gunboat in the world. Conradi shipyards in Kiel built the steam-powered gunboat in 1849 for the small navy of Schleswig-Holstein. Initially called "Gunboat No. 1", Von der Tann was the most modern ship in the navy. She participated successfully in the First Schleswig War of 1848–1851.
Britain built a large number of wooden screw-gunboats during the 1850s, some of which participated in the Crimean War (1853–1856), Second Opium War (1856–1860) and Indian Mutiny (1857–1859). The requirement for gunboats in the Crimean War was formulated in 1854 to allow the Royal Navy to bombard shore facilities in the Baltic. The first ships the Royal Navy built that met this requirement were the s. Then in mid-1854 the Royal Navy ordered six s followed later in the year by an order for 20 s. In May 1855 the Royal Navy deployed six Dapper-class gunboats in the Sea of Azov, where they repeatedly raided and destroyed stores around its coast. In June 1855 the Royal Navy reentered the Baltic with a total of 18 gunboats as part of a larger fleet. The gunboats attacked various coastal facilities, operating alongside larger British warships from which they drew supplies such as coal.
Gunboats experienced a revival during the American Civil War (1861–1865). Union and Confederate forces quickly converted existing passenger-carrying boats into armed sidewheel steamers. Later, some purpose-built boats, such as , joined the fray. They frequently mounted 12 or more guns, sometimes of rather large caliber, and usually carried some armor. At the same time, Britain's gunboats from the Crimean War period were starting to wear out, so a new series of classes was ordered. Construction shifted from a purely wooden hull to an iron–teak composite.
In the later 19th century and early 20th century, "gunboat" was the common name for smaller armed vessels. These could be classified, from the smallest to the largest, into river gunboats, river monitors, coastal-defense gunboats (such as ), and full-fledged monitors for coastal bombardments. In the 1870s and 1880s, Britain took to building so-called "flat-iron" (or Rendel) gunboats for coastal defence. When there would be few opportunities to re-coal, vessels carrying a full sailing rig continued in use as gunboats; , a sloop preserved at Chatham Historic Dockyard in the United Kingdom, exemplifies this type of gunboat.
In the United States Navy, these boats had the hull classification symbol "PG", which led to their being referred to as "patrol gunboats". They usually displaced under , were about long, draught and sometimes much less, and mounted several guns of calibers up to . An important characteristic of these was the ability to operate in rivers, enabling them to reach inland targets in a way not otherwise possible before the development of aircraft. In this period the naval powers used gunboats for police actions in colonies or in weaker countries, for example in China (see e.g. Yangtze Patrol). This category of gunboat inspired the term "gunboat diplomacy". With the addition of torpedoes, they became "torpedo gunboats", designated by the hull classification symbol "PTG" (Patrol Torpedo Gunboat).
In Britain, Admiral Fisher's reforms in the 1900s saw the disposal of much of the gunboat fleet. A handful remained in service in various roles at the start of World War I in 1914. The last in active service were two of the second which survived until 1926, carrying out river patrols in west Africa.
In the circumstances of World War I (1914–1918), however, the Royal Navy re-equipped with small , shallow-draught gunboats (12 ships of the ) with sufficient speed to operate in fast-flowing rivers and with relatively heavy armament. During the war and in the post-war period, these were deployed in Romania on the Danube, in Mesopotamia on the Euphrates and Tigris, in northern Russia on the Northern Dvina, and in China on the Yangtze. In China, during anarchic and war conditions, they continued to protect British interests until World War II; other western Powers acted similarly.
More and larger gunboats were built in the late 1930s for the Far East. Some sailed there; others were transported in sections and reassembled at Shanghai.
World War II
United Kingdom
Most British gunboats were based initially in East Asia. When war with Japan broke out, many of these vessels withdrew to the Indian Ocean. Others were given to the Republic of China Navy (such as , which was renamed Ying Hao) and some were captured by the Japanese.
Some were later redeployed to the Mediterranean theatre and supported land operations during the North African campaign, as well as in parts of Southern Europe.
United States
In late 1941, the US Navy's Yangtze Patrol boats based in China were withdrawn to the Philippines or scuttled. Following the US defeat in the Philippines, most of the remaining craft were scuttled. However, survived until being sunk in action during the Battle of Java in 1942.
Soviet Union
During the 1930s, the Soviet Navy began developing small armoured riverboats or "riverine tanks": vessels displacing 26 to 48 tons, on which the turrets of tanks were mounted.
Three classes, numbering 210 vessels, saw service between 1934 and 1945:
Project 1124: their standard armament was initially two turrets from T-28 or T-34 tanks, each mounting a 76.2 mm gun and Degtyaryov tank machine gun (DT), as well as two anti-aircraft machine guns – in some cases the rear turret was replaced with a Katyusha rocket-launcher
: one T-28/T-34 turret with a 76.2 mm gun and DT, as well as four anti-aircraft machine guns
S-40: one T-34 turret with a 76.2 mm gun and DT, as well as four anti-aircraft machine guns
With crews of 10 to 20 personnel, riverine tanks displaced 26 to 48 tons, had armour thick, and were long. They saw significant action in the Baltic and Black Seas between 1941 and 1945.
Vietnam War
US riverine gunboats in the Vietnam War included Patrol Boats River (PBR), constructed of fiberglass; Patrol Craft Fast (PCF), commonly known as Swift Boats, built of aluminum; and Assault Support Patrol Boats (ASPB), built of steel. U.S. Coast Guard s supplemented these US Navy vessels. The ASPBs were commonly referred to as "Alpha" boats and primarily carried out mine-sweeping duties along the waterways, owing to their all-steel construction. The ASPBs were the only US Navy riverine craft specifically designed and built for the Vietnam War. All of these boats were assigned to the US Navy's "Brownwater Navy".
Surviving vessels (incomplete)
(1776) - resides at the National Museum of American History in Washington, D.C.
(1861) - is on display at the Vicksburg National Military Park in Vicksburg, Mississippi.
(1863) - The remains of are currently on display at the National Civil War Naval Museum in Columbus, Georgia.
(1905) - resides in Iquitos, Peru.
(1912) - SS Zhongshan is currently preserved in Wuhan, China.
(1930) - , museum ship as of 1992 located in Asunción, Paraguay.
(1936) - , located in Boca del Río, Veracruz, and is undergoing restoration.
| Technology | Naval warfare | null |
274675 | https://en.wikipedia.org/wiki/Maillard%20reaction | Maillard reaction | The Maillard reaction is a chemical reaction between amino acids and reducing sugars to create melanoidins, the compounds that give browned food its distinctive flavor. Seared steaks, fried dumplings, cookies and other kinds of biscuits, breads, toasted marshmallows, falafel and many other foods undergo this reaction. It is named after French chemist Louis Camille Maillard, who first described it in 1912 while attempting to reproduce biological protein synthesis. The reaction is a form of non-enzymatic browning which typically proceeds rapidly from around . Many recipes call for an oven temperature high enough to ensure that a Maillard reaction occurs. At higher temperatures, caramelization (the browning of sugars, a distinct process) and subsequently pyrolysis (final breakdown leading to burning and the development of acrid flavors) become more pronounced.
The reactive carbonyl group of the sugar reacts with the nucleophilic amino group of the amino acid and forms a complex mixture of poorly characterized molecules responsible for a range of aromas and flavors. This process is accelerated in an alkaline environment (e.g., lye applied to darken pretzels; see lye roll), as the amino groups () are deprotonated, and hence have an increased nucleophilicity. This reaction is the basis for many of the flavoring industry's recipes. At high temperatures, a probable carcinogen called acrylamide can form. This can be discouraged by heating at a lower temperature, adding asparaginase, or injecting carbon dioxide.
In the cooking process, Maillard reactions can produce hundreds of different flavor compounds depending on the chemical constituents in the food, the temperature, the cooking time, and the presence of air. These compounds, in turn, often break down to form yet more flavor compounds. Flavor scientists have used the Maillard reaction over the years to make artificial flavors, the majority of patents being related to the production of meat-like flavors.
History
In 1912, Louis Camille Maillard published a paper describing the reaction between amino acids and sugars at elevated temperatures. In 1953, chemist John E. Hodge with the U.S. Department of Agriculture established a mechanism for the Maillard reaction.
Foods and products
The Maillard reaction is responsible for many colors and flavors in foods, such as the browning of various meats when seared or grilled, the browning and umami taste in fried onions and coffee roasting. It contributes to the darkened crust of baked goods, the golden-brown color of French fries and other crisps, browning of malted barley as found in malt whiskey and beer, and the color and taste of dried and condensed milk, dulce de leche, toffee, black garlic, chocolate, toasted marshmallows, and roasted peanuts.
6-Acetyl-2,3,4,5-tetrahydropyridine is responsible for the biscuit or cracker-like flavor present in baked goods such as bread, popcorn, and tortilla products. The structurally related compound 2-acetyl-1-pyrroline has a similar smell and also occurs naturally without heating. The compound gives varieties of cooked rice and the herb pandan (Pandanus amaryllifolius) their typical smells. Both compounds have odor thresholds below 0.06 nanograms per liter.
The browning reactions that occur when meat is roasted or seared are complex and occur mostly by Maillard browning with contributions from other chemical reactions, including the breakdown of the tetrapyrrole rings of the muscle protein myoglobin. Maillard reactions also occur in dried fruit and when champagne ages in the bottle.
Caramelization is an entirely different process from Maillard browning, though the results of the two processes are sometimes similar to the naked eye (and taste buds). Caramelization may sometimes cause browning in the same foods in which the Maillard reaction occurs, but the two processes are distinct. They are both promoted by heating, but the Maillard reaction involves amino acids, whereas caramelization is the pyrolysis of certain sugars.
In making silage, excess heat causes the Maillard reaction to occur, which reduces the amount of energy and protein available to the animals that feed on it.
Archaeology
In archaeology, the Maillard process occurs when bodies are preserved in peat bogs. The acidic peat environment causes a tanning or browning of skin tones and can turn hair to a red or ginger tone. The chemical mechanism is the same as in the browning of food, but it develops slowly over time due to the acidic action on the bog body. It is typically seen on Iron Age bodies and was described by Painter in 1991 as the interaction of anaerobic, acidic, and cold (typically ) sphagnum acid on the polysaccharides.
The Maillard reaction also contributes to the preservation of paleofeces.
Chemical mechanism
The carbonyl group of the sugar reacts with the amino group of the amino acid, producing N-substituted glycosylamine and water
The unstable glycosylamine undergoes Amadori rearrangement, forming ketosamines
Several ways are known for the ketosamines to react further:
Produce two water molecules and reductones
Diacetyl, pyruvaldehyde, and other short-chain hydrolytic fission products can be formed.
Produce brown nitrogenous polymers and melanoidins
The open-chain Amadori products undergo further dehydration and deamination to produce dicarbonyls, a crucial intermediate.
Dicarbonyls react with amines to produce Strecker aldehydes through Strecker degradation.
Acrylamide, a possible human carcinogen, can be generated as a byproduct of Maillard reaction between reducing sugars and amino acids, especially asparagine, both of which are present in most food products.
Roe deer
The roe deer (Capreolus capreolus), also known as the roe, western roe deer, or European roe, is a species of deer. The male of the species is sometimes referred to as a roebuck. The roe is a small deer, reddish and grey-brown, and well-adapted to cold environments. The species is widespread in Europe, from the Mediterranean to Scandinavia, from Scotland to the Caucasus, and east as far as northern Iran.
Etymology
The English roe is from the Old English or , from Proto-Germanic *raihô, cognate with Old Norse , Old Saxon rēho, Middle Dutch and Dutch , Old High German , , , German . It is perhaps ultimately derived from a PIE root *rei-, meaning "streaked, spotted or striped".
The word is attested on the 5th-century Caistor-by-Norwich astragalusa roe deer talus bone, written in Elder Futhark as , transliterated as raïhan.
In the English language, this deer was originally simply called a 'roe', but over time the word 'roe' has become a qualifier, and it is now usually called 'roe deer'.
The Koiné Greek name , transliterated 'pygargos', mentioned in the Septuagint and the works of various writers such as Hesychius, Herodotus and later Pliny, was originally thought to refer to this species (in many European translations of the Bible), although it is now more often believed to refer to the addax. It is derived from the words () and ().
The taxonomic name Capreolus is derived from capra or caprea, meaning 'billy goat', with the diminutive suffix -olus. The meaning of this word in Latin is not entirely clear: it may have meant 'ibex' or 'chamois'. The roe was also known as or in Latin.
Taxonomy
Linnaeus first described the roe deer in the modern taxonomic system as Cervus capreolus in 1758. The initially monotypic genus Capreolus was first proposed by John Edward Gray in 1821, although he did not provide a proper description for this taxon. Gray was not actually the first to use the name Capreolus; it had been used by other authors before him. Nonetheless, his publication is seen as taxonomically acceptable. His genus was generally ignored until the 20th century, most 19th-century works having continued to follow Linnaeus.
Roe deer populations gradually become somewhat larger as one moves further to the east, peaking in Kazakhstan, then becoming smaller again towards the Pacific Ocean. The Soviet mammalogist Vladimir Sokolov had recognised the eastern populations as a separate species as early as 1985, using electrophoretic chromatography to show differences in the fractional protein content of the body tissues. Fawns, females, and males of the two species also make different noises. Alexander S. Graphodatsky examined the karyotypes to provide further evidence for recognising these Russian and Asian populations as a separate species, now named the eastern or Siberian roe deer (Capreolus pygargus).
This new taxonomic interpretation (circumscription) was first followed in the American book Mammal Species of the World in 1993. Populations of the roe deer from east of the Khopyor River and Don River to Korea are considered to be this species.
Subspecies
The Integrated Taxonomic Information System, following the 2005 Mammal Species of the World, gives the following subspecies:
Capreolus capreolus capreolus (Linnaeus, 1758)
Capreolus capreolus canus Miller, 1910 - Spain
Capreolus capreolus caucasicus Nikolay Yakovlevich Dinnik, 1910 - A large-sized subspecies found in the region to the north of the Caucasus Mountains; although Mammal Species of the World appears to recognise the taxon, this work bases itself on a chapter by Lister et al. in the 1998 book The European roe deer: the biology of success, which only recognises the name as provisional.
Capreolus capreolus italicus Enrico Festa, 1925 - Italy
This is just one (extreme) interpretation among several. Two main specialists did not recognise these taxa and considered the species to be without subspecies in 2001. The European Union's Fauna Europaea recognised two subspecies in 2005: besides the nominate form, it recognises the Spanish population as the endemic Capreolus capreolus garganta Meunier, 1983.
Systematics
Roe deer are most closely related to the water deer, and, counter-intuitively, the three species in this group, called the Capreolini, are most closely related to moose and reindeer.
Although roe deer were once classified as belonging to the Cervinae subfamily, they are now classified as part of the Capreolinae, which includes the deer that developed in the New World.
Hybrids
The populations of both the European roe deer and the Siberian roe deer have increased since around the 1930s. Since the 1960s, the two species have become sympatric where their distributions meet, and there is now a broad 'hybridization zone' running from the right side of the Volga River up to eastern Poland. It is extremely difficult for hunters to know which species they have bagged. In line with Haldane's rule, female hybrids of the two taxa are fertile, while male hybrids are not. Hybrids grow much larger than normal, becoming larger than their mothers by the age of 4–5 months, and a Caesarean section was sometimes needed to deliver the fawns. F1 hybrid males may be sterile, but backcrosses with the females are possible.
Around Moscow, 22% of the animals carry the mtDNA of the European roe deer and 78% that of the Siberian. In the Volgograd region, the European roe deer predominates. In the Stavropol and Dnipropetrovsk regions of Ukraine, most of the deer are Siberian roe deer. In northeastern Poland there is also evidence of introgression with the Siberian roe deer, which was likely an introduced species there. In some cases, such as around Moscow, former introductions of European stock are likely responsible.
Description
The roe deer is a relatively small deer, with a body length of throughout its range, a shoulder height of , and a weight of . Populations from the Urals and northern Kazakhstan are larger on average, growing to in length and at shoulder height, with body weights of up to ; populations become smaller again further east, in the Transbaikal, Amur Oblast, and Primorsky Krai regions. In healthy populations, where population density is restricted by hunting or predators, bucks are slightly larger than does. Under other conditions, males can be similar in size to females, or slightly smaller.
Bucks in good condition develop antlers up to long, with two or three, rarely even four, points. When the male's antlers begin to regrow, they are covered in a thin layer of velvet-like fur, which disappears later on after the hair's blood supply is lost. Males may speed up the process by rubbing their antlers on trees, so that their antlers are hard and stiff for the duels of the mating season. Unlike most cervids, roe deer begin regrowing antlers almost immediately after they are shed. In rare cases, some bucks possess only a single antler branch, the result of a genetic defect.
Distribution
The roe deer is found in most areas of Europe, with the exception of northernmost Scandinavia, Iceland, Ireland, and the islands of the Mediterranean Sea. In the Mediterranean region, it is largely confined to mountainous areas, and is absent or rare at low altitudes. There is an early Neolithic fossil record from Jordan.
Belgium
In Flanders, the roe deer was mostly confined to the hilly regions in the east, but as in neighbouring countries the population has expanded in recent times. One theory is that the expansion of maize cultivation has aided their spread to the west, as maize grows taller than traditional crops and affords more shelter.
Britain
In England and Wales, roe deer have experienced a substantial expansion in their range in the latter half of the 20th century and continuing into the 21st century. This increase in population also appears to be affecting woodland ecosystems. At the start of the 20th century, they were almost extirpated in Southern England, but since then have hugely expanded their range, mostly due to restrictions and decrease in hunting, increases in forests and reductions in arable farming, changes in agriculture (more winter cereal crops), a massive reduction in extensive livestock husbandry, and a general warming climate over the past 200 years. Furthermore, there are no large predators in Britain. In some cases, roe deer have been introduced with human help. In 1884 roe deer were introduced from Württemberg in Germany into the Thetford Forest, and these spread to populate most of Norfolk, Suffolk, and substantial parts of Cambridgeshire. In southern England, they started their expansion in Sussex (possibly from enclosed stock in Petworth Park) and from there soon spread into Surrey, Berkshire, Wiltshire, Hampshire, and Dorset, and for the first half of the 20th century, most roe deer in Southern England were to be found in these counties. By the end of the 20th century, they had repopulated much of southern England and had expanded into Somerset, Devon, Cornwall, Oxfordshire, Gloucestershire, Warwickshire, Lincolnshire and South Yorkshire, and had even spread into Wales from the Ludlow area where an isolated population had appeared. At the same time, the surviving population in Scotland and the Lake District had pushed further south beyond Yorkshire and Lancashire and into Derbyshire and Humberside.
In the 1970s, the species was still completely absent from Wales. Roe deer can now be found in most of rural England except for southeast Kent and parts of Wales; anywhere in the UK mainland suitable for roe deer may have a population. Not being a species that needs large areas of woodland to survive, urban roe deer are now a feature of several cities, notably Glasgow and Bristol, where in particular they favour cemeteries. In Wales, they are least common, but they are reasonably well established in Powys and Monmouthshire.
Iran
Roe deer are found in northern Iran in the Caspian region: they occur in the Hyrcanian woodlands and agricultural lands of the Alborz Mountains (Golestan National Park, Jahan Nama Protected Area).
Ireland
Scottish roe deer were introduced to the Lissadell Estate in County Sligo in Ireland around 1870 by Sir Henry Gore-Booth. The Lissadell roe deer were noted for their occasional abnormal antlers and survived in that general area for about 50 years before they died out. According to the National Biodiversity Data Centre, in 2014 there was a confirmed sighting of roe deer in County Armagh. There have been other, unconfirmed, sightings in County Wicklow.
The Netherlands
In the Netherlands, roe deer were extirpated from the entirety of the country except for two small areas around 1875. As new forests were planted in the country in the 20th century, the population began to expand rapidly. Although it was a protected species in 1950, the population is no longer considered threatened and it has lost legal protection. As of 2016 there are some 110,000 roe deer in the country. The population is primarily kept in check through the efforts of hunters.
Israel
In 1991, a breeding colony of 27 roe deer from France, Hungary, and Italy was brought to the Hai-Bar Carmel Reserve. A small number of this roe deer population has been reintroduced to the Carmel Mountains from the Carmel Hai-Bar Nature Reserve, with the first deer being released in 1996. Between 24 and 29 animals had been released by 2006. Some of the reintroduced animals were hand-reared and could be monitored by their responses to their keepers' calls.
Ecology
Habitat
This species can utilize a large number of habitats, including open agricultural areas and areas above the tree line, but a requisite factor is access to food and cover. It retreats to dense woodland, especially among conifers, or bramble scrub when it must rest, but it is very opportunistic, and a hedgerow may be good enough. Roe deer in the southern Czech Republic live in almost completely open agricultural land. The animal is more likely to be spotted in places with nearby forests to retreat to. A pioneer species commonly associated with biotic communities at an early stage of succession, the roe deer was abundant during the Neolithic period in Europe, when farming humans colonising the continent from the Middle East cleared areas of forest and woodland.
Behaviour
In order to mitigate risk, roe deer remain within refuge habitats (such as forests) during the day. They are likelier to venture into more open habitats at night and during crepuscular periods when there is less ambient activity. It scrapes leaf litter off the ground to make a 'bed'.
When alarmed, it will bark, a sound much like a dog's, and flash out its white rump patch. Rump patches differ between the sexes: they are heart-shaped on females and kidney-shaped on males. Males may also bark or make a low grunting noise. Does (the females) make a high-pitched "pheep" whine to attract males during the rut (breeding season) in July and August. Initially the female goes looking for a mate and commonly lures the buck back into her territory before mating. The roe deer is territorial, and while the territories of a male and a female might overlap, other roe deer of the same sex are excluded unless they are the doe's offspring of that year.
Diet
It feeds mainly on grass, leaves, berries, and young shoots. It particularly likes very young, tender grass with a high moisture content, i.e., grass that has received rain the day before. Roe deer will generally not venture into a field that has or has had livestock in it.
Reproduction
The polygamous roe deer males clash over territory in early summer and mate in early autumn. During courtship, when the males chase the females, they often flatten the underbrush, leaving behind areas of the forest in the shape of a circle or figure eight called 'roe rings'. These tend to be in diameter. In 1956 it was speculated based on some field evidence that they choose where to form rings around plants with ergot mould, but this has not been substantiated further. Males may also use their antlers to shovel around fallen foliage and soil as a way of attracting a mate. Roebucks enter rutting inappetence during the July and August breeding season. Females are monoestrous and after delayed implantation usually give birth the following June, after a 10-month gestation period, typically to two spotted fawns of opposite sexes. The fawns remain hidden in long grass from predators; they are suckled by their mother several times a day for around three months. Young female roe deer can begin to reproduce when they are around six months old. During the mating season, a male roe deer may mount the same doe several times over a duration of several hours.
Population ecology
A roe deer can live up to 20 years, but it usually does not reach such an age. A normal life span in the wild is seven to eight years, sometimes up to ten.
The roe deer population shows irruptive growth. It is extremely fecund and can double its population every year; it shows a delayed response to population density, with females continuing to have similar fecundity at high population densities.
Population structure is modified by available nutrition: where populations are irrupting, there are few animals over six years old. Where populations are stagnant or moribund, fawn mortality is high and a large part of the population is over seven years old. Mortality is highest in the first weeks after birth, due to predation or sometimes farm machinery, and in the first winter, due to starvation or disease, with up to 90% mortality.
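The irruptive growth described above, a population able to double each year until nutrition or habitat limits it, can be sketched numerically. The starting population, growth factor, and habitat ceiling below are illustrative numbers, not field data:

```python
# Illustrative geometric growth with a hard habitat ceiling: a population
# that doubles yearly irrupts quickly, then is capped by available habitat.
def project(pop, years, growth=2.0, ceiling=10_000):
    history = [pop]
    for _ in range(years):
        pop = min(pop * growth, ceiling)
        history.append(pop)
    return history

print(project(100, 8))  # doubles from 100 until it hits the 10,000 ceiling
```

Eight doublings would take 100 animals past 25,000, so the ceiling is reached in only seven years, which is why managers describe such populations as irrupting rather than growing steadily.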
Community ecology
It is a main prey of the Persian leopard (Panthera pardus tulliana) in the Alborz Mountains of Iran.
The nematode Spiculopteragia asymmetrica infects this deer.
Compared to the other large herbivores and omnivores in Iran, it is a poor disperser of plant seeds, despite consuming relatively more of them.
Uses
The roe deer is a game animal of great economic value in Europe, providing large amounts of meat and earning millions of euros in sport hunting. In 1998, some 2,500,000 roe deer were shot per year in Western Europe. In Germany alone, 700,000 were shot a year in the 1990s. This is insufficient to slow down the population growth, and the roe deer continues to increase in number.
It is the main source of venison in Europe. The meat, like most game meat, is darker in colour than that of most farm-raised deer.
Palaeontology
Roe deer are thought to have evolved from a species in the Eurasian genus Procapreolus, with some 10 species occurring from the Late Miocene to the Early Pleistocene, which moved from the east to Central Europe over the millennia, where Procapreolus cusanus (also classified as Capreolus cusanus) occurred. It may not have evolved from C. cusanus, however, because the two extant species split from each other 1.375 and 2.75 Myr ago, and the western species first appeared in Europe 600 thousand years ago.
As of 2008, over 3,000 fossil specimens of this species had been recovered from Europe, which affords a good set of data to elucidate the prehistoric distribution. The distribution of the European species has fluctuated often since entering Europe. During some periods of the last ice age, it was present in central Europe, but during the Last Glacial Maximum it retreated to refugia in the Iberian Peninsula (two refugia here), southern France, Italy (likely two), the Balkans and the Carpathians. When the last Ice Age ended, the species initially abruptly expanded north of the Alps to Germany during the Greenland Interstadial, 12.5–10.8 thousand years ago, but during the cooling of the Younger Dryas, 10.8–10 thousand years ago, it appears to have disappeared again from this region. It reappeared 9.7–9.5 thousand years ago, reaching northern central Europe. The modern population in this area appears to have recolonised it from the Carpathians and/or further east, but not the Balkans or other refugia. This is opposite to the red deer, which recolonised Europe from Iberia. There has been much admixture of these populations where they meet, also possibly due to human intervention in some cases.
It is thought that during the Middle Ages the two species of roe deer were kept apart due to hunting pressure and an abundance of predators; the different species may have met in the period just before that, and yet, during the Ice Age they were also kept apart.
Conservation
Populations are increasing throughout Europe; it is considered a species of 'least concern'.
Culture
In the Hebrew Bible, Deuteronomy 14:5, the , yahmur, derived from 'to be red', is listed as the third species of animal that may be eaten. In most Bibles this word has usually been translated as 'roe deer', and it still means as much in Arabic (, pronounced 'ahmar); the species was still said to be common in the Mount Carmel area in the 19th century. The King James Bible translated the word as 'fallow deer', and other English Bible translations have rendered it as a number of different species. When Modern Hebrew was reconstructed to serve as the language of the future Israel in late Ottoman and British Mandatory Palestine, the King James interpretation was chosen, despite the fallow deer being fallow, not red.
Bambi, the titular character of the book Bambi, A Life in the Woods and its sequel Bambi's Children was originally a roe deer. When the story was adapted to the animated film Bambi by Walt Disney Pictures, the main character was changed to a white-tailed deer.
Albino roe deer were exceedingly rare in history, and they were regarded as national treasures or sacred animals in ancient times in China.
Scientific instrument
A scientific instrument is a device or tool used for scientific purposes, including the study of both natural phenomena and theoretical research.
History
Historically, the definition of a scientific instrument has varied, based on usage, laws, and historical time period. Before the mid-nineteenth century such tools were referred to as "natural philosophical" or "philosophical" apparatus and instruments, and older tools from antiquity to the Middle Ages (such as the astrolabe and pendulum clock) defy a more modern definition of "a tool developed to investigate nature qualitatively or quantitatively." Scientific instruments were made by instrument makers living near a center of learning or research, such as a university or research laboratory. Instrument makers designed, constructed, and refined instruments for specific purposes, but if demand was sufficient, an instrument would go into production as a commercial product.
In a description of the use of the eudiometer by Jan Ingenhousz to show photosynthesis, a biographer observed, "The history of the use and evolution of this instrument helps to show that science is not just a theoretical endeavor but equally an activity grounded on an instrumental basis, which is a cocktail of instruments and techniques wrapped in a social setting within a community of practitioners. The eudiometer has been shown to be one of the elements in this mix that kept a whole community of researchers together, even while they were at odds about the significance and the proper use of the thing."
By World War II, the demand for improved analyses of wartime products such as medicines, fuels, and weaponized agents pushed instrumentation to new heights. Today, changes to instruments used in scientific endeavors — particularly analytical instruments — are occurring rapidly, with interconnections to computers and data management systems becoming increasingly necessary.
Scope
Scientific instruments vary greatly in size, shape, purpose, and complexity. They include relatively simple laboratory equipment like scales, rulers, chronometers, and thermometers. Other simple tools developed in the late 20th or early 21st century are the Foldscope (an optical microscope), the SCALE (KAS Periodic Table), the MasSpec Pen (a pen that detects cancer), the glucose meter, etc. However, some scientific instruments can be quite large and complex, like particle colliders or radio-telescope antennas. Conversely, microscale and nanoscale technologies are advancing to the point where instrument sizes are shifting towards the tiny, including nanoscale surgical instruments, biological nanobots, and bioelectronics.
The digital era
Instruments are increasingly based upon integration with computers to improve and simplify control; enhance and extend instrumental functions, conditions, and parameter adjustments; and streamline data sampling, collection, resolution, analysis (both during and post-process), and storage and retrieval. Advanced instruments can be connected as a local area network (LAN) directly or via middleware and can be further integrated as part of an information management application such as a laboratory information management system (LIMS). Instrument connectivity can be furthered even more using internet of things (IoT) technologies, allowing for example laboratories separated by great distances to connect their instruments to a network that can be monitored from a workstation or mobile device elsewhere.
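As a toy illustration of the kind of data-collection loop such computer integration enables, the sketch below polls a stand-in instrument driver and summarizes its readings for storage. The `Instrument` class and its `poll` method are hypothetical placeholders, not a real driver or LIMS API:

```python
import statistics

class Instrument:
    """Hypothetical stand-in for a networked instrument driver."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def poll(self):
        # A real driver might read a socket or serial port here.
        return next(self._readings, None)  # None once the run is finished

def collect(instrument):
    """Drain an instrument's readings and return a summary for storage."""
    data = []
    while (value := instrument.poll()) is not None:
        data.append(value)
    return {"n": len(data), "mean": statistics.mean(data)}

print(collect(Instrument([1.0, 1.2, 0.8, 1.0])))  # {'n': 4, 'mean': 1.0}
```

In a real deployment the summary dictionary would be written to a database or LIMS record rather than printed, and the polling loop would run continuously against many instruments at once.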
Examples of scientific instruments
List of scientific instruments manufacturers
List of scientific instruments designers
Jones, William
Kipp, Petrus Jacobus
Le Bon, Gustave
Roelofs, Arjen
Schöner, Johannes
Von Reichenbach, Georg Friedrich
History of scientific instruments
Museums
Collection of Historical Scientific Instruments (CHSI)
Boerhaave Museum
Chemical Heritage Foundation
Deutsches Museum
Royal Victoria Gallery for the Encouragement of Practical Science
Whipple Museum of the History of Science
Historiography
Paul Bunge Prize
Types of scientific instruments
Optical instrument
Electronic test equipment
Instant camera
An instant camera is a camera which uses self-developing film to create a chemically developed print shortly after taking the picture. Polaroid Corporation pioneered (and patented) consumer-friendly instant cameras and film, and was followed by various other manufacturers.
The invention of commercially viable instant cameras which were easy to use is generally credited to Edwin Land, the inventor of the model 95 Land Camera, widely considered the first commercial instant camera, in 1948, a year after he unveiled instant film in New York City.
In February 2008, Polaroid filed for Chapter 11 bankruptcy protection for the second time and announced it would discontinue production of its instant films and cameras, shut down three manufacturing facilities, and lay off 450 workers. Sales of analog film by all makers dropped by at least 25% per year in the first decade of the 21st century. In 2009, Polaroid was acquired by PLR IP Holdings LLC, which uses the Polaroid brand to market various products often relating to instant cameras. Among the products it markets are a Polaroid branded Fuji Instax instant camera, and various digital cameras and portable printers.
Film continues to be made by Polaroid B.V. (previously the Impossible Project) for several models of Polaroid camera, and for the 8×10 inch format. Other brands, such as Lomography, Leica, and Fujifilm, have designed new models and features in their own takes on instant cameras.
Cameras and film
Many different models of Polaroid and non-Polaroid instant cameras were introduced in the mid to late 20th century. They can be categorized by the film type.
Roll film
The first roll film camera was the Polaroid Model 95, followed by subsequent models containing various new features. Roll film came in two rolls (positive/developing agent and negative) which were loaded into the camera and was eventually offered in three sizes (40, 30, and 20 series).
Pack film
The first 100 series pack film model was the Model 100, followed by various models in the 100–400 series and a few ad hoc cameras such as the Countdown series. This next generation of Polaroid cameras used 100 series "pack film", where the photographer pulled the film out of the camera, then peeled apart the positive from the negative at the end of the developing process. Pack film was initially offered in a rectangular format (100 series), then in a square format (80 series).
Integral film
Models which used SX-70 film were introduced in a folding version, with later versions being solid plastic bodied. Third-generation Polaroids, like the once-popular SX-70, used a square-format integral film, in which all components of the film (negative, developer, fixer, etc.) were contained. The SX-70 instant camera used the print technology that Edwin Land had most desired. It introduced more efficient print technology that developed more quickly than previous film types, which cut out some of the user's responsibility and made the camera easier to use. Each exposure developed automatically once the shot was taken. SX-70 (or Time Zero) film had a strong following with artists who used it for image manipulation. 600 series cameras such as the Pronto, Sun 600, and One600 used 600 type film, which was four times faster than SX-70 film. 600 series cameras were almost all plastic bodied, except for the SLR 680 and 690 models, which resembled SX-70 type cameras, and most came with an electronic flash.
Spectra, Captiva, and i-Zone film
This was followed by various other plastic cameras based on Spectra, Captiva, and i-Zone film. Polaroid Spectra cameras used Polaroid Spectra film, which went back to a rectangular format. Captiva, Joycam, and Popshots (single-use) cameras used a smaller 500 series film in rectangular format. i-Zone cameras used a very small film format which was offered as a sticker. Finally, Mio cameras used Polaroid Mio film, which was Fuji Instax Mini film branded as Polaroid and which was still available in 2015 as Fuji Instax Mini. This size produces a billfold-sized photo. Polaroid still markets a mini-format camera built by Fuji, branded as the Polaroid 300, and the film is available under both the Polaroid name and as Fuji Instax Mini, which are interchangeable.
Polaroid instant movie cameras
Polaroid also invented and manufactured an instant movie camera system called Polavision. The kit included a camera, film, and a movie viewer. When the movie was shot, it would be taken out of the camera and then inserted into the viewer for development, then viewed after development. This format was close to Super 8 mm film. Polavision film was different from normal film in that it was an additive film, mixing the primary colors (red, green, blue) to form the color image. The biggest disadvantage of the Polavision system was the low film speed (ASA 40), which resulted in having to use very bright lights when taking the movie, as well as requiring a special player to view the developed movie. It also lacked audio capability. Because of this, and combined with the advent of VHS video recorders, Polavision had a short history.
Types of non-Polaroid instant cameras
The earliest instant cameras were conceived before Edwin Land's invention of the instant camera. These were, however, more portable wet darkrooms than "instant" cameras, and were difficult to use.
After Land's instant camera invention was brought to market in 1948, a few different instant cameras were developed, some using Polaroid-compatible film such as cameras by Keystone, Konica, and Minolta. Others were incompatible with Polaroid cameras and film, the most notable of these being made by Kodak, such as the EK series and Kodamatic cameras.
Later, Fujifilm introduced instant cameras and film in selected markets. After taking over an old Polaroid factory in 2008, the Netherlands-based Impossible Project began producing instant film for Polaroid cameras. This helped generate new interest in instant photography.
Kodak (EK and Kodamatic)
Kodak's EK and Kodamatic series cameras were introduced in 1976, and accepted a Kodak-developed integral instant film, similar to but incompatible with Polaroid's SX-70 film. The film was chemically similar to Polaroid's except that the negative was exposed from the rear and the dye/developers diffused to the front of the photograph. This eliminated the need for a mirror to reverse the image before it struck the negative.
Even so, Polaroid brought a patent-infringement lawsuit against Kodak, and eventually Kodak was forced to stop manufacture of both the camera and film. Kodak also paid a settlement to customers left without a way to use their now-defunct cameras; one settlement offered owners of Kodak instant cameras a credit towards a new Kodak camera. Many Kodak instant cameras still exist and can be found on auction sites. Kodak also lost the contract to manufacture Polaroid's negatives, production of which Polaroid subsequently took in-house. More recently, photographers have experimented with using Instax Mini and Square film in the Kodak EK4, with some success, though only one picture at a time can be loaded, in a darkroom.
Fujifilm
In more recent years, Fujifilm introduced a line of instant cameras and film, called Fotorama, in Japanese and other Asian markets. Starting in the early 1980s, the F series of cameras included the F-10, F-50S and F-62AF. In the mid-1980s it introduced the 800 series, with models such as the MX800, 850E, and the collapsible Mr Handy. The ACE cameras were introduced in the mid-1990s, using film identical to the 800 film but in a different cartridge. The integral films are based on the Kodak line of instant camera films: the FI-10/PI-800/ACE series films are somewhat compatible with Kodak instant cameras, with minor modifications to the cartridge to make it fit. The F series film was discontinued in 1994, but more recent Instax film can be similarly modified to fit the older cartridges.
Fujifilm was one of the first manufacturers to add different shooting modes to instant cameras. "Kid mode", for example, shoots photos at a faster shutter speed for capturing fast-moving objects or people. Fujifilm later introduced the Instax Mini 8, advertised as the "cutest camera" and targeted at young women and girls. Shortly after, it introduced the Instax Mini 90 and Instax Mini 70, targeting middle-aged men with a sleek, classic design.
In the late 1990s, Fujifilm introduced a new series of cameras using a new film called Instax, available in markets outside the US. Instax became available in a smaller size with the introduction of the Instax Mini/Cheki line. Polaroid's Mio, available in the US, used the same film as the Fujifilm Instax Mini series, rebranded as Mio film. The same was true of the Polaroid 300, whose film is still being sold. None of Fujifilm's products were originally sold officially in the United States. With Polaroid's 2008 announcement that it was ceasing film production, Instax and peel-apart films became available in more channels. Fuji ended production of peel-apart films in 2016, FP-100C being the last such product.
Polaroid Originals
As noted above, Polaroid Originals (previously the Impossible Project) produces instant film for Polaroid cameras. In spring 2016, as Impossible Project they released their own instant camera, the Impossible I-1 that uses the company's 600-type and I-Type films. In September 2017, now renamed Polaroid Originals, it announced the Polaroid OneStep 2 that also uses its 600-type and I-Type films.
MiNT Camera
In 2015, MiNT Camera released the InstantFlex TL70, a vintage twin-lens reflex-looking instant camera that used Fuji Instax Mini film.
In 2016, it launched the SLR670-S. It has the look of a Polaroid SX-70, but with an ISO 640 system and manual shutter options. These are built from vintage cameras with new electronics.
In 2019, it introduced the InstantKon RF70, a rangefinder camera that uses Fuji instax wide film. Two years later in 2021, it introduced another rangefinder camera, the InstantKon SF70, that uses Fuji instax square film.
Lomography
In 2014, Lomography funded the creation of a new instant camera, the Lomo'Instant, by raising over US $1,000,000 on Kickstarter. Like Fujifilm's Instax Mini camera, the Lomo'Instant uses Instax Mini film.
The following year, the company released the Lomo'Instant Wide, a variation on the original Lomo'Instant which shot larger photos using Fujifilm's Instax Wide film. These images are more similar in size to original Polaroid film.
In the summer of 2016, Lomography announced the development of a new instant camera, the Lomo'Instant Automat, which it describes as "the most advanced automatic instant camera".
In August 2017, Lomography released the Lomo'Instant Square Glass. It takes 86 mm × 72 mm photographs and is the "world's first dual-format, glass lensed instant camera".
Applications
Instant cameras have found many uses throughout their history. The original purpose of instant cameras was motivated by Jennifer Land's question to her father (Edwin Land): "Why can't I see them now?" Many people have enjoyed seeing their photos shortly after taking them, allowing them to recompose or retake the photo if they didn't get it right. But instant cameras were found to be useful for other purposes such as ID cards, passport photos, ultrasound photos, and other uses which required an instant photo. They were also used by police officers and fire investigators because of their ability to create an unalterable instant photo. Medium and large format professional photographers have also used the higher end instant cameras to preview lighting before taking the more expensive medium and/or large format photo. Instant film also has been used in ways that are similar to folk art, including the transfer of the images/emulsion and image manipulation.
Script supervisors in film production used instant cameras (until superseded by digital cameras) as standard tools to aid visual continuity, photographing actors, sets or props so that a particular set or character's appearance could be instantly referred to when it needed to be reset and shot again, or recalled later due to reshoots or the out-of-sequence shooting schedule of a film or television production.
The fashion industry relied upon Polaroid prints as a record of models or potential models.
Instant photography was also useful in conducting a study about the perception of vehicle accidents. The instant photos were used to document accidents to show medical professionals the condition of a vehicle after an accident. Having this visual in turn changed how the physician viewed the accident their patient was in.
With the advent of digital photography, much of the instant camera's consumer appeal has been transferred to digital cameras. Passport photo cameras have gone to digital, leaving instant cameras to a niche market.
Instant cameras and society
The introduction of instant camera technology was important to society because it allowed for more creativity among camera users. Instead of having to use a darkroom to develop photographs, users could explore and document their world and experiences as they occurred. Instant camera photography became an activity in its own right: Polaroid portrayed instant cameras as combining the acts of taking a photo and viewing one into a single pastime.
Because instant cameras were easy to use and required neither a darkroom nor sending the film out for processing, couples could take personal, private photos without concern about unwanted third parties viewing them.
Taking an instant photograph
Edwin Land's original idea behind instant photography was to create a photographic system that was seamless and easy for anyone to use. The first roll-film instant cameras required the photographer to use a light meter to take a reading of the light level and then set the exposure on the lens. The lens was then focused, the subject framed, and the picture taken. The photographer flipped a switch and pulled a large tab at the back of the camera to draw the negative over the positive, through rollers that spread the developing agent. After the picture developed inside the camera for the required time, the photographer opened a small door in the camera back and peeled the positive from the negative. To prevent fading, the black-and-white positive had to be coated with a fixing agent, a potentially messy procedure which led to the development of coaterless instant pack film.
Pack film cameras were mostly equipped with automatic exposure but still had to be focused, and a flash bulb or cube unit was needed with colour film indoors. Developing the film required the photographer to pull two tabs, the second of which pulled the positive/negative "sandwich" out of the camera, where it developed externally. If the temperature was below 15 °C (60 °F), the sandwich was placed between two aluminum plates and kept in the user's pocket or under their arm to stay warm while developing. After the required development time (15 seconds to 2 minutes), the positive (with the latent image) was peeled apart from the negative.
Integral film cameras, such as the SX-70, 600 series, Spectra, and Captiva cameras, went a long way in accomplishing Edwin Land's goal of creating a seamless process for producing instant photos. The photographer simply pointed the camera at the subject, framed it and took the photo. The camera and film did the rest, including adjusting the exposure settings, taking care of focusing (Sonar autofocus models only), utilising a flash if necessary (600 series and up), and ejecting the film, which developed without intervention from the photographer. The design of the film frame for the SX-70 cameras made them especially convenient to use: with all of the ingredients needed to develop the photograph contained in the thicker portion of the frame, the user only had to take the photo to initiate the reaction that produced the print.
Creative techniques
Due to the way that instant film develops, several techniques exist to modify or distort the final image, and these have been utilized by many artists. The three main techniques are SX-70 manipulation, emulsion lift, and image transfer. SX-70 manipulation is used with SX-70 Time Zero film and allows the photographer to draw on or distort an image by applying pressure to it while it is developing. With an emulsion lift, it is possible to separate the image from the medium it developed on and transfer it to a different one. Image transfers are used with peel-apart film, like pack film, to develop the instant image onto a different material by peeling the picture apart early and pressing the negative onto the desired material. Polaroid encouraged the use of these techniques by producing videos about them.
The artist Lucas Samaras, for example, was among the first to modify images taken with the Polaroid SX-70 through the "Polaroid transfer". In this way he developed the series "autoentrevistas" ("self-interviews"), a set of self-portraits in which he takes the place of a model in different circumstances.
John Reuter, the director of the Polaroid 20×24 camera studio, for years experimented with snapshot transfers.
Andy Warhol also made use of instant cameras, taking snapshots to use as sketches for his popular lithographs. Over time, their peculiar vision has turned these Polaroids into famous and artistically interesting photographs in their own right, and they are also part of pop art and pop culture.
David Hockney also used Polaroids in his work to create photo collages. Hockney was skeptical about photography until instant photography was suggested to him by a museum curator. In the 1980s he began experimenting with composite photo collages, including portraits, still lifes and the iconic swimming pools he is known for. He admitted that these works are very Cubist, often referencing Synthetic Cubism with their distorted perspective. He later moved on from Polaroids to 35mm film.
In popular culture
Polaroid pictures are used extensively in the movie Memento.
The popular 2003 song "Hey Ya!" by Outkast features the line "Shake it like a Polaroid picture", referring to the myth that shaking an instant photo makes it dry faster. In reality, shaking has no positive effect and can even damage the photo. As a result of the song, the Polaroid Corporation released a statement discouraging the practice.
The name and app icon of the social photo sharing platform Instagram, founded in 2010, originated from the instant camera, with the 2010 icon directly resembling a Polaroid Land Camera 1000.
Instant cameras featured prominently in the 2015 video game Life Is Strange in which the protagonist, Max Caulfield, frequently uses one.
In 2014, American singer-songwriter Taylor Swift used Polaroids as the aesthetic for her fifth studio album 1989.
| Technology | Photography | null |
274951 | https://en.wikipedia.org/wiki/Mesoproterozoic | Mesoproterozoic | The Mesoproterozoic Era is a geologic era that occurred from 1600 to 1000 million years ago. The Mesoproterozoic was the first era of Earth's history for which a fairly definitive geological record survives. Continents existed during the preceding era (the Paleoproterozoic), but little is known about them. The continental masses of the Mesoproterozoic were more or less the same ones that exist today, although their arrangement on the Earth's surface was different.
Major events and characteristics
The major events of this era are the breakup of the Columbia supercontinent, the formation of the Rodinia supercontinent, and the evolution of sexual reproduction.
This era is marked by the further development of continental plates and plate tectonics. The supercontinent of Columbia broke up between 1500 and 1350 million years ago, and the fragments reassembled into the supercontinent of Rodinia around 1100 to 900 million years ago, on the time boundary between the Mesoproterozoic and the subsequent Neoproterozoic. These tectonic events were accompanied by numerous orogenies (episodes of mountain building) that included the Kibaran orogeny in Africa; the Late Ruker orogeny in Antarctica; the Gothian and Sveconorwegian orogenies in Europe; and the Picuris and Grenville orogenies in North America.
The era saw the development of sexual reproduction, which greatly increased the complexity of life to come and signified the start of development of true multicellular organisms. Though the biota of the era was once thought to be exclusively microbial, recent finds have shown multicellular life did exist during the Mesoproterozoic. This era was also the high point of the stromatolites before they declined in the Neoproterozoic.
Subdivisions
The subdivisions of the Mesoproterozoic are arbitrary divisions based on time. They are not geostratigraphic or biostratigraphic units. The decision to base the Precambrian time scale on radiometric dating reflects the sparse nature of the fossil record, and Precambrian subdivisions of geologic time roughly reflect major tectonic cycles. It is possible that future revisions to the time scale will reflect more "natural" boundaries based on correlative geologic events.
The Mesoproterozoic is presently divided into the Calymmian (1600 to 1400 Mya), the Ectasian (1400 to 1200 Mya), and the Stenian (1200 to 1000 Mya). The Calymmian and Ectasian were characterized by the stabilization and expansion of cratonic covers, and the Stenian by the formation of orogenic belts.
The time period from 1780 Ma to 850 Ma, an unofficial period based on stratigraphy rather than chronometry, named the Rodinian, is described in the geological timescale review 2012 edited by Gradstein et al., but this has not yet been officially adopted by the International Union of Geological Sciences (IUGS).
| Physical sciences | Geological timescale | Earth science |
275006 | https://en.wikipedia.org/wiki/Autocannon | Autocannon | An autocannon, automatic cannon or machine cannon is a fully automatic gun that is capable of rapid-firing large-caliber (20 mm or more) armour-piercing, explosive or incendiary shells, as opposed to the smaller-caliber kinetic projectiles (bullets) fired by a machine gun. Autocannons have a longer effective range and greater terminal performance than machine guns, due to the use of larger/heavier munitions (most often in the range of 20 to 60 mm, though bigger calibers also exist), but are usually smaller than tank guns, howitzers, field guns, or other artillery. When used on its own, the word "autocannon" typically indicates a non-rotary weapon with a single barrel. When multiple rotating barrels are involved, such a weapon is referred to as a "rotary autocannon" or "rotary cannon"; if it instead uses a single barrel with a rotating cylinder with multiple chambers, it is known as a "revolver autocannon" or "revolver cannon". Both of these systems are commonly used as aircraft guns and anti-aircraft guns.
Autocannons are heavy weapons that are unsuitable for use by infantry. Due to the heavy weight and recoil, they are typically installed on fixed mounts, wheeled carriages, ground combat vehicles, aircraft, or watercraft, and are almost always crew-served, or even remote-operated with automatic target recognition/acquisition (e.g. sentry guns and naval CIWS). As such, ammunition is typically fed from a belt system to reduce reloading pauses or for a faster rate of fire, but magazines remain an option. Common types of ammunition, among a wide variety, include HEIAP, HEDP and more specialised armour-piercing (AP) munitions, mainly composite rigid (APCR) and discarding sabot (APDS) rounds.
Capable of generating extremely rapid firepower, autocannons overheat quickly if used for sustained fire, and are limited by the amount of ammunition that can be carried by the weapons systems mounting them. Both the US 25 mm M242 Bushmaster and the British 30 mm RARDEN have relatively slow rates of fire so as not to deplete ammunition too quickly. The Oerlikon KBA 25 mm has a relatively high rate of fire of 650 rounds per minute but can be electronically set to 175–200 rounds per minute. The rate of fire of a modern autocannon ranges from 90 rounds per minute, in the case of the British RARDEN, to 2,500 rounds per minute with the GIAT 30. Rotary systems with multiple barrels can achieve over 10,000 rounds per minute (the Russian GSh-6-23, for example). Such extremely high rates of fire are effectively employed by aircraft in aerial dogfights and close air support on ground targets via strafing attacks, where the target dwell time is short and weapons are typically operated in brief bursts.
History
Early developments
The first modern autocannon was the British QF 1-pounder, also known as the "pom-pom". This was essentially an enlarged version of the Maxim gun, which was the first successful fully automatic machine gun, requiring no outside stimulus in its firing cycle other than holding the trigger. The pom-pom fired gunpowder-filled explosive shells at a rate of over 200 rounds a minute: much faster than conventional artillery while possessing a much longer range and more firepower than the infantry rifle.
In 1913, Reinhold Becker and his Stahlwerke Becker firm designed the 20 mm Becker cannon, addressing the German Empire's perceived need for heavy-calibre aircraft armament. The Imperial Government's Spandau Arsenal assisted them in perfecting the ordnance. Although only about 500+ examples of the original Becker design were made during World War I, the design's patent was acquired by the Swiss Oerlikon Contraves firm in 1924, with the Third Reich's Ikaria-Werke firm of Berlin using Oerlikon design patents in creating the MG FF wingmount cannon ordnance. The Imperial Japanese Navy's Type 99 cannon, adopted and produced in 1939, was also based on the Becker/Oerlikon design's principles.
During the First World War, autocannons were mostly used in the trenches as anti-aircraft guns. The British used pom-pom guns as part of their air defences to counter the German Zeppelin airships that made regular bombing raids on London. However, they were of little value, as their shells neither ignited the hydrogen of the Zeppelins nor caused sufficient loss of gas (and hence lift) to bring them down. Attempts to use the guns in aircraft failed, as the weight severely limited both speed and altitude, thus making successful interception impossible. The more effective QF 2 pounder naval gun would be developed during the war to serve as an anti-aircraft and close range defensive weapon for naval vessels.
Second World War
Autocannons would serve to a much greater extent and effect during the Second World War. The German Panzer II light tank, which was one of the most numerous in German service during the invasion of Poland and the campaign in France, used a 20 mm autocannon as its main armament. Although ineffective against tank armour even during the early years of the war, the cannon was effective against light-skinned vehicles as well as infantry and was also used by armoured cars. Larger examples, such as the 40 mm Vickers S, were mounted in ground attack aircraft to serve as an anti-tank weapon, a role to which they were suited as tank armour is often lightest on top.
The Polish 20 mm 38 Fk autocannon was expensive to produce, but it was an exception: unlike the Oerlikon, it was effective against all the tanks fielded in 1939, largely because it was built as an upgrade on the Oerlikon, Hispano-Suiza, and Madsen designs. It even proved capable of knocking out early Panzer IIIs and IVs, albeit with great difficulty. Only 55 had been produced by the time of the Polish Defensive War. It was in the air war, however, that these weapons played their most important part in the conflict.
During the First World War, rifle-calibre machine guns became the standard weapons of military aircraft. In the Second, several factors brought about their replacement by autocannon. During the inter-war years, aircraft underwent extensive evolution and the all-metal monoplane, pioneered as far back as the end of 1915, almost entirely replaced wood and fabric biplanes. At the same time as they began to be made from stronger materials, the machines also increased in speed, streamlining, power and size, and it began to be apparent that correspondingly more powerful weapons would be needed to counter them. Conversely, they were becoming much better able to carry exactly such larger and more powerful guns; the technology of which was in the meantime also developing, providing significantly improved rates of fire and reliability.
When the Second World War did break out, it was swiftly realised that the power of contemporary aircraft allowed armour plate to be fitted to protect the pilot and other vulnerable areas. This innovation proved highly effective against rifle-calibre machine gun rounds, which tended to ricochet off harmlessly. Similarly the introduction of self sealing fuel tanks provided reliable protection against these small projectiles. These new defenses, synergistically with the general robustness of new aircraft designs and of course their sheer speed, which made simply shooting them accurately in the first place far more difficult, entailed that it took a lot of such bullets and a fair amount of luck to cause them critical damage; but potentially a single cannon shell with a high-explosive payload could instantly sever essential structural elements, penetrate armour or open up a fuel tank beyond the capacity of self-sealing compounds to counter, even from fairly long range. (Instead of explosives, such shells could carry incendiaries, also highly effective at destroying planes, or a combination of explosives and incendiaries.) Thus by the end of the war, the fighter aircraft of almost all the belligerents mounted cannon of some sort, the only exception being the United States which in most cases favoured the Browning AN/M2 "light-barrel" .50 calibre heavy machine gun. A fighter equipped with these intermediate weapons in sufficient numbers was adequately armed to fulfill most of the Americans' combat needs aloft, as they tended to confront enemy fighters and other small planes far more often than large bombers; and as, in the earlier phases of the war, the Japanese aircraft they dealt with were not only unusually lightly built but went without either armour plate or self-sealing tanks in order to reduce their weight. Nevertheless, the U.S. also adopted planes fitted with autocannon, such as the Lockheed P-38 Lightning, despite experiencing technical difficulties with developing and manufacturing these large-calibre automatic guns.
Weapons such as the Oerlikon 20 mm, the Bofors 40 mm and various German Rheinmetall autocannons would see widespread use by both sides during the Second World War; not only in an anti-aircraft role, but as weapons for use against ground targets as well. Heavier anti-aircraft cannon had difficulty tracking fast-moving aircraft and were unable to accurately judge altitude or distance, while machine guns possessed insufficient range and firepower to bring down aircraft consistently. Continued ineffectiveness against aircraft, despite the large numbers installed during the Second World War, led, in the West, to the removal of almost all shipboard anti-aircraft weapons in the early post-war period. This was only reversed with the introduction of computer-controlled systems.
The German Luftwaffe deployed small numbers of the experimental Bordkanone series of heavy aircraft cannon in 37, 50 and 75 mm calibres, mounted in gun pods under the fuselage or wings. The 37 mm BK 3,7 cannon, based on the German Army's 3.7 cm FlaK 43 anti-aircraft autocannon was mounted in pairs in underwing gun pods on a small number of specialized Stuka Panzerknacker (tank buster) aircraft. The BK 5 cm cannon, based on the 5 cm KwK 39 cannon of the Panzer III, was installed in Ju 88P bomber destroyers, which also used other Bordkanone models, and in the Messerschmitt 410 Hornisse (Hornet) bomber destroyer. 300 examples of the BK 5 cannon were built, more than all other versions. The PaK 40 semi-automatic 7.5 cm calibre anti-tank gun was the basis for the BK 7,5 in the Junkers Ju 88 P-1 heavy fighter and Henschel Hs 129 B-3 twin engined ground attack aircraft.
The German Mauser MK 213 was developed at the end of the Second World War and is regarded as the archetypal modern revolver cannon. With multiple chambers and a single barrel, autocannons using the revolver principle can combine a very high rate of fire and high acceleration to its maximum firing rate with low weight, at cost of a reduced sustained rate of fire compared to rotary cannon. They are, therefore, used mainly in aircraft for AA purposes, in which a target is visible for a short period of time.
Modern era
The development of guided missiles was thought to render cannons unnecessary, and a full generation of western fighter aircraft was built without them. In contrast, all Eastern Bloc aircraft kept their guns. During the Vietnam War, however, the United States Air Force realized that cannons were useful for firing warning shots and for attacking targets that did not warrant the expenditure of a (much more expensive) missile, and, more importantly, as an additional weapon if the aircraft had expended all its missiles or enemy aircraft were inside of the missiles' minimum target acquisition range in a high-G close range engagement. This was particularly important with the lower reliability of early air-to-air missile technology, such as that employed during the Vietnam War. As a consequence, fighters at the time had cannons added back in external "gun pods", and virtually all fighter aircraft retain autocannons in integral internal mounts to this day.
After the Second World War, autocannons continued to serve as a versatile weapon in land, sea, and air applications. Examples of modern autocannons include the 25 mm Oerlikon KBA mounted on the IFV Freccia, the M242 Bushmaster mounted on the M2/M3 Bradley, updated versions of the Bofors 40 mm gun, and the Mauser BK-27. The 20 mm M61A1 is an example of an electrically powered rotary autocannon. Another role that has come into association with autocannons are that of close-in weapon systems on naval vessels, which are used to destroy anti-ship missiles and low flying aircraft.
| Technology | Firearms | null |
275223 | https://en.wikipedia.org/wiki/Serval | Serval | The serval (Leptailurus serval) is a wild cat native to Africa. It is widespread in sub-Saharan countries, where it inhabits grasslands, wetlands, moorlands and bamboo thickets. Across its range, it occurs in protected areas, and hunting it is either prohibited or regulated in range countries.
It is the sole member of the genus Leptailurus. Three subspecies are recognised. The serval is a slender, medium-sized cat that stands tall at the shoulder and has a weight range of approximately . It is characterised by a small head, large ears, a golden-yellow to buff coat spotted and striped with black, and a short, black-tipped tail. The serval has the longest legs of any cat relative to its body size.
The serval is a solitary carnivore and active both by day and at night. It preys on rodents, particularly vlei rats, small birds, frogs, insects, and reptiles, using its sense of hearing to locate prey. It leaps over above the ground to land on the prey on its forefeet, and finally kills it with a bite on the neck or the head. Both sexes establish highly overlapping home ranges of , and mark them with feces and saliva. Mating takes place at different times of the year in different parts of their range, but typically once or twice a year in an area. After a gestational period of two to three months, a litter of one to four is born. The kittens are weaned at the age of one month and begin hunting on their own at six months of age. They leave their mother at the age of around 12 months.
Etymology
The name "serval" is derived from (lobo-) cerval, i.e. Portuguese for lynx, used by Georges-Louis Leclerc, Comte de Buffon in 1765 for a spotted cat that was kept at the time in the Royal Menagerie in Versailles; lobo-cerval is derived from Latin lupus cervarius, literally and respectively "wolf" and "of or pertaining to deer".
The name Leptailurus derives from the Greek leptós meaning "fine, delicate", and aílouros meaning "cat".
Taxonomy
Felis serval was first described by Johann Christian Daniel von Schreber in 1776. In the 19th and 20th centuries, the following serval zoological specimens were described:
Felis constantina proposed by Georg Forster in 1780 was a specimen from the vicinity of Constantine, Algeria.
Felis servalina proposed by William Ogilby in 1839 was based on one serval skin from Sierra Leone with freckle-sized spots.
Felis brachyura proposed by Johann Andreas Wagner in 1841 was also a serval skin from Sierra Leone.
Felis (Serval) togoensis proposed by Paul Matschie in 1893 were two skins and three skulls from Togo.
Felis servalina pantasticta and F. s. liposticta proposed by Reginald Innes Pocock in 1907 were based on one serval from Entebbe in Uganda with a yellowish fur, and one serval skin from Mombasa in Kenya with dusky spots on its belly.
Felis capensis phillipsi proposed by Glover Morrill Allen in 1914 was a skin and a skeleton of an adult male serval from El Garef at the Blue Nile in Sudan.
The generic name Leptailurus was proposed by Nikolai Severtzov in 1858. The serval is the sole member of this genus.
In 1944, Pocock recognised three serval races in North Africa.
Three subspecies are recognised as valid since 2017:
L. s. serval, the nominate subspecies, in Southern Africa
L. s. constantina in Central and West Africa
L. s. lipostictus in East Africa
Phylogeny
The phylogenetic relationships of the serval have remained in dispute; in 1997, palaeontologists M. C. McKenna and S. K. Bell classified Leptailurus as a subgenus of Felis, while others like O. R. P. Bininda-Edmonds (of the Technical University of Munich) have grouped it with Felis, Lynx and Caracal. Studies in the 2000s and the 2010s show that the serval, along with the caracal and the African golden cat, forms one of the eight lineages of Felidae. According to a 2006 genetic study, the Caracal lineage came into existence 8.5 million years ago, and the ancestor of this lineage arrived in Africa 8.5–5.6 mya.
Hybrid
In April 1986, the first savannah cat, a hybrid between a male serval and a female domestic cat, was born; it was larger than a typical domestic kitten and resembled its father in its coat pattern. It appeared to have inherited a few domestic cat traits, such as tameness, from its mother. This cat breed may have a dog-like habit of following its owner about, is adept at jumping and leaping, and can be a good swimmer. Over the years it has gained popularity as a pet.
Characteristics
The serval is a slender, medium-sized cat; it stands at the shoulder and weighs , but females tend to be lighter. The head-and-body length is typically between . Males tend to be sturdier than females. Prominent characteristics include the small head, large ears, spotted and striped coat, long legs and a black-tipped tail that is around long. The serval has the longest legs of any cat relative to its body size, largely due to the greatly elongated metatarsal bones in the feet. The toes are elongated as well, and unusually mobile.
The coat is basically golden-yellow to buff and extensively marked with black spots and stripes. The spots show great variation in size. Facial features include the whitish chin, spots and streaks on the cheeks and the forehead, brownish or greenish eyes, and white whiskers on the snout and near the ears. The ears are black on the back with a white horizontal band in the middle; three to four black stripes run from the back of the head onto the shoulders and then break into rows of spots. The white underbelly has dense and fluffy basal fur, and the soft guard hairs (the layer of fur protecting the basal fur) are long. Guard hairs are up to long on the neck, back and flanks, and are merely long on the face. The serval has a good sense of smell, hearing and vision.
The serval is similar to the sympatric caracal, but has a narrower spoor, a rounder skull, and lacks its prominent ear tufts. The closely set ears can rotate up to 180 degrees independently of each other and help in locating prey efficiently.
Both leucistic and melanistic servals have been observed in captivity. In addition, the melanistic variant has been sighted in the wild, with most melanistic servals having been observed in Kenya.
Distribution and habitat
In North Africa, the serval is known only from Morocco and has been reintroduced in Tunisia, but is feared to be extinct in Algeria. It inhabits semi-arid areas and cork oak forests close to the Mediterranean Sea, but avoids rainforests and arid areas. It occurs in the Sahel, and is widespread in Southern Africa. It inhabits grasslands, moorlands, and bamboo thickets at high altitudes up to on Mount Kilimanjaro. It prefers areas close to water bodies, such as wetlands and savannas, which provide cover such as reeds and tall grasses. In the East Sudanian Savanna, it was recorded in the transboundary Dinder–Alatash protected area complex during surveys between 2015 and 2018.
In Zambia's Luambe National Park, the population density was recorded as in 2011.
In South Africa, the serval was recorded in Free State, eastern Northern Cape, and southern North West.
In Namibia, it is present in Khaudum and Mudumu National Parks.
Behaviour and ecology
The serval is active in the day as well as at night; activity might peak in early morning, around twilight, and at midnight. Servals might be active for a longer time on cool or rainy days. During the hot midday, they rest or groom themselves in the shade of bushes and grasses. Servals remain cautious of their vicinity, though they may be less alert when no large carnivores or prey animals are around. Servals walk as much as every night. Servals will often use special trails to reach certain hunting areas. The serval is a solitary animal, and there is little social interaction among servals except in the mating season, when pairs of opposite sexes may stay together. The only long-lasting bond appears to be that of the mother and her cubs, which leave her only when they are a year old.
Both males and females establish home ranges, and are most active only in certain regions ('core areas') within them. The area of these ranges can vary from ; prey density, availability of cover and human interference could be significant factors in determining their size. Home ranges might overlap extensively, but occupants show minimal interaction. Aggressive encounters are rare, as servals appear to mutually avoid one another rather than fight and defend their ranges. On occasions where two adult servals meet in conflict over territory, a ritualistic display may ensue, in which one will place a paw on the other's chest while observing their rival closely; this interaction rarely escalates into a fight.
Agonistic behavior involves vertical movement of the head (contrary to the horizontal movement observed in other cats), raising the hair and the tail, displaying the teeth and the white band on the ears, and yowling. Individuals mark their ranges and preferred paths by spraying urine on nearby vegetation, dropping scats along the way, and rubbing their mouths on grasses or the ground while releasing saliva. Servals tend to be sedentary, shifting only a few kilometres away even if they leave their range.
The serval is vulnerable to hyenas and African wild dogs. It will seek cover to escape their view, and, if the predator is very close, immediately flee in long leaps, changing its direction frequently and with the tail raised. The serval is an efficient, though not frequent, climber; an individual was observed to have climbed a tree to a height of more than to escape dogs. Like many cats, the serval is able to purr; it also has a high-pitched chirp, and can hiss, cackle, growl, grunt, and meow.
Hunting and diet
The serval is a carnivore that preys on rodents, particularly vlei rats, shrews, small birds, hares, frogs, insects, and reptiles, and also feeds on grass that can facilitate digestion or act as an emetic. Up to 90% of the preyed animals weigh less than ; occasionally it also hunts larger prey such as duikers, hares, flamingoes, spoonbills, waterfowl and young antelopes. The percentage of rodents in the diet has been estimated at 80–97%. Apart from vlei rats, other rodents recorded frequently in the diet include the African grass rat, African pygmy mouse and multimammate mice.
The serval locates prey by its strong sense of hearing. It remains motionless for up to 15 minutes; when prey is within range, it jumps with all four feet up to in the air and attacks with its front paws. To kill small prey, it slowly stalks it, then pounces on it with the forefeet directed toward the chest, and finally lands on it with its forelegs outstretched. The prey, receiving a blow from one or both of the serval's forepaws, is incapacitated, and the serval bites it on the head or the neck and immediately swallows it. Snakes are dealt more blows and even bites, and may be consumed even as they are moving. Larger prey, such as larger birds, are killed by a sprint followed by a leap to catch them as they are trying to flee, and are eaten slowly. Servals have been observed caching large kills to be consumed later by concealing them in dead leaves and grasses. Servals typically get rid of the internal organs of rodents while eating, and pluck feathers from birds before consuming them. During a leap, a serval can reach more than above the ground and cover a horizontal distance of up to . Servals appear to be efficient hunters; a study in Ngorongoro showed that servals were successful in half of their hunting attempts, regardless of the time of hunting, and a mother serval was found to have a success rate of 62%. The number of kills in a 24-hour period averaged 15 to 16. Scavenging has been observed, but very rarely.
Reproduction
Both sexes become sexually mature when they are one to two years old. Oestrus in females lasts one to four days; it typically occurs once or twice a year, though it can occur three or four times a year if the mother loses her litters. Observations of captive servals suggest that when a female enters oestrus, the rate of urine-marking increases both in her and in the males in her vicinity. Zoologist Jonathan Kingdon described the behavior of a female serval in oestrus in his 1997 book East African Mammals. He noted that she would roam restlessly, spray urine frequently while holding her vibrating tail vertically, rub her head near the places she had marked, salivate continuously, give out sharp and short miaows that can be heard for quite a distance, and rub her mouth and cheeks against the face of an approaching male. The time when mating takes place varies geographically; births peak in winter in Botswana, and toward the end of the dry season in the Ngorongoro Crater. A trend generally observed across the range is that births precede the breeding season of murid rodents.
Gestation lasts for two to three months, following which a litter of one to four kittens is born. Births take place in secluded areas, for example in dense vegetation or in burrows abandoned by aardvarks and porcupines. Blind at birth, newborns weigh nearly and have soft, woolly hair (greyer than in adults) and unclear markings. The eyes open after nine to thirteen days. Weaning begins a month after birth; the mother brings small kills to her kittens and calls out to them as she approaches the "den". A mother with young kittens rests notably less and has to spend almost twice as much time and energy hunting as other servals do. If disturbed, the mother shifts her kittens one by one to a more secure place. Kittens eventually start accompanying their mother on hunts. At around six months, they acquire their permanent canines and begin to hunt by themselves; they leave their mother at about 12 months of age. They may reach sexual maturity from 12 to 25 months of age. Life expectancy is about 10 years in the wild and up to 20 years in captivity.
Conservation
The degradation of wetlands and grasslands is a major threat to the survival of the serval. Trade of serval skins, though on the decline, still occurs in countries such as Benin and Senegal. In West Africa, the serval has significance in traditional medicine. Pastoralists often kill servals to protect their livestock, though servals generally do not prey on livestock.
The serval is listed as least concern on the IUCN Red List, and is included in CITES Appendix II. It occurs in several protected areas across its range. Hunting of servals is prohibited in Algeria, Botswana, Congo, Kenya, Liberia, Morocco, Mozambique, Nigeria, Rwanda, Tunisia, and South Africa's Cape Province; hunting regulations apply in Angola, Burkina Faso, Central African Republic, the Democratic Republic of the Congo, Ghana, Malawi, Senegal, Sierra Leone, Somalia, Tanzania, Togo, and Zambia.
In culture
The association of servals with human beings dates to the time of Ancient Egypt. Servals are depicted as gifts or traded objects from Nubia in Egyptian art.
Servals are occasionally kept as pets, although their wild nature means that ownership of servals is regulated in some countries. Servals can also be crossed with domestic cats to produce the savannah cat breed.
| Biology and health sciences | Felines | Animals |
275334 | https://en.wikipedia.org/wiki/Hermit%20crab | Hermit crab | Hermit crabs are anomuran decapod crustaceans of the superfamily Paguroidea that have adapted to occupy empty scavenged mollusc shells to protect their fragile exoskeletons. There are over 800 species of hermit crab, most of which possess an asymmetric abdomen concealed by a snug-fitting shell. Hermit crabs' soft (non-calcified) abdominal exoskeleton means they must occupy shelter produced by other organisms or risk being defenseless.
The strong association between hermit crabs and their shelters has significantly influenced their biology. Almost 800 species carry mobile shelters (most often calcified snail shells); this protective mobility contributes to the diversity and multitude of these crustaceans, which are found in almost all marine environments. In most species, development involves metamorphosis from symmetric, free-swimming larvae to morphologically asymmetric, benthic-dwelling, shell-seeking crabs. Such physiological and behavioral extremes facilitate the transition to a sheltered lifestyle, revealing the extensive evolutionary adaptations underlying the superfamily's success.
Classification
The hermit crabs of Paguroidea are more closely related to squat lobsters and porcelain crabs than they are to true crabs (Brachyura). Together with the squat lobsters and porcelain crabs, they all belong to the infraorder Anomura, the sister taxon to Brachyura.
However, the relationship of king crabs to the rest of Paguroidea has been a highly contentious topic. Many studies based on physical characteristics, genetic information, and combined data support the longstanding hypothesis that the king crabs in the family Lithodidae are derived hermit crabs descended from pagurids and should be classified as a family within Paguroidea. The molecular data have disproven an alternate view, based on morphological arguments, that the Lithodidae nest with the Hapalogastridae in a separate superfamily, Lithodoidea. As such, in 2023, the family Lithodidae was placed back into Paguroidea after having been moved out of it in 2007.
Nine families are formally recognized in the superfamily Paguroidea, containing around 1200 species in total in 135 genera.
Calcinidae Fraaije, Van Bakel & Jagt, 2017 – seven genera
Coenobitidae Dana, 1851 – two genera: terrestrial hermit crabs and the coconut crab
Diogenidae Ortmann, 1892 – 20 genera of "left-handed hermit crabs"
Lithodidae Samouelle, 1819 – 15 genera of "king crabs"
Paguridae Latreille, 1802 – 76 genera of "true hermit crabs"
Parapaguridae Smith, 1882 – 10 genera of "anemone hermit crabs"
Parapylochelidae Fraaije et al., 2012 – two genera
Pylochelidae Bate, 1888 – nine genera of "symmetrical hermit crabs"
Pylojacquesidae McLaughlin & Lemaitre, 2001 – two genera
Phylogeny
The placement of Paguroidea within Anomura can be shown in the cladogram below, which also shows the king crabs of Lithodidae as sister taxon to the hermit crabs of Paguridae:
Fossil record
The fossil record of in situ hermit crabs using gastropod shells stretches back to the Late Cretaceous. Before that time, at least some hermit crabs used ammonite shells instead, as shown by a specimen of Palaeopagurus vandenengeli from the Speeton Clay Formation, Yorkshire, UK, from the Lower Cretaceous, as well as a specimen of a diogenid hermit crab from the Upper Jurassic of Russia. The earliest record of the superfamily extends back to the earliest part of the Jurassic, with the oldest known species being Schobertella hoelderi from the late Hettangian of Germany.
Aquatic and terrestrial hermit crabs
Hermit crabs can be informally divided into two groups: aquatic hermit crabs and terrestrial hermit crabs.
The land hermit crabs belong to the family Coenobitidae. They spend most of their life on land in tropical areas, though they require access to water to keep their gills damp or wet to survive and to reproduce.
Description
Hermit crab species range in size and shape, from species only a few millimeters long to Coenobita brevimanus (Indos Crab), which can approach the size of a coconut and live 12–70 years. The shell-less hermit crab Birgus latro (coconut crab) is the world's largest terrestrial invertebrate.
Most species have long, spirally curved abdomens, which are soft, unlike the hard, calcified abdomens seen in related crustaceans. The abdomen is protected from predators by a salvaged empty seashell carried by the hermit crab, into which its whole body can retract. Most frequently, hermit crabs use the shells of sea snails (although the shells of bivalves and scaphopods and even hollow pieces of wood and stone are used by some species). The tip of the hermit crab's abdomen is adapted to clasp strongly onto the columella of the snail shell.
Development and reproduction
Hermit crab young develop in stages, with the first two (the nauplius and protozoea) occurring inside the egg. Most hermit crab larvae hatch at the third stage, the zoea. In this larval stage, the crab has several long spines, a long, narrow abdomen, and large fringed antennae. Several zoeal moults are followed by the final larval stage, the megalopa.
The sexual behavior exhibited by hermit crabs varies from species to species, but a broad description is as follows: if the female possesses any larvae from a previous mating, she moults and releases them. Female hermit crabs are ready to mate shortly before moulting. In certain species the male grasps the pre-moult female, sometimes for hours to days. While the female moults, the male may jerk or shake her towards himself before reproduction.
The female will then put her claws in her mouth, signaling to the male that she is ready to mate. Then they both move their bodies mostly out of their shells, and mate. Both crabs then go back inside their shells, and they may mate again. In some species the male performs post-copulatory behavior until the female has the eggs on her legs (pleopods).
Hermit crabs molt as they develop and grow. In doing so they shed an exoskeleton that resembles a limp crab. The molting process is long and can take up to 60 days to complete. There are four stages to molting: intermolt, proecdysis, ecdysis, and postecdysis. Intermolt is the time between molts, during which a hermit crab stores energy. Proecdysis is the premolt stage, in which the old exoskeleton starts to separate and the new one forms. Ecdysis is the main phase of the molt, in which the crab crawls out of the old exoskeleton and is left with a new, soft one. Lastly, postecdysis (post-molt) is when the new exoskeleton hardens and the hermit crab eats the old exoskeleton.
Some larger hermit crab species have been observed burying the shed exoskeleton and leaving it.
Behavior
Hermit crabs are omnivorous scavengers, and mostly nocturnal.
Shells and shell remodeling
As hermit crabs grow, they require larger shells. Since suitable intact gastropod shells are sometimes a limited resource, competition often occurs between hermit crabs for shells. The availability of empty shells at any given place depends on the relative abundance of gastropods and hermit crabs, matched for size. An equally important issue is the population of organisms that prey upon gastropods and leave the shells intact. Hermit crabs kept together may fight or kill a competitor to gain access to the shell they favour. However, if the crabs vary significantly in size, fights over empty shells are rare. Hermit crabs with undersized shells cannot grow as fast as those with well-fitting shells, and are more likely to be eaten if they cannot retract completely into the shell.
Shells used by hermit crabs have usually been remodeled by previous hermit crab owners. This involves a hermit crab hollowing out the shell, making it lighter. Only small hermit crabs are able to live without remodeled shells; most large hermit crabs transferred to an unremodeled shell die. Even for those able to survive, hollowing out a shell takes precious energy, making such shells undesirable. Hermit crabs achieve this remodeling by both chemically and physically carving out the interiors of their shells. These shells can last for generations, explaining why some hermit crabs are able to live in areas where snails have become locally extinct.
There are cases when seashells are not available and hermit crabs will use alternatives such as tin cans, custom-made shells, or any other types of debris, which often proves fatal to the hermit crabs (as they can climb into, but not out of, slippery plastic debris). This can even create a chain reaction of fatality, because a dead hermit crab will release a signal to tell others that a shell is available, luring more hermit crabs to their deaths. More specifically, they are attracted to the scent of dead hermit crab flesh.
For some larger marine species, supporting one or more sea anemones on the shell can scare away predators. The sea anemone also benefits, because it is in a prime position to consume fragments of the hermit crab's meals. Other very close symbiotic relationships are known from encrusting bryozoans and hermit crabs forming bryoliths.
In February 2024, Polish researchers reported that 10 of 16 terrestrial hermit crab species were observed using artificial shells, including discarded plastic waste, broken glass bottles and light bulbs, in lieu of natural shells.
Shell fighting
Shell fighting is a behavior observed in all hermit crabs. It is a process in which the attacker hermit crab attempts to steal the shell of the victim, using a fairly intricate process. It usually only occurs if there is no empty shell suitable for the growing hermit crab. These fights are usually between the same species, though they can also occur between two separate species.
If the defending crab does not retreat to the inside of its shell, an aggressive interaction will usually take place, until the defending crab retreats, or the attacker flees. After the defender has retreated, the attacker will usually turn the shell over multiple times, holding it with its legs. It then places its chelipeds into the shell's opening.
Then the crabs start the "positioning" behavior, which consists of the attacker moving side to side over the opening of the defender's shell. This movement usually forms a figure 8. The attacker then begins the aptly named "rapping" behavior: it holds its legs and cephalothorax stationary while it moves its shell down onto the defender's shell. This is done quite rapidly, and is usually enough to produce an audible sound. Little to no contact seems to happen directly between the two crabs.
After a number of "raps", the defender may come out of its shell completely, usually positioning itself on one of the shells. The attacker then inspects the now-free shell and rapidly changes shells. As the attacker tries its new shell, it usually holds onto its old shell, as it may decide to return to it. The defeated crab then runs to the empty shell. If the defeated crab does not stay close to the shells, it is usually eaten.
Several hermit crab species, both terrestrial and marine, have been observed forming a vacancy chain to exchange shells. When an individual crab finds a new empty shell, or steals one from another, it will leave its own shell and inspect the vacant shell for size. If the shell is found to be too large, the crab goes back to its own shell and then waits by the vacant shell for up to 8 hours. As new crabs arrive they also inspect the shell and, if it is too big, wait with the others, forming a group of up to 20 individuals, holding onto each other in a line from the largest to the smallest crab. As soon as a crab that is the right size for the vacant shell arrives and claims it—leaving its old shell vacant—all the crabs in the queue swiftly exchange shells in sequence, each one moving up to the next size. If the original shell was taken from another hermit crab, the victim is usually left without a shell, and gets eaten. Hermit crabs often "gang up" on one of their species with what they perceive to be a better shell, and pry its shell away from it before competing for it until one takes it over.
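The vacancy-chain exchange described above is effectively a size-ordered reassignment process. The toy simulation below illustrates the idea; the sizes, the 10%-oversize "fit" rule, and the crab names are all made-up assumptions for illustration, not values drawn from the biology:

```python
# Toy model of a hermit crab vacancy chain (illustrative only).
# Assumption: a shell "fits" a crab if it is at most 10% larger than the crab.

def fits(crab_size, shell_size):
    return crab_size <= shell_size <= crab_size * 1.1

def vacancy_chain(vacant_shell, waiting_crabs):
    """waiting_crabs: list of (name, crab_size, current_shell_size) tuples.
    Crabs are processed largest to smallest, as in the queue described above.
    Returns the sequence of (name, old_shell, new_shell) exchanges."""
    exchanges = []
    for name, crab_size, own_shell in sorted(waiting_crabs, key=lambda c: -c[1]):
        if fits(crab_size, vacant_shell):
            exchanges.append((name, own_shell, vacant_shell))
            # The claiming crab's old shell becomes the next vacancy in the chain.
            vacant_shell = own_shell
    return exchanges

# A shell of size 10.0 becomes vacant; three queued crabs each move up one size.
chain = vacancy_chain(10.0, [("A", 9.5, 8.6), ("B", 8.0, 7.4), ("C", 7.0, 6.3)])
for name, old, new in chain:
    print(f"{name}: {old} -> {new}")
```

Each claimed shell vacates the claimer's old shell, so a single vacancy propagates down the whole queue, which is why every crab in the line can benefit from one empty shell.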
Aggressive behaviors
Aggressive behaviors among hermit crabs are quite similar to one another, with some variation between species. They usually consist of moving or positioning the legs and the chelipeds, also known as the claws or pincers. Usually these displays are enough to avoid confrontation. Sometimes two opposing crabs will perform multiple actions with no apparent pattern. These confrontations usually last a few seconds, though between especially stubborn crabs some may last a few minutes.
They can also raise a leg, which is sometimes referred to as an "ambulatory raise". This can happen with multiple legs, such as the first two walking legs, or both the first and second pairs; these are referred to as a "double ambulatory raise" and a "quadruple ambulatory raise", respectively. The exact form of this movement varies between species. In some species there is another distinct movement, in which the leg is moved away from the body and upwards while moving forwards, and the same movement continues as the limb is brought down. This movement is sometimes called an "ambulatory poke".
They also use their chelipeds as a warning display, usually in two distinct variations. The first consists of the crab lifting its whole body (shell included) and spreading its legs, then moving its chelipeds forward until the dactylus (the top part of the claw) is perpendicular to the ground. This movement is usually called a "cheliped presentation", and may be more distinct in some species, such as those in the genus Pagurus. The second variation, called the "cheliped extension", is usually a purely visual movement, though it may sometimes be used to strike another crab: the chelipeds move forward and upwards until the limb is parallel with the ground, and the motion may be used to push another crab out of the way. If a larger crab pushes a smaller one, the smaller one may be moved multiple centimeters.
The crabs of the family Paguridae have another distinct type of movement. Individuals may crawl upon another crab's shell. If the size is right, the crab climbed upon may move rapidly up and down or sideways, usually causing the other crab to fall off.
Grouping behavior
Some species such as Clibanarius tricolor, Calcinus tibicen and Pagurus miamensis are semi-gregarious, showing unique behaviors in groups. While these three species all show gregarious behavior, C. tricolor forms the densest and largest groups. The crabs of Clibanarius tricolor congregate during the day and usually stay with the same respective group, day after day. The crabs would start moving in their groups at around 4:00 p.m., and by 5:00 p.m. they had left their congregation. The congregations usually move in one general direction, and may pass close to other crabs. This behavior seems to be lost under controlled conditions, however.
Associations with other animals
The shells of hermit crabs host multiple "associates" whose exact roles have not been well described. These associates are usually categorized into two groups: those that live in the interior of the shell and those that live on the exterior. Interior associates include nereid worms, which have a commensal relationship: the worms, along with crabs of the family Porcellanidae, help the hermit crab keep its shell clean. It is not rare to see both the worms and the crabs in the same shell.
There are also associations with amphipods, such as the relationship between the hermit crab species Pagurus hemphilli and the amphipod genus Liljeborgia. The coloration of this amphipod matches that of the hermit crab and of the crustose rhodophycean algae which commonly grow on their shells. Specimens of P. hemphilli tolerated the presence of their guests, while other hermit crab species attempted to eat them.
Exterior associates include epifauna such as barnacles and Crepidula, which may be a hindrance to the crabs, as they may reduce the stability of the shell or simply add weight to it. Some species of hermit crabs carry live colonies of Hydractinia, while others reject them. Some species simply keep sea anemones on their shells, while others actively detach and re-attach them. Most hermit crabs attempt to carry as many anemones as possible, and some steal the anemone another hermit crab is carrying. There is a mutually beneficial relationship between the two, as the anemones help defend against predators.
Hermit crabs as pets
Several marine species of hermit crabs are common in the marine aquarium trade. They are commonly kept in reef fish tanks.
Two of the most common terrestrial hermit crabs kept as pets are the Caribbean hermit crab (Coenobita clypeatus) and the Ecuadorian hermit crab (Coenobita compressus). Despite their reputation as 'throwaway' and 'low-maintenance' pets, hermit crabs can actually live for 15 or more years with proper care; the oldest known pet hermit crab lived for 45 years. Hermit crabs need a proper tank set-up that provides for all of their needs in order to thrive. Hermit crabs should not be handled regularly: they are prey animals and typically panic while being handled, which can cause injury to the crab or the owner. Hermit crabs will try to hide when scared. They will also pinch, which can break skin. A drop or a fall onto a hard surface can be lethal to a hermit crab.
Hermit crabs need a consistent temperature of 75–85 °F (24–29 °C) and a consistent relative humidity of 75–85%. Low humidity will result in a hermit crab slowly suffocating: hermit crabs breathe using modified gills, which need to be moist in order to function. Hermit crabs should be kept in glass tanks of an appropriate size in order to maintain the needed humidity and temperature. At least 10 gallons of tank space should be provided per hermit crab. Overcrowding a tank can result in aggression and cannibalism between crabs.
Hermit crabs also require both saltwater and freshwater sources deep enough for the crab to fully submerge. All water should be treated to remove chemicals, and saltwater should be prepared using a marine-grade salt mix. Further, like many pets, hermit crabs need enrichment and opportunities for hiding and climbing; huts, wood, and artificial plants can fill this need. In the wild, hermit crabs may walk several miles a night while foraging or migrating. Hermit crabs are nocturnal and are most active during the night.
| Biology and health sciences | Crabs and hermit crabs | Animals |
275509 | https://en.wikipedia.org/wiki/Palliative%20care | Palliative care | Palliative care (derived from the Latin root palliare, meaning "to cloak") is an interdisciplinary medical caregiving approach aimed at optimising quality of life and mitigating or reducing suffering among people with serious, complex, and often terminal illnesses. Within the published literature, many definitions of palliative care exist.
The World Health Organization (WHO) describes palliative care as: "an approach that improves the quality of life of patients and their families facing the problems associated with life-threatening illness, through the prevention and relief of suffering by means of early identification and impeccable assessment and treatment of pain and other problems, physical, psychosocial, and spiritual". In the past, palliative care involved a disease-specific approach, but today the WHO takes a broader patient-centered approach that suggests that the principles of palliative care should be applied as early as possible to any chronic and ultimately fatal illness. This shift was important because if a disease-oriented approach is followed, the needs and preferences of the patient are not fully met and aspects of care, such as pain, quality of life, and social support, as well as spiritual and emotional needs, fail to be addressed. Rather, a patient-centered model prioritises relief of suffering and tailors care to increase the quality of life for terminally ill patients.
Palliative care is appropriate for individuals with serious illnesses across the age spectrum and can be provided as the main goal of care or in tandem with curative treatment. It is ideally provided by interdisciplinary teams which can include physicians, nurses, occupational and physical therapists, psychologists, social workers, chaplains, and dietitians. Palliative care can be provided in a variety of contexts, including hospitals, outpatient clinics, skilled nursing, and home settings. Although an important part of end-of-life care, palliative care is not limited to individuals near the end of life.
Palliative care focuses primarily on improving the quality of life for those with chronic illnesses, and this focus is supported by evidence. Although palliative care is most commonly provided at the end of life, it can be helpful for a person at any stage of a serious illness and at any age.
Scope
Palliative care can improve healthcare quality in three areas: physical and emotional relief, strengthening of patient-physician communication and decision-making, and coordinated continuity of care across various healthcare settings, including hospital, home, and hospice. The overall goal of palliative care is to improve the quality of life of individuals with serious illness (any life-threatening condition which either reduces an individual's daily function or quality of life or increases caregiver burden) through pain and symptom management, identification and support of caregiver needs, and care coordination. Palliative care can be delivered at any stage of illness alongside other treatments with curative or life-prolonging intent and is not restricted to people receiving end-of-life care. Historically, palliative care services were focused on individuals with incurable cancer, but this framework is now applied to other diseases, including severe heart failure, chronic obstructive pulmonary disease, multiple sclerosis and other neurodegenerative conditions. Forty million people each year are expected to need palliative care, approximately 78% of them living in low- and middle-income countries. However, only 14% of this population receives such care, most of them in high-income countries, leaving a large unmet need.
Palliative care can be initiated in a variety of care settings, including emergency rooms, hospitals, hospice facilities, or at home. For some severe disease processes, medical specialty professional organizations recommend initiating palliative care at the time of diagnosis or when disease-directed options would not improve a patient's prognosis. For example, the American Society of Clinical Oncology recommends that patients with advanced cancer should be "referred to interdisciplinary palliative care teams that provide inpatient and outpatient care early in the course of disease, alongside active treatment of their cancer" within eight weeks of diagnosis.
Appropriately engaging palliative care providers as a part of patient care improves overall symptom control, quality of life, and family satisfaction of care while reducing overall healthcare costs.
Palliative care vis-à-vis hospice care
The distinction between palliative care and hospice differs depending on global context. In the United States, the term hospice refers specifically to a benefit provided by the federal government since 1982. Hospice care services and palliative care programs share similar goals of mitigating unpleasant symptoms, controlling pain, optimizing comfort, and addressing psychological distress. Hospice care focuses on comfort and psychological support, and curative therapies are not pursued. Under the Medicare Hospice Benefit, individuals certified by two physicians to have less than six months to live (assuming a typical course) have access to specialized hospice services through various insurance programs (Medicare, Medicaid, and most health maintenance organizations and private insurers). An individual's hospice benefits are not revoked if that individual lives beyond a six-month period. In the United States, in order to be eligible for hospice, patients usually forgo treatments aimed at cure, unless they are minors. This is to avoid what is called concurrent care, in which two different clinicians bill for the same service. In 2016 a movement began to extend the reach of concurrent care to adults who were eligible for hospice but not yet emotionally prepared to forgo curative treatments.
Outside the United States, the term hospice usually refers to a building or institution that specializes in palliative care. These institutions provide care to patients with end-of-life and palliative care needs. In the common vernacular outside the United States, hospice care and palliative care are synonymous and are not contingent on different avenues of funding. Over 40% of all dying patients in the United States currently receive hospice care. Most hospice care occurs in a home environment during the last weeks or months of a patient's life. Of those patients, 86.6% believe their care is "excellent". The philosophy of hospice is that death is a part of life, and that each person's death is personal and unique. Caregivers are encouraged to discuss death with patients and to encourage spiritual exploration, if the patients so wish.
History
The field of palliative care grew out of the hospice movement, which is commonly associated with Dame Cicely Saunders, who founded St. Christopher's Hospice for the terminally ill in 1967, and Elisabeth Kübler-Ross, who published her seminal work "On Death and Dying" in 1969. In 1974, Balfour Mount coined the term "palliative care", and Paul Henteleff became the director of a new "terminal care" unit at Saint Boniface Hospital in Winnipeg. In 1987, Declan Walsh established a palliative medicine service at the Cleveland Clinic Cancer Center in Ohio which later expanded to become the training site of the first palliative care clinical and research fellowship as well as the first acute pain and palliative care inpatient unit in the United States. The program evolved into The Harry R. Horvitz Center for Palliative Medicine, which was designated as an international demonstration project by the World Health Organization and accredited by the European Society for Medical Oncology as an Integrated Center of Oncology and Palliative Care.
Advances in palliative care have since inspired a dramatic increase in hospital-based palliative care programs. Notable research outcomes forwarding the implementation of palliative care programs include:
Evidence that hospital palliative care consult teams are associated with significant hospital and overall health system cost savings.
Evidence that palliative care services increase the likelihood of dying at home and reduce symptom burden without impacting on caregiver grief among the vast majority of Americans who prefer to die at home.
Evidence that providing palliative care in tandem with standard oncologic care among patients with advanced cancer is associated with lower rates of depression, increased quality of life, and increased length of survival compared to those receiving standard oncologic care and may even prolong survival.
Over 90% of US hospitals with more than 300 beds have palliative care teams, yet only 17% of rural hospitals with 50 or more beds have palliative care teams. Hospice and palliative medicine has been a board certified sub-specialty of medicine in the United States since 2006. Additionally, in 2011, The Joint Commission began an Advanced Certification Program for Palliative Care that recognizes hospital inpatient programs demonstrating outstanding care and enhancement of the quality of life for people with serious illness.
Practice of palliative care
Medications used in palliative care can be common medications but used for a different indication based on established practices with varying degrees of evidence. Examples include the use of antipsychotic medications, anticonvulsants, and morphine. Routes of administration may differ from acute or chronic care, as many people in palliative care lose the ability to swallow. A common alternative route of administration is subcutaneous, as it is less traumatic and less difficult to maintain than intravenous medications. Other routes of administration include sublingual, intramuscular and transdermal. Medications are often managed at home by family or nursing support.
Palliative care interventions in care homes may contribute to lower discomfort for residents with dementia and may improve family members' views of the quality of care. However, higher-quality research is needed to support the benefits of these interventions for older people dying in these facilities.
High-certainty evidence supports the finding that implementation of home-based end-of-life care programs may increase the number of adults who will die at home and slightly improve patient satisfaction at a one-month follow-up. The impact of home-based end-of-life care on caregivers, healthcare staff, and health service costs is uncertain.
Pain, distress, and anxiety
For many patients, end of life care can cause emotional and psychological distress, contributing to their total suffering. An interdisciplinary palliative care team consisting of a mental health professional, social worker, counselor, as well as spiritual support such as a chaplain, can play important roles in helping people and their families cope using various methods such as counseling, visualization, cognitive methods, drug therapy and relaxation therapy to address their needs. Palliative pets can play a role in this last category.
Total pain
In the 1960s, hospice pioneer Cicely Saunders first introduced the term "total pain" to describe the heterogeneous nature of pain. This is the idea that a patient's experience of total pain has distinctive roots in the physical, psychological, social and spiritual realms, but that these are all closely linked to one another. Identifying the cause of pain can help guide care for some patients and impact their quality of life overall.
Physical pain
Physical pain can be managed using pain medications as long as they do not put the patient at further risk for developing or increasing medical diagnoses such as heart problems or difficulty breathing. Patients at the end of life can exhibit many physical symptoms that can cause extreme pain such as dyspnea (or difficulty breathing), coughing, xerostomia (dry mouth), nausea and vomiting, constipation, fever, delirium, and excessive oral and pharyngeal secretions ("Death Rattle").
Radiation is commonly used with palliative intent to alleviate pain in patients with cancer. As the effect of radiation may take days to weeks to occur, patients dying within a short time of their treatment are unlikely to receive benefit.
Psychosocial pain and anxiety
Once the immediate physical pain has been addressed, it is important for the caregiver to be compassionate and empathetic, listening to and being present for the patient. Being able to identify the distressing factors in a patient's life other than the pain can help them be more comfortable. When patients have their needs met, they are more likely to be open to the idea of hospice or treatments outside comfort care. A psychosocial assessment allows the medical team to help facilitate a healthy patient-family understanding of adjustment, coping and support. This communication between the medical team and the patient and family can also help facilitate discussions on maintaining and enhancing relationships, finding meaning in the dying process, and achieving a sense of control while confronting and preparing for death. For adults with anxiety, medical evidence in the form of high-quality randomized trials is insufficient to determine the most effective treatment approach for reducing symptoms of anxiety.
Spirituality
Among spiritual persons, spirituality is typically considered a fundamental component of palliative care. Hospice facilities where palliative care is administered usually have chaplains available.
According to the Clinical Practice Guidelines for Quality Palliative Care, spirituality is a "dynamic and intrinsic aspect of humanity" and has been associated with "an improved quality of life for those with chronic and serious illness", especially for patients who are living with incurable and advanced illnesses of a chronic nature. Spiritual beliefs and practices can influence perceptions of pain and distress, as well as quality of life among advanced cancer patients. Spiritual needs are often described in literature as including loving/being loved, forgiveness, and deciphering the meaning of life.
Most spiritual interventions are subjective and complex, and many have not been well evaluated for their effectiveness; however, tools can be used to measure and implement effective spiritual care.
Nausea and vomiting
Nausea and vomiting are common in people who have advanced terminal illness and can cause distress. Several antiemetic pharmacologic options are suggested to help alleviate these symptoms. For people who do not respond to first-line medications, levomepromazine may be used; however, there have been insufficient clinical trials to assess the effectiveness of this medication. Haloperidol and droperidol are other medications that are sometimes prescribed to help alleviate nausea and vomiting; however, further research is also required to understand how effective these medications may be.
Hydration and nutrition
Many terminally ill people cannot consume adequate food or drink. Providing medically assisted food or drink to prolong life and improve its quality is common; however, there have been few high-quality studies to determine best practices and the effectiveness of these approaches.
Symptom assessment
One instrument used in palliative care is the Edmonton Symptom Assessment Scale (ESAS), which consists of eight visual analog scales (VAS) ranging from 0–10, indicating the levels of pain, activity, nausea, depression, anxiety, drowsiness, appetite, sensation of well-being, and sometimes shortness of breath. A score of 0 indicates absence of the symptom, and a score of 10 indicates the worst possible severity. The instrument can be completed by the patient, with or without assistance, or by nurses and relatives.
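The scoring scheme described above is simple enough to sketch in code. The following Python snippet is a minimal illustration only, not any official ESAS software; the function and field names are invented for the example. It checks that each reported score is a whole number from 0 (symptom absent) to 10 (worst possible severity) and that each symptom belongs to the instrument's scales.

```python
# Hypothetical sketch of recording an ESAS-style assessment.
# Scale names follow the symptoms listed in the text; the optional
# "shortness_of_breath" item is included as sometimes assessed.
ESAS_SYMPTOMS = [
    "pain", "activity", "nausea", "depression", "anxiety",
    "drowsiness", "appetite", "well_being", "shortness_of_breath",
]

def validate_esas(scores: dict) -> dict:
    """Validate a partial ESAS report: every score must be an integer
    from 0 (absence of the symptom) to 10 (worst possible severity)."""
    for symptom, score in scores.items():
        if symptom not in ESAS_SYMPTOMS:
            raise ValueError(f"unknown symptom: {symptom}")
        if not (isinstance(score, int) and 0 <= score <= 10):
            raise ValueError(f"{symptom}: score must be 0-10, got {score}")
    return scores

assessment = validate_esas({"pain": 3, "nausea": 0, "anxiety": 6})
print(max(assessment, key=assessment.get))  # prints: anxiety
```

Because the instrument may be completed by the patient, a relative, or a nurse, a real record would also note who reported the scores and when; that bookkeeping is omitted here.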
Pediatric palliative care
Pediatric palliative care is family-centered, specialized medical care for children with serious illnesses that focuses on mitigating the physical, emotional, psychosocial, and spiritual suffering associated with illness to ultimately optimize quality of life.
Pediatric palliative care practitioners receive specialized training in family-centered, developmental and age-appropriate skills in communication and facilitation of shared decision making; assessment and management of pain and distressing symptoms; advanced knowledge in care coordination of multidisciplinary pediatric caregiving medical teams; referral to hospital and ambulatory resources available to patients and families; and psychologically supporting children and families through illness and bereavement.
Symptoms assessment and management of children
As with palliative care for adults, symptom assessment and management is a critical component of pediatric palliative care as it improves quality of life, gives children and families a sense of control, and prolongs life in some cases. The general approach to assessment and management of distressing symptoms in children by a palliative care team is as follows:
Identify and assess symptoms through history taking (focusing on location, quality, time course, as well as exacerbating and mitigating stimuli). Symptoms assessment in children is uniquely challenging due to communication barriers depending on the child's ability to identify and communicate about symptoms. Thus, both the child and caregivers should provide the clinical history. With this said, children as young as four years of age can indicate the location and severity of pain through visual mapping techniques and metaphors.
Perform a thorough exam of the child, paying special attention to the child's behavioral response to exam components, particularly in regard to potentially painful stimuli. A commonly held myth is that premature and neonatal infants do not experience pain due to their immature pain pathways, but research demonstrates that pain perception in these age groups is equal to or greater than that of adults. That said, some children experiencing intolerable pain present with 'psychomotor inertia', a phenomenon in which a child in severe chronic pain appears overly well behaved or depressed. These patients demonstrate behavioral responses consistent with pain relief when titrated with morphine. Finally, because children respond behaviorally to pain atypically, a playing or sleeping child should not be assumed to be without pain.
Identify the place of treatment (tertiary versus local hospital, intensive care unit, home, hospice, etc.).
Anticipate symptoms based on the typical disease course of the hypothesized diagnosis.
Present treatment options to the family proactively, based on care options and resources available in each of the aforementioned care settings. Ensuing management should anticipate transitions of palliative care settings to afford seamless continuity of service provision across health, education, and social care settings.
Consider both pharmacologic and non-pharmacologic treatment modalities (education and mental health support, administration of hot and cold packs, massage, play therapy, distraction therapy, hypnotherapy, physical therapy, occupational therapy, and complementary therapies) when addressing distressing symptoms. Respite care is an additional practice that can further help alleviate the physical and mental strain on the child and their family. By allowing other qualified individuals to take over caregiving, it gives the family time to rest and renew themselves.
Assess how the child perceives their symptoms (based on personal views) to create individualized care plans.
After the implementation of therapeutic interventions, involve both the child and family in the reassessment of symptoms.
The most common symptoms in children with severe chronic disease appropriate for palliative care consultation are weakness, fatigue, pain, poor appetite, weight loss, agitation, lack of mobility, shortness of breath, nausea and vomiting, constipation, sadness or depression, drowsiness, difficulty with speech, headache, excess secretions, anemia, pressure area problems, anxiety, fever, and mouth sores. The most common end of life symptoms in children include shortness of breath, cough, fatigue, pain, nausea and vomiting, agitation and anxiety, poor concentration, skin lesions, swelling of the extremities, seizures, poor appetite, difficulty with feeding, and diarrhea. In older children with neurologic and neuromuscular manifestations of disease, there is a high burden of anxiety and depression that correlates with disease progression, increasing disability, and greater dependence on carers. From the caregiver's perspective, families find changes in behavior, reported pain, lack of appetite, changes in appearance, talking to God or angels, breathing changes, weakness, and fatigue to be the most distressing symptoms to witness in their loved ones.
As discussed above, within the field of adult palliative medicine, validated symptom assessment tools are frequently utilized by providers, but these tools lack essential aspects of children's symptom experience. Within pediatrics, there is no comprehensive symptom assessment tool widely employed. A few symptom assessment tools trialed among older children receiving palliative care include the Symptom Distress Scale, the Memorial Symptom Assessment Scale, and the Childhood Cancer Stressors Inventory. Quality of life considerations within pediatrics are unique and an important component of symptom assessment. The Pediatric Cancer Quality of Life Inventory-32 (PCQL-32) is a standardized parent-proxy report which assesses cancer treatment-related symptoms (focusing mainly on pain and nausea). But again, this tool does not comprehensively assess all palliative care symptom issues. Symptom assessment tools for younger age groups are rarely utilized as they have limited value, especially for infants and young children who are not at a developmental stage where they can articulate symptoms.
Communication with children and families
Within the realm of pediatric medical care, the palliative care team is tasked with facilitating family-centered communication with children and their families, as well as multidisciplinary pediatric caregiving medical teams to forward coordinated medical management and the child's quality of life. Strategies for communication are complex as the pediatric palliative care practitioners must facilitate a shared understanding of and consensus for goals of care and therapies available to the sick child amongst multiple medical teams who often have different areas of expertise. Additionally, pediatric palliative care practitioners must assess both the sick child and their family's understanding of complex illness and options for care, and provide accessible, thoughtful education to address knowledge gaps and allow for informed decision making. Finally, practitioners are supporting children and families in the queries, emotional distress, and decision making that ensues from the child's illness.
Many frameworks for communication have been established within the medical literature, but the field of pediatric palliative care is still in relative infancy. Communication considerations and strategies employed in a palliative setting include:
Developing supportive relationships with patients and families. An essential component of a provider's ability to provide individualized palliative care is their ability to obtain an intimate understanding of the child and family's preferences and overall character. On initial consultation, palliative care providers often focus on affirming a caring relationship with the pediatric patient and their family by first asking the child how they would describe themself and what is important to them, communicating in an age and developmentally cognizant fashion. The provider may then gather similar information from the child's caregivers. Questions practitioners may ask include 'What does the child enjoy doing? What do they most dislike doing? What does a typical day look like for the child?' Other topics potentially addressed by the palliative care provider may also include familial rituals as well as spiritual and religious beliefs, life goals for the child, and the meaning of illness within the broader context of the child and their family's life.
Developing a shared understanding of the child's condition with the patient and their family. The establishment of shared knowledge between medical providers, patients, and families is essential when determining palliative goals of care for pediatric patients. Initially, practitioners often elicit information from the child and family to ascertain these parties' baseline understanding of the child's situation. Assessing for baseline knowledge allows the palliative care provider to identify knowledge gaps and provide education on those topics. Through this process, families can pursue informed, shared medical decision making regarding their child's care. A framework often employed by pediatric palliative care providers is 'ask, tell, ask', in which the provider asks the patient and their family a question to identify their level of comprehension of the situation, and then subsequently supplements the family's knowledge with additional expert knowledge. This information is often conveyed without jargon or euphemism to maintain trust and ensure understanding. Providers iteratively check for comprehension of this knowledge supplementation by asking questions related to previous explanations, as information retention can be challenging when undergoing a stressful experience.
Establishing meaning and dignity regarding the child's illness. As part of developing a shared understanding of a child's illness and character, palliative providers will assess both the child and their family's symbolic and emotional relationship to disease. As both the somatic and psychologic implications of illness can be distressing to children, palliative care practitioners look for opportunities to establish meaning and dignity regarding the child's illness by contextualizing disease within a broader framework of the child's life. Derived from the fields of dignity therapy and meaning-centered psychotherapy, the palliative care provider may explore the following questions with the sick child and their family:
What gives your life meaning, worth, or purpose?
Where do you find strength and support?
What inspires you?
How do you like to be thought of?
What are you most proud of?
What are the particular things you would like your family to know or remember about you?
When was the last time you laughed really hard?
Are you frightened by all of this? What, in particular, are you most frightened of?
What is the meaning of this (illness) experience for you? Do you ever think about why this happened to you?
Assessing preferences for decision making. Medical decision making in a pediatric setting is unique in that it is often the child's legal guardians, not the patient, who ultimately consent for most medical treatments. Yet within a palliative care setting, it is particularly consequential to incorporate the child's preferences within the ultimate goals of care. Equally important to consider, families may vary in the level of responsibility they want in this decision-making process. Their preference may range from wanting to be the child's sole decision makers, to partnering with the medical team in a shared decision making model, to advocating for full deferral of decision-making responsibility to the clinician. Palliative care providers clarify a family's preferences and support needs for medical decision making by providing context, information, and options for treatment and medical palliation. In the case of critically ill babies, parents are able to participate more in decision making if they are presented with options to be discussed rather than recommendations by the doctor. Utilizing this style of communication also leads to less conflict with doctors and might help the parents cope better with the eventual outcomes.
Optimizing the environment for effective conversations around prognosis and goals of care. Essential to facilitating supportive, clear communication around potentially distressing topics such as prognosis and goals of care for seriously ill pediatric patients is optimizing the setting where this communication will take place and developing informed consensus among the child's caregiving team regarding goals and options for care. Often, these conversations occur within the context of family meetings, which are formal meetings between families and the child's multidisciplinary medical team. Prior to the family meeting, providers often meet to discuss the child's overall case, reasonably expected prognosis, and options for care, in addition to clarifying specific roles each provider will take on during the family meeting. During this meeting, the multidisciplinary medical team may also discuss any legal or ethical considerations related to the case. Palliative care providers often facilitate this meeting and help synthesize its outcome for children and their families. Experts in optimized communication, palliative care providers may opt to hold the family meeting in a quiet space where the providers and family can sit and address concerns during a time when all parties are not constrained. Additionally, parents' preferences regarding information exchange with the sick child present should be clarified. If the child's guardians are resistant to disclosing information in front of their child, the child's provider may explore parental concerns on the topic. When excluded from family meetings and moments of challenging information exchange, adolescents, in particular, may have challenges with trusting their medical providers if they feel critical information is being withheld. It is important to follow the child's lead when deciding whether to disclose difficult information. 
Additionally, including them in these conversations can help the child fully participate in their care and medical decision making. Finally, it is important to prioritize the family's agenda while additionally considering any urgent medical decisions needed to advance the child's care.
Supporting emotional distress. A significant role of the pediatric palliative care provider is to help support children, their families, and their caregiving teams through the emotional stress of illness. Communication strategies the palliative care provider may employ in this role include asking for permission when engaging with potentially distressing conversations, naming emotions witnessed to create opportunities to discuss complex emotional responses to illness, actively listening, and allowing for invitational silence. The palliative care provider may iteratively assess the child and family's emotional responses and needs during challenging conversations. At times, the medical team may be hesitant to discuss a child's prognosis out of fear of increasing distress. This sentiment is not supported by the literature; among adults, end-of-life discussions are not associated with increased rates of anxiety or depression. Though this topic is not well studied in pediatric populations, conversations about prognosis have the potential to increase parental hope and peace of mind.
SPIKES framework. This is a framework designed to assist healthcare workers in delivering bad news. The acronym stands for: setting, perception, invitation, knowledge, empathy, and summarize/strategy. When giving bad news it is important to consider the setting, which includes the environment in which the healthcare provider is delivering the news, accounting for privacy, seating, time, and the inclusion of family members. What to say should also be considered, as well as rehearsed. It is important to understand how a patient is receiving the information by asking open-ended questions and asking them to repeat what they learned in their own words, which is the perception aspect of the framework. The healthcare provider should seek an invitation from the patient to disclose additional information before doing so, in order to prevent overwhelming or distressing the patient further. To ensure the patient understands what is being told, knowledge must be applied: speaking in a way that the patient will understand, using simple words, not being excessively blunt, giving information in small chunks and checking in with the patient to confirm understanding, and not providing inaccurate information that may not be completely true. To alleviate some of a patient's distress it is crucial to be empathetic, in the sense of understanding how a patient is feeling and the reactions they are having. This can allow one to change how the information is being delivered, give the patient time to process it, or console them if needed. Connecting with patients is an important step in delivering bad news; maintaining eye contact shows that the healthcare provider is present and that the patient and family have their full attention. Furthermore, the provider may make a connection by touching the patient's shoulder or hand, giving them a physical connection to know that they are not alone.
Finally, it is important to summarize all the information given in order to ensure the patient fully understands and takes away the major points. Additionally, patients who have a clear plan for the future are less likely to feel anxious and uncertain, but it is important to ask people if they are ready for that information before providing them with it.
Geriatric palliative care
With the transition in the population toward lower child mortality and lower death rates, countries around the world are seeing larger elderly populations. In some countries, this means a growing burden on national resources in the form of social security and health care payments. As aging populations put increasing pressure on existing resources, long-term palliative care for patients' non-communicable, chronic conditions has emerged as a necessary approach to increase these patients' quality of life, through prevention and relief by identifying, assessing, and treating the source of pain and other psychosocial and spiritual problems.
In religions
The doctrine of the Catholic Church, as a traditional reference, accepts and supports the use of palliative care. Most of the main religions in the world (including those with the largest numbers of adherents) are concordant with this point of view.
In society
Certification and training for services
In most countries, hospice care and palliative care are provided by an interdisciplinary team consisting of physicians, pharmacists, nurses, nursing assistants, social workers, chaplains, and caregivers. In some countries, additional members of the team may include certified nursing assistants and home healthcare aides, as well as volunteers from the community (largely untrained but some being skilled medical personnel), and housekeepers.
In the United Kingdom, Palliative Medicine specialist training is delivered alongside Internal Medicine stage two training over an indicative four years. Entry into Palliative medicine training is possible following successful completion of both a foundation programme and a core training programme. There are two core training programmes for Palliative Medicine training:
Internal Medical Training (IMT)
Acute Care Common Stem - Internal Medicine (ACCS-IM)
In the United States, the physician sub-specialty of hospice and palliative medicine was established in 2006 to provide expertise in the care of people with life-limiting, advanced disease, and catastrophic injury; the relief of distressing symptoms; the coordination of interdisciplinary care in diverse settings; the use of specialized care systems including hospice; the management of the imminently dying patient; and legal and ethical decision making in end of life care.
Caregivers, both family and volunteers, are crucial to the palliative care system. Caregivers and people being treated often form lasting friendships over the course of care. As a consequence caregivers may find themselves under severe emotional and physical strain. Opportunities for caregiver respite are some of the services hospices provide to promote caregiver well-being. Respite may last a few hours up to several days (the latter being done by placing the primary person being cared for in a nursing home or inpatient hospice unit for several days).
In the US, board certification for physicians in palliative care was through the American Board of Hospice and Palliative Medicine; recently this was changed to be done through any of 11 different speciality boards through an American Board of Medical Specialties-approved procedure. Additionally, board certification is available to osteopathic physicians (D.O.) in the United States through four medical specialty boards through an American Osteopathic Association Bureau of Osteopathic Specialists-approved procedure. More than 50 fellowship programs provide one to two years of specialty training following a primary residency. In the United Kingdom palliative care has been a full specialty of medicine since 1989 and training is governed by the same regulations through the Royal College of Physicians as with any other medical speciality. Nurses, in the United States and internationally, can receive continuing education credits through Palliative Care specific trainings, such as those offered by End-of-Life Nursing Education Consortium (ELNEC).
The Tata Memorial Centre in Mumbai has offered a physician's course in palliative medicine since 2012, the first one of its kind in the country.
Regional variation in services
In the United States, hospice and palliative care represent two different aspects of care with similar philosophies, but with different payment systems and location of services. Palliative care services are most often provided in acute care hospitals organized around an interdisciplinary consultation service, with or without an acute inpatient palliative care unit. Palliative care may also be provided in the dying person's home as a "bridge" program between traditional US home care services and hospice care or provided in long-term care facilities. In contrast over 80% of hospice care in the US is provided at home with the remainder provided to people in long-term care facilities or in free standing hospice residential facilities. In the UK hospice is seen as one part of the speciality of palliative care and no differentiation is made between 'hospice' and 'palliative care'.
In the UK palliative care services offer inpatient care, home care, day care and outpatient services, and work in close partnership with mainstream services. Hospices often house a full range of services and professionals for children and adults. In 2015 the UK's palliative care was ranked as the best in the world "due to comprehensive national policies, the extensive integration of palliative care into the National Health Service, a strong hospice movement, and deep community engagement on the issue".
In 2021 the UK's National Palliative and End of Life Care Partnership published their six ambitions for 2021–26. These include fair access to end of life care for everyone regardless of who they are, where they live or their circumstances, and the need to maximise comfort and wellbeing. Informed and timely conversations are also highlighted.
Acceptance and access
The focus on a person's quality of life has increased greatly since the 1990s. In the United States today, 55% of hospitals with more than 100 beds offer a palliative-care program, and nearly one-fifth of community hospitals have palliative-care programs. A relatively recent development is the palliative-care team, a dedicated health care team that is entirely geared toward palliative treatment.
Physicians practicing palliative care do not always receive support from the people they are treating, family members, healthcare professionals or their social peers. More than half of physicians in one survey reported that they have had at least one experience where a patient's family members, another physician or another health care professional had characterized their work as being "euthanasia, murder or killing" during the last five years. A quarter of them had received similar comments from their own friends or family members, or from a patient.
Despite significant progress that has been made to increase access to palliative care within the United States and other countries, many countries have not yet considered palliative care as a public health problem, and therefore do not include it in their public health agenda. Resources and cultural attitudes both play significant roles in the acceptance and implementation of palliative care in the health care agenda. A study identified the current gaps in palliative care for people with severe mental illness (SMI). They found that due to the lack of resources within both mental health and end of life services, people with SMI faced a number of barriers to accessing timely and appropriate palliative care. They called for a multidisciplinary team approach, including advocacy, with a point of contact co-ordinating the appropriate support for the individual. They also state that end of life and mental health care needs to be included in the training for professionals.
A review states that restricting referrals to palliative care to patients who have a definitive timeline for death, something the study found to often be inaccurate, can have negative implications for the patient, both when accessing end of life care and when being unable to access services due to not receiving a timeline from medical professionals. The authors call for a less rigid approach to referrals to palliative care services in order to better support the individual, improve the quality of life remaining and provide more holistic care.
Many people with chronic pain are stigmatized and treated as opioid addicts. Patients can build a tolerance to drugs and have to take more and more to manage their pain. The symptoms of chronic pain patients do not show up on scans, so the doctor must rely on trust alone. For this reason, some wait to consult their doctor, sometimes enduring years of pain before seeking help.
In media
Palliative care was the subject of the 2018 Netflix short documentary, End Game by directors Rob Epstein and Jeffrey Friedman about terminally ill patients in a San Francisco hospital and features the work of palliative care physician, BJ Miller. The film's executive producers were Steven Ungerleider, David C. Ulich and Shoshana R. Ungerleider.
In 2016, an open letter to the singer David Bowie written by a palliative care doctor, Professor Mark Taubert, talked about the importance of good palliative care, being able to express wishes about the last months of life, and good tuition and education about end of life care generally. The letter went viral after David Bowie's son Duncan Jones shared it. The letter was subsequently read out by the actor Benedict Cumberbatch and the singer Jarvis Cocker at public events.
Research
Research funded by the UK's National Institute for Health and Care Research (NIHR) has addressed these areas of need. Examples highlight inequalities faced by several groups and offer recommendations. These include the need for close partnership between services caring for people with severe mental illness, improved understanding of the barriers faced by Gypsy, Traveller and Roma communities, and the provision of flexible palliative care services for children from ethnic minorities or deprived areas.
Other research suggests that giving nurses and pharmacists easier access to electronic patient records about prescribing could help people manage their symptoms at home. A named professional to support and guide patients and carers through the healthcare system could also improve the experience of care at home at the end of life. A synthesised review looking at palliative care in the UK created a resource showing which services were available and grouped them according to their intended purpose and benefit to the patient. They also stated that currently in the UK palliative services are only available to patients with a timeline to death, usually 12 months or less. They found these timelines to often be inaccurate and created barriers to patients accessing appropriate services. They call for a more holistic approach to end of life care which is not restricted by arbitrary timelines.
GNU Project
The GNU Project is a free software, mass collaboration project announced by Richard Stallman on September 27, 1983. Its goal is to give computer users freedom and control in their use of their computers and computing devices by collaboratively developing and publishing software that gives everyone the rights to freely run the software, copy and distribute it, study it, and modify it. GNU software grants these rights in its license.
In order to ensure that the entire software of a computer grants its users all freedom rights (use, share, study, modify), even the most fundamental and important part, the operating system (including all its numerous utility programs) needed to be free software. Stallman decided to call this operating system GNU (a recursive acronym meaning "GNU's not Unix!"), basing its design on that of Unix, a proprietary operating system. According to its manifesto, the founding goal of the project was to build a free operating system, and if possible, "everything useful that normally comes with a Unix system so that one could get along without any software that is not free." Development was initiated in January 1984. In 1991, the Linux kernel appeared, developed outside the GNU project by Linus Torvalds, and in December 1992 it was made available under version 2 of the GNU General Public License. Combined with the operating system utilities already developed by the GNU project, it allowed for the first operating system that was free software, commonly known as Linux.
The project's current work includes software development, awareness building, political campaigning, and sharing of new material.
Origins
Richard Stallman announced his intent to start coding the GNU Project in a Usenet message in September 1983. Despite never having used Unix before, Stallman felt that it was the most appropriate system design to use as a basis for the GNU Project, as it was portable and "fairly clean".
When the GNU project first started they had an Emacs text editor with Lisp for writing editor commands, a source level debugger, a yacc-compatible parser generator, and a linker. The GNU system required its own C compiler and tools to be free software, so these also had to be developed. By June 1987, the project had accumulated and developed free software for an assembler, an almost finished portable optimizing C compiler (GCC), an editor (GNU Emacs), and various Unix utilities (such as ls, grep, awk, make and ld). They had an initial kernel that needed more updates.
Once the kernel and the compiler were finished, GNU was able to be used for program development. The main goal was to create many other applications to be like the Unix system. GNU was able to run Unix programs but was not identical to it. GNU incorporated longer file names, file version numbers, and a crash-proof file system. The GNU Manifesto was written to gain support and participation from others for the project. Programmers were encouraged to take part in any aspect of the project that interested them. People could donate funds, computer parts, or even their own time to write code and programs for the project.
The origins and development of most aspects of the GNU Project (and free software in general) are shared in a detailed narrative in the Emacs help system. (C-h g runs the Emacs editor command describe-gnu-project.) It is the same detailed history as at their web site.
GNU Manifesto
The GNU Manifesto was written by Richard Stallman to gain support and participation in the GNU Project. In the GNU Manifesto, Stallman listed four freedoms essential to software users: freedom to run a program for any purpose, freedom to study the mechanics of the program and modify it, freedom to redistribute copies, and freedom to improve and change modified versions for public use. To implement these freedoms, users needed full access to the source code. To ensure code remained free and provide it to the public, Stallman created the GNU General Public License (GPL), which allowed software and the future generations of code derived from it to remain free for public use.
Philosophy and activism
Although most of the GNU Project's output is technical in nature, it was launched as a social, ethical, and political initiative. As well as producing software and licenses, the GNU Project has published a number of writings, the majority of which were authored by Richard Stallman.
Free software
The GNU project uses software that is free for users to copy, edit, and distribute. It is free in the sense that users can change the software to fit individual needs. The way programmers obtain the free software depends on where they get it. The software could be provided to the programmer from friends or over the Internet, or the company a programmer works for may purchase the software.
Funding
Proceeds from associate members, purchases, and donations support the GNU Project.
Copyleft
Copyleft is what helps maintain free use of this software among other programmers. Copyleft gives the legal right to everyone to use, edit, and redistribute programs or programs' code as long as the distribution terms do not change. As a result, any user who obtains the software legally has the same freedoms as the rest of its users do.
The GNU Project and the Free Software Foundation sometimes differentiate between "strong" and "weak" copyleft. "Weak" copyleft programs typically allow distributors to link them together with non-free programs, while "strong" copyleft strictly forbids this practice. Most of the GNU Project's output is released under a strong copyleft, although some is released under a weak copyleft or a lax, push-over free software license.
Operating system development
The first goal of the GNU project was to create a whole free-software operating system. Because UNIX was already widespread and ran on more powerful machines than contemporary CP/M or MS-DOS machines of the time, it was decided that GNU would be a Unix-like operating system. Richard Stallman later commented that he considered MS-DOS "a toy".
By 1992, the GNU project had completed all of the major operating system utilities, but had not completed their proposed operating system kernel, GNU Hurd. With the release of the Linux kernel, started independently by Linus Torvalds in 1991, and released under the GPLv2 with version 0.12 in 1992, for the first time it was possible to run an operating system composed completely of free software. Though the Linux kernel is not part of the GNU project, it was developed using GCC and other GNU programming tools and was released as free software under the GNU General Public License. Most compilation of the Linux kernel is still done with GNU toolchains, but it is currently possible to use the Clang compiler and the LLVM toolchain for compilation.
To date, the GNU project has not released a version of GNU Hurd that is suitable for production environments.
GNU/Linux
A stable version (or variant) of GNU can be run by combining the GNU packages with the Linux kernel, making a functional Unix-like system. The GNU project calls this GNU/Linux, and the defining features are the combination of:
GNU packages (except for GNU Hurd) – the GNU packages consist of numerous operating system tools and utilities (shell, coreutils, compilers, libraries, etc.) including a library implementation of all of the functions specified in the POSIX System Application Program Interface (POSIX.1). The GCC compiler can generate machine-code for a large variety of computer-architectures.
Linux kernel – this implements program scheduling, multitasking, device drivers, memory management, etc. and allows the system to run on a large variety of computer-architectures. Linus Torvalds released the Linux kernel under the GNU General Public License in 1992; it is however not part of the GNU project.
non-GNU programs – various free software packages which are not a part of the GNU Project but are released under the GNU General Public License or another FSF-approved Free Software License.
Within the GNU website, a list of projects is laid out, and each project specifies what type of developer is able to perform the tasks needed for a certain piece of the GNU project. The skill level required ranges from project to project, but anyone with background knowledge in programming is encouraged to support the project.
The packaging of GNU tools, together with the Linux kernel and other programs, is usually called a Linux distribution (distro). The GNU Project calls the combination of GNU and the Linux kernel "GNU/Linux", and asks others to do the same, resulting in the GNU/Linux naming controversy.
Most Linux distros combine GNU packages with a Linux kernel which contains proprietary binary blobs.
GNU Free System Distribution Guidelines
The GNU Free System Distribution Guidelines (GNU FSDG) is a system distribution commitment that explains how an installable system distribution (such as a Linux distribution) qualifies as free (libre), and helps distribution developers make their distributions qualify.
The list mostly describes distributions that are a combination of GNU packages with a Linux-libre kernel (a modified Linux kernel that removes binary blobs, obfuscated code, and portions of code under proprietary licenses) and consist only of free software (eschewing proprietary software entirely). Distributions that have adopted the GNU FSDG include Dragora GNU/Linux-Libre, GNU Guix System, Hyperbola GNU/Linux-libre, Parabola GNU/Linux-libre, Trisquel GNU/Linux, PureOS, and a few others.
The Fedora Project's distribution license guidelines were used as a basis for the FSDG. The Fedora Project's own guidelines, however, currently do not follow the FSDG, and thus the GNU Project does not consider Fedora to be a fully free (libre) GNU/Linux distribution.
Strategic projects
From the mid-1990s onward, with many companies investing in free software development, the Free Software Foundation redirected its funds toward the legal and political support of free software development. Software development from that point on focused on maintaining existing projects, and starting new projects only when there was an acute threat to the free software community. One of the most notable projects of the GNU Project is the GNU Compiler Collection, whose components have been adopted as the standard compiler system on many Unix-like systems.
The copyright of most works by the GNU Project is owned by the Free Software Foundation.
GNOME
The GNOME desktop effort was launched by the GNU Project because another desktop system, KDE, was becoming popular but required users to install Qt, which was then proprietary software. To prevent people from being tempted to install KDE and Qt, the GNU Project simultaneously launched two projects. One was the Harmony toolkit, an attempt to make a free software replacement for Qt. Had this project been successful, the perceived problem with KDE would have been solved. The second project was GNOME, which tackled the same issue from a different angle. It aimed to make a replacement for KDE that had no dependencies on proprietary software. The Harmony project did not make much progress, but GNOME developed very well. Eventually, the proprietary component that KDE depended on (Qt) was released as free software. GNOME has since dissociated itself from the GNU Project and the Free Software Foundation, and is now independently managed by the GNOME Project.
GNU Enterprise
GNU Enterprise (GNUe) was a meta-project started in 1996, and can be regarded as a sub-project of the GNU Project. GNUe's goal was to create free "enterprise-class data-aware applications" (enterprise resource planners, etc.). GNUe was designed to collect enterprise software for the GNU system in a single location (much like the GNOME project collects desktop software); it was later decommissioned.
Recognition
In 2001, the GNU Project received the USENIX Lifetime Achievement Award for "the ubiquity, breadth, and quality of its freely available redistributable and modifiable software, which has enabled a generation of research and commercial development".
Continental fragment
Continental crustal fragments, partly synonymous with microcontinents, are pieces of continents that have broken off from main continental masses to form distinct islands that are often several hundred kilometers from their place of origin.
Causes
Continental fragments and microcontinent crustal compositions are very similar to those of regular continental crust. The rifting process that caused the continental fragments to form most likely impacts their layers and overall thickness along with the addition of mafic intrusions to the crust. Studies have determined that the average crustal thickness of continental fragments is approximately . The sedimentary layer of continental fragments can be up to thick and can overlay two to three crustal layers. Continental fragments have an average crustal density of which is very similar to that of typical continental crust.
Strike-slip fault zones cause the fragmentation of microcontinents. The zones link the extensional zones where continental pieces are already isolated through the remaining continental bridges. Additionally, they facilitate quick crustal thinning across narrow zones and near-vertical strike-slip-dominated faults. They develop fault-block patterns that slice the portion of continent into detachable slivers. The continental fragments are located at various angles from their transform faults.
History
Some microcontinents are fragments of Gondwana or other ancient cratonic continents; examples include Madagascar; the northern Mascarene Plateau, which includes the Seychelles Microcontinent; and the island of Timor. Other islands, such as several in the Caribbean Sea, are composed largely of granitic rock as well, but all continents contain both granitic and basaltic crust, and there is no clear dividing line between islands and microcontinents under such a definition. The Kerguelen Plateau is a large igneous province formed by a volcanic hotspot; however, it was associated with the breakup of Gondwana and was for a time above water, so it is considered a microcontinent, though not a continental fragment. Other hotspot islands such as the Hawaiian Islands and Iceland are considered neither microcontinents nor continental fragments. Not all islands can be considered microcontinents: Borneo, the British Isles, Newfoundland, and Sri Lanka, for example, are each within the continental shelf of an adjacent continent, separated from the mainland by inland seas flooding its margins.
Several islands in the eastern Indonesian Archipelago are considered continental fragments, although this designation is controversial. The archipelago is home to numerous microcontinents with complex geology and tectonics. This makes it complicated to classify landmasses and determine causation for the formation of the landmass. These include southern Bacan, Banggai-Sulu Islands (Sulawesi), the Buru-Seram-Ambon complex (Maluku), Obi, Sumba, and Timor (Nusa Tenggara).
List of continental fragments and microcontinents
Continental fragments (pieces of Pangaea smaller than Australia)
Parts of
Possibly Sumba, Timor, and other islands of eastern Indonesia; Sulawesi formed via the subduction of a microcontinent
South Orkney microcontinent
Other microcontinents (formed post-Pangaea)
Cuba, Hispaniola, Jamaica, and other granitic Caribbean islands
Future microcontinents
Ajan, a continent predicted to form in 3 to 20 million years' time as it breaks off from mainland Africa.
Frequency (statistics)
In statistics, the frequency or absolute frequency of an event is the number of times the observation has occurred/been recorded in an experiment or study. These frequencies are often depicted graphically or in tabular form.
Types
The cumulative frequency is the total of the absolute frequencies of all events at or below a certain point in an ordered list of events.
The relative frequency (or empirical probability) f_i of an event i is the absolute frequency n_i normalized by the total number of events N:
f_i = n_i / N = n_i / (n_1 + n_2 + ... + n_k)
The values of f_i for all events i can be plotted to produce a frequency distribution.
In the case when n_i = 0 for certain i, pseudocounts can be added.
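The definitions above can be sketched in a short Python helper; the function name and the toy data are illustrative, not part of the original text:

```python
from collections import Counter

def frequency_table(observations):
    """Absolute, relative, and cumulative frequencies of each event."""
    n = len(observations)
    absolute = Counter(observations)                    # n_i for each event
    events = sorted(absolute)                           # ordered list of events
    relative = {e: absolute[e] / n for e in events}     # f_i = n_i / N
    cumulative, running = {}, 0
    for e in events:
        running += absolute[e]
        cumulative[e] = running                         # total at or below e
    return absolute, relative, cumulative

abs_f, rel_f, cum_f = frequency_table([1, 2, 2, 3, 3, 3])
# abs_f[3] == 3, rel_f[2] == 2/6, cum_f[2] == 3
```

Note how the cumulative frequency is just a running total over the ordered events, matching the definition above.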
Depicting frequency distributions
A frequency distribution shows a summarized grouping of data divided into mutually exclusive classes and the number of occurrences in a class. It is a way of showing unorganized data notably to show results of an election, income of people for a certain region, sales of a product within a certain period, student loan amounts of graduates, etc. Some of the graphs that can be used with frequency distributions are histograms, line charts, bar charts and pie charts. Frequency distributions are used for both qualitative and quantitative data.
Construction
Decide the number of classes. Too many classes or too few classes might not reveal the basic shape of the data set, and it will be difficult to interpret such a frequency distribution. The ideal number of classes C may be determined or estimated by the formula C = 1 + 3.3 log n (log base 10), or by the square-root choice formula C = √n, where n is the total number of observations in the data. (The latter will be much too large for large data sets such as population statistics.) However, these formulas are not a hard rule, and the resulting number of classes determined by formula may not always be exactly suitable for the data being dealt with.
Calculate the range of the data (Range = Max – Min) by finding the minimum and maximum data values. Range will be used to determine the class interval or class width.
Decide the width of the classes, denoted by h and obtained by h = range / number of classes (assuming the class intervals are the same for all classes).
Generally the class interval or class width is the same for all classes. The classes all taken together must cover at least the distance from the lowest value (minimum) in the data to the highest (maximum) value. Equal class intervals are preferred in frequency distribution, while unequal class intervals (for example logarithmic intervals) may be necessary in certain situations to produce a good spread of observations between the classes and avoid a large number of empty, or almost empty classes.
Decide the individual class limits and select a suitable starting point of the first class which is arbitrary; it may be less than or equal to the minimum value. Usually it is started before the minimum value in such a way that the midpoint (the average of lower and upper class limits of the first class) is properly placed.
Take an observation and mark a vertical bar (|) for the class to which it belongs. A running tally is kept until the last observation.
Find the frequencies, relative frequency, cumulative frequency etc. as required.
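The construction steps above can be sketched as a small Python function; the log-based class-count formula is used for step one, and the function name and sample data are illustrative:

```python
import math

def build_classes(data):
    """Sketch of the construction steps: class count, range, width, tally."""
    n = len(data)
    k = max(1, round(1 + 3.3 * math.log10(n)))    # number of classes
    lo, hi = min(data), max(data)
    rng = hi - lo                                 # range = max - min
    h = rng / k                                   # class width
    # class limits starting at the minimum, then tally each observation
    limits = [(lo + i * h, lo + (i + 1) * h) for i in range(k)]
    counts = [0] * k
    for x in data:
        i = min(int((x - lo) / h), k - 1)         # maximum value goes in last class
        counts[i] += 1
    return limits, counts

limits, counts = build_classes([2, 3, 5, 7, 11, 13, 17, 19, 23, 29])
# every observation falls in exactly one class: sum(counts) == 10
```

This sketch assumes a non-degenerate data set (h > 0); real data may call for a hand-picked starting point below the minimum, as the text notes.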
The following are some commonly used methods of depicting frequency:
Histograms
A histogram is a representation of tabulated frequencies, shown as adjacent rectangles or squares (in some situations), erected over discrete intervals (bins), with an area proportional to the frequency of the observations in the interval. The height of a rectangle is also equal to the frequency density of the interval, i.e., the frequency divided by the width of the interval. The total area of the histogram is equal to the number of data points. A histogram may also be normalized, displaying relative frequencies. It then shows the proportion of cases that fall into each of several categories, with the total area equaling 1. The categories are usually specified as consecutive, non-overlapping intervals of a variable. The categories (intervals) must be adjacent, and often are chosen to be of the same size. The rectangles of a histogram are drawn so that they touch each other to indicate that the original variable is continuous.
Bar graphs
A bar chart or bar graph is a chart with rectangular bars with lengths proportional to the values that they represent. The bars can be plotted vertically or horizontally. A vertical bar chart is sometimes called a column bar chart.
Frequency distribution table
A frequency distribution table is an arrangement of the values that one or more variables take in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way, the table summarizes the distribution of values in the sample.
This is an example of a univariate (=single variable) frequency table. The frequency of each response to a survey question is depicted.
A different tabulation scheme aggregates values into bins such that each bin encompasses a range of values. For example, the heights of the students in a class could be organized into the following frequency table.
Joint frequency distributions
Bivariate joint frequency distributions are often presented as (two-way) contingency tables:
The total row and total column report the marginal frequencies or marginal distribution, while the body of the table reports the joint frequencies.
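A minimal sketch of a two-way contingency table, computing the joint frequencies (the body) and the marginal frequencies (the total row and column). The paired observations are hypothetical:

```python
# Hypothetical bivariate observations: (group, category) pairs are illustrative.
pairs = [("F", "cat"), ("F", "dog"), ("M", "dog"), ("M", "dog"),
         ("F", "cat"), ("M", "cat"), ("F", "dog"), ("M", "dog")]

rows = sorted({r for r, _ in pairs})
cols = sorted({c for _, c in pairs})

# Joint frequencies form the body of the table.
joint = {(r, c): sum(1 for p in pairs if p == (r, c)) for r in rows for c in cols}

# Marginal frequencies are the row and column totals.
row_totals = {r: sum(joint[(r, c)] for c in cols) for r in rows}
col_totals = {c: sum(joint[(r, c)] for r in rows) for c in cols}

print("      " + "  ".join(f"{c:>5}" for c in cols) + "  Total")
for r in rows:
    cells = "  ".join(f"{joint[(r, c)]:>5}" for c in cols)
    print(f"{r:>5} {cells}  {row_totals[r]:>5}")
print("Total " + "  ".join(f"{col_totals[c]:>5}" for c in cols) + f"  {len(pairs):>5}")
```

The grand total (sum of either set of marginals) equals the number of observations.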
Interpretation
Under the frequency interpretation of probability, it is assumed that as the length of a series of trials increases without bound, the fraction of experiments in which a given event occurs will approach a fixed value, known as the limiting relative frequency.
This interpretation is often contrasted with Bayesian probability. In fact, the term 'frequentist' was first used by M. G. Kendall in 1949, to contrast with Bayesians, whom he called "non-frequentists". He observed
3....we may broadly distinguish two main attitudes. One takes probability as 'a degree of rational belief', or some similar idea...the second defines probability in terms of frequencies of occurrence of events, or by relative proportions in 'populations' or 'collectives'; (p. 101)
...
12. It might be thought that the differences between the frequentists and the non-frequentists (if I may call them such) are largely due to the differences of the domains which they purport to cover. (p. 104)
...
I assert that this is not so ... The essential distinction between the frequentists and the non-frequentists is, I think, that the former, in an effort to avoid anything savouring of matters of opinion, seek to define probability in terms of the objective properties of a population, real or hypothetical, whereas the latter do not. [emphasis in original]
Applications
Managing and operating on frequency tabulated data is much simpler than operating on raw data. There are simple algorithms to calculate median, mean, standard deviation etc. from these tables.
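One such set of algorithms works directly on grouped data: the mean uses interval midpoints, the median is found by linear interpolation within the interval containing the middle observation, and the standard deviation comes from the grouped second moment. The intervals and frequencies below are illustrative:

```python
import math

# Hypothetical grouped data: (lower_edge, upper_edge, frequency); values illustrative.
groups = [(0, 10, 5), (10, 20, 8), (20, 30, 12), (30, 40, 5)]

n = sum(f for _, _, f in groups)

# Mean: each interval is represented by its midpoint.
mean = sum(f * (lo + hi) / 2 for lo, hi, f in groups) / n

# Median: interpolate within the interval containing the n/2-th observation.
cum = 0
for lo, hi, f in groups:
    if cum + f >= n / 2:
        median = lo + (n / 2 - cum) / f * (hi - lo)
        break
    cum += f

# Standard deviation from the grouped second moment about zero.
mean_sq = sum(f * ((lo + hi) / 2) ** 2 for lo, hi, f in groups) / n
sd = math.sqrt(mean_sq - mean ** 2)

print(f"n={n}, mean={mean:.2f}, median={median:.2f}, sd={sd:.2f}")
```

These grouped formulas are approximations: they assume values are spread evenly within each interval, which is exactly why frequency tables trade a little precision for much simpler computation.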
Statistical hypothesis testing is founded on the assessment of differences and similarities between frequency distributions. This assessment involves measures of central tendency or averages, such as the mean and median, and measures of variability or statistical dispersion, such as the standard deviation or variance.
A frequency distribution is said to be skewed when its mean and median are significantly different, or more generally when it is asymmetric. The kurtosis of a frequency distribution is a measure of the proportion of extreme values (outliers), which appear at either end of the histogram. If the distribution is more outlier-prone than the normal distribution it is said to be leptokurtic; if less outlier-prone it is said to be platykurtic.
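The skewness and kurtosis just described can be computed from the sample's central moments; the sketch below uses the moment-based definitions (skewness m3/m2^1.5, excess kurtosis m4/m2^2 − 3) on an illustrative right-skewed sample:

```python
def moments_shape(data):
    """Moment-based skewness and excess kurtosis of a sample."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3   # 0 for a normal distribution
    return skew, excess_kurtosis

# Illustrative sample: one large outlier pulls the mean above the median,
# producing positive skew and leptokurtic (outlier-prone) shape.
sample = [1, 2, 2, 3, 3, 3, 4, 4, 5, 20]
skew, kurt = moments_shape(sample)
print(f"skewness={skew:.2f}, excess kurtosis={kurt:.2f}")
```

Positive excess kurtosis here corresponds to a leptokurtic distribution; a negative value would indicate a platykurtic one.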
Letter frequency distributions are also used in frequency analysis to crack ciphers, and to compare the relative frequencies of letters across languages such as Greek and Latin.
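The core of such frequency analysis is just a relative frequency table over letters. A minimal sketch, using an illustrative English sentence (in typical English text 'e' is the most common letter, which simple substitution-cipher attacks exploit):

```python
from collections import Counter

def letter_frequencies(text):
    """Relative frequency of each letter, ignoring case and non-letters."""
    letters = [ch for ch in text.lower() if ch.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {ch: counts[ch] / total for ch in counts}

# Illustrative sample text; any sufficiently long passage would do.
freqs = letter_frequencies("The quick brown fox jumps over the lazy dog, repeatedly.")
most_common = max(freqs, key=freqs.get)
print(f"most common letter: {most_common!r} ({freqs[most_common]:.2%})")
```

Comparing such tables computed from texts in different languages reveals each language's characteristic letter distribution.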
Squilla mantis
Squilla mantis is a species of mantis shrimp found in shallow coastal areas of the Mediterranean Sea and the Eastern Atlantic Ocean; it is also known as "pacchero" or "canocchia". Its abundance has led to it being the only commercially fished mantis shrimp in the Mediterranean.
Description
Individuals grow up to long, and this mantis shrimp is of the spearer type. It is generally dull brown in colouration, but has two brown eye spots, circled in white, at the base of the telson. Other species – including smashers – are also sold in the aquarium trade as Squilla mantis.
Distribution and ecology
S. mantis digs burrows in muddy and sandy bottoms near the coasts of the Mediterranean Sea and adjacent warm parts of the eastern Atlantic Ocean. It remains in its burrow during the day and comes out at night to hunt, and in the winter to mate.
It is found around the entire coast of the Mediterranean, and in the Atlantic Ocean south from the Gulf of Cádiz to Angola, as well as around the Canary Islands, and Madeira. It has historically been recorded from the Bay of Biscay and the British Isles, but is not known to occur there any more. It is particularly abundant where there is significant run-off from rivers, and where the substrate is suitable for burrowing. In the Mediterranean, the outflows from the Nile, Po, Ebro and Rhône provide these conditions.
The alpheid shrimp Athanas amazone often lives in the burrows of S. mantis, despite being of a similar size to other shrimp which S. mantis feeds on. The relationship between the two species remains unknown, although a second similar case has been reported for the species Athanas squillophilus in the burrows of Oratosquilla oratoria in Japanese waters.
Fishery
S. mantis is the only native stomatopod to be fished for on a commercial scale in the Mediterranean. Over 7,000 t is caught annually, 85% of which is caught on Italian shores of the Adriatic Sea, with further production in the Ionian Sea, off Sardinia, off the coast of Catalonia and off the Balearic Islands. Outside the Mediterranean, it is consumed in Andalusia in the Gulf of Cadiz under the name of "galeras".
American Brahman
The Brahman is an American breed of zebuine-taurine hybrid beef cattle. It was bred in the United States from 1885 using cattle originating in India, imported at various times from the United Kingdom, India, and Brazil. These were mainly Gir, Guzerá and Nelore stock, with some Indu-Brasil, Krishna Valley and Ongole. The Brahman has a high tolerance of heat, sunlight and humidity, and good resistance to parasites. It has been exported to many countries, particularly in the tropics; in Australia it is the most numerous breed of cattle. It has been used in the creation of numerous taurine-indicine hybrids, some of which – such as the Brangus and Brahmousin – are established as separate breeds.
History
Zebuine (Asian humped) cattle were present in the United States from 1849, when a single bull of Indian origin was imported from the United Kingdom to South Carolina. In 1885 a pair of grey bulls was brought directly from India to Texas; one was large, weighing over , the other weighed little more than half that. Cross-breeding of these with local taurine cows was the first step in the creation of the Brahman breed. Other small groups of Indian cattle were imported up to about 1906, mostly to Texas; some of them were imported to be displayed as circus animals, and were later sold to ranchers. In 1924 and 1925 a total of 210 bulls and 18 cows of mainly zebuine-taurine hybrid Guzerá stock, but also including some Gir and Nelore, were brought from Brazil to the United States through Mexico.
A breed association, the American Brahman Breeders Association, was formed in 1924, and a herd-book was started. The name 'Brahman' was chosen by J. W. Sartwelle, secretary of the association. In 1939 the herd-book was closed, thereafter recording only the offspring of registered parents. The registration in 1946 of eighteen imported Brazilian bulls, mainly Indu-Brasil and Gir, was permitted, as were some later additions of imported stock. The association registered all American indicine cattle in the same herd-book until 1991, when herd-books for Gir, Guzerat, Indu-Brasil, Nelore and Tabapua were separated from that for the American Red and Grey Brahman.
Exports of cattle of this breed to Australia began in 1933 and continued until 1954, amounting to 49 head in all; by 1973 their offspring numbered more than . Some further imports, numbering about 700 head in total, took place after 1981. By 1987 there were over a million in Queensland alone, and by the end of the century there were more of them in Australia than of any other breed, particularly in the tropical north of the country.
The Brahman is reported from fifty-five countries, in all inhabited continents, with an estimated world population of over 1.8 million head. Populations of over are reported by Argentina, Colombia, the Dominican Republic, Ecuador, Mexico, Mozambique and South Africa. No population data for the United States has been reported since 2014, when there were just under head.
Characteristics
The Brahman has good tolerance of heat and humidity, good resistance to insects, and good tolerance of poor feeding conditions. It has been found to do well in southern coastal areas of the United States. These characteristics may be transmitted to cross-breeds of Brahman with cattle of European origin, which can also benefit from heterosis ('hybrid vigor').
Use
The Brahman is reared for the meat industry, particularly in areas where good resistance to hot or tropical conditions is needed. As with other zebuine cattle, the meat is of lower quality than that of specialised European beef cattle breeds. For this reason it is commonly cross-bred with cattle of those breeds, either by raising hybrid calves born to pure-bred parents, or by creating a composite or hybrid breed, of which there are many. Some of them, such as the Brahmousin (Brahman x Limousin), Brangus (Brahman x Angus) and Simbrah (Brahman x Simmental) have acquired breed status in their own right, but many others have not. These include the Brahorn (Brahman x Shorthorn), the Bravon (Brahman x Devon) and South Bravon (Brahman x South Devon), the Bra-Swiss (Brahman x Brown Swiss), the Sabre (Brahman x Sussex) and the Braford (Brahman x Hereford).
In Oman and Fujairah, Brahman bulls are used in the traditional sport of bull-butting; they may be fed milk and honey to prepare them for the contest.
Gallery
American alligator
The American alligator (Alligator mississippiensis), sometimes referred to as a gator or common alligator, is a large crocodilian reptile native to the Southeastern United States and a small section of northeastern Mexico. It is one of the two extant species in the genus Alligator, and is larger than the only other living alligator species, the Chinese alligator.
Adult male American alligators measure in length, and can weigh up to , with unverified sizes of up to and weights of making it the second largest member by length and the heaviest of the family Alligatoridae, after the black caiman. Females are smaller, measuring in length. The American alligator inhabits subtropical and tropical freshwater wetlands, such as marshes and cypress swamps, from southern Texas to North Carolina. It is distinguished from the sympatric American crocodile by its broader snout, with overlapping jaws and darker coloration, and is less tolerant of saltwater but more tolerant of cooler climates than the American crocodile, which is found only in tropical and warm subtropical climates.
American alligators are apex predators and consume fish, amphibians, reptiles, birds, and mammals. Hatchlings feed mostly on invertebrates. They play an important role as ecosystem engineers in wetland ecosystems through the creation of alligator holes, which provide both wet and dry habitats for other organisms. Throughout the year (in particular during the breeding season), American alligators bellow to declare territory and locate suitable mates. Male American alligators use infrasound to attract females. Eggs are laid in a nest of vegetation, sticks, leaves, and mud in a sheltered spot in or near the water. Young are born with yellow bands around their bodies and are protected by their mother for up to one year. This species displays parental care, which is rare among reptiles. Mothers protect their eggs during the incubation period and move the hatchlings to the water using their mouths.
The conservation status of the American alligator is listed as Least Concern by the International Union for Conservation of Nature. Historically, hunting had decimated their population, and the American alligator was listed as an endangered species by the Endangered Species Act of 1973. Subsequent conservation efforts have allowed their numbers to increase and the species was removed from endangered status in 1987. The species is the official state reptile of three states: Florida, Louisiana, and Mississippi.
History and taxonomy
The American alligator was first classified in 1801 by French zoologist François Marie Daudin as Crocodilus mississipiensis. In 1807, Georges Cuvier created the genus Alligator for it, based on the English common name alligator (derived from Spanish word , "the lizard").
The American alligator and its closest living relative, the Chinese alligator, belong to the subfamily Alligatorinae. Alligatorinae is the sister group to the caimans of Caimaninae, which together comprise the family Alligatoridae, as shown in the cladogram below:
Evolution
Fossils identical to the existing American alligator are found throughout the Pleistocene, from 2.5 million to 11.7 thousand years ago. In 2016, a Late Miocene fossil skull of an alligator, dating to approximately seven or eight million years ago, was discovered in Marion County, Florida. Unlike the other extinct alligator species of the same genus, the fossil skull was virtually indistinguishable from that of the modern American alligator. This alligator and the American alligator are now considered to be sister taxa, suggesting that the A. mississippiensis lineage has existed in North America for seven to eight million years.
The alligator's full mitochondrial genome was sequenced in the 1990s, and it suggests the animal evolved at a rate similar to mammals and greater than birds and most cold-blooded vertebrates. However, the full genome, published in 2014, suggests that the alligator evolved much more slowly than mammals and birds.
Characteristics
Individual American alligators range from long and slender to short and robust in build, possibly in response to variations in factors such as growth rate, diet, and climate.
Size
The American alligator is a relatively large species of crocodilian. On average, it is the largest species in the family Alligatoridae, with only the black caiman being possibly larger. Weight varies considerably depending on length, age, health, season, and available food sources. Similar to many other reptiles that range expansively into temperate zones, American alligators from the northern end of their range, such as southern Arkansas, Alabama, and northern North Carolina, tend to reach smaller sizes. Large adult American alligators tend to be relatively robust and bulky compared to other similar-length crocodilians; for example, captive males measuring were found to weigh , although captive specimens may outweigh wild specimens due to lack of hunting behavior and other stressors.
Large male American alligators reach an expected maximum size up to in length and weigh up to , while females reach an expected maximum of . However, the largest free-ranging female had a total length of and weighed . On rare occasions, a large, old male may grow to an even greater length.
Largest
During the 19th and 20th centuries, larger males reaching were reported. The largest reported individual was a male killed in 1890 by Edward McIlhenny on Marsh Island, Louisiana, reportedly measured at in length, but no voucher specimen was available, since the American alligator was left on a muddy bank after being measured, as it was too massive to relocate. If the size of this animal was correct, it would have weighed about . In Arkansas, a man killed an American alligator that was and . The largest American alligator ever killed in Florida was , as reported by the Everglades National Park, although this record is unverified. The largest American alligator scientifically verified in Florida for the period from 1977 to 1993 was reportedly and weighed , although another specimen (size estimated from the skull) may have measured . A specimen that was long and weighed is the largest American alligator killed in Alabama and was declared the SCI world record in 2014.
Reported sizes
Average
American alligators do not normally reach such extreme sizes. In mature males, most specimens grow up to about in length, and weigh up to , while in females, the mature size is normally around , with a body weight up to . In Newnans Lake, Florida, adult males averaged in weight and in length, while adult females averaged and measured . In Lake Griffin State Park, Florida, adults weighed on average . Weight at sexual maturity per one study was stated as averaging while adult weight was claimed as .
Relation to age
There is a common belief stated throughout reptilian literature that crocodilians, including the American alligator, exhibit indeterminate growth, meaning the animal continues to grow for the duration of its life. However, these claims are largely based on assumptions and observations of juvenile and young adult crocodilians, and recent studies are beginning to contradict this claim. For example, one long-term mark-recapture study (1979–2015) done at the Tom Yawkey Wildlife Center in South Carolina found evidence to support patterns of determinate growth, with growth ceasing upon reaching a certain age (43 years for males and 31 years for females).
Sexual dimorphism
While noticeable in very mature specimens, the sexual dimorphism in size of the American alligator is relatively modest among crocodilians. For contrast, the sexual dimorphism of saltwater crocodiles is much more extreme, with mature males nearly twice as long as and at least four times as heavy as female saltwater crocodiles. Given that female American alligators have relatively higher survival rates at an early age and a large percentage of given populations consists of immature or young breeding American alligators, relatively few large mature males of the expected mature length of or more are typically seen.
Color
Dorsally, adult American alligators may be olive, brown, gray, or black. However, they are on average one of the most darkly colored modern crocodilians (although other alligatorid family members are also fairly dark), and can reliably be distinguished by color via their more blackish dorsal scales against crocodiles. Meanwhile, their undersides are cream-colored. Some American alligators are missing or have an inhibited gene for melanin, which makes them albino. These American alligators are extremely rare and almost impossible to find in the wild. They could only survive in captivity, as they are very vulnerable to the sun and predators.
Jaws, teeth, and snout
American alligators have 74–80 teeth. As they grow and develop, the morphology of their teeth and jaws change significantly. Juveniles have small, needle-like teeth that become much more robust and narrow snouts that become broader as the individuals develop. These morphological changes correspond to shifts in the American alligators' diets, from smaller prey items such as fish and insects to larger prey items such as turtles, birds, and other large vertebrates. American alligators have broad snouts, especially in captive individuals. When the jaws are closed, the edges of the upper jaws cover the lower teeth, which fit into the jaws' hollows. Like the spectacled caiman, this species has a bony nasal ridge, though it is less prominent. American alligators are often mistaken for a similar animal: the American crocodile. An easy characteristic to distinguish the two is the fourth tooth. Whenever an American alligator's mouth is closed, the fourth tooth is no longer visible. It is enclosed in a pocket in the upper jaw.
Bite
Adult American alligators held the record as having the strongest laboratory-measured bite of any living animal, measured at up to . This experiment had not been, at the time of the paper published, replicated in any other crocodilians, and the same laboratory was able to measure a greater bite force of in saltwater crocodiles; notwithstanding this very high biting force, the muscles opening the American alligator's jaw are quite weak, and the jaws can be held closed by hand or tape when an American alligator is captured. No significant difference is noted between the bite forces of male and female American alligators of equal size. Another study noted that as the American alligator increases in size, the force of its bite also increases.
Movement
When on land, an American alligator moves either by sprawling or walking, the latter involving the reptile lifting its belly off the ground. Unlike the sprawling gait of salamanders and lizards, the sprawl of American alligators and other crocodilians more closely resembles walking. The two forms of land locomotion are therefore termed the "low walk" and the "high walk". Unlike most other land vertebrates, American alligators increase their speed through the distal rather than proximal ends of their limbs.
In the water, American alligators swim like fish, moving their pelvic regions and tails from side to side. Swimming is assisted by webbed rear feet as well, which bear four toes in contrast to the five toes of the front feet. During respiration, air flow is unidirectional, looping through the lungs during inhalation and exhalation; the American alligator's abdominal muscles can alter the position of the lungs within the torso, thus shifting the center of buoyancy, which allows the American alligator to dive, rise, and roll within the water.
Distribution
American alligators, being native both to the Nearctic and Neotropical realms, are found in the wild in the Southeastern United States, from the Lowcountry in South Carolina, south to Everglades National Park in Florida, and west to the southeastern region of Texas. They are found in parts of North Carolina, South Carolina, Georgia, Florida, Louisiana, Alabama, Mississippi, Arkansas, Oklahoma and Texas. Some of these locations appear to be relatively recent introductions, with often small but reproductive populations. Louisiana has the largest American alligator population of any U.S. state. In the future, possible American alligator populations may be found in areas of Mexico adjacent to the Texas border. The range of the American alligator is slowly expanding northwards, including into areas where it was once found, such as Virginia. American alligators have been naturally expanding their range into Tennessee, and have established a small population in the southwestern part of that state via inland waterways, according to the state's wildlife agency. They have been extirpated from Virginia, and occasional vagrants from North Carolina wander into the Great Dismal Swamp.
In 2021, an individual was found in Calvert County, Maryland, near Chesapeake Bay, where it was shot and killed by a hunter using a crossbow. Additional reports of American alligators from this region exist, though they are believed to be escaped or released exotic pets.
Conservation status
American alligators are currently listed as least concern by the IUCN Red List, even though from the 1800s to the mid-1900s they were hunted and poached unsustainably.
Historically, hunting and habitat loss have severely affected American alligator populations throughout their range, and whether the species would survive was in doubt. In 1967, the American alligator was listed as an endangered species (under a law that was the precursor to the Endangered Species Act of 1973), since it was believed to be in danger of extinction throughout all or a significant portion of its range.
Both the United States Fish and Wildlife Service (USFWS) and state wildlife agencies in the South contributed to the American alligator's recovery. Protection under the Endangered Species Act allowed the species to recuperate in many areas where it had been depleted. States began monitoring their American alligator populations to ensure that they would continue to grow. In 1987, the USFWS removed the animal from the endangered species list, as it was considered to be fully recovered. The USFWS still regulates the legal trade in American alligators and their products to protect still endangered crocodilians that may be passed off as American alligators during trafficking.
American alligators are listed under Appendix II of the Convention on International Trade in Endangered Species (CITES) meaning that international trade in the species (including parts and derivatives) is regulated.
Habitat
They inhabit swamps, streams, rivers, ponds, and lakes as well as wetland prairies interspersed with shallow open water and canals with associated levees. A lone American alligator was spotted for over 10 years living in a river north of Atlanta, Georgia. Females and juveniles are also found in Carolina Bays and other seasonal wetlands. While they prefer fresh water, American alligators may sometimes wander into brackish water, but are less tolerant of salt water than American crocodiles, as the salt glands on their tongues do not function. One study of American alligators in north-central Florida found the males preferred open lake water during the spring, while females used both swampy and open-water areas. During summer, males still preferred open water, while females remained in the swamps to construct their nests and lay their eggs. Both sexes may den underneath banks or clumps of trees during the winter.
In some areas of their range, American alligators are an unusual example of urban wildlife; golf courses are often favored by the species due to an abundance of water and a frequent supply of prey animals such as fish and birds.
Cold tolerance
American alligators are less vulnerable to cold than American crocodiles. Unlike an American crocodile, which would immediately succumb to the cold and drown in water at or less, an American alligator can survive in such temperatures for some time without displaying any signs of discomfort. This adaptiveness is thought to be why American alligators are widespread further north than the American crocodile. In fact, the American alligator is found farther from the equator and is more equipped to handle cooler conditions than any other crocodilian. When the water begins to freeze, American alligators go into a period of brumation; they stick their snouts through the surface, which allows them to breathe above the ice, and they can remain in this state for several days.
Ecology and behavior
Basking
American alligators primarily bask on shore, but also climb into and perch on tree limbs to bask if no shoreline is available. This is not often seen, since if disturbed, they quickly retreat back into the water by jumping from their perch.
Holes
American alligators modify wetland habitats, most dramatically in flat areas such as the Everglades, by constructing small ponds known as alligator holes. This behavior has qualified the American alligator to be considered a keystone species. Alligator holes retain water during the dry season and provide a refuge for aquatic organisms, which survive the dry season by seeking refuge there and so serve as a source of future populations. The construction of nests along the periphery of alligator holes, as well as a buildup of soils during the excavation process, provides drier areas for other reptiles to nest and a place for plants that are intolerant of inundation to colonize. Alligator holes are an oasis during the Everglades dry season, and are consequently important foraging sites for other organisms. In the limestone depressions of cypress swamps, alligator holes tend to be large and deep, while those in marl prairies and rocky glades are usually small and shallow, and those in peat depressions of ridge and slough wetlands are more variable.
Feeding
Bite and mastication
The teeth of the American alligator are designed to grip prey, but cannot rip or chew flesh like the teeth of some other predators (such as canids and felids); the alligator instead depends on its gizzard to masticate its food. The attainment of adulthood enables the consumption of large mammals and the crushing of large turtles. The American alligator is capable of biting through a turtle's shell or a moderately sized mammal bone.
Possible tool use
American alligators have been documented using lures to hunt prey such as birds. This means they are among the first reptiles recorded to use tools. By balancing sticks and branches on their heads, American alligators are able to lure birds searching for suitable nesting material, which they then kill and consume. This strategy, which is shared by the mugger crocodile, is particularly effective during the nesting season, when birds are more likely to gather appropriate nesting materials. It has been documented in two Florida zoos occurring multiple times a day in peak nesting season and in some parks in Louisiana. The use of tools was documented primarily during the peak rookery season, when birds were primarily looking for sticks.
However, a three-day experiment to reproduce the use of sticks as lures, published in 2019, failed to document the behavior. Researchers placed sticks at densities of 30 to 35 sticks per meter squared near four captive populations, two near rookeries and two at no-rookery sites. While stick-displaying behavior was observed several times, it was not more frequent near rookeries. In fact, in some comparisons, it was associated with no-rookery sites. This implies American alligators do not tailor this behavior to specific contexts, leaving the purpose, if any, of stick-displaying ambiguous.
Aquatic vs terrestrial prey
Fish and other aquatic prey taken in the water or at the water's edge form the major part of American alligator's diet and may be eaten at any time of the day or night. Adult American alligators also spend considerable time hunting on land, up to from water, ambushing terrestrial animals on trailsides and road shoulders. Usually, terrestrial hunting occurs on nights with warm temperatures. When hunting terrestrial prey, American alligators may also ambush them from the edge of the water by grabbing them and pulling the prey into the water, the preferred method of predation of larger crocodiles.
Additionally, American alligators have recently been filmed and documented killing and eating sharks and rays; four incidents documented indicated that bonnetheads, lemon sharks, Atlantic stingrays, and nurse sharks are components of the animal's diet. Sharks are also known to prey on American alligators, in turn, indicating that encounters between the two predators are common.
Common prey
American alligators are considered an apex predator throughout their range. They are opportunists and their diet is determined largely by both their size and age and the size and availability of prey. Most American alligators eat a wide variety of animals, including invertebrates, fish, birds, turtles, snakes, amphibians, and mammals. Hatchlings mostly feed on invertebrates such as insects, insect larvae, snails, spiders, and worms, as well as small fish and frogs. As they grow, American alligators gradually expand to larger prey. Once an American alligator reaches full size and power in adulthood, any animal living in the water or coming to the water to drink is potential prey. Most animals captured by American alligators are considerably smaller than the alligator itself. A few examples of animals consumed are largemouth bass, spotted gar, freshwater pearl mussels, American green tree frogs, yellow mud turtles, cottonmouths, common moorhens, and feral wild boars. Stomach contents show, among native mammals, muskrats and raccoons are some of the most commonly eaten species. In Louisiana, where introduced nutria are common, they are perhaps the most regular prey for adult American alligators, although only larger adults commonly eat this species. It has also been reported that large American alligators prey on medium-sized American alligators, which had preyed on hatchlings and smaller juveniles.
If an American alligator's primary food resource is not available, it will sometimes feed on carrion and non-prey items such as rocks and artificial objects, like bottle caps. These items help the American alligator in the process of digestion by crushing up the meat and bones of animals, especially animals with shells.
Large animals
Other animals may occasionally be eaten, even large deer or feral wild boars, but these are not normally part of the diet; American alligators usually prey on large mammals only when fish and smaller prey become scarce. Rarely, American alligators have been observed killing and eating bobcats, but such events are uncommon and have little effect on bobcat populations. Although American alligators have been listed as predators of nilgai and West Indian manatees, very little evidence of such predation exists. In the 2000s, after invasive Burmese pythons first occupied the Everglades, American alligators were recorded preying on sizable snakes, possibly controlling populations and preventing the invasive species from spreading northwards. However, the python is also known to occasionally prey on alligators, a form of both competition and predation. American alligator predation on Florida panthers is rare, but has been documented; such incidents usually involve a panther trying to cross a waterway or coming down to a swamp or river to drink. American alligator predation on American black bears has also been recorded.
Domestic animals
Occasionally, domestic animals, including dogs, cats, and calves, are taken as available, but are secondary to wild and feral prey. Other prey, including snakes, lizards, and various invertebrates, are eaten occasionally by adults.
Birds
Water birds, such as herons, egrets, storks, waterfowl and large dabbling rails such as gallinules or coots, are taken when possible. Occasionally, unwary adult birds are grabbed and eaten by American alligators, but most predation on bird species occurs with unsteady fledgling birds in late summer, as fledgling birds attempt to make their first flights near the water's edge.
Fruit
In 2013, American alligators and other crocodilians were reported to also eat fruit. Such behavior has been witnessed, as well as documented from stomach contents, with the American alligators eating such fruit as wild grapes, elderberries, and citrus fruits directly from the trees. Thirty-four families and 46 genera of plants were represented among seeds and fruits found in the stomach contents of American alligators. The discovery of this unexpected part of the American alligator diet further reveals that they may be responsible for spreading seeds from the fruit they consume across their habitat.
Cooperative hunting
Additionally, American alligators engage in what appears to be cooperative hunting. In one observed instance, some American alligators drove fish forward while others caught them, and individuals were seen taking turns in each role. In another, about 60 American alligators gathered in one area; roughly half of them formed a semicircle and pushed the fish closer to the bank. Once one of the American alligators caught a fish, another would move into its spot while the first carried the fish to a resting area. This behavior was reported to have occurred on two consecutive days.
In Florida and East Texas
The diet of adult American alligators from central Florida lakes is dominated by fish, but the species is highly opportunistic based upon local availability. In Lake Griffin, fish made up 54% of the diet by weight, with catfish being most commonly consumed, while in Lake Apopka, fish made up 90% of the food and mostly shad were taken; in Lake Woodruff, the diet was 84% fish and largely consisted of bass and sunfish. Unusually in these regions, reptiles and amphibians, mostly turtles and water snakes, were the most important non-fish prey. In southern Louisiana, crustaceans (largely crawfish and crabs) were found to be present in southeastern American alligators but largely absent in southwestern American alligators, which consumed a relatively high proportion of reptiles; fish were nonetheless the most recorded prey for adults, and adult males consumed a large portion of mammals.
In East Texas, diets were diverse and adult American alligators took mammals, reptiles, amphibians, and invertebrates (e.g. snails) in often equal measure as they did fish.
Vocalizations
Mechanism
An American alligator is able to abduct and adduct the vocal folds of its larynx, but not to elongate or shorten them; in spite of this, it can modulate fundamental frequency very well. Its vocal folds consist of epithelium, lamina propria, and muscle. Recorded sounds range from 50 to 1200 Hz. In one experiment conducted on the larynx, the fundamental frequency depended on both the glottal gap and the stiffness of the larynx tissues; higher frequencies were associated with greater tension and larger strains. The fundamental frequency is influenced by glottal gap size and subglottal pressure, and once the phonation threshold pressure is exceeded, the vocal folds begin to vibrate.
Calls
Crocodilians are the most vocal of all non-avian reptiles and have a variety of different calls depending on the age, size, and sex of the animal. The American alligator can perform specific vocalizations to declare territory, signal distress, threaten competitors, and locate suitable mates. Juveniles can perform a high-pitched hatchling call (a "yelping" trait common to many crocodilian species' hatchling young) to alert their mothers when they are ready to emerge from the nest. Juveniles also make a distress call to alert their mothers if they are being threatened. Adult American alligators can growl, hiss, or cough to threaten others and declare territory.
Bellowing
Both males and females bellow loudly by sucking air into their lungs and blowing it out in intermittent, deep-toned roars to attract mates and declare territory. Males are known to use infrasound during mating bellows. Their bellowing initiates the beginning of the courtship period for American alligators. Bellowing is performed in a "head oblique, tail arched" posture. Infrasonic waves from a bellowing male can cause the surface of the water directly over and to either side of his back to literally "sprinkle", in what is commonly called the "water dance". Large bellowing "choruses" of American alligators during the breeding season are commonly initiated by females and perpetuated by males. Observers of large bellowing choruses have noted they are often felt more than they are heard due to the intense infrasound emitted by males. American alligators bellow in B flat (specifically "B♭1", defined as an audio frequency of 58.27 Hz), and bellowing choruses can be induced by tuba players, sonic booms, and large aircraft.
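The cited pitch can be checked against the standard equal-temperament formula, f = 440 × 2^((n − 69)/12), where n is the MIDI note number (B♭1 is note 34). This is a general music-theory identity, not anything specific to the source; a minimal sketch:

```python
# Equal-temperament pitch-to-frequency conversion (A4 = MIDI note 69 = 440 Hz).
def midi_to_hz(note: int, a4: float = 440.0) -> float:
    return a4 * 2 ** ((note - 69) / 12)

# B-flat 1 is MIDI note 34; the result matches the 58.27 Hz cited above.
print(round(midi_to_hz(34), 2))  # 58.27
```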
Lifespan
American alligators typically live to the age of 50, and possibly over 70 years old. Males reach sexual maturity at around 11.6 years, and females at around 15.8 years. Although it was originally thought that American alligators never stop growing, studies have now found that males stop growing at around the age of 43 years, and females stop growing at around the age of 31 years.
Reproduction
Breeding season
The breeding season begins in the spring. On spring nights, American alligators gather in large numbers for group courtship, in the aforementioned "water dances". A study conducted in the 1980s at an alligator farm showed that homosexual courtship is common, with two-thirds of the recorded instances of sexual behaviour having been between two males. The female builds a nest of vegetation, sticks, leaves, and mud in a sheltered spot in or near the water.
Eggs
After the female lays her 20 to 50 white eggs, each about the size of a goose egg, she covers them with more vegetation, which heats as it decays, helping to keep the eggs warm. This differs from Nile crocodiles, which lay their eggs in pits. The temperature at which American alligator eggs develop determines their sex (see temperature-dependent sex determination): eggs incubated below one threshold temperature or above another produce female offspring, while those incubated at intermediate temperatures produce male offspring. The nests built on levees are warmer and thus produce males, while the cooler nests of wet marsh produce females. The female remains near the nest throughout the 65-day incubation period, protecting it from intruders. When the young begin to hatch (their "yelping" calls can sometimes be heard just before hatching commences), the mother quickly digs them out and carries them to the water in her mouth, as some other crocodilian species are known to do.
Young
The young are tiny replicas of adults, with a series of yellow bands around their bodies that serve as camouflage. Hatchlings gather into pods, are guarded by their mother, and keep in contact with her through their "yelping" vocalizations. Young American alligators eat small fish, frogs, crayfish, and insects. They are preyed on by large fish, birds, raccoons, Florida panthers, and adult American alligators. Mother American alligators eventually become more aggressive towards their young, which encourages them to disperse. Young American alligators grow steadily each year until they reach adult size.
Parasites
American alligators are commonly infected with parasites. In a 2016 Texas study, 100% of the specimens collected were infected, with at least 20 different species of parasites represented, including lung pentastomids, gastric nematodes, and intestinal helminths. Compared with American alligators from other states, there was no significant difference in prevalence.
Interactions with exotic species
Nutria were introduced into coastal marshes from South America in the mid-20th century, and their population has since exploded into the millions. They cause serious damage to coastal marshes and may dig burrows in levees. Hence, Louisiana has maintained a bounty program to try to reduce nutria numbers. Large American alligators feed heavily on nutria, so American alligators may not only control nutria populations in Louisiana, but also prevent them spreading east into the Everglades. Since hunting and trapping preferentially take the large American alligators that are the most important in eating nutria, some changes in harvesting may be needed to capitalize on their ability to control nutria.
Recently, a population of Burmese pythons became established in Everglades National Park. Substantial American alligator populations in the Everglades might, as competitors, be a contributing factor in keeping python populations low and preventing the species' spread north. While events of predation by Burmese pythons on sizable American alligators have been observed, no evidence of a net negative effect has been seen on overall American alligator populations.
Indicators of environmental restoration
American alligators play an important role in the restoration of the Everglades as biological indicators of restoration success. American alligators are highly sensitive to changes in the hydrology, salinity, and productivity of their ecosystems; all are factors that are expected to change with Everglades restoration. American alligators also may control the long-term vegetation dynamics in wetlands by reducing the population of small mammals, particularly nutria, which may otherwise overgraze marsh vegetation. In this way, the vital ecological service they provide may be important in reducing rates of coastal wetland losses in Louisiana. They may provide a protection service for water birds nesting on islands in freshwater wetlands. American alligators prevent predatory mammals from reaching island-based rookeries and in return eat spilled food and birds that fall from their nests. Wading birds appear to be attracted to areas with American alligators and have been known to nest at heavily trafficked tourist attractions with large numbers of American alligators, such as the St. Augustine Alligator Farm in St. Augustine, Florida.
Relationship with humans
Attacks on humans
American alligators are capable of killing humans, but fatal attacks are rare. Mistaken identity leading to an attack is always possible, especially in or near cloudy waters. American alligators are often less aggressive towards humans than larger crocodile species, a few of which (mainly the Nile and saltwater crocodiles) may prey on humans with some regularity. Alligator bites are serious injuries, due to the reptile's sheer bite force and risk of infection. Even with medical treatment, an American alligator bite may still result in a fatal infection.
As human populations increase, and as people build houses in low-lying areas or fish or hunt near water, incidents where humans intrude on American alligators and their habitats are inevitable. Since 1948, 257 documented attacks on humans in Florida (about five incidents per year) have been reported, of which an estimated 23 resulted in death. Only nine fatal attacks occurred in the United States throughout the 1970s–1990s, but American alligators killed 12 people between 2001 and 2007. A separate report recorded a total of 376 injuries and 15 deaths from 1948 to 2004, an increase attributed in part to the growth of the alligator population. In May 2006, American alligators killed three Floridians in less than a week. At least 28 fatal attacks by American alligators have occurred in the United States since 1970.
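As a rough consistency check on the quoted rate of "about five incidents per year" (assuming, as a simplification, that the 257 attacks span roughly the same 1948–2004 window cited for the injury totals):

```python
attacks = 257        # documented attacks on humans in Florida since 1948
years = 2004 - 1948  # assumed span, borrowed from the 1948-2004 figures above

rate = attacks / years
print(round(rate, 1))  # 4.6 attacks per year, i.e. roughly five
```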
Wrestling
Since the late 1880s, alligator wrestling has been a source of entertainment for some. Created by the Miccosukee and Seminole tribes before it became a tourist attraction, the tradition remains popular despite criticism from animal-rights activists.
Farming
Today, alligator farming is a large, growing industry in Georgia, Florida, Texas, and Louisiana. These states produce a combined annual total of some 45,000 alligator hides. Alligator hides bring good prices, and hides in the 6- to 7-ft range have sold for $300 each. The market for alligator meat is also growing, with a substantial quantity of meat produced annually. According to the Florida Department of Agriculture and Consumer Services, raw alligator meat contains roughly 200 Calories (840 kJ) per 3-oz (85-g) portion, of which 27 Calories (130 kJ) come from fat.
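The nutrition figures above are internally consistent; a quick sketch of the unit conversions, using the standard 4.184 kJ per kilocalorie and the common Atwater factor of about 9 kcal per gram of fat (both conversion factors are general nutrition conventions, not values from the source):

```python
KJ_PER_KCAL = 4.184   # thermochemical kilocalorie to kilojoule
KCAL_PER_G_FAT = 9    # standard Atwater factor for fat

total_kcal = 200      # Calories per 3-oz portion, as cited
fat_kcal = 27         # Calories from fat, as cited

print(round(total_kcal * KJ_PER_KCAL))      # 837 kJ, matching the ~840 kJ cited
print(round(fat_kcal / KCAL_PER_G_FAT, 1))  # 3.0 g of fat per portion
```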
Culture and film
The American alligator is the official state reptile of Florida, Louisiana, and Mississippi. Several organizations and products from Florida have been named after the animal.
"Gators" has been the nickname of the University of Florida's sports teams since 1911. In 1908, a printer made a spur-of-the-moment decision to print an alligator emblem on a shipment of the school's football pennants. The mascot stuck, and was made official in 1911, perhaps because the team captain's nickname was Gator. Allegheny College and San Francisco State University both have Gators as their mascots, as well.
The Gator Bowl is a college football game held in Jacksonville annually since 1946, with Gator Bowl Stadium hosting the event until the 1993 edition. The Gatornationals is a NHRA drag race held at the Gainesville Raceway in Gainesville since 1970.
Egg

An egg is an organic vessel grown by an animal to carry a possibly fertilized egg cell (a zygote) and to incubate from it an embryo within the egg until the embryo has become an animal fetus that can survive on its own, at which point the animal hatches.
Most arthropods, vertebrates (excluding live-bearing mammals), and mollusks lay eggs, although some, such as scorpions, do not.
Reptile eggs, bird eggs, and monotreme eggs are laid out of water and are surrounded by a protective shell, either flexible or inflexible. Eggs laid on land or in nests are usually kept within a warm and favorable temperature range while the embryo grows. When the embryo is adequately developed it hatches, i.e., breaks out of the egg's shell. Some embryos have a temporary egg tooth they use to crack, pip, or break the eggshell or covering.
The largest recorded egg was laid by a whale shark. Whale shark eggs typically hatch within the mother. The ostrich egg is the largest egg of any living bird, though the extinct elephant bird and some non-avian dinosaurs laid larger eggs. The bee hummingbird produces the smallest known bird egg, which weighs about half of a gram (around 0.02 oz). Some eggs laid by reptiles and most fish, amphibians, insects, and other invertebrates can be even smaller.
Reproductive structures similar to the egg in other kingdoms are termed "spores", or in spermatophytes "seeds", or in gametophytes "egg cells".
Eggs of different animal groups
Several major groups of animals typically have readily distinguishable eggs.
Fish and amphibian eggs
The most common reproductive strategy for fish is known as oviparity, in which the female lays undeveloped eggs that are externally fertilized by a male. Typically large numbers of eggs are laid at one time (an adult female cod can produce 4–6 million eggs in one spawning) and the eggs are then left to develop without parental care. When the larvae hatch from the egg, they often carry the remains of the yolk in a yolk sac which continues to nourish the larvae for a few days as they learn how to swim. Once the yolk is consumed, there is a critical point after which they must learn how to hunt and feed or they will die.
A few fish, notably the rays and most sharks, use ovoviviparity, in which the eggs are fertilized and develop internally. However, the larvae still grow inside the egg, consuming the egg's yolk without any direct nourishment from the mother. The mother then gives birth to relatively mature young. In certain instances, the physically most developed offspring will devour its smaller siblings for further nutrition while still within the mother's body; this is known as intrauterine cannibalism.
In certain scenarios, some fish such as the hammerhead shark and reef shark are viviparous, with the egg being fertilized and developed internally, but with the mother also providing direct nourishment.
The eggs of fish and amphibians are jellylike. The eggs of cartilaginous fish (sharks, skates, rays, chimaeras) are fertilized internally and exhibit a wide variety of both internal and external embryonic development. Most fish species spawn eggs that are fertilized externally, typically with the male inseminating the eggs after the female lays them. These eggs do not have a shell and would dry out in the air. Even air-breathing amphibians lay their eggs in water, or in protective foam as with the coast foam-nest treefrog, Chiromantis xerampelina.
Bird eggs
Bird eggs are laid by females and incubated for a time that varies according to the species; a single young hatches from each egg. Average clutch sizes range from one (as in condors) to about 17 (the grey partridge). Some birds lay eggs even when not fertilized (e.g. hens); it is not uncommon for pet owners to find their lone bird nesting on a clutch of unfertilized eggs, which are sometimes called wind-eggs.
Colours
The default colour of vertebrate eggs is the white of the calcium carbonate from which the shells are made, but some birds, mainly passerines, produce coloured eggs. The colour comes from pigments deposited on top of the calcium carbonate base; biliverdin and its zinc chelate, and bilirubin, give a green or blue ground colour, while protoporphyrin IX produces reds and browns as a ground colour or as spotting.
Non-passerines typically have white eggs, except in some ground-nesting groups such as the Charadriiformes, sandgrouse and nightjars, where camouflage is necessary, and some parasitic cuckoos which have to match the passerine host's egg. Most passerines, in contrast, lay coloured eggs, even if there is no need for cryptic colouration. However, some have suggested that the protoporphyrin markings on passerine eggs actually act to reduce brittleness by acting as a solid-state lubricant. If there is insufficient calcium available in the local soil, the egg shell may be thin, especially in a circle around the broad end. Protoporphyrin speckling compensates for this, and increases inversely to the amount of calcium in the soil.
For the same reason, later eggs in a clutch are more spotted than early ones as the female's store of calcium is depleted.
The color of individual eggs is also genetically influenced, and appears to be inherited through the mother only, suggesting that the gene responsible for pigmentation is on the sex-determining W chromosome (female birds are WZ, males ZZ).
It used to be thought that color was applied to the shell immediately before laying, but subsequent research shows that coloration is an integral part of the development of the shell, with the same protein responsible for depositing calcium carbonate, or protoporphyrins when there is a lack of that mineral.
In species such as the common guillemot, which nest in large groups, each female's eggs have very different markings, making it easier for females to identify their own eggs on the crowded cliff ledges on which they breed.
Yolks of birds' eggs are yellow due to carotenoids; their colour is affected by the birds' living conditions and diet.
Shell
Bird eggshells are diverse. For example:
cormorant eggs are rough and chalky
tinamou eggs are shiny
duck eggs are oily and waterproof
cassowary eggs are heavily pitted
Tiny pores in bird eggshells allow the embryo to breathe. The domestic hen's egg has around 7000 pores.
Some bird eggshells have a coating of vaterite spherules, which is a rare polymorph of calcium carbonate. In the greater ani Crotophaga major, this vaterite coating is thought to act as a shock absorber, protecting the calcite shell from fracture during incubation, for example when colliding with other eggs in the nest.
Shape
Most bird eggs have an oval shape, with one end rounded and the other more pointed. This shape results from the egg being forced through the oviduct: muscles contract the oviduct behind the egg, pushing it forward, and because the egg's wall is still shapeable, the pointed end develops at the back. One hypothesis is that long, pointy eggs are an incidental consequence of the streamlined body typical of birds with strong flying abilities; flight narrows the oviduct, which changes the type of egg a bird can lay.
Cliff-nesting birds often have highly conical eggs. They are less likely to roll off, tending instead to roll around in a tight circle; this trait likely arose via natural selection. In contrast, many hole-nesting birds have nearly spherical eggs.
Predation
Many animals feed on eggs. For example, principal predators of the black oystercatcher's eggs include raccoons, skunks, mink, river and sea otters, gulls, crows and foxes. The stoat (Mustela erminea) and long-tailed weasel (M. frenata) steal ducks' eggs. Snakes of the genera Dasypeltis and Elachistodon specialize in eating eggs.
Brood parasitism occurs in birds when one species lays its eggs in the nest of another. In some cases, the host's eggs are removed or eaten by the female, or expelled by her chick. Brood parasites include the cowbirds and many Old World cuckoos.
Amniote eggs and embryos
Like amphibians, amniotes are air-breathing vertebrates, but they have complex eggs or embryos, including an amniotic membrane. Amniotes include reptiles (including dinosaurs and their descendants, birds) and mammals.
Reptile eggs are often rubbery and are always initially white. They are able to survive in the air. Often the sex of the developing embryo is determined by the temperature of the surroundings, with cooler temperatures favouring males. Not all reptiles lay eggs; some are viviparous ("live birth").
Dinosaurs laid eggs, some of which have been preserved as petrified fossils.
Among mammals, early extinct species laid eggs, as do platypuses and echidnas (spiny anteaters). Platypuses and two genera of echidna are Australian monotremes. Marsupial and placental mammals do not lay eggs, but their unborn young do have the complex tissues that identify amniotes.
Mammalian eggs
The eggs of the egg-laying mammals (the platypus and the echidnas) are macrolecithal eggs very much like those of reptiles. The eggs of marsupials are likewise macrolecithal, but rather small, and develop inside the body of the female, but do not form a placenta. The young are born at a very early stage, and can be classified as a "larva" in the biological sense.
In placental mammals, the egg itself is void of yolk, but develops an umbilical cord from structures that in reptiles would form the yolk sac. Receiving nutrients from the mother, the fetus completes the development while inside the uterus.
Invertebrate eggs
Eggs are common among invertebrates, including insects, spiders, mollusks, and crustaceans.
Evolution and structure
All sexually reproducing life, including both plants and animals, produces gametes. The male gamete cell, sperm, is usually motile whereas the female gamete cell, the ovum, is generally larger and sessile. The male and female gametes combine to produce the zygote cell. In multicellular organisms, the zygote subsequently divides in an organised manner into smaller more specialised cells, so that this new individual develops into an embryo. In most animals, the embryo is the sessile initial stage of the individual life cycle, and is followed by the emergence (that is, the hatching) of a motile stage. The zygote or the ovum itself or the sessile organic vessel containing the developing embryo may be called the egg.
A recent proposal suggests that the phylotypic animal body plans originated in cell aggregates before the existence of an egg stage of development. Eggs, in this view, were later evolutionary innovations, selected for their role in ensuring genetic uniformity among the cells of incipient multicellular organisms.
Formation
Egg formation begins when the gamete ovum is released (ovulated). The finished egg is then oviposited (laid), after which incubation can begin.
Scientific classifications
Scientists often classify animal reproduction according to the degree of development that occurs before the new individuals are expelled from the adult body, and by the yolk which the egg provides to nourish the embryo.
Egg size and yolk
Vertebrate eggs can be classified by the relative amount of yolk. Simple eggs with little yolk are called microlecithal, medium-sized eggs with some yolk are called mesolecithal, and large eggs with a large concentrated yolk are called macrolecithal. This classification of eggs is based on the eggs of chordates, though the basic principle extends to the whole animal kingdom.
Microlecithal
Small eggs with little yolk are called microlecithal. The yolk is evenly distributed, so the cleavage of the egg cell cuts through and divides the egg into cells of fairly similar sizes. In sponges and cnidarians, the dividing eggs develop directly into a simple larva, rather like a morula with cilia. In cnidarians, this stage is called the planula, and either develops directly into the adult animals or forms new adult individuals through a process of budding.
Microlecithal eggs require minimal yolk mass. Such eggs are found in flatworms, roundworms, annelids, bivalves, echinoderms, the lancelet and in most marine arthropods. In anatomically simple animals, such as cnidarians and flatworms, the fetal development can be quite short, and even microlecithal eggs can undergo direct development. These small eggs can be produced in large numbers. In animals with high egg mortality, microlecithal eggs are the norm, as in bivalves and marine arthropods. However, the latter are more complex anatomically than e.g. flatworms, and the small microlecithal eggs do not allow full development. Instead, the eggs hatch into larvae, which may be markedly different from the adult animal.
In placental mammals, where the embryo is nourished by the mother throughout the whole fetal period, the egg is reduced in size to essentially a naked egg cell.
Mesolecithal
Mesolecithal eggs have comparatively more yolk than the microlecithal eggs. The yolk is concentrated in one part of the egg (the vegetal pole), with the cell nucleus and most of the cytoplasm in the other (the animal pole). The cell cleavage is uneven, and mainly concentrated in the cytoplasma-rich animal pole.
The larger yolk content of the mesolecithal eggs allows for a longer fetal development. Comparatively anatomically simple animals will be able to go through the full development and leave the egg in a form reminiscent of the adult animal. This is the situation found in hagfish and some snails. Animals with smaller size eggs or more advanced anatomy will still have a distinct larval stage, though the larva will be basically similar to the adult animal, as in lampreys, coelacanth and the salamanders.
Macrolecithal
Eggs with a large yolk are called macrolecithal. The eggs are usually few in number, and the embryos have enough food to go through full fetal development in most groups. Macrolecithal eggs are only found in selected representatives of two groups: Cephalopods and vertebrates.
Macrolecithal eggs go through a different type of development than other eggs. Due to the large size of the yolk, the cell division can not split up the yolk mass. The fetus instead develops as a plate-like structure on top of the yolk mass, and only envelopes it at a later stage. A portion of the yolk mass is still present as an external or semi-external yolk sac at hatching in many groups. This form of fetal development is common in bony fish, even though their eggs can be quite small. Despite their macrolecithal structure, the small size of the eggs does not allow for direct development, and the eggs hatch to a larval stage ("fry"). In terrestrial animals with macrolecithal eggs, the large volume to surface ratio necessitates structures to aid in transport of oxygen and carbon dioxide, and for storage of waste products so that the embryo does not suffocate or get poisoned from its own waste while inside the egg, see amniote.
In addition to bony fish and cephalopods, macrolecithal eggs are found in cartilaginous fish, reptiles, birds and monotreme mammals. The eggs of the coelacanths can reach a size of in diameter, and the young go through full development while in the uterus, living on the copious yolk.
Egg-laying reproduction
Animals are commonly classified by their manner of reproduction, at the most general level distinguishing egg-laying (Latin oviparous) from live-bearing (Latin viviparous).
These classifications are divided into more detail according to the development that occurs before the offspring are expelled from the adult's body. Traditionally:
Ovuliparity means the female spawns unfertilized eggs (ova), which must then be externally fertilised. Ovuliparity is typical of bony fish, anurans, echinoderms, bivalves and cnidarians. Most aquatic organisms are ovuliparous. The term is derived from the diminutive meaning "little egg".
Oviparity is where fertilisation occurs internally and so the eggs laid by the female are zygotes (or newly developing embryos), often with important outer tissues added (for example, in a chicken egg, no part outside of the yolk originates with the zygote). Oviparity is typical of birds, reptiles, some cartilaginous fish and most arthropods. Terrestrial organisms are typically oviparous, with egg-casings that resist evaporation of moisture.
Ovo-viviparity is where the zygote is retained in the adult's body but there are no trophic (feeding) interactions. That is, the embryo still obtains all of its nutrients from inside the egg. Most live-bearing fish, amphibians or reptiles are actually ovoviviparous. Examples include the reptile Anguis fragilis, the sea horse (where zygotes are retained in the male's ventral "marsupium"), and the frogs Rhinoderma darwinii (where the eggs develop in the vocal sac) and Rheobatrachus (where the eggs develop in the stomach).
Histotrophic viviparity means embryos develop in the female's oviducts but obtain nutrients by consuming other ova, zygotes or sibling embryos (oophagy or adelphophagy). This intra-uterine cannibalism occurs in some sharks and in the black salamander Salamandra atra. Marsupials secrete a "uterine milk" that supplements the nourishment from the yolk sac.
Hemotrophic viviparity is where nutrients are provided from the female's blood through a designated organ. This most commonly occurs through a placenta, found in most mammals. Similar structures are found in some sharks and in the lizard Pseudomoia pagenstecheri. In some hylid frogs, the embryo is fed by the mother through specialized gills.
The term hemotrophic derives from the Latin for blood-feeding, contrasted with histotrophic for tissue-feeding.
Human use
Food
Eggs laid by many different species, including birds, reptiles, amphibians, and fish, have probably been eaten by people for millennia. Popular choices for egg consumption are chicken, duck, roe, and caviar, but by a wide margin the egg most often consumed by humans is the chicken egg, typically unfertilized.
Eggs and Kashrut
According to kashrut, the set of Jewish dietary laws derived from halakha (Jewish law), eggs are considered pareve (neither meat nor dairy) despite being an animal product, and so may be mixed with either milk or kosher meat.
Vaccine manufacture
Many vaccines for infectious diseases are produced in fertile chicken eggs. The basis of this technology was the discovery in 1931 by Alice Miles Woodruff and Ernest William Goodpasture at Vanderbilt University that the rickettsia and viruses that cause a variety of diseases will grow in chicken embryos. This enabled the development of vaccines against influenza, chicken pox, smallpox, yellow fever, typhus, Rocky Mountain spotted fever and other diseases.
Culture
Eggs are an important symbol in folklore and mythology, often representing life and rebirth, healing and protection, and sometimes featuring in creation myths. Egg decoration is a common practice in many cultures worldwide. Christians view Easter eggs as symbolic of the resurrection of Jesus Christ. A popular Easter tradition in some parts of the world is the decoration of hard-boiled eggs (usually by dyeing, but often by hand-painting or spray-painting). Adults often hide the eggs for children to find, an activity known as an Easter egg hunt. A similar tradition of egg painting exists in areas of the world influenced by the culture of Persia. Before the spring equinox in the Persian New Year tradition (called Norouz), each family member decorates a hard-boiled egg and sets them together in a bowl. The tradition of a dancing egg is held during the feast of Corpus Christi in Catalan cities since the 16th century. It consists of an emptied egg, positioned over the water jet from a fountain, which starts turning without falling.
Although a food item, raw eggs are sometimes thrown at houses, cars, or people. This act, commonly known as "egging" in English-speaking countries, is a minor form of vandalism and therefore usually a criminal offense; it is capable of damaging property (egg whites can degrade certain types of vehicle paint) as well as potentially causing serious eye injury. On Halloween, for example, trick-or-treaters have been known to throw eggs (and sometimes flour) at property or people from whom they received nothing. Eggs are also often thrown in protests, as they are inexpensive and nonlethal, yet very messy when broken.
Collecting
Egg collecting was a popular hobby in some cultures, including European Australians. Traditionally, the embryo would be removed before a collector stored the egg shell.
Collecting eggs of wild birds is now banned by many jurisdictions, as the practice can threaten rare species. In the United Kingdom, the practice is prohibited by the Protection of Birds Act 1954 and Wildlife and Countryside Act 1981. However, illegal collection and trading persists.
Since the protection of wild bird eggs was regulated, early collections have come to museums as curiosities. For example, the Australian Museum hosts a collection of about 20,000 registered clutches of eggs, and the collection of the Western Australian Museum has been archived in a gallery. Scientists regard egg collections as a good source of natural-history data, as the details recorded in collectors' notes have helped them to understand birds' nesting behaviors.
| Biology and health sciences | Biology | null |
484942 | https://en.wikipedia.org/wiki/Gelada | Gelada | The gelada (Theropithecus gelada, , ), sometimes called the bleeding-heart monkey or the gelada baboon, is a species of Old World monkey found only in the Ethiopian Highlands, living at elevations of above sea level. It is the only living member of the genus Theropithecus, a name derived from the Greek root words for "beast-ape" (θηρο-πίθηκος : thēro-píthēkos). Like its close relatives in genus Papio, the baboons, it is largely terrestrial, spending much of its time foraging in grasslands, with grasses comprising up to 90% of its diet.
It has buff to dark brown hair with a dark face and pale eyelids. Adult males have longer hair on their backs and a conspicuous bright red patch of skin shaped like an hourglass on their chests. Females also have a bare patch of skin but it is less pronounced, except during estrus, when it brightens and exhibits a "necklace" of fluid-filled blisters. Males average 18.5 kg (40.8 lb) and females 11 kg (24.3 lb) in weight. The head-body length is 50–75 cm (19.7–29.5 in), with a tail of 30–50 cm (11.8–19.7 in).
The gelada has a complex multilevel social structure. Reproductive units and male units are the two basic groupings. A band comprises a mix of multiple reproductive units and male units; a community is made up of one to four bands. Within the reproductive units the females are commonly closely related. Males will move from their natal group to try to control a unit of their own and females within the unit can choose to support or oppose the new male. When more than one male is in the unit, only one can mate with the females. The gelada has a diverse repertoire of vocalizations thought to be near in complexity to that of humans.
The population of geladas is thought to have dropped from 440,000 in the 1970s to 200,000 in 2008. Despite the heavy loss, it is listed as least concern by the International Union for Conservation of Nature.
Taxonomy and evolution
Since 1979, the gelada is customarily placed in its own genus (Theropithecus), though some genetic research suggests that this monkey should be grouped with its baboon (genus Papio) kin; other researchers have classified the species even more distantly from Papio. While Theropithecus gelada is the only living species of its genus, separate, larger species are known from the fossil record: T. brumpti, T. darti and T. oswaldi, formerly classified under genus Simopithecus. Theropithecus, while restricted at present to Ethiopia, is also known from fossil specimens found in Africa and the Mediterranean into Asia, including South Africa, Malawi, the Democratic Republic of the Congo, Tanzania, Uganda, Kenya, Algeria, Morocco, Spain, and India (more exactly at Mirzapur, Cueva Victoria, Pirro Nord, Ternifine, Hadar, Turkana, Makapansgat, and Swartkrans).
The two subspecies of gelada are:
Northern gelada, T. g. gelada
Eastern gelada, southern gelada, or Heuglin's gelada, T. g. obscurus
Common name
The gelada has been referred to by other names, including the "gelada baboon", "bleeding-heart baboon", or simply "baboon", implying a monophyletic relationship with baboons, which historically included (apart from Theropithecus) the genera Papio (true baboons) and Mandrillus (mandrills and drills). Since the 1990s, however, molecular phylogenetic studies have clarified relationships among papionin monkeys, demonstrating that mangabeys of the genus Lophocebus are more closely related to Papio and Theropithecus, while mangabeys of the genus Cercocebus are more closely related to Mandrillus. These findings largely invalidated any scientifically based justification for referring to mandrills and drills as baboons, as doing so while excluding the unbaboon-like Lophocebus mangabeys would create a polyphyletic group. The status of geladas was less clear, and the relationships among Papio, Lophocebus, and Theropithecus continue to reflect high levels of uncertainty, which are further complicated by the discovery of the kipunji. Nevertheless, the most recent and extensive phylogenetic study to date demonstrates that, while large fractions of the genome show an alternative history, the dominant relationship across the genome supports a closer relationship between Papio and Lophocebus, with Theropithecus as the outgroup. As a close sister relationship between Papio and Theropithecus is the least-supported scenario in recent studies, "gelada baboon" and other names implying a close relationship with baboons are, with increasing clarity, not scientifically justified, leading researchers to advocate for the common name to be simply "gelada".
Description
The gelada is large and robust, and it is covered with buff to dark-brown, coarse hair and has a dark face with pale eyelids. Its arms and feet are nearly black. Its short tail ends in a tuft of hair. Adult males have a long, heavy cape of hair on their backs. The gelada has a hairless face with a short muzzle that looks more similar to a chimpanzee's than a baboon's. It can also be physically distinguished from a baboon by the bright patch of skin on its chest. This patch is hourglass-shaped. On males, it is bright red and surrounded by white hair; on females, it is far less pronounced, but when in estrus, the female's patch brightens, and a "necklace" of fluid-filled blisters forms on the patch. This is thought to be analogous to the swollen buttocks common to most baboons experiencing estrus. In addition, females have knobs of skin around their patches. Geladas also have well-developed ischial callosities. Sexual dimorphism is seen in this species; males average 18.5 kg (40.8 lb), while females are smaller, averaging 11 kg (24.3 lb). The head and body length of this species is 50–75 cm (19.7–29.5 in) for both sexes. Tail length is 30–50 cm (11.8–19.7 in).
The gelada has several adaptations for its terrestrial and graminivorous (grass-eating) lifestyle. It has small, sturdy fingers adapted for pulling grass and narrow, small incisors adapted for chewing it. The gelada has a unique gait, known as the shuffle gait, that it uses when feeding. It squats bipedally and moves by sliding its feet without changing its posture. Because of this gait, the gelada's rump is hidden beneath it and unavailable for display; its bright red chest patch, however, remains visible.
Range and ecology
Geladas are found only in the high grasslands of the deep gorges of the central Ethiopian plateau. They live at elevations above sea level, using the cliffs for sleeping and montane grasslands for foraging. These grasslands have widely spaced trees and also contain bushes and dense thickets. The highland areas where they live tend to be cooler and less arid than lowland areas. Thus, the geladas usually do not experience the negative effects that the dry season has on food availability. Nevertheless, in some areas, they do experience frost in the dry season, as well as hailstorms in the wet season.
Geladas are the only primates that are primarily graminivores and grazers – grass blades make up to 90% of their diet. They eat both the blades and the seeds of grasses. When both blades and seeds are available, geladas prefer the seeds. They eat flowers, rhizomes, and roots when available, using their hands to dig for the latter two. They consume herbs, small plants, fruits, creepers, bushes, and thistles. Insects can be eaten, but only rarely and only if they can easily be obtained. During the dry season, herbs are preferred over grasses. Geladas consume their food more like ungulates than primates, and they can chew their food as effectively as zebras.
Geladas are primarily diurnal. At night, they sleep on the ledges of cliffs. At sunrise, they leave the cliffs and travel to the tops of the plateaus to feed and socialize. When morning ends, social activities tend to wane and the geladas primarily focus on foraging. They travel during this time, as well. When evening arrives, they exhibit more social activities before descending to the cliffs to sleep. Predators observed to hunt geladas include domestic dogs, leopards, servals, hyenas, and lammergeiers.
Behavior
Social structure
Geladas live in a complex, multilevel society similar to that of the hamadryas baboon. The smallest and most basic groups are the reproductive units, which include up to 12 females, their young, and one to four males, and the all-male units, which are made up of 2–15 males. The next level of gelada societies are the bands, which are made up of two to 27 reproductive units and several all-male units. Herds consist of up to 60 reproductive units that are sometimes from different bands and last for short times. Communities are made of one to four bands whose home ranges overlap extensively. A gelada typically lives around 15 years.
Within the reproductive units, the females tend to be closely related and have strong social bonds. Reproductive units split if they become too large. While females have strong social bonds in the group, a female only interacts with at most three other members of her unit. Grooming and other social interactions among females usually occur between pairs. Females in a reproductive unit exist in a hierarchy, with higher-ranking females having more reproductive success and more offspring than lower-ranking females. Closely related females tend to have a similar hierarchical status. Females generally stay in their natal units for life; cases of females leaving are rare. Aggression within a reproduction unit, which is rare, is usually just between the females. Aggression is more frequent between members of different reproductive units and is usually started by females, but males and females from both sides can join and engage if the conflict escalates.
Males can remain in a reproductive unit for four to five years. While geladas have traditionally been considered to have a male-transfer society, many males appear to be likely to return and breed in their natal bands. Nevertheless, gelada males leave their natal units and try to take over a unit of their own. A male can take over a reproductive unit either through direct aggression and fighting or by joining one as a subordinate and taking some females with him to create a new unit. When more than one male is in a unit, only one of them can mate with the females. The females in the group together can have power over the dominant male. When a new male tries to take over a unit and overthrow the resident male, the females can choose to support or oppose him. The male maintains his relationship with the females by grooming them rather than forcing his dominance, in contrast to the society of the hamadryas baboon. Females accept a male into the unit by presenting themselves to him. Not all the females may interact with the male. Usually, one may be his main partner. The male may sometimes be monopolized by this female. The male may try to interact with the other females, but they are usually unresponsive.
Most all-male units consist of several subadults and one young adult, led by one male. A member of an all-male unit may spend two to four years in the group before attempting to join a reproductive unit. All-male groups are generally aggressive towards both reproductive units and other all-male units. As in reproductive units, aggression within all-male units is rare. The reproductive units of a band share a common home range; within a band, members are closely related, and there is no social hierarchy between the units. Bands usually break apart every eight to nine years as a new band forms in a new home range.
Researchers from the University of the Free State in South Africa, while observing geladas during field studies, discovered that the monkeys were capable of "cheating" on their partners and covering up their infidelity. A nondominant male mates surreptitiously with a female, with both suppressing their normal mating cries so as not to be overheard. If discovered, the dominant male attacks the miscreants in a clear form of punishment. This is the first time that evidence of knowledge of cheating and fear of discovery has been recorded among animals in the wild. Dr. Aliza le Roux of the university's Department of Zoology and Entomology believes that dishonesty and punishment are not uniquely human traits, and that the observed evidence of this behaviour among geladas suggests that the roots of the human system of deceit, crime, and punishment lie very deep indeed.
Mixed-species association was observed between solitary Ethiopian wolves and geladas. According to the study's findings, gelada monkeys typically do not move on encountering Ethiopian wolves, even when they were in the middle of the herd; 68% of encounters resulted in no movement and only 11% resulted in a movement greater than . In stark contrast, the geladas always fled great distances to the cliffs for safety whenever they encountered aggressive domestic dogs.
Reproduction and parenting
When in estrus, the female points her posterior towards a male and raises it, moving her tail to one side. The male then approaches the female and inspects her chest and genital areas. A female will copulate up to five times per day, usually around midday. Breeding and reproduction can occur at any time of the year, although some areas have birth peaks.
Most births occur at night. Newborn infants have red faces and closed eyes, and they are covered in black hair. On average, newborn infants weigh .
If a new male assumes mastery of a harem, females impregnated by the previous leader have an 80% likelihood of aborting, in a phenomenon known as the Bruce effect. Females come into estrus quickly after giving birth, so males have little incentive for practising infanticide, although it does occur in some communities in the Arsi region of Ethiopia; this may be an incentive for females to abort and avoid investing in the care of an infant that will most likely be killed.
Infanticide in geladas remains fairly uncommon, though, compared to many primates that live in one-male units such as gorillas or gray langurs. The females that cancel their pregnancy are thought to bond with the new leader faster. When a male loses his position as dominant harem master, the females and new leader may allow him to remain in the social unit as a nonbreeding resident to act as a babysitter. This way, the ex-leader can protect any infants he had fathered from being killed by the new leader, the females can protect the infants fathered by him, and when the new leader faces a potential rival, the ex-leader will be more inclined to help support him in keeping rivals at bay.
Mortality among infants is highest in the wet season, but on average over 85% of infants survive to their fourth birthday. This is one of the great advantages of living in an environment whose food source few other animals can exploit and which therefore cannot sustain many large predators.
Females that have just given birth stay on the periphery of the reproductive unit. Other adult females may take an interest in the infants and even kidnap them. An infant is carried on its mother's belly for the first five weeks, and thereafter on her back. Infants can move independently at around five months old. A subordinate male in a reproductive unit may help care for an infant when it is six months old.
When herds form, juveniles and infants may gather into play groups of around 10 individuals. When males reach puberty, they gather into unstable groups independent of the reproductive units. Females sexually mature at around three years, but do not give birth for another year. Males reach puberty at about four to five years, but they are usually unable to reproduce because of social constraints and wait until they are about eight to ten years old. Average lifespan in the wild is 15 years.
Communication
Adult geladas use a diverse repertoire of vocalizations for various purposes, such as: contact, reassurance, appeasement, solicitation, ambivalence, aggression, and defense. The level of complexity of these vocalizations is thought to be near that of humans. They sit around and chatter at each other, signifying to those around that they matter, in a way, to the individual "speaking". To some extent, calls are related to the status of an individual. In addition, females have calls signaling their estrus. Geladas communicate through gestures, as well. They display threats by flipping their upper lips back on their nostrils to display their teeth and gums, and by pulling back their scalps to display the pale eyelids. A gelada submits by fleeing or presenting itself.
In 2016, a research group at the University of Michigan found that gelada vocalizations obey Menzerath's law, observing that calls are abbreviated when used in longer sequences.
Conservation status and human interactions
The gelada is considered a crop pest by farmers near Simien National Park. In 2005, they caused an average of of crop damage per animal. The geladas had a distinct preference for barley.
In 2008, the IUCN assessed the gelada as least concern, although their population had reduced from an estimated 440,000 in the 1970s to around 200,000 in 2008. It is listed in Appendix II of CITES. Major threats to the gelada are a reduction of their range as a result of agricultural expansion and shooting as crop pests. Previously, these monkeys were trapped for use as laboratory animals or hunted to obtain their capes to make items of clothing. As of 2008, proposals have been made for a new Blue Nile Gorges National Park and Indeltu (Shebelle) Gorges Reserve to protect larger numbers.
| Biology and health sciences | Old World monkeys | Animals |
486432 | https://en.wikipedia.org/wiki/Processor%20register | Processor register | A processor register is a quickly accessible location available to a computer's processor. Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions, and may be read-only or write-only. In computer architecture, registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address, e.g. on the DEC PDP-10 and the ICT 1900.
Almost all computers, whether load/store architecture or not, load items of data from a larger memory into registers where they are used for arithmetic operations, bitwise operations, and other operations, and are manipulated or tested by machine instructions. Manipulated items are then often stored back to main memory, either by the same instruction or by a subsequent one. Modern processors use either static or dynamic random-access memory (RAM) as main memory, with the latter usually accessed via one or more cache levels.
Processor registers are normally at the top of the memory hierarchy, and provide the fastest way to access data. The term normally refers only to the group of registers that are directly encoded as part of an instruction, as defined by the instruction set. However, modern high-performance CPUs often have duplicates of these "architectural registers" in order to improve performance via register renaming, allowing parallel and speculative execution. Modern x86 design acquired these techniques around 1995 with the releases of Pentium Pro, Cyrix 6x86, Nx586, and AMD K5.
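The rename step these designs rely on can be illustrated in miniature. The sketch below (plain Python; the instruction format and function name are hypothetical, chosen only for this illustration) maps each architectural destination to a fresh physical register, which is what removes false write-after-write and write-after-read dependencies:

```python
def rename(instrs):
    """Toy register renaming over a trace of instructions.

    Each instruction is a hypothetical pair (dest_arch_reg, [src_arch_regs]).
    Every architectural write is assigned a fresh physical register, so two
    instructions reusing the same architectural name no longer conflict.
    """
    table = {}        # architectural name -> current physical register
    next_free = 0
    renamed = []
    for dst, srcs in instrs:
        psrcs = [table[s] for s in srcs]  # sources read the current mapping
        table[dst] = next_free            # destination gets a fresh physical reg
        renamed.append((next_free, psrcs))
        next_free += 1
    return renamed
```

Because every write gets its own physical register, a later reuse of an architectural name (here `r1`) does not overwrite the value an earlier reader still needs, so the instructions can be scheduled out of order.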
When a computer program accesses the same data repeatedly, this is called locality of reference. Holding frequently used values in registers can be critical to a program's performance. Register allocation is performed either by a compiler in the code generation phase, or manually by an assembly language programmer.
Size
Registers are normally measured by the number of bits they can hold, for example, an 8-bit register, 32-bit register, 64-bit register, 128-bit register, or more. In some instruction sets, the registers can operate in various modes, breaking down their storage memory into smaller parts (32-bit into four 8-bit ones, for instance) to which multiple data (vector, or one-dimensional array of data) can be loaded and operated upon at the same time. Typically it is implemented by adding extra registers that map their memory into a larger register. Processors that have the ability to execute single instructions on multiple data are called vector processors.
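The lane-splitting behaviour described above can be mimicked in software. The following sketch (plain Python, with a hypothetical function name; real hardware does this in a single instruction) treats a 32-bit value as four independent 8-bit lanes and adds lane-wise, with no carry crossing a lane boundary:

```python
def simd_add_8x4(a, b):
    """Lane-wise addition of four packed 8-bit values inside a 32-bit word.

    Each 8-bit lane wraps around independently; no carry propagates into
    the neighbouring lane, unlike an ordinary 32-bit integer addition.
    """
    out = 0
    for shift in (0, 8, 16, 24):
        lane = ((a >> shift) & 0xFF) + ((b >> shift) & 0xFF)
        out |= (lane & 0xFF) << shift
    return out
```

Note that 0xFF + 0x01 in the lowest lane wraps to 0x00 without disturbing the lane above it; this isolation is exactly what distinguishes a packed SIMD add from an ordinary 32-bit add.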
Types
A processor often contains several kinds of registers, which can be classified according to the types of values they can store or the instructions that operate on them:
User-accessible registers can be read or written by machine instructions. The most common division of user-accessible registers is a division into data registers and address registers.
Control registers
Data registers can hold numeric data values such as integers and, in some architectures, floating-point numbers, as well as characters, small bit arrays and other data. In some older architectures, such as the IBM 704, the IBM 709 and successors, the PDP-1, the PDP-4/PDP-7/PDP-9/PDP-15, the PDP-5/PDP-8, and the HP 2100, a special data register known as the accumulator is used implicitly for many operations.
Address registers hold addresses and are used by instructions that indirectly access primary memory.
Some processors contain registers that may only be used to hold an address or only to hold numeric values (in some cases used as an index register whose value is added as an offset from some address); others allow registers to hold either kind of quantity. A wide variety of possible addressing modes, used to specify the effective address of an operand, exist.
The stack pointer is used to manage the run-time stack. Rarely, other data stacks are addressed by dedicated address registers (see stack machine).
General-purpose registers (GPRs) can store both data and addresses, i.e., they are combined data/address registers; in some architectures, the register file is unified so that the GPRs can store floating-point numbers as well.
Status registers hold truth values often used to determine whether some instruction should or should not be executed.
Floating-point registers (FPRs) store floating-point numbers in many architectures.
Constant registers hold read-only values such as zero, one, or pi.
Vector registers hold data for vector processing done by SIMD instructions (Single Instruction, Multiple Data).
Special-purpose registers (SPRs) hold some elements of the program state; they usually include the program counter, also called the instruction pointer, and the status register; the program counter and status register might be combined in a program status word (PSW) register. The aforementioned stack pointer is sometimes also included in this group. Embedded microprocessors, such as microcontrollers, can also have special function registers corresponding to specialized hardware elements.
Model-specific registers (also called machine-specific registers) store data and settings related to the processor itself. Because their meanings are attached to the design of a specific processor, they are not expected to remain standard between processor generations.
Memory type range registers (MTRRs)
Internal registers are not accessible by instructions and are used internally for processor operations.
The instruction register holds the instruction currently being executed.
Registers related to fetching information from RAM, a collection of storage registers located on separate chips from the CPU:
Memory buffer register (MBR), also known as memory data register (MDR)
Memory address register (MAR)
Architectural registers are the registers visible to software and are defined by an architecture. They may not correspond to the physical hardware if register renaming is being performed by the underlying hardware.
Hardware registers are similar, but occur outside CPUs.
In some architectures (such as SPARC and MIPS), the first or last register in the integer register file is a pseudo-register in that it is hardwired to always return zero when read (mostly to simplify indexing modes), and it cannot be overwritten. In Alpha, this is also done for the floating-point register file. As a result of this, register files are commonly quoted as having one register more than how many of them are actually usable; for example, 32 registers are quoted when only 31 of them fit within the above definition of a register.
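The behaviour of such a hardwired zero register can be sketched with a toy model (plain Python; the class and method names are invented for illustration, not taken from any real ISA description): reads of register 0 always return zero, and writes to it are silently discarded:

```python
class RegisterFile:
    """Toy integer register file with register 0 hardwired to zero,
    in the style of MIPS ($zero) or SPARC (%g0)."""

    def __init__(self, n=32):
        self._regs = [0] * n

    def read(self, i):
        # Register 0 always reads as zero, regardless of any write attempt.
        return 0 if i == 0 else self._regs[i]

    def write(self, i, value):
        if i != 0:  # writes to the zero register are discarded
            self._regs[i] = value
```

One consequence of this convention is that an ISA needs no separate register-to-register move instruction: moving rs to rd can be encoded as an add of rs and register 0.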
Examples
The following table shows the number of registers in several mainstream CPU architectures. Note that in x86-compatible processors, the stack pointer (ESP) is counted as an integer register, even though there are a limited number of instructions that may be used to operate on its contents. Similar caveats apply to most architectures.
Although all of the below-listed architectures are different, almost all are in a basic arrangement known as the von Neumann architecture, first proposed by the Hungarian-American mathematician John von Neumann. It is also noteworthy that the number of registers on GPUs is much higher than that on CPUs.
Usage
The number of registers available on a processor and the operations that can be performed using those registers have a significant impact on the efficiency of code generated by optimizing compilers. The Strahler number of an expression tree gives the minimum number of registers required to evaluate that expression tree.
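As a sketch of that result, the Strahler number can be computed recursively: a leaf needs one register, and an operator node needs one more register than its children only when both subtrees are equally demanding. The snippet below assumes a nested-tuple tree encoding chosen purely for this example:

```python
def strahler(node):
    """Strahler (Ershov) number of a binary expression tree: the minimum
    number of registers needed to evaluate it without spilling.

    A tree is either a leaf (any non-tuple operand) or a tuple
    (op, left, right) -- an encoding assumed for this sketch.
    """
    if not isinstance(node, tuple):
        return 1  # a leaf loads one value into one register
    _, left, right = node
    l, r = strahler(left), strahler(right)
    # Equally demanding subtrees force an extra register to hold one result
    # while the other subtree is evaluated; otherwise evaluate the more
    # demanding subtree first and reuse its spare registers.
    return l + 1 if l == r else max(l, r)
```

For example, the balanced tree for (a + b) * (c + d) needs 3 registers, while the right-leaning a + (b + (c + d)) needs only 2, which is why compilers prefer to evaluate the more register-hungry subtree first.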
| Technology | Computer hardware | null |
486436 | https://en.wikipedia.org/wiki/Rotational%E2%80%93vibrational%20spectroscopy | Rotational–vibrational spectroscopy | Rotational–vibrational spectroscopy is a branch of molecular spectroscopy that is concerned with infrared and Raman spectra of molecules in the gas phase. Transitions involving changes in both vibrational and rotational states can be abbreviated as rovibrational (or ro-vibrational) transitions. When such transitions emit or absorb photons (electromagnetic radiation), the frequency is proportional to the difference in energy levels and can be detected by certain kinds of spectroscopy. Since changes in rotational energy levels are typically much smaller than changes in vibrational energy levels, changes in rotational state are said to give fine structure to the vibrational spectrum. For a given vibrational transition, the same theoretical treatment as for pure rotational spectroscopy gives the rotational quantum numbers, energy levels, and selection rules. In linear and spherical top molecules, rotational lines are found as simple progressions at both higher and lower frequencies relative to the pure vibration frequency. In symmetric top molecules the transitions are classified as parallel when the dipole moment change is parallel to the principal axis of rotation, and perpendicular when the change is perpendicular to that axis. The ro-vibrational spectrum of the asymmetric rotor water is important because of the presence of water vapor in the atmosphere.
Overview
Ro-vibrational spectroscopy concerns molecules in the gas phase. There are sequences of quantized rotational levels associated with both the ground and excited vibrational states. The spectra are often resolved into lines due to transitions from one rotational level in the ground vibrational state to one rotational level in the vibrationally excited state. The lines corresponding to a given vibrational transition form a band.
In the simplest cases the part of the infrared spectrum involving vibrational transitions with the same rotational quantum number (ΔJ = 0) in ground and excited states is called the Q-branch. On the high frequency side of the Q-branch the energy of rotational transitions is added to the energy of the vibrational transition. This is known as the R-branch of the spectrum, for ΔJ = +1. The P-branch, for ΔJ = −1, lies on the low wavenumber side of the Q-branch. The appearance of the R-branch is very similar to the appearance of the pure rotation spectrum (but shifted to much higher wavenumbers), and the P-branch appears as a nearly mirror image of the R-branch. The Q-branch is sometimes missing, because transitions with no change in J may be forbidden.
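Under the rigid-rotor approximation, and with the simplifying assumption that the rotational constant B is the same in the ground and excited vibrational states, the branch line positions take a simple form (here ν̃0 denotes the pure vibrational wavenumber):

```latex
\tilde{\nu}\left[R(J)\right] = \tilde{\nu}_0 + 2B(J+1), \qquad J = 0, 1, 2, \ldots
\tilde{\nu}\left[P(J)\right] = \tilde{\nu}_0 - 2BJ, \qquad J = 1, 2, 3, \ldots
```

R-branch lines thus lie above ν̃0 and P-branch lines below it, each series spaced by approximately 2B, with a gap of 4B around ν̃0 where the Q-branch would fall.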
The appearance of rotational fine structure is determined by the symmetry of the molecular rotors which are classified, in the same way as for pure rotational spectroscopy, into linear molecules, spherical-, symmetric- and asymmetric- rotor classes. The quantum mechanical treatment of rotational fine structure is the same as for pure rotation.
The strength of an absorption line is related to the number of molecules with the initial values of the vibrational quantum number ν and the rotational quantum number J, and depends on temperature. Since there are actually 2J + 1 states with rotational quantum number J, the population of a level initially increases with J, and then decays at higher J. This gives the characteristic shape of the P and R branches.
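The shape of those branch envelopes follows from the Boltzmann distribution over rotational levels. A minimal Python sketch, using illustrative values (a rotational constant close to that of carbon monoxide, room temperature):

```python
import math

def relative_population(J, B=1.92, T=298.0):
    """Relative population of rotational level J.

    B is a rotational constant in cm^-1 (illustrative value, close to
    carbon monoxide); the factor (2J + 1) is the degeneracy of level J.
    """
    kT = 0.695 * T  # Boltzmann constant in cm^-1 per kelvin, times T
    return (2 * J + 1) * math.exp(-B * J * (J + 1) / kT)

# Degeneracy wins at low J, the Boltzmann factor at high J, so the
# population peaks at an intermediate J -- the P and R branch envelopes.
pops = [relative_population(J) for J in range(30)]
J_max = max(range(30), key=lambda J: pops[J])
```

With these numbers the population peaks near J = 7, which is where the strongest P- and R-branch lines appear.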
A general convention is to label quantities that refer to the vibrational ground and excited states of a transition with double prime and single prime, respectively. For example, the rotational constant for the ground state is written as B′′ and that of the excited state as B′.
Also, these constants are expressed in the molecular spectroscopist's units of cm−1, so that B in this article corresponds to B/hc in the definition of rotational constant at Rigid rotor.
Method of combination differences
Numerical analysis of ro-vibrational spectral data would appear to be complicated by the fact that the wavenumber for each transition depends on two rotational constants, B′′ and B′. However combinations which depend on only one rotational constant are found by subtracting wavenumbers of pairs of lines (one in the P-branch and one in the R-branch) which have either the same lower level or the same upper level. For example, in a diatomic molecule the line denoted P(J + 1) is due to the transition (v = 0, J + 1) → (v = 1, J) (meaning a transition from the state with vibrational quantum number v going from 0 to 1 and the rotational quantum number going from some value J + 1 to J, with J > 0), and the line R(J − 1) is due to the transition (v = 0, J − 1) → (v = 1, J). The difference between the two wavenumbers corresponds to the energy difference between the (J + 1) and (J − 1) levels of the lower vibrational state and is denoted by Δ₂F′′(J) since it is the difference between levels differing by two units of J. If centrifugal distortion is included, it is given by

Δ₂F′′(J) = ν̃[R(J − 1)] − ν̃[P(J + 1)] = (4B′′ − 6D′′)(J + ½) − 8D′′(J + ½)³

where ν̃ denotes the wavenumber of the given line. The main term, 4B′′(J + ½), comes from the difference in the energy of the J + 1 rotational state and that of the J − 1 state.
The rotational constant of the ground vibrational state B′′ and centrifugal distortion constant, D′′ can be found by least-squares fitting this difference as a function of J. The constant B′′ is used to determine the internuclear distance in the ground state as in pure rotational spectroscopy. (See Appendix)
Similarly the difference R(J) − P(J) depends only on the constants B′ and D′ for the excited vibrational state (v = 1), and B′ can be used to determine the internuclear distance in that state (which is inaccessible to pure rotational spectroscopy).
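The method can be sketched numerically. The line positions below are synthetic, generated from assumed constants (not measured data); the point is that the combination difference R(J − 1) − P(J + 1) = 4B′′(J + ½), with centrifugal distortion neglected, returns the lower-state constant whatever the value of B′:

```python
# Assumed constants (cm^-1): lower- and upper-state rotational
# constants and a band origin. Illustrative numbers, not a real fit.
B_lower_true, B_upper, w0 = 1.92, 1.90, 2143.0

def R(J):  # R-branch line: (v=0, J) -> (v=1, J+1)
    return w0 + B_upper * (J + 1) * (J + 2) - B_lower_true * J * (J + 1)

def P(J):  # P-branch line: (v=0, J) -> (v=1, J-1)
    return w0 + B_upper * (J - 1) * J - B_lower_true * J * (J + 1)

# R(J-1) and P(J+1) share the same upper level (v=1, J), so their
# difference depends only on the lower-state constant.
for J in range(1, 6):
    diff = R(J - 1) - P(J + 1)
    B_lower = diff / (4 * (J + 0.5))  # recovers B'' at every J
```

Because the two lines share an upper level, all dependence on B′ cancels exactly in the subtraction.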
Linear molecules
Heteronuclear diatomic molecules
Diatomic molecules with the general formula AB have one normal mode of vibration involving stretching of the A-B bond. The vibrational term values G(v), for an anharmonic oscillator, are given, to a first approximation, by

G(v) = ωe(v + ½) − ωeχe(v + ½)²

where v is the vibrational quantum number, ωe is the harmonic wavenumber and χe is an anharmonicity constant.
When the molecule is in the gas phase, it can rotate about an axis, perpendicular to the molecular axis, passing through the centre of mass of the molecule. The rotational energy is also quantized, with term values to a first approximation given by

F(J) = BvJ(J + 1) − D[J(J + 1)]²

where J is a rotational quantum number and D is a centrifugal distortion constant. The rotational constant, Bv, depends on the moment of inertia of the molecule, Iv, which varies with the vibrational quantum number, v:

Bv = h / (8π²cIv),   Iv = (mAmB / (mA + mB)) d²

where mA and mB are the masses of the atoms A and B, and d represents the distance between the atoms. The term values of the ro-vibrational states are found (in the Born–Oppenheimer approximation) by combining the expressions for vibration and rotation:

T(v, J) = G(v) + F(J) = ωe(v + ½) − ωeχe(v + ½)² + BvJ(J + 1) − D[J(J + 1)]²
The first two terms in this expression correspond to a harmonic oscillator and a rigid rotor; the second pair of terms makes a correction for anharmonicity and centrifugal distortion. A more general expression was given by Dunham.
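A short sketch of these combined term values; the constants are illustrative numbers close to those of carbon monoxide, not fitted values:

```python
def term_value(v, J, we=2169.8, wexe=13.3, Bv=1.92, D=6.1e-6):
    """Ro-vibrational term value in cm^-1.

    The first two terms are the anharmonic oscillator, the last two the
    non-rigid rotor. All constants are illustrative assumptions.
    """
    vib = we * (v + 0.5) - wexe * (v + 0.5) ** 2
    rot = Bv * J * (J + 1) - D * (J * (J + 1)) ** 2
    return vib + rot

# Band origin of the fundamental: anharmonicity shifts it below we,
# since G(1) - G(0) = we - 2*we*xe.
origin = term_value(1, 0) - term_value(0, 0)
```

With these numbers the fundamental band origin comes out at ωe − 2ωeχe = 2143.2 cm−1, below the harmonic wavenumber.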
The selection rule for electric dipole allowed ro-vibrational transitions, in the case of a diamagnetic diatomic molecule, is Δv = ±1, ΔJ = ±1.
The transition with Δv=±1 is known as the fundamental transition. The selection rule has two consequences.
Both the vibrational and rotational quantum numbers must change. The transition ΔJ = 0 (Q-branch) is forbidden.
The energy change of rotation can be either subtracted from or added to the energy change of vibration, giving the P- and R- branches of the spectrum, respectively.
The calculation of the transition wavenumbers is more complicated than for pure rotation because the rotational constant Bν is different in the ground and excited vibrational states. A simplified expression for the wavenumbers is obtained when the centrifugal distortion constants D′ and D′′ are approximately equal to each other:

ν̃ = ω0 + (B′ + B′′)m + (B′ − B′′)m²,   m = ±1, ±2, …

where positive m values refer to the R-branch (m = J + 1) and negative values refer to the P-branch (m = −J). The term ω0 gives the position of the (missing) Q-branch, the term (B′ + B′′)m implies a progression of equally spaced lines in the P- and R- branches, but the third term, (B′ − B′′)m², shows that the separation between adjacent lines changes with changing rotational quantum number. When B′′ is greater than B′, as is usually the case, as J increases the separation between lines decreases in the R-branch and increases in the P-branch. Analysis of data from the infrared spectrum of carbon monoxide gives a value of B′′ of 1.915 cm−1 and of B′ of 1.898 cm−1. The bond lengths are easily obtained from these constants as r0 = 113.3 pm, r1 = 113.6 pm. These bond lengths are slightly different from the equilibrium bond length. This is because there is zero-point energy in the vibrational ground state, whereas the equilibrium bond length is at the minimum in the potential energy curve. The relation between the rotational constants is given by
Bv = Be − α(v + ½)

where v is the vibrational quantum number and α is a vibration-rotation interaction constant which can be calculated when the B values for two different vibrational states can be found. For carbon monoxide req = 113.0 pm.
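The bond lengths quoted above come from inverting B = h/(8π²cI) with I = μd², μ being the reduced mass. A sketch (constants in SI units; the masses are those of 12C and 16O in atomic mass units):

```python
import math

h = 6.62607015e-34     # Planck constant, J s
c = 2.99792458e10      # speed of light in cm/s, since B is in cm^-1
u = 1.66053906660e-27  # atomic mass unit, kg

def bond_length_pm(B, mA, mB):
    """Internuclear distance in pm from a rotational constant B (cm^-1)."""
    mu = mA * mB / (mA + mB) * u           # reduced mass, kg
    I = h / (8 * math.pi ** 2 * c * B)     # moment of inertia, kg m^2
    return math.sqrt(I / mu) * 1e12        # m -> pm

# B0 = 1.915 cm^-1 for CO reproduces r0 of about 113.3 pm
r0 = bond_length_pm(1.915, 12.000, 15.995)
```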
Nitric oxide, NO, is a special case as the molecule is paramagnetic, with one unpaired electron. Coupling of the electron spin angular momentum with the molecular vibration causes lambda-doubling with calculated harmonic frequencies of 1904.03 and 1903.68 cm−1. Rotational levels are also split.
Homonuclear diatomic molecules
The quantum mechanics for homonuclear diatomic molecules such as dinitrogen, N2, and fluorine, F2, is qualitatively the same as for heteronuclear diatomic molecules, but the selection rules governing transitions are different. Since the electric dipole moment of the homonuclear diatomics is zero, the fundamental vibrational transition is electric-dipole-forbidden and the molecules are infrared inactive. However, a weak quadrupole-allowed spectrum of N2 can be observed when using long path-lengths both in the laboratory and in the atmosphere. The spectra of these molecules can be observed by Raman spectroscopy because the molecular vibration is Raman-allowed.
Dioxygen is a special case as the molecule is paramagnetic so magnetic-dipole-allowed transitions can be observed in the infrared. The unit electron spin has three spatial orientations with respect to the molecular rotational angular momentum vector, N, so that each rotational level is split into three states with total angular momentum (molecular rotation plus electron spin) J = N + 1, N, and N − 1, each J state of this so-called p-type triplet arising from a different orientation of the spin with respect to the rotational motion of the molecule. Selection rules for magnetic dipole transitions allow transitions between successive members of the triplet (ΔJ = ±1) so that for each value of the rotational angular momentum quantum number N there are two allowed transitions. The 16O nucleus has zero nuclear spin angular momentum, so that symmetry considerations demand that N may only have odd values.
Raman spectra of diatomic molecules
The selection rule is ΔJ = 0, ±2,
so that the spectrum has an O-branch (ΔJ = −2), a Q-branch (ΔJ = 0) and an S-branch (ΔJ = +2). In the approximation that B′′ = B′ = B the wavenumbers are given by

ν̃[S(J)] = ν̃0 + B(4J + 6)   (J = 0, 1, 2, …)
ν̃[O(J)] = ν̃0 − B(4J − 2)   (J = 2, 3, 4, …)
since the S-branch starts at J=0 and the O-branch at J=2. So, to a first approximation, the separation between S(0) and O(2) is 12B and the separation between adjacent lines in both O- and S- branches is 4B. The most obvious effect of the fact that B′′ ≠ B′ is that the Q-branch has a series of closely spaced side lines on the low-frequency side due to transitions in which ΔJ=0 for J=1,2 etc. Useful difference formulae, neglecting centrifugal distortion are as follows.
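The O- and S-branch spacings follow directly from the rigid-rotor term values; a sketch with B′′ = B′ = B (the constant and band origin below are illustrative numbers, not fitted values):

```python
B, nu0 = 2.0, 2331.0  # illustrative rotational constant (cm^-1) and band origin

def S_line(J):
    """S-branch line (Delta J = +2), J = 0, 1, 2, ..."""
    return nu0 + B * (4 * J + 6)

def O_line(J):
    """O-branch line (Delta J = -2), J = 2, 3, 4, ..."""
    return nu0 - B * (4 * J - 2)

gap = S_line(0) - O_line(2)       # separation S(0) - O(2): expect 12B
spacing = S_line(1) - S_line(0)   # adjacent-line spacing: expect 4B
```

S(0) lies at ν̃0 + 6B and O(2) at ν̃0 − 6B, reproducing the 12B gap around the Q-branch and the 4B line spacing within each branch.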
Molecular oxygen is a special case as the molecule is paramagnetic, with two unpaired electrons.
For homonuclear diatomics, nuclear spin statistical weights lead to alternating line intensities between even-J and odd-J levels. For nuclear spin I = 1/2, as in 1H2 and 19F2, the intensity alternation is 1:3. For 2H2 and 14N2, I = 1 and the statistical weights are 6 and 3, so that the even-J levels are twice as intense. For 16O2 (I = 0) all transitions with even values of N are forbidden.
Polyatomic linear molecules
These molecules fall into two classes, according to symmetry: centrosymmetric molecules with point group D∞h, such as carbon dioxide, CO2, and ethyne or acetylene, HCCH; and non-centrosymmetric molecules with point group C∞v such as hydrogen cyanide, HCN, and nitrous oxide, NNO. Centrosymmetric linear molecules have a dipole moment of zero, so do not show a pure rotation spectrum in the infrared or microwave regions. On the other hand, in certain vibrational excited states the molecules do have a dipole moment so that a ro-vibrational spectrum can be observed in the infrared.
The spectra of these molecules are classified according to the direction of the dipole moment change vector. When the vibration induces a dipole moment change pointing along the molecular axis the term parallel is applied, with the symbol ∥. When the vibration induces a dipole moment pointing perpendicular to the molecular axis the term perpendicular is applied, with the symbol ⊥. In both cases the P- and R- branch wavenumbers follow the same trend as in diatomic molecules. The two classes differ in the selection rules that apply to ro-vibrational transitions. For parallel transitions the selection rule is the same as for diatomic molecules, namely ΔJ = ±1; the transition ΔJ = 0 corresponding to the Q-branch is forbidden. An example is the C-H stretching mode of hydrogen cyanide.
For a perpendicular vibration the transition ΔJ=0 is allowed. This means that the transition is allowed for the molecule with the same rotational quantum number in the ground and excited vibrational state, for all the populated rotational states. This makes for an intense, relatively broad, Q-branch consisting of overlapping lines due to each rotational state. The N-N-O bending mode of nitrous oxide, at ca. 590 cm−1 is an example.
The spectra of centrosymmetric molecules exhibit alternating line intensities due to quantum state symmetry effects, since rotation of the molecule by 180° about a 2-fold rotation axis is equivalent to exchanging identical nuclei. In carbon dioxide, the oxygen atoms of the predominant isotopic species 12C16O2 have spin zero and are bosons, so that the total wavefunction must be symmetric when the two 16O nuclei are exchanged. The nuclear spin factor is always symmetric for two spin-zero nuclei, so that the rotational factor must also be symmetric, which is true only for even-J levels. The odd-J rotational levels cannot exist and the allowed vibrational bands consist of only absorption lines from even-J initial levels. The separation between adjacent lines in the P- and R- branches is close to 4B rather than 2B as alternate lines are missing. For acetylene the hydrogens of 1H12C12C1H have spin 1/2 and are fermions, so the total wavefunction is antisymmetric when two 1H nuclei are exchanged. As is true for ortho and para hydrogen, the nuclear spin function of the two hydrogens has three symmetric ortho states and one antisymmetric para state. For the three ortho states, the rotational wave function must be antisymmetric, corresponding to odd J, and for the one para state it is symmetric, corresponding to even J. The population of the odd-J levels is therefore three times higher than that of the even-J levels, and alternate line intensities are in the ratio 3:1 (Straughan and Walker, vol. 2, pp. 186–188).
Spherical top molecules
These molecules have equal moments of inertia about any axis, and belong to the point groups Td (tetrahedral AX4) and Oh (octahedral AX6). Molecules with these symmetries have a dipole moment of zero, so do not have a pure rotation spectrum in the infrared or microwave regions.
Tetrahedral molecules such as methane, CH4, have infrared-active stretching and bending vibrations, belonging to the T2 (sometimes written as F2) representation. These vibrations are triply degenerate and the rotational energy levels have three components separated by the Coriolis interaction. The rotational term values are given, to a first order approximation, by

F(+)(J) = BvJ(J + 1) + 2Bvζ(J + 1)
F(0)(J) = BvJ(J + 1)
F(−)(J) = BvJ(J + 1) − 2BvζJ

where ζ is a constant for Coriolis coupling. The selection rule for a fundamental vibration is ΔJ = 0, ±1.
Thus, the spectrum is very much like the spectrum from a perpendicular vibration of a linear molecule, with a strong Q-branch composed of many transitions in which the rotational quantum number is the same in the vibrational ground and excited states. The effect of Coriolis coupling is clearly visible in the C-H stretching vibration of methane, though detailed study has shown that the first-order formula for Coriolis coupling, given above, is not adequate for methane.
Symmetric top molecules
These molecules have a unique principal rotation axis of order 3 or higher. There are two distinct moments of inertia and therefore two rotational constants. For rotation about any axis perpendicular to the unique axis, the moment of inertia is IB and the rotational constant is B, as for linear molecules. For rotation about the unique axis, however, the moment of inertia is IA and the rotational constant is A. Examples include ammonia, NH3 and methyl chloride, CH3Cl (both of molecular symmetry described by point group C3v), boron trifluoride, BF3 and phosphorus pentachloride, PCl5 (both of point group D3h), and benzene, C6H6 (point group D6h).
For symmetric rotors a quantum number J is associated with the total angular momentum of the molecule. For a given value of J, there is a (2J + 1)-fold degeneracy with the quantum number M taking the values +J, ..., 0, ..., −J. The third quantum number, K, is associated with rotation about the principal rotation axis of the molecule. As with linear molecules, transitions are classified as parallel, ∥, or perpendicular, ⊥, in this case according to the direction of the dipole moment change with respect to the principal rotation axis. A third category involves certain overtones and combination bands which share the properties of both parallel and perpendicular transitions. The selection rules are:
For parallel bands: if K ≠ 0, then ΔJ = 0, ±1 and ΔK = 0
If K = 0, then ΔJ = ±1 and ΔK = 0
For perpendicular bands: ΔJ = 0, ±1 and ΔK = ±1
The fact that the selection rules are different is the justification for the classification and it means that the spectra have a different appearance which can often be immediately recognized.
An expression for the calculated wavenumbers of the P- and R- branches may be given as
in which m = J + 1 for the R-branch and m = −J for the P-branch. The three centrifugal distortion constants DJ, DJK and DK are needed to fit the term values of each level. The wavenumbers of the sub-structure corresponding to each band are given by
represents the Q-branch of the sub-structure, whose position is given by
Parallel bands
The C-Cl stretching vibration of methyl chloride, CH3Cl, gives a parallel band since the dipole moment change is aligned with the 3-fold rotation axis. The line spectrum shows the sub-structure of this band rather clearly; in reality, very high resolution spectroscopy would be needed to resolve the fine structure fully. Allen and Cross show parts of the spectrum of CH3D and give a detailed description of the numerical analysis of the experimental data.
Perpendicular bands
The selection rules for perpendicular bands give rise to more transitions than with parallel bands. A band can be viewed as a series of sub-structures, each with P, Q and R branches. The Q-branches are separated by approximately 2(A′ − B′). The asymmetric HCH bending vibration of methyl chloride is typical. It shows a series of intense Q-branches with weak rotational fine structure. Analysis of the spectra is made more complicated by the fact that the vibration is, by symmetry, degenerate, which means that Coriolis coupling also affects the spectrum.
Hybrid bands
Overtones of a degenerate fundamental vibration have components of more than one symmetry type. For example, the first overtone of a vibration belonging to the E representation in a molecule like ammonia, NH3, will have components belonging to A1 and E representations. A transition to the A1 component will give a parallel band and a transition to the E component will give perpendicular bands; the result is a hybrid band.
Inversion in ammonia
For ammonia, NH3, the symmetric bending vibration is observed as two branches near 930 cm−1 and 965 cm−1. This so-called inversion doubling arises because the symmetric bending vibration is actually a large-amplitude motion known as inversion, in which the nitrogen atom passes through the plane of the three hydrogen atoms, similar to the inversion of an umbrella. The potential energy curve for such a vibration has a double minimum for the two pyramidal geometries, so that the vibrational energy levels occur in pairs which correspond to combinations of the vibrational states in the two potential minima. The two v = 1 states combine to form a symmetric state (1+) at 932.5 cm−1 above the ground (0+) state and an antisymmetric state (1−) at 968.3 cm−1.
The vibrational ground state (v = 0) is also doubled although the energy difference is much smaller, and the transition between the two levels can be measured directly in the microwave region, at ca. 24 GHz (0.8 cm−1). This transition is historically significant and was used in the ammonia maser, the fore-runner of the laser.
Asymmetric top molecules
Asymmetric top molecules have no rotation axis of order higher than two; at most they have one or more 2-fold rotation axes. There are three unequal moments of inertia about three mutually perpendicular principal axes. The spectra are very complex. The transition wavenumbers cannot be expressed in terms of an analytical formula but can be calculated using numerical methods.
The water molecule is an important example of this class of molecule, particularly because of the presence of water vapor in the atmosphere. The low-resolution spectrum shown in green illustrates the complexity of the spectrum. At wavelengths greater than 10 μm (or wavenumbers less than 1000 cm−1) the absorption is due to pure rotation. The band around 6.3 μm (1590 cm−1) is due to the HOH bending vibration; the considerable breadth of this band is due to the presence of extensive rotational fine structure. High-resolution spectra of this band are shown in Allen and Cross, p 221. The symmetric and asymmetric stretching vibrations are close to each other, so the rotational fine structures of these bands overlap. The bands at shorter wavelength are overtones and combination bands, all of which show rotational fine structure. Medium resolution spectra of the bands around 1600 cm−1 and 3700 cm−1 are shown in Banwell and McCash, p91.
Ro-vibrational bands of asymmetric top molecules are classed as A-, B- or C- type according to whether the dipole moment change lies along the axis of smallest moment of inertia (A-type), the intermediate axis (B-type) or the axis of largest moment of inertia (C-type).
Experimental methods
Ro-vibrational spectra are usually measured at high spectral resolution. In the past, this was achieved by using an echelle grating as the spectral dispersion element in a grating spectrometer. This is a type of diffraction grating optimized to use higher diffraction orders. Today at all resolutions the preferred method is FTIR. The primary reason for this is that infrared detectors are inherently noisy, and FTIR detects summed signals at multiple wavelengths simultaneously achieving a higher signal to noise by virtue of Fellgett's advantage for multiplexed methods. The resolving power of an FTIR spectrometer depends on the maximum retardation of the moving mirror. For example, to achieve a resolution of 0.1 cm−1, the moving mirror must have a maximum displacement of 10 cm from its position at zero path difference. Connes measured the vibration-rotation spectrum of Venusian CO2 at this resolution. A spectrometer with 0.001 cm−1 resolution is now available commercially. The throughput advantage of FTIR is important for high-resolution spectroscopy as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits.
When measuring the spectra of gases it is relatively easy to obtain very long path-lengths by using a multiple reflection cell. This is important because it allows the pressure to be reduced so as to minimize pressure broadening of the spectral lines, which may degrade resolution. Path lengths up to 20m are commercially available.
Appendix
The method of combination differences uses differences of wavenumbers in the P- and R- branches to obtain data that depend only on rotational constants in the vibrational ground or excited state. For the excited state

R(J) − P(J) = (4B′ − 6D′)(J + ½) − 8D′(J + ½)³

This function can be fitted, using the method of least-squares, to data for carbon monoxide, from Harris and Bertolucci. The data calculated with the formula

R(J) − P(J) = 2B′(2J + 1)

in which centrifugal distortion is ignored, are shown in the columns labelled with (1). This formula implies that the data should lie on a straight line with slope 2B′ and intercept zero. At first sight the data appear to conform to this model, with a root mean square residual of 0.21 cm−1. However, when centrifugal distortion is included, using the formula above, the least-squares fit is improved markedly, with the rms residual decreasing to 0.000086 cm−1. The calculated data are shown in the columns labelled with (2).
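The effect of including the cubic distortion term can be reproduced with synthetic data: combination differences generated from assumed constants (not the Harris and Bertolucci measurements) and fitted by solving the 2 × 2 normal equations for y = p·u + q·u³ with u = J + ½:

```python
# Assumed constants, cm^-1 (illustrative, not the CO fit in the text)
B_true, D_true = 1.9225, 6.1e-6

def delta2F(J):
    """Combination difference (4B - 6D)(J + 1/2) - 8D(J + 1/2)**3."""
    u = J + 0.5
    return (4 * B_true - 6 * D_true) * u - 8 * D_true * u ** 3

us = [J + 0.5 for J in range(1, 30)]
ys = [delta2F(J) for J in range(1, 30)]

# Least squares for y = p*u + q*u**3 via the normal equations
s11 = sum(u ** 2 for u in us)
s13 = sum(u ** 4 for u in us)
s33 = sum(u ** 6 for u in us)
b1 = sum(u * y for u, y in zip(us, ys))
b3 = sum(u ** 3 * y for u, y in zip(us, ys))
det = s11 * s33 - s13 * s13
p = (b1 * s33 - b3 * s13) / det   # estimates 4B - 6D
q = (b3 * s11 - b1 * s13) / det   # estimates -8D
D_fit = -q / 8
B_fit = (p + 6 * D_fit) / 4
```

Because the synthetic data follow the model exactly, the fit recovers both constants; with real data the distortion term is what removes the systematic residual.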
| Physical sciences | Molecular physics | Physics |
486527 | https://en.wikipedia.org/wiki/Astrophysics%20Data%20System | Astrophysics Data System | The SAO/NASA Astrophysics Data System (ADS) is a digital library portal for researchers on astronomy and physics, operated for NASA by the Smithsonian Astrophysical Observatory. ADS maintains three bibliographic collections containing over 15 million records, including all arXiv e-prints. Abstracts and full-text of major astronomy and physics publications are indexed and searchable through the portal.
Historical context
Johann Friedrich Weidler published the first comprehensive history of astronomy in 1741 and the first astronomical bibliography in 1755. This was an effort to archive and classify earlier astronomical knowledge and works.
This effort was continued by Jérôme de La Lande who published his Bibliographie astronomique in 1803, a work that covered the period from 480 BCE to the year of publication.
The Bibliographie générale de l'astronomie (Volume I and Volume II), published by J.C. Houzeau and A. Lancaster from 1882 to 1889, followed.
As the number of astronomers and astronomical publications grew, bibliographical efforts became institutional tasks, first at the Observatoire Royal de Belgique, where the Bibliography of Astronomy was published from 1881 to 1898, and then at the Astronomischer Rechen-Institut in Heidelberg, where the yearly Astronomischer Jahresbericht was published from 1899 to 1968. After 1968, this was replaced by the yearly Astronomy and Astrophysics Abstracts book series, which continued until the end of the 20th century.
History
The first suggestion of a digital database of journal paper abstracts was made at a conference on Astronomy from Large Data-Bases held in Garching bei München in 1987.
An initial version of ADS, with a database consisting of 40 papers, was created as a proof of concept in 1988. The ADS Abstract Service became available for general use via proprietary network software in April 1993, and it was connected to SIMBAD a few months later. In early 1994 the ADS web-based service was launched, which effectively quadrupled the number of active users in the five weeks following its introduction.
In 2011 the ADS launched ADS Labs Streamlined Search which introduced facets for query refinement and selection. In 2013, ADS Labs 2.0 started featuring a new search engine, full-text search functionality, scalable facets, and an API was introduced. In 2015, the new ADS, code-named Bumblebee, was released as ADS-beta. The ADS-beta system features a micro-services API and client-side dynamic page loading served on a cloud platform. In May 2018 the beta label was dropped and Bumblebee became the default ADS interface—with some legacy features (ADS Classic) remaining available. Development continues to the present day, with an extensible API available: enabling users to build their own utilities on top of the ADS bibliographic record.
The ADS service is distributed worldwide with twelve mirror sites in twelve countries and with the database synchronized by weekly updates using rsync, a mirroring utility which allows updates to only the portions of the database which have changed. All updates are triggered centrally, but they initiate scripts at the mirror sites which "pull" updated data from the main ADS servers.
Data in the system
At first, the journal articles available via ADS were exclusively scanned bitmaps created from the paper journals and the abstracts created using optical character recognition software. Some of these scanned articles up to around 1995 are available for free by agreement with the journal publishers, with some dating from as far back as the early 19th century. Eventually, because of a wider spread of online editions of journal publications, abstracts would start to instead be loaded into ADS directly.
Papers are indexed within the database by their bibliographic record which contains the details of the journal they were published in, and various associated metadata, such as author lists, references and citations. Originally this data was stored in ASCII format but eventually the limitations of this encouraged the database maintainers to migrate all records to an XML (Extensible Markup Language) format in 2000. Bibliographic records are now stored as an XML element with sub-elements for the various metadata.
Scanned articles are stored in TIFF format at both medium and high resolution. The TIFF files are converted on demand into GIF files, for on-screen viewing, and PDF or PostScript files for printing. The generated files are then cached to eliminate needlessly frequent regenerations for popular articles. As of 2000, ADS contained 250 GB of scans, which consisted of 1,128,955 article pages comprising 138,789 articles. By 2005 this had grown to 650 GB and was expected to grow further to about 900 GB by 2007; no more recent figures have been published.
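The convert-on-demand-with-caching idea can be sketched as follows. This is an illustration of the pattern, not the actual ADS code: the cache directory, the naming scheme and the use of ImageMagick's `convert` are all assumptions.

```python
import subprocess
from pathlib import Path

CACHE = Path("/tmp/ads-cache")  # hypothetical cache location

def deliver(scan: Path, fmt: str) -> Path:
    """Return `scan` converted to `fmt`, regenerating only on a cache miss.

    Popular articles are served from the cache instead of being
    converted again on every request.
    """
    CACHE.mkdir(parents=True, exist_ok=True)
    out = CACHE / f"{scan.stem}.{fmt}"
    if out.exists():
        return out  # cache hit: no conversion needed
    # Cache miss: shell out to a converter (ImageMagick assumed here).
    subprocess.run(["convert", str(scan), str(out)], check=True)
    return out
```

A repeated request for the same article then costs only a file-existence check rather than a full TIFF conversion.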
The database initially contained only astronomical references, but has now grown to incorporate three databases, covering astronomy references (including planetary sciences and solar physics), physics references (including instrumentation and geosciences), as well as preprints of scientific papers from arXiv. The astronomy database is by far the most advanced and its use accounts for about 85% of the total ADS usage. Articles are assigned to the different databases according to the subject rather than the journal they are published in, so that articles from any one journal might appear in all three subject databases. The separation of the databases allows searching in each discipline to be tailored, so that words can automatically be given different weight functions in different database searches, depending on how common they are in the relevant field.
Data in the preprint archive is updated daily from arXiv which is the dominant repository of physics and astronomy preprints. The advent of preprint servers has, like ADS, had a significant impact on the rate of astronomical research, as papers are often made available from preprint servers weeks or months before they are published in the journals. The incorporation of preprints from arXiv into ADS means that the search engine can return the most current research available, with the caveat that preprints may not have been peer-reviewed or proofread to the required standard for publication in the main journals. The database of ADS links preprints with subsequently published articles wherever possible, so that citation and reference searches will return links to the journal article where the preprint was cited.
Software and hardware
The software runs on a system that was written specifically for the ADS, allowing for extensive customization for astronomical needs that would not have been possible with general purpose database software. The scripts are designed to be as platform independent as possible, given the need to facilitate mirroring on different systems around the world, although the growing use of Linux as the operating system of choice within astronomy has led to increasing optimization of the scripts for installation on that platform.
The main ADS server is located at the Center for Astrophysics Harvard & Smithsonian in Cambridge, Massachusetts, and is a dual 64-bit X86 Intel server with two quad-core 3.0 GHz CPUs and 32 GB of RAM, running the CentOS 5.4 Linux distribution. As of 2022, there are mirrors located in China, Chile, France, Germany, Japan, Russia, the United Kingdom, and Ukraine.
Indexing
ADS currently (2005) receives abstracts or tables of contents from almost two hundred journal sources. The service may receive data referring to the same article from multiple sources, and creates one bibliographic reference based on the most accurate data from each source. The common use of TeX and LaTeX by almost all scientific journals greatly facilitates the incorporation of bibliographic data into the system in a standardized format, and importing HTML-coded web-based articles is also simple. ADS utilizes Python and Perl scripts for importing, processing and standardizing bibliographic data.
The apparently mundane task of converting author names into a standard Surname, Initial format is actually one of the more difficult to automate, due to the wide variety of naming conventions around the world and the possibility that a given name such as Davis could be a first name, middle name or surname. The accurate conversion of names requires a detailed knowledge of the names of authors active in astronomy, and ADS maintains an extensive database of author names, which is also used in searching the database (see below).
For electronic articles, a list of the references given at the end of the article is easily extracted. For scanned articles, reference extraction relies on OCR. The reference database can then be "inverted" to list the citations for each paper in the database. Citation lists have been used in the past to identify popular articles missing from the database; mostly these were from before 1975 and have now been added to the system.
Coverage
The database now contains over fifteen million articles. In the cases of the major journals of astronomy (Astrophysical Journal, Astronomical Journal, Astronomy and Astrophysics, Publications of the Astronomical Society of the Pacific and the Monthly Notices of the Royal Astronomical Society), coverage is complete, with all issues indexed from number 1 to the present. These journals account for about two-thirds of the papers in the database, with the rest consisting of papers published in over 100 other journals from around the world, as well as in conference proceedings.
While the database contains the complete contents of all the major journals and many minor ones as well, its coverage of references and citations is much less complete. | Physical sciences | Databases | Astronomy |
486873 | https://en.wikipedia.org/wiki/Cowpea | Cowpea | The cowpea (Vigna unguiculata) is an annual herbaceous legume from the genus Vigna. Its tolerance for sandy soil and low rainfall have made it an important crop in the semiarid regions across Africa and Asia. It requires very few inputs, as the plant's root nodules are able to fix atmospheric nitrogen, making it a valuable crop for resource-poor farmers and well-suited to intercropping with other crops. The whole plant is used as forage for animals, with its use as cattle feed likely responsible for its name.
Four subspecies of cowpeas are recognised, of which three are cultivated. A high level of morphological diversity is found within the species with large variations in the size, shape, and structure of the plant. Cowpeas can be erect, semierect (trailing), or climbing. The crop is mainly grown for its seeds, which are high in protein, although the leaves and immature seed pods can also be consumed.
Cowpeas were domesticated in Africa and are one of the oldest crops to be farmed. A second domestication event probably occurred in Asia, before they spread into Europe and the Americas. The seeds are usually cooked and made into stews and curries, or ground into flour or paste.
Most cowpeas are grown on the African continent, particularly in Nigeria and Niger, which account for 66% of world production. A 1997 estimate suggests that cowpeas are cultivated on of land, have a worldwide production of 3 million tonnes and are consumed by 200 million people on a daily basis. Insect infestation is a major constraint to the production of cowpea, sometimes causing over 90% loss in yield. The legume pod borer Maruca vitrata is the main preharvest pest of the cowpea and the cowpea weevil Callosobruchus maculatus the main postharvest pest.
Taxonomy and etymology
Vigna unguiculata is a member of the Vigna (peas and beans) genus. Unguiculata is Latin for "with a small claw", which reflects the small stalks on the flower petals. Common names for cultivated cowpeas include black-eye pea, southern pea, niebe (alternatively ñebbe), and crowder pea. All cultivated cowpeas are found within the universally accepted V. unguiculata subspecies unguiculata classification, which is then commonly divided into four cultivar groups: unguiculata, biflora, sesquipedalis, and textilis. The classification of the wild relatives within V. unguiculata is more complicated, with over 20 different names having been used and between 3 and 10 subgroups described. The original subgroups of stenophylla, dekindtiana, and tenuis appear to be common in all taxonomic treatments, while the variations pubescens and protractor were raised to subspecies level by a 1993 characterisation.
The first written reference of the word 'cowpea' appeared in 1798 in the United States. The name was most likely acquired due to their use as a fodder crop for cows. Black-eyed pea, a common name used for the unguiculata cultivar group, describes the presence of a distinctive black spot at the hilum of the seed. Black-eyed peas were first introduced to the southern states in the United States and some early varieties had peas squashed closely together in their pods, leading to the other common names of southern pea and crowder pea.
The sesquipedalis subspecies arrived in the United States via Asia. It is characterised by unusually long pods, leading to the Latin name (sesquipedalis means "foot and a half long") and the common names of yardlong bean, asparagus bean, and Chinese long-bean.
Description
A large morphological diversity is found within the crop, and the growth conditions and grower preferences for each variety vary from region to region. However, as the plant is primarily self-pollinating, its genetic diversity within varieties is relatively low. Cowpeas can either be short and bushy (as short as ) or act like a vine by climbing supports or trailing along the ground (to a height of ). The taproot can penetrate to a depth of after eight weeks.
The size and shape of the leaves vary greatly, making this an important feature for classifying and distinguishing cowpea varieties. Another distinguishing feature of cowpeas is the long peduncles, which hold the flowers and seed pods. One peduncle can support four or more seed pods. Flower colour varies through different shades of purple, pink, yellow, white, and blue.
Seeds and seed pods from wild cowpeas are very small, while cultivated varieties can have pods between long. A pod can contain six to 13 seeds that are usually kidney-shaped, although the seeds become more spherical the more restricted they are within the pod. Their texture and colour are very diverse. They can have a smooth or rough coat and be speckled, mottled, or blotchy. Colours include white, cream, green, red, brown, and black, or various combinations.
History
Compared to most other important crops, little is known about the domestication, dispersal, and cultivation history of the cowpea. Although there is no archaeological evidence for early cowpea cultivation, the centre of diversity of the cultivated cowpea is West Africa, leading to an early consensus that this is the likely centre of origin and place of early domestication. Newer research using molecular markers has suggested that domestication may instead have occurred in East Africa, and currently both theories carry equal weight.
While the date when cultivation began is uncertain, the cowpea is still considered one of the oldest domesticated crops. Remains of charred cowpeas from rock shelters in Central Ghana have been dated to the 2nd millennium BC. Around 2300 BC, the cowpea is believed to have made its way into Southeast Asia, where secondary domestication events may have occurred. From there they travelled north to the Mediterranean, where they were used by the Greeks and Romans. The first written references to the cowpea appeared around 300 BC, and it probably reached Central and North America during the slave trade of the 17th to early 19th centuries.
Cultivation
Cowpeas thrive in poor dry conditions, growing well in soils up to 85% sand. This makes them a particularly important crop in arid, semidesert regions where not many other crops will grow. As well as an important source of food for humans in poor, arid regions, the crop can also be used as feed for livestock. Its nitrogen-fixing ability means that as well as functioning as a sole crop, the cowpea can be effectively intercropped with sorghum, millet, maize, cassava, or cotton.
The optimum temperature for cowpea growth is , making it only available as a summer crop for most of the world. It grows best in regions with an annual rainfall between . The ideal soils are sandy and it has better tolerance for infertile and acid soil than most other crops. Generally, for the erect varieties and for the climbing and trailing varieties. The seeds can be harvested after about 100 days or the whole plant used as forage after about 120 days. Leaves can be picked from 4 weeks after planting.
These characteristics, along with its low fertilisation requirements, make the cowpea an ideal crop for resource-poor farmers living in the Sahel region of West Africa. Early-maturing varieties of the crop can thrive in the semiarid climate, where rainfall is often less than . The timing of planting is crucial, as the plant must mature during the seasonal rains. The crop is mostly intercropped with pearl millet, and plants are selected that provide both food and fodder value instead of the more specialised varieties.
Storage of the seeds can be problematic in Africa due to potential infestation by postharvest pests. Traditional methods of protecting stored grain include using the insecticidal properties of neem extracts, mixing the grain with ash or sand, using vegetable oils, combining ash and oil into a soap solution, or treating the cowpea pods with smoke or heat. More modern methods include storage in airtight containers, using gamma irradiation, or heating or freezing the seeds. Temperatures of kill the weevil larvae, leading to a recent push to develop cheap forms of solar heating that can be used to treat stored grain. One of the more recent developments is the use of a cheap, reusable double-bagging system (called PICS) that asphyxiates the cowpea weevils.
Pests and diseases
Insects are a major factor in the low yields of African cowpea crops, and they affect each tissue component and developmental stage of the plant. In bad infestations, insect pressure is responsible for over 90% loss in yield. The legume pod borer, Maruca vitrata, is the main preharvest pest of the cowpea. Other important pests include pod sucking bugs, thrips, aphids, cowpea curculios and post-harvest beetles Callosobruchus maculatus and Callosobruchus chinensis.
M. vitrata causes the most damage to the growing cowpea due to its large host range and cosmopolitan distribution. It causes damage to the flower buds, flowers, and pods of the plant, with infestations resulting in a 20–88% loss of yield. While the insect can cause damage through all growth stages, most of the damage occurs during flowering. Biological control has had limited success, so most preventive methods rely on the use of agrichemicals. Genetically modified cowpeas have been developed to express the Cry protein from Bacillus thuringiensis, which is toxic to lepidopteran species including the maruca. Bt cowpea was commercialised in Nigeria in 2019.
Severe C. maculatus infestations can affect 100% of the stored peas and cause up to 60% loss within a few months. The weevil generally enters the cowpea pod through holes before harvest and lays eggs on the dry seed. The larvae burrow their way into the seed, feeding on the endosperm. The weevil develops into a sexually mature adult within the seed. An individual bruchid can lay 20–40 eggs, and in optimal conditions, each egg can develop into a reproductively active adult in 3 weeks. The most common methods of protection involve the use of insecticides, the main pesticides used being carbamates, synthetic pyrethroids, and organophosphates.
Cowpea is susceptible to nematode, fungal, bacterial, and virus diseases, which can result in substantial loss in yield. Common diseases include blights, root rot, wilt, powdery mildew, root knot, rust and leaf spot. The plant is susceptible to mosaic viruses, which cause a green mosaic pattern to appear in the leaves. The cowpea mosaic virus (CPMV), discovered in 1959, has become a useful research tool. CPMV is stable and easy to propagate to a high yield, making it useful in vector development and protein expression systems. One of the plant's defenses against some insect attacks is the cowpea trypsin inhibitor (CpTI). CpTI has been transgenically inserted into other crops as a pest deterrent. CpTI is the only gene obtained outside of B. thuringiensis that has been inserted into a commercially available genetically modified crop.
Besides biotic stresses, cowpea also faces various challenges in different parts of the world, such as drought, heat, and cold. Drought lowers the growth rate and development, ultimately reducing yield, although cowpea is considered more drought tolerant than most other crops. Drought at the preflowering stage can reduce the yield potential by 360 kg/ha. Crop wild relatives are a prominent source of genetic material that can be tapped to improve biotic and abiotic stress tolerance in crops. The International Institute of Tropical Agriculture (IITA) in Nigeria and the Institut de l'Environnement et de Recherches Agricoles are looking to tap the genetic diversity of wild cowpeas and transfer it into cultivars to make them more tolerant of different stresses and adaptive to climate change.
Culinary use
Cowpeas are grown mostly for their edible beans, although the leaves, green seeds and pods can also be consumed, meaning the cowpea can be used as a food source before the dried peas are harvested. Like other legumes, cowpeas are cooked to make them edible, usually by boiling. Cowpeas can be prepared in stews, soups, purees, casseroles and curries. They can also be processed into a paste or flour. Chinese long beans can be eaten raw or cooked, but as they easily become waterlogged are usually sautéed, stir-fried, or deep-fried.
A common snack in Africa is koki or moin-moin, where the cowpeas are mashed into a paste, mixed with spices and steamed in banana leaves. Dan wake cowpea dumplings are common in northern Nigeria and environs. They also use the cowpea paste as a supplement in infant formula when weaning babies off milk. Slaves brought to America and the West Indies cooked cowpeas much the same way as they did in Africa, although many people in the American South considered cowpeas not suitable for human consumption. A popular dish was Hoppin' John, which contained black-eyed peas cooked with rice and seasoned with pork. Over time, cowpeas became more universally accepted and now Hoppin' John is seen as a traditional Southern dish ritually served on New Year's Day.
Nutrition and health
Cowpea seeds provide a rich source of proteins and food energy, as well as minerals and vitamins. This complements the mainly cereal diet in countries that grow cowpeas as a major food crop. A seed can consist of 25% protein and has very low fat content. Cowpea starch is digested more slowly than the starch from cereals, which is more beneficial to human health. The grain is a rich source of folic acid, an important vitamin that helps prevent neural tube defects in unborn babies.
The cowpea has often been referred to as "poor man's meat" due to the high levels of protein found in the seeds and leaves. However, it does contain some antinutritional elements, notably phytic acid and protease inhibitors, which reduce the nutritional value of the crop. Methods such as fermentation, soaking, germination, debranning, and autoclaving are used to combat the antinutritional properties of the cowpea by increasing the bioavailability of nutrients within the crop. Although little research has been conducted on the nutritional value of the leaves and immature pods, what is available suggests that the leaves have a similar nutritional value to black nightshade and sweet potato leaves, while the green pods have fewer antinutritional factors than the dried seeds.
Production and consumption
Most cowpeas are grown on the African continent, particularly in Nigeria and Niger, which account for 66% of world cowpea production. The Sahel region also contains other major producers such as Burkina Faso, Ghana, Senegal, and Mali. Niger is the main exporter of cowpeas and Nigeria the main importer. Exact figures for cowpea production are hard to come by, as it is not a major export crop. Estimating world cowpea production is rather difficult, as it is usually grown in a mixture with other crops, but according to a 1997 estimate, cowpeas were cultivated on and had a worldwide production of . While they play a key role in subsistence farming and livestock fodder, the cowpea is also seen as a major cash crop by Central and West African farmers, with an estimated 200 million people consuming cowpea on a daily basis.
According to the Food and Agriculture Organization of the United Nations, as of 2012, the average cowpea yield in Western Africa was an estimated , which is still 50% below the estimated potential production yield. In some traditional cropping methods, the yield can be as low as .
Outside Africa, the major production areas are Asia, Central America, and South America. Brazil is the world's second-leading producer of cowpea seed, accounting for 17% of annual cowpea production, although most is consumed within the country.
| Biology and health sciences | Pulses | Plants |
487061 | https://en.wikipedia.org/wiki/Vigna | Vigna | Vigna is a genus of plants in the legume family, Fabaceae, with a pantropical distribution. It includes some well-known cultivated species, including many types of beans. Some are former members of the genus Phaseolus. According to Hortus Third, Vigna differs from Phaseolus in biochemistry and pollen structure, and in details of the style and stipules.
Vigna is also commonly confused with the genus Dolichos, but the two differ in stigma structure.
Vigna are herbs or occasionally subshrubs. The leaves are pinnate, divided into 3 leaflets. The inflorescence is a raceme of yellow, blue, or purple pea flowers. The fruit is a legume pod of varying shapes containing seeds.
Familiar food species include the adzuki bean (V. angularis), the black gram (V. mungo), the cowpea (V. unguiculata, including the variety known as the black-eyed pea), and the mung bean (V. radiata). Each of these may be used as a whole bean, a bean paste, or as bean sprouts.
The genus is named after Domenico Vigna, a seventeenth-century Italian botanist and director of the Orto botanico di Pisa.
Uses
Root tubers of Vigna species have traditionally been used as food by the Indigenous Peoples of the Northern Territory.
Selected species
The genus Vigna contains at least 90 species, including:
Subgenus Ceratotropis
Vigna aconitifolia (Jacq.) Maréchal—moth bean, mat bean, Turkish gram
Vigna angularis (Willd.) Ohwi & H. Ohashi—adzuki bean, red bean
Vigna angularis var. angularis (Willd.) Ohwi & H. Ohashi
Vigna angularis var. nipponensis (Ohwi) Ohwi & H. Ohashi
Vigna glabrescens Maréchal et al.
Vigna grandiflora (Prain) Tateishi & Maxted
Vigna hirtella Ridley
Vigna minima (Roxb.) Ohwi & H. Ohashi
Vigna mungo (L.) Hepper—black gram, black lentil, white lentil, urd-bean, urad bean
Vigna mungo var. silvestris Lukoki, Maréchal & Otoul
Vigna nakashimae (Ohwi) Ohwi & H. Ohashi
Vigna nepalensis Tateishi & Maxted
Vigna radiata (L.) Wilczek—mung bean, green gram, golden gram, mash bean, green soy, celera-bean, Jerusalem-pea
Vigna radiata var. radiata (L.) Wilczek
Vigna radiata var. sublobata (Roxb.) Verdc.
Vigna reflexopilosa Hayata—Creole-bean
Vigna reflexopilosa var. reflexopilosa Hayata
Vigna reflexopilosa var. glabra Tomooka & Maxted
Vigna riukiuensis (Ohwi) Ohwi & H. Ohashi
Vigna stipulacea Kuntze
Vigna subramaniana (Babu ex Raizada) M. Sharma
Vigna tenuicaulis N. Tomooka & Maxted
Vigna trilobata (L.) Verdc.—jungle mat bean, jungli-bean, African gram, three-lobe-leaved cowpea
Vigna trinervia (Heyne ex Wall.) Tateishi & Maxted
Vigna umbellata (Thunb.) Ohwi & H. Ohashi—ricebean, red bean, climbing mountain-bean, mambi bean, Oriental-bean
Subgenus Haydonia
Vigna monophylla Taub.
Vigna nigritia Hook. f.
Vigna schimperi Baker
Vigna triphylla (R. Wilczek) Verdc.
Subgenus Lasiospron
Vigna diffusa (Scott-Elliot) A. Delgado & Verdc.
Vigna juruana (Harms) Verdc.
Vigna lasiocarpa (Mart. ex Benth.) Verdc.
Vigna longifolia (Benth.) Verdc.
Vigna schottii (Bentham) A. Delgado & Verdc.
Vigna trichocarpa (C. Wright ex Sauvalle) A. Delgado
Vigna vexillata (L.) A. Rich.—zombi pea, wild cowpea
Vigna vexillata var. angustifolia
Vigna vexillata var. youngiana
Subgenus Vigna
Vigna ambacensis Welw. ex Bak.
Vigna angivensis Baker
Vigna filicaulis Hepper
Vigna friesiorum Harms
Vigna gazensis Baker f.
Vigna hosei (Craib) Backer—Sarowak/Sarawak bean
Vigna luteola (Jacq.) Benth.—Dalrymple vigna
Vigna membranacea A. Rich.
Vigna membranacea subsp. caesia (Chiov.) Verdc.
Vigna membranacea subsp. membranacea A. Rich.
Vigna monantha Thulin
Vigna racemosa (G. Don) Hutch. & Dalziel
Vigna subterranea (L.) Verdc.—Bambara groundnut, Congo goober, hog-peanut, jugo bean, njugumawe (Swahili) (sometimes separated in Voandzeia)
Vigna unguiculata (L.) Walp.—cowpea, crowder pea, Southern pea, Reeve's-pea, snake-bean
Vigna unguiculata subsp. cylindrica—catjang
Vigna unguiculata subsp. dekindtiana—wild cowpea, African cowpea, Ethiopian cowpea
Vigna unguiculata subsp. sesquipedalis—yardlong bean, long-podded cowpea, asparagus bean, Chinese long bean, pea-bean
Vigna unguiculata subsp. unguiculata—black-eyed pea, black-eyed bean
Incertae sedis
Vigna comosa
Vigna dalzelliana
Vigna debilis Fourc.
Vigna decipiens
Vigna dinteri Harms
Vigna dolichoides Baker in Hooker f.
Vigna frutescens
Vigna gracilis
Vigna kirkii
Vigna lanceolata—pencil yam, Maloga-bean, parsnip-bean, merne arlatyeye (Arrernte)
Vigna lobata (Willd.) Endl.
Vigna lobatifolia
Vigna marina (Burm.f.) Merr.—dune-bean, notched cowpea, sea-bean, mohihihi, nanea (Hawaiian)
Vigna multiflora
Vigna nervosa
Vigna oblongifolia
Vigna owahuensis Vogel—Oahu cowpea
Vigna parkeri—creeping vigna
Vigna pilosa
| Biology and health sciences | Pulses | Plants |
487143 | https://en.wikipedia.org/wiki/Midbrain | Midbrain | The midbrain or mesencephalon is the uppermost portion of the brainstem connecting the diencephalon and cerebrum with the pons. It consists of the cerebral peduncles, tegmentum, and tectum.
It is functionally associated with vision, hearing, motor control, sleep and wakefulness, arousal (alertness), and temperature regulation.
The name mesencephalon comes from the Greek mesos, "middle", and enkephalos, "brain".
Structure
The midbrain is the shortest segment of the brainstem, measuring less than 2 cm in length. It is situated mostly in the posterior cranial fossa, with its superior part extending above the tentorial notch.
The principal regions of the midbrain are the tectum, the cerebral aqueduct, tegmentum, and the cerebral peduncles. Rostrally the midbrain adjoins the diencephalon (thalamus, hypothalamus, etc.), while caudally it adjoins the hindbrain (pons, medulla and cerebellum). In the rostral direction, the midbrain noticeably splays laterally.
The midbrain is typically sectioned axially at the level of either the superior or the inferior colliculi. Visualizing these cross-sections as an upside-down bear face helps in remembering its structures, with the peduncles forming the ears, the aqueduct the mouth, and the tectum the chin.
Tectum
The tectum (Latin for roof) is the part of the midbrain dorsal to the cerebral aqueduct. The position of the tectum is contrasted with the tegmentum, which refers to the region in front of the ventricular system, or floor of the midbrain.
It is involved in certain reflexes in response to visual or auditory stimuli. The reticulospinal tract, which exerts some control over alertness, takes input from the tectum, and travels both rostrally and caudally from it.
The corpora quadrigemina are four mounds, called colliculi, in two pairs – a superior and an inferior pair – on the surface of the tectum. The superior colliculi process some visual information, aid the decussation of several fibres of the optic nerve (some fibres remain ipsilateral), and are involved with saccadic eye movements. The tectospinal tract connects the superior colliculi to the cervical nerves of the neck, and co-ordinates head and eye movements. Each superior colliculus also sends information to the corresponding lateral geniculate nucleus, with which it is directly connected. The homologous structure to the superior colliculus in non-mammalian vertebrates, including fish and amphibians, is called the optic tectum; in those animals, the optic tectum integrates sensory information from the eyes and certain auditory reflexes.
The inferior colliculi – located just above the trochlear nerve – process certain auditory information. Each inferior colliculus sends information to the corresponding medial geniculate nucleus, with which it is directly connected.
Cerebral aqueduct
The cerebral aqueduct is the part of the ventricular system which links the third ventricle (rostrally) with the fourth ventricle (caudally); as such it is responsible for continuing the circulation of cerebrospinal fluid. The cerebral aqueduct is a narrow channel located between the tectum and the tegmentum, and is surrounded by the periaqueductal grey, which has a role in analgesia, quiescence, and bonding. The dorsal raphe nucleus (which releases serotonin in response to certain neural activity) is located at the ventral side of the periaqueductal grey, at the level of the inferior colliculus.
The nuclei of two pairs of cranial nerves are similarly located at the ventral side of the periaqueductal grey – the pair of oculomotor nuclei (which control the eyelid, and most eye movements) is located at the level of the superior colliculus, while the pair of trochlear nuclei (which helps focus vision on more proximal objects) is located caudally to that, at the level of the inferior colliculus, immediately lateral to the dorsal raphe nucleus. The oculomotor nerve emerges from the nucleus by traversing the ventral width of the tegmentum, while the trochlear nerve emerges via the tectum, just below the inferior colliculus itself; the trochlear is the only cranial nerve to exit the brainstem dorsally. The Edinger-Westphal nucleus (which controls the shape of the lens and size of the pupil) is located between the oculomotor nucleus and the cerebral aqueduct.
Tegmentum
The midbrain tegmentum is the portion of the midbrain ventral to the cerebral aqueduct, and is much larger in size than the tectum. It communicates with the cerebellum by the superior cerebellar peduncles, which enter at the caudal end, medially, on the ventral side; the cerebellar peduncles are distinctive at the level of the inferior colliculus, where they decussate, but they dissipate more rostrally. Between these peduncles, on the ventral side, is the median raphe nucleus, which is involved in memory consolidation.
The main bulk of the tegmentum contains a complex synaptic network of neurons, primarily involved in homeostasis and reflex actions. It includes portions of the reticular formation. A number of distinct nerve tracts between other parts of the brain pass through it. The medial lemniscus – a narrow ribbon of fibres – passes through in a relatively constant axial position; at the level of the inferior colliculus it is near the lateral edge, on the ventral side, and retains a similar position rostrally (due to widening of the tegmentum towards the rostral end, the position can appear more medial). The spinothalamic tract – another ribbon-like region of fibres – is located at the lateral edge of the tegmentum; at the level of the inferior colliculus it is immediately dorsal to the medial lemniscus, but due to the rostral widening of the tegmentum, it is lateral to the medial lemniscus at the level of the superior colliculus.
A prominent pair of round, reddish, regions – the red nuclei (which have a role in motor co-ordination) – are located in the rostral portion of the midbrain, somewhat medially, at the level of the superior colliculus. The rubrospinal tract emerges from the red nucleus and descends caudally, primarily heading to the cervical portion of the spine, to implement the red nuclei's decisions. The area between the red nuclei, on the ventral side – known as the ventral tegmental area – is the largest dopamine-producing area in the brain, and is heavily involved in the neural reward system. The ventral tegmental area is in contact with parts of the forebrain – the mammillary bodies (from the Diencephalon) and hypothalamus (of the diencephalon).
Cerebral peduncles
The cerebral peduncles each form a lobe ventrally of the tegmentum, on either side of the midline. Beyond the midbrain, between the lobes, is the interpeduncular fossa, which is a cistern filled with cerebrospinal fluid.
The majority of each lobe constitutes the cerebral crus. The cerebral crus are the main tracts descending from the thalamus to caudal parts of the central nervous system; the central and medial ventral portions contain the corticobulbar and corticospinal tracts, while the remainder of each crus primarily contains tracts connecting the cortex to the pons. Older texts refer to the crus cerebri as the cerebral peduncle; however, the latter term actually covers all fibres communicating with the cerebrum (usually via the diencephalon), and therefore would include much of the tegmentum as well. The remainder of the crus pedunculi – small regions around the main cortical tracts – contain tracts from the internal capsule.
The portion of the lobes in connection with the tegmentum, except the most lateral portion, is dominated by a blackened band – the substantia nigra (literally black substance) – which is the only part of the basal ganglia system outside the forebrain. It is ventrally wider at the rostral end. By means of the basal ganglia, the substantia nigra is involved in motor-planning, learning, addiction, and other functions. There are two regions within the substantia nigra – one where neurons are densely packed (the pars compacta) and one where they are not (the pars reticulata), which serve a different role from one another within the basal ganglia system. The substantia nigra has extremely high production of melanin (hence the colour), dopamine, and noradrenalin; the loss of dopamine-producing neurons in this region contributes to the progression of Parkinson's disease.
Blood supply
The midbrain is supplied by the following arteries:
The tectum is supplied by the superior cerebellar artery.
The central part of the tegmentum is supplied by the paramedian branches of the basilar artery.
The lateral part of the midbrain is supplied by the posterior cerebral artery.
Venous blood from the midbrain is mostly drained into the basal vein as it passes around the peduncle. Some venous blood from the colliculi drains to the great cerebral vein.
Development
During embryonic development, the midbrain (also known as the mesencephalon) arises from the second vesicle of the neural tube, while the interior of this portion of the tube becomes the cerebral aqueduct. Unlike the other two vesicles – the forebrain and hindbrain – the midbrain undergoes no further subdivision for the remainder of neural development; it does not split into other brain areas, whereas the forebrain, for example, divides into the telencephalon and the diencephalon.
Throughout embryonic development, the cells within the midbrain continually multiply; this happens to a much greater extent ventrally than it does dorsally. The outward expansion compresses the still-forming cerebral aqueduct, which can result in partial or total obstruction, leading to congenital hydrocephalus. The tectum is derived in embryonic development from the alar plate of the neural tube.
Function
The midbrain is the uppermost part of the brainstem. Its substantia nigra is closely associated with motor system pathways of the basal ganglia. The human midbrain is archipallian in origin, meaning that its general architecture is shared with the most ancient of vertebrates. Dopamine produced in the substantia nigra and ventral tegmental area plays a role in movement, movement planning, excitation, motivation and habituation of species from humans to the most elementary animals such as insects. Laboratory mice from lines that have been selectively bred for high voluntary wheel running have enlarged midbrains. The midbrain helps to relay information for vision and hearing.
Related terms
The term "tectal plate" or "quadrigeminal plate" is used to describe the junction of the gray and white matter in the embryo.
| Biology and health sciences | Nervous system | Biology |
487315 | https://en.wikipedia.org/wiki/Weak%20base | Weak base | A weak base is a base that, upon dissolution in water, does not dissociate completely, so that the resulting aqueous solution contains only a small proportion of hydroxide ions and the concerned basic radical, and a large proportion of undissociated molecules of the base.
pH, Kb, and Kw
Bases yield solutions in which the hydrogen ion activity is lower than it is in pure water, i.e., the solution is said to have a pH greater than 7.0 at standard conditions, potentially as high as 14 (and even greater than 14 for some bases). The formula for pH is:

pH = −log10[H+]
Bases are proton acceptors; a base will receive a hydrogen ion from water, H2O, and the remaining H+ concentration in the solution determines pH. A weak base will have a higher H+ concentration than a stronger base because it is less completely protonated than a stronger base and, therefore, more hydrogen ions remain in its solution. Given its greater H+ concentration, the formula yields a lower pH value for the weak base. However, the pH of bases is usually calculated in terms of the OH− concentration. This is done because the H+ concentration is not a part of the reaction, whereas the OH− concentration is. The pOH is defined as:

pOH = −log10[OH−]
If we multiply the equilibrium constants of a conjugate acid (such as NH4+) and a conjugate base (such as NH3) we obtain:

Ka × Kb = ([H3O+][NH3]/[NH4+]) × ([NH4+][OH−]/[NH3]) = [H3O+][OH−]

As [H3O+][OH−] is just Kw, the self-ionization constant of water, we have

Ka × Kb = Kw

Taking the logarithm of both sides of the equation yields:

log Ka + log Kb = log Kw

Finally, multiplying both sides by −1, we obtain:

pKa + pKb = pKw
With pOH obtained from the pOH formula given above, the pH of the base can then be calculated from pH = pKw − pOH, where pKw = 14.00.
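As a numerical sanity check on this relation, the following sketch (an illustration added here, not part of the article) uses commonly tabulated dissociation constants for the NH4+/NH3 conjugate pair at 25 °C, Ka ≈ 5.6 × 10−10 and Kb ≈ 1.8 × 10−5; these specific values are assumptions supplied for the example.

```python
import math

# Commonly tabulated constants for the conjugate pair NH4+ / NH3 at 25 C
# (illustrative textbook values, not taken from the article text)
Ka = 5.6e-10   # acid dissociation constant of NH4+
Kb = 1.8e-5    # base dissociation constant of NH3

# Multiplying the two constants should recover the self-ionization
# constant of water, Kw, which is about 1.0e-14 at 25 C
Kw = Ka * Kb

pKa = -math.log10(Ka)
pKb = -math.log10(Kb)

print(Kw)          # close to 1.0e-14
print(pKa + pKb)   # close to pKw = 14.00
```

Any conjugate acid/base pair should reproduce the same sum, since pKa + pKb = pKw is independent of the particular base.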
A weak base persists in chemical equilibrium in much the same way as a weak acid does, with a base dissociation constant (Kb) indicating the strength of the base. For example, when ammonia is put in water, the following equilibrium is set up:

NH3 + H2O ⇌ NH4+ + OH−     Kb = [NH4+][OH−]/[NH3]
A base that has a large Kb will ionize more completely and is thus a stronger base. As shown above, the pH of the solution, which depends on the H+ concentration, increases with increasing OH− concentration; a greater OH− concentration means a smaller H+ concentration, therefore a greater pH. Strong bases have smaller H+ concentrations because they are more fully protonated, leaving fewer hydrogen ions in the solution. A smaller H+ concentration means a greater OH− concentration and, therefore, a greater Kb and a greater pH.
NaOH (s) (sodium hydroxide) is a stronger base than (CH3CH2)2NH (l) (diethylamine), which is a stronger base than NH3 (g) (ammonia). As the bases get weaker, their Kb values become smaller.
Percentage protonated
As seen above, the strength of a base is reflected in the pH of its solution. To help describe the strengths of weak bases, it is helpful to know the percentage protonated: the percentage of base molecules that have been protonated. A lower percentage corresponds with a lower pH, because both numbers result from the amount of protonation. A weak base is less protonated, leading to a lower pH and a lower percentage protonated.
The typical proton transfer equilibrium appears as such:

B + H2O ⇌ HB+ + OH−

B represents the base. The percentage protonated is then

percentage protonated = ([HB+]equilibrium / [B]initial) × 100%

In this formula, [B]initial is the initial molar concentration of the base, assuming that no protonation has occurred.
A typical pH problem
Calculate the pH and percentage protonation of a 0.20 M aqueous solution of pyridine, C5H5N. The Kb for C5H5N is 1.8 × 10−9.
First, write the proton transfer equilibrium:

C5H5N + H2O ⇌ C5H5NH+ + OH−     Kb = [C5H5NH+][OH−]/[C5H5N] = 1.8 × 10−9

The equilibrium table, with all concentrations in moles per liter, is

              C5H5N      C5H5NH+    OH−
initial       0.20       0          0
change        −x         +x         +x
equilibrium   0.20 − x   x          x

Substituting into Kb = x²/(0.20 − x) ≈ x²/0.20 gives x = [OH−] ≈ 1.9 × 10−5 mol/L, so pOH ≈ 4.72 and pH = 14.00 − 4.72 ≈ 9.28.

This means 0.0095% of the pyridine is in the protonated form of C5H5NH+.
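The arithmetic of this worked example can be reproduced numerically. The sketch below (an illustration added here, not part of the article) applies the usual weak-base approximation, Kb ≈ x²/[B]initial, with the Kb and concentration given in the problem.

```python
import math

# Data from the worked example: 0.20 M pyridine, Kb = 1.8e-9
Kb = 1.8e-9
c0 = 0.20          # initial concentration of C5H5N in mol/L

# Weak-base approximation: Kb = x^2 / (c0 - x) ~ x^2 / c0,
# valid here because Kb is very small compared to c0
x = math.sqrt(Kb * c0)        # x = [OH-] at equilibrium

pOH = -math.log10(x)
pH = 14.00 - pOH              # pH = pKw - pOH
percent_protonated = 100 * x / c0

print(round(pH, 2))                  # about 9.28
print(round(percent_protonated, 4)) # about 0.0095 (percent)
```

Because the computed x (≈ 1.9 × 10−5) is far smaller than 0.20, the approximation of neglecting x in the denominator is justified here.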
Examples
Alanine
Ammonia, NH3
Methylamine, CH3NH2
Ammonium hydroxide, NH4OH
Simple facts
An example of a weak base is ammonia. It does not contain hydroxide ions, but it reacts with water to produce ammonium ions and hydroxide ions.
The position of equilibrium varies from base to base when a weak base reacts with water. The further to the left it is, the weaker the base.
When there is a hydrogen ion gradient across a biological membrane, some weak bases become concentrated on one side of the membrane. Weak bases tend to accumulate in acidic fluids: acidic gastric fluid contains a higher concentration of weak bases than plasma does, and acidic urine excretes weak bases faster than alkaline urine does.
| Physical sciences | Concepts | Chemistry |
487493 | https://en.wikipedia.org/wiki/Group%208%20element | Group 8 element |
Group 8 is a group (column) of chemical elements in the periodic table. It consists of iron (Fe), ruthenium (Ru), osmium (Os) and hassium (Hs). "Group 8" is the modern standard designation for this group, adopted by the IUPAC in 1990. It should not be confused with "group VIIIA" in the CAS system, which is group 18 (current IUPAC), the noble gases. In the older group naming systems, this group was combined with groups 9 and 10 and called group "VIIIB" in the Chemical Abstracts Service (CAS) "U.S. system", or "VIII" in the old IUPAC (pre-1990) "European system" (and in Mendeleev's original table). The elements in this group are all transition metals that lie in the d-block of the periodic table.
While groups (columns) of the periodic table are usually named after their lightest member (as in "the oxygen group" for group 16), the term "iron group" has historically been used differently; most often, it means a set of adjacent elements on period (row) 4 of the table that includes iron, such as chromium, manganese, iron, cobalt, and nickel, or only the last three, or some other set, depending on the context.
Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior.
Basic properties
The following is copied from the pages of Iron, Ruthenium, Osmium, and Hassium respectively.
Pristine and smooth pure iron surfaces are a mirror-like silvery-gray. Iron reacts readily with oxygen and water to produce brown-to-black hydrated iron oxides, commonly known as rust. Unlike the oxides of some other metals that form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing more fresh surfaces for corrosion. High-purity irons (e.g. electrolytic iron) are more resistant to corrosion.
Because it hardens platinum and palladium alloys, ruthenium is used in electrical contacts, where a thin film is sufficient to achieve the desired durability. With its similar properties to and lower cost than rhodium, electric contacts are a major use of ruthenium. The ruthenium plate is applied to the electrical contact and electrode base metal by electroplating or sputtering.
Osmium is a hard but brittle metal that remains lustrous even at high temperatures. It has a very low compressibility. Correspondingly, its bulk modulus is extremely high, reported between 395 and 462 GPa, which rivals that of diamond (443 GPa). The hardness of osmium is moderately high at 4 GPa. Because of its hardness, brittleness, low vapor pressure (the lowest of the platinum-group metals), and very high melting point (the fourth highest of all elements, after carbon, tungsten, and rhenium), solid osmium is difficult to machine, form, or work.
Very few properties of hassium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that hassium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, such as the enthalpy of adsorption of hassium tetroxide, but the properties of hassium metal remain unknown and only predictions are available. Despite its radioactivity, however, chemists have formed hassium tetroxide and sodium hassate(VIII) through various means.
Occurrence and production
In terms of mass, iron is the fourth most common element within the Earth's crust. It is found in many minerals, such as hematite, magnetite, and taconite. Iron is commercially produced by heating these minerals in a blast furnace with coke and calcium carbonate.
Ruthenium is a very rare metal in Earth's crust. It is often found in minerals such as pentlandite and pyroxenite. It can be commercially obtained as a waste product from refining nickel.
Osmium is found in osmiridium. It can also be obtained as a waste product from refining nickel.
Hassium is extremely radioactive, and as such is not found naturally in the Earth's crust. It is produced via the bombardment of lead-208 atoms with iron-58 atoms.
Biological role
Iron is a mineral used in the human body that is essential for good health. It is a component in the proteins of hemoglobin and myoglobin, both of which are responsible for transporting oxygen around the body. Iron is a part of some hormones as well. A lack of iron in the body can cause iron deficiency anemia, and an excess of iron in the body can be toxic.
Some ruthenium-containing molecules may be used to fight cancer. Normally, however, ruthenium plays no role in the human body.
Both osmium and hassium have no known biological roles.
| Physical sciences | Group 8 | Chemistry |
487497 | https://en.wikipedia.org/wiki/Group%209%20element | Group 9 element |
Group 9, by modern IUPAC numbering, is a group (column) of chemical elements in the d-block of the periodic table. Members of Group 9 include cobalt (Co), rhodium (Rh), iridium (Ir) and meitnerium (Mt). These elements are among the rarest of the transition metals.
Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior; however, rhodium deviates from the pattern.
History
"Group 9" is the modern standard designation for this group, adopted by the IUPAC in 1990. In the older group naming systems, this group was combined with group 8 (iron, ruthenium, osmium, and hassium) and group 10 (nickel, palladium, platinum, and darmstadtium) and called group "VIIIB" in the Chemical Abstracts Service (CAS) "U.S. system", or "VIII" in the old IUPAC (pre-1990) "European system" (and in Mendeleev's original table).
Cobalt
Cobalt compounds have been used for centuries to impart a rich blue color to glass, glazes, and ceramics. Cobalt has been detected in Egyptian sculpture, Persian jewelry from the third millennium BC, in the ruins of Pompeii, destroyed in 79 AD, and in China, dating from the Tang dynasty (618–907 AD) and the Ming dynasty (1368–1644 AD).
Swedish chemist Georg Brandt (1694–1768) is credited with discovering cobalt c. 1735, showing it to be a previously unknown element, distinct from bismuth and other traditional metals. Brandt called it a new "semi-metal". He showed that compounds of cobalt metal were the source of the blue color in glass, which previously had been attributed to the bismuth found with cobalt. Cobalt became the first metal to be discovered since prehistoric times; all other known metals (iron, copper, silver, gold, zinc, mercury, tin, lead and bismuth) had no recorded discoverers.
Rhodium
Rhodium was discovered in 1803 by William Hyde Wollaston, soon after he discovered palladium. He used crude platinum ore presumably obtained from South America. His procedure dissolved the ore in aqua regia and neutralized the acid with sodium hydroxide (NaOH). He then precipitated the platinum as ammonium chloroplatinate by adding ammonium chloride (). Most other metals like copper, lead, palladium, and rhodium were precipitated with zinc. Diluted nitric acid dissolved all but palladium and rhodium. Of these, palladium dissolved in aqua regia but rhodium did not, and the rhodium was precipitated by the addition of sodium chloride as . After being washed with ethanol, the rose-red precipitate was reacted with zinc, which displaced the rhodium in the ionic compound and thereby released the rhodium as free metal.
Iridium
Chemists who studied platinum dissolved it in aqua regia (a mixture of hydrochloric and nitric acids) to create soluble salts. They always observed a small amount of a dark, insoluble residue. In 1803, British scientist Smithson Tennant (1761–1815) analyzed the insoluble residue and concluded that it must contain a new metal. Vauquelin treated the powder alternately with alkali and acids and obtained a volatile new oxide, which he believed to be of this new metal, which he named ptene, from the Greek word ptēnós, "winged". Tennant, who had the advantage of a much greater amount of residue, continued his research and identified the two previously undiscovered elements in the black residue, iridium and osmium. He obtained dark red crystals (probably of Na2[IrCl6]·nH2O) by a sequence of reactions with sodium hydroxide and hydrochloric acid. He named iridium after Iris, the Greek winged goddess of the rainbow and the messenger of the Olympian gods, because many of the salts he obtained were strongly colored. Discovery of the new elements was documented in a letter to the Royal Society on June 21, 1804.
Meitnerium
Meitnerium was first synthesized on August 29, 1982, by a German research team led by Peter Armbruster and Gottfried Münzenberg at the Institute for Heavy Ion Research (Gesellschaft für Schwerionenforschung) in Darmstadt. The team bombarded a target of bismuth-209 with accelerated nuclei of iron-58 and detected a single atom of the isotope meitnerium-266:
209Bi + 58Fe → 266Mt + n
This work was confirmed three years later at the Joint Institute for Nuclear Research at Dubna (then in the Soviet Union).
Properties
[*] Predicted.
The first three elements are hard silvery-white metals:
Cobalt is a metallic element that can be used to turn glass a deep blue color. Cobalt is primarily used in lithium-ion batteries, and in the manufacture of magnetic, wear-resistant and high-strength alloys. The compounds cobalt silicate and cobalt(II) aluminate (CoAl2O4, cobalt blue) give a distinctive deep blue color to glass, ceramics, inks, paints and varnishes. Cobalt occurs naturally as only one stable isotope, cobalt-59. Cobalt-60 is a commercially important radioisotope, used as a radioactive tracer and for the production of high-energy gamma rays. Cobalt is also used in the petroleum industry as a catalyst when refining crude oil. This is to clean it of its sulfur content, which is very polluting when burned and causes acid rain.
Rhodium can be used in jewelry as a shiny metal. Rhodium is a hard, silvery, durable metal that has a high reflectance. Rhodium metal does not normally form an oxide, even when heated. Oxygen is absorbed from the atmosphere only at the melting point of rhodium but is released on solidification. Rhodium has both a higher melting point and lower density than platinum. It is not attacked by most acids as it is completely insoluble in nitric acid and dissolves slightly in aqua regia.
Iridium is mainly used as a hardening agent for platinum alloys. Iridium is the most corrosion-resistant metal known as it is not attacked by acids, including aqua regia. In the presence of oxygen, it reacts with cyanide salts. Traditional oxidants also react, including the halogens and oxygen at higher temperatures. Iridium also reacts directly with sulfur at atmospheric pressure to yield iridium disulfide.
All known isotopes of meitnerium are radioactive with short half-lives. Only minute quantities have been synthesized in laboratories. It has not been isolated in pure form, and its physical and chemical properties have not been determined yet. Based on what is known, meitnerium is considered a homologue to iridium.
Biological role
Of the group 9 elements, only cobalt has a biological role. It is a key constituent of cobalamin, also known as vitamin B12, the primary biological reservoir of cobalt as an ultratrace element. Bacteria in the stomachs of ruminant animals convert cobalt salts into vitamin B12, a compound which can only be produced by bacteria or archaea. A minimal presence of cobalt in soils therefore markedly improves the health of grazing animals, and an uptake of 0.20 mg/kg a day is recommended, because they have no other source of vitamin B12.
Proteins based on cobalamin use corrin to hold the cobalt. Coenzyme B12 features a reactive C-Co bond that participates in the reactions. In humans, B12 has two types of alkyl ligand: methyl and adenosyl. MeB12 promotes methyl (−CH3) group transfers. The adenosyl version of B12 catalyzes rearrangements in which a hydrogen atom is directly transferred between two adjacent atoms with concomitant exchange of the second substituent, X, which may be a carbon atom with substituents, an oxygen atom of an alcohol, or an amine. Methylmalonyl coenzyme A mutase (MUT) converts methylmalonyl-CoA (MMl-CoA) to succinyl-CoA (Su-CoA), an important step in the extraction of energy from proteins and fats.
| Physical sciences | Group 9 | Chemistry |
487510 | https://en.wikipedia.org/wiki/Group%2012%20element | Group 12 element |
Group 12, by modern IUPAC numbering, is a group of chemical elements in the periodic table. It includes zinc (Zn), cadmium (Cd), mercury (Hg), and copernicium (Cn). Formerly this group was named IIB (pronounced as "group two B", as the "II" is a Roman numeral) by CAS and old IUPAC system.
The three group 12 elements that occur naturally are zinc, cadmium and mercury. They are all widely used in electric and electronic applications, as well as in various alloys. The first two members of the group share similar properties as they are solid metals under standard conditions. Mercury is the only metal that is known to be a liquid at room temperature – as copernicium's boiling point has not yet been measured accurately enough, it is not yet known whether it is a liquid or a gas under standard conditions. While zinc is very important in the biochemistry of living organisms, cadmium and mercury are both highly toxic. As copernicium does not occur in nature, it has to be synthesized in the laboratory.
Physical and atomic properties
Like other groups of the periodic table, the members of group 12 show patterns in their electron configurations, especially in the outermost shells, which result in trends in their chemical behavior:
The group 12 elements are all soft, diamagnetic, divalent metals. They have the lowest melting points among all transition metals. Zinc is bluish-white and lustrous, though most common commercial grades of the metal have a dull finish. Zinc is also referred to in nonscientific contexts as spelter. Cadmium is soft, malleable, ductile, and bluish-white in color. Mercury is a liquid, heavy, silvery-white metal. It is the only common liquid metal at ordinary temperatures, and as compared to other metals, it is a poor conductor of heat, but a fair conductor of electricity.
The table below is a summary of the key physical properties of the group 12 elements. The data for copernicium is based on relativistic density-functional theory simulations.
Zinc is somewhat less dense than iron and has a hexagonal crystal structure. The metal is hard and brittle at most temperatures but becomes malleable between 100 and 150 °C. Above 210 °C, the metal becomes brittle again and can be pulverized by beating. Zinc is a fair conductor of electricity. For a metal, zinc has a relatively low melting point (419.5 °C) and boiling point (907 °C). Cadmium is similar in many respects to zinc but forms complex compounds. Unlike other metals, cadmium is resistant to corrosion and as a result it is used as a protective layer when deposited on other metals. As a bulk metal, cadmium is insoluble in water and is not flammable; however, in its powdered form it may burn and release toxic fumes. Mercury has an exceptionally low melting temperature for a d-block metal. A complete explanation of this fact requires a deep excursion into quantum physics, but it can be summarized as follows: mercury has a unique electronic configuration in which electrons fill all the available 1s, 2s, 2p, 3s, 3p, 3d, 4s, 4p, 4d, 4f, 5s, 5p, 5d and 6s subshells. Because such a configuration strongly resists the removal of an electron, mercury behaves similarly to the noble gas elements, which form weak bonds and hence solids that melt easily. The stability of the 6s shell is due to the presence of a filled 4f shell. An f shell poorly screens the nuclear charge, which increases the attractive Coulomb interaction between the 6s shell and the nucleus (see lanthanide contraction). The absence of a filled inner f shell is the reason for the somewhat higher melting temperatures of cadmium and zinc, although both these metals still melt easily and, in addition, have unusually low boiling points. Gold has atoms with one less 6s electron than mercury. Those electrons are more easily removed and are shared between the gold atoms, forming relatively strong metallic bonds.
Zinc, cadmium and mercury form a large range of alloys. Among the zinc-containing ones, brass is an alloy of zinc and copper. Other metals long known to form binary alloys with zinc are aluminium, antimony, bismuth, gold, iron, lead, mercury, silver, tin, magnesium, cobalt, nickel, tellurium and sodium. While neither zinc nor zirconium is ferromagnetic, their alloy exhibits ferromagnetism below 35 K. Cadmium is used in many kinds of solder and bearing alloys, due to a low coefficient of friction and fatigue resistance. It is also found in some of the lowest-melting alloys, such as Wood's metal. Because it is a liquid, mercury dissolves other metals, and the alloys that are formed are called amalgams. For example, such amalgams are known with gold, zinc, sodium, and many other metals. Because iron is an exception, iron flasks have traditionally been used to trade mercury. Other metals that do not form amalgams with mercury include tantalum, tungsten and platinum. Sodium amalgam is a common reducing agent in organic synthesis, and is also used in high-pressure sodium lamps. Mercury readily combines with aluminium to form a mercury-aluminium amalgam when the two pure metals come into contact. Since the amalgam reacts with air to give aluminium oxide, small amounts of mercury corrode aluminium. For this reason, mercury is not allowed aboard an aircraft under most circumstances because of the risk of it forming an amalgam with exposed aluminium parts in the aircraft.
Chemistry
Most of the chemistry has been observed only for the first three members of the group 12. The chemistry of copernicium is not well established and therefore the rest of the section deals only with zinc, cadmium and mercury.
Periodic trends
All elements in this group are metals. The similarity of the metallic radii of cadmium and mercury is an effect of the lanthanide contraction. Thus, the trend in this group is unlike the trend in group 2, the alkaline earths, where metallic radius increases smoothly from top to bottom of the group. All three metals have relatively low melting and boiling points, indicating that the metallic bond is relatively weak, with relatively little overlap between the valence band and the conduction band. Thus, zinc is close to the boundary between metallic and metalloid elements, which is usually placed between gallium and germanium, though gallium participates in semiconductors such as gallium arsenide.
Zinc and cadmium are electropositive while mercury is not. As a result, zinc and cadmium metal are good reducing agents. The elements of group 12 have an oxidation state of +2 in which the ions have the rather stable d10 electronic configuration, with a full sub-shell. However, mercury can easily be reduced to the +1 oxidation state; usually, as in the ion , two mercury(I) ions come together to form a metal-metal bond and a diamagnetic species. Cadmium can also form species such as [Cd2Cl6]4− in which the metal's oxidation state is +1. Just as with mercury, the formation of a metal-metal bond results in a diamagnetic compound in which there are no unpaired electrons, making the species very reactive. Zinc(I) is known mostly in the gas phase, in such compounds as linear Zn2Cl2, analogous to calomel. In the solid phase, the rather exotic compound decamethyldizincocene (Cp*Zn–ZnCp*) is known.
Classification
The elements in group 12 are usually considered to be d-block elements, but not transition elements as the d-shell is full. Some authors classify these elements as main-group elements because the valence electrons are in ns2 orbitals. Nevertheless, they share many characteristics with the neighboring group 11 elements on the periodic table, which are almost universally considered to be transition elements. For example, zinc shares many characteristics with the neighboring transition metal, copper. Zinc complexes merit inclusion in the Irving-Williams series as zinc forms many complexes with the same stoichiometry as complexes of copper(II), albeit with smaller stability constants. There is little similarity between cadmium and silver as compounds of silver(II) are rare and those that do exist are very strong oxidizing agents. Likewise the common oxidation state for gold is +3, which precludes there being much common chemistry between mercury and gold, though there are similarities between mercury(I) and gold(I) such as the formation of linear dicyano complexes, [M(CN)2]−. According to IUPAC's definition of a transition metal as an element whose atom has an incomplete d sub-shell, or which can give rise to cations with an incomplete d sub-shell, zinc and cadmium are not transition metals, while mercury is. This is because only mercury is known to have a compound where its oxidation state is higher than +2, in mercury(IV) fluoride (though its existence is disputed, as later experiments trying to confirm its synthesis could not find evidence of HgF4). However, this classification is based on one highly atypical compound seen under non-equilibrium conditions and is at odds with mercury's more typical chemistry, and Jensen has suggested that it would be better to regard mercury as not being a transition metal.
Relationship with the alkaline earth metals
Although group 12 lies in the d-block of the modern 18-column periodic table, the d electrons of zinc, cadmium, and (almost always) mercury behave as core electrons and do not take part in bonding. This behavior is similar to that of the main-group elements, but is in stark contrast to that of the neighboring group 11 elements (copper, silver, and gold), which also have filled d-subshells in their ground-state electron configuration but behave chemically as transition metals. For example, the bonding in chromium(II) sulfide (CrS) involves mainly the 3d electrons; that in iron(II) sulfide (FeS) involves both the 3d and 4s electrons; but that of zinc sulfide (ZnS) involves only the 4s electrons and the 3d electrons behave as core electrons. Indeed, useful comparison can be made between their properties and the first two members of group 2, beryllium and magnesium, and in earlier short-form periodic table layouts, this relationship is illustrated more clearly. For instance, zinc and cadmium are similar to beryllium and magnesium in their atomic radii, ionic radii, electronegativities, and also in the structure of their binary compounds and their ability to form complex ions with many nitrogen and oxygen ligands, such as complex hydrides and amines. However, beryllium and magnesium are small atoms, unlike the heavier alkaline earth metals and like the group 12 elements (which have a greater nuclear charge but the same number of valence electrons), and the periodic trends down group 2 from beryllium to radium (similar to that of the alkali metals) are not as smooth when going down from beryllium to mercury (which is more similar to that of the p-block main groups) due to the d-block and lanthanide contractions. It is also the d-block and lanthanide contractions that give mercury many of its distinctive properties.
Compounds
All three metal ions form many tetrahedral species, such as [MCl4]2−. Both zinc and cadmium can also form octahedral complexes such as the aqua ions [M(H2O)6]2+ which are present in aqueous solutions of salts of these metals. Covalent character is achieved by using the s and p orbitals. Mercury, however, rarely exceeds a coordination number of four. Coordination numbers of 2, 3, 5, 7 and 8 are also known.
History
The elements of group 12 have been found throughout history, being used since ancient times to being discovered in laboratories. The group itself has not acquired a trivial name, but it has been called group IIB in the past.
Zinc
Zinc has been found being used in impure forms in ancient times as well as in alloys such as brass that have been found to be over 2000 years old. Zinc was distinctly recognized as a metal under the designation of Yasada or Jasada in the medical Lexicon ascribed to the Hindu king Madanapala (of Taka dynasty) and written about the year 1374. The metal was also of use to alchemists. The name of the metal was first documented in the 16th century, and is probably derived from the German for the needle-like appearance of metallic crystals.
The isolation of metallic zinc in the West may have been achieved independently by several people in the 17th century. German chemist Andreas Marggraf is usually given credit for discovering pure metallic zinc in a 1746 experiment by heating a mixture of calamine and charcoal in a closed vessel without copper to obtain a metal. Experiments on frogs by the Italian doctor Luigi Galvani in 1780 with brass paved the way for the discovery of electrical batteries, galvanization and cathodic protection. In 1799, Galvani's friend, Alessandro Volta, invented the Voltaic pile. The biological importance of zinc was not discovered until 1940 when carbonic anhydrase, an enzyme that scrubs carbon dioxide from blood, was shown to have zinc in its active site.
Cadmium
In 1817, cadmium was discovered in Germany as an impurity in zinc carbonate minerals (calamine) by Friedrich Stromeyer and Karl Samuel Leberecht Hermann. It was named after the Latin cadmia for "calamine", a cadmium-bearing mixture of minerals, which was in turn named after the Greek mythological character, Κάδμος Cadmus, the founder of Thebes. Stromeyer eventually isolated cadmium metal by roasting and reduction of the sulfide.
In 1927, the International Conference on Weights and Measures redefined the meter in terms of a red cadmium spectral line (1 m = 1,553,164.13 wavelengths). This definition has since been changed (see krypton). At the same time, the International Prototype Meter was used as standard for the length of a meter until 1960, when at the General Conference on Weights and Measures the meter was defined in terms of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in vacuum.
Mercury
Mercury has been found in Egyptian tombs which have been dated back to 1500 BC, where mercury was used in cosmetics. It was also used by the ancient Chinese who believed it would improve and prolong health. By 500 BC mercury was used to make amalgams (Medieval Latin amalgama, "alloy of mercury") with other metals. Alchemists thought of mercury as the First Matter from which all metals were formed. They believed that different metals could be produced by varying the quality and quantity of sulfur contained within the mercury. The purest of these was gold, and mercury was called for in attempts at the transmutation of base (or impure) metals into gold, which was the goal of many alchemists.
Hg is the modern chemical symbol for mercury. It comes from hydrargyrum, a Latinized form of the Greek word Ύδραργυρος (hydrargyros), which is a compound word meaning "water-silver" (hydr- = water, argyros = silver) — since it is liquid like water and shiny like silver. The element was named after the Roman god Mercury, known for speed and mobility. It is associated with the planet Mercury; the astrological symbol for the planet is also one of the alchemical symbols for the metal. Mercury is the only metal for which the alchemical planetary name became the common name.
Copernicium
The heaviest known group 12 element, copernicium, was first created on February 9, 1996, at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany, by Sigurd Hofmann, Victor Ninov et al. It was then officially named by the International Union of Pure and Applied Chemistry (IUPAC) after Nicolaus Copernicus on February 19, 2010, the 537th anniversary of Copernicus' birth.
Occurrence
Like in most other d-block groups, the abundance in Earth's crust of group 12 elements decreases with higher atomic number. Zinc is with 65 parts per million (ppm) the most abundant in the group while cadmium with 0.1 ppm and mercury with 0.08 ppm are orders of magnitude less abundant. Copernicium, as a synthetic element with a half-life of a few minutes, may only be present in the laboratories where it was produced.
Group 12 metals are chalcophiles, meaning the elements have low affinities for oxides and prefer to bond with sulfides. Chalcophiles formed as the crust solidified under the reducing conditions of the early Earth's atmosphere. The commercially most important minerals of group 12 elements are sulfide minerals. Sphalerite, which is a form of zinc sulfide, is the most heavily mined zinc-containing ore because its concentrate contains 60–62% zinc. No significant deposits of cadmium-containing ores are known. Greenockite (CdS), the only cadmium mineral of importance, is nearly always associated with sphalerite (ZnS). This association is caused by the geochemical similarity between zinc and cadmium which makes geological separation unlikely. As a consequence, cadmium is produced mainly as a byproduct from mining, smelting, and refining sulfidic ores of zinc, and, to a lesser degree, lead and copper. One place where metallic cadmium can be found is the Vilyuy River basin in Siberia. Although mercury is an extremely rare element in the Earth's crust, because it does not blend geochemically with those elements that constitute the majority of the crustal mass, mercury ores can be highly concentrated considering the element's abundance in ordinary rock. The richest mercury ores contain up to 2.5% mercury by mass, and even the leanest concentrated deposits are at least 0.1% mercury (12,000 times average crustal abundance). It is found either as a native metal (rare) or in cinnabar (HgS), corderoite, livingstonite and other minerals, with cinnabar being the most common ore.
While mercury and zinc minerals are found in large enough quantities to be mined, cadmium is too similar to zinc to form its own deposits and is therefore always present in small quantities in zinc ores, from which it is recovered. Identified world zinc resources total about 1.9 billion tonnes. Large deposits are in Australia, Canada and the United States, with the largest reserves in Iran. At the current rate of consumption, these reserves are estimated to be depleted sometime between 2027 and 2055. About 346 million tonnes have been extracted throughout history to 2002, and one estimate found that about 109 million tonnes of that remains in use. In 2005, China was the top producer of mercury with almost a two-thirds global share, followed by Kyrgyzstan. Several other countries are believed to have unrecorded production of mercury from copper electrowinning processes and from recovery from effluents. Because of the high toxicity of mercury, both the mining of cinnabar and the refining of mercury are hazardous and historic causes of mercury poisoning.
Production
Zinc is the fourth most common metal in use, trailing only iron, aluminium, and copper, with an annual production of about 10 million tonnes. Worldwide, 95% of zinc is mined from sulfidic ore deposits, in which sphalerite (ZnS) is nearly always mixed with the sulfides of copper, lead and iron. Zinc metal is produced using extractive metallurgy. Roasting converts the zinc sulfide concentrate produced during processing to zinc oxide. For further processing, two basic methods are used: pyrometallurgy or electrowinning. Pyrometallurgical processing reduces zinc oxide with carbon or carbon monoxide at high temperatures into the metal, which is distilled as zinc vapor. The zinc vapor is collected in a condenser. Electrowinning processing leaches zinc from the ore concentrate with sulfuric acid. After this step, electrolysis is used to produce zinc metal.
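As a rough illustration of the electrolysis step, Faraday's law relates the charge passed through the cell to the mass of zinc deposited. This is a sketch under idealized assumptions (100% current efficiency; the current and time values are illustrative, not from the text):

```python
# Faraday's law sketch for zinc electrowinning: m = M * I * t / (n * F)
M_ZN = 65.38        # molar mass of zinc, g/mol
N_ELECTRONS = 2     # Zn2+ + 2e- -> Zn, two electrons per atom
F = 96485.0         # Faraday constant, C/mol

def zinc_deposited_g(current_a: float, time_s: float) -> float:
    """Ideal (100% current efficiency) mass of zinc plated out, in grams."""
    return M_ZN * current_a * time_s / (N_ELECTRONS * F)

# e.g. a 1000 A cell running for one hour deposits roughly 1.22 kg of zinc
print(zinc_deposited_g(1000, 3600))
```

Industrial cells run below this ideal figure, since some current is lost to side reactions such as hydrogen evolution.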
Cadmium is a common impurity in zinc ores, and it is mostly isolated during the production of zinc. Some zinc concentrates from sulfidic zinc ores contain up to 1.4% cadmium. If the zinc is smelted, cadmium is isolated from the flue dust by vacuum distillation; in the electrowinning route, cadmium sulfate is precipitated out of the electrolysis solution.
Mercury is extracted by heating cinnabar in a current of air and condensing the vapor.
Superheavy elements such as copernicium are produced by bombarding lighter elements in particle accelerators to induce fusion reactions. Whereas most of the isotopes of copernicium can be synthesized directly this way, some heavier ones have only been observed as decay products of elements with higher atomic numbers. The first fusion reaction to produce copernicium was performed by the GSI in 1996, which reported the detection of two decay chains of copernicium-277 (though one was later retracted, as it had been based on data fabricated by Victor Ninov):
²⁰⁸Pb + ⁷⁰Zn → ²⁷⁷Cn + n
Applications
Due to the physical similarities they share, the group 12 elements find use in many common situations. Zinc and cadmium are commonly used as anti-corrosion (galvanization) agents because, as sacrificial coatings, they attract local oxidation until they completely corrode. These protective coatings can be applied to other metals by hot-dip galvanizing, in which an object is dipped into the molten metal, or by electroplating, which may be passivated by the use of chromate salts. Group 12 elements are also used in electrochemistry, as they may act as an alternative to the standard hydrogen electrode, in addition to serving as secondary reference electrodes.
In the US, zinc is used predominantly for galvanizing (55%) and for brass, bronze and other alloys (37%). The relative reactivity of zinc and its ability to attract oxidation to itself make it an efficient sacrificial anode in cathodic protection (CP). For example, cathodic protection of a buried pipeline can be achieved by connecting anodes made from zinc to the pipe. Zinc acts as the anode (negative terminal) by slowly corroding away as it passes electric current to the steel pipeline. Zinc is also used to cathodically protect metals that are exposed to sea water.
Zinc is used as an anode material for batteries such as in zinc–carbon batteries or zinc–air battery/fuel cells.
A widely used alloy which contains zinc is brass, in which copper is alloyed with anywhere from 3% to 45% zinc, depending upon the type of brass. Brass is generally more ductile and stronger than copper and has superior corrosion resistance. These properties make it useful in communication equipment, hardware, musical instruments, and water valves. Other widely used alloys that contain zinc include nickel silver, typewriter metal, soft and aluminium solder, and commercial bronze. Alloys of primarily zinc with small amounts of copper, aluminium, and magnesium are useful in die casting as well as spin casting, especially in the automotive, electrical, and hardware industries. These alloys are marketed under the name Zamak. Roughly one quarter of all zinc output in the United States (2009) is consumed in the form of zinc compounds, a variety of which are used industrially.
Cadmium has many common industrial uses: it is a key component in battery production, is present in cadmium pigments and coatings, and is commonly used in electroplating. In 2009, 86% of cadmium was used in batteries, predominantly in rechargeable nickel–cadmium batteries. The European Union banned the use of cadmium in electronics in 2004, with several exceptions, and reduced the allowed content of cadmium in electronics to 0.002%. Cadmium electroplating, consuming 6% of global production, is used in the aircraft industry because of its ability to resist corrosion when applied to steel components.
Mercury is used primarily in the manufacture of industrial chemicals and in electrical and electronic applications. It is used in some thermometers, especially ones for measuring high temperatures. A still-increasing amount is used as gaseous mercury in fluorescent lamps, while most other applications are being slowly phased out due to health and safety regulations; in some applications mercury is replaced with the less toxic but considerably more expensive Galinstan alloy. Mercury and its compounds have been used in medicine, although they are much less common today than they once were, now that the toxic effects of mercury and its compounds are more widely understood. Mercury is still used as an ingredient in dental amalgams. In the late 20th century, the largest use of mercury was in the mercury cell process (also called the Castner–Kellner process) for the production of chlorine and caustic soda.
Copernicium has no use other than research due to its very high radioactivity.
Biological role and toxicity
The group 12 elements have multiple effects on biological organisms as cadmium and mercury are toxic while zinc is required by most plants and animals in trace amounts.
Zinc is an essential trace element, necessary for plants, animals, and microorganisms. It is "typically the second most abundant transition metal in organisms" after iron, and it is the only metal which appears in all enzyme classes. There are 2–4 grams of zinc distributed throughout the human body, and it plays "ubiquitous biological roles". A 2006 study estimated that about 10% of human proteins (2800) potentially bind zinc, in addition to hundreds which transport and traffic zinc. In the U.S., the Recommended Dietary Allowance (RDA) is 8 mg/day for women and 11 mg/day for men. Excessive supplementation may be harmful and should probably not exceed 20 mg/day in healthy people, although the U.S. National Research Council set a Tolerable Upper Intake of 40 mg/day.
Mercury and cadmium are toxic and may cause environmental damage if they enter rivers or rain water. This may result in contaminated crops as well as the bioaccumulation of mercury in a food chain leading to an increase in illnesses caused by mercury and cadmium poisoning.
| Physical sciences | Group 12 | Chemistry |
487518 | https://en.wikipedia.org/wiki/Group%2010%20element | Group 10 element |
Group 10, numbered by current IUPAC style, is the group of chemical elements in the periodic table that consists of nickel (Ni), palladium (Pd), platinum (Pt), and darmstadtium (Ds). All are d-block transition metals. All known isotopes of darmstadtium are radioactive with short half-lives, and are not known to occur in nature; only minute quantities have been synthesized in laboratories.
Characteristics
Chemical properties
The ground-state electronic configurations of palladium and platinum are exceptions to Madelung's rule. According to Madelung's rule, the electronic configurations of palladium and platinum are expected to be [Kr] 5s2 4d8 and [Xe] 4f14 5d8 6s2 respectively. However, the 5s orbital of palladium is empty, and the 6s orbital of platinum is only partially filled. The relativistic stabilization of the 7s orbital explains the predicted electron configuration of darmstadtium, which, unusually for this group, conforms to that predicted by the Aufbau principle. In general, the ground-state electronic configurations of heavier atoms and transition metals are more difficult to predict.
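The Madelung-rule prediction mentioned above can be generated mechanically: subshells are filled in order of increasing n + l, with ties broken by smaller n. A minimal sketch (function names are illustrative):

```python
# Generate the Madelung (n + l) filling order and the ground-state
# configuration it predicts -- the rule palladium and platinum violate.
SUBSHELL_LETTERS = "spdf"

def madelung_order(max_n: int = 7):
    # All subshells up to max_n (l capped at 3 = f), sorted by (n + l),
    # with ties broken by smaller n.
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    return sorted(shells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def predicted_config(z: int) -> str:
    parts = []
    for n, l in madelung_order():
        if z == 0:
            break
        electrons = min(z, 2 * (2 * l + 1))  # subshell capacity is 2(2l+1)
        parts.append(f"{n}{SUBSHELL_LETTERS[l]}{electrons}")
        z -= electrons
    return " ".join(parts)

# Madelung predicts ...5s2 4d8 for palladium (Z = 46), but the observed
# configuration is [Kr] 4d10, with the 5s orbital empty.
print(predicted_config(46))
```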
Group 10 elements are observed in oxidation states of +1 to +4. The +2 oxidation state is common for nickel and palladium, while +2 and +4 are common for platinum. Oxidation states of −2 and −1 have also been observed for nickel and platinum, and an oxidation state of +5 has been observed for palladium and platinum. Platinum has also been observed in oxidation states of −3 and +6. Theory suggests that platinum may attain a +10 oxidation state under specific conditions, but this remains to be shown empirically.
Physical properties
Darmstadtium has not been isolated in pure form, and its properties have not been conclusively observed; only nickel, palladium, and platinum have had their properties experimentally confirmed. Nickel, platinum, and palladium are typically silvery-white transition metals, and can also be readily obtained in powdered form. They are hard, have a high luster, and are highly ductile. Group 10 elements are resistant to tarnish (oxidation) at STP, are refractory, and have high melting and boiling points.
Occurrence and production
Nickel occurs naturally in ores, and it is the earth's 22nd most abundant element. Two prominent groups of ores from which it can be extracted are laterites and sulfide ores. Indonesia holds the world's largest nickel reserve, and is also its largest producer.
History
Discoveries of the elements
Nickel
The use of nickel, often mistaken for copper, dates as far back as 3500 BCE. Nickel has been discovered in a dagger dating to 3100 BCE, in Egyptian iron beads, in a bronze reamer found in Syria dating to 3500–3100 BCE, as copper–nickel alloys in coins minted in Bactria, in weapons and pots near the Senegal river, and in agricultural tools used by Mexicans in the 1700s. There is evidence to suggest that the use of nickel in antiquity came from meteoric iron, such as in the Sumerian name for iron, an-bar ("fire from heaven"), or in Hittite texts that describe iron's heavenly origins. Nickel was not formally named as an element until A. F. Cronstedt isolated the impure metal from "kupfernickel" (Old Nick's copper) in 1751. In 1804, J. B. Richter determined the physical properties of nickel using a purer sample, describing the metal as ductile and strong with a high melting point. The strength of nickel–steel alloys was described in 1889, and nickel steels have since seen extensive use, first for military applications and then in the development of corrosion- and heat-resistant alloys during the 20th century.
Palladium
Palladium was isolated by William Hyde Wollaston in 1803 while he was working on refining platinum metals. Palladium was in a residue left behind after platinum was precipitated out of a solution of hydrochloric acid and nitric acid as (NH4)PtCl6. Wollaston named it after the recently discovered asteroid 2 Pallas and anonymously sold small samples of the metal to a shop, which advertised it as a "new noble metal" called "Palladium, or New Silver". This raised doubts about its purity, source, and the identity of its discoverer, causing controversy. He eventually identified himself and read his paper on the discovery of palladium to the Royal Society in 1805.
Platinum
Prior to its formal discovery, platinum was used in jewelry by native Ecuadorians of the province of Esmeraldas. The metal was found in small grains mixed with gold in river deposits, which the workers sintered with gold to form small trinkets such as rings. The first published report of platinum was written by Antonio de Ulloa, a Spanish mathematician, astronomer, and naval officer who observed "platina" (little silver) in the gold mines of Ecuador during a French expedition in 1736. Miners found the "platina" difficult to separate from gold, leading to the abandonment of those mines. Charles Wood, an ironmaster, brought samples of the metal to England in 1741 and investigated its properties, observing its high melting point and its presence as small white grains in black metallic sand. Interest in the metal grew after Wood's findings were reported to the Royal Society. Henrik Teofilus Scheffer, a Swedish scientist, referred to the precious metal as "white gold" and the "seventh metal" in 1751, reporting its high durability, high density, and that it melted easily when mixed with copper or arsenic. Both Pierre-François Chabaneau (during the 1780s) and William Hyde Wollaston (during the 1800s) developed a powder metallurgy technique to produce malleable platinum, but kept their processes secret. However, their platinum ingots were brittle and tended to crack easily, likely due to impurities. In the 1800s, furnaces capable of sustaining high temperatures were invented, which eventually replaced powder metallurgy and introduced melted platinum to the market.
Applications
The group 10 metals share several uses. These include:
Decorative purposes, in the form of jewelry and electroplating.
Catalysts in a variety of chemical reactions.
Metal alloys.
Electrical components, due to their predictable changes in electrical resistivity with regard to temperature.
Superconductors, as components in alloys with other metals.
Biological role and toxicity
Platinum complexes are commonly used in chemotherapy as anticancer drugs due to their antitumor activity. Palladium complexes show only marginal antitumor activity, as they are more labile than their platinum counterparts.
| Physical sciences | Group 10 | Chemistry |
487691 | https://en.wikipedia.org/wiki/Dental%20floss | Dental floss | Dental floss is a cord of thin filaments, typically made of nylon or silk, used in interdental cleaning to remove food and dental plaque from between teeth or places a toothbrush has difficulty reaching or is unable to reach. Its regular use as part of oral cleaning is intended to maintain oral health.
Use of floss is recommended to prevent gingivitis and the build-up of plaque. The American Dental Association claims that up to 80% of plaque can be removed by flossing, and it may confer a particular benefit in individuals with orthodontic devices. However, empirical scientific evidence demonstrating the clinical benefit of flossing as an adjunct to routine tooth brushing alone remains limited.
History
Levi Spear Parmly (1790–1859), a dentist from New Orleans, is credited with inventing the first form of dental floss. In 1819, he recommended running a waxen silk thread "through the interstices of the teeth, between their necks and the arches of the gum, to dislodge that irritating matter which no brush can remove and which is the real source of disease." He considered this the most important part of oral care. Floss was not commercially available until 1882, when the Codman and Shurtleff company started producing unwaxed silk floss. In 1898, the Johnson & Johnson Corporation received the first patent for dental floss that was made from the same silk material used by doctors for silk stitches.
One of the earliest depictions of the use of dental floss in literary fiction is found in James Joyce's famous novel Ulysses (serialized 1918–1920), but the adoption of floss was low before World War II. During the war, nylon floss was developed by physician Charles C. Bass. Nylon floss was found to be better than silk because of its greater abrasion resistance and ability to be produced in great lengths and at various sizes.
Floss became part of American and Canadian daily personal dental care routines in the 1970s.
Use
Dental professionals recommend that a person floss once per day before or after brushing to reach the areas that the brush will not and allow the fluoride from the toothpaste to reach between the teeth. Floss is commonly supplied in plastic dispensers that contain 10 to 100 meters of floss. After pulling out approximately 40 cm of floss, the user pulls it against a blade in the dispenser to cut it off. The user then strings the piece of floss on a fork-like instrument, or holds it between their fingers using both hands with about 1–2 cm of floss exposed. The user guides the floss between each pair of teeth and gently curves it against the side of the tooth in a 'C' shape and guides it under the gumline. This removes particles of food stuck between teeth and dental plaque that adhere to dental surfaces below the gumline.
Types
Dental floss is commonly available in many forms, including waxed and unwaxed monofilaments and multifilaments. Dental floss made of monofilaments coated in wax slides easily between teeth, does not fray, and is generally higher in cost than its uncoated counterparts. The most important difference between available dental flosses is thickness. Waxed and unwaxed floss are available in varying widths. Studies have shown that there is no difference in the effectiveness of waxed and unwaxed dental floss, but some waxed types of dental floss are said to contain antibacterial agents and/or sodium fluoride. Factors to consider in choosing a floss include the amount of space between teeth and user preference. Dental tape is a type of floss that is wider and flatter than conventional floss. Dental tape is recommended for people with a larger tooth surface area.
The ability of different types of dental floss to remove dental plaque does not vary significantly; the least expensive floss has essentially the same impact on oral hygiene as the most expensive.
Factors to be considered when choosing the right floss or whether the use of floss as an interdental cleaning device is appropriate may be based on:
The tightness of the contact area: determines the width of floss
The contour of the gingival tissue
The roughness of the interproximal surface
The user's manual dexterity and preference: to determine if a supplemental device is required
Specialized plastic wands, or floss picks, have been produced to hold the floss. These may be attached to or separate from a floss dispenser. While wands do not pinch fingers like regular floss can, using a wand may be awkward and can also make it difficult to floss at all the angles possible with regular floss. These types of flossers also run the risk of missing the area under the gum line that needs to be flossed. On the other hand, the enhanced reach of a wand can make flossing the back teeth easier.
Dental floss is the most frequently recommended cleaning aid for teeth sides with a normal gingiva contour in which the spaces between teeth are tight and small. The dental term 'embrasure space' describes the size of the triangular-shaped space immediately under the contact point of two teeth. The size of the embrasure space is useful in selecting the most appropriate interdental cleaning aid. There are three interproximal embrasure types or classes as described below:
Type I – the gums completely fill the embrasure space
Type II – the gums partially fill the embrasure space
Type III – the gums do not fill the embrasure space
The table below describes the types of interdental non-powered self-care products available.
The table below describes the different types of Interdental powered self-care products available.
Efficacy
Evidence
The American Dental Association has stated that flossing in combination with brushing of teeth can help prevent gum disease and halitosis.
However, evidence favoring commonplace use of floss remains limited. A 2008 systematic review concluded that adjunct flossing was no more effective than tooth brushing alone in reducing plaque or gingivitis. The authors concluded that routine instruction of flossing in gingivitis patients as helpful adjunct therapy is not supported by scientific evidence, and that flossing recommendations should be made by dental professionals on an individual basis.
A 2011 Cochrane Database systematic review identified "some evidence from 12 studies that flossing in addition to tooth brushing reduces gingivitis compared to tooth brushing alone", and "weak, very unreliable evidence from 10 studies that flossing plus tooth brushing may be associated with a small reduction in plaque at 1 and 3 months." Studies of flossing behavior are based on self-report and many people do not floss properly. A 2006 review of 6 studies in which professionals flossed the teeth of school children over a period of 1.7 years showed a 40% reduction in the risk of tooth decay.
More recently, a 2019 Cochrane Database systematic review compared toothbrushing alone to interdental cleaning devices, and also compared flossing to other interdental cleaning methods. In all, 35 randomized control trials met the criteria for inclusion, with all but 2 studies at high risk for performance bias. The authors concluded that "overall, the evidence was low to very low certainty, and the effect sizes observed may not be clinically important."
As many authors note, the efficacy of flossing may be highly variable based on individual preference, technique, and motivation. Moreover, flossing may be a relatively more difficult and tedious method of interdental cleaning compared to an interdental brush.
Exclusion from US Dietary Guidelines in 2015
There was a controversy when the 2015 United States Dietary Guidelines for Americans did not include a recommendation about flossing. The U.S. Department of Health and Human Services and the U.S. Department of Agriculture publish Dietary Guidelines for Americans every five years. Guidelines published in 2000, 2005 and 2010 recommended flossing as part of a combined approach to preventing dental diseases. The 2010 Guidelines mention flossing once in 95 pages, in 2005 the word also appears once in 71 pages and it appears twice in the 38-page 2000 document.
In August 2016, an Associated Press (AP) article titled "Medical benefits of dental floss unproven" reported on the omission of flossing from the 2015 document. The article tied the omission to the AP's Freedom of Information Request to the departments of Health and Human Services and Agriculture where it asked for the scientific evidence behind the Guidelines' flossing recommendation noting that "The guidelines must be based on scientific evidence, under the law." The story was picked up by other news organizations including The New York Times in an article entitled "Feeling Guilty About Not Flossing? Maybe There's No Need".
The American Dental Association contacted the U.S. Department of Health and Human Services about the omission and reported that the omission of the flossing recommendation was due to the fact that the Dietary Guidelines chose to focus on diet and that the omission was not because the Department questions the efficacy of flossing. As reported by Medscape
A website managed by a maker of dental floss referred to the entire episode as "Flossgate".
Floss for orthodontic appliances
Orthodontic appliances, such as brackets, wires, and bands, can harbor plaque with more virulent changes in bacterial composition, which can ultimately cause a reduction in periodontal health as indicated by increased gingival recession, bleeding on probing, and plaque retention measurements. Furthermore, fixed appliances make plaque control more challenging and restrict the natural cleaning action of the tongue, lips, and cheeks in removing food and bacterial debris from tooth surfaces, and they also create new plaque stagnation areas that stimulate the colonisation of pathogenic bacteria. Patients undergoing orthodontic treatment may be recommended to maintain a high level of plaque control through not only conscientious toothbrushing but also proximal surface cleaning via interdental aids, with dental floss being the aid most recommended by dental professionals. Notably, small-scale clinical studies have demonstrated that dental floss, when used correctly, may lead to clinically significant improvements in proximal gingival health.
Floss threader
A floss threader is a loop of fiber shaped to produce better handling characteristics. Similar to fishing line, it is used to thread floss into small, hard-to-reach sites around teeth. Threaders are sometimes required to floss around dental braces, fixed retainers, and bridges.
Floss pick
A floss pick is a disposable oral hygiene device generally made of plastic and dental floss. The instrument is composed of two prongs extending from a thin plastic body of high-impact polystyrene material. A single piece of floss runs between the two prongs. The body of the floss pick generally tapers at its end in the shape of a toothpick.
There are two types of angled floss picks in the oral care industry, the Y-shaped angle and the F-shaped angle floss pick. At the base of the arch where the "Y" begins to branch there is a handle for gripping and maneuvering before it tapers off into a pick.
Floss picks are manufactured in a variety of shapes, colors and sizes for adults and children. The floss can be coated in fluoride, flavor or wax.
History of floss pick
In 1888, B.T. Mason wrapped a fibrous material around a toothpick and dubbed it the "combination tooth pick." In 1916, J.P. De L'eau invented a dental floss holder between two vertical poles. In 1935, F.H. Doner invented what today's consumer knows as the Y-shaped angled dental appliance. In 1963, James B. Kirby invented a tooth-cleaning device that resembles an archaic version of today's F-shaped floss pick.
In 1972, an inventor named Richard L. Wells found a way to attach floss to a single pick end. In the same year, another inventor named Harry Selig Katz came up with a method of making a disposable dental floss tooth pick.
In nature
Japanese macaques and long-tailed macaques have been observed, in the wild and in captivity, flossing with human hair and feathers.
| Biology and health sciences | Hygiene products | Health |
25271733 | https://en.wikipedia.org/wiki/Dental%20trauma | Dental trauma | Dental trauma refers to trauma (injury) to the teeth and/or periodontium (gums, periodontal ligament, alveolar bone), and nearby soft tissues such as the lips, tongue, etc. The study of dental trauma is called dental traumatology.
Types
Dental injuries
Dental injuries include:
Enamel infraction
Enamel fracture
Enamel-dentine fracture
Enamel-dentine fracture involving pulp exposure
Root fracture of tooth
Periodontal injuries
Concussion (bruising)
Subluxation of the tooth (tooth knocked loose)
Luxation of the tooth (displaced)
Extrusive
Intrusive
Lateral
Avulsion of the tooth (tooth knocked out)
Injuries to supporting bone
This injury involves the alveolar bone and may extend beyond the alveolus. There are five different types of alveolar fractures:
Comminuted fracture of the socket wall
Fracture of the socket wall
Dentoalveolar fracture (segmental)
Fracture of the maxilla: Le Fort fracture, zygomatic fracture, orbital blowout
Fracture of the mandible
Trauma involving the alveolus can be complicated, as it does not happen in isolation and very often presents along with other types of tooth tissue injuries.
Signs of dentoalveolar fracture:
Change to occlusion
Multiple teeth moving together as a segment, normally displaced
Bruising of attached gingivae
Gingivae across the fracture line often lacerated
Investigation: Require more than one radiographic view to identify the fracture line.
Treatment: Reposition displaced teeth under local anaesthetic and stabilise the mobile segment with a splint for 4 weeks, suture any soft tissue lacerations.
Soft tissue laceration
Soft tissue injuries commonly present in association with dental trauma. Areas normally affected are the lips, buccal mucosa, gingivae, frenum and tongue, with the lips and gingivae being the most commonly injured. For lip injuries, it is important to rule out the presence of foreign objects in wounds and lacerations through careful examination. A radiograph can be taken to identify any potential foreign objects.
Small gingival lacerations normally heal spontaneously and do not require any intervention. However, they can be one of the clinical presentations of an alveolar fracture. Gingival bleeding, especially around the margins, may suggest injury to the periodontal ligament of the tooth.
The facial nerve and parotid duct should be examined for any potential damage when the buccal mucosa is involved.
Deep tissue wounds should be repaired in layers with sutures that are resorbable.
Primary teeth
Trauma to primary teeth occurs most commonly at the age of two to three years, during the development of motor coordination. When primary teeth are injured, the resulting treatment prioritises the safety of the adult tooth, and should avoid any risk of damaging the permanent successors. This is because the root apex of an injured primary tooth lies near the tooth germ of the adult tooth.
Therefore, a displaced primary tooth will be removed if it is found to have encroached upon the developing adult tooth germ. If this happens, parents should be advised of possible complications such as enamel hypoplasia, hypocalcification, crown/root dilaceration, or disruptions in tooth eruption sequence.
Potential sequelae can involve pulpal necrosis, pulp obliteration and root resorption. Necrosis is the most common complication and an assessment is generally made based on the colour supplemented with radiograph monitoring. A change in colour may mean that the tooth is still vital but if this persists it is likely to be non-vital.
Permanent teeth
Dental injuries
Periodontal injuries
Risk factors
Age, especially young children
Primary dentition stage (2–3 years old, when children's motor function is developing and start learning how to walk/ run)
Mixed dentition stage (8–10 years old)
Permanent dentition stage (13–15 years old)
Male > Female
Season (more trauma incidents occur in summer than in winter)
Sports, especially contact sports such as football, hockey, rugby, basketball and skating
Piercing in tongue and lips
Military training
Acute changes in the barometric pressure, i.e. dental barotrauma, which can affect scuba divers and aviators
Class II malocclusion with increased overjet and Class II skeletal relationship and incompetent lips are the significant risk factors
Prevention
Prevention in general is relatively difficult, as it is nearly impossible to stop accidents from happening, especially in children, who are quite active. Regular use of a gum shield during sports and other high-risk activities (such as military training) is the most effective prevention for dental trauma. Gum shields are mainly fitted on the upper teeth, which are at higher risk of dental trauma than the lower teeth. Gum shields ideally have to be comfortable for users, retentive, odourless and tasteless, and the materials should not cause any harm to the body. However, studies in various populations at high risk of dental injuries have repeatedly reported low compliance with regular mouthguard use during activities. Moreover, even with regular use, prevention of dental injuries is not complete, and injuries can still occur when mouthguards are used, as users are not always aware of the best makes or sizes, which inevitably results in a poor fit.
Types of gum shield:
Stock ready-moulded
Not recommended, as it does not conform to the teeth at all
Poor retention
Poor fit
Higher risk of dislodging during contact sports and airway occlusion which may lead to respiratory distress
Self-moulded/Boil and bite
Limited range of sizes, which may result in poor fitting
Can be easily remoulded if distorted
Cheap
Custom-made
Made with ethylene vinyl acetate
The most ideal type of gum shield
Good retention
Able to build in multiple layers/laminations
Expensive
One of the most important measures is to impart knowledge and awareness of dental injury to those involved in sports environments such as boxing, and to school children, who are at high risk of suffering dental trauma, through extensive educational campaigns including lectures, leaflets and posters presented in an easily understandable way.
Management
The management depends on the type of injury involved and whether the tooth is a baby (primary) or an adult (permanent) tooth. If a tooth is completely knocked out (avulsed), first make sure it is a permanent tooth. Avulsed baby front teeth should not be replaced: the area should be cleaned gently, the child brought to see a dentist, and the injury site left to allow the adult tooth to erupt. Adult front teeth (which usually erupt at around six years of age) can be replaced immediately if clean.
Reassure the patient and keep them calm.
If the tooth can be found, pick it up by the crown (the white part). Avoid touching the root part.
If the tooth is dirty, wash it briefly (ten seconds) under cold running water but do not scrub the tooth.
Place the tooth back in the socket where it was lost from, taking care to place it the correct way (matching the other tooth)
Encourage the patient to bite on a handkerchief to hold the tooth in position.
If it is not possible to replace the tooth immediately, ideally, the tooth should be placed in Hank's balanced salt solution, if not available, in a glass of milk or a container with the patient's saliva or in the patient's cheek (keeping it between the teeth and the inside of the cheek – note this is not suitable for young children who may swallow the tooth). Transporting the tooth in water is not recommended, as this will damage the delicate cells that make up the tooth's interior.
Seek emergency dental treatment immediately.
When the injured teeth are painful while functioning due to damage to the periodontal ligaments (e.g., dental subluxation), a temporary splinting of the injured teeth may relieve the pain and enhance eating ability. Splinting should only be used in certain situations. Splinting in lateral and extrusive luxation had a poorer prognosis than in root fractures. An avulsed permanent tooth should be gently rinsed under tap water and immediately re-planted in its original socket within the alveolar bone and later temporarily splinted by a dentist. Failure to re-plant the avulsed tooth within the first 40 minutes after the injury may result in very poor prognosis for the tooth. Management of injured primary teeth differs from management of permanent teeth; an avulsed primary tooth should not be re-planted (to avoid damage to the permanent dental crypt). This is due to the close proximity of the apex of a primary tooth to the permanent tooth underneath. The permanent dentition can suffer from tooth malformation, impacted teeth and eruption disturbances due to trauma to primary teeth. The priority should always be reducing potential damage to the underlying permanent dentition.
For other injuries, it is important to keep the area clean by using a soft toothbrush and antiseptic mouthwash such as chlorhexidine gluconate. Soft foods and avoidance of contact sports is also recommended in the short term. Dental care should be sought as quickly as possible.
Splinting
A tooth that has experienced trauma may become loose due to the periodontal ligament becoming damaged or fracture to the root of the tooth. Splinting ensures that the tooth is held in the correct position within the socket, ensuring that no further trauma occurs to enable healing. A splint can either be flexible or rigid. Flexible splints do not completely immobilise the traumatised tooth and still allow for functional movement. Contrastingly, rigid splints completely immobilise the traumatised tooth. The International Association of Dental Traumatology (IADT) guidelines recommend the use of flexible, non-rigid splints for a short duration by stating that both periodontal and pulpal healing is encouraged if the traumatised tooth is allowed slight movement and if the splinting time is not too long.
Complications
Not all sequelae of trauma are immediate; many occur months or years after the initial incident, thus requiring prolonged follow-up. Common complications are pulpal necrosis, pulpal obliteration, root resorption and, in primary-tooth trauma, damage to the successor teeth. The most common complication is pulp necrosis (34.2%). Half of the teeth with trauma related to avulsion experienced ankylotic root resorption after a median TIC (time elapsed between the traumatic event and the diagnosis of complications) of 1.18 years. Teeth that experienced multiple traumatic events also showed a higher chance of pulp necrosis (61.9%) than teeth that experienced a single traumatic injury (25.3%).
Pulpal necrosis
Pulp necrosis usually occurs either as ischaemic necrosis (infarction) caused by disruption to the blood supply at the apical foramen, or as infection-related liquefactive necrosis following dental trauma. Signs of pulpal necrosis include:
Persistent grey colour to tooth that does not fade
Radiographic signs of periapical inflammation
Clinical signs of infection: tenderness, sinus, suppuration, swelling
For a primary tooth, the treatment option is extraction. For a permanent tooth, endodontic treatment can be considered.
Root resorption
Root resorption following traumatic dental injuries, whether located along the root surface or within the root canal, appears to be a sequel to wound-healing events in which a significant amount of the PDL or pulp has been lost to the acute trauma.
Pulpal obliteration
4–24% of traumatized teeth will show some degree of pulpal obliteration, which is characterized radiographically by loss of the pulp space and clinically by yellow discolouration of the crown.
No treatment is needed if the tooth is asymptomatic. A symptomatic primary tooth is extracted. For a symptomatic permanent tooth, root canal treatment is often challenging because the pulp chamber is filled with calcified material, and the drop-off sensation of entering a pulp chamber will not occur.
Damage to the successor teeth
Dental trauma to the primary teeth might cause damage to the permanent teeth. Damage to the permanent teeth, especially during their developmental stage, may have the following consequences:
Crown dilaceration
Odontoma-like malformation
Sequestration of permanent tooth germs
Root dilaceration
Arrest of root formation
Epidemiology
Dental trauma is most common in younger people, accounting for 17% of injuries to the body in those aged 0–6 years compared to an average of 5% across all ages. It is more frequently observed in males compared to females. Traumatic dental injuries are more common in permanent teeth compared to deciduous teeth and usually involve the front teeth of the upper jaw.
"The oral region comprises 1% of the total body area, yet it accounts for 5% of all bodily injuries. In preschool children, oral injuries make up as much as 17% of all bodily injuries. The incidence of traumatic dental injuries is 1–3%, and the prevalence is steady at 20–30%."
Almost 30% of preschool children have experienced trauma to primary teeth. Dental injuries involving the permanent teeth happen to almost 25% of schoolchildren and 30% of adults. The incidence varies between countries, as well as within a country. Dental trauma depends on a person's activity status and surrounding environmental factors, and these are the main predisposing risk factors, more so than age and gender.
Trauma is the most common cause of loss of permanent incisors in childhood. Dental trauma often leads to complications such as pulpal necrosis, and it is nearly impossible to predict the long-term prognosis of the injured tooth; the injury often results in long-term restorative problems.
1898396 | https://en.wikipedia.org/wiki/Chrome%20yellow | Chrome yellow | Chrome yellow is a bright, warm yellow pigment that has been used in art, fashion, and industry. It has been a premier pigment for many applications.
Production of chrome yellow and related pigments
The raw pigment precipitates as a fine solid upon mixing lead(II) salts and a source of chromate. Approximately 90,000 tons of chrome yellow are produced annually as of 2001.
Chrome yellow pigments are usually encapsulated by coating with transparent oxides that protect the pigment from environmental factors that would diminish their colorant properties.
Related lead sulfochromate pigments are produced by replacing some of the chromate with sulfate, resulting in mixed lead chromate–sulfate compositions Pb(CrO4)1−x(SO4)x. This replacement is possible because sulfate and chromate are isostructural. Since sulfate is colorless, sulfochromates with high values of x are less intensely colored than lead chromate. In some cases, chromate is replaced by molybdate.
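As an illustrative arithmetic sketch (not from the article): using standard atomic masses, the lead mass fraction of Pb(CrO4)1−x(SO4)x rises slightly with x, because the sulfate ion is lighter than the chromate ion it replaces. The function name and the chosen x values are assumptions for illustration.

```python
# Illustrative: mass fraction of lead in Pb(CrO4)_(1-x)(SO4)_x as the
# sulfate fraction x increases (standard atomic masses, values rounded).
PB, CR, S, O = 207.2, 51.996, 32.06, 15.999

def lead_mass_fraction(x):
    """Mass fraction of Pb in Pb(CrO4)_(1-x)(SO4)_x, for 0 <= x <= 1."""
    chromate = CR + 4 * O   # CrO4(2-) formula mass
    sulfate = S + 4 * O     # SO4(2-) formula mass
    total = PB + (1 - x) * chromate + x * sulfate
    return PB / total

for x in (0.0, 0.25, 0.5):
    print(f"x = {x:.2f}: Pb fraction = {lead_mass_fraction(x):.3f}")
```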
Permanence
Chrome yellow is moderately resistant to fading from exposure to light when it is chemically pure. Over time, however, it has been observed to darken and discolor, turning brown; this degradation is seen in some of Van Gogh's paintings. According to Gettens, it can take on a green tone, especially when mixed with organic colors; this effect is attributed to reduction of some chromate to chromium(III) oxide. Owing to its high lead content, the pigment is also prone to discoloration over time in the presence of sulfur compounds. Its low cost has doubtless contributed to its continued use as an artists' color, even though some subsequently discovered yellow pigments are more permanent. Artists began using cadmium yellow instead of chrome yellow once they became aware of chrome yellow's instability.
The pigment tends to react with hydrogen sulfide and darken on exposure to air over time, forming lead sulfide, and it contains the toxic heavy metal lead plus the toxic, carcinogenic chromate. For these reasons, it was replaced by another pigment, cadmium yellow (mixed with enough cadmium orange to produce a color equivalent to chrome yellow). Darkening may also occur from reduction by sulfur dioxide. Good quality pigments have been coated to inhibit contact with gases that can change their color. Cadmium pigments in turn are increasingly replaced with organic pigments such as arylides (Pigment Yellow 65) and isoindoles (PY 110).
Notable occurrences
Vincent van Gogh used chrome yellow in many of his paintings, including his famous Sunflowers series. Studies focusing on the techniques used in Van Gogh's Sunflowers series have revealed how Van Gogh skillfully mixed various shades of chrome yellow to achieve different effects. Chrome yellow has also been used in fashion and textiles, particularly in the 1920s and 1930s. The vibrant color was a popular choice for flapper dresses, hats, and accessories, and was often paired with other bright colors, such as pink and turquoise.
History
The pigment is derived from lead chromate, a chemical compound that was first synthesized in the early 1800s. The discovery of lead chromate, the primary component of chrome yellow, is credited to the French chemist Louis Nicolas Vauquelin. Vauquelin was studying the mineral crocoite, a natural form of lead chromate, when he identified the presence of a new element, chromium. The discovery led to the synthesis of a variety of new pigments, including chrome yellow. Chrome yellow quickly gained popularity among artists and designers for its bright, sunny hue, which was particularly well-suited for use in fashion and textiles. The earliest known use of chrome yellow in a painting is a work by Sir Thomas Lawrence from before 1810. The first recorded use of chrome yellow as a color name in English was in 1818. The pigment was also widely used in industrial applications, such as in the production of paint, plastics, and ceramics.
Safety
Because it contains not only lead but also hexavalent chromium, chrome yellow has long been the focus of safety concerns, and its use is highly regulated. Its former use as a food colorant has long been discontinued. The continued wide use of this pigment is attributed to its very low solubility, which suppresses leaching of chromate and lead into biological fluids. The LD50 for rats is 5 g/kg.
1898401 | https://en.wikipedia.org/wiki/Arc%20length | Arc length | Arc length is the distance between two points along a section of a curve. Development of a formulation of arc length suitable for applications to mathematics and the sciences is a focus of calculus. In the most basic formulation of arc length for a parametric curve (thought of as the trajectory of a particle), the arc length is obtained by integrating the speed of the particle over the path. Thus the length of a continuously differentiable curve $(x(t), y(t))$, for $a \le t \le b$, in the Euclidean plane is given as the integral
$$L = \int_a^b \sqrt{x'(t)^2 + y'(t)^2}\, dt$$
(because $\sqrt{x'(t)^2 + y'(t)^2}$ is the magnitude of the velocity vector $(x'(t), y'(t))$, i.e., the particle's speed).
The defining integral of arc length does not always have a closed-form expression, and numerical integration may be used instead to obtain numerical values of arc length.
Determining the length of an irregular arc segment by approximating the arc segment as connected (straight) line segments is also called curve rectification. For a rectifiable curve these approximations don't get arbitrarily large (so the curve has a finite length).
General approach
A curve in the plane can be approximated by connecting a finite number of points on the curve using (straight) line segments to create a polygonal path. Since it is straightforward to calculate the length of each linear segment (using the Pythagorean theorem in Euclidean space, for example), the total length of the approximation can be found by summation of the lengths of each linear segment; that approximation is known as the (cumulative) chordal distance.
If the curve is not already a polygonal path, then using a progressively larger number of line segments of smaller lengths will result in better curve length approximations. Such a curve length determination by approximating the curve as connected (straight) line segments is called rectification of a curve. The lengths of the successive approximations will not decrease and may keep increasing indefinitely, but for smooth curves they will tend to a finite limit as the lengths of the segments get arbitrarily small.
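This convergence can be sketched numerically (illustrative code, not from the article): the chord sums for a quarter of the unit circle, whose true length is $\pi/2$, increase toward that limit as the partition is refined.

```python
import math

def chordal_length(f, a, b, n):
    """Sum of chord lengths over a regular partition of [a, b] into n segments."""
    pts = [f(a + i * (b - a) / n) for i in range(n + 1)]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# Quarter of the unit circle, parameterized by angle; the true length is pi/2.
quarter = lambda t: (math.cos(t), math.sin(t))
for n in (2, 8, 32, 128):
    print(n, chordal_length(quarter, 0.0, math.pi / 2, n))
```

Each refinement inserts points on the curve, so the polygonal length never decreases, matching the monotonicity described above.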
For some curves, there is a smallest number $L$ that is an upper bound on the length of all polygonal approximations (rectification). These curves are called rectifiable and the arc length is defined as the number $L$.
A signed arc length can be defined to convey a sense of orientation or "direction" with respect to a reference point taken as origin in the curve (see also: curve orientation and signed distance).
Formula for a smooth curve
Let $f : [a,b] \to \mathbb{R}^n$ be a continuously differentiable (i.e., the derivative is a continuous function) function. The length of the curve defined by $f$ is given by the formula
$$L(f) = \int_a^b \left| f'(t) \right| dt,$$
where $\left| f'(t) \right|$ is the Euclidean norm of the tangent vector $f'(t)$ to the curve.
To justify this formula, define the arc length as the limit of the sum of linear segment lengths for a regular partition of $[a,b]$ as the number of segments approaches infinity. This means
$$L(f) = \lim_{N\to\infty} \sum_{i=1}^N \left| f(t_i) - f(t_{i-1}) \right|,$$
where $t_i = a + i(b-a)/N$ for $i = 0, 1, \dotsc, N$. This definition is equivalent to the standard definition of arc length as an integral:
$$\lim_{N\to\infty} \sum_{i=1}^N \left| f(t_i) - f(t_{i-1}) \right| = \int_a^b \left| f'(t) \right| dt.$$
The last equality is proved by the following steps. By the fundamental theorem of calculus, with $\Delta t = (b-a)/N$,
$$f(t_i) - f(t_{i-1}) = \int_{t_{i-1}}^{t_i} f'(t)\, dt = \Delta t \int_0^1 f'(t_{i-1} + \theta\, \Delta t)\, d\theta,$$
so each summand can be written as
$$\left| f(t_i) - f(t_{i-1}) \right| = \Delta t \left| \int_0^1 f'(t_{i-1} + \theta\, \Delta t)\, d\theta \right|.$$
The function $f'$ is a continuous function from a closed interval to $\mathbb{R}^n$, thus it is uniformly continuous according to the Heine–Cantor theorem: for every $\varepsilon > 0$ there is $\delta > 0$ such that $|t - s| < \delta$ implies $\left| f'(t) - f'(s) \right| < \varepsilon$. For $N$ large enough that $\Delta t < \delta$, the triangle inequality gives
$$\left|\, \Delta t \left| \int_0^1 f'(t_{i-1} + \theta\, \Delta t)\, d\theta \right| - \Delta t \left| f'(t_{i-1}) \right| \,\right| \le \Delta t \int_0^1 \left| f'(t_{i-1} + \theta\, \Delta t) - f'(t_{i-1}) \right| d\theta < \varepsilon\, \Delta t,$$
and summing over $i$,
$$\left| \sum_{i=1}^N \left| f(t_i) - f(t_{i-1}) \right| - \sum_{i=1}^N \left| f'(t_{i-1}) \right| \Delta t \right| < \varepsilon (b - a).$$
The second sum is a Riemann sum, so in the limit $N \to \infty$ it converges to the Riemann integral of $\left| f'(t) \right|$ on $[a,b]$; since $\varepsilon > 0$ is arbitrary, the limit of the chord sums equals the same integral. This definition of arc length also shows that the length of a curve represented by a continuously differentiable function on $[a,b]$ is always finite, i.e., the curve is rectifiable.
The definition of arc length of a smooth curve as the integral of the norm of the derivative is equivalent to the definition
$$L(f) = \sup_{P} \sum_{i=1}^N \left| f(t_i) - f(t_{i-1}) \right|,$$
where the supremum is taken over all possible partitions $P : a = t_0 < t_1 < \cdots < t_N = b$ of $[a,b]$. This definition as the supremum of all possible partition sums is also valid if $f$ is merely continuous, not differentiable.
A curve can be parameterized in infinitely many ways. Let $\varphi : [a,b] \to [c,d]$ be any continuously differentiable bijection. Then $g = f \circ \varphi^{-1} : [c,d] \to \mathbb{R}^n$ is another continuously differentiable parameterization of the curve originally defined by $f$. The arc length of the curve is the same regardless of the parameterization used to define the curve:
$$L(g) = \int_c^d \left| g'(u) \right| du = \int_a^b \left| g'(\varphi(t)) \right| \varphi'(t)\, dt = \int_a^b \left| f'(t) \right| dt = L(f)$$
(taking $\varphi$ increasing; a decreasing $\varphi$ reverses the limits and the sign of $\varphi'$, giving the same value).
Finding arc lengths by integration
If a planar curve in $\mathbb{R}^2$ is defined by the equation $y = f(x)$, where $f$ is continuously differentiable, then it is simply a special case of a parametric equation where $x = t$ and $y = f(t)$. The Euclidean distance of each infinitesimal segment of the arc can be given by:
$$ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx.$$
The arc length is then given by:
$$s = \int_a^b \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\, dx.$$
Curves with closed-form solutions for arc length include the catenary, circle, cycloid, logarithmic spiral, parabola, semicubical parabola and straight line. The lack of a closed form solution for the arc length of an elliptic and hyperbolic arc led to the development of the elliptic integrals.
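As an illustrative check of the graph formula against one of these closed forms (code and function names are ours, not from the article): for the catenary $y = \cosh x$, the integrand simplifies to $\sqrt{1 + \sinh^2 x} = \cosh x$, so the exact length on $[0, b]$ is $\sinh b$. A minimal sketch using the composite Simpson's rule:

```python
import math

# Composite Simpson's rule for the graph arc length
#   s = integral_a^b sqrt(1 + f'(x)^2) dx.
def arc_length_graph(df, a, b, n=1000):
    if n % 2:
        n += 1  # Simpson's rule needs an even number of panels
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * math.sqrt(1.0 + df(a + i * h) ** 2)
    return total * h / 3.0

# Catenary y = cosh(x): f'(x) = sinh(x), so the exact length on [0, 2]
# is sinh(2); the numeric value should agree closely.
print(arc_length_graph(math.sinh, 0.0, 2.0), math.sinh(2.0))
```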
Numerical integration
In most cases, including even simple curves, there are no closed-form solutions for arc length and numerical integration is necessary. Numerical integration of the arc length integral is usually very efficient. For example, consider the problem of finding the length of a quarter of the unit circle by numerically integrating the arc length integral. The upper half of the unit circle can be parameterized as $y = \sqrt{1 - x^2}$. The interval $-\sqrt{2}/2 \le x \le \sqrt{2}/2$ corresponds to a quarter of the circle. Since $\frac{dy}{dx} = \frac{-x}{\sqrt{1 - x^2}}$ and $1 + \left(\frac{dy}{dx}\right)^2 = \frac{1}{1 - x^2}$, the length of a quarter of the unit circle is
$$\int_{-\sqrt{2}/2}^{\sqrt{2}/2} \frac{dx}{\sqrt{1 - x^2}}.$$
The 15-point Gauss–Kronrod rule estimate and the 16-point Gaussian quadrature rule estimate for this integral each differ from the true length of $\pi/2$ only near the last few digits of double precision. This means it is possible to evaluate this integral to almost machine precision with only 16 integrand evaluations.
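The quadrature experiment above can be reproduced; a sketch assuming NumPy (the printed error is whatever the run produces, not a value quoted from the article):

```python
import numpy as np

# 16-point Gauss-Legendre estimate of the quarter-circle arc length
# integral of 1/sqrt(1 - x^2) over [-sqrt(2)/2, sqrt(2)/2]; true value pi/2.
def quarter_circle_length(n=16):
    # Nodes/weights on [-1, 1], rescaled to [a, b].
    nodes, weights = np.polynomial.legendre.leggauss(n)
    a, b = -np.sqrt(2) / 2, np.sqrt(2) / 2
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    return 0.5 * (b - a) * float(np.sum(weights / np.sqrt(1.0 - x * x)))

print(abs(quarter_circle_length() - np.pi / 2))  # error near machine precision
```

The integrand is analytic on the closed interval (its singularities at $x = \pm 1$ lie well outside it), which is why Gaussian quadrature converges this quickly.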
Curve on a surface
Let $\mathbf{x}(u, v)$ be a surface mapping and let $\mathbf{C}(t) = \mathbf{x}(u(t), v(t))$ be a curve on this surface. The integrand of the arc length integral is $\left| \mathbf{C}'(t) \right|$. Evaluating the derivative requires the chain rule for vector fields:
$$\mathbf{C}'(t) = \mathbf{x}_u\, u'(t) + \mathbf{x}_v\, v'(t).$$
The squared norm of this vector is
$$E\, u'(t)^2 + 2F\, u'(t)\, v'(t) + G\, v'(t)^2$$
(where $E = \mathbf{x}_u \cdot \mathbf{x}_u$, $F = \mathbf{x}_u \cdot \mathbf{x}_v$ and $G = \mathbf{x}_v \cdot \mathbf{x}_v$ are the first fundamental form coefficients), so the integrand of the arc length integral can be written as
$$\sqrt{E\, u'(t)^2 + 2F\, u'(t)\, v'(t) + G\, v'(t)^2}.$$
Other coordinate systems
Let $r = r(t)$, $\theta = \theta(t)$ be a curve expressed in polar coordinates. The mapping that transforms from polar coordinates to rectangular coordinates is
$$\mathbf{x}(r, \theta) = (r \cos\theta,\; r \sin\theta).$$
The chain rule for vector fields shows that the squared integrand of the arc length integral is
$$\left(\frac{dr}{dt}\right)^2 + r^2 \left(\frac{d\theta}{dt}\right)^2.$$
So for a curve expressed in polar coordinates, the arc length is:
$$\int_{t_1}^{t_2} \sqrt{\left(\frac{dr}{dt}\right)^2 + r^2 \left(\frac{d\theta}{dt}\right)^2}\, dt = \int_{\theta(t_1)}^{\theta(t_2)} \sqrt{\left(\frac{dr}{d\theta}\right)^2 + r^2}\, d\theta.$$
The second expression is for a polar graph $r = r(\theta)$ parameterized by $t = \theta$.
Now let $r = r(t)$, $\theta = \theta(t)$, $\varphi = \varphi(t)$ be a curve expressed in spherical coordinates, where $\theta$ is the polar angle measured from the positive $z$-axis and $\varphi$ is the azimuthal angle. The mapping that transforms from spherical coordinates to rectangular coordinates is
$$\mathbf{x}(r, \theta, \varphi) = (r \sin\theta \cos\varphi,\; r \sin\theta \sin\varphi,\; r \cos\theta).$$
Using the chain rule again, all dot products in which the basis directions differ are zero, so the squared norm of the derivative is
$$\left(\frac{dr}{dt}\right)^2 + r^2 \left(\frac{d\theta}{dt}\right)^2 + r^2 \sin^2\theta \left(\frac{d\varphi}{dt}\right)^2.$$
So for a curve expressed in spherical coordinates, the arc length is
$$\int_{t_1}^{t_2} \sqrt{\left(\frac{dr}{dt}\right)^2 + r^2 \left(\frac{d\theta}{dt}\right)^2 + r^2 \sin^2\theta \left(\frac{d\varphi}{dt}\right)^2}\, dt.$$
A very similar calculation shows that the arc length of a curve expressed in cylindrical coordinates $(r, \theta, z)$ is
$$\int_{t_1}^{t_2} \sqrt{\left(\frac{dr}{dt}\right)^2 + r^2 \left(\frac{d\theta}{dt}\right)^2 + \left(\frac{dz}{dt}\right)^2}\, dt.$$
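A hedged numerical check of the cylindrical-coordinate formula (the helix example and function names are illustrative, not from the article): for the helix $r = a$, $\theta = t$, $z = ct$, the integrand is the constant $\sqrt{a^2 + c^2}$, so the exact length over $[0, T]$ is $T\sqrt{a^2 + c^2}$.

```python
import math

# Arc length in cylindrical coordinates:
#   L = integral of sqrt(r'^2 + r^2 * theta'^2 + z'^2) dt,
# approximated here with the midpoint rule.
def cylindrical_length(r, dr, dtheta, dz, t0, t1, n=10_000):
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        total += math.sqrt(dr(t) ** 2 + r(t) ** 2 * dtheta(t) ** 2 + dz(t) ** 2)
    return total * h

# Helix r = a, theta = t, z = c*t over [0, T]; exact length T*sqrt(a^2 + c^2).
a, c, T = 2.0, 0.5, 4 * math.pi
L = cylindrical_length(lambda t: a, lambda t: 0.0, lambda t: 1.0,
                       lambda t: c, 0.0, T)
print(L, T * math.sqrt(a * a + c * c))
```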
Simple cases
Arcs of circles
Arc lengths are denoted by s, since the Latin word for length (or size) is spatium.
In the following lines, $r$ represents the radius of a circle, $d$ is its diameter, $C$ is its circumference, $s$ is the length of an arc of the circle, and $\theta$ is the angle which the arc subtends at the centre of the circle. The distances $r$, $d$, $C$ and $s$ are expressed in the same units.
$C = 2\pi r$, which is the same as $C = \pi d$. This equation is a definition of $\pi$.
If the arc is a semicircle, then $s = \pi r$.
For an arbitrary circular arc:
If $\theta$ is in radians then $s = r\theta$. This is a definition of the radian.
If $\theta$ is in degrees, then $s = \dfrac{\pi r \theta}{180^\circ}$, which is the same as $s = \dfrac{C\theta}{360^\circ}$.
If $\theta$ is in grads (100 grads, or grades, or gradians are one right-angle), then $s = \dfrac{\pi r \theta}{200\text{ grads}}$, which is the same as $s = \dfrac{C\theta}{400\text{ grads}}$.
If $\theta$ is in turns (one turn is a complete rotation, or 360°, or 400 grads, or $2\pi$ radians), then $s = C\theta/\text{turn}$.
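The unit conversions above can be collected into one small helper (an illustrative sketch; the function and table names are our assumptions):

```python
import math

# Circular arc length s = r * theta, converting theta to radians first.
# One turn = 2*pi rad = 360 deg = 400 grad.
TO_RADIANS = {"rad": 1.0, "deg": math.pi / 180, "grad": math.pi / 200,
              "turn": 2 * math.pi}

def circular_arc_length(r, theta, unit="rad"):
    return r * theta * TO_RADIANS[unit]

print(circular_arc_length(1.0, 90, "deg"))  # a quarter of a unit circle
```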
Great circles on Earth
Two units of length, the nautical mile and the metre (or kilometre), were originally defined so the lengths of arcs of great circles on the Earth's surface would be simply numerically related to the angles they subtend at its centre. The simple equation $s = \theta$ applies in the following circumstances:
if $s$ is in nautical miles, and $\theta$ is in arcminutes ($\tfrac{1}{60}$ degree), or
if $s$ is in kilometres, and $\theta$ is in centesimal minutes of arc ($\tfrac{1}{100}$ grad).
The lengths of the distance units were chosen to make the circumference of the Earth equal 40,000 kilometres, or 21,600 nautical miles. Those are the numbers of the corresponding angle units in one complete turn.
Those definitions of the metre and the nautical mile have been superseded by more precise ones, but the original definitions are still accurate enough for conceptual purposes and some calculations. For example, they imply that one kilometre is exactly 0.54 nautical miles. Using official modern definitions, one nautical mile is exactly 1.852 kilometres, which implies that 1 kilometre is about 0.53996 nautical miles. This modern ratio differs from the one calculated from the original definitions by less than one part in 10,000.
Other simple cases
Historical methods
Antiquity
For much of the history of mathematics, even the greatest thinkers considered it impossible to compute the length of an irregular arc. Although Archimedes had pioneered a way of finding the area beneath a curve with his "method of exhaustion", few believed it was even possible for curves to have definite lengths, as do straight lines. The first ground was broken in this field, as it often has been in calculus, by approximation. People began to inscribe polygons within the curves and compute the length of the sides for a somewhat accurate measurement of the length. By using more segments, and by decreasing the length of each segment, they were able to obtain a more and more accurate approximation. In particular, by inscribing a polygon of many sides in a circle, they were able to find approximate values of π.
17th century
In the 17th century, the method of exhaustion led to the rectification by geometrical methods of several transcendental curves: the logarithmic spiral by Evangelista Torricelli in 1645 (some sources say John Wallis in the 1650s), the cycloid by Christopher Wren in 1658, and the catenary by Gottfried Leibniz in 1691.
In 1659, Wallis credited William Neile with the discovery of the first rectification of a nontrivial algebraic curve, the semicubical parabola. (In Wallis's treatise the accompanying figures appear on page 145, and on page 91 William Neile is mentioned as Gulielmus Nelius.)
Integral form
Before the full formal development of calculus, the basis for the modern integral form for arc length was independently discovered by Hendrik van Heuraet and Pierre de Fermat.
In 1659 van Heuraet published a construction showing that the problem of determining arc length could be transformed into the problem of determining the area under a curve (i.e., an integral). As an example of his method, he determined the arc length of a semicubical parabola, which required finding the area under a parabola. In 1660, Fermat published a more general theory containing the same result in his De linearum curvarum cum lineis rectis comparatione dissertatio geometrica (Geometric dissertation on curved lines in comparison with straight lines).
Building on his previous work with tangents, Fermat used the curve
$$y = x^{3/2},$$
whose tangent at $x = a$ had a slope of
$$\tfrac{3}{2} a^{1/2},$$
so the tangent line would have the equation
$$y = \tfrac{3}{2} a^{1/2} (x - a) + a^{3/2}.$$
Next, he increased $a$ by a small amount to $a + \varepsilon$, making segment $AC$ a relatively good approximation for the length of the curve from $A$ to $D$. To find the length of the segment $AC$, he used the Pythagorean theorem:
$$AC^2 = \varepsilon^2 + \tfrac{9}{4} a \varepsilon^2 = \varepsilon^2 \left( 1 + \tfrac{9}{4} a \right),$$
which, when solved, yields
$$AC = \varepsilon \sqrt{1 + \tfrac{9}{4} a}.$$
In order to approximate the length, Fermat would sum up a sequence of short segments.
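Fermat's segment sum can be imitated numerically (an illustrative sketch, not Fermat's own computation): summing $AC = \varepsilon\sqrt{1 + \tfrac{9}{4}a}$ over many short segments is a Riemann sum for $\int_0^b \sqrt{1 + \tfrac{9}{4}x}\, dx = \tfrac{8}{27}\big((1 + \tfrac{9}{4}b)^{3/2} - 1\big)$, the exact arc length of $y = x^{3/2}$ on $[0, b]$.

```python
import math

# Sum Fermat's short segments for y = x**1.5 on [0, b]: each segment of
# width eps starting at a = i*eps contributes eps * sqrt(1 + (9/4) * a).
def fermat_sum(b, n):
    eps = b / n
    return sum(eps * math.sqrt(1 + 2.25 * (i * eps)) for i in range(n))

def exact_length(b):
    # Antiderivative of sqrt(1 + 9x/4) is (8/27) * (1 + 9x/4)**1.5.
    return (8 / 27) * ((1 + 2.25 * b) ** 1.5 - 1)

b = 1.0
print(fermat_sum(b, 100_000), exact_length(b))
```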
Curves with infinite length
As mentioned above, some curves are non-rectifiable. That is, there is no upper bound on the lengths of polygonal approximations; the length can be made arbitrarily large. Informally, such curves are said to have infinite length. There are continuous curves on which every arc (other than a single-point arc) has infinite length. An example of such a curve is the Koch curve. Another example of a curve with infinite length is the graph of the function defined by f(x) = x sin(1/x) on any interval that has 0 as one of its endpoints, with f(0) = 0. Sometimes the Hausdorff dimension and Hausdorff measure are used to quantify the size of such curves.
Generalization to (pseudo-)Riemannian manifolds
Let $M$ be a (pseudo-)Riemannian manifold, $g$ the (pseudo-)metric tensor, and
$\gamma : [0, 1] \to M$ a curve in $M$ defined by parametric equations
$$\gamma(t) = [\gamma^1(t), \dotsc, \gamma^n(t)], \qquad t \in [0, 1].$$
The length of $\gamma$ is defined to be
$$\ell(\gamma) = \int_0^1 \sqrt{\pm\, g_{\gamma(t)}\big(\gamma'(t), \gamma'(t)\big)}\; dt,$$
or, choosing local coordinates $x^i$,
$$\ell(\gamma) = \int_0^1 \sqrt{\pm\, g_{ij}\, \frac{dx^i}{dt}\, \frac{dx^j}{dt}}\; dt,$$
where $\gamma'(t) \in T_{\gamma(t)}M$ is the tangent vector of $\gamma$ at $t$. The sign in the square root is chosen once for a given curve, to ensure that the square root is a real number. The positive sign is chosen for spacelike curves; in a pseudo-Riemannian manifold, the negative sign may be chosen for timelike curves. Thus the length of a curve is a non-negative real number. Usually no curves are considered which are partly spacelike and partly timelike.
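As a sketch of the coordinate formula above (the unit-sphere metric $ds^2 = d\theta^2 + \sin^2\theta\, d\varphi^2$ is a standard Riemannian example; the function names are our own): a circle of latitude $\theta = \theta_0$ traversed once has exact length $2\pi\sin\theta_0$.

```python
import math

# Curve length on the unit sphere with metric
#   ds^2 = d(theta)^2 + sin(theta)^2 * d(phi)^2,
# integrated with the midpoint rule.
def sphere_curve_length(theta, dtheta, phi, dphi, t0, t1, n=20_000):
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        total += math.sqrt(dtheta(t) ** 2
                           + math.sin(theta(t)) ** 2 * dphi(t) ** 2)
    return total * h

# Circle of latitude theta = pi/3, traversed once in phi.
L = sphere_curve_length(lambda t: math.pi / 3, lambda t: 0.0,
                        lambda t: t, lambda t: 1.0, 0.0, 2 * math.pi)
print(L, 2 * math.pi * math.sin(math.pi / 3))
```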
In the theory of relativity, the arc length of timelike curves (world lines) is the proper time elapsed along the world line, and the arc length of a spacelike curve is the proper distance along the curve.
1898647 | https://en.wikipedia.org/wiki/Singlet%20oxygen | Singlet oxygen | Singlet oxygen, systematically named dioxygen(singlet) and dioxidene, is a gaseous inorganic chemical with the formula O=O (also written as 1O2), which is in a quantum state where all electrons are spin paired. It is kinetically unstable at ambient temperature, but the rate of decay is slow.
The lowest excited state of the diatomic oxygen molecule is a singlet state.
It is a gas with physical properties differing only subtly from those of the more prevalent triplet ground state of O2. In terms of its chemical reactivity, however, singlet oxygen is far more reactive toward organic compounds. It is responsible for the photodegradation of many materials but can be put to constructive use in preparative organic chemistry and photodynamic therapy. Trace amounts of singlet oxygen are found in the upper atmosphere and in polluted urban atmospheres, where it contributes to the formation of lung-damaging nitrogen dioxide. It often appears alongside, and is easily confounded with, ozone in environments that generate both, such as pine forests, where turpentine undergoes photodegradation.
The terms 'singlet oxygen' and 'triplet oxygen' derive from each form's number of electron spins. The singlet has only one possible arrangement of electron spins with a total quantum spin of 0, while the triplet has three possible arrangements of electron spins with a total quantum spin of 1, corresponding to three degenerate states.
In spectroscopic notation, the lowest singlet and triplet forms of O2 are labeled 1Δg and 3Σg−, respectively.
Electronic structure
Singlet oxygen refers to one of two singlet electronic excited states. The two singlet states are denoted 1Σ and 1Δg (the preceding superscript "1" indicates a singlet state). The singlet states of oxygen are 158 and 95 kilojoules per mole higher in energy than the triplet ground state of oxygen. Under most common laboratory conditions, the higher energy 1Σ singlet state rapidly converts to the more stable, lower energy 1Δg singlet state. This more stable of the two excited states has its two valence electrons spin-paired in one π* orbital while the second π* orbital is empty. This state is referred to by the title term, singlet oxygen, commonly abbreviated 1O2, to distinguish it from the triplet ground state molecule, 3O2.
Molecular orbital theory predicts the electronic ground state denoted by the molecular term symbol 3Σ, and two low-lying excited singlet states with term symbols 1Δg and 1Σ. These three electronic states differ only in the spin and the occupancy of oxygen's two antibonding πg-orbitals, which are degenerate (equal in energy). These two orbitals are classified as antibonding and are of higher energy. Following Hund's first rule, in the ground state, these electrons are unpaired and have like (same) spin. This open-shell triplet ground state of molecular oxygen differs from most stable diatomic molecules, which have singlet (1Σ) ground states.
Two less stable, higher energy excited states are readily accessible from this ground state, again in accordance with Hund's first rule; the first moves one of the high energy unpaired ground state electrons from one degenerate orbital to the other, where it "flips" and pairs the other, and creates a new state, a singlet state referred to as the 1Δg state (a term symbol, where the preceding superscripted "1" indicates it as a singlet state). Alternatively, both electrons can remain in their degenerate ground state orbitals, but the spin of one can "flip" so that it is now opposite to the second (i.e., it is still in a separate degenerate orbital, but no longer of like spin); this also creates a new state, a singlet state referred to as the 1Σ state. The ground and first two singlet excited states of oxygen can be described by the simple scheme in the figure below.
The 1Δg singlet state is 7882.4 cm−1 above the triplet 3Σ ground state, which in other units corresponds to 94.29 kJ/mol or 0.9773 eV. The 1Σ singlet is 13 120.9 cm−1 (157.0 kJ/mol or 1.6268 eV) above the ground state.
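These unit conversions can be verified directly from $E = h c \tilde{\nu}$, where $\tilde{\nu}$ is the wavenumber (a sketch using CODATA constant values; the variable and function names are our own):

```python
# Convert spectroscopic term energies from wavenumbers (cm^-1) to
# kJ/mol and eV via E = h * c * nu_tilde (CODATA constant values).
H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e10        # speed of light, cm/s
NA = 6.02214076e23       # Avogadro constant, 1/mol
EV = 1.602176634e-19     # J per eV

def wavenumber_to_units(nu_cm):
    e = H * C * nu_cm                   # energy per molecule, J
    return e * NA / 1000.0, e / EV      # (kJ/mol, eV)

print(wavenumber_to_units(7882.4))    # 1Delta_g excitation
print(wavenumber_to_units(13120.9))   # higher singlet excitation
```

The outputs reproduce the quoted 94.29 kJ/mol (0.9773 eV) and 157.0 kJ/mol (1.6268 eV) values to the displayed precision.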
Radiative transitions between the three low-lying electronic states of oxygen are formally forbidden as electric dipole processes. The two singlet-triplet transitions are forbidden both because of the spin selection rule ΔS = 0 and because of the parity rule that g-g transitions are forbidden. The singlet-singlet transition between the two excited states is spin-allowed but parity-forbidden.
The lower, O2(1Δg) state is commonly referred to as singlet oxygen. The energy difference of 94.3 kJ/mol between ground state and singlet oxygen corresponds to a forbidden singlet-triplet transition in the near-infrared at ~1270 nm. As a consequence, singlet oxygen in the gas phase is relatively long lived (54-86 milliseconds), although interaction with solvents reduces the lifetime to microseconds or even nanoseconds. In 2021, the lifetime of airborne singlet oxygen at air/solid interfaces was measured to be 550 microseconds.
The higher 1Σ state is moderately short lived. In the gas phase, it relaxes primarily to the ground state triplet with a mean lifetime of 11.8 seconds. However in solvents such as CS2 and CCl4, it relaxes to the lower singlet 1Δg in milliseconds due to radiationless decay channels.
Paramagnetism due to orbital angular momentum
Both singlet oxygen states have no unpaired electrons and therefore no net electron spin. The 1Δg is however paramagnetic as shown by the observation of an electron paramagnetic resonance (EPR) spectrum. The paramagnetism of the 1Δg state is due to a net orbital (and not spin) electronic angular momentum. In a magnetic field the degeneracy of the levels is split into two levels with z projections of angular momenta +1ħ and −1ħ around the molecular axis. The magnetic transition between these levels gives rise to the EPR transition.
Production
Various methods for the production of singlet oxygen exist. Irradiation of oxygen gas in the presence of an organic dye as a sensitizer, such as rose bengal, methylene blue, or porphyrins—a photochemical method—results in its production. Large steady state concentrations of singlet oxygen are reported from the reaction of triplet excited state pyruvic acid with dissolved oxygen in water. Singlet oxygen can also be produced by chemical procedures without irradiation. One chemical method involves the decomposition of triethylsilyl hydrotrioxide generated in situ from triethylsilane and ozone.
(C2H5)3SiH + O3 → (C2H5)3SiOOOH → (C2H5)3SiOH + O2(1Δg)
Another method uses a reaction of hydrogen peroxide with sodium hypochlorite in aqueous solution:
H2O2 + NaOCl → O2(1Δg) + NaCl + H2O
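As a sanity check, the stoichiometry of such equations can be verified by counting atoms on each side; a minimal sketch (the parser handles only simple formulas without parentheses):

```python
import re
from collections import Counter

def parse(formula):
    """Count atoms in a simple formula like 'H2O2' (no parentheses)."""
    counts = Counter()
    for elem, num in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[elem] += int(num) if num else 1
    return counts

def balanced(reactants, products):
    """True if both sides of the equation contain the same atoms."""
    left = sum((parse(f) for f in reactants), Counter())
    right = sum((parse(f) for f in products), Counter())
    return left == right

# H2O2 + NaOCl -> O2 + NaCl + H2O
print(balanced(["H2O2", "NaOCl"], ["O2", "NaCl", "H2O"]))  # True
```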
A retro-Diels–Alder reaction of diphenylanthracene peroxide can also yield singlet oxygen, along with diphenylanthracene.
A third method liberates singlet oxygen from phosphite ozonides, such as triphenyl phosphite ozonide, which are generated in situ. Phosphite ozonides decompose to give singlet oxygen:
(RO)3P + O3 → (RO)3PO3
(RO)3PO3 → (RO)3PO + O2(1Δg)
An advantage of this method is that it is amenable to non-aqueous conditions.
Reactions
Because of differences in their electron shells, singlet and triplet oxygen differ in their chemical properties; singlet oxygen is highly reactive. The lifetime of singlet oxygen depends on the medium and pressure. In normal organic solvents, the lifetime is only a few microseconds whereas in solvents lacking C-H bonds, the lifetime can be as long as seconds.
Unlike ground state oxygen, singlet oxygen participates in Diels–Alder [4+2]- and [2+2]-cycloaddition reactions and formal concerted ene reactions (Schenck ene reaction), causing photooxygenation. It oxidizes thioethers to sulfoxides. Organometallic complexes are often degraded by singlet oxygen. With some substrates 1,2-dioxetanes are formed; cyclic dienes such as 1,3-cyclohexadiene form [4+2] cycloaddition adducts.
The [4+2]-cycloaddition between singlet oxygen and furans is widely used in organic synthesis.
Singlet oxygen reacts with alkenes bearing allylic hydrogens, such as citronellol, by abstraction of the allylic proton in an ene-like reaction, yielding the allyl hydroperoxide, R–O–OH (R = alkyl), which can then be reduced to the corresponding allylic alcohol.
In reactions with water, trioxidane, an unusual molecule with three consecutive linked oxygen atoms, is formed.
Biochemistry
In photosynthesis, singlet oxygen can be produced from the light-harvesting chlorophyll molecules. One of the roles of carotenoids in photosynthetic systems is to prevent damage caused by produced singlet oxygen by either removing excess light energy from chlorophyll molecules or quenching the singlet oxygen molecules directly.
In mammalian biology, singlet oxygen is one of the reactive oxygen species, which is linked to oxidation of LDL cholesterol and resultant cardiovascular effects. Polyphenol antioxidants can scavenge and reduce concentrations of reactive oxygen species and may prevent such deleterious oxidative effects.
Ingestion of pigments capable of producing singlet oxygen with activation by light can produce severe photosensitivity of skin (see phototoxicity, photosensitivity in humans, photodermatitis, phytophotodermatitis). This is especially a concern in herbivorous animals (see Photosensitivity in animals).
Singlet oxygen is the active species in photodynamic therapy.
Analytical and physical chemistry
Singlet oxygen luminesces as it decays to the triplet ground state. This phenomenon was first observed in the thermal degradation of the endoperoxide of rubrene.
Hedenbergite
Hedenbergite, CaFeSi2O6, is the iron-rich end member of the pyroxene group, having a monoclinic crystal system. The mineral is extremely rarely found as a pure substance, and usually has to be synthesized in a lab. It was named in 1819 after M.A. Ludwig Hedenberg, who was the first to define hedenbergite as a mineral. Contact metamorphic rocks high in iron are the primary geologic setting for hedenbergite. This mineral is unique because it can be found in chondrites and skarns (calc–silicate metamorphic rocks). Since it is a member of the pyroxene family, there is a great deal of interest in its importance to general geologic processes.
Properties
Hedenbergite has a number of distinctive properties. Its hardness is usually between five and six, with two cleavage planes and conchoidal fracture. Color varies between black, greenish black, and dark brown with a resinous luster. Hedenbergite is part of a pyroxene solid solution series with diopside and augite, and is the iron-rich end member. One of the best diagnostic indicators for hedenbergite is its radiating prisms with a monoclinic crystal system. Hedenbergite is found primarily in metamorphic rocks.
Composition and structure
The pyroxene quadrilateral conveniently records the compositions of the pyroxenes found in igneous rocks, such as diopside, hedenbergite, enstatite, and ferrosilite. Hedenbergite is almost never found isolated. From the chemical formulas above, the main compositional differences are in calcium, magnesium, and iron. D. H. Lindsley and J. L. Munoz (1969) performed experiments to determine which combinations of temperature and pressure stabilize particular mineral assemblages. According to their experiments, at 1000 °C and pressures below two kilobars the stable assemblage is a mixture of hedenbergite, olivine, and quartz. As the pressure increases to twenty kilobars, the assemblage shifts toward clinopyroxenes containing trace amounts of hedenbergite, if any. At 750 °C, the assemblage moves from hedenbergite with olivine and quartz to ferrosilite with a greater amount of hedenbergite. Combining both sets of data shows that the stability of hedenbergite depends more on temperature than on pressure.
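The temperature–pressure behavior described above can be summarized as a small lookup function; the boundary values below are illustrative readings of this text, not Lindsley and Munoz's published phase boundaries:

```python
def stable_assemblage(temp_c, pressure_kbar):
    """Illustrative summary of the Lindsley & Munoz (1969) observations
    as described in the text; thresholds are rough, not published values."""
    if temp_c >= 1000:
        if pressure_kbar < 2:
            return "hedenbergite + olivine + quartz"
        # high pressure drives the assemblage toward clinopyroxenes
        return "clinopyroxene (little or no hedenbergite)"
    # at lower temperatures (~750 C), pressure shifts the assemblage
    # from hedenbergite + olivine + quartz toward ferrosilite
    if pressure_kbar < 2:
        return "hedenbergite + olivine + quartz"
    return "ferrosilite + hedenbergite"

print(stable_assemblage(1000, 1))   # hedenbergite + olivine + quartz
print(stable_assemblage(1000, 20))  # clinopyroxene (little or no hedenbergite)
```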
Effects of chemical composition on elasticity
Pyroxenes are essential to the geologic processes that occur in the mantle and transition zone. In compression experiments, one crystal was oriented along the c axis and another perpendicular to it. The elastic strength of a polyhedron is determined by the cation occupying the central site. As the bond length between cations and anions decreases, the bond strength increases, making the mineral more compact and dense. Substitution between ions such as Ca2+ and Mg2+ has little effect on the resistance to compression, while substitution of Si4+ makes the mineral much harder to compress. Si4+ is inherently stronger than Ca2+ due to its larger charge and electronegativity.
Occurrence in chondrites
Chondrites are meteorites that have experienced very little alteration by melting or differentiation since the formation of the Solar System 4.56 billion years ago. One of the most studied chondrites in existence is the Allende meteorite. Hedenbergite was found to be the most abundant secondary calcium-rich silicate phase within Allende chondrules and is closely associated with other minerals such as sodalite and nepheline. Kimura and Ikeda (1995) also suggest that hedenbergite formation may have resulted from the consumption of CaO and SiO2 as plagioclase decomposed into sodalite and nepheline, as well as from alkali–calcium exchange before the chondrules' incorporation into the parent body.
Occurrence in skarns
Hedenbergite can be found in skarns. A skarn is a metamorphic rock formed by chemical alteration of the original minerals by hydrothermal fluids. Skarns form through large-scale chemical reactions between adjacent lithologies. The Nickel Plate gold skarn deposit of the Hedley District in southern British Columbia is characterized by hedenbergitic pyroxene.
Herrerasaurus
Herrerasaurus is likely a genus of saurischian dinosaur from the Late Triassic period. Measuring long and weighing around , this genus was one of the earliest dinosaurs in the fossil record. Its name means "Herrera's lizard", after the rancher who discovered the first specimen in 1958 in South America. All known fossils of this carnivore have been discovered in the Ischigualasto Formation of Carnian age (Late Triassic according to the ICS, dated to 231.4 million years ago) in northwestern Argentina. The type species, Herrerasaurus ischigualastensis, was described by Osvaldo Reig in 1963 and is the only species assigned to the genus. Ischisaurus and Frenguellisaurus are synonyms.
For many years, the classification of Herrerasaurus was unclear because it was known from very fragmentary remains. It was hypothesized to be a basal theropod, a basal sauropodomorph, a basal saurischian, or not a dinosaur at all but another type of archosaur. However, with the discovery of an almost complete skeleton and skull in 1988, Herrerasaurus has been classified as an early saurischian in most of the phylogenies on the origin and early evolution of dinosaurs.
It is a member of the Herrerasauridae, a family of similar genera that were among the earliest of the dinosaurian evolutionary radiation.
Discovery
Herrerasaurus was named by paleontologist Osvaldo Reig after Victorino Herrera, an Andean goatherd who first noticed its fossils in outcrops near the city of San Juan, Argentina in 1959. These rocks, which later yielded Eoraptor, are part of the Ischigualasto Formation and date from the late Carnian stage of the Late Triassic period. Reig named a second dinosaur from these rocks in the same publication as Herrerasaurus; this dinosaur, Ischisaurus cattoi, is now considered a junior synonym and a juvenile of Herrerasaurus.
Reig believed Herrerasaurus was an early example of a carnosaur, but this was the subject of much debate over the next 30 years, and the genus was variously classified during that time. In 1970, Steel classified Herrerasaurus as a prosauropod. In 1972, Peter Galton classified the genus as not diagnosable beyond Saurischia. Later, using cladistic analysis, some researchers put Herrerasaurus and Staurikosaurus at the base of the dinosaur tree before the separation between ornithischians and saurischians. Several researchers classified the remains as non-dinosaurian.
Two other partial skeletons, with skull material, were named Frenguellisaurus ischigualastensis by Fernando Novas in 1986, but this species too is now thought to be a synonym. Frenguellisaurus ischigualastensis was discovered in 1975, and was described by Novas (1986) who considered it a primitive saurischian, and possibly a theropod. Novas (1992) and Sereno and Novas (1992) examined the Frenguellisaurus remains and found them referable to Herrerasaurus. Ischisaurus cattoi was discovered in 1960 and described by Reig in 1963. Novas (1992) and Sereno and Novas (1992) reviewed its remains and found them also to be referable to Herrerasaurus.
A complete Herrerasaurus skull was found in 1988, by a team of paleontologists led by Paul Sereno. Based on the new fossils, authors such as Thomas Holtz and José Bonaparte classified Herrerasaurus at the base of the saurischian tree before the divergence between prosauropods and theropods. However, Sereno favored classifying Herrerasaurus (and the Herrerasauridae) as primitive theropods. These two classifications have become the most persistent, with Rauhut (2003) and Bittencourt and Kellner (2004) favoring the early theropod hypothesis, and Max Langer (2004), Langer and Benton (2006), and Randall Irmis and his coauthors (2007) favoring the basal saurischian hypothesis. If Herrerasaurus were indeed a theropod, it would indicate that theropods, sauropodomorphs, and ornithischians diverged even earlier than herrerasaurids, before the middle Carnian, and that "all three lineages independently evolved several dinosaurian features, such as a more advanced ankle joint or an open acetabulum". This view is further supported by ichnological records showing large tridactyl (three-toed) footprints that can be attributed only to a theropod dinosaur. These footprints date from the early Carnian Los Rastros Formation in Argentina, which predates Herrerasaurus by several million years.
The study of early dinosaurs such as Herrerasaurus and Eoraptor therefore has important implications for the concept of dinosaurs as a monophyletic group (a group descended from a common ancestor). The monophyly of dinosaurs was explicitly proposed in the 1970s by Galton and Robert T. Bakker, who compiled a list of cranial and postcranial synapomorphies (common anatomical traits derived from the common ancestor). Later authors proposed additional synapomorphies. An extensive study of Herrerasaurus by Sereno in 1992 suggested that of these proposed synapomorphies, only one cranial and seven postcranial features were actually derived from a common ancestor, and that the others were attributable to convergent evolution. Sereno's analysis of Herrerasaurus also led him to propose several new dinosaurian synapomorphies.
Description
Herrerasaurus was a lightly built bipedal carnivore with a long tail and a relatively small head. Adults had skulls up to long and were up to in total length and in weight. Smaller specimens were about long and weighed about .
Herrerasaurus was fully bipedal. It had strong hind limbs with short thighs and rather long feet, indicating that it was likely a swift runner. The foot had five toes, but only the middle three (digits II, III, and IV) bore weight. The outer toes (I and V) were small; the first toe had a small claw. The tail, partially stiffened by overlapping vertebral projections, balanced the body and was also an adaptation for speed. The forelimbs of Herrerasaurus were less than half the length of its hind limbs. The upper arm and forearm were rather short, while the manus (hand) was elongated. The first two fingers and the thumb ended in curved, sharp claws for grasping prey. The fourth and fifth digits were small stubs without claws.
Herrerasaurus displays traits that are found in different groups of dinosaurs, and several traits found in non-dinosaurian archosaurs. Although it shares most of the characteristics of dinosaurs, there are a few differences, particularly in the shape of its hip and leg bones. Its pelvis is like that of saurischian dinosaurs, but it has a bony acetabulum (where the femur meets the pelvis) that was only partially open. The ilium, the main hip bone, is supported by only two sacrals, a basal trait. However, the pubis points backwards, a derived trait as seen in dromaeosaurids and birds. Additionally, the end of the pubis has a booted shape, like those in avetheropods; and the vertebral centra have an hourglass shape as found in Allosaurus.
Herrerasaurus had a long, narrow skull that lacked nearly all the specializations that characterized later dinosaurs, and more closely resembled those of more primitive archosaurs such as Euparkeria. It had five pairs of fenestrae (skull openings) in its skull, two pairs of which were for the eyes and nostrils. Between the eyes and the nostrils were two antorbital fenestrae and a pair of tiny, slit-like holes called promaxillary fenestrae.
Herrerasaurus had a flexible joint in the lower jaw that could slide back and forth to deliver a grasping bite. This cranial specialization is unusual among dinosaurs but has evolved independently in some lizards. The rear of the lower jaw also had fenestrae. The jaws were equipped with large serrated teeth for biting and eating flesh, and the neck was slender and flexible.
According to Novas (1993), Herrerasaurus can be distinguished based on the following features: the presence of a premaxilla-maxilla fenestra, and the dorsal part of laterotemporal fenestra is less than a third as wide as the ventral part; the presence of a ridge on the lateral surface of the jugal bone, and a deeply incised supratemporal fossa that extends across the medial postorbital process; the subquadrate ventral squamosal process has a lateral depression, and the quadratojugal bone overlaps the posterodorsal quadrate face; the pterygoid process of the quadrate has an inturned, trough-shaped ventral margin, and the presence of a slender ribbed posterodorsal dentary process; the surangular bone has a forked anterior process for articulation with the posterodorsal dentary process; the humerus' internal tuberosity is proximally projected and separated from the humeral head by a deep groove (also present in coelophysoids); possesses enlarged hands, which are 60% of the size of the humerus+radius, and the humeral entepicondyle is ridge-like with anterior and posterior depressions; and the posterior border of the ilial peduncle forms a right angle with the dorsal border of the shaft on the ischium.
According to Sereno (1993), Herrerasaurus can be distinguished based on the following features, all of which are unknown in other herrerasaurids: a circular pit is present on the humeral ectepicondyle, a feature also present in Saturnalia; a saddle-shaped ulnar condyle of the humerus, and the articular surface for the ulnare on the ulna is convex; the articular surface of the ulnare is smaller than that of the ulna, a feature unknown in Staurikosaurus and Sanjuansaurus; the centrale is placed distal to the radiale; a broad subnarial process of the premaxilla, and a broad supratemporal depression (noted by Sereno and Novas, 1993); the basal tuber and the occipital condyle are subequal in width (noted by Sereno and Novas, 1993).
Classification
Herrerasaurus was originally considered to be a genus within Carnosauria, which then included forms similar to Megalosaurus and Antrodemus (the latter is probably equivalent to Allosaurus), even though Herrerasaurus lived many millions of years before them and therefore would have retained multiple primitive features. This carnosaurian classification was amended by Rozhdestvensky and Tatarinov in 1964, who classified Herrerasaurus within the family Gryponichidae inside Carnosauria. The same year, Walker published a differing opinion that Herrerasaurus was instead allied with Plateosauridae, although it differed in possessing a pubic boot. Walker also proposed that Herrerasaurus might instead be close to Poposaurus (now considered a pseudosuchian) and the unnamed theropod from the Dockum Group of Texas (now assigned to the rauisuchian Postosuchus). In 1985, Charig noted that Herrerasaurus was of uncertain classification, showing similarities to both "prosauropods" and "carnosaurians". Romer (1966) simply noted that Herrerasaurus was a prosauropod, possibly within Plateosauridae. In the description of Staurikosaurus, Colbert noted many similarities between his taxon and Herrerasaurus, but classified them in separate families, with Herrerasaurus in Teratosauridae. In 1970, Bonaparte also proposed similarities between Herrerasaurus and Staurikosaurus, and while classifying both clearly within Saurischia, he stated that they appeared unplaceable in any existing family. This was further supported by Benedetto in 1973, who named the new family Herrerasauridae for these taxa, classifying it as saurischian, possibly within Theropoda but not in Sauropodomorpha. However, in 1977 Galton proposed that Herrerasauridae included only Herrerasaurus, and found it to be Saurischia incertae sedis.
Brinkman and Sues (1987) proposed that Herrerasaurus was basal to both Ornithischia and Saurischia, while still considering it to be inside Dinosauria. They supported this on the basis that Herrerasaurus has a large pedal digit V and a well-developed medial wall on the acetabulum. Brinkman and Sues considered that Staurikosaurus and Herrerasaurus did not form a true group Herrerasauridae, and that they were instead successively more primitive forms. They also considered the characters used by Benedetto to be invalid, representing only the plesiomorphic state found in both taxa. Novas disagreed in 1992, listing many derived synapomorphies of Herrerasauridae, such as a distinct pubic boot, but still classified them as basal to Ornithischia and Saurischia. Novas defined the family as the least common ancestor of Herrerasaurus and Staurikosaurus and all its descendants. A differing definition of Herrerasauridae, as the most inclusive clade including Herrerasaurus but not Passer domesticus, was first suggested by Sereno (1998), and more closely follows the original inclusion proposed by Benedetto. Another group, Herrerasauria, was named by Galton in 1985, and defined as Herrerasaurus but not Liliensternus or Plateosaurus by Langer (2004), who used the node-based definition for Herrerasauridae.
In a revision of basal Dinosauria, Padian and May (1993) discussed the definition of the clade, and redefined it as the latest common ancestor of Triceratops and birds. They also discussed what this definition would do to the most basal taxa, such as Herrerasauridae, and Eoraptor. Padian and May considered that since both Herrerasauridae and Eoraptor lack many diagnostic features of Saurischia or Ornithischia, that they could not be considered inside Dinosauria.
A later 1994 study by Novas instead classified Herrerasaurus within Dinosauria, and strongly supported its position within Saurischia, as well as provided synapomorphies that it shared with Theropoda. Novas found that the primitive features of lacking a brevis fossa and having only two sacral vertebrae were simply reversals found in the genus. In 1996, Novas went further by supporting a theropod position for Herrerasaurus with a phylogenetic analysis, which placed it closer to Neotheropoda than Eoraptor or Sauropodomorpha. Langer (2004) mentioned that this hypothesis was widely accepted, but that more later authors instead preferred to place Herrerasaurus as well as Eoraptor basal to Theropoda and Sauropodomorpha, a clade called Eusaurischia. Langer (2004) conducted a phylogenetic analysis, and found that it was much more likely that Herrerasaurus was a basal saurischian, than either a theropod or a non-dinosaurian. Langer's proposal was supported by multiple studies until the discovery of Tawa, when Nesbitt et al. conducted a more inclusive analysis, and the resulting cladogram placed Herrerasauridae basal to Eoraptor, but closer to Dilophosaurus than Sauropodomorpha. Unlike Nesbitt, Ezcurra (2010) conducted a phylogenetic analysis to place his new taxon Chromogisaurus, and found that Herrerasauridae was basal to Eusaurischia.
In 2010, Alcocer and Martinez described a new taxon of herrerasaurid, Sanjuansaurus. It could be distinguished from Herrerasaurus based on multiple features. In the phylogenetic analysis, Herrerasaurus, Sanjuansaurus and Staurikosaurus all were in a polytomy, and Herrerasauridae was the most primitive group of saurischian, outside Eusaurischia, Eoraptor and Guaibasaurus. In 2011, Martinez et al. described Eodromaeus, a basal theropod from the same formation as Herrerasaurus. In a phylogenetic analysis, Eoraptor was placed within Sauropodomorpha, Herrerasauridae was placed as the most basal theropods, and Eodromaeus was placed as the next most basal. A more recent analysis, by Bittencourt et al. (2014), placed Herrerasauridae in a polytomy with Theropoda and Sauropodomorpha, with Eoraptor also being in an unresolved position. This cladogram is shown below.
Other members of the clade may include Chindesaurus from the Upper Petrified Forest (Chinle Formation) of Arizona, and possibly Caseosaurus from the Tecovas Formation of the Dockum Group in Texas, although the relationships of these animals are not fully understood, and not all paleontologists agree. Other possible basal theropods, Alwalkeria from the Late Triassic Lower Maleri Formation of India, and Teyuwasu, known from very fragmentary remains from the Late Triassic of Brazil, might be related. Paul (1988) noted that it had been incorrectly suggested that Staurikosaurus pricei was a juvenile Herrerasaurus. This claim was refuted when pelvic bones from a juvenile Herrerasaurus were discovered, which upon examination did not resemble the pelvic bones of Staurikosaurus.
Paleobiology
The teeth of Herrerasaurus indicate that it was a carnivore; its size indicates it would have preyed upon small and medium-sized plant eaters. These might have included other dinosaurs, such as Pisanosaurus, as well as the more plentiful rhynchosaurs and synapsids. Herrerasaurus itself may have been preyed upon by giant "rauisuchians" (loricatans) like Saurosuchus; puncture wounds were found in one skull.
Coprolites (fossilized dung) containing small bones but no trace of plant fragments, discovered in the Ischigualasto Formation, have been assigned to Herrerasaurus based on fossil abundance. Mineralogical and chemical analysis of these coprolites indicates that if the referral to Herrerasaurus was correct, this carnivore could digest bone.
Comparisons between the scleral rings of Herrerasaurus and modern birds and reptiles suggest that it may have been cathemeral, active throughout the day at short intervals.
In a 2001 study conducted by Bruce Rothschild and other paleontologists, 12 hand bones and 20 foot bones referred to Herrerasaurus were examined for signs of stress fracture, but none were found.
PVSJ 407, a Herrerasaurus ischigualastensis, had a pit in a skull bone attributed by Paul Sereno and Novas to a bite. Two additional pits occurred on the splenial. The areas around these pits are swollen and porous, suggesting the wounds were afflicted by a short-lived non-lethal infection. Because of the size and angles of the wound, it is likely that they were obtained in a fight with another Herrerasaurus.
Paleoecology
The holotype of Herrerasaurus (PVL 2566) was discovered in the Cancha de Bochas Member of the Ischigualasto Formation in San Juan, Argentina. It was collected in 1961 by Victorino Herrera, in sediments that were deposited in the Carnian stage of the Triassic period, approximately 231 to 229 million years ago. Over the years, the Ischigualasto Formation produced other fossils ultimately referred to Herrerasaurus. In 1958, A.S. Romer discovered specimen MCZ 7063, originally referred to Staurikosaurus in Carnian sediments. Herrerasaurus specimens PVL 2045 and MLP(4)61, were collected in 1959 and 1960, respectively, in sediments that were deposited in the Norian stage of the Triassic period, approximately 228 to 208 million years ago. However, these specimens are no longer regarded as pertaining to Herrerasaurus. In 1960, Scaglia collected specimen MACN 18.060, originally the holotype of Ischisaurus cattoi, in sediments deposited in the Carnian stage. In 1961, Scaglia collected Herrerasaurus specimen PVL 2558, in the Carnian beds of this formation. In 1990, the Cancha de Bochas Member produced more Herrerasaurus specimens, also from its Carnian beds. Specimen PVSJ 53, originally the holotype of Frenguellisaurus ischigualastensis, was collected by Gargiulo & Oñate in 1975 in sediments that were deposited in the Carnian stage.
Although Herrerasaurus shared the body shape of the large carnivorous dinosaurs, it lived during a time when dinosaurs were small and few. It was the time of non-dinosaurian reptiles, not dinosaurs, and a major turning point in the Earth's ecology. The vertebrate fauna of the Ischigualasto Formation and the slightly later Los Colorados Formation consisted mainly of a variety of crurotarsal archosaurs and synapsids. In the Ischigualasto Formation, dinosaurs constituted only about 10% of the total number of fossils, but by the end of the Triassic Period, dinosaurs were becoming the dominant large land animals, and the other archosaurs and synapsids declined in variety and number.
Studies suggest that the paleoenvironment of the Ischigualasto Formation was a volcanically active floodplain covered by forests and subject to strong seasonal rainfalls. The climate was moist and warm, though subject to seasonal variations. Vegetation consisted of ferns (Cladophlebis), horsetails, and giant conifers (Protojuniperoxylon). These plants formed lowland forests along the banks of rivers. Herrerasaurus remains appear to have been the most common among the carnivores of the Ischigualasto Formation. It lived in the jungles of Late Triassic South America alongside other early dinosaurs, such as Sanjuansaurus, Eoraptor, Panphagia, and Chromogisaurus, as well as rhynchosaurs (Scaphonyx), cynodonts (e.g., Exaeretodon, Ecteninion and Chiniquodon), dicynodonts (Ischigualastia), pseudosuchians (e.g., Saurosuchus, Sillosuchus and Aetosauroides), proterochampsids (e.g., Proterochampsa) and temnospondyls (Pelorocephalus).
Intensive care unit
An intensive care unit (ICU), also known as an intensive therapy unit, intensive treatment unit (ITU), or critical care unit (CCU), is a special department of a hospital or health care facility that provides intensive care medicine.
An intensive care unit (ICU) was defined by the task force of the World Federation of Societies of Intensive and Critical Care Medicine as "an organized system for the provision of care to critically ill patients that provides intensive and specialized medical and nursing care, an enhanced capacity for monitoring, and multiple modalities of physiologic organ support to sustain life during a period of life-threatening organ system insufficiency."
Patients may be referred directly from an emergency department or from a ward if they rapidly deteriorate, or immediately after surgery if the surgery is very invasive and the patient is at high risk of complications.
History
In 1854, Florence Nightingale left for the Crimean War, where triage was used to separate seriously wounded soldiers from those with non-life-threatening conditions. Nightingale provided several simple but powerful interventions: a clean environment, medical equipment, clean water, and fruits. With this work, the mortality rate decreased from 60% to 42%, and then to 2.2%.
In response to a polio epidemic (where many patients required constant ventilation and surveillance), Bjørn Aage Ibsen established the first intensive care unit globally in Copenhagen in 1953.
The first application of this idea in the United States was in 1951 by Dwight Harken, who opened the first such unit in the country that year. Harken's concept of intensive care has been adopted worldwide and has improved the chance of survival for patients. In the 1960s, he developed the first device to help the heart pump, and he also implanted artificial aortic and mitral valves. He continued to pioneer surgical procedures for operating on the heart, and established and worked in several organizations related to the heart.
In 1955, William Mosenthal, a surgeon at the Dartmouth-Hitchcock Medical Center also opened an early intensive care unit. In the 1960s, the importance of cardiac arrhythmias as a source of morbidity and mortality in myocardial infarctions (heart attacks) was recognized. This led to the routine use of cardiac monitoring in ICUs, especially after heart attacks.
Types
Hospitals may have various specialized ICUs that cater to a specific medical requirement or patient.
Equipment and systems
Common equipment in an ICU includes mechanical ventilators to assist breathing through an endotracheal tube or a tracheostomy tube; cardiac monitors for monitoring cardiac condition; equipment for the constant monitoring of bodily functions; a web of intravenous lines, feeding tubes, nasogastric tubes, suction pumps, drains, catheters, and syringe pumps; and a wide array of drugs to treat the primary condition(s) of hospitalization. Medically induced comas, analgesics, and induced sedation are common ICU tools used to reduce pain and prevent secondary infections.
Quality of care
The available data suggest a relation between ICU volume and quality of care for mechanically ventilated patients. After adjustment for severity of illness, demographic variables, and characteristics of different ICUs (including staffing by intensivists), higher ICU staffing was significantly associated with lower ICU and hospital mortality rates. A ratio of two patients to one nurse is recommended for a medical ICU, in contrast to the ratio of 4:1 or 5:1 typically seen on medical floors. This varies from country to country, though; e.g., in Australia and the United Kingdom, most ICUs are staffed on a 2:1 basis (for high-dependency patients who require closer monitoring or more intensive treatment than a hospital ward can offer) or on a 1:1 basis for patients requiring extremely intensive support and monitoring, for example, a patient on multiple vasoactive medications to keep their blood pressure high enough to perfuse tissue. Such a patient may also require multiple machines, such as continuous renal replacement therapy (CRRT), an intra-aortic balloon pump, or ECMO.
International guidelines recommend that every patient be checked for delirium every day (usually twice, or more often as required) using a validated clinical tool. The two most widely used are the Confusion Assessment Method for the ICU (CAM-ICU) and the Intensive Care Delirium Screening Checklist (ICDSC). These tools have been translated into over 20 languages and are used in ICUs globally. Nurses are the largest group of healthcare professionals working in ICUs, and findings have demonstrated that nursing leadership styles have an impact on ICU quality measures, particularly structural and outcome measures.
Operational logistics
In the United States, up to 20% of hospital beds can be labelled as intensive-care beds; in the United Kingdom, intensive care usually comprises only up to 2% of total beds. This disparity is attributed to patients in the UK being admitted to intensive care only when considered the most severely ill.
Intensive care is an expensive healthcare service. A recent study conducted in the United States found that hospital stays involving ICU services were 2.5 times more costly than other hospital stays.
In the United Kingdom in 2003–04, the average cost of funding an intensive care unit was:
£838 per bed per day for a neonatal intensive care unit
£1,702 per bed per day for a pediatric intensive care unit
£1,328 per bed per day for an adult intensive care unit
Remote collaboration systems
Some hospitals have installed teleconferencing systems that allow doctors and nurses at a central facility (either in the same building, at a central location serving several local hospitals, or, for rural locations, at another more urban facility) to collaborate with on-site staff and speak with patients (a form of telemedicine). This is variously called an eICU, virtual ICU, or tele-ICU. Remote staff typically have access to vital signs from live monitoring systems and to electronic health records, giving them a broader view of a patient's medical history. Often bedside and remote staff have met in person and may rotate responsibilities. Such systems benefit intensive care units by ensuring that correct procedures are followed for patients vulnerable to deterioration, and by allowing vital signs to be monitored remotely so that patients who show a significant decrease in stability can be transferred to a larger facility if needed.
Bulldogs are a type of dog that were traditionally used for the blood sports of baiting and dog fighting, but today are kept for other purposes, including as companion dogs, guard dogs, and catch dogs. Bulldogs are typically stocky, powerful, square-built animals with large, strong, brachycephalic-type muzzles. "Bull" is a reference, originating in England, to the sport of bull-baiting, which was a national sport in England between the 13th and 18th centuries. It is believed that bulldogs were developed during the 16th century, in the Elizabethan era, from the larger mastiffs, as smaller, more compact dogs were better suited for baiting.
List of bulldog breeds
Extant breeds
Alano Español (Spanish Bulldog)
Alapaha Blue Blood Bulldog
American Bulldog
Bulldog
Campeiro Bulldog
Continental Bulldog
French Bulldog
Olde English Bulldogge
Perro de Presa Mallorquin
Serrano Bulldog
Extinct breeds
Bullenbeisser (German Bulldog)
Old English Bulldog
Toy Bulldog
Gallery
Pelagic fish live in the pelagic zone of ocean or lake waters—being neither close to the bottom nor near the shore—in contrast with demersal fish that live on or near the bottom, and reef fish that are associated with coral reefs.
The marine pelagic environment is the largest aquatic habitat on Earth, occupying 1,370 million cubic kilometres (330 million cubic miles), and is the habitat for 11% of known fish species. The oceans have a mean depth of . About 98% of the total water volume is below , and 75% is below .
Marine pelagic fish can be divided into coastal (inshore) fish and oceanic (offshore) fish. Coastal pelagic fish inhabit the relatively shallow and sunlit waters above the continental shelf, while oceanic pelagic fish inhabit the vast and deep waters beyond the continental shelf (even though they also may swim inshore).
Pelagic fish range in size from small coastal forage fish, such as herrings and sardines, to large apex predator oceanic fishes, such as bluefin tuna and oceanic sharks. They are usually agile swimmers with streamlined bodies, capable of sustained cruising on long-distance migrations. Many pelagic fish swim in schools weighing hundreds of tonnes. Others, such as the large ocean sunfish, are solitary. There are also freshwater pelagic fish in some of the larger lakes, such as the Lake Tanganyika sardine.
Epipelagic fish
Epipelagic fish inhabit the epipelagic zone, the uppermost layer of the water column, ranging from sea level down to . It is also referred to as the surface waters or the sunlit zone, and includes the photic zone. The photic zone is defined as the surface waters down to the depth where the sunlight is attenuated to 1% of the surface value. This depth depends on how turbid the water is, but can extend to in clear water, coinciding with the epipelagic zone. The photic zone allows sufficient light for phytoplankton to photosynthesize.
A vast habitat for most pelagic fish, the epipelagic zone is well lit so visual predators can use their eyesight, is usually well mixed and oxygenated from wave action, and can be a good habitat for algae to grow. However, it is an almost featureless habitat. This lack of habitat variation results in a lack of species diversity, so the zone supports less than 2% of the world's known fish species. Much of the zone lacks nutrients for supporting fish, so epipelagic fish tend to be found in coastal water above the continental shelves, where land runoff can provide nutrients, or in those parts of the ocean where upwelling moves nutrients into the area.
Epipelagic fish can be divided broadly into small forage fish and larger predator fish that feed on them. Forage fish school and filter feed on plankton. Most epipelagic fish have streamlined bodies capable of sustained cruising on migrations. In general, predatory and forage fish share the same morphological features. Predator fish are usually fusiform with large mouths, smooth bodies, and deeply forked tails. Many use vision to prey on zooplankton or smaller fish, while others filter feed on plankton.
Most epipelagic predator fish and their smaller prey fish are countershaded with silvery colours that reduce visibility by scattering incoming light. The silvering is achieved with reflective fish scales that function as small mirrors. This may give an effect of transparency. At medium depths at sea, light comes from above, so a mirror that is oriented vertically makes animals such as fish invisible from the side.
In the shallower epipelagic waters, the mirrors must reflect a mixture of wavelengths, and the fish accordingly have crystal stacks with a range of different spacings. A further complication for fish with bodies that are rounded in cross-section is that the mirrors would be ineffective if laid flat on the skin, as they would fail to reflect horizontally. The overall mirror effect is achieved with many small reflectors, all oriented vertically.
Although the number of species is limited, epipelagic fishes are abundant. What they lack in diversity they make up for in numbers. Forage fish occur in huge numbers, and large fish that prey on them often are sought after as premier food fish. As a group, epipelagic fishes form the most valuable fisheries in the world.
Many forage fish are facultative predators that can pick individual copepods or fish larvae out of the water column, and then change to filter feeding on phytoplankton when that gives better results energetically. Filter feeding fish usually use long fine gill rakers to strain small organisms from the water column. Some of the largest epipelagic fishes, such as the basking shark and whale shark, are filter feeders, and so are some of the smallest, such as adult sprats and anchovies.
Ocean waters that are exceptionally clear contain little food. Areas of high productivity tend to be somewhat turbid from plankton blooms. These attract the filter feeding plankton eaters, which in turn attract the higher predators. Tuna fishing tends to be optimum when water turbidity, measured by the maximum depth at which a Secchi disc can be seen on a sunny day, is 15 to 35 metres.
Floating objects
Epipelagic fish are fascinated by floating objects. They aggregate in considerable numbers around objects such as drifting flotsam, rafts, jellyfish, and floating seaweed. The objects appear to provide a "visual stimulus in an optical void". Floating objects may offer refuge for juvenile fish from predators. An abundance of drifting seaweed or jellyfish can result in significant increases in the survival rates of some juvenile species.
Many coastal juveniles use seaweed for the shelter and the food that is available from invertebrates and other fish associated with it. Drifting seaweed, particularly the pelagic Sargassum, provide a niche habitat with its own shelter and food, and even supports its own unique fauna, such as the sargassum fish. One study, off Florida, found 54 species from 23 families living in flotsam from Sargassum mats. Jellyfish also are used by juvenile fish for shelter and food, even though jellyfish can prey on small fish.
Mobile oceanic species such as tuna can be captured by travelling long distances in large fishing vessels. A simpler alternative is to leverage off the fascination fish have with floating objects. When fishermen use such objects, they are called fish aggregating devices (FADs). FADs are anchored rafts or objects of any type, floating on the surface or just below it. Fishermen in the Pacific and Indian oceans set up floating FADs, assembled from all sorts of debris, around tropical islands, and then use purse seines to capture the fish attracted to them.
A study using sonar in French Polynesia, found large shoals of juvenile bigeye tuna and yellowfin tuna aggregated closest to the devices, 10 to 50 m. Farther out, 50 to 150 m, was a less dense group of larger yellowfin and albacore tuna. Yet farther out, to 500 m, was a dispersed group of various large adult tuna. The distribution and density of these groups was variable and overlapped. The FADs also were used by other fish, and the aggregations dispersed when it was dark.
Larger fish, even predator fish such as the great barracuda, often attract a retinue of small fish that accompany them in a strategically safe way. Skindivers who remain for long periods in the water also often attract a retinue of fish, with smaller fishes coming in close and larger fishes observing from a greater distance. Marine turtles, functioning as a mobile shelter for small fish, can be impaled accidentally by a swordfish trying to catch the fish.
Coastal fish
Coastal fish (also called neritic or inshore fish) inhabit the waters near the coast and above the continental shelf. Since the continental shelf is usually less than 200 metres deep, it follows that coastal fish that are not demersal fish, are usually epipelagic fish, inhabiting the sunlit epipelagic zone.
Coastal epipelagic fish are among the most abundant in the world. They include forage fish as well as the predator fish that feed on them. Forage fish thrive in those inshore waters where high productivity results from the upwelling and shoreline run off of nutrients. Some are partial residents that spawn in streams, estuaries, and bays, but most complete their life cycle in the zone.
Oceanic fish
Oceanic fish (also called open ocean or offshore fish) live in the waters that are not above the continental shelf. Oceanic fish can be contrasted with coastal fish, who do live above the continental shelf. However, the two types are not mutually exclusive, since there are no firm boundaries between coastal and ocean regions, and many epipelagic fish move between coastal and oceanic waters, particularly in different stages in their life cycle.
Oceanic epipelagic fish can be true residents, partial residents, or accidental residents. True residents live their entire life in the open ocean. Only a few species are true residents, such as tuna, billfish, flying fish, sauries, pilotfish, remoras, dolphinfish, ocean sharks, and ocean sunfish. Most of these species migrate back and forth across open oceans, rarely venturing over continental shelves. Some true residents associate with drifting jellyfish or seaweeds.
Partial residents occur in three groups: species that live in the zone only when they are juveniles (drifting with jellyfish and seaweeds); species that live in the zone only when they are adults (salmon, flying fish, dolphin, and whale sharks); and deep water species that make nightly migrations up into the surface waters (such as the lanternfish). Accidental residents occur occasionally when adults and juveniles of species from other environments are carried accidentally into the zone by currents.
Deep water fish
In the deep ocean, the waters extend far below the epipelagic zone and support very different types of pelagic fishes adapted to living in these deeper zones.
In deep water, marine snow is a continuous shower of mostly organic detritus falling from the upper layers of the water column. Its origin lies in activities within the productive photic zone. Marine snow includes dead or dying plankton, protists (diatoms), fecal matter, sand, soot, and other inorganic dust. The "snowflakes" grow over time and may reach several centimetres in diameter, travelling for weeks before reaching the ocean floor. However, most organic components of marine snow are consumed by microbes, zooplankton, and other filter feeding animals within the first 1,000 metres of their journey, that is, within the epipelagic zone. In this way marine snow can be considered the foundation of deep-sea mesopelagic and benthic ecosystems: As sunlight cannot reach them, deep-sea organisms rely heavily on marine snow as an energy source.
Some deep-sea pelagic groups, such as the lanternfish, ridgehead, marine hatchetfish, and lightfish families are sometimes termed pseudoceanic because, rather than having an even distribution in open water, they occur in significantly higher abundances around structural oases, notably seamounts, and over continental slopes. The phenomenon is explained by the likewise abundance of prey species that also are attracted to the structures.
The fish in the different pelagic and deep water benthic zones are physically structured, and behave, in ways that differ markedly from each other. Groups of coexisting species within each zone all seem to operate in similar ways, such as the small mesopelagic vertically migrating plankton-feeders, the bathypelagic anglerfishes, and the deep water benthic rattails.
Ray finned species, with spiny fins, are rare among deep sea fishes, which suggests that deep sea fish are ancient and so well adapted to their environment that invasions by more modern fishes have been unsuccessful. The few ray fins that do exist are mainly in the Beryciformes and Lampriformes, which also are ancient forms. Most deep sea pelagic fishes belong to their own orders, suggesting a long evolution in deep sea environments. In contrast, deep water benthic species are in orders that include many related shallow water fishes.
Many species move daily between zones in vertical migrations. In the following table, they are listed in the middle or deeper zone where they regularly are found.
Mesopelagic fish
Below the epipelagic zone, conditions change rapidly. Between 200 metres and approximately 1000 metres, light continues to fade until darkness is nearly complete. Temperatures fall through a thermocline to temperatures between and . This is the twilight or mesopelagic zone. Pressure continues to increase, at the rate of one atmosphere every 10 metres, while nutrient concentrations fall, along with dissolved oxygen and the rate at which the water circulates.
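The "one atmosphere every 10 metres" rule of thumb mentioned above can be turned into a quick estimate of total ambient pressure at mesopelagic depths. The sketch below is a simplification (it treats seawater density as constant and ignores the small variation with salinity and temperature):

```python
# Approximate total ambient pressure at depth, using the rule of thumb
# from the text: pressure rises by ~1 atmosphere per 10 m of seawater,
# on top of the 1 atm already present at the surface.
def pressure_atm(depth_m: float) -> float:
    """Total pressure in atmospheres at a given depth (1 atm at the surface)."""
    return 1.0 + depth_m / 10.0

# The mesopelagic zone spans roughly 200-1000 m:
for depth in (200, 500, 1000):
    print(f"{depth:>5} m: ~{pressure_atm(depth):.0f} atm")
# → 200 m: ~21 atm, 500 m: ~51 atm, 1000 m: ~101 atm
```

So a fish at the bottom of the mesopelagic zone experiences pressure roughly a hundred times that at the surface.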
Sonar operators, using the sonar technology developed during World War II, were puzzled by what appeared to be a false sea floor 300–500 metres deep at day, and less deep at night. This turned out to be due to millions of marine organisms, most particularly small mesopelagic fish, with swimbladders that reflected the sonar.
Mesopelagic organisms migrate into shallower water at dusk to feed on plankton. The layer is deeper when the moon is out, and may move higher when the sky is dark. This phenomenon has come to be known as the deep scattering layer.
Most mesopelagic fish make daily vertical migrations, moving each night into the epipelagic zone, often following similar migrations of zooplankton, and returning to the depths for safety during the day. These vertical migrations occur over hundreds of meters.
These fish have muscular bodies, ossified bones, scales, well developed gills and central nervous systems, and large hearts and kidneys. Mesopelagic plankton feeders have small mouths with fine gill rakers, while the piscivores have larger mouths and coarser gill rakers.
Vertically migratory fish have swimbladders. The fish inflates its swimbladder to move up; given the high pressures in the mesopelagic zone, this requires significant energy. As the fish ascends, the pressure falls and the gas in the swimbladder expands, so gas must be released to prevent the bladder from bursting. To return to the depths, the swimbladder is deflated. The migration takes these fish through the thermocline, where the temperature varies between 10 and 20 °C, so they display considerable temperature tolerance.
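The energetic cost of this migration follows from elementary gas physics. A minimal sketch, assuming an ideal gas at constant temperature (Boyle's law, P₁V₁ = P₂V₂) and the ~1 atm per 10 m pressure rule:

```python
# How much a fixed quantity of swimbladder gas would change volume
# between depths, assuming Boyle's law (isothermal ideal gas) and the
# ~1 atm per 10 m of seawater rule of thumb. A simplification only.
def pressure_atm(depth_m: float) -> float:
    return 1.0 + depth_m / 10.0

def volume_ratio(from_depth_m: float, to_depth_m: float) -> float:
    """Factor by which the gas changes volume on moving between depths
    (>1 means it expands)."""
    return pressure_atm(from_depth_m) / pressure_atm(to_depth_m)

# Ascending from 500 m to the surface: the gas would expand ~51-fold
# unless vented, which is why the bladder must be deflated on ascent.
print(round(volume_ratio(500, 0), 1))  # → 51.0
```

Conversely, descending from the surface back to 500 m compresses the gas to about 1/51 of its volume, which is why re-inflating the bladder against mesopelagic pressures is so costly.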
Mesopelagic fish are adapted for an active life under low light conditions. Most of them are visual predators with large eyes. Some of the deeper water fish such as the Telescopefish have tubular eyes with big lenses and only rod cells that look upward. These give binocular vision and great sensitivity to small light signals. This adaptation gives improved terminal vision at the expense of lateral vision, and allows the predator to pick out squid, cuttlefish, and smaller fish that are silhouetted above them.
Mesopelagic fish usually lack defensive spines, and use colour for camouflage. Ambush predators are dark, black or red. Since the longer, red, wavelengths of light do not reach the deep sea, red effectively functions the same as black. Migratory forms use countershaded silvery colours. On their bellies, they often display photophores producing low grade light. For a predator from below, looking upward, this bioluminescence camouflages the silhouette of the fish. However, some of these predators have yellow lenses that filter the (red deficient) ambient light, leaving the bioluminescence visible.
The brownsnout spookfish is a species of barreleye and is the only vertebrate known to employ a mirror, as opposed to a lens, to focus an image in its eyes.
Sampling via deep trawling indicates that lanternfish account for as much as 65% of all deep sea fish biomass. Indeed, lanternfish are among the most widely distributed, populous, and diverse of all vertebrates, playing an important ecological role as prey for larger organisms. The estimated global biomass of lanternfish is 550–660 million tonnes, several times the entire world fisheries catch. Lanternfish also account for much of the biomass responsible for the deep scattering layer of the world's oceans. Sonar reflects off the millions of lanternfish swim bladders, giving the appearance of a false bottom.
The 2010 Malaspina Circumnavigation Expedition traveled 60,000 km, undertaking acoustic observations. It reported that mesopelagic biomass was 10 billion tonnes or more (10x prior estimates), comprising about 90 percent of all ocean fish biomass. Estimates of how much carbon these fish sequester remained highly uncertain as of 2024.
Mesopelagic fish do not constitute a major fishery as of 2024. Initial efforts in Iceland, Norway, and the Soviet Union did not create a commercial industry. The European Union funded the MEESO project to study abundance and fishing technologies for key mesopelagic species. To date, fish that appeal to the human palate have not been identified, leading harvesters to focus on animal feed markets instead.
Bigeye tuna are an epipelagic/mesopelagic species that is carnivorous, eating other fish. Satellite tagging has shown that bigeye tuna often spend prolonged periods cruising deep below the surface during the daytime, sometimes making dives as deep as . These movements are thought to be in response to the vertical migrations of prey organisms in the deep scattering layer.
Bathypelagic fish
Below the mesopelagic zone it is pitch dark. This is the midnight or bathypelagic zone, extending from 1000 m to the bottom deep water benthic zone. If the water is exceptionally deep, the pelagic zone below sometimes is called the lower midnight or abyssopelagic zone.
Conditions are somewhat uniform throughout these zones, the darkness is complete, the pressure is crushing, and temperatures, nutrients, and dissolved oxygen levels are all low.
Bathypelagic fish have special adaptations to cope with these conditions – they have slow metabolisms and unspecialized diets, being willing to eat anything that comes along. They prefer to sit and wait for food rather than waste energy searching for it. The behaviour of bathypelagic fish can be contrasted with that of mesopelagic fish: mesopelagic fish are often highly mobile, whereas bathypelagic fish are almost all lie-in-wait predators, normally expending little energy in movement.
The dominant bathypelagic fishes are small bristlemouth and anglerfish; fangtooth, viperfish, daggertooth, and barracudina are also common. These fishes are small, many about long, and not many longer than . They spend most of their time waiting patiently in the water column for prey to appear or to be lured by their phosphors. What little energy is available in the bathypelagic zone filters from above in the form of detritus, faecal material, and the occasional invertebrate or mesopelagic fish. About 20% of the food that has its origins in the epipelagic zone falls down to the mesopelagic zone, but only about 5% filters down to the bathypelagic zone.
Bathypelagic fish are sedentary, adapted to outputting minimum energy in a habitat with very little food or available energy, not even sunlight, only bioluminescence. Their bodies are elongated with weak, watery muscles and skeletal structures. Since so much of the fish is water, they are not compressed by the great pressures at these depths. They often have extensible, hinged jaws with recurved teeth. They are slimy, without scales. The central nervous system is confined to the lateral line and olfactory systems, the eyes are small and may not function, and gills, kidneys and hearts, and swimbladders are small or missing.
These are the same features found in fish larvae, which suggests that during their evolution, bathypelagic fish have acquired these features through neoteny. As with larvae, these features allow the fish to remain suspended in the water with little expenditure of energy.
Despite their ferocious appearance, these beasts of the deep are mostly miniature fish with weak muscles, and are too small to represent any threat to humans.
The swimbladders of deep sea fish are either absent or scarcely operational, and bathypelagic fish do not normally undertake vertical migrations. Filling bladders at such great pressures incurs huge energy costs. Some deep sea fishes have swimbladders that function while they are young and inhabit the upper epipelagic zone, but they wither or fill with fat when the fish move down to their adult habitat.
The most important sensory systems are usually the inner ear, which responds to sound, and the lateral line, which responds to changes in water pressure. The olfactory system also can be important for males who find females by smell.
Bathypelagic fish are black, or sometimes red, with few photophores. When photophores are used, it is usually to entice prey or attract a mate. Because food is so scarce, bathypelagic predators are not selective in their feeding habits, but grab whatever comes close enough. They accomplish this by having a large mouth with sharp teeth for grabbing large prey and overlapping gill rakers that prevent small prey that have been swallowed from escaping.
It is not easy to find a mate in this zone. Some species depend on bioluminescence. Others are hermaphrodites, which doubles their chances of producing both eggs and sperm when an encounter occurs. The female anglerfish releases pheromones to attract tiny males. When a male finds her, he bites onto her and never lets go. When a male of the anglerfish species Haplophryne mollis bites into the skin of a female, he releases an enzyme that digests the skin of his mouth and her body, fusing the pair to the point where the two circulatory systems join up. The male then atrophies into nothing more than a pair of gonads. This extreme sexual dimorphism ensures that, when the female is ready to spawn, she has a mate immediately available.
Many animal forms other than fish live in the bathypelagic zone, such as squid, large whales, octopuses, sponges, brachiopods, sea stars, and echinoids, but this zone is difficult for fish to live in.
Demersal fish
Demersal fish live on or near the bottom of the sea. Demersal fish are found by the seafloor in coastal areas on the continental shelf, and in the open ocean they are found along the outer continental margin on the continental slope and the continental rise. They are not generally found at abyssopelagic or hadopelagic depths or on the abyssal plain. They occupy a range of seafloors consisting of mud, sand, gravel, or rocks.
In deep waters, the fishes of the demersal zone are active and relatively abundant, compared to fishes of the bathypelagic zone.
Rattails and brotulas are common, and other well-established families are eels, eelpouts, hagfishes, greeneyes, batfishes, and lumpfishes.
The bodies of deep water benthic fishes are muscular with well developed organs. In this way they are closer to mesopelagic fishes than bathypelagic fishes. In other ways, they are more variable. Photophores are usually absent; eyes and swimbladders range from absent to well developed. They vary in size, with larger species greater than one metre not uncommon.
Deep sea benthic fish are usually long and narrow. Many are eels or shaped like eels. This may be because long bodies have long lateral lines. Lateral lines detect low-frequency sounds, and some benthic fishes appear to have muscles that drum such sounds to attract mates. Smell is also important, as indicated by the rapidity with which benthic fish find traps baited with bait fish.
The main diet of deep sea benthic fish is invertebrates of the deep sea benthos and carrion. Smell, touch, and lateral line sensitivities seem to be the main sensory devices for locating these.
Deep sea benthic fish can be divided into strictly benthic fish and benthopelagic fish. Usually, strictly benthic fish are negatively buoyant, while benthopelagic fish are neutrally buoyant. Strictly benthic fish stay in constant contact with the bottom. They either lie in wait as ambush predators or move actively over the bottom in search for food.
Benthopelagic fish
Benthopelagic fish inhabit the water just above the bottom, feeding on benthos and benthopelagic zooplankton. Most demersal fish are benthopelagic.
They can be divided into flabby or robust body types. Flabby benthopelagic fishes resemble bathypelagic fishes: they have a reduced body mass and low metabolic rates, expending minimal energy as they lie in wait to ambush prey. An example of a flabby fish is the cusk-eel Acanthonus armatus, a predator with a huge head and a body that is 90% water. This fish has the largest ears (otoliths) and the smallest brain in relation to its body size of all known vertebrates.
Robust benthopelagic fish are muscular swimmers that actively cruise the bottom searching for prey. They may live around features, such as seamounts, which have strong currents. Examples are the orange roughy and Patagonian toothfish. Because these fish were once abundant, and because their robust bodies are good to eat, these fish have been harvested commercially.
Benthic fish
Benthic fish are not pelagic fish, but they are discussed here briefly, by way of completeness and contrast.
Some fishes do not fit into the above classification. For example, the family of nearly blind spiderfishes, common and widely distributed, feed on benthopelagic zooplankton. Yet they are strictly benthic fish, since they stay in contact with the bottom. Their fins have long rays they use to "stand" on the bottom while they face the current and grab zooplankton as it passes by.
The deepest-living fish known, the strictly benthic Abyssobrotula galatheae, eel-like and blind, feeds on benthic invertebrates.
At great depths, food scarcity and extreme pressure work to limit the survivability of fish. The deepest point of the ocean is about . Bathypelagic fishes are not normally found below . The greatest depth recorded for a benthic fish is . It may be that extreme pressures interfere with essential enzyme functions.
Benthic fishes are more diverse and are likely to be found on the continental slope, where there is habitat diversity and often, food supplies. Approximately 40% of the ocean floor consists of abyssal plains, but these flat, featureless regions are covered with sediment and largely devoid of benthic life (benthos). Deep sea benthic fishes are more likely to associate with canyons or rock outcroppings among the plains, where invertebrate communities are established. Undersea mountains (seamounts) can intercept deep sea currents and cause productive upwellings that support benthic fish. Undersea mountain ranges may separate underwater regions into different ecosystems.
Pelagic fisheries
Forage fish
Small pelagic fish are usually forage fish that are hunted by larger pelagic fish and other predators. Forage fish filter feed on plankton and are usually less than long. They often stay together in schools and may migrate large distances between spawning grounds and feeding grounds. They are found particularly in upwelling regions around the northeast Atlantic, off the coast of Japan, and off the west coasts of Africa and the Americas. Forage fish are generally short-lived, and their stocks fluctuate markedly over the years.
Herring are found in the North Sea and the North Atlantic at depths to . Important herring fisheries have existed in these areas for centuries. Herring of different sizes and growth rates belong to different populations, each of which has its own migration routes. When spawning, a female produces from 20,000 to 50,000 eggs. After spawning, the herrings are depleted of fat and migrate back to feeding grounds rich in plankton. Around Iceland, three separate populations of herring were traditionally fished. These stocks collapsed in the late 1960s, although two have since recovered. After the collapse, Iceland turned to capelin, which now accounts for about half of Iceland's total catch.
Blue whiting are found in the open ocean and above the continental slope at depths between 100 and 1000 meters. They follow the vertical migrations of the zooplankton they feed on, moving toward the bottom during the day and toward the surface at night.
Traditional fisheries for anchovies and sardines also have operated in the Pacific, the Mediterranean, and the southeast Atlantic. The world annual catch of forage fish in recent years has been approximately 22 million tonnes, or one quarter of the world's total catch.
Predator fish
Medium size pelagic fishes include trevally, barracuda, flying fish, bonito, mahi mahi, and coastal mackerel. Many of these fish hunt forage fish, but are, in turn, hunted by yet larger pelagic fish. Nearly all fish are predator fish to some degree, and apart from the top predators, the distinction between predator fish and prey or forage fish is somewhat artificial.
Around Europe there are three populations of coastal mackerel. One population migrates to the North Sea, another stays in the waters of the Irish Sea, and the third population migrates southward along the west coast of Scotland and Ireland. The cruise speed of the mackerel is an impressive 10 kilometres per hour.
Many large pelagic fish are oceanic nomadic species that undertake long offshore migrations. They feed on small pelagic forage fish, as well as medium-sized pelagic fish. At times, they follow their schooling prey, and many species form schools themselves.
Examples of larger pelagic fish are tuna, billfish, king mackerel, sharks, and large rays.
Tuna in particular are of major importance to commercial fisheries. Although tuna migrate across oceans, trying to find them there is not the usual approach. Tuna tend to congregate in areas where food is abundant: along the boundaries of currents, around islands, near seamounts, and in some areas of upwelling along continental slopes. Tuna are captured by several methods: purse seine vessels enclose an entire surface school with special nets; pole and line vessels use poles baited with smaller pelagic fish; and rafts called fish aggregating devices are set out, because tuna, like some other pelagic fish, tend to congregate under floating objects.
Other large pelagic fish are premier game fish, particularly marlin and swordfish.
Productivity
Upwelling occurs both along coastlines and in midocean, when a collision of deep ocean currents brings cold water that is rich in nutrients to the surface. These upwellings support blooms of phytoplankton, which in turn support zooplankton and many of the world's main fisheries. If the upwelling fails, then fisheries in the area fail.
In the 1960s the Peruvian anchoveta fishery was the world's largest fishery. The anchoveta population was greatly reduced during the 1972 El Niño event, when warm water drifted over the cold Humboldt Current, as part of a 50-year cycle, deepening the thermocline. The upwelling stopped and phytoplankton production plummeted, as did the anchoveta population, and millions of seabirds dependent on the anchoveta died. Since the mid-1980s, the upwelling has resumed, and Peruvian anchoveta catch levels have returned to 1960s levels.
Off Japan, the collision of the Oyashio Current with the Kuroshio Current produces nutrient-rich upwellings. Cyclic changes in these currents resulted in a decline in the sardine (Sardinops melanosticta) populations. Fisheries catches fell from 5 million tonnes in 1988 to 280 thousand tonnes in 1998. As a further consequence, Pacific bluefin tuna stopped moving into the region to feed.
Ocean currents can shape how fish are distributed, both concentrating and dispersing them. Adjacent ocean currents can define distinct, if shifting, boundaries. These boundaries can even be visible, but usually their presence is marked by rapid changes in salinity, temperature, and turbidity.
For example, in the Asian northern Pacific, albacore are confined between two current systems. The northern boundary is determined by the cold North Pacific Current and the southern boundary is determined by the North Equatorial Current. To complicate things, their distribution is further modified within the area defined by the two current systems by another current, the Kuroshio Current, whose flows fluctuate seasonally.
Epipelagic fish often spawn in an area where the eggs and larvae drift downstream into suitable feeding areas and eventually drift into adult feeding areas.
Islands and banks can interact with currents and upwellings in a manner that results in areas of high ocean productivity. Large eddies can form downcurrent or downwind from islands, concentrating plankton. Banks and reefs can intercept deep currents that upwell.
Scombrids
Highly migratory species
Epipelagic fish generally move long distances between feeding and spawning areas, or as a response to changes in the ocean. Large ocean predators, such as salmon and tuna, can migrate thousands of kilometres, crossing oceans.
In a 2001 study, the movements of Atlantic bluefin tuna from an area off North Carolina were studied with the help of special popup tags. When attached to a tuna, these tags monitored the movements of the tuna for about a year, then detached and floated to the surface where they transmitted their information to a satellite. The study found that the tuna had four different migration patterns. One group confined itself to the western Atlantic for a year. Another group also stayed mainly in the western Atlantic, but migrated to the Gulf of Mexico for spawning. A third group moved across the Atlantic Ocean and back again. The fourth group crossed to the eastern Atlantic and then moved into the Mediterranean Sea for spawning. The study indicates that, while there is some differentiation by spawning areas, there is essentially only one population of Atlantic bluefin tuna, made up of intermixing groups that, between them, use all of the north Atlantic Ocean, the Gulf of Mexico, and the Mediterranean Sea.
The term highly migratory species (HMS) is a legal term that has its origins in Article 64 of the United Nations Convention on the Law of the Sea (UNCLOS).
The highly migratory species include: tuna and tuna-like species (albacore, Atlantic bluefin, bigeye tuna, skipjack, yellowfin, blackfin, little tunny, Pacific bluefin, southern bluefin and bullet), pomfret, marlin, sailfish, swordfish, saury and oceangoing sharks, as well as mammals such as dolphins, and other cetaceans.
Essentially, the highly migratory species coincide with the larger of the "large pelagic fish" discussed in the previous section, if cetaceans are added and some commercially unimportant fish, such as the sunfish, are excluded. These are high trophic level species that undertake migrations of significant but variable distances across oceans, for feeding (often on forage fish) or for reproduction, and that also have wide geographic distributions. Thus, these species are found both inside exclusive economic zones and in the high seas outside them. They are pelagic species, which means they mostly live in the open ocean and do not live near the sea floor, although they may spend part of their life cycle in nearshore waters.
Capture production
According to the Food and Agriculture Organization (FAO), the world harvest in 2005 consisted of 93.2 million tonnes captured by commercial fishing in wild fisheries. Of this total, about 45% were pelagic fish. The following table shows the world capture production in tonnes.
Threatened species
In 2009, the International Union for Conservation of Nature (IUCN) produced the first red list for threatened oceanic sharks and rays. They claim that approximately one third of open ocean sharks and rays are under threat of extinction. There are 64 species of oceanic sharks and rays on the list, including hammerheads, giant devil rays, and porbeagle.
Oceanic sharks are captured incidentally by swordfish and tuna high seas fisheries. In the past there were few markets for sharks, which were regarded as worthless bycatch. Now sharks are being increasingly targeted to supply emerging Asian markets, particularly for shark fins, which are used in shark fin soup.
The northwest Atlantic Ocean shark populations are estimated to have declined by 50% since the early 1970s. Oceanic sharks are vulnerable because they do not produce many young, and the young can take decades to mature.
In parts of the world the scalloped hammerhead shark has declined by 99% since the late 1970s. Its Red List status is globally endangered, meaning it faces a very high risk of extinction.
Greenschist

Greenschists are metamorphic rocks that formed under the lowest temperatures and pressures usually produced by regional metamorphism, typically and 2–10 kilobars. Greenschists commonly have an abundance of green minerals such as chlorite, serpentine, and epidote, and platy minerals such as muscovite and platy serpentine. The platiness gives the rock schistosity (a tendency to split into layers). Other common minerals include quartz, orthoclase, talc, carbonate minerals and amphibole (actinolite).
Greenschist is a general field petrologic term for metamorphic or altered mafic volcanic rock. In Europe, the term prasinite is sometimes used. A greenstone is sometimes a greenschist but can also be a rock type without any schistosity, especially metabasalt (spilite). However, basalts may remain quite black if primary pyroxene does not revert to chlorite or actinolite. To qualify for the name, a rock must also exhibit schistosity or some foliation or layering. The rock is derived from basalt, gabbro or similar rocks containing sodium-rich plagioclase feldspar, chlorite, epidote and quartz.
Petrology
Greenschist is defined by the presence of the minerals chlorite, epidote, or actinolite, which give the rock its green color. Greenschists also have pronounced schistosity. Schistosity is a thin layering of the rock produced by metamorphism (a foliation) that permits the rock to easily be split into flakes or slabs less than thick. This arises from the presence of chlorite or other platy minerals that become aligned in layers during metamorphism.
Greenschist may also contain albite and often has a lepidoblastic, nematoblastic or schistose texture defined primarily by chlorite and actinolite. Grain size is rarely coarse, due primarily to the mineral assemblage. Chlorite and to a lesser extent actinolite typically exhibit small, flat or acicular crystal habits.
Greenstone is a field term for any massive mafic volcanic rock that has been altered to a greenish color by the formation of the same minerals that give the green color to greenschist, whether or not the rock displays schistosity. The term has also been used to describe any igneous intrusions into the Coal Measures Group of Scotland, to describe chamosite-rich mudstone of Early Jurassic age in Great Britain, or for nephrite or other greenish gemstones.
Greenschist facies
Greenschist facies is determined by the particular temperature and pressure conditions required to metamorphose basalt to form the typical greenschist facies minerals chlorite, actinolite, and albite. Greenschist facies results from low temperature, moderate pressure metamorphism. Metamorphic conditions which create typical greenschist facies assemblages are called the Barrovian Facies Sequence and the lower-pressure Abukuma Facies Series. Temperatures of approximately and depths of about are the typical envelope of greenschist facies rocks.
The equilibrium mineral assemblage of rocks subjected to greenschist facies conditions depends on primary rock composition.
Basalt: chlorite + actinolite + albite +/- epidote
Ultramafic: chlorite + serpentine +/- talc +/- tremolite +/- diopside +/- brucite
Pelites: quartz +/- albite +/- k-feldspar +/- chlorite, muscovite, garnet, pyrophyllite +/- graphite
Calc-silicates: calcite +/- dolomite +/- quartz +/- micas, scapolite, wollastonite, etc.
In greater detail, the greenschist facies is subdivided into subgreenschist, lower greenschist and upper greenschist. Lower temperatures are transitional with and overlap the prehnite-pumpellyite facies, and higher temperatures overlap with and include sub-amphibolite facies.
If burial continues along Barrovian Sequence metamorphic trajectories, greenschist facies gives rise to amphibolite facies assemblages, dominated by amphibole, and eventually to granulite facies. Lower-pressure metamorphism, normally contact metamorphism, produces albite-epidote hornfels, while higher pressure at great depth produces eclogite.
Oceanic basalts in the vicinity of mid-ocean ridges typically exhibit sub-greenschist alteration. The greenstone belts of the various Archean cratons are commonly altered to the greenschist facies. These ancient rocks are noted as host rocks for a variety of ore deposits in Australia, Namibia and Canada.
Greenschist-like rocks can also be formed under blueschist facies conditions if the original rock (protolith) contains enough magnesium. This explains the scarcity of blueschist preserved from before the Neoproterozoic Era 1000 Ma ago when the Earth's oceanic crust contained more magnesium than today's oceanic crust.
Use
Europe
In Minoan Crete, greenschist and blueschist were used to pave streets and courtyards between 1650 and 1600 BC. These rocks were likely quarried in Agia Pelagia on the north coast of central Crete.
Across Europe, greenschist rocks have been used to make axes. Several sites, including Great Langdale in England, have been identified.
Eastern North America
A form of chlorite schist was popular in prehistoric Native American communities for the production of axes and celts, as well as ornamental items. In the Middle Woodland period, greenschist was one of the many trade items that were part of the Hopewell culture exchange network, sometimes transported over thousands of kilometers.
During the time of the Mississippian culture, the polity of Moundville apparently had some control over the production and distribution of greenschist. The Moundville source has been shown to be from two localities in the Hillabee Formation of central and eastern Alabama.
Helmholtz decomposition

In physics and mathematics, the Helmholtz decomposition theorem or the fundamental theorem of vector calculus states that certain differentiable vector fields can be resolved into the sum of an irrotational (curl-free) vector field and a solenoidal (divergence-free) vector field. In physics, often only the decomposition of sufficiently smooth, rapidly decaying vector fields in three dimensions is discussed. It is named after Hermann von Helmholtz.
Definition
For a vector field F defined on a domain Ω, a Helmholtz decomposition is a pair of vector fields G and R such that:
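In conventional notation (symbols chosen here for illustration; sign conventions vary between texts), the defining equations read:

```latex
\mathbf{F} \;=\; \mathbf{G} + \mathbf{R},
\qquad
\mathbf{G} \;=\; -\nabla\Phi,
\qquad
\nabla\cdot\mathbf{R} \;=\; 0 .
```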
Here, Φ is a scalar potential, ∇Φ is its gradient, and ∇·R is the divergence of the vector field R. The irrotational vector field G is called a gradient field and R is called a solenoidal field or rotation field. This decomposition does not exist for all vector fields and is not unique.
History
The Helmholtz decomposition in three dimensions was first described in 1849 by George Gabriel Stokes for a theory of diffraction. Hermann von Helmholtz published his paper on some basic hydrodynamic equations in 1858, as part of his research on Helmholtz's theorems describing the motion of fluid in the vicinity of vortex lines. Their derivation required the vector fields to decay sufficiently fast at infinity. Later, this condition could be relaxed, and the Helmholtz decomposition could be extended to higher dimensions. For Riemannian manifolds, the Helmholtz-Hodge decomposition was derived using differential geometry and tensor calculus.
The decomposition has become an important tool for many problems in theoretical physics, and has also found applications in animation and computer vision, as well as in robotics.
Three-dimensional space
Many physics textbooks restrict the Helmholtz decomposition to the three-dimensional space and limit its application to vector fields that decay sufficiently fast at infinity or to bump functions that are defined on a bounded domain. Then, a vector potential can be defined, such that the rotation field is given by , using the curl of a vector field.
Let be a vector field on a bounded domain , which is twice continuously differentiable inside , and let be the surface that encloses the domain with outward surface normal . Then can be decomposed into a curl-free component and a divergence-free component as follows:
where
and is the nabla operator with respect to , not .
If and is therefore unbounded, and vanishes faster than as , then one has
This holds in particular if is twice continuously differentiable in and of bounded support.
Derivation
Solution space
If is a Helmholtz decomposition of , then
is another decomposition if, and only if,
and
where
is a harmonic scalar field,
is a vector field which fulfills
is a scalar field.
Proof:
Set and . According to the definition
of the Helmholtz decomposition, the condition is equivalent to
.
Taking the divergence of each member of this equation yields
, hence is harmonic.
Conversely, given any harmonic function ,
is solenoidal since
Thus, according to the above section, there exists a vector field such that
.
If is another such vector field,
then
fulfills , hence
for some scalar field .
Fields with prescribed divergence and curl
The term "Helmholtz theorem" can also refer to the following. Let be a solenoidal vector field and d a scalar field on which are sufficiently smooth and which vanish faster than at infinity. Then there exists a vector field such that
if additionally the vector field vanishes as , then is unique.
In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance in electrostatics, since Maxwell's equations for the electric and magnetic fields in the static case are of exactly this type. The proof is by a construction generalizing the one given above: we set
where represents the Newtonian potential operator. (When acting on a vector field, such as , it is defined to act on each component.)
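One standard form of this construction (symbols chosen here for illustration: d the prescribed divergence, C the prescribed solenoidal curl, and 𝒢 the Newtonian potential operator) is:

```latex
\mathbf{F} \;=\; -\nabla\,\mathcal{G}(d) \;+\; \nabla\times\mathcal{G}(\mathbf{C}),
\qquad
\mathcal{G}(f)(\mathbf{r}) \;=\; \int_{\mathbb{R}^{3}} \frac{f(\mathbf{r}')}{4\pi \,\lvert \mathbf{r}-\mathbf{r}' \rvert}\, \mathrm{d}^{3}r' .
```

Using ∇²𝒢(f) = −f and ∇·𝒢(C) = 𝒢(∇·C) = 0, one checks that ∇·F = −∇²𝒢(d) = d and ∇×F = ∇(∇·𝒢(C)) − ∇²𝒢(C) = C, as required.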
Weak formulation
The Helmholtz decomposition can be generalized by reducing the regularity assumptions (the need for the existence of strong derivatives). Suppose is a bounded, simply-connected, Lipschitz domain. Every square-integrable vector field has an orthogonal decomposition:
where is in the Sobolev space of square-integrable functions on whose partial derivatives defined in the distribution sense are square integrable, and , the Sobolev space of vector fields consisting of square integrable vector fields with square integrable curl.
For a slightly smoother vector field , a similar decomposition holds:
where .
Derivation from the Fourier transform
Note that in the theorem stated here, we have imposed the condition that if is not defined on a bounded domain, then shall decay faster than . Thus, the Fourier transform of , denoted as , is guaranteed to exist. We apply the convention
The Fourier transform of a scalar field is a scalar field, and the Fourier transform of a vector field is a vector field of same dimension.
Now consider the following scalar and vector fields:
Hence
Longitudinal and transverse fields
A terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component. This terminology comes from the following construction: Compute the three-dimensional Fourier transform of the vector field . Then decompose this field, at each point k, into two components, one of which points longitudinally, i.e. parallel to k, the other of which points in the transverse direction, i.e. perpendicular to k. So far, we have
Now we apply an inverse Fourier transform to each of these components. Using properties of Fourier transforms, we derive:
Since and ,
we can get
so this is indeed the Helmholtz decomposition.
Generalization to higher dimensions
Matrix approach
The generalization to dimensions cannot be done with a vector potential, since the rotation operator and the cross product are defined (as vectors) only in three dimensions.
Let be a vector field on a bounded domain which decays faster than for and .
The scalar potential is defined similar to the three dimensional case as:
where as the integration kernel is again the fundamental solution of Laplace's equation, but in d-dimensional space:
with the volume of the d-dimensional unit balls and the gamma function.
For , is just equal to , yielding the same prefactor as above.
The rotational potential is an antisymmetric matrix with the elements:
Entries above the diagonal occur again mirrored across the diagonal, but with a negative sign.
In the three-dimensional case, the matrix elements just correspond to the components of the vector potential .
However, such a matrix potential can be written as a vector only in the three-dimensional case, because is valid only for .
As in the three-dimensional case, the gradient field is defined as
The rotational field, on the other hand, is defined in the general case as the row divergence of the matrix:
In three-dimensional space, this is equivalent to the rotation of the vector potential.
Tensor approach
In a -dimensional vector space with , can be replaced by the appropriate Green's function for the Laplacian, defined by
where Einstein summation convention is used for the index . For example, in 2D.
Following the same steps as above, we can write
where is the Kronecker delta (and the summation convention is again used). In place of the definition of the vector Laplacian used above, we now make use of an identity for the Levi-Civita symbol ,
which is valid in dimensions, where is a -component multi-index. This gives
We can therefore write
where
Note that the vector potential is replaced by a rank- tensor in dimensions.
Because is a function of only , one can replace , giving
Integration by parts can then be used to give
where is the boundary of . These expressions are analogous to those given above for three-dimensional space.
For a further generalization to manifolds, see the discussion of Hodge decomposition below.
Differential forms
The Hodge decomposition is closely related to the Helmholtz decomposition, generalizing from vector fields on R3 to differential forms on a Riemannian manifold M. Most formulations of the Hodge decomposition require M to be compact. Since this is not true of R3, the Hodge decomposition theorem is not strictly a generalization of the Helmholtz theorem. However, the compactness restriction in the usual formulation of the Hodge decomposition can be replaced by suitable decay assumptions at infinity on the differential forms involved, giving a proper generalization of the Helmholtz theorem.
Extensions to fields not decaying at infinity
Most textbooks only deal with vector fields decaying faster than with at infinity. However, Otto Blumenthal showed in 1905 that an adapted integration kernel can be used to integrate fields decaying faster than with , which is substantially less strict.
To achieve this, the kernel in the convolution integrals has to be replaced by .
With even more complex integration kernels, solutions can be found even for divergent functions that need not grow faster than polynomial.
For all analytic vector fields that need not go to zero even at infinity, methods based on partial integration and the Cauchy formula for repeated integration can be used to compute closed-form solutions of the rotation and scalar potentials, as in the case of multivariate polynomial, sine, cosine, and exponential functions.
Uniqueness of the solution
In general, the Helmholtz decomposition is not uniquely defined.
A harmonic function is a function λ that satisfies Laplace's equation, ∇²λ = 0.
By adding a harmonic function λ to the scalar potential Φ, a different Helmholtz decomposition can be obtained:
For vector fields decaying at infinity, it is a plausible choice that the scalar and rotation potentials also decay at infinity.
Because λ = 0 is the only harmonic function with this property, which follows from Liouville's theorem, this guarantees the uniqueness of the gradient and rotation fields.
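Written out explicitly (symbols chosen here for illustration, with the sign convention F = −∇Φ + R), the non-uniqueness amounts to shifting the gradient of a harmonic function λ between the two parts:

```latex
\mathbf{F} \;=\; -\nabla\Phi + \mathbf{R}
\;=\; -\nabla(\Phi + \lambda) + \bigl(\mathbf{R} + \nabla\lambda\bigr),
\qquad \nabla^{2}\lambda = 0 .
```

The shifted term ∇λ is curl-free by construction and divergence-free because λ is harmonic, so both decompositions satisfy the definition.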
This uniqueness does not apply to the potentials: In the three-dimensional case, the scalar and vector potential jointly have four components, whereas the vector field has only three. The vector field is invariant to gauge transformations and the choice of appropriate potentials known as gauge fixing is the subject of gauge theory. Important examples from physics are the Lorenz gauge condition and the Coulomb gauge. An alternative is to use the poloidal–toroidal decomposition.
Applications
Electrodynamics
The Helmholtz theorem is of particular interest in electrodynamics, since it can be used to write Maxwell's equations in terms of potentials and solve them more easily. The Helmholtz decomposition can be used to prove that, given the electric current density and charge density, the electric field and the magnetic flux density can be determined. They are unique if the densities vanish at infinity and one assumes the same for the potentials.
Fluid dynamics
In fluid dynamics, the Helmholtz projection plays an important role, especially for the solvability theory of the Navier-Stokes equations. If the Helmholtz projection is applied to the linearized incompressible Navier-Stokes equations, the Stokes equation is obtained. This depends only on the velocity of the particles in the flow, and no longer on the static pressure, allowing the equation to be reduced to one unknown. Both equations, the Stokes and the linearized Navier-Stokes equations, are equivalent. The operator is called the Stokes operator.
Dynamical systems theory
In the theory of dynamical systems, Helmholtz decomposition can be used to determine "quasipotentials" as well as to compute Lyapunov functions in some cases.
For some dynamical systems such as the Lorenz system (Edward N. Lorenz, 1963), a simplified model for atmospheric convection, a closed-form expression of the Helmholtz decomposition can be obtained:
The Helmholtz decomposition of , with the scalar potential is given as:
The quadratic scalar potential provides motion in the direction of the coordinate origin, which is responsible for the stable fixed point for some parameter range. For other parameters, the rotation field ensures that a strange attractor is created, causing the model to exhibit a butterfly effect.
Medical Imaging
In magnetic resonance elastography, a variant of MR imaging where mechanical waves are used to probe the viscoelasticity of organs, the Helmholtz decomposition is sometimes used to separate the measured displacement field into its shear component (divergence-free) and its compression component (curl-free). In this way, the complex shear modulus can be calculated without contributions from compression waves.
Computer animation and robotics
The Helmholtz decomposition is also used in the field of computer engineering. This includes robotics and image reconstruction, as well as computer animation, where the decomposition is used for realistic visualization of fluids or vector fields.
Lever tumbler lock

A lever tumbler lock is a type of lock that uses a set of levers to prevent the bolt from moving in the lock. In the simplest form of these, lifting the tumbler above a certain height will allow the bolt to slide past.
The number of levers may vary, but is usually an odd number for a lock that can be opened from each side of the door, in order to provide symmetry. A minimum number of levers may be specified to provide an anticipated level of security.
History
"Double acting" lever tumbler locks were invented in 1778 by Robert Barron of England. These required the lever to be lifted to a certain height by having a slot cut in the lever, so lifting the lever too far was as bad as not lifting the lever far enough. This type of lock is still used today, on doors in Europe, Africa, South America and some other parts of the world.
Design
The lock is made up of levers (usually made of non-ferrous metals). Each lever needs to be lifted to a specific height by the key in order for the locking bolt to move. Typically, the belly of the lever is cut away to various depths, or the gate is cut in a different location, to provide differs (distinct key combinations). A lever will have pockets (or gates) through which the bolt stump (also called the post or fence) moves during unlocking.
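As a toy model only (no real lock's dimensions are implied; the function and values are my own illustration), the double-acting gating logic can be expressed in a few lines: the bolt moves only when every lever is lifted to exactly its gate height, so over-lifting blocks the bolt just as under-lifting does.

```python
def bolt_can_move(gate_heights, key_lifts):
    """Double-acting levers: each lever must be lifted to exactly its
    gate height; lifting a lever too far blocks the bolt just like
    not lifting it far enough."""
    return len(gate_heights) == len(key_lifts) and all(
        lift == gate for gate, lift in zip(gate_heights, key_lifts)
    )
```

For example, a key whose bitting lifts levers with gates at heights 3, 1, 4 by exactly 3, 1, 4 frees the bolt, while a key that lifts the second lever to 2 (or a key with the wrong number of cuts) does not.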
There has not always been universal agreement about which variants of the basic design merit the terms "lever lock" or "detainer lock" or both. Some authors use the term "detainer lock" to refer specifically to variants where the gates are "open" (i.e. at the edge of the lever), rather than "closed" (i.e. entirely surrounded by the lever).
Lever locks generally use a bitted key. Some locks used on safes use a double-bitted key, as do some door locks of a type often used in Southern and Eastern Europe.
Three-lever locks
A three-lever lock is a common type of lever lock, but it is generally used for low security applications such as internal doors, as its tolerances are much looser (there are fewer key combinations available, so a given key is more likely to open locks it shouldn't).
Five-lever locks
A five-lever lock is often required for home insurance and often recommended by the police for home security. There are various grades but the current British Standard (BS3621:2007) is usually required for insurance purposes. Locksmith Valerie Olifent states that, "The doors of many historic churches still carry an old wooden lock although often you find that a modern 5-lever mortice lock has been installed alongside it to meet insurance requirements." BS3621:2007 requires a bolt throw of 20 mm rather than the 14 mm of the earlier British Standard.
Most BS3621 locks have anti-pick devices built in to reduce the chance of lock picking, along with hardened bolts and anti-drill plates to reduce risk of physical attack.
Vulnerabilities
Lever tumbler locks can be picked with a tool called a curtain pick which is inserted into the keyway of the lock, and a force is applied to the locking bolt. The pick is then used to lift each lever inside the lock to the correct height so that the locking bolt can pass.
Higher security lever locks (such as the five-lever) usually have notches cut into the levers. These catch the locking bolt and prevent it from moving if picking is attempted (similar to the security pins in a pin tumbler lock).
The Chubb detector lock is a variation of the lever lock which was designed to detect and prevent picking attempts.
Lever locks can be drilled, but usually a template or stencil is required to mark the drilling point, as the lock mechanism is commonly mortised into the door and so it is harder to determine the point at which to drill.
European dark bee
Apis mellifera mellifera (commonly known as the European dark bee) is a subspecies of the western honey bee. It evolved in central Asia, with a proposed origin in the Tien Shan Mountains, and migrated into eastern and then northern Europe after the last ice age, from 9,000 BC onwards. Its original range included the southern Urals in Russia and stretched through northern Europe down to the Pyrenees. It is one of the two members of the 'M' lineage of Apis mellifera, the other being found in western China. Traditionally these bees were called the black German bee, although they are now considered endangered in Germany. Today they are more likely to be named after the region in which they live, such as the British black bee, the native Irish honey bee, the Cornish black bee and the Nordic brown bee, even though they are all the same subspecies; the word "native" is often inserted by local beekeepers, even in places where the bee is an introduced foreign species. It was domesticated in Europe, and hives were brought to North America in the colonial era in 1622, where Native Americans referred to it as the "English fly".
Appearance
The A. m. mellifera can be broadly distinguished from other subspecies by its stocky body, abundant thoracic and sparse abdominal hair (which is brown), and overall dark coloration. When viewed from a distance, these bees appear blackish or rich dark brown. They are large for honey bees, though they have unusually short tongues. Their common name (dark or black bee) is derived from their brown-black color, with only a few lighter yellow spots on the abdomen. On a pigmentation rating from 0 (completely dark) to 9 (completely bright yellow), A. m. mellifera scores 2.1; for comparison, A. m. carnica scores 1.3 and A. m. ligustica scores 7.8. In 2019, research concluded that honey bees in Ireland that were completely dark contained less A. m. mellifera DNA than bees with yellow to orange spots on their abdomens, and that bees with pigmentation on their first and second tergites (segments of their abdomens) contained a comparable amount of A. m. mellifera DNA to the completely dark bees. The authors speculated that the completely dark bees had obtained their darker pigmentation from A. m. carnica DNA.
Friedrich Ruttner worked closely with senior members of the BIBBA (Bee Improvement & Bee Breeders Association) in Britain to use wing vein measurements (wing morphometry) to achieve "racial purity" in the breeding of their bees, culminating in the publication of their book The Dark European Honeybee. However, the results of this process depend on the exact measuring methods employed.
Character
A. m. mellifera is descended from the 'M' lineage of Apis mellifera, all of whose bees are, to a greater or lesser degree, more aggressive than those of the 'C' lineage.
A. m. mellifera hybrids have an even greater reputation for aggression amongst beekeepers, which can increase in subsequent generations if left unchecked, although this characteristic can be overcome with continual selective breeding over several generations. They are nervous and aggressive to the extent that routine inspections take longer, decreasing the enjoyment of managing their colonies. This characteristic has traditionally been associated with A. m. mellifera, going back to the now-extinct old British black bee before the early 1900s. To quote Brother Adam, the only beekeeper with first-hand experience who committed his findings to paper:
"The native (Old British Black) bee had undoubtedly many extremely valuable characteristics, but equally so a great many serious defects and drawbacks. She was very bad tempered and very susceptible to brood diseases and would in any case not have been able to produce the crops (of honey) we have secured since her demise".
In 2014–2017 a Europe-wide survey of 621 colonies, covering the various subspecies kept by beekeepers, found that A. m. mellifera was the most aggressive, had the highest swarming tendency, and had the lowest hygienic behaviour, a trait closely linked with Varroa sensitive hygiene.
Characteristics
higher levels of aggression
increased tendency to swarm
lower resistance to varroa mites due to poorer hygienic behaviour (VSH)
prone to inbreeding due to their habit of apiary vicinity mating, resulting in increased aggression
susceptibility to acarine mites due to their larger tracheas
difficulty entering smaller flowers due to their larger size
difficulty collecting nectar from longer flowers due to their shorter tongues
poorer pollinators of fruit trees and bushes
more prone to balling the queen, resulting in her death
susceptible to brood diseases
a greater likelihood of supersedure than in other bees
non-prolific, with the population building up later in the year, unable to take full advantage of an early spring nectar flow
A. m. mellifera queens do not hybridize with non-A. m. mellifera drones
Non-hybridization
In 2013, research carried out in Poland confirmed anecdotal evidence that A. m. mellifera virgin queens do not readily mate with non-A. m. mellifera drones: "The progeny of AMM queens was fathered almost exclusively by AMM drones. On the other hand, progeny of AMC queens was fathered by drones of both subspecies". Further research conducted in western Ireland on the Beara Peninsula (as part of genetic research carried out throughout the island in 2017) confirmed the 2013 Polish research, in that A. m. mellifera virgin queens were not mating with either A. m. carnica or Buckfast drones, nor their hybrids. Several conjectures were presented to explain this characteristic of A. m. mellifera, but no conclusion was reached.
Significance
The A. m. mellifera had become established from the Urals to northwestern Europe by the 1800s, until the introduction of other bee subspecies considered more suited to modern beekeeping, such as A. m. carnica or the Buckfast bee, a breed whose ancestry originally included the remnants of the old British black bee (a strain or phenotype of A. m. mellifera), which became extinct due to the Isle of Wight disease.
In the United States, research based on DNA sequencing analysis found DNA from the 'M' lineage of honey bees in the feral population of Arkansas, Louisiana, Mississippi, Oklahoma, and Missouri, believed in part to be the DNA from imported bees of over 100 years ago (DNA from the other bee lineages was also found in these feral populations, suggesting that they likely came from escaped swarms from apiaries at multiple unknown times in the past).
Promotion and conservation areas
Dedicated organizations have been attempting to establish exclusive conservation areas for A. m. mellifera. Breeding groups have been set up to "establish racial purity" of "native strains", and others run courses training beekeepers to calculate the "racial purity" of their bees through wing morphometry. Other organizations are attempting to establish that the A. m. mellifera in their local geographic region is a distinct "variety" of the subspecies, some even claiming it is a separate subspecies, but to date there is no published research to support this. Through morphometry and DNA analysis, local geographic strains may nevertheless be identifiable, albeit not consistently across the geographic population, in which the strain's characteristics show less morphometric variation and therefore less environmental adaptability. One group has even started a "project to develop their own native breed of bee". Many promoters of A. m. mellifera claim that the subspecies is endangered and under threat from imports, even though DNA analysis has shown that the amount of non-A. m. mellifera DNA within local populations remains relatively low: an Irish survey found that 97.8% of sampled bees were pure A. m. mellifera, and a further study across eight northwest European countries found that their A. m. mellifera populations were genetically pure.
Nazi Germany
In 1937 the Third Reich implemented nativist policies to protect and promote the A. m. mellifera, as an extension of its ideology of "Blood and Soil" (Blut und Boden, a Nazi slogan expressing a racially defined group pertaining to a geographic area), banning imports of honey bees (Apis mellifera) and regulating bee breeding, so that only registered breeders at designated locations were permitted to rear queens to supply German beekeepers. A limited dispensation was made for a minority of A. m. carnica beekeepers in southern Germany, constituting only 13% overall, but after the annexation of Austria in 1938 the proportion of A. m. carnica breeders increased to 31%. In 1939 actions were taken to reduce the numbers of A. m. carnica being bred in Germany by approximately 95%, so that the native German dark bee was promoted above all. Beekeeping literature at the time used the racial ideological vocabulary of the National Socialists in concentrated form, such as: "What is not race is chaff!", "Foreign drones are to be exterminated", and "But what use is it if one day a Jewish bastard (a German with Jewish ancestry) is a genius, but our ethnic purity is destroyed in the process (through inter-marriage). It is no different with beekeeping; what use is the importation of foreign breeds (sub-species)... if our (native) German bee is lost in the process (through inter-breeding)".
However, starting in the winter of 1940 and continuing to 1942, beekeeping throughout Germany was devastated by huge colony deaths, later identified by Karl von Frisch, through his work with the Nosema Council, as a virulent strain of Nosema apis. Ironically, it was this epidemic that saved von Frisch from the Nazis' antisemitic policies, as his maternal grandmother was Jewish, making him "25% Jewish" ("75% German").
As a result, restrictions against the breeding of A. m. carnica were lifted and German beekeepers began to re-stock with more disease-resistant Austrian A. m. carnica bees. After the war, all National Socialist rhetoric was abandoned and bee breeding focused purely on performance and character. The German beekeeping associations then decided to keep only the A. m. carnica bee due to its superior characteristics; as a result, the old German dark bee (A. m. mellifera) is now considered an endangered subspecies in Germany.
Isle of Man
In 1988, the Importation of Bees Order made it illegal to import bees or used bee equipment into the Isle of Man. Originally this was done to prevent the Varroa mite from arriving on the island; in 2015 the EU "declared the Isle of Man officially free of the bee pest Varroa". Also in 2015, the Isle of Man Beekeepers' Federation launched the Manx Bee Improvement Group to promote what it calls the "Manx Dark Honey Bee (Apis mellifera mellifera)". The group works closely with the BIBBA with the stated goal of eliminating "foreign strains" from the island through regular inspections of hives. Beekeepers on the Isle of Man are now compelled to register their bees in line with the Bee Diseases and Pest Control (Isle of Man) Order 2008; they must inform the Department of Environment, Food and Agriculture of any movement of bees or bee equipment and the creation of new hives, and failure to register or comply risks prosecution and "a fine not exceeding £5,000".
Isle of Læsø
In 1993 a conservation area for A. m. mellifera was established on the island of Læsø in Denmark, where it became illegal to keep or import any bee other than Apis mellifera mellifera. This was met with protests and an eight-year legal battle from keepers of A. m. ligustica, A. m. carnica and Buckfast bees, who did not "want to become a custodian of poor bees" and stated that A. m. mellifera was "unproductive" and "not worthy of protection". They lost their case in 2001, and negotiations between A. m. mellifera beekeepers and non-A. m. mellifera beekeepers concluded in 2004, splitting the island in two between them and ending a "history of sabotage of bees" on the island. The A. m. mellifera supporters claimed that they had "introduced apartheid on Læsø for the bees".
A 2014 Europe-wide survey, which covered 621 colonies, found that the A. m. mellifera from Læsø had the lowest hygienic behaviour of all bees tested (a trait closely linked with Varroa sensitive hygiene), which would make them more susceptible to varroa mites.
Islands of Colonsay and Oronsay
In 2013 the Scottish Government introduced the Bee Keeping (Colonsay and Oronsay) Order, making it an offence to keep any honeybee (Apis mellifera) on either island other than the subspecies Apis mellifera mellifera. The Environment and Climate Change Minister said at the time: "The Bee Keeping Order illustrates how our non-native species legislation can be used to protect our native wildlife. The order is a targeted measure to protect an important population of black bees on Colonsay from hybridisation". (The non-native species legislation was used because Apis mellifera is considered non-native to Colonsay, but native to Scotland, as it was the first honey bee introduced there for use in beekeeping.) The islands are home to fifty to sixty beehives (a minimum of fifty colonies of unrelated bees is required to prevent inbreeding), now referred to as the "Colonsay dark native bee", even though the bees were collected from across Scotland over the previous thirty years and genetic analysis has shown introgression from Australian and New Zealand A. m. ligustica. In 2018 the Galtee Bee Breeding Group (GBBG), based in County Tipperary in Ireland, claimed to have "sent bees to Colonsay"; earlier DNA evidence had confirmed a genetic link between the two populations.
In the media
In the documentary More than Honey, the bee kept and bred by Swiss-German beekeeper Fred Jaggi is A. m. mellifera, referred to as the "local black breed". He strives to maintain "racially pure" bees, lamenting when he discovers yellow coloration in the colony of one of his queens, meaning that she has mated with a drone of a different subspecies and produced "little half-breeds"; she is subsequently killed. Later in the documentary his pure bees succumb to a brood disease and have to be gassed and then burned. Jaggi abandons the local black bees and the goal of racial purity, choosing A. m. carnica bees instead, with an apiary that includes hybrids to enhance genetic diversity, which are found to be "more disease resistant".
In 2012 a story began to circulate online and in some British newspapers, in which Dorian Pritchard, the Conservation officer for the BIBBA and President of SICAMM (International Association for the Protection of the European Dark Bee), was interviewed and quoted, saying that the Old British Black Bee (an extinct strain of A. m. mellifera) was not extinct and had been discovered in the rafters of a church in Northumberland. There were numerous inaccuracies in the story, including:
(1) The Old "British Black" bee was "wiped out by a strain of Spanish flu in 1919":
The Spanish flu affected only humans; it was the Isle of Wight disease, between 1904 and 1945, that was believed to have wiped out the original old British (and Irish) black bees of the British Isles.
(2) "The Spanish flu which wiped out ... every single bee in the UK":
No beekeepers at the time made this claim; what was claimed was that the indigenous Apis mellifera mellifera of the British Isles was wiped out. Hybrids with non-Apis mellifera mellifera bees often survived, notably A. m. ligustica and later the Buckfast bee bred by Brother Adam of Buckfast Abbey; continental A. m. mellifera, imported in subsequent years to repopulate the country, also showed stronger resistance to the Isle of Wight disease.
(3) "The British Black bee is different from other bees ... ideally suited to the British climate ... more so than the European Black bee":
This suggests that the "British black bee" found in the church is a different subspecies from the "European black bee" (A. m. mellifera); in fact they are the same subspecies, as acknowledged by Philip Denwood (a member of the BBKA and the BIBBA), writing in SICAMM's magazine mellifera.ch in 2014: "... in the last decade DNA studies ... have conclusively shown that modern specimens of Dark Bees from the UK and Ireland fit into the genetic specification of Apis mellifera mellifera (the European dark / black bee)".
Breeding for Varroa resistance
Varroa sensitive hygiene (VSH)
In 2010, it was announced at the VIth COLOSS Conference that a project using the British native honey bee Apis mellifera mellifera was to be set up to breed for Varroa sensitive hygiene (VSH). In April 2016, the Laboratory of Apiculture and Social Insects at the University of Sussex (LASI) began blogging about the project, stating, "we have established LASI Queen Bees to supply our hygienic bees to UK beekeepers", and supplying "several hundred queens to British beekeepers". By May 2017, many of the apiaries had a standstill order imposed on them by bee inspectors of the National Bee Unit to prevent the spread of European foulbrood (EFB) from infected colonies, a disease associated with a low nurse-bee-to-brood ratio and the resulting lower hygiene levels within the hive. The LASI Queen Bees breeding project "using the British native honey bee" has not been revived.
Grooming behavior
In 2016 Dorian Pritchard, a prominent member of the BIBBA and SICAMM, published an article in The Journal of Apicultural Research, entitled "Grooming by honey bees as a component of varroa resistant behavior", in which he reviewed much of the existing research into the "assumed links" between the grooming behavior of honey bees and varroa resistance stating "one of the most effective recognized means of defense is body grooming", even though varroa mite resistance had already been achieved in 2008 through the breeding of bees with VSH.
In promoting A. m. mellifera for breeding of the grooming behavior, the paper states that "Anecdotal reports suggest that the high level of resistance of some British near-native A. m. mellifera strains may be due to grooming, but no detailed reports have yet been published".
Pritchard goes on to promote A. m. mellifera by citing research by Bak & Wilde (2016) into the grooming behavior and Pritchard states "that A. m. mellifera of the Augustowska line were outstandingly the most reactive to the presence of a mite placed on their bodies, 98% of bees reacting to shed the mite"; the Bak and Wilde research paper stated "as many as 98% of worker bees in this group (A. m. mellifera) made an attempt to remove mites", while for "Carniolan (A. m. carnica) bees" it was 89.3% and for "Caucasian (A. m. caucasia) bees" it was 86%. However, only 8.2% of the A. m. mellifera were successful in removing mites, for the A. m. caucasia it was 10.9%, and for the A. m. carnica it was near 3.5%. It was noted that "no mites were actually damaged in the laboratory experiments" and that "about 80% of mites removed remounted their hosts and remarkably, no physical damage was visible on any mites, even after bees had been seen vigorously shaking and even chewing them".
However, research into "hygienic behaviour" (VSH) previously published by Siuda et al. (2007) had concluded that the "Bees of A. m. mellifera (also the Augustowska line) demonstrated the strongest ability for cleaning comb cells from dead capped brood, however many of their behavioural characters did not promote the management of modern apiaries. The better solution would be rather the selection of lines with hygienic behaviour on the basis of Carniolan or Caucasian bees".
A subsequent paper published by Kruitwagen et al. (2017) concluded that the grooming behavior itself did not lead to Varroa resistance and on average led to higher mite levels.
Breeding for grooming behavior with the aim of achieving Varroa resistance is still promoted by A. m. mellifera organisations.
Medicinal plants
(Image: Swertia perennis, found in high mountain places of Nepal.)
Medicinal plants, also called medicinal herbs, have been discovered and used in traditional medicine practices since prehistoric times. Plants synthesize hundreds of chemical compounds for various functions, including defense against insects, fungi, diseases, parasites, and herbivorous mammals.
The earliest historical records of herbs are found from the Sumerian civilization, where hundreds of medicinal plants, including opium, are listed on clay tablets. The Ebers Papyrus from ancient Egypt describes over 850 plant medicines. The Greek physician Dioscorides, who worked in the Roman army, documented over 1000 recipes for medicines using over 600 medicinal plants; this formed the basis of pharmacopoeias for some 1500 years. Drug research sometimes makes use of ethnobotany to search for pharmacologically active substances, and this approach has yielded hundreds of useful compounds, including the common drugs aspirin, digoxin, quinine, and opium. The compounds found in plants are diverse, with most in four biochemical classes: alkaloids, glycosides, polyphenols, and terpenes. Few of these are scientifically confirmed as medicines or used in conventional medicine.
Medicinal plants are widely used as folk medicine in non-industrialized societies, mainly because they are readily available and cheaper than modern medicines. The annual global export value of the thousands of types of plants with medicinal properties was estimated to be US$60 billion per year and growing at the rate of 6% per annum. In many countries, there is little regulation of traditional medicine, but the World Health Organization coordinates a network to encourage safe and rational use. The botanical herbal market has been criticized for being poorly regulated and containing placebo and pseudoscience products with no scientific research to support their medical claims. Medicinal plants face both general threats, such as climate change and habitat destruction, and the specific threat of over-collection to meet market demand.
History
Prehistoric times
Plants, including many now used as culinary herbs and spices, have been used as medicines, not necessarily effectively, since prehistoric times. Spices have been used partly to counter food-spoilage bacteria, especially in hot climates and especially in meat dishes, which spoil more readily. Angiosperms (flowering plants) were the original source of most plant medicines. Human settlements are often surrounded by weeds used as herbal medicines, such as nettle, dandelion and chickweed. Humans are not alone in using herbs: some animals, such as non-human primates, monarch butterflies and sheep, ingest plants when they are ill.
Samples from prehistoric burial sites indicate that Paleolithic peoples consumed plants. For instance, a 60,000-year-old Neanderthal burial site, "Shanidar IV", in northern Iraq yielded pollen from eight plant species. At Taforalt cave, Morocco, 15,000-year-old remains of ephedra were found inside a tomb, indicating its possible role in funeral rites. A mushroom found in the personal effects of Ötzi the Iceman, whose body was frozen in the Ötztal Alps for more than 5,000 years, may have been used against whipworm.
Ancient times
In ancient Sumeria, hundreds of medicinal plants including myrrh and opium are listed on clay tablets from around 3000 BC. The ancient Egyptian Ebers Papyrus lists over 800 plant medicines such as aloe, cannabis, castor bean, garlic, juniper, and mandrake.
In antiquity, various cultures across Europe, including the Romans, Celts, and Nordic peoples, also practiced herbal medicine as a significant component of their healing traditions.
The Romans had a rich tradition of herbal medicine, drawing upon knowledge inherited from the Greeks and expanding upon it. Notable works include those of Pedanius Dioscorides, whose "De Materia Medica" served as a comprehensive guide to medicinal plants and remained influential for centuries. Additionally, Pliny the Elder's "Naturalis Historia" contains valuable insights into Roman medicinal plant practices.
Among the Celtic peoples of ancient Europe, herbalism played a vital role in both medicine and spirituality. Druids, the religious leaders of the Celts, were reputed to possess deep knowledge of plants and their medicinal properties. Although written records are scarce, archaeological evidence, such as the discovery of medicinal plants at Celtic sites, provides insight into their herbal practices.
In the Nordic regions, including Scandinavia and parts of Germany, herbal medicine was also prevalent in ancient times. The Norse sagas and Eddic poetry often mention the use of herbs for healing purposes. Additionally, archaeological findings, such as the remains of medicinal plants in Viking-age graves, attest to the importance of herbal remedies in Nordic culture.
From ancient times to the present, Ayurvedic medicine as documented in the Atharva Veda, the Rig Veda and the Sushruta Samhita has used hundreds of herbs and spices, such as turmeric, which contains curcumin. The Chinese pharmacopoeia, the Shennong Ben Cao Jing, records plant medicines such as chaulmoogra for leprosy, ephedra, and hemp; this was expanded in the Tang dynasty Yaoxing Lun. In the fourth century BC, Aristotle's pupil Theophrastus wrote the first systematic botany text, Historia plantarum. In around 60 AD, the Greek physician Pedanius Dioscorides, working for the Roman army, documented over 1000 recipes for medicines using over 600 medicinal plants. The book remained the authoritative reference on herbalism for over 1500 years, into the seventeenth century.
Middle Ages
During the Middle Ages, herbalism continued to flourish across Europe, with distinct traditions emerging in various regions, often influenced by cultural, religious, indigenous, and geographical factors.
In the Early Middle Ages, Benedictine monasteries preserved medical knowledge in Europe, translating and copying classical texts and maintaining herb gardens. Hildegard of Bingen wrote Causae et Curae ("Causes and Cures") on medicine.
In France, herbalism thrived alongside the practice of medieval medicine, which combined elements of Ancient Greek and Roman traditions. Catholic monastic orders played a significant role in preserving and expanding herbal knowledge. Manuscripts like the "Tractatus de Herbis" from the 15th century depict French herbal remedies and their uses. Monasteries and convents served as centers of learning, where monks and nuns cultivated medicinal gardens. Likewise, in Italy, herbalism flourished with contributions from Italian physicians like Matthaeus Platearius, who compiled herbal manuscripts, such as the "Circa Instans," which served as practical guides for herbal remedies.
In the Iberian Peninsula, the regions of the north remained independent during the period of Islamic occupation and retained their traditional and indigenous medical practices. Galicia and Asturias possessed a rich herbal heritage shaped by Celtic and Roman influences. The Galician people were known for their strong connection to the land and nature and preserved botanical knowledge through healers, known as "curandeiros" or "meigas," who relied on local plants for healing purposes. The Asturian landscape, characterized by lush forests and mountainous terrain, provided a rich source of medicinal herbs used in traditional healing practices by "yerbatos," who possessed extensive knowledge of local plants and their medicinal properties. Barcelona, located in the Catalonia region of northeastern Spain, was a hub of cultural exchange during the Middle Ages, fostering the preservation and dissemination of medical knowledge. Catalan herbalists, known as "herbolarios," compiled manuscripts detailing the properties and uses of medicinal plants found in the region. The University of Barcelona, founded in 1450, played a pivotal role in advancing herbal medicine through its botanical gardens and academic pursuits.
In Scotland and England, herbalism was deeply rooted in folk traditions and influenced by Celtic, Anglo-Saxon, and Norse practices. Herbal knowledge was passed down through generations, often by wise women known as "cunning folk." The "Physicians of Myddfai," a Welsh herbal manuscript from the 13th century, reflects the blending of Celtic and Christian beliefs in herbal medicine.
In the Islamic Golden Age, scholars translated many classical Greek texts including Dioscorides into Arabic, adding their own commentaries.
Herbalism flourished in the Islamic world, particularly in Baghdad and in Al-Andalus. Among many works on medicinal plants, Abulcasis (936–1013) of Cordoba wrote The Book of Simples, and Ibn al-Baitar (1197–1248) recorded hundreds of medicinal herbs such as Aconitum, nux vomica, and tamarind in his Corpus of Simples. Avicenna included many plants in his 1025 The Canon of Medicine. Abu-Rayhan Biruni, Ibn Zuhr, Peter of Spain, and John of St Amand wrote further pharmacopoeias.
Early Modern
The Early Modern period saw the flourishing of illustrated herbals across Europe, starting with the 1526 Grete Herball. John Gerard wrote his famous The Herball or General History of Plants in 1597, based on Rembert Dodoens, and Nicholas Culpeper published his The English Physician Enlarged.
Many new plant medicines arrived in Europe as products of Early Modern exploration and the resulting Columbian Exchange, in which livestock, crops and technologies were transferred between the Old World and the Americas in the 15th and 16th centuries. Medicinal herbs arriving in the Americas included garlic, ginger, and turmeric; coffee, tobacco and coca travelled in the other direction.
In Mexico, the sixteenth century Badianus Manuscript described medicinal plants available in Central America.
19th and 20th centuries
The place of plants in medicine was radically altered in the 19th century by the application of chemical analysis. Alkaloids were isolated from a succession of medicinal plants, starting with morphine from the poppy in 1806, and soon followed by ipecacuanha and strychnos in 1817, quinine from the cinchona tree, and then many others. As chemistry progressed, additional classes of potentially active substances were discovered in plants. Commercial extraction of purified alkaloids including morphine began at Merck in 1826. Synthesis of a substance first discovered in a medicinal plant began with salicylic acid in 1853. Around the end of the 19th century, the mood of pharmacy turned against medicinal plants, as enzymes often modified the active ingredients when whole plants were dried, and alkaloids and glycosides purified from plant material started to be preferred. Drug discovery from plants continued to be important through the 20th century and into the 21st, with important anti-cancer drugs from yew and Madagascar periwinkle.
Context
Medicinal plants are used with the intention of maintaining health, to be administered for a specific condition, or both, whether in modern medicine or in traditional medicine. The Food and Agriculture Organization estimated in 2002 that over 50,000 medicinal plants are used across the world. The Royal Botanic Gardens, Kew more conservatively estimated in 2016 that 17,810 plant species have a medicinal use, out of some 30,000 plants for which a use of any kind is documented.
In modern medicine, around a quarter of the drugs prescribed to patients are derived from medicinal plants, and they are rigorously tested. In other systems of medicine, medicinal plants may constitute the majority of what are often informal attempted treatments, not tested scientifically. The World Health Organization estimates, without reliable data, that some 80 percent of the world's population depends mainly on traditional medicine (including but not limited to plants); perhaps some two billion people are largely reliant on medicinal plants. The use of plant-based materials including herbal or natural health products with supposed health benefits, is increasing in developed countries. This brings attendant risks of toxicity and other effects on human health, despite the safe image of herbal remedies. Herbal medicines have been in use since long before modern medicine existed; there was and often still is little or no knowledge of the pharmacological basis of their actions, if any, or of their safety. The World Health Organization formulated a policy on traditional medicine in 1991, and since then has published guidelines for them, with a series of monographs on widely used herbal medicines.
Medicinal plants may provide three main kinds of benefit: health benefits to the people who consume them as medicines; financial benefits to people who harvest, process, and distribute them for sale; and society-wide benefits, such as job opportunities, taxation income, and a healthier labour force. However, development of plants or extracts having potential medicinal uses is blunted by weak scientific evidence, poor practices in the process of drug development, and insufficient financing.
Phytochemical basis
All plants produce chemical compounds which give them an evolutionary advantage, such as defending against herbivores or, in the example of salicylic acid, as a hormone in plant defenses. These phytochemicals have potential for use as drugs, and the content and known pharmacological activity of these substances in medicinal plants is the scientific basis for their use in modern medicine, if scientifically confirmed. For instance, daffodils (Narcissus) contain nine groups of alkaloids including galantamine, licensed for use against Alzheimer's disease. The alkaloids are bitter-tasting and toxic, and concentrated in the parts of the plant such as the stem most likely to be eaten by herbivores; they may also protect against parasites.
Modern knowledge of medicinal plants is being systematised in the Medicinal Plant Transcriptomics Database, which by 2011 provided a sequence reference for the transcriptome of some thirty species. Major classes of plant phytochemicals are described below, with examples of plants that contain them.
Alkaloids
Alkaloids are bitter-tasting chemicals, very widespread in nature, and often toxic, found in many medicinal plants. There are several classes with different modes of action as drugs, both recreational and pharmaceutical. Medicines of different classes include atropine, scopolamine, and hyoscyamine (all from nightshade), the traditional medicine berberine (from plants such as Berberis and Mahonia), caffeine (Coffea), cocaine (Coca), ephedrine (Ephedra), morphine (opium poppy), nicotine (tobacco), reserpine (Rauvolfia serpentina), quinidine and quinine (Cinchona), vincamine (Vinca minor), and vincristine (Catharanthus roseus).
Glycosides
Anthraquinone glycosides are found in medicinal plants such as rhubarb, cascara, and Alexandrian senna. Plant-based laxatives made from such plants include senna, rhubarb and Aloe.
The cardiac glycosides are powerful drugs from medicinal plants including foxglove and lily of the valley. They include digoxin and digitoxin which support the beating of the heart, and act as diuretics.
Polyphenols
Polyphenols of several classes are widespread in plants, having diverse roles in defenses against plant diseases and predators. They include hormone-mimicking phytoestrogens and astringent tannins. Plants containing phytoestrogens have been administered for centuries for gynecological disorders, such as fertility, menstrual, and menopausal problems. Among these plants are Pueraria mirifica, kudzu, angelica, fennel, and anise.
Many polyphenolic extracts, such as from grape seeds, olives or maritime pine bark, are sold as dietary supplements and cosmetics without proof or legal health claims for medicinal effects. In Ayurveda, the astringent rind of the pomegranate, containing polyphenols called punicalagins, is used as a medicine, with no scientific proof of efficacy.
Terpenes
Terpenes and terpenoids of many kinds are found in a variety of medicinal plants, and in resinous plants such as the conifers. They are strongly aromatic and serve to repel herbivores. Their scent makes them useful in essential oils, whether for perfumes such as rose and lavender, or for aromatherapy. Some have medicinal uses: for example, thymol is an antiseptic and was once used as a vermifuge (anti-worm medicine).
In practice
Cultivation
Medicinal plants demand intensive management. Different species each require their own distinct conditions of cultivation. The World Health Organization recommends the use of rotation to minimise problems with pests and plant diseases. Cultivation may be traditional or may make use of conservation agriculture practices to maintain organic matter in the soil and to conserve water, for example with no-till farming systems. In many medicinal and aromatic plants, plant characteristics vary widely with soil type and cropping strategy, so care is required to obtain satisfactory yields.
Preparation
Medicinal plants are often tough and fibrous, requiring some form of preparation to make them convenient to administer. According to the Institute for Traditional Medicine, common methods for the preparation of herbal medicines include decoction, powdering, and extraction with alcohol, in each case yielding a mixture of substances. Decoction involves crushing and then boiling the plant material in water to produce a liquid extract that can be taken orally or applied topically. Powdering involves drying the plant material and then crushing it to yield a powder that can be compressed into tablets. Alcohol extraction involves soaking the plant material in cold wine or distilled spirit to form a tincture.
Traditional poultices were made by boiling medicinal plants, wrapping them in a cloth, and applying the resulting parcel externally to the affected part of the body.
When modern medicine has identified a drug in a medicinal plant, commercial quantities of the drug may either be synthesised or extracted from plant material, yielding a pure chemical. Extraction can be practical when the compound in question is complex.
Usage
Plant medicines are in wide use around the world. In most of the developing world, especially in rural areas, local traditional medicine, including herbalism, is the only source of health care for people, while in the developed world, alternative medicine including use of dietary supplements is marketed aggressively using the claims of traditional medicine. As of 2015, most products made from medicinal plants had not been tested for their safety and efficacy, and products that were marketed in developed economies and provided in the undeveloped world by traditional healers were of uneven quality, sometimes containing dangerous contaminants. Traditional Chinese medicine makes use of a wide variety of plants, among other materials and techniques. Researchers from Kew Gardens found 104 species used for diabetes in Central America, of which seven had been identified in at least three separate studies. The Yanomami of the Brazilian Amazon, assisted by researchers, have described 101 plant species used for traditional medicines.
Drugs derived from plants including opiates, cocaine and cannabis have both medical and recreational uses. Different countries have at various times made such drugs illegal, partly on the basis of the risks involved in taking psychoactive drugs.
Effectiveness
Plant medicines have often not been tested systematically, but have come into use informally over the centuries. By 2007, clinical trials had demonstrated potentially useful activity in nearly 16% of herbal extracts; there was limited in vitro or in vivo evidence for roughly half the extracts; there was only phytochemical evidence for around 20%; 0.5% were allergenic or toxic; and some 12% had essentially never been studied scientifically. Cancer Research UK caution that there is no reliable evidence for the effectiveness of herbal remedies for cancer.
A 2012 phylogenetic study built a family tree down to genus level using 20,000 species to compare the medicinal plants of three regions, Nepal, New Zealand and the Cape of South Africa. It discovered that the species used traditionally to treat the same types of condition belonged to the same groups of plants in all three regions, giving a "strong phylogenetic signal". Since many plants that yield pharmaceutical drugs belong to just these groups, and the groups were independently used in three different world regions, the results were taken to mean 1) that these plant groups do have potential for medicinal efficacy, 2) that undefined pharmacological activity is associated with use in traditional medicine, and 3) that the use of phylogenetic groups for possible plant medicines in one region may predict their use in the other regions.
Regulation
The World Health Organization (WHO) has been coordinating a network called the International Regulatory Cooperation for Herbal Medicines to try to improve the quality of medical products made from medicinal plants and the claims made for them. In 2015, only around 20% of countries had well-functioning regulatory agencies, while 30% had none, and around half had limited regulatory capacity. In India, where Ayurveda has been practised for centuries, herbal remedies are the responsibility of a government department, AYUSH, under the Ministry of Health & Family Welfare.
WHO has set out a strategy for traditional medicines with four objectives: to integrate them as policy into national healthcare systems; to provide knowledge and guidance on their safety, efficacy, and quality; to increase their availability and affordability; and to promote their rational, therapeutically sound usage. WHO notes in the strategy that countries are experiencing seven challenges to such implementation, namely in developing and enforcing policy; in integration; in safety and quality, especially in assessment of products and qualification of practitioners; in controlling advertising; in research and development; in education and training; and in the sharing of information.
Drug discovery
The pharmaceutical industry has roots in the apothecary shops of Europe in the 1800s, where pharmacists provided local traditional medicines to customers, which included extracts like morphine, quinine, and strychnine. Therapeutically important drugs like camptothecin (from Camptotheca acuminata, used in traditional Chinese medicine) and taxol (from the Pacific yew, Taxus brevifolia) were derived from medicinal plants. The Vinca alkaloids vincristine and vinblastine, used as anti-cancer drugs, were discovered in the 1950s from the Madagascar periwinkle, Catharanthus roseus.
Hundreds of compounds have been identified using ethnobotany, investigating plants used by indigenous peoples for possible medical applications. Some important phytochemicals, including curcumin, epigallocatechin gallate, genistein and resveratrol are pan-assay interference compounds, meaning that in vitro studies of their activity often provide unreliable data. As a result, phytochemicals have frequently proven unsuitable as the lead substances in drug discovery. In the United States over the period 1999 to 2012, despite several hundred applications for new drug status, only two botanical drug candidates had sufficient evidence of medicinal value to be approved by the Food and Drug Administration.
The pharmaceutical industry has remained interested in mining traditional uses of medicinal plants in its drug discovery efforts. Of the 1073 small-molecule drugs approved in the period 1981 to 2010, over half were either directly derived from or inspired by natural substances. Among cancer treatments, of 185 small-molecule drugs approved in the period from 1981 to 2019, 65% were derived from or inspired by natural substances.
Safety
Plant medicines can cause adverse effects and even death, whether by side-effects of their active substances, by adulteration or contamination, by overdose, or by inappropriate prescription. Many such effects are known, while others remain to be explored scientifically. There is no reason to presume that because a product comes from nature it must be safe: the existence of powerful natural poisons like atropine and nicotine shows this to be untrue. Further, the high standards applied to conventional medicines do not always apply to plant medicines, and dose can vary widely depending on the growth conditions of plants: older plants may be much more toxic than young ones, for instance.
Plant extracts may interact with conventional drugs, both because they may provide an increased dose of similar compounds, and because some phytochemicals interfere with the body's systems that metabolise drugs in the liver including the cytochrome P450 system, making the drugs last longer in the body and have a cumulative effect. Plant medicines can be dangerous during pregnancy. Since plants may contain many different substances, plant extracts may have complex effects on the human body.
Quality, advertising, and labelling
Herbal medicine and dietary supplement products have been criticized as not having sufficient standards or scientific evidence to confirm their contents, safety, and presumed efficacy. Companies often make false claims about their herbal products, promising health benefits that are not backed by evidence in order to generate more sales. The market for dietary supplements and nutraceuticals grew by 5% during the COVID-19 pandemic, which led the United States to take action against the deceptive marketing of herbal products claimed to combat the virus.
Threats
Where medicinal plants are harvested from the wild rather than cultivated, they are subject to both general and specific threats. General threats include climate change and habitat loss to development and agriculture. A specific threat is over-collection to meet rising demand for medicines. A case in point was the pressure on wild populations of the Pacific yew soon after news of taxol's effectiveness became public. The threat from over-collection could be addressed by cultivation of some medicinal plants, or by a system of certification to make wild harvesting sustainable. A report in 2020 by the Royal Botanic Gardens, Kew identifies 723 medicinal plants as being at risk of extinction, caused partly by over-collection.
Maglev
Maglev (derived from magnetic levitation) is a system of rail transport whose rolling stock is levitated by electromagnets rather than rolled on wheels, eliminating rolling resistance.
Compared to conventional railways, maglev trains can have higher top speeds, superior acceleration and deceleration, lower maintenance costs, improved gradient handling, and lower noise. However, they are more expensive to build, cannot use existing infrastructure, and use more energy at high speeds.
Maglev trains have set several speed records. The train speed record of was set by the experimental Japanese L0 Series maglev in 2015. From 2002 until 2021, the record for the highest operational speed of a passenger train of was held by the Shanghai maglev train, which uses German Transrapid technology. The service connects Shanghai Pudong International Airport and the outskirts of central Pudong, Shanghai. At its historical top speed, it covered the distance of in just over 8 minutes.
Different maglev systems achieve levitation in different ways, which broadly fall into two categories: electromagnetic suspension (EMS) and electrodynamic suspension (EDS). Propulsion is typically provided by a linear motor. The power needed for levitation is typically not a large percentage of the overall energy consumption of a high-speed maglev system. Instead, overcoming drag takes the most energy. Vactrain technology has been proposed as a means to overcome this limitation.
Despite over a century of research and development, there are only seven operational maglev trains today — four in China, two in South Korea, and one in Japan.
History
Development
In the late 1940s, the British electrical engineer Eric Laithwaite, a professor at Imperial College London, developed the first full-size working model of the linear induction motor. He became professor of heavy electrical engineering at Imperial College in 1964, where he continued his successful development of the linear motor. Since linear motors do not require physical contact between the vehicle and guideway, they became a common fixture on advanced transportation systems in the 1960s and 1970s. Laithwaite joined one such project, the Tracked Hovercraft RTV-31, based near Cambridge, UK, although the project was cancelled in 1973.
The linear motor was naturally suited to use with maglev systems as well. In the early 1970s, Laithwaite discovered a new arrangement of magnets, the magnetic river, that allowed a single linear motor to produce both lift and forward thrust, allowing a maglev system to be built with a single set of magnets. Working at the British Rail Research Division in Derby, along with teams at several civil engineering firms, the "transverse-flux" system was developed into a working system.
The first commercial maglev people mover was simply called "MAGLEV" and officially opened in 1984 near Birmingham, England. It operated on an elevated section of monorail track between Birmingham Airport and Birmingham International railway station, running at speeds up to . The system was closed in 1995 due to reliability problems.
First maglev patent
High-speed transportation patents were granted to various inventors throughout the world. The first relevant patent, (2 December 1902), issued to Albert C. Albertson, used magnetic levitation to take part of the weight off of the wheels while using conventional propulsion.
Early United States patents for a linear motor propelled train were awarded to German inventor . The inventor was awarded (14 February 1905) and (21 August 1907). In 1907, another early electromagnetic transportation system was developed by F. S. Smith. In 1908, Cleveland mayor Tom L. Johnson filed a patent for a wheel-less "high-speed railway" levitated by an induced magnetic field. Jokingly known as "Greased Lightning," the suspended car operated on a 90-foot test track in Johnson's basement "absolutely noiseless[ly] and without the least vibration." A series of German patents for magnetic levitation trains propelled by linear motors were awarded to Hermann Kemper between 1937 and 1941. An early maglev train was described in , "Magnetic system of transportation", by G. R. Polgreen on 25 August 1959. The first use of "maglev" in a United States patent was in "Magnetic levitation guidance system" by Canadian Patents and Development Limited.
New York, United States, 1912
In 1912 French-American inventor Émile Bachelet demonstrated a model train with electromagnetic levitation and propulsion in Mount Vernon, New York. Bachelet's first related patent, was granted in 1912. The electromagnetic propulsion was by attraction of iron in the train by direct current solenoids spaced along the track. The electromagnetic levitation was due to repulsion of the aluminum base plate of the train by the pulsating current electromagnets under the track. The pulses were generated by Bachelet's own Synchronizing-interrupter supplied with 220 VAC. As the train moved it switched power to the section of track that it was on. Bachelet went on to demonstrate his model in London, England in 1914, which resulted in the registration of Bachelet Levitated Railway Syndicate Limited July 9 in London, just weeks before the start of WWI.
Bachelet's second related patent, granted the same day as the first, had the levitation electromagnets in the train and the track was aluminum plate. In the patent he stated that this was a much cheaper construction, but he did not demonstrate it.
New York, United States, 1968
In 1959, while delayed in traffic on the Throgs Neck Bridge, James Powell, a researcher at Brookhaven National Laboratory (BNL), thought of using magnetically levitated transportation. Powell and BNL colleague Gordon Danby worked out a maglev concept using static magnets mounted on a moving vehicle to induce electrodynamic lifting and stabilizing forces in specially shaped loops, such as figure-of-8 coils on a guideway. These were patented in 1968–1969.
Japan, 1969
Japan operates two independently developed maglev trains. One is HSST (and its descendant, the Linimo line) by Japan Airlines and the other, which is more well known, is SCMaglev by the Central Japan Railway Company.
The development of the latter started in 1969. The first successful SCMaglev run was made on a short track at the Japanese National Railways' (JNR's) Railway Technical Research Institute in 1972. Maglev trains on the Miyazaki test track (a later, 7 km long test track) regularly hit by 1979. After an accident destroyed the train, a new design was selected. In Okazaki, Japan (1987), the SCMaglev was used for test rides at the Okazaki exhibition. Tests in Miyazaki continued throughout the 1980s, before transferring to a far longer test track, long, in Yamanashi in 1997. The track has since been extended to almost . The world speed record for crewed trains was set there in 2015.
Development of HSST started in 1974. In Tsukuba, Japan (1985), the HSST-03 (Linimo) became popular at the Tsukuba World Exposition, in spite of its low top speed. In Saitama, Japan (1988), the HSST-04-1 was revealed at the Saitama exhibition in Kumagaya. Its fastest recorded speed was .
Construction of a new high-speed maglev line, the Chuo Shinkansen, started in 2014. It is being built by extending the SCMaglev test track in Yamanashi in both directions. The completion date is unknown, with the estimate of 2027 no longer possible following a local governmental rejection of a construction permit.
Hamburg, Germany, 1979
Transrapid 05 was the first maglev train with longstator propulsion licensed for passenger transportation. In 1979, a track was opened in Hamburg for the first (IVA 79). Interest was sufficient that operations were extended three months after the exhibition finished, having carried more than 50,000 passengers. It was reassembled in Kassel in 1980.
Ramenskoye, Moscow, USSR, 1979
In 1979 the USSR town of Ramenskoye (Moscow oblast) built an experimental test site for running experiments with cars on magnetic suspension. The test site consisted of a 60-metre ramp which was later extended to 980 metres. From the late 1970s to the 1980s five prototypes of cars were built that received designations from TP-01 (ТП-01) to TP-05 (ТП-05). The early cars were intended to reach speeds of up to .
The construction of a maglev track using the technology from Ramenskoye started in Armenian SSR in 1987 and was planned to be completed in 1991. The track was supposed to connect the cities of Yerevan and Sevan via the city of Abovyan. The original design speed was which was later lowered to . However, the Spitak earthquake in 1988 and the First Nagorno-Karabakh War caused the project to freeze. In the end the overpass was only partially constructed.
In the early 1990s, the maglev theme was continued by the Engineering Research Center "TEMP" (ИНЦ "ТЭМП") this time by the order from the Moscow government. The project was named V250 (В250). The idea was to build a high-speed maglev train to connect Moscow to the Sheremetyevo airport. The train would consist of 64-seater cars and run at speeds up to . In 1993, due to the financial crisis, the project was abandoned. However, from 1999 the "TEMP" research center had been participating as a co-developer in the creation of the linear motors for the Moscow Monorail system.
Birmingham, United Kingdom, 1984–1995
The world's first commercial maglev system was a low-speed maglev shuttle that ran between the airport terminal of Birmingham International Airport and the nearby Birmingham International railway station between 1984 and 1995. Its track length was , and the trains levitated at a height of , held up by electromagnets and propelled with linear induction motors. It operated for 11 years and was initially very popular with passengers, but obsolescence problems with the electronic systems made it progressively unreliable as the years passed, leading to its closure in 1995. One of the original cars is now on display at Railworld in Peterborough, together with the RTV31 hover train vehicle. Another is on display at the National Railway Museum in York.
Several favourable conditions existed when the link was built:
The British Rail Research vehicle was 3 tonnes and extension to the 8-tonne vehicle was easy.
Electrical power was available.
The airport and rail buildings were suitable for terminal platforms.
Only one crossing over a public road was required and no steep gradients were involved.
Land was owned by the railway or airport.
Local industries and councils were supportive.
Some government finance was provided and because of sharing work, the cost per organization was low.
After the system closed in 1995, the original guideway lay dormant until 2003, when a replacement cable-hauled system, the AirRail Link Cable Liner people mover, was opened.
Emsland, Germany, 1984–2011
Transrapid, a German maglev company, had a test track in Emsland with a total length of . The single-track line ran between Dörpen and Lathen with turning loops at each end. The trains regularly ran at up to . Paying passengers were carried as part of the testing process. The construction of the test facility began in 1980 and finished in 1984.
In 2006, a maglev train accident occurred in Lathen, killing 23 people. It was found to have been caused by human error in implementing safety checks. From 2006 no passengers were carried. At the end of 2011 the operation licence expired and was not renewed, and in early 2012 demolition permission was given for its facilities, including the track and factory.
In March 2021 it was reported the CRRC was investigating reviving the Emsland test track. In May 2019 CRRC had unveiled its "CRRC 600" prototype which is designed to reach .
Vancouver, Canada, and Hamburg, Germany, 1986–1988
In Vancouver, Canada, the HSST-03 by HSST Development Corporation (Japan Airlines and Sumitomo Corporation) was exhibited at Expo 86, and ran on a test track that provided guests with a ride in a single car along a short section of track at the fairgrounds. It was removed after the fair. It was shown at the Aoi Expo in 1987 and is now on static display at Okazaki Minami Park.
South Korea, 1993–2023
In 1993, South Korea completed the development of its own maglev train, shown off at the Daejeon Expo '93, which was developed further into a full-fledged maglev UTM-02 capable of travelling up to in 2006. This final model was incorporated in the Incheon Airport Maglev which opened on 3 February 2016, making South Korea the world's fourth country to operate its own self-developed maglev after the United Kingdom's Birmingham International Airport, Germany's Berlin M-Bahn, and Japan's Linimo. It links Incheon International Airport to the Yongyu Station and Leisure Complex on Yeongjong island. It offers a transfer to the Seoul Metropolitan Subway at AREX's Incheon International Airport Station, and is free of charge to ride, operating between 9 am and 6 pm at 15-minute intervals.
The maglev system was co-developed by the South Korea Institute of Machinery and Materials (KIMM) and Hyundai Rotem. It is long, with six stations and a operating speed.
Two more stages are planned of and . Once completed it will become a circular line. It was shut down in September 2023.
Germany/China, 2010–present
Transport System Bögl (TSB) is a driverless maglev system developed by the German construction company Max Bögl since 2010. Its primary intended use is for short to medium distances (up to 30 km) and speeds up to 150 km/h for uses such as airport shuttles. The company has been doing test runs on an 820-meter-long test track at their headquarters in Sengenthal, Upper Palatinate, Germany, since 2012 clocking over 100,000 tests covering a distance of over 65,000 km as of 2018.
In 2018 Max Bögl signed a joint venture with the Chinese company Chengdu Xinzhu Road & Bridge Machinery Co. with the Chinese partner given exclusive rights of production and marketing for the system in China. The joint venture constructed a demonstration line near Chengdu, China, and two vehicles were airlifted there in June, 2020. In February 2021 a vehicle on the Chinese test track hit a top speed of .
China, since 2000
According to the International Maglev Board there are at least four maglev research programmes underway in China at: Southwest Jiaotong University (Chengdu), Tongji University (Shanghai), CRRC Tangshan-Changchun Railway Vehicle Co., and Chengdu Aircraft Industry Group. The latest high-speed prototype, unveiled in July 2021, was manufactured by CRRC Qingdao Sifang.
Low-to-medium speed
Development of the low-to-medium speed systems, that is, , by the CRRC has led to the opening of lines such as the Changsha Maglev Express in 2016 and Line S1 in Beijing in 2017. In April 2020 a new model capable of and compatible with the Changsha line completed testing. The vehicle, under development since 2018, has a 30 percent increase in traction efficiency and a 60 percent increase in speed over the stock in use on the line since its opening. The vehicles entered service in July 2021 with a top speed of .
CRRC Zhuzhou Locomotive said in April 2020 it is developing a model capable of .
High speed
There are two competing efforts for high-speed maglev systems, i.e., .
The first is based on the Transrapid technology used in the Shanghai maglev train and is developed by the CRRC under license from Thyssen-Krupp.
In 2006 the CM1 Dolphin prototype was unveiled and began testing on a new test track at Tongji University, northwest of Shanghai.
A prototype vehicle of the CRRC 600 was developed in 2019 and tested from June 2020.
In March 2021 a model began trials.
In July 2021, the CRRC 600 maglev, planned to travel at up to , was unveiled in Qingdao. It was reported to be the world's fastest ground vehicle.
A high-speed test track is under development in China and also, in April 2021, there was consideration given to re-opening the Emsland test facility in Germany.
A second, incompatible high-speed prototype was constructed by Max Bögl and Chengdu Xinzhu Road & Bridge Machinery Co. Ltd. and unveiled in January 2021. Developed at Southwest Jiaotong University in Chengdu, the Super Bullet Maglev design uses high-temperature superconducting magnets, is designed for and was demonstrated on a test track.
Technology
In the public imagination, maglev often evokes the concept of an elevated monorail track with a linear motor. Maglev systems may be monorail or dual rail—the SCMaglev MLX01 for instance uses a trench-like track—and not all monorail trains are maglevs. Some railway transport systems incorporate linear motors but use electromagnetism only for propulsion, without levitating the vehicle. Such trains have wheels and are not maglevs. Maglev tracks, monorail or not, can also be constructed at grade or underground in tunnels. Conversely, non-maglev tracks, monorail or not, can be elevated or underground too. Some maglev trains do incorporate wheels and function like linear motor-propelled wheeled vehicles at slower speeds but levitate at higher speeds. This is typically the case with electrodynamic suspension maglev trains. Aerodynamic factors may also play a role in the levitation of such trains.
The two main types of maglev technology are:
Electromagnetic suspension (EMS), electronically controlled electromagnets in the train attract it to a magnetically conductive (usually steel) track.
Electrodynamic suspension (EDS) uses superconducting electromagnets or strong permanent magnets that create a magnetic field; relative movement induces currents in nearby metallic conductors, and the resulting forces push and pull the train toward the designed levitation position on the guideway.
Electromagnetic suspension (EMS)
In electromagnetic suspension (EMS) systems, the train levitates by attraction to a ferromagnetic (usually steel) rail while electromagnets, attached to the train, are oriented toward the rail from below. The system is typically arranged on a series of C-shaped arms, with the upper portion of the arm attached to the vehicle, and the lower inside edge containing the magnets. The rail is situated inside the C, between the upper and lower edges.
Magnetic attraction varies inversely with the square of distance, so minor changes in the distance between the magnets and the rail produce greatly varying forces. These changes in force are dynamically unstable: a slight divergence from the optimum position tends to grow, requiring sophisticated feedback systems to maintain a constant distance from the track (approximately ).
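The sensitivity described above can be illustrated numerically. This is a hedged sketch assuming an idealised inverse-square force law and a 10 mm nominal air gap (an assumed round figure, not a quoted specification for any real system):

```python
# Illustrative sketch (not from any system specification): how sharply
# an EMS levitation force varies with the air gap, using the idealised
# inverse-square relation F proportional to 1/d^2.

def relative_force(nominal_gap_mm: float, actual_gap_mm: float) -> float:
    """Force at the actual gap relative to the force at the nominal gap."""
    return (nominal_gap_mm / actual_gap_mm) ** 2

# A 1 mm sag (gap closes from 10 mm to 9 mm) raises the attractive
# force by about 23%, pulling the vehicle further toward the rail --
# the dynamic instability that the feedback system must counteract.
print(round(relative_force(10.0, 9.0), 3))   # 1.235
print(round(relative_force(10.0, 11.0), 3))  # 0.826
```

A disturbance in either direction is thus self-reinforcing, which is why the control loop must continuously re-measure the gap and adjust the magnet current.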
The major advantage of suspended maglev systems is that they work at all speeds, unlike electrodynamic systems, which only work above a minimum speed of about . This eliminates the need for a separate low-speed suspension system and can simplify track layout. On the downside, the dynamic instability demands fine track tolerances, which can offset this advantage. Eric Laithwaite was concerned that to meet the required tolerances, the gap between magnets and rail would have to be increased to the point where the magnets would be unreasonably large. In practice, this problem was addressed through improved feedback systems, which support the required tolerances. Air gap and energy efficiency can be improved with so-called hybrid electromagnetic suspension (H-EMS), in which permanent magnets generate the main levitation force while electromagnets control the air gap, an arrangement also known as electropermanent magnets. Ideally, stabilizing the suspension would take negligible power; in practice the power requirement is less than it would be if electromagnets alone provided the entire suspension force.
Electrodynamic suspension (EDS)
In electrodynamic suspension (EDS), both the guideway and the train generate magnetic fields, and the train is levitated by the repulsive and attractive forces between them. In some configurations the train can be levitated by repulsive force alone. In the early stages of maglev development at the Miyazaki test track, a purely repulsive system was used instead of the later repulsive-and-attractive EDS system. The magnetic field is produced either by superconducting magnets (as in JR-Maglev) or by an array of permanent magnets (as in Inductrack). The repulsive and attractive force in the track is created by an induced magnetic field in wires or other conducting strips in the track.
A major advantage of EDS maglev systems is that they are dynamically stable: changes in the distance between the track and the magnets create strong restoring forces that return the system to its original position. In addition, the attractive force varies in the opposite manner, providing the same adjustment effect. No active feedback control is needed.
However, at slow speeds, the current induced in these coils and the resultant magnetic flux is not large enough to levitate the train. For this reason, the train must have wheels or some other form of landing gear to support the train until it reaches take-off speed. Since a train may stop at any location, due to equipment problems for instance, the entire track must be able to support both low- and high-speed operation.
Another downside is that the EDS system naturally creates a field in the track in front and to the rear of the lift magnets, which acts against the magnets and creates magnetic drag. This is generally only a concern at low speeds, and is one of the reasons why JR abandoned a purely repulsive system and adopted the sidewall levitation system. At higher speeds other modes of drag dominate.
The drag force can be used to the electrodynamic system's advantage, however, as it creates a varying force in the rails that can be used as a reaction system to drive the train, without the need for a separate reaction plate, as in most linear motor systems. Laithwaite led development of such "transverse-flux" systems at his Imperial College laboratory. Alternatively, propulsion coils on the guideway exert a force on the magnets in the train and make the train move forward. The propulsion coils that exert a force on the train are effectively a linear motor: an alternating current through the coils generates a continuously varying magnetic field that moves forward along the track. The frequency of the alternating current is synchronized to match the speed of the train. The offset between the field exerted by magnets on the train and the applied field creates a force moving the train forward.
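The synchronization between the alternating current's frequency and the train's speed follows the standard linear synchronous motor relation, sketched below. The 0.25 m pole pitch is a hypothetical illustrative value, not a figure from any particular maglev system:

```python
# Hedged illustration of linear synchronous motor speed control.
# The travelling magnetic field advances one pole pair (two pole
# pitches) per AC cycle, so v = 2 * pole_pitch * frequency.
# The 0.25 m pole pitch below is an assumed illustrative value.

def synchronous_speed_ms(pole_pitch_m: float, frequency_hz: float) -> float:
    """Speed of the travelling field in m/s; the train is locked to it."""
    return 2.0 * pole_pitch_m * frequency_hz

# Driving the guideway coils at 300 Hz with a 0.25 m pole pitch:
v = synchronous_speed_ms(0.25, 300.0)
print(v, v * 3.6)  # 150.0 m/s, i.e. 540.0 km/h
```

Because the vehicle is locked to the travelling field, speed control reduces to frequency control of the guideway power supply.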
Tracks
The term maglev refers not only to the vehicles, but to the railway system as well, specifically designed for magnetic levitation and propulsion. All operational implementations of maglev technology make minimal use of wheeled train technology and are not compatible with conventional rail tracks. Because they cannot share existing infrastructure, maglev systems must be designed as standalone systems. The SPM maglev system is interoperable with steel rail tracks and would permit maglev vehicles and conventional trains to operate on the same tracks.
MAN in Germany also designed a maglev system that worked with conventional rails, but it was never fully developed.
Evaluation
Each implementation of the magnetic levitation principle for train-type travel involves advantages and disadvantages.
Neither Inductrack nor the superconducting EDS is able to levitate vehicles at a standstill, although Inductrack provides levitation at much lower speed; wheels are required for these systems. EMS systems are wheel-free.
The German Transrapid, Japanese HSST (Linimo), and Korean Rotem EMS maglevs levitate at a standstill, with electricity extracted from the guideway using power rails for the latter two, and wirelessly for the Transrapid. If guideway power is lost on the move, the Transrapid is still able to generate levitation down to speed, using the power from onboard batteries. This is not the case with the HSST and Rotem systems.
Propulsion
EMS systems such as HSST/Linimo can provide both levitation and propulsion using an onboard linear motor. EDS systems and some EMS systems such as Transrapid, by contrast, levitate the train but do not propel it. Such systems need some other technology for propulsion. A linear motor (propulsion coils) mounted in the track is one solution. Over long distances, coil costs could be prohibitive.
Stability
Earnshaw's theorem shows that no combination of static magnets can be in a stable equilibrium. Therefore, a dynamic (time-varying) magnetic field is required to achieve stabilization. EMS systems rely on active electronic stabilization that constantly measures the bearing distance and adjusts the electromagnet current accordingly. EDS systems rely on changing magnetic fields to create currents, which can give passive stability.
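A toy simulation makes the contrast concrete. This is an assumed linearised model, not any real controller: without feedback a gap disturbance grows each step; with a simple proportional correction it decays.

```python
# Toy sketch of EMS active stabilisation (assumed linearised model).
# Each update, the open-loop instability multiplies the gap error by
# 1.1; the controller subtracts gain * error (proportional feedback).

def simulate(gain: float, steps: int = 50, instability: float = 1.1) -> float:
    """Return the gap error after `steps` updates of the toy model."""
    error = 1.0
    for _ in range(steps):
        error = instability * error - gain * error
    return error

print(round(simulate(0.0), 1))  # uncontrolled: grows to about 117.4
print(simulate(0.3) < 1e-4)     # controlled: decays toward zero -> True
```

Real EMS controllers are far more elaborate (sensing, filtering, actuator dynamics), but the qualitative point is the same: the raw magnetic suspension is unstable, and only the feedback loop makes it usable.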
Because maglev vehicles essentially fly, stabilisation of pitch, roll, and yaw is required. In addition to rotation, surge (forward and backward motions), sway (sideways motion), or heave (up and down motions) can be problematic.
Superconducting magnets on a train above a track made of permanent magnets lock the train into its lateral position. It can move linearly along the track, but not off the track. This is due to the Meissner effect and flux pinning.
Guidance system
Some systems use null-current systems (also called null-flux systems). These use a coil wound so that it enters two opposing, alternating fields, making the average flux in the loop zero. When the vehicle is in the straight-ahead position, no current flows, but any movement off-line creates flux that generates a field that naturally pushes or pulls it back into line.
Proposed technology enhancements
Evacuated tubes
Some systems (notably the Swissmetro system and the Hyperloop) propose the use of vactrains—maglev train technology used in evacuated (airless) tubes, which removes air drag. This has the potential to increase speed and efficiency greatly, as most of the energy for conventional maglev trains is lost to aerodynamic drag.
One potential risk for passengers of trains operating in evacuated tubes is that they could be exposed to cabin depressurization unless tunnel safety monitoring systems can repressurize the tube in the event of a train malfunction or accident, though since trains are likely to operate at or near the Earth's surface, emergency restoration of ambient pressure should be straightforward. The RAND Corporation has depicted a vacuum tube train that could, in theory, cross the Atlantic or the USA in around 21 minutes.
Rail-maglev hybrid
The Polish startup Nevomo (previously Hyper Poland) is developing a system for modifying existing railway tracks into a maglev system on which conventional wheel-rail trains, as well as maglev vehicles, can travel. Vehicles on this so-called 'magrail' system will be able to reach speeds of up to at significantly lower infrastructure costs than stand-alone maglev lines. In 2023 Nevomo conducted the first MagRail tests on Europe's longest test track for passive magnetic levitation, which the company had previously built in Poland.
Energy use
Energy in maglev trains is used to accelerate the train, and may be regained when the train slows via regenerative braking. It is also used to levitate the train and stabilise its movement. Most of the energy is needed to overcome air drag. Some energy is used for air conditioning, heating, lighting and other ancillary systems.
At low speeds the percentage of power used for levitation can be significant, consuming up to 15% more power than a subway or light rail service. For short distances the energy used for acceleration might be considerable.
The force needed to overcome air drag increases with the square of the velocity and hence dominates at high speed. The energy needed per unit distance increases with the square of the velocity, while travel time decreases linearly; power therefore increases with the cube of the velocity. For example, 2.37 times as much power is needed to travel at than at , while drag rises to 1.77 times the original force.
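The quoted figures are consistent with a speed increase of one third (a ratio of 4/3), since drag scales with the square of speed and power with the cube. A minimal check:

```python
# Aerodynamic drag scales with the square of speed, and the power to
# overcome it with the cube.  A 4/3 speed ratio reproduces the figures
# quoted above (the quoted 1.77 truncates 1.777...).

def drag_ratio(speed_ratio: float) -> float:
    return speed_ratio ** 2

def power_ratio(speed_ratio: float) -> float:
    return speed_ratio ** 3

r = 4.0 / 3.0
print(round(drag_ratio(r), 2), round(power_ratio(r), 2))  # 1.78 2.37
```

The same scaling explains why energy per unit distance (power divided by speed) grows only quadratically: the cubic power increase is partly offset by the linearly shorter travel time.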
Aircraft take advantage of lower air pressure and lower temperatures by cruising at altitude to reduce energy consumption but unlike trains need to carry fuel on board. This has led to the suggestion of conveying maglev vehicles through partially evacuated tubes.
High-speed maglev comparison with conventional high-speed trains
Maglev transport is non-contact and electric powered. It relies little or not at all on the wheels, bearings and axles common to wheeled rail systems.
Speed: Maglev allows higher top speeds than conventional rail. While experimental wheel-based high-speed trains have demonstrated similar speeds, conventional trains suffer from friction and wear between wheels and track at such speeds, raising maintenance costs, unlike levitating maglev trains.
Maintenance: Maglev trains currently in operation have demonstrated the need for minimal guideway maintenance. Vehicle maintenance is also minimal (based on hours of operation, rather than on speed or distance traveled). Traditional rail is subject to mechanical wear and tear that increases rapidly with speed, also increasing maintenance. For example, brake wear and overhead-wire wear have caused problems for the Fastech 360 rail Shinkansen. Maglev would eliminate these issues.
Weather: In theory, maglev trains should be unaffected by snow, ice, severe cold, rain, or high winds. However, no maglev system has yet been installed in a location with such a harsh climate.
Acceleration: Maglev vehicles accelerate and decelerate faster than mechanical systems regardless of the slickness of the guideway or the slope of the grade, because they are non-contact systems.
Track: Maglev trains are not compatible with conventional track, and therefore require custom infrastructure for their entire route. By contrast, conventional high-speed trains such as the TGV are able to run, albeit at reduced speeds, on existing rail infrastructure, thus reducing expenditure where new infrastructure would be particularly expensive (such as the final approaches to city terminals), or on extensions where traffic does not justify new infrastructure. John Harding, former chief maglev scientist at the Federal Railroad Administration, claimed that separate maglev infrastructure more than pays for itself with higher levels of all-weather operational availability and nominal maintenance costs. These claims have yet to be proven in an intense operational setting, and they do not consider the increased maglev construction costs. However, in countries like China, there is discussion of building some key conventional high-speed rail tunnels and bridges to a standard that would allow later upgrading to maglev.
Efficiency: Conventional rail is probably more efficient at lower speeds. But due to the lack of physical contact between the track and the vehicle, maglev trains experience no rolling resistance, leaving only air resistance and electromagnetic drag, potentially improving power efficiency. Some systems, however, such as the Central Japan Railway Company SCMaglev use rubber tires at low speeds, reducing efficiency gains.
Mass: The electromagnets in many EMS and EDS designs require between 1 and 2 kilowatts per ton. The use of superconductor magnets can reduce the electromagnets' energy consumption. A 50-ton Transrapid maglev vehicle can lift an additional 20 tons, for a total of 70 tons, which consumes . Most energy use for the TRI is for propulsion and overcoming air resistance at speeds over .
Weight loading: High-speed rail requires more support and construction for its concentrated wheel loading. Maglev cars are lighter and distribute weight more evenly.
Noise: Because the major source of noise of a maglev train comes from displaced air rather than from wheels touching rails, maglev trains produce less noise than a conventional train at equivalent speeds. However, the psychoacoustic profile of the maglev may reduce this benefit: a study concluded that maglev noise should be rated like road traffic, while conventional trains experience a 5–10 dB "bonus", as they are found less annoying at the same loudness level.
Magnet reliability: Superconducting magnets are generally used to generate the powerful magnetic fields to levitate and propel the trains. These magnets must be kept below their critical temperatures (this ranges from 4.2 K to 77 K, depending on the material). New alloys and manufacturing techniques in superconductors and cooling systems have helped address this issue.
Control systems: No signalling systems are needed for high-speed maglev, because such systems are computer controlled. Human operators cannot react fast enough to manage high-speed trains. High-speed systems require dedicated rights of way and are usually elevated. Two maglev system microwave towers are in constant contact with trains. There is no need for train whistles or horns, either.
Terrain: Maglevs are able to ascend higher grades, offering more routing flexibility and reduced tunneling.
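The per-ton levitation figures quoted under Mass above can be checked with simple arithmetic. This sketch assumes the quoted 1–2 kW per ton applies to the full 70-ton levitated mass (50-ton vehicle plus 20-ton payload):

```python
# Levitation power at the quoted 1-2 kW per ton for a fully loaded
# 70-ton Transrapid vehicle (50 t vehicle + 20 t payload).  This is
# a rough bound from the article's figures, not a measured value.

def levitation_power_kw(mass_tons: float, kw_per_ton: float) -> float:
    return mass_tons * kw_per_ton

low = levitation_power_kw(70.0, 1.0)
high = levitation_power_kw(70.0, 2.0)
print(low, high)  # 70.0 140.0 -> levitation draws roughly 70-140 kW
```

Set against the megawatt-scale propulsion power needed to overcome air resistance at high speed, this illustrates why levitation is a minor share of total energy use except at low speeds.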
High-speed maglev comparison with aircraft
Differences between airplane and maglev travel:
Efficiency: For maglev systems the lift-to-drag ratio can exceed that of aircraft (for example Inductrack can approach 200:1 at high speed, far higher than any aircraft). This can make maglevs more efficient per kilometer. However, at high cruising speeds, aerodynamic drag is much larger than lift-induced drag. Jet-powered aircraft take advantage of low air density at high altitudes to significantly reduce air drag. Hence despite their lift-to-drag ratio disadvantage, they can travel more efficiently at high speeds than maglev trains that operate at sea level.
Routing: Maglevs offer competitive journey times for distances of or less. Additionally, maglevs can easily serve intermediate destinations. Air routes don't require infrastructure between the origin and destination airport and therefore provide greater flexibility to modify service endpoints as needed.
Availability: Maglevs are little affected by weather.
Travel time: Maglevs do not face the extended security protocols faced by air travelers nor is time consumed for taxiing, or for queuing for take-off and landing.
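The efficiency point above (aircraft exploiting low air density at altitude) can be made concrete with the standard parasitic-drag relation D = ½ρv²CdA. The densities below are approximate standard-atmosphere values, and the drag-area is an arbitrary placeholder:

```python
# Parasitic drag D = 0.5 * rho * v^2 * (Cd * A).  At a given speed,
# drag scales directly with air density, so cruising near 11 km
# (rho ~ 0.36 kg/m^3) cuts parasitic drag to roughly 30% of its
# sea-level value (rho ~ 1.225 kg/m^3).  The Cd*A value is an
# arbitrary placeholder; it cancels out of the ratio.

def parasitic_drag_n(rho_kg_m3: float, v_ms: float, cd_area_m2: float) -> float:
    return 0.5 * rho_kg_m3 * v_ms ** 2 * cd_area_m2

ratio = parasitic_drag_n(0.364, 250.0, 10.0) / parasitic_drag_n(1.225, 250.0, 10.0)
print(round(ratio, 2))  # 0.3
```

A sea-level maglev, by contrast, always faces full-density air, which is one motivation for the evacuated-tube proposals discussed earlier.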
Economics
As more maglev systems are deployed, experts expect construction costs to drop by employing new construction methods and from economies of scale.
High-speed systems
The Shanghai maglev demonstration line cost US$1.2 billion to build in 2004. This total includes capital costs such as right-of-way clearing, extensive pile driving, on-site guideway manufacturing, in-situ pier construction at intervals, a maintenance facility and vehicle yard, several switches, two stations, operations and control systems, power feed system, cables and inverters, and operational training. Ridership is not a primary focus of this demonstration line, since the Longyang Road station is on the eastern outskirts of Shanghai. Once the line is extended to South Shanghai Train station and Hongqiao Airport station (which may not happen for economic reasons), ridership is expected to cover operation and maintenance costs and generate significant net revenue.
The South Shanghai extension was expected to cost approximately US$18 million per kilometre. In 2006, the German government invested $125 million in guideway cost reduction development that produced an all-concrete modular design that is faster to build and is 30% less costly. Other new construction techniques were also developed that put maglev at or below price parity with new high-speed rail construction.
The United States Federal Railroad Administration, in a 2005 report to Congress, estimated a cost per mile of between US$50 million and US$100 million. The Maryland Transit Administration (MTA) Environmental Impact Statement estimated a price tag of US$4.9 billion for construction and $53 million a year for operations of its project.
The proposed Chuo Shinkansen maglev in Japan was estimated to cost approximately US$82 billion to build, with a route requiring long tunnels. A Tokaido maglev route replacing the Shinkansen would cost one-tenth as much, as no new tunnel would be needed, but noise pollution concerns made it infeasible.
Low-speed systems
The Japanese Linimo HSST cost approximately US$100 million/km to build. Besides offering improved operation and maintenance costs over other transit systems, these low-speed maglevs provide ultra-high levels of operational reliability, introduce little noise, and generate zero air pollution in dense urban settings.
Records
The highest-recorded maglev speed is , achieved in Japan by JR Central's L0 superconducting maglev on 21 April 2015, faster than the conventional TGV wheel-rail speed record. However, the operational and performance differences between these two very different technologies are far greater. The TGV record was achieved while accelerating down a slight decline, requiring 13 minutes. It then took another for the TGV to stop, requiring a total distance of for the test. The L0 record, however, was achieved on the Yamanashi test track – less than one-third the distance. No maglev or wheel-rail commercial operation has actually been attempted at speeds over .
History of maglev speed records
Systems
Operational systems
High speed
Shanghai Maglev (2003)
The Shanghai Maglev Train, an implementation of the German Transrapid system, has a top speed of . The line is the fastest and first commercially operational high speed maglev. It connects Shanghai Pudong International Airport and the outskirts of central Pudong, Shanghai. The service covers a distance of in just 8 minutes.
In January 2001, the Chinese signed an agreement with Transrapid to build an EMS high-speed maglev line to link Pudong International Airport with Longyang Road Metro station on the southeastern edge of Shanghai. This Shanghai Maglev Train demonstration line, or Initial Operating Segment (IOS), has been in commercial operation since April 2004 and now operates 115 daily trips (up from 110 in 2010) that traverse the between the two stations in 8 minutes, achieving a top speed of and averaging . Prior to May 2021, services operated at up to , taking only 7 minutes to complete the trip. On a 12 November 2003 system commissioning test run, it achieved , its designed top cruising speed. The Shanghai maglev is faster than the Birmingham technology and comes with on-time (to the second) reliability greater than 99.97%.
Plans to extend the line to Shanghai South Railway Station and Hongqiao Airport on the northwestern edge of Shanghai are on hold. After the Shanghai–Hangzhou Passenger Railway became operational in late 2010, the maglev extension became somewhat redundant and may be cancelled.
Low speed
Linimo (Tobu Kyuryo Line, Japan) (2005)
The commercial automated "Urban Maglev" system commenced operation in March 2005 in Aichi, Japan. The Tobu Kyuryo Line, otherwise known as the Linimo line, covers . It has a minimum operating radius of and a maximum gradient of 6%. The linear-motor magnetically levitated train has a top speed of . More than 10 million passengers used this "urban maglev" line in its first three months of operation. At , it is sufficiently fast for frequent stops, has little or no noise impact on surrounding communities, can navigate short radius rights of way, and operates during inclement weather. The trains were designed by the Chubu HSST Development Corporation, which also operates a test track in Nagoya.
Daejeon Expo Maglev (2008)
The first maglev open to the public for test trials using electromagnetic suspension was HML-03, made by Hyundai Heavy Industries for the Daejeon Expo in 1993, after five years of research and the manufacture of two prototypes, HML-01 and HML-02. Government research on urban maglev using electromagnetic suspension began in 1994. The first operating urban maglev was UTM-02 in Daejeon, beginning on 21 April 2008 after 14 years of development and one prototype, UTM-01. The train runs on a track between Expo Park and the National Science Museum, which has been shortened with the redevelopment of Expo Park; the track currently ends at the street parallel to the science museum. UTM-02 was also the subject of the world's first maglev simulation. However, UTM-02 is still the second prototype of a final model. The final UTM model of Rotem's urban maglev, UTM-03, was used for a new line that opened in 2016 on Incheon's Yeongjong island connecting Incheon International Airport (see below).
Changsha Maglev (2016)
The Hunan provincial government launched the construction of a maglev line between Changsha Huanghua International Airport and Changsha South Railway Station, covering a distance of 18.55 km. Construction started in May 2014 and was completed by the end of 2015. Trial runs began on 26 December 2015 and trial operations started on 6 May 2016. As of 13 June 2018 the Changsha maglev had covered a distance of 1.7 million km and carried nearly 6 million passengers. A second generation of these vehicles has been produced which have a top speed of . In July 2021 the new model entered service operating at a top speed of , which reduced the travel time by 3 minutes.
Beijing Line S1 (2017)
Beijing has built China's second low-speed maglev line, Line S1 of the Beijing Subway, using technology developed by the National University of Defense Technology. The line opened on 30 December 2017.
The line operates at speeds up to .
Fenghuang Maglev (2022)
Fenghuang Maglev () is a medium- to low-speed maglev line in Fenghuang County, Xiangxi, Hunan province, China. The line operates at speeds up to . The first phase is with 4 stations (and 2 more future infill stations). The first phase opened on 30 July 2022 and connects the Fenghuanggucheng railway station on the Zhangjiajie–Jishou–Huaihua high-speed railway with the Fenghuang Folklore Garden.
Maglevs under construction
Chūō Shinkansen (Japan)
The Chuo Shinkansen is a high-speed maglev line in Japan. Construction began in 2014, with commercial operations originally expected to start by 2027; that target was abandoned in July 2020. The Linear Chuo Shinkansen Project aims to connect Tokyo and Osaka by way of Nagoya, the capital city of Aichi, in approximately one hour, less than half the travel time of the fastest existing bullet trains connecting the three metropolises. The full track between Tokyo and Osaka was originally expected to be completed in 2045, but the operator is now aiming for 2037.
The L0 Series train type is undergoing testing by the Central Japan Railway Company (JR Central) for eventual use on the Chūō Shinkansen line. It set a crewed world speed record of on 21 April 2015. The trains are planned to run at a maximum speed of , offering journey times of 40 minutes between Tokyo (Shinagawa Station) and , and 1 hour 7 minutes between Tokyo and Osaka (Shin-Ōsaka Station).
Qingyuan Maglev (China)
Qingyuan Maglev Tourist Line () is a medium- to low-speed maglev line in Qingyuan, Guangdong province, China. The line will operate at speeds up to . The first phase is 8.1 km with three stations (and one more future infill station). The first phase was originally scheduled to open in October 2020 and will connect the Yinzhan railway station on the Guangzhou–Qingyuan intercity railway with the Qingyuan Chimelong Theme Park. In the long term the line will be 38.5 km.
Test tracks
AMT test track – Powder Springs, Georgia, USA
A second prototype system in Powder Springs, Georgia, USA, was built by American Maglev Technology, Inc. The test track is long with a curve. Vehicles are operated up to , below the proposed operational maximum of . A June 2013 review of the technology called for an extensive testing program to be carried out to ensure the system complies with various regulatory requirements including the American Society of Civil Engineers (ASCE) People Mover Standard. The review noted that the test track is too short to assess the vehicles' dynamics at the maximum proposed speeds.
FTA's UMTD program, USA
In the US, the Federal Transit Administration (FTA) Urban Maglev Technology Demonstration program funded the design of several low-speed urban maglev demonstration projects. It assessed HSST for the Maryland Department of Transportation and maglev technology for the Colorado Department of Transportation. The FTA also funded work by General Atomics at California University of Pennsylvania to evaluate the MagneMotion M3 and the Maglev2000 of Florida superconducting EDS system. Other US urban maglev demonstration projects of note are the LEVX in Washington State and the Massachusetts-based Magplane.
San Diego, California USA
General Atomics has a test facility in San Diego, which is used to test Union Pacific's freight shuttle in Los Angeles. The technology is "passive" (or "permanent"), using permanent magnets in a Halbach array for lift and requiring no electromagnets for either levitation or propulsion. General Atomics received US$90 million in research funding from the federal government. They are also considering their technology for high-speed passenger services.
SCMaglev, Yamanashi Japan
Japan has a demonstration line in Yamanashi prefecture where the test train SCMaglev L0 Series Shinkansen reached , faster than any wheeled train. The demonstration line will become part of the Chūō Shinkansen linking Tokyo and Nagoya, which is currently under construction.
These trains use superconducting magnets, which allow for a larger gap, and repulsive/attractive-type electrodynamic suspension (EDS). In comparison, Transrapid uses conventional electromagnets and attractive-type electromagnetic suspension (EMS).
On 15 November 2014, The Central Japan Railway Company ran eight days of testing for the experimental maglev Shinkansen train on its test track in Yamanashi Prefecture. One hundred passengers covered a route between the cities of Uenohara and Fuefuki, reaching speeds of up to .
Sengenthal, Germany and Chengdu, China
Transport System Bögl, a division of the German construction company Max Bögl, has built a test track in Sengenthal, Bavaria, Germany. In appearance, the system resembles the German M-Bahn more than the Transrapid system.
The vehicle tested on the track is patented in the US by Max Bögl. The company is also in a joint venture with a Chinese firm. A demonstration line has been built near Chengdu, China, and two vehicles were airlifted there in June 2020. In April 2021 a vehicle on the Chinese test track hit a top speed of .
Southwest Jiaotong University, China
On 31 December 2000, the first crewed high-temperature superconducting maglev was tested successfully at Southwest Jiaotong University, Chengdu, China. This system is based on the principle that bulk high-temperature superconductors can be levitated stably above or below a permanent magnet. The load was over and the levitation gap over . The system uses liquid nitrogen to cool the superconductor.
Jiading Campus of Tongji University, China
A maglev has been operating since 2006 at the Jiading Campus of Tongji University, northwest of Shanghai. The track uses the same design as the operating Shanghai Maglev. Top speed is restricted to due to the length of track and its topology.
MagRail test track, Poland
In the first quarter of 2022, the Polish technology startup Nevomo completed construction of Europe's longest test track for passive magnetic levitation. The 700-metre-long railway track in Subcarpathian Voivodeship in Poland allows vehicles using the company's MagRail system to travel at speeds of up to 160 km/h. The installation of all necessary wayside equipment was completed in December 2022 and tests began in spring 2023.
Proposed maglev systems
Many maglev systems have been proposed in North America, Asia, Europe and on the Moon. Many are in the early planning stages or were explicitly rejected.
Australia
Sydney-Illawarra
A maglev route was proposed between Sydney and Wollongong. The proposal came to prominence in the mid-1990s. The Sydney–Wollongong commuter corridor is the largest in Australia, with upwards of 20,000 people commuting each day. Existing trains use the Illawarra line, between the cliff face of the Illawarra escarpment and the Pacific Ocean, with travel times about 2 hours. The proposal would cut travel times to 20 minutes.
Melbourne
In late 2008, a proposal was put forward to the Government of Victoria to build a privately funded and operated maglev line to service the Greater Melbourne metropolitan area in response to the Eddington Transport Report that did not investigate above-ground transport options. The maglev would service a population of over 4 million and the proposal was costed at A$8 billion.
However, despite road congestion and Australia's highest roadspace per capita, the government dismissed the proposal in favour of road expansion including an A$8.5 billion road tunnel, $6 billion extension of the Eastlink to the Western Ring Road and a $700 million Frankston Bypass.
Canada
Toronto Zoo: Edmonton-based Magnovate proposed a new ride and transportation system at the Toronto Zoo reviving the Toronto Zoo Domain Ride system, which was closed following two severe accidents in 1994. The Zoo's board unanimously approved the proposal on 29 November 2018.
The company plans to construct and operate the $25 million system on the former route of the Domain Ride (known locally as the Monorail, despite not being considered one) at zero cost to the Zoo and operate it for 15 years, splitting the profits with the Zoo. The ride will serve a single-directional loop around Zoo grounds, serving five stations and likely replacing the current Zoomobile tour tram service. Planned to be operational by 2022 at the earliest, this would be the first commercial maglev system in North America should it be approved.
China
Beijing – Guangzhou line
A maglev test line linking Xianning in Hubei Province and Changsha in Hunan Province will start construction in 2020. The test line is about in length and might be part of Beijing – Guangzhou maglev in long-term planning. In 2021, the Guangdong government proposed a Maglev line between Hong Kong and Guangzhou via Shenzhen and beyond to Beijing.
Other proposed lines
Shanghai – Hangzhou
China planned to extend the existing Shanghai Maglev Train, initially by around to Shanghai Hongqiao Airport and then to the city of Hangzhou (Shanghai-Hangzhou Maglev Train). If built, this would be the first inter-city maglev rail line in commercial service.
The project was controversial and repeatedly delayed. In May 2007 the project was suspended by officials, reportedly due to public concerns about radiation from the system. In January and February 2008, hundreds of residents demonstrated in downtown Shanghai, protesting that the route came too close to their homes and citing concerns about sickness due to exposure to the strong magnetic field, noise, pollution and devaluation of property near the line. Final approval to build the line was granted on 18 August 2008. Originally scheduled to be ready by Expo 2010, plans later called for completion by 2014. The Shanghai municipal government considered multiple options, including building the line underground to allay public fears. The same report stated that the final decision had to be approved by the National Development and Reform Commission.
In 2007 the Shanghai municipal government was considering building a factory in Nanhui district to produce low-speed maglev trains for urban use.
Shanghai – Beijing
A proposed line would have connected Shanghai to Beijing, over a distance of , at an estimated cost of £15.5 billion. No projects had been revealed as of 2014.
Germany
On 25 September 2007, Bavaria announced a high-speed maglev-rail service from Munich to its airport. The Bavarian government signed contracts with Deutsche Bahn and the Transrapid consortium of Siemens and ThyssenKrupp for the €1.85 billion project.
On 27 March 2008, the German Transport minister announced the project had been cancelled due to rising costs associated with constructing the track. A new estimate put the project between €3.2–3.4 billion.
Hong Kong
In March 2021 a government official said Hong Kong would be included in a planned maglev network across China, planned to operate at and begin opening by 2030.
Hong Kong is already connected to the Chinese high-speed rail network by the Guangzhou–Shenzhen–Hong Kong Express Rail Link, which opened on Sunday 23 September 2018.
India
Mumbai – Delhi: A project was presented to the then Indian railway minister, Mamata Banerjee, by an American company to connect Mumbai and Delhi. Then Prime Minister Manmohan Singh said that if the line project was successful the Indian government would build lines between other cities and also between Mumbai Central and Chhatrapati Shivaji International Airport.
Mumbai – Nagpur: The State of Maharashtra approved a feasibility study for a maglev train between Mumbai and Nagpur, some apart.
Chennai – Bangalore – Mysore: A detailed report was to be prepared and submitted by December 2012 for a line to connect Chennai to Mysore via Bangalore at a cost of $26 million per kilometre, reaching speeds of .
Iran
In May 2009, Iran and a German company signed an agreement to use maglev to link Tehran and Mashhad. The agreement was signed at the Mashhad International Fair site between the Iranian Ministry of Roads and Transportation and the German company. The line could possibly reduce travel time between Tehran and Mashhad to about 2.5 hours. Munich-based Schlegel Consulting Engineers said they had signed the contract with the Iranian ministry of transport and the governor of Mashhad. "We have been mandated to lead a German consortium in this project," a spokesman said. "We are in a preparatory phase." The project could be worth between €10 billion and €12 billion, the Schlegel spokesman said.
Italy
A first proposal was formalized in April 2008, in Brescia, by journalist Andrew Spannaus, who recommended a high-speed connection from Malpensa airport to the cities of Milan, Bergamo and Brescia.
In March 2011, Nicola Oliva proposed a maglev connection between Pisa airport and the cities of Prato and Florence (Santa Maria Novella train station and Florence Airport). The travelling time would be reduced from the typical 1 hour 15 minutes to around 20 minutes. The second part of the line would be a connection to Livorno, to integrate maritime, aerial and terrestrial transport systems.
Malaysia/Singapore
A consortium led by UEM Group Bhd and ARA Group proposed maglev technology to link Malaysian cities to Singapore. The idea was first mooted by YTL Group. Its technology partner then was said to be Siemens. High costs sank the proposal. The concept of a high-speed rail link from Kuala Lumpur to Singapore resurfaced. It was cited as a proposed "high impact" project in the Economic Transformation Programme (ETP) that was unveiled in 2010. Approval has been given for the Kuala Lumpur–Singapore high-speed rail project, but not using maglev technology.
The Moon
The Flexible Levitation on a Track (FLOAT) project, announced by NASA, plans to build a maglev train on the Moon.
Philippines
Philtram Consortium's Cebu Monorail project will initially be built as a monorail system. In the future, it will be upgraded to a patented maglev technology named Spin-Induced Lenz's Law Magnetic Levitation Train.
Switzerland
SwissRapide: The SwissRapide AG together with the SwissRapide Consortium was planning and developing the first maglev monorail system for intercity traffic between the country's major cities. SwissRapide was to be financed by private investors. In the long-term, the SwissRapide Express was to connect the major cities north of the Alps between Geneva and St. Gallen, including Lucerne and Basel. The first projects were Bern–Zürich, Lausanne–Geneva as well as Zürich–Winterthur. The first line (Lausanne–Geneva or Zürich–Winterthur) could go into service as early as 2020.
Swissmetro: An earlier project, Swissmetro AG envisioned a partially evacuated underground maglev (a vactrain). As with SwissRapide, Swissmetro envisioned connecting the major cities in Switzerland with one another. In 2011, Swissmetro AG was dissolved and the IPRs from the organisation were passed onto the EPFL in Lausanne.
United Kingdom
London – Glasgow: A line was proposed in the United Kingdom from London to Glasgow with several route options through the Midlands, Northwest and Northeast of England. It was reported to be under favourable consideration by the government. The approach was rejected in the Government white paper Delivering a Sustainable Railway published on 24 July 2007. Another high-speed link was planned between Glasgow and Edinburgh but the technology remained unsettled.
United States
Washington, D.C. to New York City: Using Superconducting Maglev (SCMAGLEV) technology developed by the Central Japan Railway Company, the Northeast Maglev would ultimately connect major Northeast metropolitan hubs and airports traveling more than , with a goal of one-hour service between Washington, D.C. and New York City. The Federal Railroad Administration and Maryland Department of Transportation were preparing an Environmental Impact Statement (EIS) to evaluate the potential impacts of constructing and operating the system's first leg between Washington, DC and Baltimore, Maryland, with an intermediate stop at BWI Airport.
Union Pacific freight conveyor: Plans are under way by American railroad Union Pacific to build a container shuttle between the Ports of Los Angeles and Long Beach and UP's intermodal container transfer facility. The system would be based on "passive" technology, especially well-suited to freight transfer as no power is needed on board. The vehicle is a chassis that glides to its destination. The system is being designed by General Atomics.
California-Nevada Interstate Maglev: High-speed maglev lines between major cities of southern California and Las Vegas are under study via the California-Nevada Interstate Maglev Project. This plan was originally proposed as part of an I-5 or I-15 expansion plan, but the federal government ruled that it must be separated from interstate public work projects.
After the decision, private groups from Nevada proposed a line running from Las Vegas to Los Angeles with stops in Primm, Nevada; Baker, California; and other points throughout San Bernardino County into Los Angeles. Politicians expressed concern that a high-speed rail line out of state would carry spending out of state along with travelers.
The Pennsylvania Project: The Pennsylvania High-Speed Maglev Project corridor extends from the Pittsburgh International Airport to Greensburg, with intermediate stops in Downtown Pittsburgh and Monroeville. This initial project was claimed to serve approximately 2.4 million people in the Pittsburgh metropolitan area. The Baltimore proposal competed with the Pittsburgh proposal for a US$90 million federal grant.
San Diego-Imperial County airport: In 2006, San Diego commissioned a study for a maglev line to a proposed airport located in Imperial County. SANDAG claimed that the concept would be an "airports [sic] without terminals", allowing passengers to check in at a terminal in San Diego ("satellite terminals"), take the train to the airport and directly board the airplane. In addition, the train would have the potential to carry freight. Further studies were requested although no funding was agreed.
Orlando International Airport to Orange County Convention Center: In December 2012, the Florida Department of Transportation gave conditional approval to a proposal by American Maglev to build a privately run , 5-station line from Orlando International Airport to Orange County Convention Center. The Department requested a technical assessment and said there would be a request for proposals issued to reveal any competing plans. The route requires the use of a public right of way. If the first phase succeeded American Maglev would propose two further phases (of ) to carry the line to Walt Disney World.
San Juan – Caguas: A maglev project was proposed linking Tren Urbano's Cupey Station in San Juan with two proposed stations in the city of Caguas, south of San Juan. The maglev line would run along Highway PR-52, connecting both cities. According to American Maglev project cost would be approximately US$380 million.
Incidents
Two incidents involved fires. A Japanese test train in Miyazaki, MLU002, was completely consumed by a fire in 1991.
On 11 August 2006, a fire broke out on the commercial Shanghai Transrapid shortly after arriving at the Longyang terminal. People were evacuated without incident before the vehicle was moved about 1 kilometre to keep smoke from filling the station. NAMTI officials toured the SMT maintenance facility in November 2010 and learned that the cause of the fire was "thermal runaway" in a battery tray. As a result, SMT secured a new battery vendor, installed new temperature sensors and insulators and redesigned the trays.
On 22 September 2006, a Transrapid train collided with a maintenance vehicle on a test/publicity run in Lathen (Lower Saxony / north-western Germany). Twenty-three people were killed and ten were injured; these were the first maglev crash fatalities. The accident was caused by human error. Charges were brought against three Transrapid employees after a year-long investigation.
Safety is a greater concern with high-speed public transport due to the potential for high impact force and large number of casualties. In the case of maglev trains as well as conventional high-speed rails, an incident could result from human error, including loss of power, or factors outside human control, such as ground movement caused by an earthquake.
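As an illustration of why speed dominates impact severity, a vehicle's kinetic energy grows with the square of its speed. The following sketch uses assumed round figures (a hypothetical 200-tonne trainset; the masses and speeds are not data for any specific vehicle):

```python
# Illustrative only: the mass and speeds below are assumed round numbers,
# not figures for any real maglev train.

def kinetic_energy_mj(mass_kg: float, speed_kmh: float) -> float:
    """Kinetic energy in megajoules: E = 1/2 * m * v^2."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * mass_kg * v * v / 1e6

mass = 200_000  # assumed 200-tonne trainset
for speed in (100, 200, 400):
    print(f"{speed} km/h -> {kinetic_energy_mj(mass, speed):.0f} MJ")
```

Doubling the speed quadruples the energy that must be dissipated in a collision, which is part of why high-speed incidents can be so severe.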
Biogeochemistry
Biogeochemistry is the scientific discipline that involves the study of the chemical, physical, geological, and biological processes and reactions that govern the composition of the natural environment (including the biosphere, the cryosphere, the hydrosphere, the pedosphere, the atmosphere, and the lithosphere). In particular, biogeochemistry is the study of biogeochemical cycles, the cycles of chemical elements such as carbon and nitrogen, and their interactions with and incorporation into living things transported through Earth-scale biological systems in space and time. The field focuses on chemical cycles which are either driven by or influence biological activity. Particular emphasis is placed on the study of carbon, nitrogen, oxygen, sulfur, iron, and phosphorus cycles. Biogeochemistry is a systems science closely related to systems ecology.
History
Early Greek
Early Greeks established the core idea of biogeochemistry that nature consists of cycles.
18th-19th centuries
Agricultural interest in 18th-century soil chemistry led to a better understanding of nutrients and their connection to biochemical processes. This relationship between the cycles of organic life and their chemical products was further expanded upon by Dumas and Boussingault in an 1844 paper that is considered an important milestone in the development of biogeochemistry. Jean-Baptiste Lamarck first used the term biosphere in 1802, and others continued to develop the concept throughout the 19th century. Early climate research by scientists like Charles Lyell, John Tyndall, and Joseph Fourier began to link glaciation, weathering, and climate.
20th century
The founder of modern biogeochemistry was Vladimir Vernadsky, a Russian and Ukrainian scientist whose 1926 book The Biosphere, in the tradition of Mendeleev, formulated a physics of the Earth as a living whole. Vernadsky distinguished three spheres, where a sphere was a concept similar to the concept of a phase-space. He observed that each sphere had its own laws of evolution, and that the higher spheres modified and dominated the lower:
Abiotic sphere – all the non-living energy and material processes
Biosphere – the life processes that live within the abiotic sphere
Nöesis or noosphere – the sphere of human cognitive process
Human activities (e.g., agriculture and industry) modify the biosphere and abiotic sphere. In the contemporary environment, the amount of influence humans have on the other two spheres is comparable to a geological force (see Anthropocene).
The American limnologist and geochemist G. Evelyn Hutchinson is credited with outlining the broad scope and principles of this new field. More recently, the basic elements of the discipline of biogeochemistry were restated and popularized by the British scientist and writer, James Lovelock, under the label of the Gaia Hypothesis. Lovelock emphasized a concept that life processes regulate the Earth through feedback mechanisms to keep it habitable. The research of Manfred Schidlowski was concerned with the biochemistry of the Early Earth.
Biogeochemical cycles
Biogeochemical cycles are the pathways by which chemical substances cycle (are turned over or moved through) the biotic and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, hydrosphere and lithosphere. There are biogeochemical cycles for chemical elements, such as for calcium, carbon, hydrogen, mercury, nitrogen, oxygen, phosphorus, selenium, iron and sulfur, as well as molecular cycles, such as for water and silica. There are also macroscopic cycles, such as the rock cycle, and human-induced cycles for synthetic compounds such as polychlorinated biphenyls (PCBs). In some cycles there are reservoirs where a substance can remain or be sequestered for a long period of time.
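The reservoir idea can be sketched as a minimal box model: at steady state, the mean residence time of a substance in a reservoir is the reservoir's stock divided by the flux through it. The numbers below are assumed, illustrative round figures, not measured values:

```python
# Minimal one-box reservoir sketch. The stock and flux values are
# hypothetical round numbers chosen only to show the calculation.

def residence_time(stock: float, flux: float) -> float:
    """Mean residence time at steady state: stock / throughput flux."""
    return stock / flux

atmosphere_gtc = 750.0       # assumed reservoir size, gigatonnes of carbon
exchange_gtc_per_yr = 150.0  # assumed gross annual exchange flux

print(residence_time(atmosphere_gtc, exchange_gtc_per_yr), "years")
```

A large stock with a small throughput flux gives a long residence time, which is what "sequestered for a long period" means in box-model terms.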
Research
Biogeochemistry research groups exist in many universities around the world. Since this is a highly interdisciplinary field, these are situated within a wide range of host disciplines including: atmospheric sciences, biology, ecology, geomicrobiology, environmental chemistry, geology, oceanography and soil science. These are often bracketed into larger disciplines such as earth science and environmental science.
Many researchers investigate the biogeochemical cycles of chemical elements such as carbon, oxygen, nitrogen, phosphorus and sulfur, as well as their stable isotopes. The cycles of trace elements, such as the trace metals and the radionuclides, are also studied. This research has obvious applications in the exploration of ore deposits and oil, and in the remediation of environmental pollution.
Some important research fields for biogeochemistry include:
modelling of natural systems
soil and water acidification recovery processes
eutrophication of surface waters
carbon sequestration
environmental remediation
global change
climate change
biogeochemical prospecting for ore deposits
soil chemistry
chemical oceanography
Evolutionary biogeochemistry
Evolutionary biogeochemistry is a branch of modern biogeochemistry that applies the study of biogeochemical cycles to the geologic history of the Earth. This field investigates the origin of biogeochemical cycles and how they have changed throughout the planet's history, specifically in relation to the evolution of life.
Cefazolin
Cefazolin, also known as cefazoline and cephazolin, is a first-generation cephalosporin antibiotic used for the treatment of a number of bacterial infections. Specifically it is used to treat cellulitis, urinary tract infections, pneumonia, endocarditis, joint infection, and biliary tract infections. It is also used to prevent group B streptococcal disease around the time of delivery and before surgery. It is typically given by injection into a muscle or vein.
Common side effects include diarrhea, vomiting, yeast infections, and allergic reactions. Historically, it was thought to be contraindicated in patients with allergies to penicillin, although several recent studies have refuted this and shown it to be safe in almost all patients, including those with known penicillin allergies. It is relatively safe for use during pregnancy and breastfeeding. Cefazolin is in the first-generation cephalosporin class of medication and works by interfering with the bacteria's cell wall.
Cefazolin was patented in 1967 and came into commercial use in 1971. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication.
Medical uses
Cefazolin is used in a variety of infections provided that susceptible organisms are involved. It is indicated for use in the following infections:
Respiratory tract infections
Urinary tract infections
Skin infections
Biliary tract infections
Bone and joint infections
Genital infections
Blood infections (sepsis)
Endocarditis
It can also be used peri-operatively to prevent infections post-surgery, and is often the preferred drug for surgical prophylaxis.
There is no penetration into the central nervous system and therefore cefazolin is not effective in treating meningitis.
Cefazolin has been shown to be effective in treating methicillin-susceptible Staphylococcus aureus (MSSA) but does not work in cases of methicillin-resistant Staphylococcus aureus (MRSA). In many instances of staphylococcal infections, such as bacteremia, cefazolin is an alternative to penicillin in patients who are allergic to penicillin. However, there is still potential for a reaction to occur with cefazolin and other cephalosporins in patients allergic to penicillin. Resistance to cefazolin is seen in several species of bacteria, such as Mycoplasma and Chlamydia, in which case different generations of cephalosporins may be more effective. Cefazolin is not active against Enterococcus, anaerobic bacteria, or atypical bacteria, among others.
Bacterial susceptibility
As a first-generation cephalosporin antibiotic, cefazolin and other first-generation antibiotics are very active against gram-positive bacteria and some gram-negative bacteria. Their broad spectrum of activity can be attributed to their improved stability to many bacterial beta-lactamases compared to penicillins.
Spectrum of activity
Gram-positive aerobes:
Staphylococcus aureus (including beta-lactamase producing strains)
Staphylococcus epidermidis
Streptococcus pyogenes, Streptococcus agalactiae, Streptococcus pneumoniae and other strains of streptococci
Gram-negative aerobes:
Escherichia coli
Proteus mirabilis
Klebsiella pneumoniae
Non-susceptible
The following are not susceptible:
Methicillin-resistant Staphylococcus aureus
Enterococcus
most strains of indole positive Proteus (Proteus vulgaris)
Enterobacter spp.
Morganella morganii
Providencia rettgeri
Serratia spp.
Pseudomonas spp.
Listeria
Special populations
Pregnancy
Cefazolin is pregnancy category B, indicating general safety for use in pregnancy. Caution should be used in breastfeeding as a small amount of cefazolin enters the breast milk. Cefazolin can be used prophylactically against perinatal Group B streptococcal infection (GBS). Although penicillin and ampicillin are the standard of care for GBS prophylaxis, penicillin-allergic women with no history of anaphylaxis can be given cefazolin instead. These patients should be closely monitored as there is a small chance of an allergic reaction due to the similar structure of the antibiotics.
Newborns
There has been no established safety and effectiveness for use in premature infants and neonates.
Elderly
No overall differences in safety or effectiveness were observed in clinical trials comparing elderly and younger subjects; however, the trials could not rule out the possibility that some older individuals may have a higher level of sensitivity.
Additional considerations
People with kidney disease and those on hemodialysis may need the dose adjusted. Cefazolin levels are not significantly affected by liver disease.
As with other antibiotics, cefazolin may interact with other medications being taken; probenecid is one important example of a drug that may interact with cefazolin.
Side effects
Side effects associated with use of cefazolin therapy include:
Common (1–10%): diarrhea, stomach pain or upset stomach, vomiting, and rash.
Uncommon (<1%): dizziness, headache, fatigue, itching, transient hepatitis.
Patients with penicillin allergies could experience a potential reaction to cefazolin and other cephalosporins. As with other antibiotics, patients experiencing watery and/or bloody stools occurring up to three months following therapy should contact their prescriber.
Like those of several other cephalosporins, the chemical structure of cefazolin contains an N-methylthiodiazole (NMTD or 1-MTD) side-chain. As the antibiotic is broken down in the body, it releases free NMTD, which can cause hypoprothrombinemia (likely due to inhibition of the enzyme vitamin K epoxide reductase) and a reaction with ethanol similar to that produced by disulfiram (Antabuse), due to inhibition of aldehyde dehydrogenase. Those with an allergy to penicillin may develop a cross sensitivity to cefazolin.
Mechanism of action
Cefazolin inhibits cell wall biosynthesis by binding penicillin-binding proteins which stops peptidoglycan synthesis. Penicillin-binding proteins are bacterial proteins that help to catalyze the last stages of peptidoglycan synthesis, which is needed to maintain the cell wall. They remove the D-alanine from the precursor of the peptidoglycan. The lack of synthesis causes the bacteria to lyse because they also continually break down their cell walls. Cefazolin is bactericidal, meaning it kills the bacteria rather than inhibiting their growth.
Cost
Cefazolin is relatively inexpensive.
Trade names
It was initially marketed by GlaxoSmithKline under the trade name Nostof.
Other trade names include: Cefacidal, Cefamezin, Cefrina, Elzogram, Faxilen, Gramaxin, Kefol, Kefzol, Kefzolan, Kezolin, Novaporin, Reflin, Zinol, and Zolicef.
Dog crossbreed
Dog crossbreeds (sometimes called designer dogs) are dogs which have been intentionally bred from two or more recognized dog breeds. They are not dogs with no purebred ancestors, but are not otherwise recognised as breeds in their own right, and do not necessarily breed true.
Dog crossbreeds are combinations of lineages of the domestic dog; they are distinguished from canid hybrids, which are interspecific crosses between Canis species (wolves, coyotes, jackals, etc.).
Working crossbreeds
Several types of working dog crossbreeds date from the 14th century or earlier, such as the lurcher or the longdog.
Historically, crosses between dogs of different types were better accepted at a time when modern purebred breeds (based on eugenics principles) did not yet exist. These types of crosses were performed to combine the qualities of two different types in the same dog or to perfect an already fixed type of dog, always for working purposes. An example is the famous case of Lord Orford's Greyhounds, which were improved by adding courage through crossing with Old English Bulldogs, achieving the desired result after six generations. With the success of Lord Orford's dogs, the practice was adopted by other Greyhound breeders and became more common.
Crossbreeding has played a key characteristic in the development of sled dogs with various crossbreeds developing to meet the specific needs of the era and geographical region, including the Mackenzie River husky, in which European breeds were crossed with Native American dogs to produce a powerful and hardy freighting dog in the 19th century, and the Alaskan husky, bred specifically for sled dog racing. In the 1980s, a rise in Nordic-style sled racing in Scandinavia, characterized by shorter distances than typically seen in North American sled racing, led to the development of the eurohound and greyster, crosses utilizing German shorthair pointers with Alaskan huskies and greyhounds, respectively. While the Mackenzie River husky has been largely replaced by mechanized travel, Alaskan huskies continue to be the most commonly used type of dog for competitive sled dog racing today.
Other historical examples are the bull and terrier (Old English Bulldog and terrier cross) and crosses between foxhounds and Old Spanish Pointers that later resulted in the English Pointer.
Designer dogs
The Encyclopædia Britannica traces what was the "designer dog" fad to the late 20th century when breeders began to cross purebred Poodles with other purebred breeds to obtain a dog with the Poodle's hypoallergenic coat, along with various desirable characteristics from other breeds. The resulting puppies are called by a portmanteau word made up of syllables (or sounds) from the breed names of the two purebred parents, such as Schnoodle (Schnauzer and Poodle cross), or Shepsky (German Shepherd Dog/Siberian Husky cross). Other purebred breeds are being crossed to provide designer dogs described with an endless range of created labels, such as the Puggle (Pug and Beagle cross). There are even complex crosses (with multiple breeds in recent ancestry) being labeled in this manner, such as the German Chusky (German Shepherd Dog, Siberian Husky and Chow Chow cross).
Like children in a family, a percentage of designer dogs with the same breed ancestry will look similar to each other, even though crossbreeding does not result in as uniform a phenotype as the breeding of purebreds. Often even pups in the same litter will look quite different.
Another defining characteristic of designer dogs is that they are usually bred as companion dogs and pets. Working and hunting dogs deliberately crossbred for a particular working purpose are not generally given portmanteau names; they are most often referred to by a type name, such as eurohounds (racing sled dogs) or lurchers (hunting dogs). These dogs could be considered only as crossbreeds, not as designer animals, since appearance is not the main reason for them to be bred. An exception to this is the Labradoodle, which although having a portmanteau name, is often used as a Guide or Assistance dog as well as being a popular family dog.
Although designer dogs are often selected by owners for their novelty, reputable breeders sometimes use crossbreeding in an attempt to reduce the incidence of certain hereditary problems found in the purebred dogs, while retaining their more appealing traits. Jon Mooallem, writing in The New York Times, commented, "Given the roughly 350 inherited disorders littering the dog genome, crossing two purebreds and expanding their gene pools can be 'a phenomenally good idea,' according to one canine geneticist—if it is done conscientiously." Crossbreeding has not been well studied in dogs, although it has been for livestock. The heritability of the desired trait being bred for (such as a hypoallergenic coat) needs to be known; "Heritability is the proportion of the measurable difference observed between animals for a given trait that is due to genetics (and can be passed to the next generation)." In addition, the goals of dog crossbreeding may be harder to define than the goals of livestock crossbreeding; good temperament may be harder to define and measure than high calf weight.
Designer dog breeders are often criticized for being more interested in profitable puppy production than in dog health and welfare. Wally Conron, writing in Reader's Digest, comments on the popularity of crosses after his introduction of the Labradoodle: "Were breeders bothering to check their sires and bitches for heredity faults, or were they simply caught up in delivering to hungry customers the next status symbol?"
'Designer dog' puppies sometimes bring higher prices than the purebreds from which they are bred. Fanciers of designer dogs say that all modern dog breeds were created from earlier breeds and types of dogs through the same kind of selective breeding that is used to create designer dogs. Most of the modern breeds have ancestries that include various older dog types and breeds; see individual breed articles for details of the origin of each breed.
Health and genetic defects
Crossbreeding that takes advantage of the increased chance that a recessive detrimental allele will only be inherited from one parent and therefore not expressed in the phenotype of the offspring, is one strategy breeders can use to decrease the incidences of genetic defects. Knowing the disease incidence in the breed, and the genetic history of the individual, is important.
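The recessive-allele argument can be made concrete with a small sketch. Assuming random mating, and using hypothetical allele frequencies (the 0.2 and 0.01 figures below are made up for illustration), an offspring expresses the disorder only when it inherits the recessive allele from both parents:

```python
# Sketch of the recessive-allele argument under a random-mating assumption.
# The allele frequencies are hypothetical, chosen only for illustration.

def affected_fraction(freq_breed_a: float, freq_breed_b: float) -> float:
    """Probability an offspring is homozygous for a recessive allele,
    given the allele frequency in each parent's breed."""
    return freq_breed_a * freq_breed_b

# Both parents from a breed where the allele frequency is 0.2:
within_breed = affected_fraction(0.2, 0.2)
# Cross with a breed that rarely carries the allele (frequency 0.01):
crossbreed = affected_fraction(0.2, 0.01)

print(f"within-breed: {within_breed:.4f}, crossbreed: {crossbreed:.4f}")
```

Because the second breed rarely carries the allele, the cross sharply reduces the chance of an affected pup, which is the point about disease incidence and genetic history mattering.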
Some crossbred dogs, created by breeding two purebred dogs of different breeds, may have the advantage of heterosis, or crossbreed vigor. This advantage can be progressively diluted when two crossbreeds are bred in the attempt to create a breed, narrowing the gene pool. The best way to continue taking advantage of crossbreed vigor is from the breeding of dogs of purebred ancestry, as this vigor is typically seen only in the first generation cross of two purebred animals of separate breeds, thus taking advantage of genetic diversity.
Health of crossbred dogs depends on their being descended from healthy parents. Breeders who select their breeding stock for cost-effectiveness and who skip health testing for the same reason will not produce puppies that are as reliably healthy as those bred by more conscientious breeders. However, studies of longevity in dogs have found some advantage for crossbreeds compared to purebred dogs. In general it is believed that crossbred dogs "have a far lower chance of exhibiting the disorders that are common with the parental breeds. Their genetic health will be substantially higher."
Although crossbreeds are commonly believed to be substantially healthier than pedigree dogs, data from clinical records of over 1,000 veterinary hospitals in the US shows the difference in life expectancy between mixed-breed and pedigree dogs to be minimal. A review of cemetery data in Japan found that the Shiba Inu had a life expectancy greater than crossbreeds; however, crossbreeds still had a higher life expectancy than the average pedigree dog in this study. A Swedish study reviewing over 200,000 dogs registered with a veterinary insurance company in 1995 and 1996 found morbidity to be higher in most pedigrees than in mongrels; however, several pedigree breeds had a lower morbidity. These are, in order of highest to lowest risk: Drever, Norwegian Buhund, Schillerstövare, Jämthund, Gråhund, Siberian Husky, Karelian Bear Dog, Smålandsstövare, Finnish Spitz, and Norbottenspets. Notably, all these breeds are native to the Scandinavian peninsula and are most commonly used as working dogs.
Some health issues that are uncommon in both parent breeds may nevertheless be more common in the crossbreed. Prolapsed nictitating membrane gland (PNMG) is more common in the Puggle and the Jug than in either of their parent breeds, which illustrates the complexity of genetics and provides evidence against the theory of hybrid vigour. Overall, however, designer dog breeds had lower rates of PNMG.
Registration and recognition
Crossbreed dogs are not recognized by traditional breed registries, even if both parents are registered purebreds. Breed associations such as the American Kennel Club, the United Kennel Club and the Canadian Kennel Club do not recognize designer crosses as dog breeds.
If crossbred dogs are bred together for some period of time, and their breeding is well documented, they may eventually be considered a new breed of dog by major kennel clubs (an example of a recent crossbreed becoming a breed recognised by all major kennel clubs is the Cesky Terrier). New breeds of dogs must have a breed club that will document the ancestry of any individual member of that breed from the original founding dogs of the breed; when the kennel club that the breed club wishes to join is satisfied that the dogs are pedigreed, it will accept and register the dogs of that breed. Each kennel club has individual rules about how to document a new breed. Some minor registries and internet registry businesses will register dogs as any breed the owner chooses with minimal or no documentation; some even allow the breeder or owner to make up a designer "breed name" for their pet.
Purebred
Purebreds are cultivars of an animal species achieved through the process of selective breeding. When the lineage of a purebred animal is recorded, that animal is said to be pedigreed. Purebreds breed true-to-type, which means the progeny of like-to-like purebred parents will carry the same phenotype, or observable characteristics, as the parents. A group of like purebreds is called a pure-breeding line or strain.
True breeding
In the world of selective animal breeding, to "breed true" means that specimens of an animal breed will breed true-to-type when mated like-to-like; that is, that the progeny of any two individuals of the same breed will show fairly consistent, replicable and predictable characteristics, or traits with sufficiently high heritability. A puppy from two purebred dogs of the same breed, for example, will exhibit the traits of its parents, and not the traits of all breeds in the subject breed's ancestry.
Breeding from too small a gene pool, especially direct inbreeding, can lead to the passing on of undesirable characteristics or even a collapse of a breed population due to inbreeding depression. Therefore, there is a question, and often heated controversy, as to when or if a breed may need to allow "outside" stock in for the purpose of improving the overall health and vigor of the breed.
Because pure-breeding creates a limited gene pool, purebred animal breeds are also susceptible to a wide range of congenital health problems. This problem is especially prevalent in competitive dog breeding and dog show circles due to the singular emphasis on aesthetics rather than health or function. Such problems also occur within certain segments of the horse industry for similar reasons. The problem is further compounded when breeders practice inbreeding. The opposite effect to that of the restricted gene pool caused by pure-breeding is known as hybrid vigor, which generally results in healthier animals.
Pedigrees
A pedigreed animal is one that has its ancestry recorded. Often this is tracked by a major registry. The number of generations required varies from breed to breed, but all pedigreed animals have papers from the registering body that attest to their ancestry.
The word "pedigree" appeared in the English language in 1410 as "pee de Grewe", "pedegrewe" or "pedegru", each of those words being borrowed from the Middle French "pié de grue", meaning "crane's foot". This comes from a visual analogy between the trace of the bird's foot and the three lines used in the English official registers to show the ramifications of a genealogical tree.
Sometimes the word purebred is used synonymously with pedigreed, but purebred refers to the animal having a known ancestry, and pedigree refers to the written record of breeding. Not all purebred animals have their lineage in written form. For example, until the 20th century, the Bedouin people of the Arabian Peninsula only recorded the ancestry of their Arabian horses via an oral tradition, supported by the swearing of religiously based oaths as to the asil or "pure" breeding of the animal. Conversely, some animals may have a recorded pedigree or even a registry, but not be considered "purebred". Today the modern Anglo-Arabian horse, a cross of Thoroughbred and Arabian bloodlines, is considered such a case.
By type
Dogs
A purebred dog is a dog of a modern breed of dog, with written documentation showing the individual purebred dog's descent from its breed's foundation stock. In dogs, the term breed is used two ways: loosely, to refer to dog types or landraces of dog (also called natural breeds or ancient breeds); or more precisely, to refer to modern breeds of dog, which are documented so as to be known to be descended from specific ancestors, that closely resemble others of their breed in appearance, movement, way of working and other characteristics; and that reproduce with offspring closely resembling each other and their parents. Purebred dogs are breeds in the second sense.
New breeds of dog are constantly being created, and there are many websites for new breed associations and breed clubs offering legitimate registrations for new or rare breeds. When dogs of a new breed are "visibly similar in most characteristics" and have reliable documented descent from a "known and designated foundation stock", then they can then be considered members of a breed, and, if an individual dog is documented and registered, it can be called purebred.
Cats
A cat whose ancestry is formally registered is called a pedigreed or purebred cat. Technically, a purebred cat is one whose ancestry contains only individuals of the same breed. A pedigreed cat is one whose ancestry is recorded with a cat registry, but may have ancestors of different breeds. Landraces are not cat breeds, but a selective group of representative cats can be used as foundation stock to create a new cat breed (examples of breeds created in this way are the Maine Coon, European Shorthair and Siberian).
Because of common crossbreeding in populated areas, most cats are simply identified as belonging to the unregistered non-pedigree cats of mixed or unknown ancestry, referred to as domestic long-haired and domestic short-haired cat, depending on their fur length. Other commonly used terms are random-bred cat, domestic cat, house(hold) cat or moggie/moggy (UK English). Out of the hundreds of millions of cats worldwide, almost none have any purebred ancestors, nor belong to a specific breed, because purebred cats are a human invention of the last 150 years and selectively bred from foundation stock by breeders in closed off lineages.
Approximately 3–4% of the cats in the US are purchased from breeders. Not all breeders sell registered pedigree cats. In France, approximately 4% of cats are pedigreed. Worldwide the number of pedigreed cats is somewhat lower, and is estimated at approximately 1–2%.
By definition all cats belonging to a specific breed are pedigreed cats with a known and formally registered ancestry with one of the cat registries, also known as the cat's "paperwork" or pedigree. The list of cat breeds is quite large: most cat registries actually recognize between 30 and 75 breeds of cats, and several more are in development, with one or more new breeds being recognized each year on average, each with distinct features (phenotype) and lineage. Nowadays, there exist over 100 cat breeds and varieties recognized by at least one of the official cat registries. The purpose of the registry of cat breeds is to develop and maintain a healthy breed by controlling inbreeding and the spread of hereditary diseases, and regulating the well-being of the cats. Owners and breeders compete in cat shows to see whose animal bears the closest resemblance (best conformance) to an idealized definition, based on breed type and the breed standard for each breed.
Modern breeders have also created cat breeds that are feline hybrids between a wild cat species and the domestic cat (Felis catus). A famous example of such a hybrid cat breed is the Savannah cat (Felis catus × Leptailurus serval), which is produced by crossing wild servals with domestic cats.
Some natural, ancient breeds of cat that have a distinct phenotype were formerly considered or speculated to be subspecies of wild cats or domestic cats (Felis catus), or hybrids between them. Later genetic research shows that only one wild cat species was domesticated: the North African and southwest Asian wild cat (Felis silvestris lybica). All domestic (non-hybrid) cats and cat breeds fall under the domestic cat (Felis catus), and are no longer considered separate (sub)species. The domestication of Felis silvestris lybica started around 9,000 years ago in the Near East and Egypt region, while the selective breeding of purebred/pedigreed cat breeds only started 150 years ago.
Horses
Written and oral histories of various animals or pedigrees of certain types of horse have been kept throughout history, though breed registry stud books trace back to about the 13th century, at least in Europe, when pedigrees were tracked in writing, and the practice of declaring a type of horse to be a breed or a purebred became more widespread.
Certain horse breeds, such as the Andalusian horse and the Arabian horse, are claimed by aficionados of the respective breeds to be ancient, near-pure descendants from an ancient wild prototype, though mapping of the horse genome as well as the mtDNA and y-DNA of various breeds has largely disproved such claims.
Livestock
Most domesticated farm animals among others can also have true-breeding breeds and breed registries, particularly cattle, water buffaloes, sheep, goats, donkeys, guinea pigs, chickens, fancy pigeons, domestic ducks, rabbits, and pigs. While animals bred strictly for market sale are not always purebreds, or if purebred may not be registered, most livestock producers value the presence of purebred genetic stock for the consistency of traits such animals provide. It is common for a farm's male breeding stock in particular to be of purebred, pedigreed lines.
In cattle, some breeders associations make a difference between "purebred" and "full blood". Full blood cattle are fully pedigreed animals, where every ancestor is registered in the herdbook and shows the typical characteristics of the breed. Purebred are those animals that have been bred-up to purebred status as a result of using full blood animals to cross with an animal of another breed.
Artificial breeding via artificial insemination or embryo transfer is often used in sheep and cattle breeding to quickly expand, or improve purebred herds. Embryo transfer techniques allow top quality female livestock to have a greater influence on the genetic advancement of a herd or flock in much the same way that artificial insemination has allowed greater use of superior sires.
Eyepiece
An eyepiece, or ocular lens, is a type of lens that is attached to a variety of optical devices such as telescopes and microscopes. It is so named because it is usually the lens closest to the eye when someone looks through an optical device to observe an object or sample. The objective lens or mirror collects light from an object or sample and brings it to focus, creating an image of the object. The eyepiece is placed near the focal point of the objective to magnify this image for the eye. (The eyepiece and the eye together make an image of the image created by the objective, on the retina of the eye.) The amount of magnification depends on the focal length of the eyepiece.
An eyepiece consists of several "lens elements" in a housing, with a "barrel" on one end. The barrel is shaped to fit in a special opening of the instrument to which it is attached. The image can be focused by moving the eyepiece nearer and further from the objective. Most instruments have a focusing mechanism to allow movement of the shaft in which the eyepiece is mounted, without needing to manipulate the eyepiece directly.
The eyepieces of binoculars are usually permanently mounted in the binoculars, causing them to have a pre-determined magnification and field of view. With telescopes and microscopes, however, eyepieces are usually interchangeable. By switching the eyepiece, the user can adjust what is viewed. For instance, eyepieces will often be interchanged to increase or decrease the magnification of a telescope. Eyepieces also offer varying fields of view, and differing degrees of eye relief for the person who looks through them.
Properties
Several properties of an eyepiece are likely to be of interest to a user of an optical instrument, when comparing eyepieces and deciding which eyepiece suits their needs.
Design distance to entrance pupil
Eyepieces are optical systems where the entrance pupil is invariably located outside of the system. They must be designed for optimal performance for a specific distance to this entrance pupil (i.e. with minimum aberrations for this distance). In a refracting astronomical telescope the entrance pupil is identical with the objective. This may be several feet distant from the eyepiece; whereas with a microscope eyepiece the entrance pupil is close to the back focal plane of the objective, mere inches from the eyepiece. Microscope eyepieces may be corrected differently from telescope eyepieces; however, most are also suitable for telescope use.
Elements and groups
Elements are the individual lenses, which may come as simple lenses or "singlets" and cemented doublets or (rarely) triplets. When lenses are cemented together in pairs or triples, the combined elements are called groups (of lenses).
The first eyepieces had only a single lens element, which delivered highly distorted images. Two and three-element designs were invented soon after, and quickly became standard due to the improved image quality. Today, engineers assisted by computer-aided drafting software have designed eyepieces with seven or eight elements that deliver exceptionally large, sharp views.
Internal reflection and scatter
Internal reflections, sometimes called "scatter", cause the light passing through an eyepiece to disperse and reduce the contrast of the image projected by the eyepiece. When the effect is particularly bad, "ghost images" are seen, called "ghosting". For many years, simple eyepiece designs with a minimum number of internal air-to-glass surfaces were preferred to avoid this problem.
One solution to scatter is to use thin film coatings over the surface of the element. These thin coatings are only one or two wavelengths deep, and work to reduce reflections and scattering by changing the refraction of the light passing through the element. Some coatings may also absorb light that is not being passed through the lens, in a process called total internal reflection, which occurs when the light incident on the film arrives at a shallow angle.
Chromatic aberration
Lateral or transverse chromatic aberration is caused because the refraction at glass surfaces differs for light of different wavelengths. Blue light, seen through an eyepiece element, will not come to focus at the same point along the optical axis as red light. The effect can create a ring of false colour around point sources of light and results in a general blurriness to the image.
One solution is to reduce the aberration by using multiple elements of different types of glass. Achromats are lens groups that bring two different wavelengths of light to the same focus and exhibit greatly reduced false colour. Low dispersion glass may also be used to reduce chromatic aberration.
Longitudinal chromatic aberration is a pronounced effect of optical telescope objectives, because the focal lengths are so long. Microscopes, whose focal lengths are generally shorter, do not tend to suffer from this effect.
Focal length
The focal length of an eyepiece is the distance from the principal plane of the eyepiece to where parallel rays of light converge to a single point. When in use, the focal length of an eyepiece, combined with the focal length of the telescope or microscope objective to which it is attached, determines the magnification. It is usually expressed in millimetres when referring to the eyepiece alone. When interchanging a set of eyepieces on a single instrument, however, some users prefer to identify each eyepiece by the magnification produced.
For a telescope, the approximate angular magnification M produced by the combination of a particular eyepiece and objective can be calculated with the following formula:

M = f_O / f_E

where:
f_O is the focal length of the objective,
f_E is the focal length of the eyepiece.
Magnification increases, therefore, when the focal length of the eyepiece is shorter or the focal length of the objective is longer. For example, a 25 mm eyepiece in a telescope with a 1200 mm focal length would magnify objects 48 times. A 4 mm eyepiece in the same telescope would magnify 300 times.
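The relation above can be sketched in a few lines of Python, using the worked figures from the text (the function name is illustrative, not a standard API):

```python
def telescope_magnification(objective_fl_mm: float, eyepiece_fl_mm: float) -> float:
    """Angular magnification: objective focal length / eyepiece focal length."""
    return objective_fl_mm / eyepiece_fl_mm

# Worked examples from the text: a telescope with a 1200 mm focal length.
print(telescope_magnification(1200, 25))  # 48.0
print(telescope_magnification(1200, 4))   # 300.0
```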
Amateur astronomers tend to refer to telescope eyepieces by their focal length in millimeters. These typically range from about 3 mm to 50 mm. Some astronomers, however, prefer to specify the resulting magnification power rather than the focal length. It is often more convenient to express magnification in observation reports, as it gives a more immediate impression of what view the observer actually saw. Due to its dependence on properties of the particular telescope in use, however, magnification power alone is meaningless for describing a telescope eyepiece.
For a compound microscope the corresponding formula is

M = (D / f_E) × (L / f_O)

where:
D is the distance of closest distinct vision (usually 250 mm),
L is the distance between the back focal plane of the objective and the back focal plane of the eyepiece (loosely called the "tube length"), typically 160 mm for a modern instrument,
f_O is the objective focal length and f_E is the eyepiece focal length.
By convention, microscope eyepieces are usually specified by power instead of focal length. Microscope eyepiece power P_E and objective power P_O are defined by

P_E = D / f_E,    P_O = L / f_O

thus, from the expression given earlier, the angular magnification of a compound microscope is

M = P_E × P_O
The total angular magnification of a microscope image is then simply calculated by multiplying the eyepiece power by the objective power. For example, a 10× eyepiece with a 40× objective will magnify the image 400 times.
This definition of lens power relies upon an arbitrary decision to split the angular magnification of the instrument into separate factors for the eyepiece and the objective. Historically, Abbe described microscope eyepieces differently, in terms of angular magnification of the eyepiece and 'initial magnification' of the objective. While convenient for the optical designer, this turned out to be less convenient from the viewpoint of practical microscopy and was thus subsequently abandoned.
The generally accepted visual distance of closest focus is 250 mm, and eyepiece power is normally specified assuming this value. Common eyepiece powers are 8×, 10×, 15×, and 20×. The focal length of the eyepiece (in mm) can thus be determined if required by dividing 250 mm by the eyepiece power.
Modern instruments often use objectives optically corrected for an infinite tube length rather than 160 mm, and these require an auxiliary correction lens in the tube.
Location of focal plane
In some eyepiece types, such as Ramsden eyepieces (described in more detail below), the eyepiece behaves as a magnifier, and its focal plane is located outside of the eyepiece in front of the field lens. This plane is therefore accessible as a location for a graticule or micrometer crosswires. In the Huygenian eyepiece, the focal plane is located between the eye and field lenses, inside the eyepiece, and is hence not accessible.
Field of view
The field of view, often abbreviated FOV, describes the area of a target (measured as an angle from the location of viewing) that can be seen when looking through an eyepiece. The field of view seen through an eyepiece varies, depending on the magnification achieved when connected to a particular telescope or microscope, and also on properties of the eyepiece itself. Eyepieces are differentiated by their field stop, which is the narrowest aperture that light entering the eyepiece must pass through to reach the field lens of the eyepiece.
Due to the effects of these variables, the term "field of view" nearly always refers to one of two meanings:
True or Telescope's field of view For a telescope or binocular, the actual angular size of the span of sky that can be seen through a particular eyepiece, used with a particular telescope, producing a specific magnification. It ranges typically between 0.1–2 degrees. For a microscope, the actual width of the visible sample on the slide or sample tray, usually given in millimeters, but sometimes given as angular measure, like a telescope. For binoculars it is expressed as the actual field width in feet or in meters at some standard distance (typically either 100 feet or 30 meters, which are very nearly the same: 30 m is only about 2% smaller than 100 feet).
Apparent or Eye's field of view For telescopes, microscopes, or binoculars, the apparent field of view is a measure of the angular size of the image seen by the eye, through the eyepiece. In other words, it is how large the image appears (as distinct from the magnification). Unless there is vignetting by the telescope's or microscope's body tube, this is constant for any given eyepiece with a fixed focal length, and may be used to calculate what the true field of view will be when the eyepiece is used with a given telescope or microscope. For modern eyepieces, the measurement ranges from 30–110 degrees, with all current good eyepieces being at least 50°, except for a few special-purpose eyepieces, such as some equipped with reticles.
It is common for users of an eyepiece to want to calculate the actual field of view, because it indicates how much of the sky will be visible when the eyepiece is used with their telescope. The most convenient method of calculating the actual field of view depends on whether the apparent field of view is known.
If the apparent field of view is known, the actual field of view can be calculated from the following approximate formula:

FOV_T = FOV_A / M

where:
FOV_T is the true field of view (on the sky), calculated in the same unit of angular measurement as FOV_A;
FOV_A is the apparent field of view (in the eye);
M is the magnification.
The formula is accurate to 4% or better up to 40° apparent field of view, and has a 10% error for 60°.
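A sketch of this approximation in Python, pairing the 48x example telescope from earlier with an assumed 50° apparent-field eyepiece:

```python
def true_fov(apparent_fov_deg: float, magnification: float) -> float:
    """Approximate true field of view on the sky: apparent FOV / magnification."""
    return apparent_fov_deg / magnification

# A 50 degree apparent-field eyepiece at 48x magnification:
print(round(true_fov(50, 48), 2))  # 1.04 (degrees)
```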
Since M = f_T / f_E, where:
f_T is the focal length of the telescope;
f_E is the focal length of the eyepiece, expressed in the same units of measurement as f_T;
the true field of view can also be written, without computing the magnification separately, as:

FOV_T = FOV_A × f_E / f_T
The focal length of the telescope objective, f_T, is the diameter of the objective times the focal ratio. It represents the distance at which the mirror or objective lens will cause light from a star to converge onto a single point (aberrations excepted).
If the apparent field of view is unknown, the actual field of view can be approximately found using:

FOV_T ≈ 57.3 × d / f_T

where:
FOV_T is the actual field of view, calculated in degrees;
d is the diameter of the eyepiece field stop in mm;
f_T is the focal length of the telescope, in mm.
The second formula is actually more accurate, but the field stop size is not usually specified by most manufacturers. The first formula will not be accurate if the field is not flat, or if the apparent field of view is higher than 60°, which is common for most ultra-wide eyepiece designs.
The above formulas are approximations. The ISO 14132-1:2002 standard gives the exact calculation for the apparent field of view, from the true field of view, as:

FOV_A = 2 × arctan(M × tan(FOV_T / 2))
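A short numerical check of the simple-ratio approximation against the exact ISO 14132-1:2002 relation, FOV_A = 2·arctan(M·tan(FOV_T/2)), here inverted to recover the true field; the roughly 4% error at a 40° apparent field and roughly 10% error at 60° quoted above fall out directly:

```python
import math

def true_fov_approx(apparent_deg: float, mag: float) -> float:
    """Simple-ratio approximation: true = apparent / magnification."""
    return apparent_deg / mag

def true_fov_exact(apparent_deg: float, mag: float) -> float:
    """Invert the ISO relation apparent = 2*atan(mag * tan(true/2))."""
    half = math.radians(apparent_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half) / mag))

for apparent in (40, 60):
    a = true_fov_approx(apparent, 50)
    e = true_fov_exact(apparent, 50)
    print(f"{apparent} deg apparent: approx {a:.3f}, exact {e:.3f}, error {(a - e) / e:+.1%}")
```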
If a diagonal or Barlow lens is used before the eyepiece, the eyepiece's field of view may be slightly restricted. This occurs when the preceding lens has a narrower field stop than the eyepiece's, causing the obstruction in the front to act as a smaller field stop in front of the eyepiece. The exact relationship is given by

FOV_A = 2 × arctan(d / (2 × f_E))

where d is the diameter of the limiting field stop and f_E is the focal length of the eyepiece.
An occasionally used approximation is

FOV_A ≈ 57.3 × d / f_E
This formula also indicates that, for an eyepiece design with a given apparent field of view, the barrel diameter will determine the maximum focal length possible for that eyepiece, as no field stop can be larger than the barrel itself. For example, a Plössl with 45° apparent field of view in a 1.25 inch barrel would yield a maximum focal length of 35 mm.
Anything longer requires a larger barrel, or the view is restricted by the edge of the barrel, effectively making the field of view less than 45°.
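The barrel-limit arithmetic can be sketched as follows, assuming the FOV_A ≈ 57.3·d/f_E approximation and a usable field stop of about 27.5 mm inside a 1.25 inch (31.75 mm) barrel (the wall-thickness allowance is an assumption, not a figure from the text):

```python
def max_focal_length_mm(field_stop_mm: float, apparent_fov_deg: float) -> float:
    """Solve FOV_A ~= 57.3 * d / f_E for the eyepiece focal length f_E."""
    return 57.3 * field_stop_mm / apparent_fov_deg

# A 45 degree Plossl limited by a ~27.5 mm field stop in a 1.25" barrel:
print(round(max_focal_length_mm(27.5, 45)))  # 35 (mm), matching the text's figure
```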
Barrel diameter
Eyepieces for telescopes and microscopes are usually interchanged to increase or decrease the magnification, and to enable the user to select a type with certain performance characteristics. To allow this, eyepieces come in standardized "Barrel diameters".
Telescope eyepieces
There are six standard barrel diameters for telescopes. The barrel sizes (usually expressed in inches) are:
0.965 inch (24.5 mm) – This is the smallest standard barrel diameter and is usually found in retail toy store and shopping mall telescopes. Many of the eyepieces that come with such telescopes are plastic, and some even have plastic lenses. High-end telescope eyepieces with this barrel size are no longer manufactured, but Kellner types can still be purchased.
1.25 inch (31.75 mm) – This is the most popular telescope eyepiece barrel diameter. The practical upper limit on focal lengths for eyepieces with 1.25″ barrels is about 32 mm. With longer focal lengths, the edges of the eyepiece barrel intrude into the view, limiting its size. With focal lengths longer than 32 mm, the available field of view falls below 50°, which most amateurs consider to be the minimum acceptable width. These barrel sizes are threaded for 30 mm filters.
2 inch (50.8 mm) – The larger barrel size in 2″ eyepieces helps alleviate the limit on focal lengths; it is the largest size commonly available. The upper limit of focal length with 2″ eyepieces is about 55 mm. The trade-off is that these eyepieces are usually more expensive, will not fit in some telescopes, and may be heavy enough to tip the telescope. These barrel sizes are threaded for 48 mm filters (or rarely 49 mm).
2.7 inch (68.58 mm) – 2.7″ eyepieces are only made by a few manufacturers. They allow for slightly larger fields of view. Many high-end focusers now accept these eyepieces.
3 inch (76.2 mm) – The even larger barrel size in 3″ eyepieces allows for extreme focal lengths and over 120° field of view eyepieces. The disadvantages are that these eyepieces are somewhat rare, extremely expensive, up to 5 lbs in weight, and that only a few telescopes have focusers large enough to accept them. Their huge weight causes balancing issues in Schmidt-Cassegrains under 10 inches, refractors under 5 inches, and reflectors under 16 inches. Also, due to their large field stops, without large-diameter secondary mirrors, most reflectors and Schmidt-Cassegrains will have severe vignetting with these eyepieces.
4 inch (102 mm) – Eyepieces this size are rare, and only commonly used for long refracting telescopes in older observatories. Very few manufacturers make them, and with the current popularity of short focal length / smaller focal ratio telescopes among amateurs, the demand for this size is low. They are sometimes improvised from re‑adapted lenses scavenged out of old cinema projectors.
Microscope eyepieces
Eyepieces for microscopes have a variety of barrel diameters, usually given in millimeters, such as 23.2 mm and 30 mm.
Eye relief
The eye needs to be held at a certain distance behind the eye lens of an eyepiece to see images properly through it. This distance is called the eye relief. A larger eye relief means that the optimum position is farther from the eyepiece, making it easier to view an image. However, if the eye relief is too large it can be uncomfortable to hold the eye in the correct position for an extended period of time, for which reason some eyepieces with long eye relief have cups behind the eye lens to aid the observer in maintaining the correct observing position. The eye pupil should coincide with the exit pupil, the image of the entrance pupil, which in the case of an astronomical telescope corresponds to the object glass.
Eye relief typically ranges from about 2 mm to 20 mm, depending on the construction of the eyepiece. Long focal-length eyepieces usually have ample eye relief, but short focal-length eyepieces are more problematic. Until recently, and still quite commonly, eyepieces of a short-focal length have had a short eye relief. Good design guidelines suggest a minimum of 5–6 mm to accommodate the eyelashes of the observer to avoid discomfort. Modern designs with many lens elements, however, can correct for this, and viewing at high power becomes more comfortable. This is especially the case for spectacle wearers, who may need up to 20 mm of eye relief to accommodate their glasses.
Designs
Technology has developed over time and there are a variety of eyepiece designs for use with telescopes, microscopes, gun-sights, and other devices. Some of these designs are described in more detail below.
Negative lens or "Galilean"
The simple negative lens placed before the focus of the objective has the advantage of presenting an erect image but with limited field of view better suited to low magnification. It is suspected this type of lens was used in some of the first refracting telescopes that appeared in the Netherlands in about 1608. It was also used in Galileo Galilei's 1609 telescope design which gave this type of eyepiece arrangement the name "Galilean". This type of eyepiece is still used in very cheap telescopes, binoculars and in opera glasses.
Convex lens
A simple convex lens placed after the focus of the objective lens presents the viewer with a magnified inverted image. This configuration may have been used in the first refracting telescopes from the Netherlands and was proposed as a way to have a much wider field of view and higher magnification in telescopes in Johannes Kepler's 1611 book Dioptrice. Since the lens is placed after the focal plane of the objective it also allowed for use of a micrometer at the focal plane (used for determining the angular size and/or distance between objects observed).
Huygens
Huygens eyepieces consist of two plano-convex lenses with the plane sides towards the eye separated by an air gap. The lenses are called the eye lens and the field lens. The focal plane is located between the two lenses. It was invented by Christiaan Huygens in the late 1660s and was the first compound (multi-lens) eyepiece. Huygens discovered that two air spaced lenses can be used to make an eyepiece with zero transverse chromatic aberration. If the lenses are made of glass of the same Abbe number, to be used with a relaxed eye and a telescope with an infinitely distant objective then the separation is given by:
d = (f_1 + f_2) / 2

where f_1 and f_2 are the focal lengths of the component lenses.
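Huygens's separation rule, separation = (f_1 + f_2) / 2 for two air-spaced lenses of the same glass, can be sketched in one line of Python (the lens values are arbitrary examples):

```python
def huygens_separation(f1_mm: float, f2_mm: float) -> float:
    """Lens separation giving zero transverse chromatic aberration for
    two air-spaced lenses of the same glass: (f1 + f2) / 2."""
    return (f1_mm + f2_mm) / 2

print(huygens_separation(30, 10))  # 20.0 (mm)
```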
These eyepieces work well with the very long focal length telescopes.
This optical design is now considered obsolete since with today's shorter focal length telescopes the eyepiece suffers from short eye relief, high image distortion, axial chromatic aberration, and a very narrow apparent field of view. Since these eyepieces are cheap to make they can often be found on inexpensive telescopes and microscopes.
Because Huygens eyepieces do not contain cement to hold the lens elements, telescope users sometimes use these eyepieces in the role of "solar projection", i.e. projecting an image of the Sun onto a screen for prolonged periods of time. Cemented eyepieces are traditionally regarded as potentially vulnerable to heat damage by the intense concentrations of light involved.
Ramsden
The Ramsden eyepiece comprises two plano-convex lenses of the same glass and similar focal lengths, placed less than one eye-lens focal length apart, a design created by astronomical and scientific instrument maker Jesse Ramsden in 1782. The lens separation varies between different designs, but is typically somewhere between and of the focal length of the eye lens. The choice is a trade-off: low values reduce residual transverse chromatic aberration, while high values run the risk of the field lens touching the focal plane when the eyepiece is used by an observer who works with a close virtual image, such as a myopic observer or a young person whose accommodation can cope with a close virtual image (a serious problem when used with a micrometer, as it can result in damage to the instrument).
A separation of exactly 1 focal length is also inadvisable since it renders the dust on the field lens disturbingly in focus. The two curved surfaces face inwards. The focal plane is thus located outside of the eyepiece and is hence accessible as a location where a graticule, or micrometer crosshairs may be placed. Because a separation of exactly one focal length would be required to correct transverse chromatic aberration, it is not possible to correct the Ramsden design completely for transverse chromatic aberration. The design is slightly better than Huygens but still not up to today's standards.
It remains highly suitable for use with instruments operating using near-monochromatic light sources e.g. polarimeters.
Kellner or "Achromat"
In a Kellner eyepiece an achromatic doublet is used in place of the simple plano-convex eye lens in the Ramsden design to correct the residual transverse chromatic aberration. Carl Kellner designed this first modern achromatic eyepiece in 1849, also called an "achromatized Ramsden". Kellner eyepieces are a 3-lens design. They are inexpensive, have fairly good image quality from low to medium power, and are far superior to the Huygenian or Ramsden designs. The eye relief is better than the Huygenian and worse than the Ramsden eyepieces. The biggest problem of Kellner eyepieces was internal reflections. Today's anti-reflection coatings make these usable, economical choices for small to medium aperture telescopes with focal ratio f/6 or longer. The typical apparent field of view is 40–50°.
Plössl or "Symmetrical"
The Plössl is an eyepiece usually consisting of two sets of doublets, designed by Georg Plössl in 1860. Since the two doublets can be identical this design is sometimes called a symmetrical eyepiece. The compound Plössl lens provides a large 50° or more apparent field of view, along with the proportionally large true FOV. This makes this eyepiece ideal for a variety of observational purposes including deep-sky and planetary viewing. The chief disadvantage of the Plössl optical design is short eye relief compared to an orthoscopic, since the Plössl eye relief is restricted to about 70–80% of focal length. The short eye relief is more critical in short focal lengths below about 10 mm, when viewing can become uncomfortable – especially for people wearing glasses.
The Plössl eyepiece was an obscure design until the 1980s when astronomical equipment manufacturers started selling redesigned versions of it. Today it is a very popular design on the amateur astronomical market, where the name Plössl covers a range of eyepieces with at least four optical elements, sometimes overlapping with the Erfle design.
This eyepiece is one of the more expensive to manufacture because of the quality of glass, and the need for well matched convex and concave lenses to prevent internal reflections. Due to this fact, the quality of different Plössl eyepieces varies. There are notable differences between cheap Plössls with simplest anti-reflection coatings and well made ones.
Orthoscopic or "Abbe"
The 4-element orthoscopic eyepiece consists of a plano-convex singlet eye lens and a cemented convex-convex achromatic triplet field lens. This gives the eyepiece nearly perfect image quality and good eye relief, but a narrow apparent field of view of about 40°–45°. It was invented by Ernst Abbe in 1880. It is called "orthoscopic" or "orthographic" because of its low degree of distortion and is also sometimes called an "ortho" or "Abbe".
Until the advent of multicoatings and the popularity of the Plössl, orthoscopics were the most popular design for telescope eyepieces. Even today these eyepieces are considered good eyepieces for planetary and lunar viewing. They are preferred for reticle eyepieces, since they are one of the wide-field, long eye-relief designs with an external focal plane; slowly being supplanted by the König. Due to their low degree of distortion and the corresponding globe effect, they are less suitable for applications which require an extensive panning of the instrument.
Monocentric
A Monocentric is an achromatic triplet lens with two pieces of crown glass cemented on both sides of a flint glass element. The elements are thick, strongly curved, and their surfaces have a common center giving it the name "monocentric". It was invented by H.A. Steinheil around 1883. This design, like the solid eyepiece designs of Tolles, Hastings, and Taylor, is free from ghost reflections and gives a bright contrasty image, a desirable feature when it was invented (before anti-reflective coatings). It has a narrow apparent field of view around 25° but was favored by planetary observers.
Erfle
An Erfle is a 5 element eyepiece consisting of 2 achromatic doublets with an extra simple lens between them. They were invented by Heinrich Erfle during World War I for military use. The design is an elementary extension of 4 element eyepieces such as Plössls, enhanced for wider fields.
Erfle eyepieces are designed to have wide field of view (about 60°), but are unusable at high powers because they suffer from astigmatism and ghost images.
However, with lens coatings at low powers (focal lengths of 20~30 mm and up) they are acceptable, and at 40 mm they can be excellent. Erfles are very popular for wide-field views, because they have large eye lenses, and can be very comfortable to use because of their good eye relief in longer focal lengths.
König
The König eyepiece has a concave-convex positive doublet and a plano-convex singlet. The strongly convex surfaces of the doublet and singlet face and (nearly) touch each other. The doublet has its concave surface facing the light source and the singlet has its almost flat (slightly convex) surface facing the eye. It was designed in 1915 by German optician Albert König (1871−1946) and is effectively a simplified Abbe. The design allows for high magnification with remarkably high eye relief – the longest eye relief proportional to focal length of any design before the Nagler, in 1979. The field of view of about 55° is slightly superior to the Plössl, with the further advantages of better eye relief and requiring one less lens element.
Modern improvements typically have fields of view of 60°–70°. König design revisions use exotic glass and/or add more lens groups; the most typical adaptation is to add a simple positive, concave-convex lens before the doublet, with the concave face towards the light source and the convex surface facing the doublet.
RKE
An RKE eyepiece has an achromatic field lens and double convex eye lens, a reversed adaptation of the Kellner eyepiece, with its lens layout similar to the König. It was designed by Dr. David Rank for the Edmund Scientific Corporation, who marketed it throughout the late 1960s and early 1970s. This design provides slightly wider field of view than classic Kellner design and makes its design similar to a widely spaced version of the König.
According to Edmund Scientific Corporation, RKE stands for "Rank Kellner Eyepiece". In an amendment to their trademark application on 16 January 1979 it was given as "Rank-Kaspereit-Erfle", the three designs from which the eyepiece was derived. Edmund Astronomy News (March 1978) called the eyepiece the "Rank-Kaspereit-Erfle" (RKE) a "redesign[ed] ... type II Kellner". However, the RKE design does not resemble a Kellner and is closer to a modified König. There is some speculation that at some point the "K" was mistakenly interpreted as referring to the more common Kellner design rather than the fairly rarely seen König.
Nagler
Invented by Albert Nagler and patented in 1979, the Nagler eyepiece is a design optimized for astronomical telescopes to give an ultra-wide field of view (82°) that has good correction for astigmatism and other aberrations. Introduced in 2007, the Ethos is an enhanced ultra-wide field design developed principally by Paul Dellechiaie under Albert Nagler's guidance at Tele Vue Optics and claims a 100–110° AFOV. This is achieved using exotic high-index glass and up to eight optical elements in four or five groups; there are several similar designs called the Nagler, Nagler type 2, Nagler type 4, Nagler type 5, and Nagler type 6. The newer Delos design is a modified Ethos design with a FOV of 'only' 72 degrees but with a long 20 mm eye relief.
The number of elements in a Nagler makes them seem complex, but the idea of the design is fairly simple: every Nagler has a negative doublet field lens, which increases magnification, followed by several positive groups. The positive groups, considered separate from the first negative group, combine to have long focal length, and form a positive lens. That allows the design to take advantage of the many good qualities of low power lenses. In effect, a Nagler is a superior version of a Barlow lens combined with a long focal length eyepiece. This design has been widely copied in other wide field or long eye relief eyepieces.
The main disadvantage of Naglers is their weight; they are often ruefully referred to as ‘hand grenades’ because of their heft and large size. Long focal length versions are heavy enough to unbalance small to medium-sized telescopes. Another disadvantage is a high purchase cost, with large Naglers' prices comparable to the cost of a small telescope. Hence these eyepieces are regarded by many amateur astronomers as a luxury.
Propylene

Propylene, also known as propene, is an unsaturated organic compound with the chemical formula C3H6. It has one double bond, and is the second simplest member of the alkene class of hydrocarbons. It is a colorless gas with a faint petroleum-like odor.
Propylene is a product of combustion from forest fires, cigarette smoke, and motor vehicle and aircraft exhaust. It was discovered in 1850 by A. W. von Hofmann's student Captain (later Major General) John Williams Reynolds as the only gaseous product of the thermal decomposition of amyl alcohol to react with chlorine and bromine.
Production
Steam cracking
The dominant technology for producing propylene is steam cracking, using propane as the feedstock. Cracking propane yields a mixture of ethylene, propylene, methane, hydrogen gas, and other related compounds. The yield of propylene is about 15%. The other principal feedstock is naphtha, especially in the Middle East and Asia.
Propylene can be separated by fractional distillation from the hydrocarbon mixtures obtained from cracking and other refining processes; refinery-grade propene is about 50 to 70%. In the United States, shale gas is a major source of propane.
Olefin conversion technology
In the Phillips triolefin or olefin conversion technology, propylene is interconverted with ethylene and 2-butenes. Rhenium and molybdenum catalysts are used:
CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3   (Re or Mo catalyst)
The technology is founded on an olefin metathesis reaction discovered at Phillips Petroleum Company. Propylene yields of about 90 wt% are achieved.
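As a sanity check on the metathesis stoichiometry above, the molar masses show that one mole of ethylene plus one mole of 2-butene carry exactly the mass of two moles of propylene (atomic masses are standard values; the check is purely illustrative):

```python
# Standard atomic masses (g/mol)
C, H = 12.011, 1.008

def molar_mass(n_c, n_h):
    """Molar mass of a hydrocarbon with n_c carbons and n_h hydrogens, in g/mol."""
    return n_c * C + n_h * H

ethylene  = molar_mass(2, 4)  # CH2=CH2
butene_2  = molar_mass(4, 8)  # CH3CH=CHCH3
propylene = molar_mass(3, 6)  # CH2=CHCH3

# Metathesis conserves atoms: C2H4 + C4H8 -> 2 C3H6
assert abs((ethylene + butene_2) - 2 * propylene) < 1e-9
print(round(propylene, 2))  # 42.08 g/mol per mole of propylene
```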
Related is the Methanol-to-Olefins/Methanol-to-Propene process. It converts synthesis gas (syngas) to methanol, and then converts the methanol to ethylene and/or propene. The process produces water as a by-product. Synthesis gas is produced from the reformation of natural gas or by the steam-induced reformation of petroleum products such as naphtha, or by gasification of coal or natural gas.
Fluid catalytic cracking
High severity fluid catalytic cracking (FCC) uses traditional FCC technology under severe conditions (higher catalyst-to-oil ratios, higher steam injection rates, higher temperatures, etc.) in order to maximize the amount of propene and other light products. A high severity FCC unit is usually fed with gas oils (paraffins) and residues, and produces about 20–25% (by mass) of propene on feedstock together with greater volumes of motor gasoline and distillate byproducts. These high temperature processes are expensive and have a high carbon footprint. For these reasons, alternative routes to propylene continue to attract attention.
Other commercialized methods
On-purpose propylene production technologies were developed throughout the twentieth century. Of these, propane dehydrogenation technologies such as the CATOFIN and OLEFLEX processes have become common, although they still make up a minority of the market, with most of the olefin being sourced from the above mentioned cracking technologies. Platinum, chromia, and vanadium catalysts are common in propane dehydrogenation processes.
Market
Propene production has remained static at around 35 million tonnes (Europe and North America only) from 2000 to 2008, but it has been increasing in East Asia, most notably Singapore and China. Total world production of propene is currently about half that of ethylene.
Research
The use of engineered enzymes has been explored but has not been commercialized.
There is ongoing research into the use of oxygen carrier catalysts for the oxidative dehydrogenation of propane. This poses several advantages, as this reaction mechanism can occur at lower temperatures than conventional dehydrogenation, and may not be equilibrium-limited because oxygen is used to combust the hydrogen by-product.
Uses
Propylene is the second most important starting product in the petrochemical industry after ethylene. It is the raw material for a wide variety of products. Polypropylene manufacturers consume nearly two thirds of global production. Polypropylene end uses include films, fibers, containers, packaging, and caps and closures. Propene is also used for the production of chemicals such as propylene oxide, acrylonitrile, cumene, butyraldehyde, and acrylic acid. In the year 2013 about 85 million tonnes of propylene were processed worldwide.
Propylene and benzene are converted to acetone and phenol via the cumene process.
Propylene is also used to produce isopropyl alcohol (propan-2-ol), acrylonitrile, propylene oxide, and epichlorohydrin.
The industrial production of acrylic acid involves the catalytic partial oxidation of propylene. Propylene is an intermediate in the oxidation to acrylic acid.
In industry and workshops, propylene is used as an alternative fuel to acetylene in oxy-fuel welding and cutting, brazing, and heating of metal for the purpose of bending. It has become a standard in BernzOmatic products and others in MAPP substitutes, now that true MAPP gas is no longer available.
Reactions
Propylene resembles other alkenes in that it undergoes electrophilic addition reactions relatively easily at room temperature. The relative weakness of its double bond explains its tendency to react with substances that can achieve this transformation. Alkene reactions include:
Polymerization and oligomerization
Oxidation
Halogenation
Hydrohalogenation
Alkylation
Hydration
Hydroformylation
Complexes of transition metals
Foundational to hydroformylation, alkene metathesis, and polymerization are metal-propylene complexes, which are intermediates in these processes. Propylene is prochiral, meaning that binding of a reagent (such as a metal electrophile) to the C=C group yields one of two enantiomers.
Polymerization
The majority of propylene is used to form polypropylene, a very important commodity thermoplastic, through chain-growth polymerization. In the presence of a suitable catalyst (typically a Ziegler–Natta catalyst), propylene will polymerize. There are multiple ways to achieve this, such as suspending the catalyst in a solution of liquid propylene under high pressure, or running gaseous propylene through a fluidized bed reactor.
Oligomerization
In the presence of catalysts, propylene will form various short oligomers. It can dimerize to give 2,3-dimethyl-1-butene and/or 2,3-dimethyl-2-butene, or trimerize to form tripropylene.
Environmental safety
Propene is a product of combustion from forest fires, cigarette smoke, and motor vehicle and aircraft exhaust. It is an impurity in some heating gases. Observed concentrations have been in the range of 0.1–4.8 parts per billion (ppb) in rural air, 4–10.5 ppb in urban air, and 7–260 ppb in industrial air samples.
In the United States and some European countries a threshold limit value of 500 parts per million (ppm) was established for occupational (8-hour time-weighted average) exposure. It is considered a volatile organic compound (VOC) and emissions are regulated by many governments, but it is not listed by the U.S. Environmental Protection Agency (EPA) as a hazardous air pollutant under the Clean Air Act. With a relatively short half-life, it is not expected to bioaccumulate.
Propene has low acute toxicity from inhalation and is not considered to be carcinogenic. Chronic toxicity studies in mice did not yield significant evidence suggesting adverse effects. Humans briefly exposed to 4,000 ppm did not experience any noticeable effects. Propene is dangerous from its potential to displace oxygen as an asphyxiant gas, and from its high flammability/explosion risk.
Bio-propylene is bio-based propylene. It has been examined, motivated by diverse interests such as reducing the carbon footprint. Production from glucose has been considered. More advanced ways of addressing such issues focus on electrification alternatives to steam cracking.
Storage and handling
Propene is flammable. Propene is usually stored as liquid under pressure, although it is also possible to store it safely as gas at ambient temperature in approved containers.
Occurrence in nature
Propene is detected in the interstellar medium through microwave spectroscopy. On September 30, 2013, NASA announced the detection of small amounts of naturally occurring propene in the atmosphere of Titan using infrared spectroscopy. The detection was made by a team led by NASA GSFC scientist Conor Nixon using data from the CIRS instrument on the Cassini orbiter spacecraft, part of the Cassini–Huygens mission. Its confirmation solved a 32-year-old mystery by filling a predicted gap in Titan's detected hydrocarbons, adding the C3H6 species (propene) to the already-detected C3H4 (propyne) and C3H8 (propane).
Galaxy morphological classification

Galaxy morphological classification is a system used by astronomers to divide galaxies into groups based on their visual appearance. There are several schemes in use by which galaxies can be classified according to their morphologies, the most famous being the Hubble sequence, devised by Edwin Hubble and later expanded by Gérard de Vaucouleurs and Allan Sandage. However, galaxy classification and morphology are now largely done using computational methods and physical morphology.
Hubble sequence
The Hubble sequence is a morphological classification scheme for galaxies invented by Edwin Hubble in 1926.
It is often known colloquially as the “Hubble tuning-fork” because of the shape in which it is traditionally represented. Hubble's scheme divides galaxies into three broad classes based on their visual appearance (originally on photographic plates):
Elliptical galaxies have smooth, featureless light distributions and appear as ellipses in images. They are denoted by the letter "E", followed by an integer n representing their degree of ellipticity on the sky. The ellipticity class depends on the ratio of the major (a) to minor (b) axes, thus:

n = 10(1 − b/a)
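A short sketch of this classification, using the standard relation n = 10(1 − b/a); rounding to the nearest integer class is an illustrative choice, since in practice the class is quoted as a whole number:

```python
def hubble_elliptical_class(a, b):
    """Hubble class for an elliptical galaxy from its apparent axes.

    a: major axis, b: minor axis (same units).
    Uses n = 10 * (1 - b/a), rounded to the nearest integer class.
    """
    n = round(10 * (1 - b / a))
    return f"E{n}"

print(hubble_elliptical_class(1.0, 1.0))  # E0: appears circular on the sky
print(hubble_elliptical_class(1.0, 0.7))  # E3: noticeably flattened
```

Note that n measures only the apparent shape: an intrinsically flattened galaxy seen face-on is still classed E0.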
Spiral galaxies consist of a flattened disk, with stars forming a (usually two-armed) spiral structure, and a central concentration of stars known as the bulge, which is similar in appearance to an elliptical galaxy. They are given the symbol "S". Roughly half of all spirals are also observed to have a bar-like structure, extending from the central bulge. These barred spirals are given the symbol "SB".
Lenticular galaxies (designated S0) also consist of a bright central bulge surrounded by an extended, disk-like structure but, unlike spiral galaxies, the disks of lenticular galaxies have no visible spiral structure and are not actively forming stars in any significant quantity.
These broad classes can be extended to enable finer distinctions of appearance and to encompass other types of galaxies, such as irregular galaxies, which have no obvious regular structure (either disk-like or ellipsoidal).
The Hubble sequence is often represented in the form of a two-pronged fork, with the ellipticals on the left (with the degree of ellipticity increasing from left to right) and the barred and unbarred spirals forming the two parallel prongs of the fork on the right. Lenticular galaxies are placed between the ellipticals and the spirals, at the point where the two prongs meet the “handle”.
To this day, the Hubble sequence is the most commonly used system for classifying galaxies, both in professional astronomical research and in amateur astronomy.
Nonetheless, in June 2019, citizen scientists through Galaxy Zoo reported that the usual Hubble classification, particularly concerning spiral galaxies, may not be supported, and may need updating.
De Vaucouleurs system
The de Vaucouleurs system for classifying galaxies is a widely used extension to the Hubble sequence, first described by Gérard de Vaucouleurs in 1959. De Vaucouleurs argued that Hubble's two-dimensional classification of spiral galaxies—based on the tightness of the spiral arms and the presence or absence of a bar—did not adequately describe the full range of observed galaxy morphologies. In particular, he argued that rings and lenses are important structural components of spiral galaxies.
The de Vaucouleurs system retains Hubble's basic division of galaxies into ellipticals, lenticulars, spirals and irregulars. To complement Hubble's scheme, de Vaucouleurs introduced a more elaborate classification system for spiral galaxies, based on three morphological characteristics:
The different elements of the classification scheme are combined — in the order in which they are listed — to give the complete classification of a galaxy. For example, a weakly barred spiral galaxy with loosely wound arms and a ring is denoted SAB(r)c.
Visually, the de Vaucouleurs system can be represented as a three-dimensional version of Hubble's tuning fork, with stage (spiralness) on the x-axis, family (barredness) on the y-axis, and variety (ringedness) on the z-axis.
Numerical Hubble stage
De Vaucouleurs also assigned numerical values to each class of galaxy in his scheme. Values of the numerical Hubble stage T run from −6 to +10, with negative numbers corresponding to early-type galaxies (ellipticals and lenticulars) and positive numbers to late types (spirals and irregulars). Thus, as a rough rule, lower values of T correspond to a larger fraction of the stellar mass contained in a spheroid/bulge relative to the disk. The approximate mapping between the spheroid-to-total stellar mass ratio (MB/MT) and the Hubble stage is MB/MT = (10 − T)²/256, based on local galaxies.
Elliptical galaxies are divided into three 'stages': compact ellipticals (cE), normal ellipticals (E) and late types (E+). Lenticulars are similarly subdivided into early (S−), intermediate (S0) and late (S+) types. Irregular galaxies can be of type magellanic irregulars (T = 10) or 'compact' (T = 11).
The use of numerical stages allows for more quantitative studies of galaxy morphology.
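The approximate stage-to-bulge-fraction mapping quoted above, MB/MT = (10 − T)²/256, can be sketched directly:

```python
def bulge_to_total(T):
    """Approximate spheroid-to-total stellar mass ratio MB/MT
    from the numerical Hubble stage T (defined for -6 <= T <= 10)."""
    if not -6 <= T <= 10:
        raise ValueError("Hubble stage T runs from -6 to +10")
    return (10 - T) ** 2 / 256

print(bulge_to_total(-6))  # 1.0: pure spheroid (elliptical)
print(bulge_to_total(10))  # 0.0: essentially bulgeless (magellanic irregular)
```

The quadratic form means the bulge fraction falls off quickly along the sequence: already at stage T = 2 (an Sab/Sb-like spiral) only a quarter of the stellar mass is in the spheroid.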
Yerkes (or Morgan) scheme
The Yerkes scheme was created by American astronomer William Wilson Morgan. Together with Philip Keenan, Morgan also developed the MK system for the classification of stars through their spectra. The Yerkes scheme uses the spectra of stars in the galaxy; the shape, real and apparent; and the degree of the central concentration to classify galaxies.
Thus, for example, the Andromeda Galaxy is classified as kS5.
Scintillation (physics)

In condensed matter physics, scintillation is the physical process where a material, called a scintillator, emits ultraviolet or visible light under excitation from high energy photons (X-rays or gamma rays) or energetic particles (such as electrons, alpha particles, neutrons, or ions). See scintillator and scintillation counter for practical applications.
Overview
Scintillation is an example of luminescence, whereby light of a characteristic spectrum is emitted following the absorption of radiation. The scintillation process can be summarized in three main stages: conversion, transport and energy transfer to the luminescence center, and luminescence. The emitted radiation is usually less energetic than the absorbed radiation, hence scintillation is generally a down-conversion process.
Conversion processes
The first stage of scintillation, conversion, is the process where the energy from the incident radiation is absorbed by the scintillator and highly energetic electrons and holes are created in the material. The energy absorption mechanism by the scintillator depends on the type and energy of radiation involved. For highly energetic photons such as X-rays (0.1 keV < E < 100 keV) and γ-rays (E > 100 keV), three types of interactions are responsible for the energy conversion process in scintillation: photoelectric absorption, Compton scattering, and pair production, which only occurs when E > 1022 keV, i.e. the photon has enough energy to create an electron–positron pair.
These processes have different attenuation coefficients, which depend mainly on the energy of the incident radiation, the average atomic number of the material and the density of the material. Generally the absorption of high energy radiation is described by:

I(x) = I0 e^(−μx)

where I0 is the intensity of the incident radiation, x is the thickness of the material, and μ is the linear attenuation coefficient, which is the sum of the attenuation coefficients of the various contributions:

μ = μ_PE + μ_CS + μ_PP + μ_other
At lower X-ray energies (E ≲ 60 keV), the most dominant process is the photoelectric effect, where the photons are fully absorbed by bound electrons in the material, usually core electrons in the K- or L-shell of the atom, which are then ejected, leading to the ionization of the host atom. The linear attenuation coefficient contribution for the photoelectric effect is given by:

μ_PE ∝ ρ Z^n / E^3

where ρ is the density of the scintillator, Z is the average atomic number, n is a constant that varies between 3 and 4, and E is the energy of the photon. At low X-ray energies, scintillator materials containing atoms with high atomic numbers and high densities are favored for more efficient absorption of the incident radiation.
At higher energies (E ≳ 60 keV) Compton scattering, the inelastic scattering of photons by bound electrons, often also leading to ionization of the host atom, becomes the more dominant conversion process. The linear attenuation coefficient contribution for Compton scattering is given by:

μ_CS ∝ ρ / E

Unlike the photoelectric effect, the absorption resulting from Compton scattering is independent of the atomic number of the atoms present in the crystal, but depends linearly on their density.
At γ-ray energies higher than 1022 keV, i.e. energies higher than twice the rest-mass energy of the electron, pair production starts to occur. Pair production is the relativistic phenomenon where the energy of a photon is converted into an electron–positron pair. The created electron and positron will then further interact with the scintillating material to generate energetic electrons and holes. The attenuation coefficient contribution for pair production is given by:

μ_PP ∝ ρ Z ln(2E / m_e c²)

where m_e is the rest mass of the electron and c is the speed of light. Hence, at high γ-ray energies, the energy absorption depends both on the density and average atomic number of the scintillator. In addition, unlike the photoelectric effect and Compton scattering, pair production becomes more probable as the energy of the incident photons increases, and it becomes the most dominant conversion process above ~8 MeV.
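The 1022 keV pair-production threshold quoted above is simply twice the electron rest energy, which can be checked directly (the rest energy is the CODATA value, rounded):

```python
# Electron rest energy: m_e * c^2 ≈ 510.999 keV (CODATA value, rounded)
ELECTRON_REST_ENERGY_KEV = 510.999

# A photon must supply the rest energy of both the electron and the positron.
pair_production_threshold_kev = 2 * ELECTRON_REST_ENERGY_KEV
print(round(pair_production_threshold_kev))  # 1022
```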
The remaining term in the sum accounts for other (minor) contributions, such as Rayleigh (coherent) scattering at low energies and photonuclear reactions at very high energies, which also contribute to the conversion; however, the contribution from Rayleigh scattering is almost negligible and photonuclear reactions become relevant only at very high energies.
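The exponential attenuation law described above, I(x) = I0·exp(−μx) with μ summed over the process contributions, can be sketched as follows. The coefficient values here are made up purely for illustration:

```python
import math

def transmitted_intensity(i0, mu, x):
    """Exponential attenuation of radiation: I(x) = I0 * exp(-mu * x).

    i0: incident intensity, mu: total linear attenuation coefficient (1/cm),
    x: material thickness (cm).
    """
    return i0 * math.exp(-mu * x)

# Hypothetical per-process contributions (1/cm) at some fixed photon energy:
mu_photoelectric, mu_compton, mu_pair = 0.30, 0.15, 0.05  # illustrative only
mu_total = mu_photoelectric + mu_compton + mu_pair        # 0.50 1/cm

i = transmitted_intensity(1000.0, mu_total, 2.0)  # 2 cm thick scintillator
print(round(i, 1))  # 367.9: about 63% of the incident radiation was absorbed
```

Which contribution dominates mu_total shifts with photon energy, from photoelectric at low energies to Compton and then pair production, as described above.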
After the energy of the incident radiation is absorbed and converted into so-called hot electrons and holes in the material, these energetic charge carriers will interact with other particles and quasi-particles in the scintillator (electrons, plasmons, phonons), leading to an "avalanche event", where a great number of secondary electron–hole pairs are produced until the hot electrons and holes have lost sufficient energy. The large number of electrons and holes that result from this process will then undergo thermalization, i.e. dissipation of part of their energy: via interaction with phonons for the electrons, and via Auger processes for the holes.
The average timescale for conversion, including energy absorption and thermalization has been estimated to be in the order of 1 ps, which is much faster than the average decay time in photoluminescence.
Charge transport of excited carriers
The second stage of scintillation is the charge transport of thermalized electrons and holes towards luminescence centers and the energy transfer to the atoms involved in the luminescence process. In this stage, the large number of electrons and holes that have been generated during the conversion process, migrate inside the material. This is probably one of the most critical phases of scintillation, since it is generally in this stage where most loss of efficiency occur due to effects such as trapping or non-radiative recombination. These are mainly caused by the presence of defects in the scintillator crystal, such as impurities, ionic vacancies, and grain boundaries. The charge transport can also become a bottleneck for the timing of the scintillation process. The charge transport phase is also one of the least understood parts of scintillation and depends strongly on the type material involved and its intrinsic charge conduction properties.
Luminescence
Once the electrons and holes reach the luminescence centers, the third and final stage of scintillation occurs: luminescence. In this stage the electrons and holes are captured by the luminescence center, and then the electron and hole recombine radiatively. The exact details of the luminescence phase also depend on the type of material used for scintillation.
Inorganic crystals
For photons such as gamma rays, thallium activated NaI crystals (NaI(Tl)) are often used. For a faster response (but only 5% of the output) CsF crystals can be used.
Organic scintillators
In organic molecules scintillation is a product of π-orbitals. Organic materials form molecular crystals where the molecules are loosely bound by Van der Waals forces. The ground state of 12C is 1s2 2s2 2p2. In valence bond theory, when carbon forms compounds, one of the 2s electrons is excited into the 2p state resulting in a configuration of 1s2 2s1 2p3. To describe the different valencies of carbon, the four valence electron orbitals, one 2s and three 2p, are considered to be mixed or hybridized in several alternative configurations. For example, in a tetrahedral configuration the s and p3 orbitals combine to produce four hybrid orbitals. In another configuration, known as trigonal configuration, one of the p-orbitals (say pz) remains unchanged and three hybrid orbitals are produced by mixing the s, px and py orbitals. The orbitals that are symmetrical about the bonding axes and plane of the molecule (sp2) are known as σ-electrons and the bonds are called σ-bonds. The pz orbital is called a π-orbital. A π-bond occurs when two π-orbitals interact. This occurs when their nodal planes are coplanar.
In certain organic molecules π-orbitals interact to produce a common nodal plane. These form delocalized π-electrons that can be excited by radiation. The de-excitation of the delocalized π-electrons results in luminescence.
The excited states of π-electron systems can be explained by the perimeter free-electron model (Platt 1949). This model is used for describing polycyclic hydrocarbons consisting of condensed systems of benzenoid rings in which no C atom belongs to more than two rings and every C atom is on the periphery.
The ring can be approximated as a circle with circumference l. The wave-function of the electron orbital must satisfy the condition of a plane rotator:

ψ(x) = ψ(x + l)

The corresponding solutions to the Schrödinger wave equation are:

E_q = q²h²/(2m₀l²)
where q is the orbital ring quantum number; the number of nodes of the wave-function. Since the electron can have spin up and spin down and can rotate about the circle in both directions all of the energy levels except the lowest are doubly degenerate.
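The particle-on-a-ring energies and degeneracies described above can be sketched numerically. This is a minimal illustration, assuming the standard result E_q = q²h²/(2m₀l²); the ring circumference used below is a hypothetical benzene-scale value chosen only for the example, not a fitted parameter:

```python
# Particle-on-a-ring energy levels for the perimeter free-electron model
# (Platt 1949): E_q = q^2 h^2 / (2 m l^2), with l the ring circumference
# and q = 0, 1, 2, ... the orbital ring quantum number.
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg

def ring_energy(q: int, circumference: float) -> float:
    """Energy in joules of the level with ring quantum number q."""
    return (q ** 2) * H ** 2 / (2 * M_E * circumference ** 2)

def degeneracy(q: int) -> int:
    """The lowest level (q = 0) is non-degenerate; every higher level is
    doubly degenerate (clockwise and anticlockwise rotation)."""
    return 1 if q == 0 else 2

# Illustrative ring size (~8.4 angstrom, a hypothetical example value):
levels = [ring_energy(q, 8.4e-10) for q in range(4)]
```

The quadratic dependence on q reproduces the level spacing of the model, and the degeneracy function encodes the two senses of rotation mentioned in the text.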
This model describes the π-electronic energy levels of an organic molecule. Absorption of radiation is followed by molecular vibration to the S1 state. This is followed by a de-excitation to the S0 state called fluorescence. The population of triplet states is also possible by other means. The triplet states decay with a much longer decay time than singlet states, which results in what is called the slow component of the decay process (the fluorescence process is called the fast component). Depending on the particular energy loss of a certain particle (dE/dx), the "fast" and "slow" states are occupied in different proportions. The relative intensities in the light output of these states thus differ for different dE/dx. This property of scintillators allows for pulse shape discrimination: it is possible to identify which particle was detected by looking at the pulse shape. Of course, the difference in shape is visible in the trailing side of the pulse, since it is due to the decay of the excited states.
| Physical sciences | Electromagnetic radiation | Physics |
826277 | https://en.wikipedia.org/wiki/Ericsson%20cycle | Ericsson cycle | The Ericsson cycle is named after inventor John Ericsson who designed and built many unique heat engines based on various thermodynamic cycles. He is credited with inventing two unique heat engine cycles and developing practical engines based on these cycles. His first cycle is now known as the closed Brayton cycle, while his second cycle is what is now called the Ericsson cycle.
Ericsson is one of the few who built open-cycle engines, but he also built closed-cycle ones.
Ideal Ericsson cycle
The following is a list of the four processes that occur between the four stages of the ideal Ericsson cycle:
Process 1 -> 2: Isothermal compression. The compression space is assumed to be intercooled, so the gas undergoes isothermal compression. The compressed air flows into a storage tank at constant pressure. In the ideal cycle, there is no heat transfer across the tank walls.
Process 2 -> 3: Isobaric heat addition. From the tank, the compressed air flows through the regenerator and picks up heat at a high constant-pressure on the way to the heated power-cylinder.
Process 3 -> 4: Isothermal expansion. The power-cylinder expansion-space is heated externally, and the gas undergoes isothermal expansion.
Process 4 -> 1: Isobaric heat removal. Before the air is released as exhaust, it is passed back through the regenerator, thus cooling the gas at a low constant pressure, and heating the regenerator for the next cycle.
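The four processes above can be summarized in a per-mole energy balance. This is a hedged sketch assuming an ideal gas and perfect regeneration (so the two isobaric legs exchange heat only internally, and only the isothermal legs count externally); the temperatures and pressure ratio are arbitrary example values:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def ericsson_ideal(t_hot: float, t_cold: float, pressure_ratio: float):
    """Per-mole energy balance of the ideal Ericsson cycle, assuming an
    ideal gas and perfect regeneration: processes 2->3 and 4->1 swap
    heat through the regenerator, so only the isothermal legs exchange
    heat with the surroundings."""
    ln_r = math.log(pressure_ratio)
    q_in = R * t_hot * ln_r    # heat added during isothermal expansion
    q_out = R * t_cold * ln_r  # heat rejected during isothermal compression
    w_net = q_in - q_out       # net work per mole per cycle
    return w_net, w_net / q_in

w_net, eta = ericsson_ideal(900.0, 300.0, 10.0)
carnot = 1.0 - 300.0 / 900.0  # eta equals the Carnot efficiency
```

The resulting efficiency, 1 − T_cold/T_hot, is independent of the pressure ratio, which is exactly the Carnot-equality claim made in the comparison below.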
Comparison with Carnot, Diesel, Otto, and Stirling cycles
The ideal Otto and Diesel cycles are not totally reversible because they involve heat transfer through a finite temperature difference during the irreversible isochoric/isobaric heat-addition and isochoric heat-rejection processes. The aforementioned irreversibility renders the thermal efficiency of these cycles less than that of a Carnot engine operating within the same limits of temperature. Another cycle that features isobaric heat-addition and heat-rejection processes is the Ericsson cycle. The Ericsson cycle is an altered version of the Carnot cycle in which the two isentropic processes featured in the Carnot cycle are replaced by two isothermal regeneration processes.
The Ericsson cycle is often compared with the Stirling cycle, since the engine designs based on these respective cycles are both external combustion engines with regenerators. The Ericsson is perhaps most similar to the so-called "double-acting" type of Stirling engine, in which the displacer piston also acts as the power piston. Theoretically, both of these cycles have so-called ideal efficiency, which is the highest allowed by the second law of thermodynamics. The most well-known ideal cycle is the Carnot cycle, although a useful Carnot engine is not known to have been invented.
The theoretical efficiencies of both the Ericsson and Stirling cycles operating between the same temperature limits are equal to the Carnot efficiency for those limits.
Comparison with the Brayton cycle
The first cycle Ericsson developed is now called the "Brayton cycle", commonly applied to gas turbine engines.
The second Ericsson cycle is the cycle most commonly referred to as simply the "Ericsson cycle". The (second) Ericsson cycle is also the limit of an ideal gas-turbine Brayton cycle, operating with multistage intercooled compression, and multistage expansion with reheat and regeneration. Compared to the Brayton cycle which uses adiabatic compression and expansion, the second Ericsson cycle uses isothermal compression and expansion, thus producing more net work per stroke. Also the use of regeneration in the Ericsson cycle increases efficiency by reducing the required heat input. For further comparisons of thermodynamic cycles, see heat engine.
Ericsson engine
The Ericsson engine is based on the Ericsson cycle, and is known as an "external combustion engine", because it is externally heated. To improve efficiency, the engine has a regenerator or recuperator between the compressor and the expander. The engine can be run open- or closed-cycle. Expansion occurs simultaneously with compression, on opposite sides of the piston.
Regenerator
Ericsson coined the term "regenerator" for his independent invention of the mixed-flow counter-current heat exchanger. However, Rev. Robert Stirling had invented the same device prior to Ericsson, so the invention is credited to Stirling. Stirling called it an "economiser" or "economizer" because it increased the fuel economy of various types of heat processes. The invention proved useful in many other devices and systems, where it became more widely used as other types of engines became favored over the Stirling engine. The term "regenerator" is now the name given to the component in the Stirling engine.
The term "recuperator" refers to a separated-flow, counter-current heat exchanger. As if this weren't confusing enough, a mixed-flow regenerator is sometimes used as a quasi-separated-flow recuperator. This can be done through the use of moving valves, or by a rotating regenerator with fixed baffles, or by the use of other moving parts. When heat is recovered from exhaust gases and used to preheat combustion air, the term recuperator is typically used, because the two flows are separate.
History
In 1791, before Ericsson, John Barber proposed a similar engine. The Barber engine used a bellows compressor and a turbine expander, but it lacked a regenerator/recuperator. There are no records of a working Barber engine. Ericsson invented and patented his first engine using an external version of the Brayton cycle in 1833 (number 6409/1833 British). This was 18 years before Joule and 43 years before Brayton. Brayton engines were all piston engines and, for the most part, internal combustion versions of the un-recuperated Ericsson engine. The "Brayton cycle" is now known as the gas turbine cycle, which differs from the original "Brayton cycle" in the use of a turbine compressor and expander. The gas turbine cycle is used for all modern gas turbine and turbojet engines; however, simple-cycle turbines are often recuperated to improve efficiency, and these recuperated turbines more closely resemble Ericsson's work.
Ericsson eventually abandoned the open cycle in favor of the traditional closed Stirling cycle.
Ericsson's engine can easily be modified to operate in a closed-cycle mode, using a second, lower-pressure, cooled container between the original exhaust and intake. In closed cycle, the lower pressure can be significantly above ambient pressure, and He or H2 working gas can be used. Because of the higher pressure difference between the upward and downward movement of the work-piston, specific output can be greater than that of a valveless Stirling engine. The added cost is the valve. Ericsson's engine also minimizes mechanical losses: the power necessary for compression does not go through crank-bearing frictional losses, but is applied directly from the expansion force. The piston-type Ericsson engine can potentially be the highest-efficiency heat engine arrangement ever constructed. Admittedly, this has yet to be proven in practical applications.
Ericsson designed and built a great number of engines running on various cycles, including steam, Stirling, Brayton, and an externally heated diesel air-fluid cycle. He ran his engines on a great variety of fuels, including coal and solar heat.
Ericsson was also responsible for an early use of the screw propeller for ship propulsion, in the USS Princeton, built in 1842–43.
Caloric ship Ericsson
In 1851 the Ericsson-cycle engine (the second of the two discussed here) was used to power a 2,000-ton ship, the caloric ship Ericsson, and ran flawlessly for 73 hours. The combination engine produced about . It had a combination of four dual-piston engines; the larger expansion piston/cylinder, at in diameter, was perhaps the largest piston ever built. Rumor has it that tables were placed on top of those pistons (obviously in the cool compression chamber, not the hot power chamber) and dinner was served and eaten, while the engine was running at full power. At 6.5 RPM the pressure was limited to . According to the official report it only consumed 4200 kg coal per 24 hours (original target was 8000 kg, which is still better than contemporary steam engines). The one sea trial proved that even though the engine ran well, the ship was underpowered. Some time after the trials, the Ericsson sank. When it was raised, the Ericsson-cycle engine was removed and a steam engine took its place. The ship was wrecked when blown aground in November 1892 at the entrance to Barkley Sound, British Columbia, Canada.
Today's potential
The Ericsson cycle (and the similar Brayton cycle) receives renewed interest today to extract power from the exhaust heat of gas (and producer gas) engines and solar concentrators. An important advantage of the Ericsson cycle over the widely known Stirling engine is often not recognized: the volume of the heat exchanger does not adversely affect the efficiency.
(...)despite having significant advantages over the Stirling. Amongst them, it is worth to note that the Ericsson engine heat exchangers are not dead volumes, whereas the Stirling engine heat exchangers designer has to face a difficult compromise between as large heat transfer areas as possible, but as small heat exchanger volumes as possible.
For medium and large engines the cost of valves can be small compared to this advantage. Turbocompressor plus turbine implementations seem favorable in the MWe range, positive displacement compressor plus turbine for Nx100 kWe power, and positive displacement compressor+expander below 100 kW. With high temperature hydraulic fluid, both the compressor and the expander can be liquid-ring pumps even up to 400 °C, with rotating casing for best efficiency.
| Physical sciences | Thermodynamics | Physics |
826435 | https://en.wikipedia.org/wiki/Pannotia | Pannotia | Pannotia (from Greek: pan-, "all", -nótos, "south"; meaning "all southern land"), also known as the Vendian supercontinent, Greater Gondwana, and the Pan-African supercontinent, was a relatively short-lived Neoproterozoic supercontinent that formed at the end of the Precambrian, during the Pan-African orogeny (650–500 Ma) in the Cryogenian period, and broke apart 560 Ma, in the late Ediacaran and early Cambrian, with the opening of the Iapetus Ocean.
Pannotia formed when Laurentia was located adjacent to the two major South American cratons, Amazonia and Río de la Plata. The opening of the Iapetus Ocean separated Laurentia from Baltica, Amazonia, and Río de la Plata. A 2022 paper argues that Pannotia never fully existed, reinterpreting the geochronological evidence: "the supposed landmass had begun to break up well before it was fully assembled". However, the assembly of the next supercontinent Pangaea is well established.
Origin of concept
J. D. A. Piper was probably the first to propose a Proterozoic supercontinent preceding Pangaea, today known as Rodinia. At that time he simply referred to it as "the Proterozoic super-continent", but much later he named this "symmetrical crescent-shaped analogue of Pangaea" 'Palaeopangaea', and in 2000 he still insisted that there is neither a need for nor any evidence of Rodinia, its daughter supercontinent Pannotia, or a series of other supercontinents proposed since Archaean times.
The existence of a late Proterozoic supercontinent, much different from Pangaea, was first proposed by based on paleomagnetic data, and the break-up of this supercontinent around 625–550 Ma was documented by . The reconstruction of Bond et al. is virtually identical to that of and others.
Another term for the supercontinent that is thought to have existed at the end of Neoproterozoic time is "Greater Gondwanaland", suggested by . This term recognizes that the supercontinent of Gondwana, which formed at the end of the Neoproterozoic, was once part of the much larger Neoproterozoic supercontinent.
Pannotia was named by , based on the term "Pannotios" originally proposed by for "the cycle of tectonic activity common to the Gondwana continents that resulted in the formation of the supercontinent." proposed renaming the older Proterozoic supercontinent (now known as Rodinia) "Kanatia", the St. Lawrence Iroquoian word from which the name Canada is derived, while keeping the name Rodinia for the latter Neoproterozoic supercontinent (now known as Pannotia). Powell, however, objected to this renaming and instead proposed Stump's term for the latter supercontinent.
Formation
The formation of Pannotia began during the Pan-African orogeny when the Congo Craton was lodged between the northern and southern halves of the previous supercontinent Rodinia some 750 Ma. The peak in this mountain building event was around 640–610 Ma, but these continental collisions may have continued into the early Cambrian some 530 Ma. The formation of Pannotia was the result of Rodinia turning itself inside out.
When Pannotia had formed, Africa was located at the centre surrounded by the rest of Gondwana: South America, Arabia, Madagascar, India, Antarctica, and Australia. Laurentia, which 'escaped' out of Rodinia, Baltica, and Siberia kept the relative positions they had in Rodinia. The Cathaysian and Cimmerian terranes (continental blocks of southern Asia) were located along the northern margins of east Gondwana. The Avalonian-Cadomian terranes (later to become central Europe, Britain, the North American east coast, and Yucatán) were located along the active northern margins of western Gondwana. This orogeny probably extended north into the Uralian margin of Baltica.
Pannotia formed by subduction of exterior oceans (a mechanism called extroversion) over a geoid low, whereas Pangaea formed by subduction of interior oceans (introversion) over a geoid high perhaps caused by superplumes and slab avalanche events.
The oceanic crust subducted by Pannotia formed within the Mirovia superocean that surrounded Rodinia before its 830–750 Ma break-up and was accreted during the late Proterozoic orogenies that resulted from the assembly of Pannotia.
One of the major orogenies among these was the collision between eastern and western Gondwana, known as the East African Orogeny. The Trans-Saharan Belt in West Africa is the result of the collision between the East Saharan Shield and the West African Craton, when 1200–710 Ma volcanic and arc-related rocks were accreted to the margin of this craton.
Between 600 and 500 Ma, two Brazilian interior orogens were highly deformed and metamorphosed between a series of colliding cratons: Amazonia, West Africa-São Luís, and São Francisco-Congo-Kasai. The accreted material included 950–850 Ma mafic meta-igneous complexes and younger arc-related rocks.
Break-up
The break-up of Pannotia was accompanied by sea level rise, dramatic changes in climate and ocean water chemistry, and rapid metazoan diversification. found Neoproterozoic passive margin sequences worldwide—the first indication of a Late Neoproterozoic supercontinent but also the traces of its demise.
The Iapetus Ocean started to open while Pannotia was being assembled, 200 Ma after the break-up of Rodinia. This opening of the Iapetus and other Cambrian seas coincided with the first steps in the evolution of soft-bodied metazoans, and also made a myriad of habitats available for them; this led to the so-called Cambrian explosion, the rapid evolution of skeletalized metazoans.
Trilobites originated in the Neoproterozoic and began to diversify before the break-up of Pannotia 600–550 Ma, as evidenced by their ubiquitous presence in the fossil record, and the lack of vicariance patterns in their lineage.
| Physical sciences | Paleogeography | Earth science |
826723 | https://en.wikipedia.org/wiki/Angular%20diameter | Angular diameter | The angular diameter, angular size, apparent diameter, or apparent size is an angular separation (in units of angle) describing how large a sphere or circle appears from a given point of view. In the vision sciences, it is called the visual angle, and in optics, it is the angular aperture (of a lens). The angular diameter can alternatively be thought of as the angular displacement through which an eye or camera must rotate to look from one side of an apparent circle to the opposite side.
With the naked eye, a person can resolve diameters down to about 1 arcminute (approximately 0.017° or 0.0003 radians). This corresponds to 0.3 m at a 1 km distance, or to perceiving Venus as a disk under optimal conditions.
Formulation
The angular diameter of a circle whose plane is perpendicular to the displacement vector between the point of view and the center of said circle can be calculated using the formula

δ = 2 arctan(d / (2D)),

in which δ is the angular diameter (in units of angle, normally radians, sometimes in degrees, depending on the arctangent implementation), d is the linear diameter of the object (in units of length), and D is the distance to the object (also in units of length). When D ≫ d, we have:

δ ≈ d / D,

and the result obtained is necessarily in radians.
For a sphere
For a spherical object whose linear diameter equals d and where D is the distance to the center of the sphere, the angular diameter can be found by the following modified formula

δ = 2 arcsin(d / (2D))

Such a different formulation is due to the fact that the apparent edges of a sphere are its tangent points, which are closer to the observer than the center of the sphere, and have a distance between them which is smaller than the actual diameter. The above formula can be found by understanding that in the case of a spherical object, a right triangle can be constructed such that its three vertices are the observer, the center of the sphere, and one of the sphere's tangent points, with D as the hypotenuse and d/(2D) as the sine of half the angular diameter.
The formula is related to the zenith angle z of the horizon,

z = 180° − arcsin(R / (R + h)),

where R is the radius of the sphere and h is the distance to the near surface of the sphere.
The difference with the case of a perpendicular circle is significant only for spherical objects of large angular diameter, since for small values of δ the small-angle approximations arcsin x ≈ arctan x ≈ x hold, and both formulas reduce to δ ≈ d/D.
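The two formulations, and their agreement in the small-angle limit, can be compared numerically. A minimal sketch with arbitrary example values:

```python
import math

def angular_diameter_circle(d: float, D: float) -> float:
    """Flat circle of diameter d seen face-on from distance D:
    delta = 2 * arctan(d / (2D))."""
    return 2.0 * math.atan(d / (2.0 * D))

def angular_diameter_sphere(d: float, D: float) -> float:
    """Sphere of diameter d whose *center* is at distance D:
    delta = 2 * arcsin(d / (2D)); the apparent edges are the nearer
    tangent points, hence arcsin rather than arctan."""
    return 2.0 * math.asin(d / (2.0 * D))

# A small angular diameter: both formulas agree with each other and
# with the d/D approximation; the sphere always looks slightly larger.
d, D = 1.0, 1000.0
circle = angular_diameter_circle(d, D)
sphere = angular_diameter_sphere(d, D)
```

For d/D = 0.001 the two results differ only in the tenth decimal place, illustrating why the distinction matters only for objects of large angular diameter.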
Estimating angular diameter using the hand
Estimates of angular diameter may be obtained by holding the hand at right angles to a fully extended arm, as shown in the figure.
Use in astronomy
In astronomy, the sizes of celestial objects are often given in terms of their angular diameter as seen from Earth, rather than their actual sizes. Since these angular diameters are typically small, it is common to present them in arcseconds (″). An arcsecond is 1/3600th of one degree (1°) and a radian is 180/π degrees. So one radian equals 3,600 × 180/π arcseconds, which is about 206,265 arcseconds (1 rad ≈ 206,264.806247″). Therefore, the angular diameter of an object with physical diameter d at a distance D, expressed in arcseconds, is given by:

δ ≈ 206,265 (d / D) ″.
These objects have an angular diameter of 1″:
an object of diameter 1 cm at a distance of 2.06 km
an object of diameter 725.27 km at a distance of 1 astronomical unit (AU)
an object of diameter 45 866 916 km at 1 light-year
an object of diameter 1 AU (149 597 871 km) at a distance of 1 parsec (pc)
Thus, the angular diameter of Earth's orbit around the Sun as viewed from a distance of 1 pc is 2″, as 1 AU is the mean radius of Earth's orbit.
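The arcsecond formula and the 1″ examples above can be checked directly. A minimal sketch using only the conversion factor derived in the text:

```python
import math

RAD_TO_ARCSEC = 3600.0 * 180.0 / math.pi  # ~206,265 arcseconds per radian

def angular_diameter_arcsec(d: float, D: float) -> float:
    """Small-angle angular diameter in arcseconds: ~206,265 * d / D,
    with d and D in the same length unit."""
    return RAD_TO_ARCSEC * d / D

# 1 cm at a distance of 2.06 km subtends about 1 arcsecond:
one_cm_at_2km = angular_diameter_arcsec(0.01, 2060.0)

# 1 parsec is defined so that 1 AU subtends exactly 1" there
# (working in AU, the distance is 206,264.8... AU):
au_at_parsec = angular_diameter_arcsec(1.0, RAD_TO_ARCSEC)
```

Both values come out at 1″ to within rounding, matching the list of examples.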
The angular diameter of the Sun, from a distance of one light-year, is 0.03″, and that of Earth 0.0003″. The angular diameter 0.03″ of the Sun given above is approximately the same as that of a human body at a distance of the diameter of Earth.
This table shows the angular sizes of noteworthy celestial bodies as seen from Earth:
The angular diameter of the Sun, as seen from Earth, is about 250,000 times that of Sirius. (Sirius has twice the diameter and its distance is 500,000 times as much; the Sun is 10¹⁰ times as bright, corresponding to an angular diameter ratio of 10⁵, so Sirius is roughly 6 times as bright per unit solid angle.)
The angular diameter of the Sun is also about 250,000 times that of Alpha Centauri A (it has about the same diameter and the distance is 250,000 times as much; the Sun is 4×10¹⁰ times as bright, corresponding to an angular diameter ratio of 200,000, so Alpha Centauri A is a little brighter per unit solid angle).
The angular diameter of the Sun is about the same as that of the Moon. (The Sun's diameter is 400 times as large and its distance also; the Sun is 200,000 to 500,000 times as bright as the full Moon (figures vary), corresponding to an angular diameter ratio of 450 to 700, so a celestial body with a diameter of 2.5–4″ and the same brightness per unit solid angle would have the same brightness as the full Moon.)
Even though Pluto is physically larger than Ceres, when viewed from Earth (e.g., through the Hubble Space Telescope) Ceres has a much larger apparent size.
Angular sizes measured in degrees are useful for larger patches of sky. (For example, the three stars of Orion's Belt cover about 4.5° of angular size.) However, much finer units are needed to measure the angular sizes of galaxies, nebulae, or other objects of the night sky.
Degrees, therefore, are subdivided as follows:
360 degrees (°) in a full circle
60 arc-minutes (′) in one degree
60 arc-seconds (″) in one arc-minute
To put this in perspective, the full Moon as viewed from Earth is about ½°, or 30′ (or 1800″). The Moon's motion across the sky can be measured in angular size: approximately 15° every hour, or 15″ per second. A one-mile-long line painted on the face of the Moon would appear from Earth to be about 1″ in length.
In astronomy, it is typically difficult to directly measure the distance to an object, yet the object may have a known physical size (perhaps it is similar to a closer object with known distance) and a measurable angular diameter. In that case, the angular diameter formula can be inverted to yield the angular diameter distance to distant objects as D = d / δ.
In non-Euclidean space, such as our expanding universe, the angular diameter distance is only one of several definitions of distance, so that there can be different "distances" to the same object. See Distance measures (cosmology).
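In the small-angle, Euclidean regime this inversion is a one-line computation. A minimal sketch with round example numbers:

```python
def angular_diameter_distance(physical_diameter: float, delta_rad: float) -> float:
    """Distance inferred from a known physical size and a measured
    angular diameter (small-angle, Euclidean regime): D = d / delta,
    with delta in radians and the result in the units of d."""
    return physical_diameter / delta_rad

# An object 1 unit across that subtends 1 milliradian lies
# about 1000 units away.
distance = angular_diameter_distance(1.0, 1.0e-3)
```

In cosmology the same ratio defines the angular diameter distance, but, as noted above, it is only one of several distance measures in an expanding universe.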
Non-circular objects
Many deep-sky objects such as galaxies and nebulae appear non-circular and are thus typically given two measures of diameter: major axis and minor axis. For example, the Small Magellanic Cloud has a visual apparent diameter of × .
Defect of illumination
Defect of illumination is the maximum angular width of the unilluminated part of a celestial body seen by a given observer. For example, if an object is 40″ of arc across and is 75% illuminated, the defect of illumination is 10″.
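Reading the quoted percentage as the illuminated fraction of the body's angular width (the interpretation implied by the example), the arithmetic can be sketched as:

```python
def defect_of_illumination(angular_width: float, illuminated_fraction: float) -> float:
    """Angular width of the unilluminated part, in the same units as
    angular_width (arcseconds in the example above), treating the
    illuminated fraction as a linear fraction of the width."""
    return angular_width * (1.0 - illuminated_fraction)

# The example above: 40" across and 75% illuminated -> 10" defect.
defect = defect_of_illumination(40.0, 0.75)
```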
| Physical sciences | Basics | Astronomy |
826903 | https://en.wikipedia.org/wiki/Melastomataceae | Melastomataceae | Melastomataceae () is a family of dicotyledonous flowering plants found mostly in the tropics (two-thirds of the genera are from the New World tropics) comprising c. 175 genera and c. 5115 known species. Melastomes are annual or perennial herbs, shrubs, or small trees.
Description
The leaves of melastomes are somewhat distinctive, being opposite, decussate, and usually with 3-7 longitudinal veins arising either from the base of the blade, plinerved (inner veins diverging above base of blade), or pinnately nerved with three or more pairs of primary veins diverging from the mid-vein at successive points above the base.
Flowers are perfect, and borne either singly or in terminal or axillary, paniculate cymes.
Ecology
A number of melastomes are regarded as invasive species once naturalized in tropical and subtropical environments outside their normal range. Examples are Koster's curse (Clidemia hirta), Pleroma semidecandrum and Miconia calvescens, but many other species are involved.
Taxonomy
Under the APG III system of classification, the seven genera from Memecylaceae are now included in this family.
Genera
There are some 167 accepted genera in the Melastomataceae family as of October 2023. They are:
Acanthella
Aciotis
Acisanthera
Adelobotrys
Allomaieta
Alloneuron
Almedanthus
Amphiblemma
Amphorocalyx
Anaheterotis
Andesanthus
Anerincleistus
Antherotoma
Appendicularia
Argyrella
Arthrostemma
Aschistanthera
Astrocalyx
Astronia
Astronidium
Axinaea
Bamlera
Barthea
Beccarianthus
Bellucia
Benna
Bertolonia
Bisglaziovia
Blakea
Blastus
Boerlagea
Bourdaria
Boyania
Brachyotum
Brasilianthus
Bredia
Bucquetia
Cailliella
Calvoa
Cambessedesia
Castratella
Catanthera
Centradenia
Centradeniastrum
Centronia
Chaetogastra
Chaetolepis
Chalybea
Cincinnobotrys
Comolia
Comoliopsis
Creochiton
Cyphotheca
Dalenia
Derosiphia
Desmoscelis
Dicellandra
Dichaetanthera
Dinophora
Dionycha
Dionychastrum
Dissochaeta
Dissotidendron
Dissotis
Driessenia
Dupineta
Eleotis
Eriocnema
Ernestia
Feliciadamia
Feliciotis
Fordiophyton
Fritzschia
Graffenrieda
Gravesia
Guyonia
Henriettea
Heteroblemma
Heterocentron
Heterotis
Huberia
Kendrickia
Kerriothyrsus
Kirkbridea
Lijndenia
Lithobium
Loricalepis
Macairea
Macrocentrum
Macrolenes
Maguireanthus
Mallophyton
Marcetia
Medinilla
Melastoma
†Melastomites
Melastomastrum
Memecylon
Meriania
Merianthera
Miconia
Microlicia
Monochaetum
Monolena
Mouriri
Neblinanthera
Neodriessenia
Nepsera
Nerophila
Noterophila Mart.
Nothodissotis
Ochthephilus
Ochthocharis
Opisthocentra
Osbeckia
Ossaea
Oxyspora
Pachycentria
Pachyloma
Phainantha
Phyllagathis
Physeterostemon
Pilocosta
Plagiopetalum
Pleroma
Plethiandra
Poikilogyne
Poilannammia
Poteranthera
Preussiella
Pseudodissochaeta
Pseudoernestia
Pternandra
Pterogastra
Pterolepis
Pyrotis
Quipuanthus
Rhexia
Rhynchanthera
Rosettea
Rostranthera
Rousseauxia
Salpinga
Sandemania
Sarcopyramis
Schwackaea
Scorpiothyrsus
Siphanthera
Sonerila
Spathandra
Sporoxeia
Stanmarkia
Stussenia
Styrophyton
Tashiroea
Tateanthus
Tessmannianthus
Tibouchina
Tigridiopalma
Triolena
Tristemma
Tryssophyton
Vietsenia
Votomita
Warneckea
Wurdastom
Foraging
Melastomataceae is foraged by many stingless bees, especially the species Melipona bicolor, which gathers pollen from this taxon of flowering plants.
| Biology and health sciences | Myrtales | Plants |
826997 | https://en.wikipedia.org/wiki/Regression%20analysis | Regression analysis | In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the outcome or response variable, or a label in machine learning parlance) and one or more error-free independent variables (often called regressors, predictors, covariates, explanatory variables or features). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. For example, the method of ordinary least squares computes the unique line (or hyperplane) that minimizes the sum of squared differences between the true data and that line (or hyperplane). For specific mathematical reasons (see linear regression), this allows the researcher to estimate the conditional expectation (or population average value) of the dependent variable when the independent variables take on a given set of values. Less common forms of regression use slightly different procedures to estimate alternative location parameters (e.g., quantile regression or Necessary Condition Analysis) or estimate the conditional expectation across a broader collection of non-linear models (e.g., nonparametric regression).
Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to show causal relationships between the independent and dependent variables. Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed data set. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data.
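As a concrete illustration of ordinary least squares with a single predictor, the closed-form slope and intercept can be computed directly. This is a minimal sketch of the textbook formulas, not any particular library's API; the data values are arbitrary examples:

```python
def ols_fit(xs, ys):
    """Closed-form simple linear regression:
    slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).
    This minimizes the sum of squared vertical differences between
    the data and the fitted line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# The fitted line passes through the point of means, and the residuals
# sum to zero -- properties already visible in Newton's early averaging.
slope, intercept = ols_fit([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8])
```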
History
The earliest regression form was seen in Isaac Newton's work in 1700 while studying equinoxes, being credited with introducing "an embryonic linear regression analysis" as "Not only did he perform the averaging of a set of data, 50 years before Tobias Mayer, but by summing the residuals to zero he forced the regression line to pass through the average point. He also distinguished between two inhomogeneous sets of data and might have thought of an optimal solution in terms of bias, though not in terms of effectiveness." He previously used an averaging method in his 1671 work on Newton's rings, which was unprecedented at the time.
The method of least squares was published by Legendre in 1805, and by Gauss in 1809. Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the Sun (mostly comets, but also later the then newly discovered minor planets). Gauss published a further development of the theory of least squares in 1821, including a version of the Gauss–Markov theorem.
The term "regression" was coined by Francis Galton in the 19th century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean).
For Galton, regression had only this biological meaning, but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context. In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925. Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.
In the 1950s and 1960s, economists used electromechanical desk calculators to calculate regressions. Before 1970, it sometimes took up to 24 hours to receive the result from one regression.
Regression methods continue to be an area of active research. In recent decades, new methods have been developed for robust regression, regression involving correlated responses such as time series and growth curves, regression in which the predictor (independent variable) or response variables are curves, images, graphs, or other complex data objects, regression methods accommodating various types of missing data, nonparametric regression, Bayesian methods for regression, regression in which the predictor variables are measured with error, regression with more predictor variables than observations, and causal inference with regression. Modern regression analysis is typically done with statistical and spreadsheet software packages on computers as well as on handheld scientific and graphing calculators.
Regression model
In practice, researchers first select a model they would like to estimate and then use their chosen method (e.g., ordinary least squares) to estimate the parameters of that model. Regression models involve the following components:
The unknown parameters, often denoted as a scalar or vector β.
The independent variables, which are observed in data and are often denoted as a vector X_i (where i denotes a row of data).
The dependent variable, which is observed in data and often denoted using the scalar Y_i.
The error terms, which are not directly observed in data and are often denoted using the scalar e_i.
In various fields of application, different terminologies are used in place of dependent and independent variables.
Most regression models propose that Y_i is a function f (the regression function) of X_i and β, with e_i representing an additive error term that may stand in for un-modeled determinants of Y_i or random statistical noise: Y_i = f(X_i, β) + e_i.
Note that the independent variables are assumed to be free of error. This important assumption is often overlooked, although errors-in-variables models can be used when the independent variables are assumed to contain errors.
The researchers' goal is to estimate the function f(X_i, β) that most closely fits the data. To carry out regression analysis, the form of the function f must be specified. Sometimes the form of this function is based on knowledge about the relationship between Y_i and X_i that does not rely on the data. If no such knowledge is available, a flexible or convenient form for f is chosen. For example, a simple univariate regression may propose f(X_i, β) = β_0 + β_1 X_i, suggesting that the researcher believes Y_i = β_0 + β_1 X_i + e_i to be a reasonable approximation for the statistical process generating the data.
Once researchers determine their preferred statistical model, different forms of regression analysis provide tools to estimate the parameters β. For example, least squares (including its most common variant, ordinary least squares) finds the value of β that minimizes the sum of squared errors Σ_i (Y_i − f(X_i, β))². A given regression method will ultimately provide an estimate of β, usually denoted β̂ to distinguish the estimate from the true (unknown) parameter value that generated the data. Using this estimate, the researcher can then use the fitted value Ŷ_i = f(X_i, β̂) for prediction or to assess the accuracy of the model in explaining the data. Whether the researcher is intrinsically interested in the estimate β̂ or the predicted value Ŷ_i will depend on context and their goals. As described in ordinary least squares, least squares is widely used because the estimated function f(X_i, β̂) approximates the conditional expectation E(Y_i | X_i). However, alternative variants (e.g., least absolute deviations or quantile regression) are useful when researchers want to model other functions f(X_i, β).
It is important to note that there must be sufficient data to estimate a regression model. For example, suppose that a researcher has access to N rows of data with one dependent and two independent variables: (Y_i, X_{1i}, X_{2i}). Suppose further that the researcher wants to estimate a bivariate linear model via least squares: Y_i = β_0 + β_1 X_{1i} + β_2 X_{2i} + e_i. If the researcher only has access to N = 2 data points, then they could find infinitely many combinations (β̂_0, β̂_1, β̂_2) that explain the data equally well: any combination can be chosen that satisfies Ŷ_i = β̂_0 + β̂_1 X_{1i} + β̂_2 X_{2i}, all of which lead to Σ ê_i² = 0 and are therefore valid solutions that minimize the sum of squared residuals. To understand why there are infinitely many options, note that the system of N = 2 equations is to be solved for 3 unknowns, which makes the system underdetermined. Alternatively, one can visualize infinitely many 3-dimensional planes that go through N = 2 fixed points.
More generally, to estimate a least squares model with k distinct parameters, one must have N ≥ k distinct data points. If N > k, then there does not generally exist a set of parameters that will perfectly fit the data. The quantity N − k appears often in regression analysis, and is referred to as the degrees of freedom in the model. Moreover, to estimate a least squares model, the independent variables must be linearly independent: one must not be able to reconstruct any of the independent variables by adding and multiplying the remaining independent variables. As discussed in ordinary least squares, this condition ensures that XᵀX is an invertible matrix and therefore that a unique solution β̂ exists.
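The counting argument above can be checked numerically. Below is a minimal sketch (with made-up data, not from the text) of the two-data-point, three-parameter case: a whole family of parameter choices drives the sum of squared residuals to zero.

```python
# Two data points and three parameters: the least-squares problem is
# underdetermined. The data below are made up for illustration.
points = [(1.0, 2.0, 5.0), (2.0, 1.0, 4.0)]  # rows of (x1, x2, y)

def residual_sum(b0, b1, b2):
    """Sum of squared residuals for the model y = b0 + b1*x1 + b2*x2."""
    return sum((y - (b0 + b1 * x1 + b2 * x2)) ** 2 for x1, x2, y in points)

# Every choice (b0, b1, b2) = (6 - 3t, t - 1, t) fits both points exactly,
# so infinitely many parameter vectors minimize the sum of squared residuals.
for t in (0.0, 1.0, 2.5):
    b0, b1, b2 = 6 - 3 * t, t - 1, t
    print(t, residual_sum(b0, b1, b2))  # prints 0.0 for every t
```

With a third, linearly independent data point the family collapses to a single solution, which is the N ≥ k requirement in miniature.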
Underlying assumptions
By itself, a regression is simply a calculation using the data. In order to interpret the output of regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions. These assumptions often include:
The sample is representative of the population at large.
The independent variables are measured with no error.
Deviations from the model have an expected value of zero, conditional on covariates: E(e_i | X_i) = 0.
The variance of the residuals is constant across observations (homoscedasticity).
The residuals are uncorrelated with one another. Mathematically, the variance–covariance matrix of the errors is diagonal.
A handful of conditions are sufficient for the least-squares estimator to possess desirable properties: in particular, the Gauss–Markov assumptions imply that the parameter estimates will be unbiased, consistent, and efficient in the class of linear unbiased estimators. Practitioners have developed a variety of methods to maintain some or all of these desirable properties in real-world settings, because these classical assumptions are unlikely to hold exactly. For example, modeling errors-in-variables can lead to reasonable estimates when the independent variables are measured with errors. Heteroscedasticity-consistent standard errors allow the variance of e_i to change across values of X_i. Correlated errors that exist within subsets of the data or follow specific patterns can be handled using clustered standard errors, geographically weighted regression, or Newey–West standard errors, among other techniques. When rows of data correspond to locations in space, the choice of how to model e_i within geographic units can have important consequences. The subfield of econometrics is largely focused on developing techniques that allow researchers to make reasonable real-world conclusions in real-world settings, where classical assumptions do not hold exactly.
Linear regression
In linear regression, the model specification is that the dependent variable, y_i, is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modeling n data points there is one independent variable, x_i, and two parameters, β_0 and β_1:
straight line: y_i = β_0 + β_1 x_i + ε_i,   i = 1, ..., n.
In multiple linear regression, there are several independent variables or functions of independent variables.
Adding a term in x_i² to the preceding regression gives:
parabola: y_i = β_0 + β_1 x_i + β_2 x_i² + ε_i,   i = 1, ..., n.
This is still linear regression; although the expression on the right hand side is quadratic in the independent variable x_i, it is linear in the parameters β_0, β_1 and β_2.
In both cases, ε_i is an error term and the subscript i indexes a particular observation.
Returning our attention to the straight line case: Given a random sample from the population, we estimate the population parameters and obtain the sample linear regression model: ŷ_i = β̂_0 + β̂_1 x_i.
The residual, e_i = y_i − ŷ_i, is the difference between the value of the dependent variable predicted by the model, ŷ_i, and the true value of the dependent variable, y_i. One method of estimation is ordinary least squares. This method obtains parameter estimates that minimize the sum of squared residuals, SSR: SSR = Σ_{i=1}^n e_i².
Minimization of this function results in a set of normal equations, a set of simultaneous linear equations in the parameters, which are solved to yield the parameter estimators, β̂_0 and β̂_1.
In the case of simple regression, the formulas for the least squares estimates are
β̂_1 = Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)²   and   β̂_0 = ȳ − β̂_1 x̄,
where x̄ is the mean (average) of the x values and ȳ is the mean of the y values.
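These closed-form formulas can be checked with a short sketch. The data values below are made up for illustration:

```python
# Closed-form least-squares estimates for the straight line
# y = b0 + b1*x, using the formulas above. Data are illustrative.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]

x_bar = sum(xs) / len(xs)  # mean of the x values
y_bar = sum(ys) / len(ys)  # mean of the y values

# slope: sum of (x - x̄)(y - ȳ) divided by sum of (x - x̄)²
b1_hat = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
          / sum((x - x_bar) ** 2 for x in xs))
# intercept: the fitted line passes through the point of means (x̄, ȳ)
b0_hat = y_bar - b1_hat * x_bar

ssr = sum((y - (b0_hat + b1_hat * x)) ** 2 for x, y in zip(xs, ys))
print(b1_hat, b0_hat, ssr)  # slope ≈ 1.95, intercept ≈ 0.15
```

Any other choice of slope and intercept yields a larger sum of squared residuals on these data.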
Under the assumption that the population error term has a constant variance, the estimate of that variance is given by: σ̂_ε² = SSR / (n − 2).
This is called the mean square error (MSE) of the regression. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n − p) for p regressors or (n − p − 1) if an intercept is used. In this case, p = 1 so the denominator is n − 2.
The standard errors of the parameter estimates are given by
σ̂_{β_0} = σ̂_ε √(1/n + x̄² / Σ(x_i − x̄)²)   and   σ̂_{β_1} = σ̂_ε √(1 / Σ(x_i − x̄)²).
Under the further assumption that the population error term is normally distributed, the researcher can use these estimated standard errors to create confidence intervals and conduct hypothesis tests about the population parameters.
General linear model
In the more general multiple regression model, there are p independent variables: y_i = β_1 x_{i1} + β_2 x_{i2} + ... + β_p x_{ip} + ε_i,
where x_{ij} is the i-th observation on the j-th independent variable.
If the first independent variable takes the value 1 for all i, x_{i1} = 1, then β_1 is called the regression intercept.
The least squares parameter estimates are obtained from p normal equations. The residual can be written as e_i = y_i − β̂_1 x_{i1} − ... − β̂_p x_{ip}.
The normal equations are Σ_{i=1}^n Σ_{k=1}^p x_{ij} x_{ik} β̂_k = Σ_{i=1}^n x_{ij} y_i,   j = 1, ..., p.
In matrix notation, the normal equations are written as (XᵀX) β̂ = Xᵀy,
where the ij element of X is x_{ij}, the i element of the column vector y is y_i, and the j element of β̂ is β̂_j. Thus X is n × p, y is n × 1, and β̂ is p × 1. The solution is β̂ = (XᵀX)⁻¹ Xᵀy.
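The matrix solution can be sketched for the smallest interesting case: a two-column design matrix (a column of ones for the intercept plus one regressor), with the 2 × 2 system inverted by hand. The data are made up for illustration:

```python
# Solve the normal equations (XᵀX) b = Xᵀy for a two-column design
# matrix. The data are constructed so that y = 1 + 2x exactly.
X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]
y = [3.0, 5.0, 7.0, 9.0]

# Form XᵀX (2 × 2) and Xᵀy (2 × 1) element by element.
xtx = [[sum(row[i] * row[j] for row in X) for j in range(2)] for i in range(2)]
xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(2)]

# Invert the 2 × 2 matrix explicitly and multiply: b = (XᵀX)⁻¹ Xᵀy.
det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
b0 = (xtx[1][1] * xty[0] - xtx[0][1] * xty[1]) / det
b1 = (xtx[0][0] * xty[1] - xtx[1][0] * xty[0]) / det
print(b0, b1)  # recovers the intercept 1.0 and slope 2.0
```

In practice one would not invert XᵀX explicitly; numerical libraries solve the system by factorization, which is more stable for larger p.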
Diagnostics
Once a regression model has been constructed, it may be important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include the R-squared, analyses of the pattern of residuals and hypothesis testing. Statistical significance can be checked by an F-test of the overall fit, followed by t-tests of individual parameters.
Interpretations of these diagnostic tests rest heavily on the model's assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are sometimes more difficult to interpret if the model's assumptions are violated. For example, if the error term does not have a normal distribution, in small samples the estimated parameters will not follow normal distributions, which complicates inference. With relatively large samples, however, a central limit theorem can be invoked such that hypothesis testing may proceed using asymptotic approximations.
Limited dependent variables
Limited dependent variables, which are response variables that are categorical variables or are variables constrained to fall only in a certain range, often arise in econometrics.
The response variable may be non-continuous ("limited" to lie on some subset of the real line). For binary (zero or one) variables, if analysis proceeds with least-squares linear regression, the model is called the linear probability model. Nonlinear models for binary dependent variables include the probit and logit model. The multivariate probit model is a standard method of estimating a joint relationship between several binary dependent variables and some independent variables. For categorical variables with more than two values there is the multinomial logit. For ordinal variables with more than two values, there are the ordered logit and ordered probit models. Censored regression models may be used when the dependent variable is only sometimes observed, and Heckman correction type models may be used when the sample is not randomly selected from the population of interest. An alternative to such procedures is linear regression based on polychoric correlation (or polyserial correlations) between the categorical variables. Such procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is positive with low values and represents the repetition of the occurrence of an event, then count models like the Poisson regression or the negative binomial model may be used.
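To illustrate why nonlinear models such as the logit are preferred for binary outcomes, the sketch below compares a linear probability model with a logit model. The coefficients are illustrative assumptions, not estimates from any dataset:

```python
import math

# Compare a linear probability model with a logit model for a binary
# outcome. The coefficients below are illustrative, not estimated.
b0, b1 = -2.0, 0.8

def linear_probability(x):
    """Linear probability model: predictions can fall outside [0, 1]."""
    return b0 + b1 * x

def logit_probability(x):
    """Logit model: the logistic link keeps predictions in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

for x in (0.0, 5.0, 10.0):
    print(x, linear_probability(x), round(logit_probability(x), 3))
# at x = 0 the linear model predicts -2.0 (an impossible probability)
# and at x = 10 it predicts 6.0, while the logit prediction always
# stays strictly between 0 and 1
```

The probit model behaves similarly, replacing the logistic function with the standard normal cumulative distribution function.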
Nonlinear regression
When the model function is not linear in the parameters, the sum of squares must be minimized by an iterative procedure. This introduces many complications which are summarized in Differences between linear and non-linear least squares.
Prediction (interpolation and extrapolation)
Regression models predict a value of the Y variable given known values of the X variables. Prediction within the range of values in the dataset used for model-fitting is known informally as interpolation. Prediction outside this range of the data is known as extrapolation. Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values.
A prediction interval that represents the uncertainty may accompany the point prediction. Such intervals tend to expand rapidly as the values of the independent variable(s) move outside the range covered by the observed data.
For such reasons and others, some tend to say that it might be unwise to undertake extrapolation.
Model selection
The assumption of a particular form for the relation between Y and X is another source of uncertainty. A properly conducted regression analysis will include an assessment of how well the assumed form is matched by the observed data, but it can only do so within the range of values of the independent variables actually available. This means that any extrapolation is particularly reliant on the assumptions being made about the structural form of the regression relationship. If this knowledge includes the fact that the dependent variable cannot go outside a certain range of values, this can be made use of in selecting the model – even if the observed dataset has no values particularly near such bounds. The implications of this step of choosing an appropriate functional form for the regression can be great when extrapolation is considered. At a minimum, it can ensure that any extrapolation arising from a fitted model is "realistic" (or in accord with what is known).
Power and sample size calculations
There are no generally agreed methods for relating the number of observations versus the number of independent variables in the model. One method conjectured by Good and Hardin is N = m^n, where N is the sample size, n is the number of independent variables and m is the number of observations needed to reach the desired precision if the model had only one independent variable. For example, a researcher is building a linear regression model using a dataset that contains 1000 patients (N = 1000). If the researcher decides that five observations are needed to precisely define a straight line (m = 5), then the maximum number of independent variables (n) the model can support is 4, because log 1000 / log 5 ≈ 4.29.
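The worked example above amounts to the following short calculation, a sketch of the Good–Hardin rule of thumb:

```python
import math

# Good and Hardin's rule of thumb N = m**n, rearranged for n: with
# N observations, and m observations needed per independent variable,
# the model supports at most log(N) / log(m) independent variables.
N = 1000  # patients in the dataset
m = 5     # observations judged sufficient to define a straight line

max_predictors = math.floor(math.log(N) / math.log(m))
print(max_predictors)  # 4, since 5**4 = 625 <= 1000 < 5**5 = 3125
```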
Other methods
Although the parameters of a regression model are usually estimated using the method of least squares, other methods which have been used include:
Bayesian methods, e.g. Bayesian linear regression
Percentage regression, for situations where reducing percentage errors is deemed more appropriate.
Least absolute deviations, which is more robust in the presence of outliers, leading to quantile regression
Nonparametric regression, which requires a large number of observations and is computationally intensive
Scenario optimization, leading to interval predictor models
Distance metric learning, which is performed by searching for a meaningful distance metric in a given input space.
Software
All major statistical software packages perform least squares regression analysis and inference. Simple linear regression and multiple regression using least squares can be done in some spreadsheet applications and on some calculators. While many statistical software packages can perform various types of nonparametric and robust regression, these methods are less standardized. Different software packages implement different methods, and a method with a given name may be implemented differently in different packages. Specialized regression software has been developed for use in fields such as survey analysis and neuroimaging.
| Mathematics | Statistics and probability | null |
827490 | https://en.wikipedia.org/wiki/Tilefish | Tilefish | Tilefishes are mostly small perciform marine fish comprising the family Malacanthidae. They are usually found in sandy areas, especially near coral reefs. They have a long life span, up to 46 years (females) and 39 years (males).
Commercial fisheries exist for the largest species, making them important food fish. However, the U.S. Food and Drug Administration warns pregnant or breastfeeding women against eating tilefish and some other fish due to mercury contamination.
Exceptionally colorful smaller species of tilefish are favored for aquariums.
Taxonomic issues
The family is further divided into two subfamilies: Latilinae, sometimes called the Branchiosteginae, and Malacanthinae. Some authors regard these subfamilies as two evolutionarily distinct families.
The placement of this family within the Eupercaria is still uncertain. The 5th edition of Fishes of the World classifies them within the Perciformes but in a grouping of seven families that may have a relationship to Acanthuroidei, Monodactylidae, and Priacanthidae, while other authorities place it outside the Perciformes, at an order level but with its true relationships being incertae sedis.
Subfamilies and genera
The following two subfamilies and five genera are classified within the family Malacanthidae; in total it contains 45 species.
subfamily Latilinae Gill, 1862
genus Branchiostegus Rafinesque, 1815
genus Caulolatilus Gill, 1862
genus Lopholatilus Goode & Bean, 1879
subfamily Malacanthinae Poey, 1861
genus Hoplolatilus Günther 1887
genus Malacanthus Cuvier 1829
Description
The two subfamilies appear to be morphologically different, with members of the Latilinae having deeper bodies bearing a predorsal ridge and heads rounded to squarish in profile. In contrast, members of the Malacanthinae are more slender, with elongated bodies lacking a predorsal ridge and more rounded heads. They also differ ecologically, with latilines typically occurring below 50 m and malacanthines shallower than 50 m depth.
Tilefish range in size from (yellow tilefish, Hoplolatilus luteus) to (great northern tilefish, Lopholatilus chamaeleonticeps) and a weight of .
Both subfamilies have long dorsal and anal fins, the latter having one or two spines. The gill covers (opercula) have one spine which may be sharp or blunt; some species also have a cutaneous ridge atop the head. The tail fin may range in shape from truncated to forked. Most species are fairly low-key in colour, commonly shades of yellow, brown, and gray. Notable exceptions include three small, vibrant Hoplolatilus species: the purple sand tilefish (H. purpureus), Starck's tilefish (H. starcki), and the redback sand tilefish (H. marcosi).
Tilefish larvae are notable for their elaborate spines. The family name Malacanthidae is based on the type genus Malacanthus, which is a compound of the Greek words malakos, meaning "soft", and akanthos, meaning "thorn", possibly derived from the slender, flexible spines in the dorsal fin of Malacanthus plumieri.
Habitat and diet
Generally shallow-water fish, tilefish are usually found at depths of 50–200 m in both temperate and tropical waters of the Atlantic, Pacific, and Indian Oceans. All species seek shelter in self-made burrows, caves at the bases of reefs, or piles of rock, often in canyons or at the edges of steep slopes. Either gravelly or sandy substrate may be preferred, depending on the species.
Most species are strictly marine; an exception is found in the blue blanquillo (Malacanthus latovittatus) which is known to enter the brackish waters of Papua New Guinea's Goldie River.
Tilefish feed primarily on small benthic invertebrates, especially crustaceans such as crab and shrimp. Mollusks, worms, sea urchins, and small fish are also taken.
After the 1882 mass die-off, great northern tilefish were thought to be extinct until a large number were caught in 1910 near New Bedford, Massachusetts.
Behaviour and reproduction
Active fish, tilefish keep to themselves and generally stay at or near the bottom. They rely heavily on their keen eyesight to catch their prey. If approached, the fish quickly dive into their constructed retreats, often head-first. The chameleon sand tilefish (Hoplolatilus chlupatyi) relies on its remarkable ability to rapidly change colour (with a wide range) to evade predators.
Many species form monogamous pairs, while some are solitary in nature (e.g., ocean whitefish, Caulolatilus princeps), and others colonial. Some species, such as the rare pastel tilefish (Hoplolatilus fronticinctus) of the Indo-Pacific, actively build large rubble mounds above which they school and in which they live. These mounds serve as both refuge and as a microecosystem for other reef species.
The reproductive habits of tilefish are not well studied. Spawning occurs throughout the spring and summer; all species are presumed not to guard their broods. Eggs are small and made buoyant by oil. The larvae are pelagic and drift until the fish have reached the juvenile stage.
Timeline
The relative extent of Branchiostegus in the archeological record:
Health effects
Tilefish from the Gulf of Mexico have been shown to have high levels of mercury, and the FDA has recommended against their consumption by pregnant women. Atlantic Ocean tilefish may have lower levels of mercury and may be safer to consume.
| Biology and health sciences | Acanthomorpha | Animals |
3588331 | https://en.wikipedia.org/wiki/Scalar%20%28mathematics%29 | Scalar (mathematics) | A scalar is an element of a field which is used to define a vector space.
In linear algebra, real numbers or generally elements of a field are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. Generally speaking, a vector space may be defined by using any field instead of real numbers (such as complex numbers). Then scalars of that vector space will be elements of the associated field (such as complex numbers).
A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space.
A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector.
The term scalar is also sometimes used informally to mean a vector, matrix, tensor, or other, usually, "compound" value that is actually reduced to a single component. Thus, for example, the product of a 1 × n matrix and an n × 1 matrix, which is formally a 1 × 1 matrix, is often said to be a scalar.
The real component of a quaternion is also called its scalar part.
The term scalar matrix is used to denote a matrix of the form kI where k is a scalar and I is the identity matrix.
Etymology
The word scalar derives from the Latin word scalaris, an adjectival form of scala (Latin for "ladder"), from which the English word scale also comes. The first recorded usage of the word "scalar" in mathematics occurs in François Viète's Analytic Art (In artem analyticem isagoge) (1591):
Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another may be called scalar terms.
(Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, vocentur Scalares.)
According to a citation in the Oxford English Dictionary the first recorded usage of the term "scalar" in English came with W. R. Hamilton in 1846, referring to the real part of a quaternion:
The algebraically real part may receive, according to the question in which it occurs, all values contained on the one scale of progression of numbers from negative to positive infinity; we shall call it therefore the scalar part.
Definitions and properties
Scalars of vector spaces
A vector space is defined as a set of vectors (additive abelian group), a set of scalars (field), and a scalar multiplication operation that takes a scalar k and a vector v to form another vector kv. For example, in a coordinate space, the scalar multiplication k(v_1, v_2, ..., v_n) yields (kv_1, kv_2, ..., kv_n). In a (linear) function space, kf is the function x ↦ k f(x).
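Both forms of scalar multiplication above can be sketched directly; the values here are illustrative:

```python
# Scalar multiplication in a coordinate space multiplies each
# component by k; in a function space it scales function values.
k = 3.0
v = (1.0, -2.0, 4.0)

kv = tuple(k * vi for vi in v)  # componentwise: (3.0, -6.0, 12.0)

def f(x):
    return x ** 2        # a "vector" in a function space

def kf(x):
    return k * f(x)      # the scalar multiple k·f

print(kv, kf(2.0))  # (3.0, -6.0, 12.0) and 12.0
```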
The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields.
Scalars as vector components
According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a field K is isomorphic to the corresponding coordinate vector space where each coordinate consists of elements of K (E.g., coordinates (a1, a2, ..., an) where ai ∈ K and n is the dimension of the vector space in consideration.). For example, every real vector space of dimension n is isomorphic to the n-dimensional real space Rn.
Scalars in normed vector spaces
Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|. If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space (or normed linear space).
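The homogeneity property ||kv|| = |k|·||v|| can be checked on a small example, here using the Euclidean norm (an illustrative choice of norm):

```python
import math

# Check the defining property ||k·v|| = |k|·||v|| for the Euclidean norm.
def norm(v):
    """Euclidean norm of a vector given as a tuple of components."""
    return math.sqrt(sum(vi * vi for vi in v))

v = (3.0, 4.0)
k = -2.0
kv = tuple(k * vi for vi in v)  # the scalar multiple (-6.0, -8.0)

print(norm(v), norm(kv))  # 5.0 and 10.0, i.e. |-2| * 5.0
```

Note that the scalar |k|, not k itself, scales the length: multiplying by a negative scalar reverses direction but still stretches the length by |k|.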
The norm is usually defined to be an element of V's scalar field K, which restricts the latter to fields that support the notion of sign. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded, but the surd field is acceptable. For this reason, not every scalar product space is a normed vector space.
Scalars in modules
When the requirement that the set of scalars form a field is relaxed so that it need only form a ring (so that, for example, the division of scalars need not be defined, or the scalars need not be commutative), the resulting more general algebraic structure is called a module.
In this case the "scalars" may be complicated objects. For instance, if R is a ring, the vectors of the product space Rn can be made into a module with the n × n matrices with entries from R as the scalars. Another example comes from manifold theory, where the space of sections of the tangent bundle forms a module over the algebra of real functions on the manifold.
Scaling transformation
The scalar multiplication of vector spaces and modules is a special case of scaling, a kind of linear transformation.
| Mathematics | Linear algebra | null |
3588425 | https://en.wikipedia.org/wiki/Scalar%20%28physics%29 | Scalar (physics) | Scalar quantities or simply scalars are physical quantities that can be described by a single pure number (a scalar, typically a real number), accompanied by a unit of measurement, as in "10 cm" (ten centimeters).
Examples of scalar quantities are length, mass, charge, volume, and time.
Scalars may represent the magnitude of physical quantities; for example, speed is the magnitude of velocity. Scalars do not represent a direction.
Scalars are unaffected by changes to a vector space basis (i.e., a coordinate rotation) but may be affected by translations (as in relative speed).
A change of a vector space basis changes the description of a vector in terms of the basis used but does not change the vector itself, while a scalar has nothing to do with this change. In classical physics, like Newtonian mechanics, rotations and reflections preserve scalars, while in relativity, Lorentz transformations or space-time translations preserve scalars. The term "scalar" has origin in the multiplication of vectors by a unitless scalar, which is a uniform scaling transformation.
Relationship with the mathematical concept
A scalar in physics and other areas of science is also a scalar in mathematics, as an element of a mathematical field used to define a vector space. For example, the magnitude (or length) of an electric field vector is calculated as the square root of its absolute square (the inner product of the electric field with itself); so, the inner product's result is an element of the mathematical field for the vector space in which the electric field is described. As the vector space in this example and usual cases in physics is defined over the mathematical field of real numbers or complex numbers, the magnitude is also an element of the field, so it is mathematically a scalar. Since the inner product is independent of any vector space basis, the electric field magnitude is also physically a scalar.
The mass of an object is unaffected by a change of vector space basis so it is also a physical scalar, described by a real number as an element of the real number field. Since a field is a vector space with addition defined based on vector addition and multiplication defined as scalar multiplication, the mass is also a mathematical scalar.
Scalar field
Since scalars mostly may be treated as special cases of multi-dimensional quantities such as vectors and tensors, physical scalar fields might be regarded as a special case of more general fields, like vector fields, spinor fields, and tensor fields.
Units
Like other physical quantities, a physical quantity of scalar is also typically expressed by a numerical value and a physical unit, not merely a number, to provide its physical meaning. It may be regarded as the product of the number and the unit (e.g., 1 km as a physical distance is the same as 1,000 m). A physical distance does not depend on the length of each base vector of the coordinate system where the base vector length corresponds to the physical distance unit in use. (E.g., 1 m base vector length means the meter unit is used.) A physical distance differs from a metric in the sense that it is not just a real number while the metric is calculated to a real number, but the metric can be converted to the physical distance by converting each base vector length to the corresponding physical unit.
Any change of a coordinate system may affect the formula for computing scalars (for example, the Euclidean formula for distance in terms of coordinates relies on the basis being orthonormal), but not the scalars themselves. Vectors themselves likewise do not change under a change of coordinate system, but their descriptions change (e.g., the numbers representing a position vector change when the coordinate system is rotated).
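The invariance described above can be illustrated with a short sketch (plain Python; the vector components are hypothetical): rotating the coordinate system changes a vector's components but leaves its magnitude, a scalar, unchanged.

```python
import math

def rotate(v, theta):
    """Rotate a 2-D vector v = (x, y) by angle theta (radians)."""
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def magnitude(v):
    """Scalar magnitude: square root of the inner product of v with itself."""
    return math.sqrt(sum(c * c for c in v))

# Describing the same vector in a rotated coordinate system changes its
# components but not its magnitude -- the magnitude is a scalar.
e_field = (3.0, 4.0)                  # hypothetical field components
rotated = rotate(e_field, math.pi / 6)

print(magnitude(e_field))             # 5.0
print(magnitude(rotated))             # also 5.0, up to floating-point rounding
```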
Classical scalars
An example of a scalar quantity is temperature: the temperature at a given point is a single number. Velocity, on the other hand, is a vector quantity.
Other examples of scalar quantities are mass, charge, volume, time, speed, pressure, and electric potential at a point inside a medium. The distance between two points in three-dimensional space is a scalar, but the direction from one of those points to the other is not, since describing a direction requires two physical quantities such as the angle on the horizontal plane and the angle away from that plane. Force cannot be described using a scalar, since force has both direction and magnitude; however, the magnitude of a force alone can be described with a scalar, for instance the gravitational force acting on a particle is not a scalar, but its magnitude is. The speed of an object is a scalar (e.g., 180 km/h), while its velocity is not (e.g. a velocity of 180 km/h in a roughly northwest direction might consist of 108 km/h northward and 144 km/h westward).
Some other examples of scalar quantities in Newtonian mechanics are electric charge and charge density.
Relativistic scalars
In the theory of relativity, one considers changes of coordinate systems that trade space for time. As a consequence, several physical quantities that are scalars in "classical" (non-relativistic) physics need to be combined with other quantities and treated as four-vectors or tensors. For example, the charge density at a point in a medium, which is a scalar in classical physics, must be combined with the local current density (a 3-vector) to comprise a relativistic 4-vector. Similarly, energy density must be combined with momentum density and pressure into the stress–energy tensor.
Examples of scalar quantities in relativity include electric charge, spacetime interval (e.g., proper time and proper length), and invariant mass.
Pseudoscalar
River island

A river island is any exposed landmass surrounded by river water. Properly defined, it excludes shoals between seasonally varying flows and may exclude semi-coastal islands in river deltas such as Marajó.
These islands result from changes in the course of a river. Such changes may be caused by interactions with a tributary, or by the opposing fluvial actions of deposition and/or erosion that form a natural cut and meander. Nascent vegetation-free shoals and mudflats may dissipate and shift or build up into such islands through deposition; the process may be assisted through artificial reinforcement or natural factors, such as reeds, palms, evergreen trees or willows, that act as obstacles or erosion barriers, so that water flows around them. Islands may be small or large, covering many square kilometers, examples of which are given below.
Regional nomenclature
The term "towhead" refers to an islet (small island) or shoal within a river (most often the Mississippi River) having a grouping or thicket of trees, and is often used in the Midwestern United States. Many rivers, if wide enough, can contain considerably large islands. The term "towhead" was popularised by Mark Twain's Adventures of Huckleberry Finn.
In England, a river island in the Thames is referred to as an "ait" (or "eyot").
Largest and smallest
Majuli (a non-coastal landmass between two banks of a river), located in the Brahmaputra River in Assam, India, is recognised by Guinness World Records as the world's largest inhabited riverine island, at .
The Encyclopædia Britannica cites another large non-coastal landmass, Bananal Island (an island that divides the Araguaia River into two branches over a 320 km (200-mile) length of water), located in Tocantins, central Brazil, to be the world's largest river island instead, at .
However, some geologists do not consider Bananal Island a riverine island, since they regard the Araguaia River as forming two distributaries, with Bananal Island being the landmass between them. Even so, Bananal Island is technically an island, as it does not touch the main landmass at any point.
Umananda Island, at , is among contenders as the smallest permanently-inhabited river island, or islet, with fixed dwellings. Umananda also lies in the Brahmaputra River. Many inhabited islands as small as Umananda, or smaller, exist in the Amazon basin and in Bangladesh. Another island of comparable size to Umananda, Hatfield Island in the Guyandotte River in the U.S. state of West Virginia, has no permanent population, but contains several permanent buildings, namely the K–12 schools serving the city of Logan and its surrounding area plus the main branch of the Logan County public library.
On canalised rivers, such as the Thames and the Seine, one-home islands exist, containing houses constructed of permanent materials. Canals reduce erosion of the islands and in particular limit the height of flash flooding by maintaining substantial "heads" of water through barrages. One-home islands improved by river canals include Monkey, Friday, Holm and D'Oyly Carte islands.
Lists of river islands
River islands by area
Note: Includes some river islands that also have an ocean coast.
Most populous river islands
This list ranks river islands with a population of at least 25,000.
Amoebiasis

Amoebiasis, or amoebic dysentery, is an infection of the intestines caused by the parasitic amoeba Entamoeba histolytica. Amoebiasis can present with no, mild, or severe symptoms. Symptoms may include lethargy, loss of weight, colonic ulcerations, abdominal pain, diarrhea, or bloody diarrhea. Complications can include inflammation and ulceration of the colon with tissue death or perforation, which may result in peritonitis. Anemia may develop due to prolonged gastrointestinal bleeding.
Cysts of Entamoeba can survive for up to a month in soil or for up to 45 minutes under fingernails. Invasion of the intestinal lining results in bloody diarrhea. If the parasite reaches the bloodstream it can spread through the body, most frequently ending up in the liver, where it can cause amoebic liver abscesses. Liver abscesses can occur without previous diarrhea. Diagnosis is made by stool examination using microscopy, but it can be difficult to distinguish E. histolytica from other, harmless Entamoeba species. An increased white blood cell count may be present in severe cases. The most accurate test is finding specific antibodies in the blood, but it may remain positive following treatment. Bacterial colitis can result in similar symptoms.
Prevention of amoebiasis is by improved sanitation, including separating food and water from faeces. There is no vaccine. There are two treatment options depending on the location of the infection. Amoebiasis in tissues is treated with either metronidazole, tinidazole, nitazoxanide, dehydroemetine or chloroquine. Luminal infection is treated with diloxanide furoate or iodoquinol. Effective treatment against all stages of the disease may require a combination of medications. Infections without symptoms may be treated with just one antibiotic, and infections with symptoms are treated with two antibiotics.
Amoebiasis is present all over the world, though most cases occur in the developing world. About 480 million people are currently infected with about 40 million new cases per year with significant symptoms. This results in the death of between 40,000–100,000 people a year. The first case of amoebiasis was documented in 1875. In 1891, the disease was described in detail, resulting in the terms amoebic dysentery and amoebic liver abscess. Further evidence from the Philippines in 1913 found that upon swallowing cysts of E. histolytica volunteers developed the disease.
Signs and symptoms
Most infected people, about 90%, are asymptomatic, but this disease has the potential to become serious. It is estimated that about 40,000 to 100,000 people worldwide die annually due to amoebiasis.
Infections can sometimes last for years if there is no treatment. Symptoms take from a few days to a few weeks to develop and manifest themselves, but usually it is about two to four weeks. Symptoms can range from mild diarrhea to dysentery with blood, coupled with intense abdominal pains. Complications of invasive infection include colitis and liver, lung, or brain abscesses. The blood comes from bleeding lesions created by the amoebae invading the lining of the colon. In about 10% of invasive cases the amoebae enter the bloodstream and may travel to other organs in the body. Most commonly this means the liver, as this is where blood from the intestine reaches first, but they can end up almost anywhere in the body.
Onset time is highly variable and the average asymptomatic infection persists for over a year. It is theorized that the absence of symptoms or their intensity may vary with such factors as strain of amoeba, immune response of the host, and perhaps associated bacteria and viruses.
In asymptomatic infections, the amoeba lives by eating and digesting bacteria and food particles in the gut, a part of the gastrointestinal tract. It does not usually come in contact with the intestine itself due to the protective layer of mucus that lines the gut. Disease occurs when amoeba comes in contact with the cells lining the intestine. It then secretes the same substances it uses to digest bacteria, which include enzymes that destroy cell membranes and proteins. This process can lead to penetration and digestion of human tissues, resulting first in flask-shaped ulcerations in the intestine. Entamoeba histolytica ingests the destroyed cells by phagocytosis and is often seen with red blood cells (a process known as erythrophagocytosis) inside when viewed in stool samples. Especially in Latin America, a granulomatous mass (known as an amoeboma) may form in the wall of the ascending colon or rectum due to long-lasting immunological cellular response, and is sometimes confused with cancer.
The ingestion of one viable cyst may cause an infection.
Steroid therapy can occasionally provoke severe amoebic colitis in people with any E. histolytica infection. This bears high mortality: on average more than 50% with severe colitis die.
Cause
Amoebiasis is an infection caused by the amoeba Entamoeba histolytica.
Transmission
Amoebiasis is usually transmitted by the fecal-oral route, but it can also be transmitted indirectly through contact with dirty hands or objects as well as by anal-oral contact. Infection is spread through ingestion of the cyst form of the parasite, a semi-dormant and hardy structure found in feces. Any non-encysted amoebae, or trophozoites, die quickly after leaving the body but may also be present in stool: these are rarely the source of new infections. Since amoebiasis is transmitted through contaminated food and water, it is often endemic in regions of the world with limited modern sanitation systems, including México, Central America, western South America, South Asia, and western and southern Africa.
Amoebic dysentery is one form of traveler's diarrhea, although most traveler's diarrhea is bacterial or viral in origin.
Pathogenesis
Amoebiasis results from tissue destruction induced by the E. histolytica parasite.
E. histolytica causes tissue damage by three main events: direct host cell killing, inflammation, and parasite invasion.
The pathogenesis of amoebiasis involves interplay of various molecules secreted by E. histolytica such as LPPG, lectins, cysteine proteases, and amoebapores. Lectins help in the attachment of the parasite to the mucosal layer of the host during invasion. The amoebapores destroy the ingested bacteria present in the colonic environment. Cysteine proteases lyse the host tissues. Other molecules such as PATMK, myosins, G proteins, C2PK, CaBP3, and EhAK1 play an important role in the process of phagocytosis.
Diagnosis
With colonoscopy it is possible to detect small ulcers of between 3 and 5 mm, but diagnosis may be difficult, as the mucous membrane between these areas can look either healthy or inflamed.
Trophozoites may be identified at the ulcer edge or within the tissue, using immunohistochemical staining with specific anti-E. histolytica antibodies.
Asymptomatic human infections are usually diagnosed by finding cysts shed in the stool. Various flotation or sedimentation procedures have been developed to recover the cysts from fecal matter and stains help to visualize the isolated cysts for microscopic examination. Since cysts are not shed constantly, a minimum of three stools are examined. In symptomatic infections, the motile form (the trophozoite) is often seen in fresh feces. Serological tests exist, and most infected individuals (with symptoms or not) test positive for the presence of antibodies. The levels of antibody are much higher in individuals with liver abscesses. Serology only becomes positive about two weeks after infection. More recent developments include a kit that detects the presence of amoeba proteins in the feces, and another that detects amoeba DNA in feces. These tests are not in widespread use due to their expense.
Microscopy is still by far the most widespread method of diagnosis around the world. However, it is not as sensitive or accurate in diagnosis as the other tests available. It is important to distinguish the E. histolytica cyst from the cysts of nonpathogenic intestinal protozoa such as Entamoeba coli by its appearance. E. histolytica cysts have a maximum of four nuclei, while the commensal Entamoeba coli cyst has up to 8 nuclei. Additionally, in E. histolytica, the endosome is centrally located in the nucleus, while it is usually off-center in Entamoeba coli. Finally, chromatoidal bodies in E. histolytica cysts are rounded, while they are jagged in Entamoeba coli. However, other species, Entamoeba dispar and E. moshkovskii, are also commensals and cannot be distinguished from E. histolytica under the microscope. As E. dispar is much more common than E. histolytica in most parts of the world, this means that there is a lot of incorrect diagnosis of E. histolytica infection taking place. The WHO recommends that infections diagnosed by microscopy alone should not be treated if they are asymptomatic and there is no other reason to suspect that the infection is actually E. histolytica. Detection of cysts or trophozoites in stools under the microscope may require examination of several samples over several days to determine if they are present, because cysts are shed intermittently and may not show up in every sample.
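The morphological criteria above (nuclei count, endosome position, chromatoidal-body shape) can be summarised as a toy decision rule. This is only a didactic sketch of the criteria as stated, not a diagnostic tool; the feature encoding is invented for illustration.

```python
def classify_cyst(nuclei, endosome_central, chromatoid_rounded):
    """Toy differential of Entamoeba cysts from the three criteria in the text.

    nuclei: number of visible nuclei (E. histolytica max 4, E. coli up to 8)
    endosome_central: True if the endosome sits centrally in the nucleus
    chromatoid_rounded: True if chromatoidal bodies have rounded ends
    """
    if nuclei > 4:
        # More than four nuclei rules out E. histolytica.
        return "Entamoeba coli"
    if endosome_central and chromatoid_rounded:
        # Morphologically consistent with E. histolytica -- but E. dispar and
        # E. moshkovskii look identical under the microscope, so microscopy
        # alone cannot distinguish them; confirmation needs antigen or PCR tests.
        return "E. histolytica/dispar/moshkovskii complex"
    # Off-center endosome or jagged chromatoidal bodies suggest E. coli.
    return "Entamoeba coli (likely)"
```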
Typically, the organism can no longer be found in the feces once the disease goes extra-intestinal. Serological tests are useful in detecting infection by E. histolytica if the organism goes extra-intestinal and in excluding the organism from the diagnosis of other disorders. An Ova & Parasite (O&P) test or an E. histolytica fecal antigen assay is the proper assay for intestinal infections. Since antibodies may persist for years after clinical cure, a positive serological result may not necessarily indicate an active infection. A negative serological result, however, can be equally important in excluding suspected tissue invasion by E. histolytica.
Stool antigen detection tests have helped to overcome some of the limitations of stool microscopy. Antigen detection tests are easy to use, but they have variable sensitivity and specificity, especially in low-endemic areas.
Polymerase chain reaction (PCR) is considered the gold standard for diagnosis but remains underutilized.
Prevention
To help prevent the spread of amoebiasis around the home:
Wash hands thoroughly with soap and hot running water for at least 10 seconds after using the toilet or changing a baby's diaper, and before handling food.
Clean bathrooms and toilets often; pay particular attention to toilet seats and taps.
Avoid sharing towels or face washers.
To help prevent infection:
Avoid raw vegetables when in endemic areas, as they may have been fertilized using human feces.
Boil water or treat with iodine tablets.
Avoid eating street foods, especially in public places where others are sharing sauces in one container.
Good sanitary practice, as well as responsible sewage disposal or treatment, is necessary for the prevention of E. histolytica infection on an endemic level. E. histolytica cysts are usually resistant to chlorination; therefore, sedimentation and filtration of water supplies are necessary to reduce the incidence of infection.
E. histolytica cysts may be recovered from contaminated food by methods similar to those used for recovering Giardia lamblia cysts from feces. Filtration is probably the most practical method for recovery from drinking water and liquid foods. E. histolytica cysts must be distinguished from cysts of other parasitic (but nonpathogenic) protozoa and from cysts of free-living protozoa as discussed above. Recovery procedures are not very accurate; cysts are easily lost or damaged beyond recognition, which leads to many falsely negative results in recovery tests.
Treatment
E. histolytica infections occur in both the intestine and (in people with symptoms) in tissue of the intestine and/or liver. Those with symptoms require treatment with two medications, an amoebicidal tissue-active agent and a luminal cysticidal agent. Individuals that are asymptomatic only need a luminal cysticidal agent.
Prognosis
In the majority of cases, amoebas remain in the gastrointestinal tract of the hosts. Severe ulceration of the gastrointestinal mucosal surfaces occurs in less than 16% of cases. In fewer cases, the parasite invades the soft tissues, most commonly the liver. Only rarely are masses formed (amoebomas) that lead to intestinal obstruction; these may be mistaken for carcinoma of the caecum or an appendicular mass. Other local complications include bloody diarrhea, and pericolic and pericaecal abscesses.
Complications of hepatic amoebiasis include subdiaphragmatic abscess, perforation of the diaphragm into the pericardium and pleural cavity, perforation into the abdominal cavity (amoebic peritonitis), and perforation of the skin (amoebiasis cutis).
Pulmonary amoebiasis can arise from liver lesions by spread through the blood or by perforation of the pleural cavity and lung. It can cause lung abscess, pulmonopleural fistula, empyema of the lung, and bronchopleural fistula. The parasite can also reach the brain through blood vessels and cause amoebic brain abscess and amoebic meningoencephalitis. Cutaneous amoebiasis can also occur in skin around sites of colostomy wounds, the perianal region, regions overlying visceral lesions, and at the site of drainage of a liver abscess.
Urogenital tract amoebiasis derived from intestinal lesions can cause amoebic vulvovaginitis (May's disease), rectovesical fistula, and rectovaginal fistula.
Entamoeba histolytica infection is associated with malnutrition and stunting of growth in children.
Epidemiology
Amoebiasis caused about 55,000 deaths worldwide in 2010, down from 68,000 in 1990.
In older textbooks it is often stated that 10% of the world's population is infected with Entamoeba histolytica. Since most of those infections are now attributed to the morphologically identical E. dispar, there are up to 50 million true E. histolytica infections, and approximately seventy thousand people die each year, mostly from liver abscesses or other complications. Although usually considered a tropical parasite, the first case reported (in 1875) was actually in St Petersburg in Russia, near the Arctic Circle. Infection is more common in warmer areas, but this is because of both poorer hygiene and the parasitic cysts surviving longer in warm, moist conditions.
History
Amoebiasis was first described by Fedor A. Lösch in 1875, in northern Russia. The most dramatic incident in the US was the Chicago World's Fair outbreak in 1933, caused by contaminated drinking water. There were more than a thousand cases, with 98 deaths. It has been known since 1897 that at least one non-disease-causing species of Entamoeba existed (Entamoeba coli), but the WHO first formally recognized in 1997 that what had been called E. histolytica was actually two species, despite this having first been proposed in 1925. In addition to the now-recognized E. dispar, evidence shows there are at least two other species of Entamoeba that look the same in humans: E. moshkovskii and Entamoeba bangladeshi. The reason these species were not differentiated until recently is the historical reliance on appearance alone for identification.
Joel Connolly of the Chicago Bureau of Sanitary Engineering brought the outbreak to an end when he found that defective plumbing permitted sewage to contaminate drinking water. In 1998 there was an outbreak of amoebiasis in the Republic of Georgia. Between 26 May and 3 September 1998, 177 cases were reported, including 71 cases of intestinal amoebiasis and 106 probable cases of liver abscess.
The Nicobarese people have attested to the medicinal properties found in Glochidion calocarpum, a plant common to India, saying that its bark and seed are most effective in curing abdominal disorders associated with amoebiasis.
Society and culture
An outbreak of amoebic dysentery occurs in Diana Gabaldon's novel A Breath of Snow and Ashes.
Storm

A storm is any disturbed state of the natural environment or the atmosphere of an astronomical body. It may be marked by significant disruptions to normal conditions such as strong wind, tornadoes, hail, thunder and lightning (a thunderstorm), heavy precipitation (snowstorm, rainstorm), heavy freezing rain (ice storm), strong winds (tropical cyclone, windstorm), or wind transporting some substance through the atmosphere, as in a dust storm, among other forms of severe weather.
Storms have the potential to harm lives and property via storm surge, heavy rain or snow causing flooding or road impassibility, lightning, wildfires, and vertical and horizontal wind shear. Systems with significant rainfall and duration help alleviate drought in places they move through. Heavy snowfall can allow special recreational activities to take place which would not be possible otherwise, such as skiing and snowmobiling.
The English word comes from Proto-Germanic *sturmaz meaning "noise, tumult".
Storms are created when a center of low pressure develops with a system of high pressure surrounding it. This combination of opposing forces can create winds and result in the formation of storm clouds such as cumulonimbus. Small localized areas of low pressure can form from hot air rising off hot ground, resulting in smaller disturbances such as dust devils and whirlwinds.
Types
There are many varieties and names for storms:
Blizzard There are varying definitions for blizzards, both over time and by location. In general, a blizzard is accompanied by gale-force winds, heavy snow (accumulating at a rate of at least 5 centimeters (2 in) per hour), and very cold conditions (below approximately −10 °C or 14 °F). Lately, the temperature criterion has fallen out of the definition across the United States.
Bomb cyclone A rapid deepening of a mid-latitude cyclonic low-pressure area, typically occurring over the ocean, but can occur over land. The winds experienced during these storms can be as powerful as that of a typhoon or hurricane.
Coastal storm Large wind waves and/or storm surge that strike the coastal zone. Their impacts include coastal erosion and coastal flooding.
Derecho A derecho is a widespread, long-lived, straight-line wind storm that is associated with a land-based, fast-moving group of severe thunderstorms.
Dust devil A small, localized updraft of rising air.
Dust storm A situation in which winds pick up large quantities of sand or soil, greatly reducing visibility.
Firestorm Firestorms are conflagrations which attain such intensity that they create and sustain their own wind systems. It is most commonly a natural phenomenon, created during some of the largest bush fires, forest fires, and wildfires. The Peshtigo Fire is one example of a firestorm. Firestorms can also be deliberate effects of targeted explosives, such as occurred as a result of the aerial bombings of Dresden. Nuclear detonations generate firestorms if high winds are not present.
Gale An extratropical storm with sustained winds between 34 and 48 knots (39–55 mph or 63–90 km/h).
Hailstorm A type of storm that precipitates round chunks of ice. Hailstorms usually occur during regular thunderstorms. While most of the hail that precipitates from the clouds is fairly small and virtually harmless, there are occasional occurrences of hail greater than 2 inches (5 cm) in diameter that can cause much damage and injuries.
Ice storm Ice storms are one of the most dangerous forms of winter storms. When surface temperatures are below freezing, but a thick layer of above-freezing air remains aloft, rain can fall into the freezing layer and freeze upon impact into a glaze of ice. In general, only a small accumulation is required, especially in combination with breezy conditions, to start downing power lines as well as tree limbs. Ice storms also make unheated road surfaces too slick to drive upon. Ice storms can vary in time range from hours to days and can cripple small towns and large metropolitan cities alike.
Microburst A very powerful windstorm produced during a thunderstorm that only lasts a few minutes.
Ocean storm or sea storm Storm conditions out at sea are defined as having sustained winds of 48 knots (55 mph or 90 km/h) or greater. Usually just referred to as a storm, these systems can sink vessels of all types and sizes.
Nor'wester A powerful storm coming from a north-westerly direction, associated with heavy gusts, hail, and thunderstorms. It usually occurs in eastern India and Bangladesh in late spring and early summer.
Snowstorm A heavy fall of snow accumulating at a rate of more than 5 centimeters (2 in) per hour that lasts several hours. Snow storms, especially ones with a high liquid equivalent and breezy conditions, can down tree limbs, cut off power connections and paralyze travel over large regions.
Squall Sudden onset of wind increase of at least 16 knots (30 km/h) or greater sustained for at least one minute.
Thunderstorm A thunderstorm is a type of storm that generates both lightning and thunder. It is normally accompanied by heavy precipitation. Thunderstorms occur throughout the world, with the highest frequency in tropical rainforest regions where there are conditions of high humidity and temperature along with atmospheric instability. These storms occur when high levels of condensation form in a volume of unstable air that generates deep, rapid, upward motion in the atmosphere. The heat energy creates powerful rising air currents that swirl upwards to the tropopause. Cool descending air currents produce strong downdraughts below the storm. After the storm has spent its energy, the rising currents die away and downdraughts break up the cloud. Individual storm clouds can measure 2–10 km across.
Tornado A tornado is a violent, destructive whirlwind storm occurring on land. Usually its appearance is that of a dark, funnel-shaped cloud. Often tornadoes are preceded by or associated with thunderstorms and a wall cloud. They are often called the most destructive of storms, and while they form all over the planet, the interior of the United States is the most prone area, especially throughout Tornado Alley.
Tropical cyclone A tropical cyclone is a storm system with a closed circulation around a centre of low pressure, fueled by the heat released when moist air rises and condenses. The name underscores its origin in the tropics and their cyclonic nature. Tropical cyclones are distinguished from other cyclonic storms such as nor'easters and polar lows by the heat mechanism that fuels them, which makes them "warm core" storm systems. Tropical cyclones form in the oceans if the conditions in the area are favorable, and depending on their strength and location, there are various terms by which they are called, such as tropical depression, tropical storm, hurricane and typhoon.
Wind storm A storm marked by high wind with little or no precipitation. Windstorm damage often opens the door for massive amounts of water and debris to cause further damage to a structure. European windstorms and derechos are two types of windstorm. High wind is also the cause of sandstorms in dry climates.
Classification
A strict meteorological definition of a terrestrial storm is a wind measuring 10 or higher on the Beaufort scale, meaning a wind speed of 24.5 m/s (89 km/h, 55 mph) or more; however, popular usage is not so restrictive. Storms can last anywhere from 12 to 200 hours, depending on season and geography. In North America, storms from the east and northeast are noted for their frequency and duration, especially during the cold period. Big terrestrial storms alter the oceanographic conditions, which in turn may affect food abundance and distribution: strong currents, strong tides, increased siltation, changes in water temperature, overturn in the water column, etc.
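The Beaufort threshold above can be checked numerically. The sketch below uses the common empirical relation v ≈ 0.836·B^1.5 m/s (an assumption here, not taken from this article) to map a wind speed back to an approximate Beaufort number:

```python
def beaufort_number(speed_ms):
    """Approximate Beaufort force for a wind speed in m/s, inverting the
    empirical relation v = 0.836 * B**1.5 (a commonly cited approximation)."""
    return round((speed_ms / 0.836) ** (2.0 / 3.0))

def is_storm(speed_ms):
    """Strict meteorological storm threshold from the text: Beaufort 10
    or higher, i.e. roughly 24.5 m/s (89 km/h, 55 mph)."""
    return beaufort_number(speed_ms) >= 10

print(beaufort_number(24.5))   # 10 -- the storm threshold
print(is_storm(20.0))          # False -- only a gale-force wind
```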
Extraterrestrial storms
Storms do not only occur on Earth; other planetary bodies with a sufficient atmosphere (giant planets in particular) also undergo stormy weather. The Great Red Spot on Jupiter provides a well-known example. Though technically an anticyclone, with greater than hurricane wind speeds, it is larger than the Earth and has persisted for at least 340 years, having first been observed by astronomer Giovanni Domenico Cassini. Neptune also had its own lesser-known Great Dark Spot.
In September 1994, the Hubble Space Telescope – using Wide Field Planetary Camera 2 – imaged storms on Saturn generated by upwelling of warmer air, similar to a terrestrial thunderhead. The east–west extent of the same-year storm equaled the diameter of Earth. The storm was observed earlier in September 1990 and acquired the name Dragon Storm.
The dust storms of Mars vary in size, but can often cover the entire planet. They tend to occur when Mars comes closest to the Sun, and have been shown to increase the global temperature.
One particularly large Martian storm was exhaustively studied up close due to coincidental timing. When the first spacecraft to successfully orbit another planet, Mariner 9, arrived and successfully orbited Mars on 14 November 1971, planetary scientists were surprised to find the atmosphere was thick with a planet-wide robe of dust, the largest storm ever observed on Mars. The surface of the planet was totally obscured. Mariner 9's computer was reprogrammed from Earth to delay imaging of the surface for a couple of months until the dust settled; however, the surface-obscured images contributed much to the collection of Mars atmospheric and planetary surface science.
Two extrasolar planets are known to have storms: HD 209458 b and HD 80606 b. The former's storm was discovered on 23 June 2010, and measured at , while the latter produces winds of across the surface. The spin of the planet then creates giant swirling shock-wave storms that carry the heat aloft.
Effects on human society
Shipwrecks are common with the passage of strong tropical cyclones. Such shipwrecks can change the course of history, as well as influence art and literature. A hurricane led to a victory of the Spanish over the French for control of Fort Caroline, and ultimately the Atlantic coast of North America, in 1565.
Strong winds from any storm type can damage or destroy vehicles, buildings, bridges, and other outside objects, turning loose debris into deadly flying projectiles. In the United States, major hurricanes comprise just 21% of all landfalling tropical cyclones, but account for 83% of all damage. Tropical cyclones often knock out power to tens or hundreds of thousands of people, preventing vital communication and hampering rescue efforts. Tropical cyclones often destroy key bridges, overpasses, and roads, complicating efforts to transport food, clean water, and medicine to the areas that need it. Furthermore, the damage caused by tropical cyclones to buildings and dwellings can result in economic damage to a region, and to a diaspora of the population of the region.
The storm surge, or the increase in sea level due to the cyclone, is typically the worst effect from landfalling tropical cyclones, historically resulting in 90% of tropical cyclone deaths. The relatively quick surge in sea level can move miles/kilometers inland, flooding homes and cutting off escape routes. The storm surges and winds of hurricanes may be destructive to human-made structures, but they also stir up the waters of coastal estuaries, which are typically important fish breeding locales.
Cloud-to-ground lightning frequently occurs during thunderstorms and poses numerous hazards to landscapes and populations. One of the most significant hazards is the wildfires it can ignite. Under a regime of low-precipitation (LP) thunderstorms, where little precipitation is present, rainfall cannot prevent fires from starting when vegetation is dry, as lightning produces a concentrated amount of extreme heat. Wildfires can devastate vegetation and the biodiversity of an ecosystem. Wildfires that occur close to urban environments can damage infrastructure, buildings, and crops, and create a risk of explosions should the flames reach gas pipes. Direct damage caused by lightning strikes occurs on occasion. In areas with a high frequency of cloud-to-ground lightning, like Florida, lightning causes several fatalities per year, most commonly among people working outside.
Precipitation with a low potential of hydrogen (pH), otherwise known as acid rain, is another frequent risk associated with lightning. Distilled water, which contains no carbon dioxide, has a neutral pH of 7. Liquids with a pH less than 7 are acidic, and those with a pH greater than 7 are basic. "Clean" or unpolluted rain has a slightly acidic pH of about 5.2, because carbon dioxide and water in the air react together to form carbonic acid, a weak acid (pH 5.6 in distilled water), though unpolluted rain also contains other chemicals. Nitric oxide present during thunderstorms, produced by the splitting of nitrogen molecules, can result in acid rain if it forms compounds with the water molecules in precipitation. Acid rain can damage infrastructure containing calcite or other solid carbon-bearing chemical compounds. In ecosystems, acid rain can dissolve plant tissue and accelerate acidification in bodies of water and in soil, resulting in the deaths of marine and terrestrial organisms.
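The pH of unpolluted rain quoted above can be reproduced from carbonate chemistry. A rough sketch; the Henry's-law and dissociation constants below are typical textbook values assumed here, not taken from this article:

```python
import math

# Assumed constants (typical textbook values, not from the article):
P_CO2 = 4.2e-4   # atmospheric CO2 partial pressure, atm
K_H   = 3.4e-2   # Henry's law constant for CO2, mol/(L*atm)
K_A1  = 4.45e-7  # first dissociation constant of carbonic acid, mol/L

# Dissolved CO2 concentration from Henry's law
co2_aq = K_H * P_CO2

# For a dilute weak acid, [H+] is approximately sqrt(Ka * [acid])
h_plus = math.sqrt(K_A1 * co2_aq)

# pH = -log10([H+]); lands near the 5.6 figure for CO2-saturated pure water
ph = -math.log10(h_plus)
print(round(ph, 2))  # -> 5.6
```

Anthropogenic sulfur and nitrogen oxides push the value further below this baseline, which is why polluted rain is measurably more acidic.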
Hail damage to roofs often goes unnoticed until further structural damage is seen, such as leaks or cracks. It is hardest to recognize hail damage on shingled roofs and flat roofs, but all roofs have their own hail damage detection problems. Metal roofs are fairly resistant to hail damage, but may accumulate cosmetic damage in the form of dents and damaged coatings. Hail is also a common nuisance to drivers of automobiles, severely denting the vehicle and cracking or even shattering windshields and windows. Rarely, massive hailstones have been known to cause concussions or fatal head trauma. Hailstorms have been the cause of costly and deadly events throughout history. One of the earliest recorded incidents occurred around the 9th century in Roopkund, Uttarakhand, India. The largest hailstone in terms of diameter and weight ever recorded in the United States fell on 23 July 2010, in Vivian, South Dakota in the United States; it measured in diameter and in circumference, weighing in at . This broke the previous record for diameter set by a hailstone diameter and circumference which fell in Aurora, Nebraska in the United States on 22 June 2003, as well as the record for weight, set by a hailstone of that fell in Coffeyville, Kansas in 1970.
Various hazards, ranging from hail to lightning can affect outside technology facilities, such as antennas, satellite dishes, and towers. As a result, companies with outside facilities have begun installing such facilities underground, to reduce the risk of damage from storms.
Substantial snowfall can disrupt public infrastructure and services, slowing human activity even in regions that are accustomed to such weather. Air and ground transport may be greatly inhibited or shut down entirely. Populations living in snow-prone areas have developed various ways to travel across the snow, such as skis, snowshoes, and sleds pulled by horses, dogs, or other animals and later, snowmobiles. Basic utilities such as electricity, telephone lines, and gas supply can also fail. In addition, snow can make roads much harder to travel and vehicles attempting to use them can easily become stuck.
The combined effects can lead to a "snow day" on which gatherings such as school, work, or church are officially canceled. In areas that normally have very little or no snow, a snow day may occur when there is only light accumulation or even the threat of snowfall, since those areas are unprepared to handle any amount of snow. In some areas, such as some states in the United States, schools are given a yearly quota of snow days (or "calamity days"). Once the quota is exceeded, the snow days must be made up. In other states, all snow days must be made up. For example, schools may extend the remaining school days later into the afternoon, shorten spring break, or delay the start of summer vacation.
Accumulated snow is removed to make travel easier and safer, and to decrease the long-term effect of a heavy snowfall. This process uses shovels and snowplows, and is often assisted by sprinkling salt or other chloride-based chemicals, which reduce the melting temperature of snow. In some areas with abundant snowfall, such as Yamagata Prefecture, Japan, people harvest snow and store it surrounded by insulation in ice houses. This allows the snow to be used through the summer for refrigeration and air conditioning, which requires far less electricity than traditional cooling methods.
Agriculture
Hail can cause serious damage, notably to automobiles, aircraft, skylights, glass-roofed structures, livestock, and most commonly, farmers' crops. Wheat, corn, soybeans, and tobacco are the most sensitive crops to hail damage. Hail is one of Canada's most expensive hazards. Snowfall can be beneficial to agriculture by serving as a thermal insulator, conserving the heat of the Earth and protecting crops from subfreezing weather. Some agricultural areas depend on an accumulation of snow during winter that will melt gradually in spring, providing water for crop growth. If it melts into water and refreezes upon sensitive crops, such as oranges, the resulting ice will protect the fruit from exposure to lower temperatures. Although tropical cyclones take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they affect and bring much-needed precipitation to otherwise dry regions. Hurricanes in the eastern north Pacific often supply moisture to the Southwestern United States and parts of Mexico. Japan receives over half of its rainfall from typhoons. Hurricane Camille averted drought conditions and ended water deficits along much of its path, though it also killed 259 people and caused $9.14 billion (2005 USD) in damage.
Aviation
Hail is one of the most significant thunderstorm hazards to aircraft. When hailstones exceed in diameter, planes can be seriously damaged within seconds. Hailstones accumulating on the ground can also be hazardous to landing aircraft. Strong wind outflow from thunderstorms causes rapid changes in the three-dimensional wind velocity just above ground level. Initially, this outflow causes a headwind that increases airspeed, which normally causes a pilot to reduce engine power if they are unaware of the wind shear. As the aircraft passes into the region of the downdraft, the localized headwind diminishes, reducing the aircraft's airspeed and increasing its sink rate. Then, when the aircraft passes through the other side of the downdraft, the headwind becomes a tailwind, reducing lift generated by the wings and leaving the aircraft in a low-power, low-speed descent. This can lead to an accident if the aircraft is too low to effect a recovery before ground contact. As a result of accidents in the 1970s and 1980s, in 1988 the U.S. Federal Aviation Administration mandated that all commercial aircraft have on-board wind shear detection systems by 1993. Between 1964 and 1985, wind shear directly caused or contributed to 26 major civil transport aircraft accidents in the U.S. that led to 620 deaths and 200 injuries. Since 1995, the number of major civil aircraft accidents caused by wind shear has dropped to approximately one every ten years, due to the mandated on-board detection as well as the addition of Doppler weather radar (NEXRAD) units on the ground.
Recreation
Many winter sports, such as skiing, snowboarding, snowmobiling, and snowshoeing depend upon snow. Where snow is scarce but the temperature is low enough, snow cannons may be used to produce an adequate amount for such sports. Children and adults can play on a sled or ride in a sleigh. Although a person's footsteps remain a visible lifeline within a snow-covered landscape, snow cover is considered a general danger to hiking since the snow obscures landmarks and makes the landscape itself appear uniform.
Notable storms in art and culture
In mythology and literature
According to the Bible, a giant storm sent by God flooded the Earth. Noah and his family and the animals entered the Ark, and "the same day were all the fountains of the great deep broken up, and the windows of heaven were opened, and the rain was upon the earth forty days and forty nights." The flood covered even the highest mountains to a depth of more than twenty feet, and all creatures died; only Noah and those with him on the Ark were left alive. In the New Testament, Jesus Christ is recorded to have calmed a storm on the Sea of Galilee.
The Gilgamesh flood myth is a deluge story in the Epic of Gilgamesh.
In Greek mythology, Aeolus was the keeper of storm-winds, squalls and tempests.
The Sea Venture was wrecked near Bermuda in 1609, which led to the colonization of Bermuda and provided the inspiration for Shakespeare's play The Tempest (1611). Specifically, Sir Thomas Gates, future governor of Virginia, was on his way to England from Jamestown, Virginia. On Saint James Day, while he was between Cuba and the Bahamas, a hurricane raged for nearly two days. Though one of the small vessels in the fleet sank to the bottom of the Florida Straits, seven of the remaining vessels reached Virginia within several days after the storm. The flagship of the fleet, the Sea Venture, disappeared and was presumed lost. A small bit of fortune befell the ship and her crew when they made landfall on Bermuda. The vessel was damaged on a surrounding coral reef, but all aboard survived for nearly a year on the island. The British colonists claimed the island and quickly settled Bermuda. In May 1610, they set forth for Jamestown, this time arriving at their destination.
The children's novel The Wonderful Wizard of Oz, written by L. Frank Baum and illustrated by W. W. Denslow, chronicles the adventures of a young girl named Dorothy Gale in the Land of Oz, after being swept away from her Kansas farm home by a tornado. The story was originally published by the George M. Hill Company in Chicago on 17 May 1900, and has since been reprinted numerous times, most often under the name The Wizard of Oz, and adapted for use in other media. Thanks in part to the 1939 MGM movie, it is one of the best-known stories in American popular culture and has been widely translated. Its initial success, and the success of the popular 1902 Broadway musical which Baum adapted from his original story, led to Baum's writing thirteen more Oz books.
Hollywood director King Vidor (8 February 1894 – 1 November 1982) survived the Galveston Hurricane of 1900 as a boy. Based on that experience, he published a fictionalized account of that cyclone, titled "Southern Storm", for the May 1935 issue of Esquire magazine. Erik Larson excerpts a passage from that article in his 2005 book, Isaac's Storm:
I remember now that it seemed as if we were in a bowl looking up toward the level of the sea. As we stood there in the sandy street, my mother and I, I wanted to take my mother's hand and hurry her away. I felt as if the sea was going to break over the edge of the bowl and come pouring down upon us.
Numerous other accounts of the Galveston Hurricane of 1900 have been made in print and in film. Larson cites many of them in Isaac's Storm, which centrally features that storm, as well as chronicles the creation of the Weather Bureau (which came to be known as the National Weather Service) and that agency's fateful rivalry with the weather service in Cuba, and a number of other major storms, such as those which ravaged Indianola, Texas in 1875 and 1886.
The Great Storm of 1987 is key in an important scene near the end of Possession: A Romance, the bestselling and Booker Prize-winning novel by A. S. Byatt. The Great Storm of 1987 occurred on the night of 15–16 October 1987, when an unusually strong weather system caused winds to hit much of southern England and northern France. It was the worst storm to hit England since the Great Storm of 1703 (284 years earlier) and was responsible for the deaths of at least 22 people in England and France combined (18 in England, at least four in France).
Hurricane Katrina (2005) has been featured in a number of works of fiction.
In fine art
The Romantic seascape painters J. M. W. Turner and Ivan Aivazovsky created some of the most lasting impressions of the sublime and stormy seas that are firmly imprinted on the popular mind. Turner's representations of powerful natural forces reinvented the traditional seascape during the first half of the nineteenth century.
Upon his travels to Holland, he took note of the familiar large rolling waves of the English seashore transforming into the sharper, choppy waves of a Dutch storm. A characteristic example of Turner's dramatic seascape is The Slave Ship of 1840. Aivazovsky left several thousand turbulent canvases in which he increasingly eliminated human figures and historical background to focus on such essential elements as light, sea, and sky. His grandiose Ninth Wave (1850) is an ode to human daring in the face of the elements.
In motion pictures
The 1926 silent film The Johnstown Flood features the Great Flood of 1889 in Johnstown, Pennsylvania. The flood, caused by the catastrophic failure of the South Fork Dam after days of extremely heavy rainfall, prompted the first major disaster relief effort by the American Red Cross, directed by Clara Barton. The Johnstown Flood was depicted in numerous other media (both fictional and in non-fiction), as well.
Warner Bros.' 2000 dramatic disaster film The Perfect Storm, directed by Wolfgang Petersen, is an adaptation of Sebastian Junger's 1997 non-fiction book of the same title. The book and film feature the crew of the Andrea Gail, which got caught in the Perfect Storm of 1991. The 1991 Perfect Storm, also known as the Halloween Nor'easter of 1991, was a nor'easter that absorbed Hurricane Grace and ultimately evolved into a small hurricane late in its life cycle.
In music
Storms have also been portrayed in many works of music. Examples of storm music include Vivaldi's Four Seasons violin concerto RV 315 (Summer) (third movement: Presto), Beethoven's Pastoral Symphony (the fourth movement), a scene in Act II of Rossini's opera The Barber of Seville, the third act of Giuseppe Verdi's Rigoletto, and the fifth (Cloudburst) movement of Ferde Grofé's Grand Canyon Suite.
Gallery
Earth analog

An Earth analog, also called an Earth twin or second Earth, is a planet or moon with environmental conditions similar to those found on Earth. The term Earth-like planet is also used, but this term may refer to any terrestrial planet.
The possibility is of particular interest to astrobiologists and astronomers, on the reasoning that the more similar a planet is to Earth, the more likely it is to be capable of sustaining complex extraterrestrial life. As such, it has long been a subject of speculation, expressed in science, philosophy, science fiction and popular culture. Advocates of space colonization and survival have long sought an Earth analog for settlement. In the far future, humans might artificially produce an Earth analog by terraforming.
Before the scientific search for and study of extrasolar planets, the possibility was argued through philosophy and science fiction. Philosophers have suggested that the size of the universe is such that a near-identical planet must exist somewhere. The mediocrity principle suggests that planets like Earth should be common in the Universe, while the Rare Earth hypothesis suggests that they are extremely rare. The thousands of exoplanetary star systems discovered so far are profoundly different from the Solar System, supporting the Rare Earth Hypothesis.
On 4 November 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarf stars within the Milky Way Galaxy. Statistically, the nearest such planet could be expected to lie within 12 light-years of Earth. In September 2020, astronomers identified 24 contenders for superhabitable planets (planets potentially more hospitable than Earth) from among more than 4,000 confirmed exoplanets, based on astrophysical parameters as well as the natural history of known life forms on Earth.
Scientific findings since the 1990s have greatly influenced the scope of the fields of astrobiology, models of planetary habitability and the search for extraterrestrial intelligence (SETI).
History
Between 1858 and 1920, Mars was thought by many, including some scientists, to be very similar to Earth, only drier with a thick atmosphere, similar axial tilt, orbit and seasons as well as a Martian civilization that had built great Martian canals. These theories were advanced by Giovanni Schiaparelli, Percival Lowell and others. As such Mars in fiction portrayed the red planet as similar to Earth's deserts. Images and data from the Mariner (1965) and Viking space probes (1975–1980), however, revealed the planet as a barren cratered world. However, with continuing discoveries, other Earth comparisons remained. For example, the Mars Ocean Hypothesis had its origins in the Viking missions and was popularised during the 1980s. With the possibility of past water, there was the possibility that life could have begun on Mars, and it was once again perceived to be more Earth-like.
Likewise, until the 1960s, Venus was believed by many, including some scientists, to be a warmer version of Earth with a thick atmosphere and either hot and dusty or humid with water clouds and oceans. Venus in fiction was often portrayed as having similarities to Earth and many speculated about Venusian civilization. These beliefs were dispelled in the 1960s as the first space probes gathered more accurate scientific data on the planet and found that Venus is a very hot world with the surface temperature around under an acidic atmosphere with a surface pressure of .
From 2004, Cassini–Huygens began to reveal Saturn's moon Titan to be one of the most Earth-like worlds outside of the habitable zone. Though having a dramatically different chemical makeup, discoveries such as the confirmation of Titanian lakes, rivers and fluvial processes in 2007, advanced comparisons to Earth. Further observations, including weather phenomena, have aided the understanding of geological processes that may operate on Earth-like planets.
The Kepler space telescope began observing the transits of potential terrestrial planets in the habitable zone in 2011. Though the technology provided a more effective means for detecting and confirming planets, it could not conclude definitively how Earth-like the candidate planets actually are. In 2013, several Kepler candidates less than 1.5 Earth radii were confirmed orbiting in the habitable zones of their stars. It was not until 2015 that the first near-Earth-sized candidate orbiting a solar analog, Kepler-452b, was announced.
On 11 January 2023, NASA scientists reported the detection of LHS 475 b, an Earth-like exoplanet – and the first exoplanet discovered by the James Webb Space Telescope.
Attributes and criteria
The probability of finding an Earth analog depends mostly on the attributes that are expected to be similar, and these vary greatly. Generally it is considered that it would be a terrestrial planet and there have been several scientific studies aimed at finding such planets. Often implied but not limited to are such criteria as planet size, surface gravity, star size and type (i.e. Solar analog), orbital distance and stability, axial tilt and rotation, similar geography, oceans, air and weather conditions, strong magnetosphere and even the presence of Earth-like complex life. If there is complex life, there could be some forests covering much of the land. If there is intelligent life, some parts of land could be covered in cities. Some factors that are assumed of such a planet may be unlikely due to Earth's own history. For instance, the Earth's atmosphere was not always oxygen-rich and this is a biosignature from the emergence of photosynthetic life. The formation, presence, influence on these characteristics of the Moon (such as tidal forces) may also pose a problem in finding an Earth analog.
The process of determining Earth analogs often involves reconciling several registers of uncertainty quantification. As anthropologist Vincent Ialenti's work on the epistemology of analogical reasoning has shown, some planetary scientists are "more comfortable making the leap of faith to bridge time and space and pull together two disparate objects" than others are.
Size
Size is often thought to be a significant factor, as planets of Earth's size are thought more likely to be terrestrial in nature and be capable of retaining an Earth-like atmosphere.
The list includes planets within the range of 0.8–1.9 Earth masses; planets below this range are generally classed as sub-Earths and those above as super-Earths. In addition, only planets known to fall within the range of 0.5–2.0 Earth radii (between half and twice the radius of the Earth) are included.
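The mass and radius windows above amount to a simple classification rule. A sketch; the function name and return labels are illustrative only, and the thresholds are the article's inclusion criteria rather than a formal definition:

```python
def size_class(mass_earths=None, radius_earths=None):
    """Classify a planet against the 0.8-1.9 Earth-mass and
    0.5-2.0 Earth-radius windows quoted in the text."""
    if mass_earths is not None:
        if mass_earths < 0.8:
            return "sub-Earth"
        if mass_earths > 1.9:
            return "super-Earth"
        return "Earth-mass"
    if radius_earths is not None:
        # Radius is a cruder cut: only in-range / out-of-range
        return "Earth-radius" if 0.5 <= radius_earths <= 2.0 else "outside range"
    raise ValueError("need a mass or a radius")

print(size_class(mass_earths=1.0))    # -> Earth-mass
print(size_class(radius_earths=2.4))  # -> outside range
```

As the surrounding text notes, passing this size cut says little about habitability on its own; temperature and composition must be checked separately.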
According to the size criteria, the closest planetary mass objects by known radius or mass are:
This comparison indicates that size alone is a poor measure, particularly in terms of habitability. Temperature must also be considered as Venus and the planets of Alpha Centauri B (discovered in 2012), Kepler-20 (discovered in 2011), COROT-7 (discovered in 2009) and the three planets of Kepler-42 (all discovered in 2011) are very hot, and Mars, Ganymede and Titan are frigid worlds, resulting also in wide variety of surface and atmospheric conditions. The masses of the Solar System's moons are a tiny fraction of that of Earth whereas the masses of extrasolar planets are very difficult to accurately measure. However discoveries of Earth-sized terrestrial planets are important as they may indicate the probable frequency and distribution of Earth-like planets.
Terrestrial
Another criterion often cited is that an Earth analog must be terrestrial, that is, it should possess a similar surface geology—a planetary surface composed of similar surface materials. The closest known examples are Mars and Titan and while there are similarities in their types of landforms and surface compositions, there are also significant differences such as the temperature and quantities of ice.
Many of Earth's surface materials and landforms are formed as a result of interaction with water (such as clay and sedimentary rocks) or as a byproduct of life (such as limestone or coal), interaction with the atmosphere, volcanically or artificially. A true Earth analog therefore might need to have formed through similar processes, having possessed an atmosphere, volcanic interactions with the surface, past or present liquid water and life forms.
Temperature
Several factors determine planetary temperature, and therefore several measures can draw comparisons to Earth for planets whose atmospheric conditions are unknown. Equilibrium temperature is used for planets without atmospheres; for planets with atmospheres, a greenhouse effect is assumed. Finally, surface temperature is used. Each of these temperatures is affected by climate, which is influenced by the orbit and rotation (or tidal locking) of the planet, each of which introduces further variables.
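The equilibrium temperature mentioned above follows from radiative balance between absorbed starlight and emitted heat. A sketch; the Sun–Earth values plugged in are standard figures assumed here, not taken from the text:

```python
import math

def equilibrium_temp(t_star, r_star, a, albedo):
    """Radiative-balance equilibrium temperature, ignoring any greenhouse effect.

    t_star: stellar effective temperature (K)
    r_star: stellar radius (m)
    a:      orbital distance (m)
    albedo: Bond albedo (dimensionless)
    """
    return t_star * math.sqrt(r_star / (2 * a)) * (1 - albedo) ** 0.25

# Sun-Earth values (assumed standard figures, not from the article)
t_eq = equilibrium_temp(t_star=5778, r_star=6.957e8, a=1.496e11, albedo=0.3)
print(round(t_eq))  # -> 255
```

The gap between this ~255 K value and Earth's ~288 K mean surface temperature is the greenhouse contribution the surrounding text alludes to, which is why equilibrium temperature alone can mislead when atmospheres are unknown.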
Below is a comparison of the confirmed planets with the closest known temperatures to Earth.
Solar analog
Another criterion of an ideal life-harboring earth analog is that it should orbit a solar analog; that is, a star much like the Sun. However, this criterion may not be entirely valid as many different types of stars can provide a local environment hospitable to life. For example, in the Milky Way, most stars are smaller and dimmer than the Sun. One such star, TRAPPIST-1, is located 12 parsecs (39 light years) away and is roughly 10 times smaller and 2,000 times dimmer than the Sun, yet it harbors at least six Earth-like planets in its habitable zone. While these conditions may seem unfavorable to known life, TRAPPIST-1 is expected to continue burning for 12 trillion years (compared to the Sun's remaining 5 billion year lifetime) which is time enough for life to arise by abiogenesis. For comparison, life evolved on Earth in a mere one billion years.
Surface water and hydrological cycle
The concept of the habitable zone (or liquid water zone), defining a region where water can exist on the surface, is based on the properties of both the Earth and Sun. Under this model, Earth orbits roughly at the centre of this zone, in the "Goldilocks" position. Earth is the only planet currently confirmed to possess large bodies of surface water. Venus is on the hot side of the zone while Mars is on the cold side. Neither is known to have persistent surface water, though evidence exists that Mars did in its ancient past, and it is speculated that the same was once true of Venus. Thus extrasolar planets (or moons) in the Goldilocks position with substantial atmospheres may possess oceans and water clouds like those on Earth. In addition to surface water, a true Earth analog would require a mix of oceans or lakes and areas not covered by water, or land.
Some argue that a true Earth analog must not only have a similar position of its planetary system but also orbit a solar analog and have a near circular orbit such that it remains continuously habitable like Earth.
Extrasolar Earth analog
The mediocrity principle suggests that there is a chance that serendipitous events may have allowed an Earth-like planet to form elsewhere that would allow the emergence of complex, multi-cellular life. In contrast, the Rare Earth hypothesis asserts that if the strictest criteria are applied, such a planet, if it exists, may be so far away that humans may never locate it.
Because the Solar System proved to be devoid of an Earth analog, the search has widened to extrasolar planets. Astrobiologists assert that Earth analogs would most likely be found in a stellar habitable zone, in which liquid water could exist, providing the conditions for supporting life. Some astrobiologists, such as Dirk Schulze-Makuch, estimated that a sufficiently massive natural satellite may form a habitable moon similar to Earth.
Estimated frequency
The frequency of Earth-like planets in both the Milky Way and the larger universe is still unknown. Estimates range from the extreme of the Rare Earth hypothesis – one (i.e., Earth) – to innumerable.
Several current scientific studies, including the Kepler mission, are aimed at refining estimates using real data from transiting planets. A 2008 study by astronomer Michael Meyer from the University of Arizona of cosmic dust near recently formed Sun-like stars suggests that between 20% and 60% of solar analogs have evidence for the formation of rocky planets, not unlike the processes that led to those of Earth. Meyer's team found discs of cosmic dust around stars and sees this as a byproduct of the formation of rocky planets.
In 2009, Alan Boss of the Carnegie Institution for Science speculated that there could be 100 billion terrestrial planets just in the Milky Way galaxy.
In 2011, NASA's Jet Propulsion Laboratory (JPL), based on observations from the Kepler mission, suggested that between 1.4% and 2.7% of all Sun-like stars are expected to have Earth-size planets within their habitable zones. This means there could be as many as two billion Earth-sized planets in the Milky Way galaxy alone; assuming that all galaxies have a similar number of such planets, the 50 billion galaxies in the observable universe could hold as many as a hundred quintillion Earth-like planets.
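These figures are back-of-the-envelope multiplications. A sketch reproducing them; the count of Sun-like stars per galaxy is an assumed round number chosen to match the quoted totals, not a value stated in the text:

```python
sunlike_per_galaxy = 75e9            # assumed order-of-magnitude count per galaxy
frac_with_earths = (0.014, 0.027)    # the 1.4%-2.7% JPL range quoted above
galaxies = 50e9                      # observable-universe galaxy count used in the text

# Habitable-zone Earth-size planets per galaxy, low and high ends
per_galaxy = tuple(f * sunlike_per_galaxy for f in frac_with_earths)

# Scale up by the number of galaxies
universe_total = tuple(n * galaxies for n in per_galaxy)

print(per_galaxy)      # high end ~2e9: "as many as two billion" per galaxy
print(universe_total)  # high end ~1e20, on the order of a hundred quintillion
```

The spread of an order of magnitude between the low and high ends is typical of such estimates; later Kepler analyses (like the 17-billion figure below) fall within it once habitable-zone placement is ignored.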
In 2013, a Harvard–Smithsonian Center for Astrophysics study using statistical analysis of additional Kepler data suggested that there are at least 17 billion Earth-sized planets in the Milky Way. This, however, says nothing of their position relative to the habitable zone.
A 2019 study determined that Earth-size planets may circle 1 in 6 Sun-like stars.
Terraforming
Terraforming (literally, "Earth-shaping") of a planet, moon, or other body is the hypothetical process of deliberately modifying its atmosphere, temperature, surface topography or ecosystems to be similar to those of Earth to make it habitable to humans.
Due to proximity and similarity in size, Mars, and to a lesser extent Venus, have been cited as the most likely candidates for terraforming.
Humid subtropical climate

A humid subtropical climate is a subtropical-temperate climate type, characterized by long, hot summers and cool to mild winters. These climates normally lie on the southeast side of all continents (except Antarctica), generally between latitudes 25° and 40°, and are located poleward of adjacent tropical climates and equatorward of either humid continental (in North America and Asia) or oceanic climates (on other continents). It is also known as a warm temperate climate in some climate classifications.
Under the Köppen climate classification, Cfa and Cwa climates are either described as humid subtropical climates or warm temperate climates. This climate features mean temperature in the coldest month between (or ) and and mean temperature in the warmest month or higher. However, while some climatologists have opted to describe this climate type as a "humid subtropical climate", Köppen himself never used this term. The humid subtropical climate classification was officially created under the Trewartha climate classification. In this classification, climates are termed humid subtropical when they have at least 8 months with a mean temperature above .
While many subtropical climates tend to be located at or near coastal locations, in some cases, they extend inland, most notably in China and the United States, where they exhibit more pronounced seasonal variations and sharper contrasts between summer and winter, as part of a gradient between the hotter tropical climates of the southern coasts and the colder continental climates to the north and further inland. As such, the climate can be said to exhibit somewhat different features depending on whether it is found inland, or in a maritime position.
Characteristics
In a humid subtropical climate, summers are typically long, hot and humid. A deep current of tropical air dominates the humid subtropics at the time of high sun, and daily intense (but brief) convective thundershowers are common. Monthly mean temperatures in winter may be mild or slightly above freezing.
Rainfall often shows a summer peak, especially where monsoons are well developed, as in Southeast Asia and South Asia. Other areas have a more uniform or varying rainfall cycle but consistently lack any predictably dry summer months, unlike Mediterranean climates (which lie at similar latitudes but, on most continents, on opposite coasts). Most summer rainfall occurs during thunderstorms that build up due to the intense surface heating and strong subtropical sun angle. Weak tropical lows that move in from adjacent warm tropical oceans, as well as occasional tropical cyclones, often contribute to summer seasonal rainfall peaks. Winter rainfall (and occasional snowfall, especially near the poleward margins) is often associated with large storms in the westerlies whose fronts reach down into subtropical latitudes.
Under the Holdridge life zones classification, subtropical climates have a biotemperature between the frost or critical temperature line (which varies by location) and that of the tropical zone, and these climates are humid (or even perhumid or superhumid) when the potential evapotranspiration (PET) ratio (PET / precipitation) is less than 1. In the Holdridge classification, the humid subtropical climates of the Köppen system coincide more or less with the subtropical and warm temperate life zones.
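As a rough illustration of the PET-ratio rule above, the following hypothetical Python helper maps a PET ratio to a Holdridge-style humidity province. The doubling breakpoints are a simplified subset of the full Holdridge chart, not a complete implementation of it.

```python
def holdridge_humidity_province(pet_mm: float, precipitation_mm: float) -> str:
    """Map the PET ratio (annual PET / annual precipitation) to a
    humidity province. Ratios below 1 are on the humid side; the
    doubling breakpoints follow the Holdridge chart's progression."""
    ratio = pet_mm / precipitation_mm
    if ratio < 0.25:
        return "superhumid"
    if ratio < 0.5:
        return "perhumid"
    if ratio < 1.0:
        return "humid"
    if ratio < 2.0:
        return "subhumid"
    return "semiarid or drier"
```

For example, a location with 800 mm of annual PET and 1,600 mm of precipitation (ratio 0.5) falls in the humid province.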
Breakdown of letters
Cfa: C = mild temperate, f = fully humid, a = hot summer
Cwa: C = mild temperate, w = dry winter, a = hot summer
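The Cfa/Cwa distinction follows mechanically from monthly normals. The sketch below is a simplified, hypothetical helper (not part of any climate library): it applies the common thresholds of 0 °C and 18 °C for the coldest month, 22 °C for a hot summer, and the one-tenth dry-winter rule, while ignoring the dry-summer (Cs) case and the −3 °C variant of the cold bound.

```python
def koppen_c_subtype(temps_c, precip_mm, southern_hemisphere=False):
    """Classify 12 monthly mean temperatures (°C) and precipitation
    totals (mm, Jan-Dec) as 'Cfa', 'Cwa', or None.

    Simplifications: the coldest-month lower bound is taken as 0 °C
    (some authors use -3 °C), and the dry-summer (Cs) case is ignored.
    """
    coldest, warmest = min(temps_c), max(temps_c)
    # C climate: coldest month between 0 °C and 18 °C; 'a': warmest >= 22 °C.
    if not (0 <= coldest < 18) or warmest < 22:
        return None
    # High-sun half-year: Apr-Sep in the north, Oct-Mar in the south.
    summer = set(range(3, 9)) if not southern_hemisphere else set(range(12)) - set(range(3, 9))
    winter = set(range(12)) - summer
    # 'w': driest winter month gets less than a tenth of the wettest summer month.
    if min(precip_mm[m] for m in winter) < max(precip_mm[m] for m in summer) / 10:
        return "Cwa"
    return "Cfa"
```

A monsoonal city with a nearly rainless winter and a very wet summer comes out Cwa, while a city with even year-round rainfall comes out Cfa.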
Locations
Africa
In Africa, humid subtropical climates are primarily found in the southeastern part of the continent. The Cwa climate is found over a large portion of the interior of the Middle and Eastern African regions. This area includes central Angola, northeastern Zimbabwe, the Niassa, Manica and Tete provinces of Mozambique, the southern Congo provinces, southwest Tanzania, and the majority of Malawi, and Zambia. Some lower portions of the Ethiopian Highlands also have this climate.
The climate is also found in the narrow coastal sections of southern and eastern South Africa, primarily in KwaZulu-Natal and the Eastern Cape provinces. South Africa's version of this climate features heavy oceanic influences resulting in generally milder temperatures. This is particularly evident in its winters when temperatures do not drop as low as in many other regions within the humid subtropical category.
Asia
East Asia
In East Asia, this climate type is found in the southeastern quarter of China from Hong Kong north to Nanjing, the northern half of Taiwan, southern and central Japan (Kyushu, Shikoku and half of Honshu), and the southernmost regions of Korea (the southern and eastern parts, including central and southern Gyeongsang Province and Jeju Island). Cities near the equatorward boundary of this zone include Hong Kong and Taichung, while Sendai and Qingdao, along with Gwangju, Daegu and Gangneung in Korea, are near the northern boundary.
The influence of the strong Siberian anticyclone in East Asia brings colder winter temperatures than in the humid subtropical zones of South America and Australia. The isotherm reaches as far south as the valleys of the Yellow and Wei rivers, roughly latitude 34° N. On Hainan Island and in Taiwan, the climate transitions from subtropical into tropical. In most of this region the winter monsoon is very well developed, and as such East Asian humid subtropical zones have a strong winter dry season and heavy summer rainfall.
Only in inland areas below the Yangtze River and coastal areas between approximately the Huai River and the beginning of the coast of Guangdong is there sufficient winter rainfall to produce a Cfa climate; even in these areas, rainfall and streamflow display a highly pronounced summer peak, unlike other regions of this climate type. Drought can be severe and often catastrophic to agriculture in the Cwa zone.
The only area where winter precipitation equals or even exceeds the summer rain is around the San'in region at the western coast of Japan, which during winter is on the windward side of the westerlies. The winter precipitation in these regions is usually produced by low-pressure systems off the east coast that develop in the onshore flow from the Siberian high. Summer rainfall comes from the East Asian Monsoon and from frequent typhoons. Annual rainfall is generally over , and in areas below the Himalayas can be much higher still.
South Asia
Humid subtropical climates can also be found in the Indian subcontinent, predominantly in the northern regions. However, the humid subtropical climates exhibited here typically differ markedly from those in East Asia (and, for that matter, a good portion of the globe). Winters are typically cool to mild (sometimes reaching ), ranging from humid and foggy in December to dry in February. These winters are followed by a mild spring (March-April). Summers tend to be relatively longer and very hot, starting from mid-April and peaking in June, extending up to July with high temperatures often exceeding . Summers usually begin dry, complete with dust storms, traits typically associated with arid or semi-arid climates, before eventually transforming into a more humid July. This is followed by the cooler but still hot and extremely humid monsoon season (August-September), where the region experiences heavy rains almost daily, with humidity usually above 90%. The autumn season (October-November), which immediately follows the monsoon and precedes winter, usually experiences a pleasant climate. Cities such as New Delhi, Dehradun, Lucknow, Kanpur and Patna, among others, exhibit this atypical version of the climate in India. In Pakistan, the cities of Islamabad, Sialkot, Gujranwala and Rawalpindi, among others, feature this weather pattern. Lahore overlaps between being humid subtropical and semi-arid. The annual precipitation in Peshawar is slightly less than required for this classification.
In Bangladesh, cities like Rangpur, Saidpur and Dinajpur in the northern region feature the monsoon variant (Cwa), where rainfall peaks in the monsoon season. Closely resembling the climate patterns of the neighboring Northern Indian plains, this region shows a distinct three-season pattern: a relatively dry and very hot summer (March to early June), an extremely wet, cooler monsoon season (June–September), and a mild, foggy winter (late October–February).
Humid subtropical climates can also be found in Nepal. However, the Nepalese version of the climate generally does not feature the extreme hot spells that are commonplace in many other South Asian locations with this climate. In Nepal, cities such as Kathmandu, Pokhara, Butwal, Birgunj and Biratnagar feature this iteration of the climate.
In South Asia, humid subtropical climates generally border on continental climates as altitude increases, or on winter-rainfall climates in western areas of Pakistan and northwestern India (e.g. Peshawar in northwestern Pakistan or Srinagar in the Kashmir Valley in India, where the primary precipitation peak occurs in March, not July or August). Further east, in highland areas with lengthier monsoons such as Nepal, seasonal temperature variation is lower than in the lowlands.
Southeast Asia
In Southeast Asia, about 90% of the region has a tropical climate, but humid subtropical climates can also be found here, such as in northern Vietnam (including Hanoi).
Southeast Asian locations with these climates can feature cool temperatures, with lows reaching during the months of December, January, and February. Unlike a good portion of East Asian locations with this climate, however, most of Southeast Asia seldom experiences snowfall. These areas tend to feature hot and humid summers and cool and wet winters, with mean temperatures varying between in summer.
Western Asia
Although humid subtropical climates in Asia are mostly confined to the southeastern quarter of the continent, there are two narrow areas along the coast of the Caspian Sea and Black Sea with humid subtropical climates. Summers in these locations are cooler than typical humid subtropical climates and snowfall in winter is relatively common, but is usually of a short duration.
In Western Asia, the climate is prevalent in the Gilan, Māzandarān and Golestan provinces of Iran and in parts of the Caucasus, in Azerbaijan and Georgia, wedged between the Caspian and Black seas, as well as in coastal (Black Sea) Turkey, albeit with more oceanic influence.
Annual rainfall ranges from around at Gorgan to over at Bandar-e Anzali, and is heavy throughout the year, with a maximum in October or November when Bandar-e Anzali can average in one month. Temperatures are generally moderate in comparison with other parts of Western Asia. During winter, the coastal areas can receive snowfall, which is usually of a short duration.
In Rasht, the average temperature in July and August is around but with near-saturation humidity, whilst in January and February it is around . The heavy, evenly distributed rainfall extends north into the Caspian coastal strip of Azerbaijan up to its northern border but this climate in Azerbaijan is, however, a Cfb/Cfa (Oceanic climate/Humid subtropical climate) borderline case.
Western Georgia (Batumi and Kutaisi) in the Kolkheti Lowland and the northeast coast of Turkey (Giresun), have a climate similar to that of Gilan and Mazandaran in Iran and very similar to that of southeastern and northern Azerbaijan. Temperatures range from in summer to in winter and rainfall is even heavier than in Caspian Iran, up to per year in Hopa (Turkey). These climates are a Cfb/Cfa (Oceanic climate/Humid subtropical climate) borderline case.
North America
In North America, humid subtropical climates are found in the American Gulf and lower East Coast states, including Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, and Texas. On the Florida peninsula, the humid subtropical climate gives way to the tropical climate of South Florida and the Florida Keys.
Under Köppen's climate classification, this zone includes locations further north, primarily Virginia, Kentucky, the lower elevations of West Virginia, Maryland, Delaware, Washington, D.C., southeastern Pennsylvania, central and southern portions of New Jersey, and Downstate New York. It can also be found in the lower Midwest, primarily in the central and southern portions of Kansas and Missouri and the southern portions of Illinois, Indiana and Ohio.
In Mexico, there are small areas of Cfa and Cwa climates. The climate can be found in small areas scattered around the northeastern part of the country, in proximity to the Gulf of Mexico. Other areas where the climate can be found are the high elevations of the Trans-Mexican Volcanic Belt and the Sierra Madre Oriental. Despite being located at higher elevations, these locations have summers that are too warm to qualify as a subtropical highland climate. Guadalajara's climate is a major example of this.
Outside of isolated sections of Mexico, the southernmost limits of this climate zone in North America lie just north of South Florida and around southern coastal Texas. Cities at the southernmost limits, such as Tampa and Orlando, and areas along the Texas coast from Corpus Christi down toward Brownsville, generally feature warm weather year-round and minimal temperature differences between seasons. In contrast, cities at the northernmost limits of the climate zone, such as New York, Philadelphia and Louisville, feature hot, humid summers and chilly winters. These areas have average winter temperatures at the coldest limit of climates classed as humid subtropical.
Snowfall varies greatly in this climate zone. In locations at the southern limits of this zone and areas around the Gulf Coast, cities such as Orlando, Tampa, Houston, New Orleans, and Savannah rarely see snowfall, which occurs, at most, a few times per generation. In Southern cities farther north or inland, such as Atlanta, Charlotte, Dallas, Memphis, Nashville, and Raleigh, snow only occasionally falls and is usually three inches or less. However, for the majority of the winter here, temperatures remain above or well above freezing. At the northernmost limits of this zone, cities such as New York City, Philadelphia, Baltimore, Washington, D.C., and Louisville typically see snowfall during the winter, with occasional heavy snowstorms. Still, average temperatures during a typical winter hover just above freezing at these locations.
Precipitation is plentiful in North America's humid subtropical climate zone – but with significant variations in terms of wettest/driest months and seasons. Much of the interior South, including Tennessee, Kentucky, and the northern halves of Mississippi and Alabama, tends to have a winter or spring (not summer) precipitation maximum. Closer to the South Atlantic and Gulf coasts, there is a summer maximum, with July or August usually the wettest month – such as in Jacksonville, Charleston, Mobile, New Orleans, and Virginia Beach. A semblance of a monsoon pattern (dry winters/wet summers) is evident along the Atlantic coast from the Chesapeake Bay region and the Outer Banks south to Florida. The seasonal monsoon is much stronger on the Florida peninsula, as most locations in Florida have dry winters and wet summers.
In addition, areas in Texas that are slightly inland from the Gulf of Mexico, such as Austin and San Antonio, which border the semi-arid climate zone, generally see a peak of precipitation in May, a drought-like nadir in mid-summer, and a secondary, if not equal, precipitation peak in September or October. Areas further south along South Texas' Gulf Coast (Brownsville), which closely border the tropical climate classification, typically have a strong September precipitation maximum and a tendency toward dry conditions in winter, with rain increasing in spring and with December or January often the driest months.
South America
Humid subtropical climates are found in a sizable portion of southeastern South America. The climate extends over a few states of southern Brazil, including Paraná, into sections of Paraguay, all of Uruguay and central Argentina (Pampas region). Major cities such as São Paulo, Buenos Aires, Porto Alegre and Montevideo, have a humid subtropical climate, generally in the form of hot and humid summers, and mild to cool winters. These areas, which include the Pampas, generally feature a Cfa climate categorization. At 38°S, the Argentine city of Bahía Blanca lies on the southern limit of the humid subtropical zone.
The Cwa climate occurs in parts of the tropical highlands of São Paulo state and Mato Grosso do Sul, and near the Andean highlands in northwestern Argentina. These highland areas feature summer temperatures that are warm enough to fall outside the subtropical highland climate category.
Australia
The humid subtropical climate dominates a few major cities in Australia: Sydney, Brisbane, and the Gold Coast. This climate zone predominantly lies in eastern Australia, beginning at the coastal strip around Mackay, Queensland and stretching down the coast to just south of Sydney, where it transitions into cooler oceanic climates.
From Newcastle, approximately northeast of Sydney, the Cfa zone extends into inland New South Wales, excluding the highland regions (which have an oceanic climate), stretching towards Dubbo to the northwest and Wagga Wagga to the south, and ending at the New South Wales–Victoria border (Albury-Wodonga). Notably, these inland places also show characteristics of semi-arid and/or Mediterranean climates, and the inland Cfa climates generally have drier summers, or at least summers with low humidity.
Extreme heat is more often experienced in Sydney than in other large cities in Australia's Cfa zone, especially in the western suburbs, where highs over are not uncommon. Frost is prevalent in the more inland areas of Sydney, such as Richmond. Average annual rainfall in the Sydney region ranges between .
There is usually a distinct summer rainfall maximum that becomes more pronounced moving northwards. In Brisbane, the wettest month (February) receives five times the rainfall of the driest month (September). Temperatures are very warm to hot but are not excessive: the average maximum in February is usually around and in July around . Frosts are extremely rare except at higher elevations, but temperatures over are not common on the coast.
North of the Cfa climate zone, there is a zone centred on Rockhampton that extends north to the Köppen Cwa-classified climate zone of the Atherton Tablelands region. This region has a very pronounced dry winter period, with often negligible rainfall between June and October. Winter temperatures generally fall only slightly below the threshold that would classify the region as a tropical savanna, or Aw, climate.
Annual rainfall within Australia's humid subtropical climate zone can reach as high as in coastal locations and is generally or above. The most intense 2–3 day rainfall periods that occur in this coastal zone, however, are the outcome of east coast lows forming to the north of a large high-pressure system. There can be great variation in rainfall amounts from year to year as a result of these systems. For example, at Lismore, which lies in the centre of this zone, annual rainfall has ranged from less than in 1915 to more than in 1950.
Europe
As the continent does not have a large ocean to its east, as is the case with many other continents in this climate zone, humid subtropical climates in Europe are limited to relatively small areas on the margins of the Mediterranean and Black Sea basins. Cfa zones are generally transitional between the Mediterranean climate zones along the coast and the oceanic and humid continental zones to the west and north: rainfall in the warmer months is too high for a Mediterranean classification, while temperatures (in the summer and/or winter) are too warm to qualify as oceanic or humid continental. Summer humidity is generally not as high here as on other continents within this climatic zone.
The Po Valley, in Northern Italy, including major cities such as Milan, Turin, Bologna, and Verona, has a humid subtropical climate, featuring hot, humid summers with frequent thunderstorms; winters are foggy, damp and chilly, with sudden bursts of frost. Places along the shores of Lake Maggiore, Lake Lugano, Lake Como (Como and Verbania in Italy and Lugano and Locarno in Switzerland) have a humid subtropical climate with a distinctively high amount of rainfall during summer. In France, the climate is found in parts of the Garonne Valley (city of Toulouse) and in the Rhône Valley, including the cities of Lyon and Valence. Due to climate change, some cities on the Balkan peninsula and in the Pannonian Basin such as Belgrade, Novi Sad, Niš and Budapest are now just warm enough to be categorized as such. At 48°N, the urban core of Vienna, in Austria and Bratislava, in Slovakia, lie on the northern limit of the humid subtropical zone.
The coastal areas of the northern half of the Adriatic Sea also fall within this climate zone. The cities include Trieste, Venice, and Rimini in Italy, Rijeka and Split in Croatia, Koper in Slovenia, and Kotor in Montenegro. Other Southern European areas in the Cfa zone include the central valleys and Catalonian coast around Girona and Barcelona in Spain, parts of north-eastern Spain (Huesca), and West Macedonia in Greece (Kozani).
The Black Sea coasts of Bulgaria (Varna) and Romania (Constanța and Mamaia), as well as Sochi, Russia, and Crimea, have summers too warm to qualify as oceanic, no month with a mean below freezing, and enough summer precipitation (with sometimes humid conditions) to be classed as Cfa, though they closely border the humid continental zone due to colder winters. All these areas are subject to occasional, in some cases repeated, snowfalls and freezes during winter.
In Central Europe, small areas of humid subtropical climate are located in transitional zones between the oceanic and continental climates, where higher summer temperatures do not quite allow inclusion in the oceanic climate class and mild winters rule out inclusion among the continental climates. Average summer temperatures in areas of Europe with this climate are generally not as hot as in most other subtropical zones around the world. Urban examples include Bratislava, Budapest, and the Innere Stadt of Vienna.
In the Azores, some islands have this climate, with very mild and rainy winters and no snowfall, and warm summers but with no dry season during the warmest period; this means they can be classified neither as oceanic nor as Mediterranean, only as humid subtropical, as on Corvo Island.
In many other climate classification systems outside of the Köppen, most of these locations would not be included in the humid subtropical grouping. The higher summer precipitation and poleward flow of tropical air-masses in summer are not present in Europe as they are in eastern Australia or the southern United States. Many of these locations in Central and Southern Europe are considered oceanic by Trewartha's classification.
| Physical sciences | Climates | Earth science |
19799359 | https://en.wikipedia.org/wiki/Nasal%20fracture | Nasal fracture | A nasal fracture, commonly referred to as a broken nose, is a fracture of one of the bones of the nose. Symptoms may include bleeding, swelling, bruising, and an inability to breathe through the nose. They may be complicated by other facial fractures or a septal hematoma.
The most common causes include assault, trauma during sports, falls, and motor vehicle collisions. Diagnosis is typically based on the signs and symptoms and may occasionally be confirmed by plain X-ray.
Treatment is typically with pain medication and cold compresses. Reduction, if needed, can typically occur after the swelling has come down. Depending on the type of fracture, reduction may be closed or open. Outcomes are generally good. Nasal fractures are common, comprising about 40% of facial fractures. Males in their 20s are most commonly affected.
Signs and symptoms
Symptoms of a broken nose include bruising, swelling, tenderness, pain, deformity, and/or bleeding of the nose and nasal region of the face. The patient may have difficulty breathing, or excessive nosebleeds (if the nasal mucosa are damaged). The patient may also have bruising around one or both eyes.
Cause
Nasal fractures are caused by physical trauma to the face. Common sources of nasal fractures include sports injuries, fighting, falls, and car accidents in the younger age groups, and falls from syncope or impaired balance in the elderly.
Diagnosis
Nasal fractures are usually identified visually and through physical examination. In addition, relevant questions to ask the patient include whether there is a noticeable cosmetic deformity and whether the patient has difficulty breathing through the nose after the injury. Medical imaging is generally not recommended. A priority is to distinguish simple fractures limited to the nasal bones (Type 1) from fractures that also involve other facial bones and/or the nasal septum (Types 2 and 3). In simple Type 1 fractures, X-rays supply surprisingly little information beyond the clinical examination. However, diagnosis may be confirmed with X-rays or CT scans, and these are required if other facial injuries are suspected.
A fracture that runs horizontally across the septum is sometimes called a "Jarjavay fracture", and a vertical one, a "Chevallet fracture".
Although treatment of an uncomplicated fracture of nasal bones is not urgent—a referral for specific treatment in five to seven days usually suffices—an associated injury, nasal septal hematoma, occurs in about 5% of cases and does require urgent treatment and should be looked for during the assessment of nasal injuries.
Treatment
Minor nasal fractures may be allowed to heal on their own, provided there is no significant cosmetic deformity. Ice and pain medication may be prescribed to ease discomfort during the healing process. For nasal fractures where the nose has been deformed, manual alignment (i.e., closed reduction) may be attempted, usually with good results. Manual alignment should be performed in adults within 10 days of injury (before the bone heals in the misaligned state). In children, bone healing occurs faster, so manual alignment should ideally be performed within 4 days of injury. Injuries involving other structures (Types 2 and 3) must be recognized and treated surgically.
Prognosis
Bone stability after a fracture occurs between 3 and 5 weeks. Full bone fusion occurs between 4 and 8 weeks.
| Biology and health sciences | Types | Health |
18676889 | https://en.wikipedia.org/wiki/Tropical%20rainforest%20climate | Tropical rainforest climate | A tropical rainforest climate is a tropical climate sub-type usually found within 10 to 15 degrees latitude of the equator. There are some other areas at higher latitudes, such as Bermuda, the coast of southernmost Florida, United States (Fort Lauderdale, West Palm Beach), and Okinawa, Japan that fall into the tropical rainforest climate category. They experience high mean annual temperatures, small temperature ranges, and rain that falls throughout the year. Regions with this climate are typically designated Af by the Köppen climate classification. A tropical rainforest climate is typically hot, very humid, and wet with no dry season.
Description
Tropical rainforests have a type of tropical climate (at least 18 °C, or 64.4 °F, in their coldest month) in which there is no dry season: all months have an average precipitation of at least 60 mm. There are no distinct wet or dry seasons, as rainfall is high throughout the year. One day in a tropical rainforest climate can be very similar to the next, while the change in temperature between day and night may be larger than the average change in temperature over the year.
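The Af criteria are simple enough to express directly. The following is a minimal, hypothetical sketch assuming the standard Köppen thresholds (coldest month at least 18 °C, every month at least 60 mm of rain); it does not separate Af from the Am/Aw boundaries.

```python
def is_tropical_rainforest(temps_c, precip_mm):
    """Köppen Af check for 12 monthly means: every month averages at
    least 18 °C and receives at least 60 mm of rain (the commonly used
    Af precipitation threshold). Does not separate Af from Am."""
    return min(temps_c) >= 18 and min(precip_mm) >= 60
```

A single month below either threshold is enough to disqualify a station.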
Equatorial climates and tropical trade-wind climates
When tropical rainforest climates are dominated more by the Intertropical Convergence Zone (ITCZ) than by the trade winds (and with no or rare cyclones), and are thus usually located near the equator, they are also called equatorial climates. When they are instead dominated more by the trade winds than by the ITCZ, they are called tropical trade-wind climates. In pure equatorial climates, the atmospheric pressure is almost constantly low, so the horizontal pressure gradient is weak; winds are consequently rare and usually light, apart from sea and land breezes in coastal areas. In tropical trade-wind climates, often located at higher latitudes than the equatorial climates, the wind is almost permanent, which incidentally explains why their rainforest formations are impoverished compared with those of equatorial climates: the vegetation must withstand the strong winds that accompany tropical disturbances.
Cities with tropical rainforest climates
Asia
Bandar Seri Begawan, Brunei
Car Nicobar, India
Malacca, Malaysia
Balikpapan, Indonesia
Banjarmasin, Indonesia
Bogor, Indonesia
Jayapura, Indonesia
Medan, Indonesia
Padang, Indonesia
Palembang, Indonesia
Pekanbaru, Indonesia
Pontianak, Indonesia
Tarakan, Indonesia
Ishigaki, Japan
Ipoh, Malaysia
Kuching, Malaysia
George Town, Malaysia
Johor Bahru, Malaysia
Kuala Lumpur, Malaysia
Davao City, Philippines
Polomolok, Philippines
Tacloban, Leyte, Philippines
Singapore
Colombo, Sri Lanka
Kurunegala, Sri Lanka (bordering on Am)
Ratnapura, Sri Lanka
Sri Jayawardenepura Kotte, Sri Lanka (bordering on Am)
Orchid Island, Taiwan
Nakhon Si Thammarat, Thailand
Narathiwat, Thailand (bordering on Am)
Oceania
Pago Pago, American Samoa
Tubuai, Austral Islands
Innisfail, Queensland, Australia
Avarua, Cook Islands
Palikir, Federated States of Micronesia
Suva, Fiji
Hagåtña, Guam
Atuona, French Polynesia
Mata Utu, French Polynesia
Papeete, French Polynesia
Tarawa, Kiribati
Majuro, Marshall Islands
Yaren, Nauru
Alofi, Niue, New Zealand
Koror, Palau
Tabubil, Papua New Guinea
Lae, Papua New Guinea
Pitcairn Island
Apia, Samoa
Honiara, Solomon Islands
Nuku’alofa, Tonga
Funafuti, Tuvalu
Hilo, Hawaii, United States
Port Vila, Vanuatu
Africa
Moroni, Comoros
Boende, Democratic Republic of the Congo
Kisumu, Kenya
Harper, Liberia
Antalaha, Madagascar
Manakara, Madagascar
Toamasina, Madagascar
Victoria, Seychelles
Kampala, Uganda
Americas
Punta Gorda, Belize
Hamilton, Bermuda (bordering on Cfa)
Villa Tunari, Bolivia
Belém, Brazil
Macaé, Brazil
Manaus, Brazil
Salvador, Brazil
Santos, Brazil
Easter Island, Chile (bordering on Cfa)
Buenaventura, Valle del Cauca, Colombia
Florencia, Colombia
Leticia, Colombia
Quibdó, Colombia
Cocos Island, Costa Rica
Limón, Costa Rica
Higüey, Dominican Republic (bordering on Am)
Puyo, Ecuador
Saint-Laurent-du-Maroni, French Guiana
St. George's, Grenada
Pointe-à-Pitre, Guadeloupe (bordering on Am)
Puerto Barrios, Guatemala
Georgetown, Guyana
La Ceiba, Honduras
Port Antonio, Jamaica
Bluefields, Nicaragua
Bocas del Toro, Panama
Changuinola, Panama
Iquitos, Peru
Castries, Saint Lucia (bordering on Am)
Lelydorp, Suriname
Paramaribo, Suriname
Scarborough, Trinidad and Tobago
West Palm Beach, Florida, United States (bordering on Am)
Fort Lauderdale, Florida, United States (bordering on Am)
| Physical sciences | Climates | Earth science |
27135340 | https://en.wikipedia.org/wiki/Seaweed%20farming | Seaweed farming | Seaweed farming or kelp farming is the practice of cultivating and harvesting seaweed. In its simplest form, farmers gather seaweed from natural beds, while at the other extreme farmers fully control the crop's life cycle.
The seven most cultivated taxa are Eucheuma spp., Kappaphycus alvarezii, Gracilaria spp., Saccharina japonica, Undaria pinnatifida, Pyropia spp., and Sargassum fusiforme. Eucheuma and K. alvarezii are attractive for carrageenan (a gelling agent); Gracilaria is farmed for agar; the rest are eaten after limited processing. Seaweeds are different from mangroves and seagrasses, as they are photosynthetic algal organisms and are non-flowering.
The largest seaweed-producing countries as of 2022 are China (58.62%) and Indonesia (28.6%); followed by South Korea (5.09%) and the Philippines (4.19%). Other notable producers include North Korea (1.6%), Japan (1.15%), Malaysia (0.53%), Zanzibar (Tanzania, 0.5%), and Chile (0.3%). Seaweed farming has frequently been developed to improve economic conditions and to reduce fishing pressure.
The Food and Agriculture Organization (FAO) reported that world production in 2019 was over 35 million tonnes. North America produced some 23,000 tonnes of wet seaweed. Alaska, Maine, France, and Norway each more than doubled their seaweed production since 2018. As of 2019, seaweed represented 30% of marine aquaculture.
Seaweed is a carbon-negative crop, with a high potential for climate change mitigation. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" as a mitigation tactic. World Wildlife Fund, Oceans 2050, and The Nature Conservancy publicly support expanded seaweed cultivation.
Methods
The earliest seaweed farming guides in the Philippines recommended the cultivation of Laminaria seaweed on reef flats at approximately one meter's depth at low tide. They also recommended cutting off seagrasses and removing sea urchins before farm construction. Seedlings are tied to monofilament lines and strung between mangrove stakes in the substrate. This off-bottom method remains a primary method.
Long-line cultivation methods can be used in deeper water. Floating cultivation lines are anchored to the bottom and are widely used in North Sulawesi, Indonesia. Species cultured by long-line include those of the genera Saccharina, Undaria, Eucheuma, Kappaphycus, and Gracilaria.
Cultivation in Asia is relatively low-technology with a high labor requirement. Attempts to introduce technology to cultivate detached plant growth in tanks on land to reduce labor have yet to attain commercial viability.
Diseases
A bacterial infection called ice-ice stunts seaweed crops. In the Philippines, a 15 percent reduction in one species occurred from 2011 to 2013, representing 268,000 tonnes of seaweed.
Ecological impacts
Seaweed is an extractive crop that has little need for fertilisers or water, meaning that seaweed farms typically have a smaller environmental footprint than other agriculture or fed aquaculture. Many of the impacts of seaweed farms, both positive and negative, remain understudied and uncertain.
Nonetheless, many environmental problems can result from seaweed farming. For instance, seaweed farmers sometimes cut down mangroves to use as stakes. Removing mangroves negatively affects farming by reducing water quality and mangrove biodiversity. Farmers may remove eelgrass from their farming areas, damaging water quality.
Seaweed farming can pose a biosecurity risk, as farming activities have the potential to introduce or facilitate invasive species. For this reason, regions such as the UK, Maine and British Columbia only allow native varieties.
Farms may also have positive environmental effects. They may support welcome ecosystem services such as nutrient cycling, carbon uptake, and habitat provision.
Evidence suggests that seaweed farming can have positive impacts which include supplementing human diets, feeding livestock, creating biofuels, slowing climate change and providing crucial habitat for marine life, but it must scale sustainably in order to have these effects. One way for seaweed farming to scale at terrestrial farming levels is with the use of ROVs, which can install low-cost helical anchors that can extend seaweed farming into unprotected waters.
Seaweed can be used to capture, absorb, and incorporate excess nutrients into living tissue. This practice, known as nutrient bioextraction or bioharvesting, is the farming and harvesting of shellfish and seaweed to remove nitrogen and other nutrients from natural water bodies.
Similarly, seaweed farms may offer habitat that enhances biodiversity. Seaweed farms have been proposed to protect coral reefs by increasing diversity and providing habitat for local marine species. Farming may increase the production of herbivorous fish and shellfish. Pollnac reported an increase in siganid populations after the start of Eucheuma farming in villages in North Sulawesi.
Economic impacts
In Japan the annual production of nori amounts to US$2 billion and is one of the world's most valuable aquaculture crops. The demand for seaweed production provides plentiful work opportunities.
A study conducted in the Philippines reported that on plots of approximately one hectare, net income from Eucheuma farming was 5 to 6 times the average wage of an agricultural worker. The study also reported an increase in seaweed exports from 675 metric tons (MT) in 1967 to 13,191 MT in 1980, and 28,000 MT by 1988.
About 0.7 million tonnes of carbon are removed from the sea each year by commercially harvested seaweeds. In Indonesia, seaweed farms account for 40 percent of the national fisheries output and employ about one million people.
The Safe Seaweed Coalition is a research and industry group that promotes seaweed cultivation.
Tanzania
Seaweed farming has had widespread socio-economic impacts in Tanzania, where it has become a very important source of income for women and is the third biggest contributor of foreign currency to the country. 90% of the farmers are women, and much of the harvest is used by the skincare and cosmetics industry.
In 1982 Adelaida K. Semesi began a programme of research into seaweed cultivation in Zanzibar and its application resulted in greater investment in the industry.
Uses
Farmed seaweed is used in industrial products, as food, as an ingredient in animal feed, and as source material for biofuels.
Chemicals
Seaweeds are used to produce chemicals that can be used for various industrial, pharmaceutical, or food products. Two major derivative products are carrageenan and agar. Bioactive ingredients can be used for industries such as pharmaceuticals, industrial food, and cosmetics.
Carrageenan
Agar
Food
Fuel
Climate change mitigation
Seaweed cultivation in the open ocean can act as a form of carbon sequestration to mitigate climate change. Studies have reported that nearshore seaweed forests constitute a source of blue carbon, as seaweed detritus is carried into the middle and deep ocean thereby sequestering carbon. Macrocystis pyrifera (also known as giant kelp) sequesters carbon faster than any other species. It can reach tens of meters in length and grow as much as half a meter in a day. According to one study, covering 9% of the world's oceans with kelp forests could produce "sufficient biomethane to replace all of today's needs in fossil fuel energy, while removing 53 billion tons of CO2 per year from the atmosphere, restoring pre-industrial levels".
Seaweed farming may be an initial step towards adapting to and mitigating climate change, offering several co-benefits. These include shoreline protection through the dissipation of wave energy, which is especially important to mangrove shorelines. Carbon dioxide uptake would raise pH locally, benefiting calcifiers (e.g. crustaceans) and reducing coral bleaching. Finally, seaweed farming could provide oxygen input to coastal waters, thus countering ocean deoxygenation driven by rising ocean temperature.
Tim Flannery claimed that growing seaweeds in the open ocean, facilitated by artificial upwelling and substrate, can enable carbon sequestration if seaweeds are sunk to depths greater than one kilometer.
Seaweed contributes approximately 16–18.7% of the total marine-vegetation carbon sink. In 2010 there were 19.2 million tonnes of aquatic plants harvested worldwide: 6.8 million tonnes of brown seaweeds; 9.0 million tonnes of red seaweeds; 0.2 million tonnes of green seaweeds; and 3.2 million tonnes of miscellaneous aquatic plants. Seaweed is largely transported from coastal areas to the open and deep ocean, acting as a permanent storage of carbon biomass within marine sediments.
Ocean afforestation is a proposal for farming seaweed for carbon removal. After harvesting, the seaweed is decomposed into biogas (60% methane and 40% carbon dioxide) in an anaerobic digester. The methane can be used as a biofuel, while the carbon dioxide can be stored to keep it out of the atmosphere.
Marine permaculture
Similarly, the NGO Climate Foundation and permaculture experts claimed that offshore seaweed ecosystems can be cultivated according to permaculture principles, constituting marine permaculture. The concept envisions using artificial upwelling and floating, submerged platforms as substrate to replicate natural seaweed ecosystems that provide habitat and the basis of a trophic pyramid for marine life. Seaweeds and fish can be sustainably harvested. As of 2020, successful trials had taken place in Hawaii, the Philippines, Puerto Rico and Tasmania. The idea featured as a solution covered by the documentary 2040 and in the book Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming.
History
Human use of seaweed is known from the Neolithic period. Cultivation of gim (laver) in Korea is reported in books from the 15th century. Seaweed farming began in Japan as early as 1670 in Tokyo Bay. In autumn of each year, farmers would throw bamboo branches into shallow, muddy water, where the spores of the seaweed would collect. A few weeks later these branches would be moved to a river estuary. Nutrients from the river helped the seaweed to grow.
In the 1940s, the Japanese improved this method by placing nets of synthetic material tied to bamboo poles. This effectively doubled production. A cheaper variant of this method is called the hibi method—ropes stretched between bamboo poles. In the early 1970s, demand for seaweed and seaweed products outstripped supply, and cultivation was viewed as the best means to increase production.
In the tropics, commercial cultivation of Caulerpa lentillifera (sea grapes) was pioneered in the 1950s in Cebu, Philippines, after accidental introduction of C. lentillifera to fish ponds on the island of Mactan. This was further developed by local research, particularly through the efforts of Gavino Trono, since recognized as a National Scientist of the Philippines. Local research and experimental cultures led to the development of the first commercial farming methods for other warm-water algae (since cold-water red and brown edible algae favored in East Asia do not grow in the tropics), including the first successful commercial cultivation of carrageenan-producing algae. These include Eucheuma spp., Kappaphycus alvarezii, Gracilaria spp., and Halymenia durvillei. In 1997, it was estimated that 40,000 people in the Philippines made their living through seaweed farming. The Philippines was the world's largest producer of carrageenan for several decades until it was overtaken by Indonesia in 2008.
Seaweed farming spread beyond Japan and the Philippines to southeast Asia, Canada, Great Britain, Spain, and the United States.
In the 2000s, seaweed farming has been getting increasing attention due to its potential for mitigating both climate change and other environmental issues, such as agricultural runoff. Seaweed farming can be mixed with other aquaculture, such as shellfish, to improve water bodies, such as in the practices developed by American non-profit GreenWave. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" as a mitigation tactic.
In 2024 a commercial-scale seaweed farm began construction within the Hollandse Kust Zuid (HKZ) 139 turbine wind farm. The project uses 13-metre long "Eco-anchors" that cover the surface with a marine life habitat using materials such as oyster shells, wood, and cork.
| Technology | Aquaculture | null |
1276437 | https://en.wikipedia.org/wiki/Soil%20mechanics | Soil mechanics | Soil mechanics is a branch of soil physics and applied mechanics that describes the behavior of soils. It differs from fluid mechanics and solid mechanics in the sense that soils consist of a heterogeneous mixture of fluids (usually air and water) and particles (usually clay, silt, sand, and gravel) but soil may also contain organic solids and other matter. Along with rock mechanics, soil mechanics provides the theoretical basis for analysis in geotechnical engineering, a subdiscipline of civil engineering, and engineering geology, a subdiscipline of geology. Soil mechanics is used to analyze the deformations of and flow of fluids within natural and man-made structures that are supported on or made of soil, or structures that are buried in soils. Example applications are building and bridge foundations, retaining walls, dams, and buried pipeline systems. Principles of soil mechanics are also used in related disciplines such as geophysical engineering, coastal engineering, agricultural engineering, and hydrology.
This article describes the genesis and composition of soil, the distinction between pore water pressure and inter-granular effective stress, capillary action of fluids in the soil pore spaces, soil classification, seepage and permeability, time dependent change of volume due to squeezing water out of tiny pore spaces, also known as consolidation, shear strength and stiffness of soils. The shear strength of soils is primarily derived from friction between the particles and interlocking, which are very sensitive to the effective stress. The article concludes with some examples of applications of the principles of soil mechanics such as slope stability, lateral earth pressure on retaining walls, and bearing capacity of foundations.
Genesis and composition of soils
Genesis
The primary mechanism of soil creation is the weathering of rock. All rock types (igneous rock, metamorphic rock and sedimentary rock) may be broken down into small particles to create soil. Weathering mechanisms are physical weathering, chemical weathering, and biological weathering. Human activities such as excavation, blasting, and waste disposal may also create soil. Over geologic time, deeply buried soils may be altered by pressure and temperature to become metamorphic or sedimentary rock, and if melted and solidified again, they would complete the geologic cycle by becoming igneous rock.
Physical weathering includes temperature effects, freeze and thaw of water in cracks, rain, wind, impact and other mechanisms. Chemical weathering includes dissolution of matter composing a rock and precipitation in the form of another mineral. Clay minerals, for example, can be formed by weathering of feldspar, which is the most common mineral present in igneous rock.
The most common mineral constituent of silt and sand is quartz, also called silica, which has the chemical name silicon dioxide. The reason that feldspar is most common in rocks but silica is more prevalent in soils is that feldspar is much more soluble than silica.
Silt, sand, and gravel are essentially small pieces of broken rock.
According to the Unified Soil Classification System, silt particle sizes are in the range of 0.002 mm to 0.075 mm and sand particles have sizes in the range of 0.075 mm to 4.75 mm.
Gravel particles are broken pieces of rock in the size range 4.75 mm to 100 mm. Particles larger than gravel are called cobbles and boulders.
Transport
Soil deposits are affected by the mechanism of transport and deposition to their location. Soils that are not transported are called residual soils—they exist at the same location as the rock from which they were generated. Decomposed granite is a common example of a residual soil. The common mechanisms of transport are the actions of gravity, ice, water, and wind. Wind blown soils include dune sands and loess. Water carries particles of different size depending on the speed of the water, thus soils transported by water are graded according to their size. Silt and clay may settle out in a lake, and gravel and sand collect at the bottom of a river bed. Wind blown soil deposits (aeolian soils) also tend to be sorted according to their grain size. Erosion at the base of glaciers is powerful enough to pick up large rocks and boulders as well as soil; soils dropped by melting ice can be a well graded mixture of widely varying particle sizes. Gravity on its own may also carry particles down from the top of a mountain to make a pile of soil and boulders at the base; soil deposits transported by gravity are called colluvium.
The mechanism of transport also has a major effect on the particle shape. For example, low velocity grinding in a river bed will produce rounded particles. Freshly fractured colluvium particles often have a very angular shape.
Soil composition
Soil mineralogy
Silts, sands and gravels are classified by their size, and hence they may consist of a variety of minerals. Owing to the stability of quartz compared to other rock minerals, quartz is the most common constituent of sand and silt. Mica and feldspar are other common minerals present in sands and silts. The mineral constituents of gravel may be more similar to those of the parent rock.
The common clay minerals are montmorillonite or smectite, illite, and kaolinite or kaolin. These minerals tend to form in sheet or plate like structures, with length typically ranging between 10^−7 m and 4×10^−6 m and thickness typically ranging between 10^−9 m and 2×10^−6 m, and they have a relatively large specific surface area. The specific surface area (SSA) is defined as the ratio of the surface area of particles to the mass of the particles. Clay minerals typically have specific surface areas in the range of 10 to 1,000 square meters per gram of solid. Due to the large surface area available for chemical, electrostatic, and van der Waals interaction, the mechanical behavior of clay minerals is very sensitive to the amount of pore fluid available and the type and amount of dissolved ions in the pore fluid.
The minerals of soils are predominantly formed by atoms of oxygen, silicon, hydrogen, and aluminum, organized in various crystalline forms. These elements along with calcium, sodium, potassium, magnesium, and carbon constitute over 99 per cent of the solid mass of soils.
Grain size distribution
Soils consist of a mixture of particles of different size, shape and mineralogy. Because the size of the particles obviously has a significant effect on the soil behavior, the grain size and grain size distribution are used to classify soils. The grain size distribution describes the relative proportions of particles of various sizes. The grain size is often visualized in a cumulative distribution graph which, for example, plots the percentage of particles finer than a given size as a function of size. The median grain size, D50, is the size for which 50% of the particle mass consists of finer particles. Soil behavior, especially the hydraulic conductivity, tends to be dominated by the smaller particles; hence, the term "effective size", denoted by D10, is defined as the size for which 10% of the particle mass consists of finer particles.
Sands and gravels that possess a wide range of particle sizes with a smooth distribution of particle sizes are called well graded soils. If the soil particles in a sample are predominantly in a relatively narrow range of sizes, the sample is uniformly graded. If a soil sample has distinct gaps in the gradation curve, e.g., a mixture of gravel and fine sand, with no coarse sand, the sample may be gap graded. Uniformly graded and gap graded soils are both considered to be poorly graded. There are many methods for measuring particle-size distribution. The two traditional methods are sieve analysis and hydrometer analysis.
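The reading of characteristic sizes such as D10 (effective size) and D50 (median grain size) off a measured gradation curve can be sketched as below. The data and function name are hypothetical, and log-linear interpolation between sieve points is one common convention, not the only one:

```python
import math

def grain_size_at(percent, sizes_mm, percent_finer):
    """Interpolate the grain size (mm) at which `percent` of the mass is
    finer, using log-linear interpolation on a cumulative gradation curve.
    `sizes_mm` must be ascending, with matching `percent_finer` values."""
    for i in range(1, len(sizes_mm)):
        p0, p1 = percent_finer[i - 1], percent_finer[i]
        if p0 <= percent <= p1:
            d0, d1 = sizes_mm[i - 1], sizes_mm[i]
            f = (percent - p0) / (p1 - p0)
            return 10 ** (math.log10(d0) + f * (math.log10(d1) - math.log10(d0)))
    raise ValueError("percent outside measured curve")

# hypothetical gradation data: sieve opening (mm) vs. percent passing
sizes = [0.075, 0.15, 0.30, 0.60, 1.18, 2.36, 4.75]
passing = [4, 10, 30, 55, 75, 90, 100]

d10 = grain_size_at(10, sizes, passing)   # effective size
d50 = grain_size_at(50, sizes, passing)   # median grain size
```

For this invented curve, D10 falls exactly at the 0.15 mm sieve point, while D50 is interpolated between the 0.30 mm and 0.60 mm points.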
Sieve analysis
The size distribution of gravel and sand particles are typically measured using sieve analysis. The formal procedure is described in ASTM D6913-04(2009). A stack of sieves with accurately dimensioned holes between a mesh of wires is used to separate the particles into size bins. A known volume of dried soil, with clods broken down to individual particles, is put into the top of a stack of sieves arranged from coarse to fine. The stack of sieves is shaken for a standard period of time so that the particles are sorted into size bins. This method works reasonably well for particles in the sand and gravel size range. Fine particles tend to stick to each other, and hence the sieving process is not an effective method. If there are a lot of fines (silt and clay) present in the soil it may be necessary to run water through the sieves to wash the coarse particles and clods through.
A variety of sieve sizes are available. The boundary between sand and silt is arbitrary. According to the Unified Soil Classification System, a #4 sieve (4 openings per inch) having 4.75 mm opening size separates sand from gravel and a #200 sieve with a 0.075 mm opening separates sand from silt and clay. According to the British standard, 0.063 mm is the boundary between sand and silt, and 2 mm is the boundary between sand and gravel.
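The bookkeeping of a sieve analysis — converting the mass retained on each sieve into cumulative percent passing — can be illustrated with a small, hypothetical example (sieve openings and masses invented for illustration):

```python
def percent_passing(sieve_mm, retained_g):
    """Compute cumulative percent passing for a sieve stack.
    `sieve_mm` lists openings from coarsest to finest; `retained_g` is the
    mass caught on each sieve, with the pan (material finer than the last
    sieve) as the final entry, so len(retained_g) == len(sieve_mm) + 1."""
    total = sum(retained_g)
    passing = []
    cumulative = 0.0
    for mass in retained_g[:-1]:
        cumulative += mass
        passing.append(100.0 * (total - cumulative) / total)
    return passing  # one value per sieve opening

# hypothetical sample: #4 (4.75 mm) and #200 (0.075 mm) sieves plus pan
openings = [4.75, 0.075]
retained = [120.0, 350.0, 30.0]   # grams on #4, on #200, and in the pan
print(percent_passing(openings, retained))  # [76.0, 6.0]
```

Here 76% of the mass passes the #4 sieve (so the coarse fraction is mostly sand-sized) and 6% passes the #200 sieve (the fines content).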
Hydrometer analysis
The classification of fine-grained soils, i.e., soils that are finer than sand, is determined primarily by their Atterberg limits, not by their grain size. If it is important to determine the grain size distribution of fine-grained soils, the hydrometer test may be performed. In the hydrometer tests, the soil particles are mixed with water and shaken to produce a dilute suspension in a glass cylinder, and then the cylinder is left to sit. A hydrometer is used to measure the density of the suspension as a function of time. Clay particles may take several hours to settle past the depth of measurement of the hydrometer. Sand particles may take less than a second. Stokes' law provides the theoretical basis to calculate the relationship between sedimentation velocity and particle size. ASTM provides the detailed procedures for performing the Hydrometer test.
Clay particles can be sufficiently small that they never settle because they are kept in suspension by Brownian motion, in which case they may be classified as colloids.
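The settling-time reasoning above follows from Stokes' law, v = (ρs − ρw) g d² / (18 μ). The sketch below uses assumed typical values (quartz grain density 2650 kg/m³, water at about 20 °C) rather than parameters from any standardized test:

```python
def stokes_velocity(d_m, rho_s=2650.0, rho_w=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m in
    water by Stokes' law: v = (rho_s - rho_w) * g * d**2 / (18 * mu)."""
    return (rho_s - rho_w) * g * d_m ** 2 / (18.0 * mu)

# time to settle 0.1 m, e.g. past a hydrometer's depth of measurement
for d in (75e-6, 2e-6):          # sand/silt and silt/clay size boundaries
    t = 0.1 / stokes_velocity(d)
    print(f"d = {d * 1e6:5.1f} um settles 0.1 m in {t:9.1f} s")
```

With these assumed values a 75 µm particle settles 0.1 m in tens of seconds, while a 2 µm clay-sized particle takes several hours — consistent with the time scales described above.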
Mass-volume relations
There are a variety of parameters used to describe the relative proportions of air, water and solid in a soil. This section defines these parameters and some of their interrelationships. The basic notation is as follows:
V_a, V_w, and V_s represent the volumes of air, water and solids in a soil mixture;
W_a, W_w, and W_s represent the weights of air, water and solids in a soil mixture;
M_a, M_w, and M_s represent the masses of air, water and solids in a soil mixture;
ρ_a, ρ_w, and ρ_s represent the densities of the constituents (air, water and solids) in a soil mixture;
Note that the weights, W, can be obtained by multiplying the mass, M, by the acceleration due to gravity, g; e.g., W_s = M_s g
Specific Gravity is the ratio of the density of one material compared to the density of pure water (ρ_w = 1 g/cm³).
Specific gravity of solids, G_s = ρ_s / ρ_w
Note that specific weight, conventionally denoted by the symbol γ, may be obtained by multiplying the density (ρ) of a material by the acceleration due to gravity, g: γ = ρg.
Density, Bulk Density, or Wet Density, ρ, are different names for the density of the mixture, i.e., the total mass of air, water, solids divided by the total volume of air, water and solids (the mass of air is assumed to be zero for practical purposes): ρ = (M_s + M_w) / (V_s + V_w + V_a)
Dry Density, ρ_d, is the mass of solids divided by the total volume of air, water and solids: ρ_d = M_s / (V_s + V_w + V_a)
Buoyant Density, ρ′, defined as the density of the mixture minus the density of water, is useful if the soil is submerged under water: ρ′ = ρ − ρ_w
where ρ_w is the density of water
Water Content, w, is the ratio of mass of water to mass of solid: w = M_w / M_s. It is easily measured by weighing a sample of the soil, drying it out in an oven and re-weighing. Standard procedures are described by ASTM.
Void ratio, e, is the ratio of the volume of voids to the volume of solids: e = V_v / V_s
Porosity, n, is the ratio of volume of voids to the total volume, and is related to the void ratio: n = V_v / V_t = e / (1 + e)
Degree of saturation, S, is the ratio of the volume of water to the volume of voids: S = V_w / V_v
From the above definitions, some useful relationships can be derived by use of basic algebra.
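A minimal sketch of how the definitions above combine in practice: given the wet mass, oven-dry mass, and total volume of a specimen, plus an assumed specific gravity of solids, the remaining phase parameters follow from the basic algebra. All input values here are hypothetical:

```python
def phase_relations(m_total_g, m_dry_g, v_total_cm3, Gs=2.70, rho_w=1.0):
    """Derive basic mass-volume parameters from a weighed, oven-dried
    specimen of known total volume (rho_w in g/cm^3). Illustrative only."""
    m_w = m_total_g - m_dry_g          # mass of water
    w = m_w / m_dry_g                  # water content
    v_s = m_dry_g / (Gs * rho_w)       # volume of solids
    v_w = m_w / rho_w                  # volume of water
    v_v = v_total_cm3 - v_s            # volume of voids
    e = v_v / v_s                      # void ratio
    n = e / (1 + e)                    # porosity
    S = v_w / v_v                      # degree of saturation
    rho = m_total_g / v_total_cm3      # bulk (wet) density
    rho_d = m_dry_g / v_total_cm3      # dry density
    return {"w": w, "e": e, "n": n, "S": S, "rho": rho, "rho_d": rho_d}

# hypothetical specimen: 185 g wet, 160 g dry, 100 cm^3 total volume
r = phase_relations(m_total_g=185.0, m_dry_g=160.0, v_total_cm3=100.0)
```

For this invented specimen the void ratio works out to e = 0.6875, and the porosity n = e/(1+e) agrees with V_v/V_t, as the relationships require.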
Soil classification
Geotechnical engineers classify the soil particle types by performing tests on disturbed (dried, passed through sieves, and remolded) samples of the soil. This provides information about the characteristics of the soil grains themselves. Classification of the types of grains present in a soil does not account for important effects of the structure or fabric of the soil, terms that describe compactness of the particles and patterns in the arrangement of particles in a load carrying framework as well as the pore size and pore fluid distributions. Engineering geologists also classify soils based on their genesis and depositional history.
Classification of soil grains
In the US and other countries, the Unified Soil Classification System (USCS) is often used for soil classification. Other classification systems include the British Standard BS 5930 and the AASHTO soil classification system.
Classification of sands and gravels
In the USCS, gravels (given the symbol G) and sands (given the symbol S) are classified according to their grain size distribution. For the USCS, gravels may be given the classification symbol GW (well-graded gravel), GP (poorly graded gravel), GM (gravel with a large amount of silt), or GC (gravel with a large amount of clay). Likewise sands may be classified as being SW, SP, SM or SC. Sands and gravels with a small but non-negligible amount of fines (5–12%) may be given a dual classification such as SW-SC.
Atterberg limits
Clays and Silts, often called 'fine-grained soils', are classified according to their Atterberg limits; the most commonly used Atterberg limits are the Liquid Limit (denoted by LL or ), Plastic Limit (denoted by PL or ), and Shrinkage Limit (denoted by SL).
The Liquid Limit is the water content at which the soil behavior transitions from a plastic solid to a liquid. The Plastic Limit is the water content at which the soil behavior transitions from that of a plastic solid to a brittle solid. The Shrinkage Limit corresponds to a water content below which the soil will not shrink as it dries. The consistency of fine-grained soil varies in proportion to the water content.
As the transitions from one state to another are gradual, the tests have adopted arbitrary definitions to determine the boundaries of the states. The liquid limit is determined by measuring the water content for which a groove closes after 25 blows in a standard test. Alternatively, a fall cone test apparatus may be used to measure the liquid limit. The undrained shear strength of remolded soil at the liquid limit is approximately 2 kPa. The Plastic Limit is the water content below which it is not possible to roll by hand the soil into 3 mm diameter cylinders. The soil cracks or breaks up as it is rolled down to this diameter. Remolded soil at the plastic limit is quite stiff, having an undrained shear strength of the order of about 200 kPa.
The Plasticity Index of a particular soil specimen is defined as the difference between the Liquid Limit and the Plastic Limit of the specimen; it is an indicator of how much water the soil particles in the specimen can absorb, and correlates with many engineering properties like permeability, compressibility, shear strength and others. Generally, clays with high plasticity have lower permeability and are also more difficult to compact.
Classification of silts and clays
According to the Unified Soil Classification System (USCS), silts and clays are classified by plotting the values of their plasticity index and liquid limit on a plasticity chart. The A-Line on the chart separates clays (given the USCS symbol C) from silts (given the symbol M). LL=50% separates high plasticity soils (given the modifier symbol H) from low plasticity soils (given the modifier symbol L). A soil that plots above the A-line and has LL>50% would, for example, be classified as CH. Other possible classifications of silts and clays are ML, CL and MH. If the Atterberg limits plot in the "hatched" region on the graph near the origin, the soils are given the dual classification 'CL-ML'.
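The chart logic can be expressed compactly. The sketch below assumes the standard A-line equation PI = 0.73 (LL − 20) and treats the hatched CL-ML zone as 4 ≤ PI ≤ 7 above the A-line, which matches common USCS practice but omits chart details such as the U-line:

```python
def uscs_fines(LL, PL):
    """Classify a fine-grained soil by the USCS plasticity chart.
    A-line: PI = 0.73 * (LL - 20); LL = 50 separates H from L."""
    PI = LL - PL                        # plasticity index
    a_line = 0.73 * (LL - 20)
    if 4 <= PI <= 7 and PI >= a_line:
        return "CL-ML"                  # hatched zone near the origin
    plastic = "C" if PI >= a_line else "M"   # clay above, silt below A-line
    return plastic + ("H" if LL >= 50 else "L")

print(uscs_fines(LL=60, PL=25))  # CH: PI=35 plots above the A-line, LL>=50
print(uscs_fines(LL=40, PL=30))  # ML: PI=10 plots below the A-line, LL<50
```

Note that a soil with, say, LL = 25 and PL = 20 (PI = 5) lands in the hatched zone and receives the dual symbol CL-ML.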
Indices related to soil strength
Liquidity index
The effects of the water content on the strength of saturated remolded soils can be quantified by the use of the liquidity index, LI: LI = (w − PL) / (LL − PL)
When the LI is 1, remolded soil is at the liquid limit and it has an undrained shear strength of about 2 kPa. When the soil is at the plastic limit, the LI is 0 and the undrained shear strength is about 200 kPa.
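As a hypothetical illustration, the liquidity index follows directly from its definition, and the two anchor strengths quoted above (about 200 kPa at LI = 0 and about 2 kPa at LI = 1) suggest a rough log-linear interpolation in between — an assumption made for illustration, not a standardized correlation:

```python
def liquidity_index(w, PL, LL):
    """LI = (w - PL) / (LL - PL); all water contents in the same units."""
    return (w - PL) / (LL - PL)

def undrained_strength_kpa(LI):
    """Rough log-linear interpolation between ~200 kPa at the plastic
    limit (LI = 0) and ~2 kPa at the liquid limit (LI = 1)."""
    return 200.0 * (2.0 / 200.0) ** LI

# hypothetical soil: w = 35%, PL = 20%, LL = 50%
LI = liquidity_index(w=35.0, PL=20.0, LL=50.0)       # 0.5
print(LI, round(undrained_strength_kpa(LI), 1))       # 0.5 20.0
```

Midway between the limits (LI = 0.5) this interpolation gives about 20 kPa, the geometric mean of the two anchor values.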
Relative density
The density of sands (cohesionless soils) is often characterized by the relative density, D_r: D_r = 100% × (e_max − e) / (e_max − e_min)
where: e_max is the "maximum void ratio" corresponding to a very loose state, e_min is the "minimum void ratio" corresponding to a very dense state and e is the in situ void ratio. Methods used to calculate relative density are defined in ASTM D4254-00(2006).
Thus if D_r = 100% the sand or gravel is very dense, and if D_r = 0% the soil is extremely loose and unstable.
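The definition of relative density can be sketched directly; the void ratios below are hypothetical values for a clean sand:

```python
def relative_density(e, e_max, e_min):
    """Dr = (e_max - e) / (e_max - e_min), expressed as a percentage."""
    return 100.0 * (e_max - e) / (e_max - e_min)

# hypothetical sand with e_max = 0.85 (loosest) and e_min = 0.50 (densest)
print(relative_density(0.85, 0.85, 0.50))            # 0.0  -> extremely loose
print(relative_density(0.50, 0.85, 0.50))            # 100.0 -> very dense
print(round(relative_density(0.64, 0.85, 0.50), 1))  # 60.0 -> medium-dense
```

An in situ void ratio near e_max gives a low Dr (loose, unstable), while one near e_min gives a Dr approaching 100% (dense).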
Seepage: steady state flow of water
If fluid pressures in a soil deposit are uniformly increasing with depth according to u = ρ_w g z_w
then hydrostatic conditions will prevail and the fluids will not be flowing through the soil. Here z_w is the depth below the water table. However, if the water table is sloping or there is a perched water table as indicated in the accompanying sketch, then seepage will occur. For steady state seepage, the seepage velocities are not varying with time. If the water tables are changing levels with time, or if the soil is in the process of consolidation, then steady state conditions do not apply.
Darcy's law
Darcy's law states that the volume of flow of the pore fluid through a porous medium per unit time is proportional to the rate of change of excess fluid pressure with distance. The constant of proportionality includes the viscosity of the fluid and the intrinsic permeability of the soil. For the simple case of a horizontal tube filled with soil: Q = −(κ A / μ) (u_b − u_a) / L
The total discharge, Q (having units of volume per time, e.g., ft3/s or m3/s), is proportional to the intrinsic permeability, κ, the cross sectional area, A, and rate of pore pressure change with distance, (u_b − u_a) / L, and inversely proportional to the dynamic viscosity of the fluid, μ. The negative sign is needed because fluids flow from high pressure to low pressure. So if the change in pressure is negative (in the x-direction) then the flow will be positive (in the x-direction). The above equation works well for a horizontal tube, but if the tube was inclined so that point b was at a different elevation than point a, the equation would not work. The effect of elevation is accounted for by replacing the pore pressure by excess pore pressure, u_e, defined as: u_e = u − ρ_w g z
where z is the depth measured from an arbitrary elevation reference (datum). Replacing u by u_e we obtain a more general equation for flow: Q = −(κ A / μ) (u_e,b − u_e,a) / L
Dividing both sides of the equation by A, and expressing the rate of change of excess pore pressure as a derivative, we obtain a more general equation for the apparent velocity in the x-direction: v_x = −(κ / μ) (du_e / dx)
where v_x has units of velocity and is called the Darcy velocity (or the specific discharge, filtration velocity, or superficial velocity). The pore or interstitial velocity v_px is the average velocity of fluid molecules in the pores; it is related to the Darcy velocity and the porosity n through the Dupuit-Forchheimer relationship: v_px = v_x / n
(Some authors use the term seepage velocity to mean the Darcy velocity, while others use it to mean the pore velocity.)
Civil engineers predominantly work on problems that involve water and predominantly work on problems on earth (in earth's gravity). For this class of problems, civil engineers will often write Darcy's law in a much simpler form: v_x = k i_x
where k is the hydraulic conductivity, defined as k = κ ρ_w g / μ, and i_x is the hydraulic gradient. The hydraulic gradient is the rate of change of total head with distance. The total head, h, at a point is defined as the height (measured relative to the datum) to which water would rise in a piezometer at that point. The total head is related to the excess water pressure by: h = u_e / (ρ_w g) + constant
and the constant is zero if the datum for head measurement is chosen at the same elevation as the origin for the depth, z, used to calculate u_e.
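A minimal sketch of the simplified civil-engineering form of Darcy's law, Q = k A i, together with the conversion from intrinsic permeability to hydraulic conductivity, k = κ ρ_w g / μ. All numerical values are hypothetical:

```python
def darcy_discharge(k_m_per_s, area_m2, head_drop_m, length_m):
    """Q = k * A * i for 1-D steady seepage, with hydraulic gradient
    i = (head drop) / (flow length). Returns discharge in m^3/s."""
    i = head_drop_m / length_m
    return k_m_per_s * area_m2 * i

def hydraulic_conductivity(kappa_m2, mu=1.0e-3, rho_w=1000.0, g=9.81):
    """k = kappa * rho_w * g / mu: intrinsic permeability (m^2) to
    hydraulic conductivity (m/s) for water at about 20 C."""
    return kappa_m2 * rho_w * g / mu

# hypothetical sand column: k = 1e-4 m/s, 0.01 m^2 cross-section,
# 0.5 m head drop over a 2 m flow path
Q = darcy_discharge(1e-4, 0.01, 0.5, 2.0)
print(Q)  # about 2.5e-7 m^3/s
```

The same gradient through a clayey soil with k several orders of magnitude lower would yield a correspondingly tiny discharge, which is why hydraulic conductivity dominates seepage estimates.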
Typical values of hydraulic conductivity
Values of hydraulic conductivity, k, can vary by many orders of magnitude depending on the soil type. Clays may have hydraulic conductivity as small as about 10^−12 m/s, while gravels may have hydraulic conductivity up to about 10^−1 m/s. Layering and heterogeneity and disturbance during the sampling and testing process make the accurate measurement of soil hydraulic conductivity a very difficult problem.
Flownets
Darcy's Law applies in one, two or three dimensions. In two or three dimensions, steady state seepage is described by Laplace's equation. Computer programs are available to solve this equation. But traditionally two-dimensional seepage problems were solved using a graphical procedure known as a flownet. One set of lines in the flownet are in the direction of the water flow (flow lines), and the other set of lines are in the direction of constant total head (equipotential lines). Flownets may be used to estimate the quantity of seepage under dams and sheet piling.
Seepage forces and erosion
When the seepage velocity is great enough, erosion can occur because of the frictional drag exerted on the soil particles. Vertically upwards seepage is a source of danger on the downstream side of sheet piling and beneath the toe of a dam or levee. Erosion of the soil, known as "soil piping", can lead to failure of the structure and to sinkhole formation. Seeping water removes soil, starting from the exit point of the seepage, and erosion advances upgradient. The term "sand boil" is used to describe the appearance of the discharging end of an active soil pipe.
Seepage pressures
Seepage in an upward direction reduces the effective stress within the soil. When the water pressure at a point in the soil is equal to the total vertical stress at that point, the effective stress is zero and the soil has no frictional resistance to deformation. For a surface layer, the vertical effective stress becomes zero within the layer when the upward hydraulic gradient is equal to the critical gradient. At zero effective stress soil has very little strength and layers of relatively impermeable soil may heave up due to the underlying water pressures. The loss in strength due to upward seepage is a common contributor to levee failures. The condition of zero effective stress associated with upward seepage is also called liquefaction, quicksand, or a boiling condition. Quicksand was so named because the soil particles move around and appear to be 'alive' (the biblical meaning of 'quick' – as opposed to 'dead'). (Note that it is not possible to be 'sucked down' into quicksand. On the contrary, you would float with about half your body out of the water.)
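The critical gradient mentioned above has a simple closed form, i_c = (γ_sat − γ_w)/γ_w, i.e., the buoyant unit weight divided by the unit weight of water; for many soils it is close to 1. A sketch with an invented saturated unit weight:

```python
def critical_gradient(gamma_sat, gamma_w=9.81):
    """Upward hydraulic gradient at which the vertical effective stress in a
    surface layer goes to zero: i_c = (gamma_sat - gamma_w) / gamma_w."""
    return (gamma_sat - gamma_w) / gamma_w

ic = critical_gradient(gamma_sat=19.6)  # kN/m^3; i_c comes out close to 1
```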
Effective stress and capillarity: hydrostatic conditions
To understand the mechanics of soils it is necessary to understand how normal stresses and shear stresses are shared by the different phases. Neither gas nor liquid provides significant resistance to shear stress. The shear resistance of soil is provided by friction and interlocking of the particles. The friction depends on the intergranular contact stresses between solid particles. The normal stresses, on the other hand, are shared by the fluid and the particles. Although the pore air is relatively compressible, and hence takes little normal stress in most geotechnical problems, liquid water is relatively incompressible and if the voids are saturated with water, the pore water must be squeezed out in order to pack the particles closer together.
The principle of effective stress, introduced by Karl Terzaghi, states that the effective stress σ' (i.e., the average intergranular stress between solid particles) may be calculated by a simple subtraction of the pore pressure from the total stress:
σ' = σ − u
where σ is the total stress and u is the pore pressure. It is not practical to measure σ' directly, so in practice the vertical effective stress is calculated from the pore pressure and vertical total stress. The distinction between the terms pressure and stress is also important. By definition, pressure at a point is equal in all directions but stresses at a point can be different in different directions. In soil mechanics, compressive stresses and pressures are considered to be positive and tensile stresses are considered to be negative, which is different from the solid mechanics sign convention for stress.
Total stress
For level ground conditions, the total vertical stress at a point, σ_v, is, on average, the weight of everything above that point per unit area. The vertical stress beneath a uniform surface layer with density ρ and thickness H is, for example:
σ_v = ρgH = γH
where g is the acceleration due to gravity, and γ is the unit weight of the overlying layer. If there are multiple layers of soil or water above the point of interest, the vertical stress may be calculated by summing the product of the unit weight and thickness of all of the overlying layers. Total stress increases with increasing depth in proportion to the density of the overlying soil.
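The layer-summation rule is a one-liner. The two-layer profile below is invented for illustration:

```python
def total_vertical_stress(layers):
    """Total vertical stress (kPa) as the sum of unit weight x thickness
    of all overlying layers, given as (unit_weight kN/m^3, thickness m)
    tuples ordered from the ground surface down."""
    return sum(gamma * h for gamma, h in layers)

# Hypothetical profile: 2 m of fill over 3 m of clay.
sigma_v = total_vertical_stress([(18.0, 2.0), (16.0, 3.0)])  # 84 kPa
```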
It is not possible to calculate the horizontal total stress in this way. Lateral earth pressures are addressed elsewhere.
Pore water pressure
Hydrostatic conditions
If the soil pores are filled with water that is not flowing but is static, the pore water pressures will be hydrostatic. The water table is located at the depth where the water pressure is equal to the atmospheric pressure. For hydrostatic conditions, the water pressure increases linearly with depth below the water table:
u = ρ_w g z_w
where ρ_w is the density of water, and z_w is the depth below the water table.
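Combining the hydrostatic pore pressure with Terzaghi's principle gives the vertical effective stress at a point. The depth and total stress below are invented, continuing the kPa/kN-per-cubic-metre convention:

```python
def hydrostatic_pore_pressure(z_w, gamma_w=9.81):
    """u = gamma_w * z_w (kPa) below the water table; a negative z_w
    (above the table) returns the negative capillary pressure."""
    return gamma_w * z_w

def effective_stress(sigma, u):
    """Terzaghi's principle: sigma' = sigma - u."""
    return sigma - u

u = hydrostatic_pore_pressure(z_w=3.0)      # ~29.4 kPa at 3 m below the table
sigma_eff = effective_stress(84.0, u)       # hypothetical total stress of 84 kPa
```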
Capillary action
Due to surface tension, water will rise up in a small capillary tube above a free surface of water. Likewise, water will rise above the water table into the small pore spaces around the soil particles. In fact, the soil may be completely saturated for some distance above the water table. Above the height of capillary saturation, the soil may be wet, but the water content will decrease with elevation. If the water in the capillary zone is not moving, the water pressure obeys the equation of hydrostatic equilibrium, u = ρ_w g z_w, but note that z_w is negative above the water table. Hence, hydrostatic water pressures are negative above the water table. The thickness of the zone of capillary saturation depends on the pore size, but typically the heights vary between a centimeter or so for coarse sand and tens of meters for a silt or clay. In fact, the pore space of soil is a uniform fractal, e.g. a set of uniformly distributed D-dimensional fractals of average linear size L. For clay soil it has been found that L = 0.15 mm and D = 2.7.
The surface tension of water explains why the water does not drain out of a wet sand castle or a moist ball of clay. Negative water pressures make the water stick to the particles and pull the particles toward each other; friction at the particle contacts makes a sand castle stable. But as soon as a wet sand castle is submerged below a free water surface, the negative pressures are lost and the castle collapses. Considering the effective stress equation, if the water pressure is negative, the effective stress may be positive, even on a free surface (a surface where the total normal stress is zero). The negative pore pressure pulls the particles together and causes compressive particle-to-particle contact forces.
Negative pore pressures in clayey soil can be much more powerful than those in sand. Negative pore pressures explain why clay soils shrink when they dry and swell as they are wetted. The swelling and shrinkage can cause major distress, especially to light structures and roads.
Later sections of this article address the pore water pressures for seepage and consolidation problems.
Consolidation: transient flow of water
Consolidation is a process by which soils decrease in volume. It occurs when stress is applied to a soil that causes the soil particles to pack together more tightly, therefore reducing volume. When this occurs in a soil that is saturated with water, water will be squeezed out of the soil. The time required to squeeze the water out of a thick deposit of clayey soil might be years. For a layer of sand, the water may be squeezed out in a matter of seconds. A building foundation or construction of a new embankment will cause the soil below to consolidate, and this will cause settlement, which in turn may cause distress to the building or embankment. Karl Terzaghi developed the theory of one-dimensional consolidation which enables prediction of the amount of settlement and the time required for the settlement to occur. Afterwards, Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of poroelasticity. Soils are tested with an oedometer test to determine their compression index and coefficient of consolidation.
When stress is removed from a consolidated soil, the soil will rebound, drawing water back into the pores and regaining some of the volume it had lost in the consolidation process. If the stress is reapplied, the soil will reconsolidate along a recompression curve, defined by the recompression index. Soil that has been consolidated to a large pressure and has been subsequently unloaded is considered to be overconsolidated. The maximum past vertical effective stress is termed the preconsolidation stress. A soil which is currently experiencing its maximum past vertical effective stress is said to be normally consolidated. The overconsolidation ratio (OCR) is the ratio of the maximum past vertical effective stress to the current vertical effective stress. The OCR is significant for two reasons: firstly, because the compressibility of normally consolidated soil is significantly larger than that for overconsolidated soil, and secondly, the shear behavior and dilatancy of clayey soil are related to the OCR through critical state soil mechanics; highly overconsolidated clayey soils are dilatant, while normally consolidated soils tend to be contractive.
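The OCR is a simple ratio, and the classical one-dimensional (virgin-compression) settlement formula, s = H·C_c/(1 + e_0)·log₁₀(σ'_f/σ'_0), ties the compression index measured in the oedometer to a settlement estimate. The layer thickness, void ratio, and stresses below are invented:

```python
import math

def ocr(max_past_stress, current_stress):
    """Overconsolidation ratio: max past vertical effective stress / current."""
    return max_past_stress / current_stress

def consolidation_settlement(H, e0, Cc, sigma0, sigma_f):
    """Textbook 1-D virgin-compression settlement:
    s = H * Cc / (1 + e0) * log10(sigma_f / sigma0)."""
    return H * Cc / (1 + e0) * math.log10(sigma_f / sigma0)

r = ocr(200.0, 100.0)   # 2.0 -> overconsolidated; r == 1 is normally consolidated
s = consolidation_settlement(H=4.0, e0=1.0, Cc=0.4,
                             sigma0=100.0, sigma_f=200.0)  # metres
```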
Shear behavior: stiffness and strength
The shear strength and stiffness of soil determine whether soil will be stable and how much it will deform. Knowledge of the strength is necessary to determine if a slope will be stable, if a building or bridge might settle too far into the ground, and the limiting pressures on a retaining wall. It is important to distinguish between failure of a soil element and the failure of a geotechnical structure (e.g., a building foundation, slope or retaining wall); some soil elements may reach their peak strength prior to failure of the structure. Different criteria can be used to define the "shear strength" and the "yield point" for a soil element from a stress–strain curve. One may define the peak shear strength as the peak of a stress–strain curve, or the shear strength at critical state as the value after large strains when the shear resistance levels off. If the stress–strain curve does not stabilize before the end of a shear strength test, the "strength" is sometimes considered to be the shear resistance at 15–20% strain. The shear strength of soil depends on many factors including the effective stress and the void ratio.
The shear stiffness is important, for example, for evaluation of the magnitude of deformations of foundations and slopes prior to failure and because it is related to the shear wave velocity. The slope of the initial, nearly linear, portion of a plot of shear stress as a function of shear strain is called the shear modulus, G.
Friction, interlocking and dilation
Soil is an assemblage of particles that have little to no cementation while rock (such as sandstone) may consist of an assembly of particles that are strongly cemented together by chemical bonds. The shear strength of soil is primarily due to interparticle friction and therefore, the shear resistance on a plane is approximately proportional to the effective normal stress on that plane. The angle of internal friction is thus closely related to the maximum stable slope angle, often called the angle of repose.
But in addition to friction, soil derives significant shear resistance from interlocking of grains. If the grains are densely packed, the grains tend to spread apart from each other as they are subject to shear strain. The expansion of the particle matrix due to shearing was called dilatancy by Osborne Reynolds. If one considers the energy required to shear an assembly of particles, there is energy input by the shear force, T, moving a distance, x, and there is also energy input by the normal force, N, as the sample expands a distance, y. Due to the extra energy required for the particles to dilate against the confining pressures, dilatant soils have a greater peak strength than contractive soils. Furthermore, as dilative soil grains dilate, they become looser (their void ratio increases), and their rate of dilation decreases until they reach a critical void ratio. Contractive soils become denser as they shear, and their rate of contraction decreases until they reach a critical void ratio.
The tendency for a soil to dilate or contract depends primarily on the confining pressure and the void ratio of the soil. The rate of dilation is high if the confining pressure is small and the void ratio is small. The rate of contraction is high if the confining pressure is large and the void ratio is large. As a first approximation, the regions of contraction and dilation are separated by the critical state line.
Failure criteria
After a soil reaches the critical state, it is no longer contracting or dilating, and the shear stress on the failure plane τ is determined by the effective normal stress on the failure plane σ'_n and the critical state friction angle φ'_cv:
τ = σ'_n tan φ'_cv
The peak strength of the soil may be greater, however, due to the interlocking (dilatancy) contribution.
This may be stated:
τ_peak = σ'_n tan φ'_peak
where φ'_peak > φ'_cv. However, use of a friction angle greater than the critical state value for design requires care. The peak strength will not be mobilized everywhere at the same time in a practical problem such as a foundation, slope or retaining wall. The critical state friction angle is not nearly as variable as the peak friction angle and hence it can be relied upon with confidence.
Not recognizing the significance of dilatancy, Coulomb proposed that the shear strength of soil may be expressed as a combination of adhesion and friction components:
τ = c' + σ'_n tan φ'
It is now known that the c' and φ' parameters in the last equation are not fundamental soil properties (Terzaghi, K., Peck, R.B., Mesri, G. (1996) Soil Mechanics in Engineering Practice, Third Edition, John Wiley & Sons, Inc.). In particular, c' and φ' are different depending on the magnitude of effective stress. According to Schofield (2006), the longstanding use of c' in practice has led many engineers to wrongly believe that c' is a fundamental parameter. This assumption that c' and φ' are constant can lead to overestimation of peak strengths.
Structure, fabric, and chemistry
In addition to the friction and interlocking (dilatancy) components of strength, the structure and fabric also play a significant role in the soil behavior. The structure and fabric include factors such as the spacing and arrangement of the solid particles or the amount and spatial distribution of pore water; in some cases cementitious material accumulates at particle-particle contacts. Mechanical behavior of soil is affected by the density of the particles and their structure or arrangement as well as the amount and spatial distribution of fluids present (e.g., water and air voids). Other factors include the electrical charge of the particles, chemistry of pore water, and chemical bonds (i.e., cementation – particles connected through a solid substance such as recrystallized calcium carbonate).
Drained and undrained shear
The presence of nearly incompressible fluids such as water in the pore spaces affects the ability for the pores to dilate or contract.
If the pores are saturated with water, water must be sucked into the dilating pore spaces to fill the expanding pores (this phenomenon is visible at the beach when apparently dry spots form around feet that press into the wet sand).
Similarly, for contractive soil, water must be squeezed out of the pore spaces to allow contraction to take place.
Dilation of the voids causes negative water pressures that draw fluid into the pores, and contraction of the voids causes positive pore pressures to push the water out of the pores. If the rate of shearing is very large compared to the rate at which water can be sucked into or squeezed out of the dilating or contracting pore spaces, then the shearing is called undrained shear; if the shearing is slow enough that the water pressures are negligible, the shearing is called drained shear. During undrained shear, the water pressure u changes depending on volume change tendencies. From the effective stress equation, the change in u directly affects the effective stress through the equation σ' = σ − u,
and the strength is very sensitive to the effective stress. It follows then that the undrained shear strength of a soil may be smaller or larger than the drained shear strength depending upon whether the soil is contractive or dilative.
Shear tests
Strength parameters can be measured in the laboratory using direct shear test, triaxial shear test, simple shear test, fall cone test and (hand) shear vane test; there are numerous other devices and variations on these devices used in practice today. Tests conducted to characterize the strength and stiffness of the soils in the ground include the Cone penetration test and the Standard penetration test.
Other factors
The stress–strain relationship of soils, and therefore the shearing strength, is affected by:
soil composition (basic soil material): mineralogy, grain size and grain size distribution, shape of particles, pore fluid type and content, ions on grain and in pore fluid.
state (initial): Defined by the initial void ratio, effective normal stress and shear stress (stress history). State can be described by terms such as: loose, dense, overconsolidated, normally consolidated, stiff, soft, contractive, dilative, etc.
structure: Refers to the arrangement of particles within the soil mass; the manner in which the particles are packed or distributed. Features such as layers, joints, fissures, slickensides, voids, pockets, cementation, etc., are part of the structure. Structure of soils is described by terms such as: undisturbed, disturbed, remolded, compacted, cemented; flocculent, honey-combed, single-grained; flocculated, deflocculated; stratified, layered, laminated; isotropic and anisotropic.
Loading conditions: Effective stress path – drained, undrained; and type of loading – magnitude, rate (static, dynamic), and time history (monotonic, cyclic).
Applications
Lateral earth pressure
Lateral earth stress theory is used to estimate the amount of stress soil can exert perpendicular to gravity. This is the stress exerted on retaining walls. A lateral earth stress coefficient, K, is defined as the ratio of lateral (horizontal) effective stress to vertical effective stress for cohesionless soils (K=σ'h/σ'v). There are three coefficients: at-rest, active, and passive. At-rest stress is the lateral stress in the ground before any disturbance takes place. The active stress state is reached when a wall moves away from the soil under the influence of lateral stress, and results from shear failure due to reduction of lateral stress. The passive stress state is reached when a wall is pushed into the soil far enough to cause shear failure within the mass due to increase of lateral stress. There are many theories for estimating lateral earth stress; some are empirically based, and some are analytically derived.
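For cohesionless soils, common closed-form expressions for the three coefficients are Rankine's Ka = tan²(45° − φ/2) and Kp = tan²(45° + φ/2), and Jaky's empirical at-rest relation K0 ≈ 1 − sin φ. These are standard simplifications (level backfill, no wall friction), sketched here with an illustrative friction angle:

```python
import math

def k_active(phi_deg):
    """Rankine active coefficient: Ka = tan^2(45 - phi/2)."""
    return math.tan(math.radians(45 - phi_deg / 2)) ** 2

def k_passive(phi_deg):
    """Rankine passive coefficient: Kp = tan^2(45 + phi/2)."""
    return math.tan(math.radians(45 + phi_deg / 2)) ** 2

def k_at_rest(phi_deg):
    """Jaky's empirical at-rest coefficient: K0 ~ 1 - sin(phi)."""
    return 1 - math.sin(math.radians(phi_deg))

# For phi = 30 degrees: Ka = 1/3, K0 = 0.5, Kp = 3, so Ka < K0 < Kp.
ka, k0, kp = k_active(30.0), k_at_rest(30.0), k_passive(30.0)
```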
Bearing capacity
The bearing capacity of soil is the average contact stress between a foundation and the soil which will cause shear failure in the soil. Allowable bearing stress is the bearing capacity divided by a factor of safety. Sometimes, on soft soil sites, large settlements may occur under loaded foundations without actual shear failure occurring; in such cases, the allowable bearing stress is determined with regard to the maximum allowable settlement. It is important during construction and design stage of a project to evaluate the subgrade strength. The California Bearing Ratio (CBR) test is commonly used to determine the suitability of a soil as a subgrade for design and construction. The field Plate Load Test is commonly used to predict the deformations and failure characteristics of the soil/subgrade and modulus of subgrade reaction (ks). The Modulus of subgrade reaction (ks) is used in foundation design, soil-structure interaction studies and design of highway pavements.
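A common closed-form route to the ultimate bearing capacity is the classical Terzaghi-form bearing-capacity factors; the Nγ expression below is Vesic's, one of several published forms, and the friction angle and factor of safety are illustrative:

```python
import math

def terzaghi_bearing_factors(phi_deg):
    """Classical bearing-capacity factors Nc, Nq, Ngamma.
    Nq = e^(pi tan phi) tan^2(45 + phi/2); Nc = (Nq - 1) cot phi;
    Ngamma here uses Vesic's form 2 (Nq + 1) tan phi."""
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.radians(45 + phi_deg / 2)) ** 2
    nc = (nq - 1) / math.tan(phi)
    ngamma = 2 * (nq + 1) * math.tan(phi)
    return nc, nq, ngamma

def allowable_bearing(q_ultimate, factor_of_safety=3.0):
    """Allowable bearing stress = bearing capacity / factor of safety."""
    return q_ultimate / factor_of_safety

nc, nq, ngamma = terzaghi_bearing_factors(30.0)  # ~30.1, ~18.4, ~22.4
q_allow = allowable_bearing(300.0)               # 100 kPa with FS = 3
```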
Slope stability
The field of slope stability encompasses the analysis of static and dynamic stability of slopes of earth and rock-fill dams, slopes of other types of embankments, excavated slopes, and natural slopes in soil and soft rock.
As seen to the right, earthen slopes can develop a cut-spherical weakness zone. The probability of this happening can be calculated in advance using a simple 2-D circular analysis package. A primary difficulty with analysis is locating the most-probable slip plane for any given situation. Many landslides have been analyzed only after the fact. Landslides vs. Rock strength are two factors for consideration.
Recent developments
A recent finding in soil mechanics is that soil deformation can be described as the behavior of a dynamical system. This approach to soil mechanics is referred to as Dynamical Systems based Soil Mechanics (DSSM). DSSM holds simply that soil deformation is a Poisson process in which particles move to their final position at random shear strains.
The basis of DSSM is that soils (including sands) can be sheared until they reach a steady-state condition at which, under conditions of constant strain rate, there is no change in shear stress, effective confining stress, or void ratio. The steady state was formally defined by Steve J. Poulos, an associate professor in the Soil Mechanics Department of Harvard University, building on a hypothesis that Arthur Casagrande was formulating towards the end of his career. The steady-state condition is not the same as the "critical state" condition. It differs from the critical state in that it specifies a statistically constant structure at the steady state. The steady-state values are also very slightly dependent on the strain rate.
Many systems in nature reach steady states, and dynamical systems theory describes such systems. Soil shear can also be described as a dynamical system. The physical basis of the soil shear dynamical system is a Poisson process in which particles move to the steady-state at random shear strains. Joseph generalized this—particles move to their final position (not just steady-state) at random shear-strains. Because of its origins in the steady state concept, DSSM is sometimes informally called "Harvard soil mechanics."
DSSM provides very close fits to stress–strain curves, including for sands. Because it tracks conditions on the failure plane, it also provides close fits for the post-failure region of sensitive clays and silts, something that other theories are not able to do. Additionally, DSSM explains key relationships in soil mechanics that to date have simply been taken for granted: for example, why normalized undrained peak shear strengths vary with the log of the overconsolidation ratio, why stress–strain curves normalize with the initial effective confining stress, why in one-dimensional consolidation the void ratio must vary with the log of the effective vertical stress, why the end-of-primary curve is unique for static load increments, and why the ratio of the creep value Cα to the compression index Cc must be approximately constant for a wide range of soils.
| Physical sciences | Soil mechanics | Physics |
1277350 | https://en.wikipedia.org/wiki/Names%20of%20the%20days%20of%20the%20week | Names of the days of the week | In many languages, the names given to the seven days of the week are derived from the names of the classical planets in Hellenistic astronomy, which were in turn named after contemporary deities, a system introduced by the Sumerians and later adopted by the Babylonians from whom the Roman Empire adopted the system during late antiquity. In some other languages, the days are named after corresponding deities of the regional culture, beginning either with Sunday or with Monday. The seven-day week was adopted in early Christianity from the Hebrew calendar, and gradually replaced the Roman internundinum.
Sunday remained the first day of the week, being considered the day of the sun god Sol Invictus and the Lord's Day, while the Jewish Sabbath remained the seventh.
The Babylonians invented the actual seven-day week in 600 BCE, with Emperor Constantine making the Day of the Sun (, "Sunday") a legal holiday centuries later.
In the international standard ISO 8601, Monday is treated as the first day of the week, but in many countries it is counted as the second day of the week.
Days named after planets
Greco-Roman tradition
Between the first and third centuries CE, the Roman Empire gradually replaced the eight-day Roman nundinal cycle with the seven-day week. The earliest evidence for this new system is a Pompeiian graffito referring to 6 February (ante diem viii idus Februarias) of the year 60 CE as dies solis ("Sunday"). Another early witness is a reference to a lost treatise by Plutarch, written in about 100 CE, which addressed the question of: "Why are the days named after the planets reckoned in a different order from the 'actual' order?" The treatise is lost, but the answer to the question is known; see planetary hours.
The Ptolemaic system of planetary spheres asserts that the order of the heavenly bodies from the farthest to the closest to the Earth is Saturn, Jupiter, Mars, Sun, Venus, Mercury, and the Moon; objectively, the planets are ordered from slowest to fastest moving as they appear in the night sky.
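The answer to Plutarch's question can be shown in a few lines: each of the 24 hours of a day is ruled by the next planet in the Chaldean (slowest-to-fastest) cycle, and a day is named for the ruler of its first hour. Stepping 24 hours, i.e. 24 mod 7 = 3 planets, from one day to the next produces the familiar weekday order:

```python
# Chaldean order: slowest- to fastest-moving as seen from Earth.
chaldean = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

# Ruler of day d = ruler of its first hour = 24*d hours into the cycle.
day_rulers = [chaldean[(24 * day) % 7] for day in range(7)]
# -> Saturn, Sun, Moon, Mars, Mercury, Jupiter, Venus
```

Starting the count from Saturn, this reproduces Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday, Friday.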
The days were named after the classical planets of Hellenistic astrology, in the order: Sun (Helios), Moon (Selene), Mars (Ares), Mercury (Hermes), Jupiter (Zeus), Venus (Aphrodite), and Saturn (Cronus).
The seven-day week spread throughout the Roman Empire in late antiquity.
By the fourth century CE, it was in wide use throughout the Empire.
The Greek and Latin names are as follows:
Romance languages
Except in Portuguese and Mirandese, the Romance languages preserved the Latin names, except that the name of Sunday was replaced by [dies] Dominicus (Dominica), that is, "the Lord's Day", and Saturday was named for the Jewish Sabbath. Mirandese and Portuguese use numbered weekdays, but retain sábado and demingo/domingo for the weekend. Meanwhile, Galician occasionally uses numbered weekdays alongside the traditional Latin-derived names, albeit to a lesser extent (see below).
Celtic languages
Early Old Irish adopted the names from Latin, but introduced separate terms of Norse origin for Wednesday, Thursday and Friday, then later supplanted these with terms relating to church fasting practices.
Albanian language
Albanian adopted the Latin terms for Tuesday, Wednesday and Saturday, translated the Latin terms for Sunday and Monday using the native names of Diell and Hënë, respectively, and replaced the Latin terms for Thursday and Friday with the equivalent native deity names Enji and Prende, respectively.
Adoptions from Romance
Other languages adopted the week together with the Latin (Romance) names for the days of the week in the colonial period. Several constructed languages also adopted the Latin terminology.
With the exception of sabato, the Esperanto names are all from French, cf. French dimanche, lundi, mardi, mercredi, jeudi, vendredi.
Germanic tradition
The Germanic peoples adapted the system introduced by the Romans by substituting the Germanic deities for the Roman ones (with the exception of Saturday) in a process known as interpretatio germanica.
The date of the introduction of this system is not known exactly, but it must have happened later than 100 AD but before the introduction of Christianity during the 6th to 7th centuries, i.e., during the final phase or soon after the collapse of the Western Roman Empire. This period is later than the Common Germanic stage, but still during the phase of undifferentiated West Germanic. The names of the days of the week in North Germanic languages were not calqued from Latin directly, but taken from the West Germanic names.
Sunday: Old English Sunnandæg, meaning "sun's day". This is a translation of the Latin phrase diēs Sōlis. English, like most of the Germanic languages, preserves the day's association with the sun. Many other European languages, including all of the Romance languages, have changed its name to the equivalent of "the Lord's day" (based on Ecclesiastical Latin diēs Dominica). In both West Germanic and North Germanic mythology, the Sun is personified as Sunna/Sól.
Monday: Old English Mōnandæg, meaning "Moon's day". This is equivalent to the Latin name diēs Lūnae. In North Germanic mythology, the Moon is personified as Máni.
Tuesday: Old English Tīwesdæg, meaning "Tiw's day". Tiw (Norse Týr) was a one-handed god associated with single combat and pledges in Norse mythology and also attested prominently in wider Germanic paganism. The name of the day is also related to the Latin name diēs Mārtis, "Day of Mars" (the Roman god of war).
Wednesday: Old English Wōdnesdæg, meaning the day of the Germanic god Woden (known as Óðinn among the North Germanic peoples), a prominent god of the Anglo-Saxons (and other Germanic peoples) in England until about the seventh century. This corresponds to the Latin counterpart diēs Mercuriī, "Day of Mercury", as both are deities of magic and knowledge. The German Mittwoch, the Low German Middeweek, the miðviku- in Icelandic miðvikudagur, and the Finnish keskiviikko all mean "mid-week".
Thursday: Old English Þūnresdæg, meaning "Þunor's day". Þunor means thunder or its personification, the Norse god known in Modern English as Thor. Similarly Dutch donderdag, German Donnerstag ("thunder's day"), Finnish torstai, and Scandinavian torsdag ("Thor's day"). "Thor's day" corresponds to Latin diēs Iovis, "day of Jupiter" (the Roman god of thunder).
Friday: Old English Frīgedæg, meaning the day of the Anglo-Saxon goddess Frig. The Norse name for the planet Venus was Friggjarstjarna, "Frigg's star". It is based on the Latin diēs Veneris, "Day of Venus".
Saturday: named after the Roman god Saturn, associated with the Titan Cronus, father of Zeus and many Olympians. Its original Anglo-Saxon rendering was Sæternesdæg. In Latin, it was diēs Sāturnī, "Day of Saturn". The Nordic laugardagur, leygardagur, laurdag, etc. deviate significantly as they have no reference to either the Norse or the Roman pantheon; they derive from Old Norse laugardagr, literally "washing-day". The German Sonnabend (mainly used in northern and eastern Germany) and the Low German Sünnavend mean "Sunday Eve"; the German word Samstag derives from the name for Shabbat.
Adoptions from Germanic
Hindu tradition
Hindu astrology uses the concept of days under the regency of a planet under the term vāsara/vāra, the days of the week being called sūrya-/ravi-, chandra-/soma-, maṅgala-, budha-, guru-/bṛhaspati-, śukra-, and śani-vāsara. śukrá is a name of Venus (regarded as a son of Bhṛgu); guru is here a title of Bṛhaspati, and hence of Jupiter; budha "Mercury" is regarded as a son of Soma, that is, the Moon. Knowledge of Greek astrology existed since about the 2nd century BC, but references to the vāsara occur somewhat later, during the Gupta period (Yājñavalkya Smṛti, c. 3rd to 5th century AD), that is, at roughly the same period or before the system was introduced in the Roman Empire.
In languages of the Indian subcontinent
Southeast Asian languages
The Southeast Asian tradition also uses the Hindu names of the days of the week, the days being called āditya-, soma-, maṅgala-, budha-, guru-, śukra-, and śani-vāra; the planetary identifications are the same as in the Hindu tradition described above.
Northeast Asian languages
East Asian tradition
The East Asian naming system for the days of the week closely parallels that of the Latin system and is ordered after the "Seven Luminaries" (七曜 qī yào), which consists of the Sun, Moon and the five planets visible to the naked eye.
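The Seven Luminaries assignment runs in the same order as the Latin planetary week. The mapping can be tabulated directly (the third column gives the modern Japanese day names, which still use this system):

```python
# Day -> (luminary character, celestial body, modern Japanese day name)
seven_luminaries = {
    "Sunday":    ("日", "Sun",     "日曜日"),
    "Monday":    ("月", "Moon",    "月曜日"),
    "Tuesday":   ("火", "Mars",    "火曜日"),
    "Wednesday": ("水", "Mercury", "水曜日"),
    "Thursday":  ("木", "Jupiter", "木曜日"),
    "Friday":    ("金", "Venus",   "金曜日"),
    "Saturday":  ("土", "Saturn",  "土曜日"),
}
```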
The Chinese had apparently adopted the seven-day week from the Hellenistic system by the 4th century AD, although by which route is not entirely clear. It was again transmitted to China in the 8th century AD by Manichaeans, via the country of Kang (a Central Asian polity near Samarkand).
The 4th-century AD date, according to the Cihai encyclopedia, is due to a reference to Fan Ning (范寧), an astrologer of the Jin dynasty. The renewed adoption from Manichaeans in the 8th century AD (Tang dynasty) is documented with the writings of the Chinese Buddhist monk Yijing and the Ceylonese Buddhist monk Bu Kong.
The Chinese transliteration of the planetary system was soon brought to Japan by the Japanese monk Kobo Daishi; surviving diaries of the Japanese statesman Fujiwara no Michinaga show the seven-day system in use in Heian-period Japan as early as 1007. In Japan, the seven-day system was kept in use (for astrological purposes) until its promotion to a full-fledged (Western-style) calendrical basis during the Meiji era. In China, since the founding of the Republic of China in 1911, Monday through Saturday have been referred to by number, implicitly preserving the order of the luminaries.
Pronunciations for Classical Chinese names are given in Standard Chinese.
Numbered days of the week
Days numbered from Monday
The international standard ISO 8601 prescribes Monday as the first day of the week, a convention widely followed in software date formats.
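Python's standard library, for example, follows this ISO convention in its datetime module; a small illustration:

```python
from datetime import date

# Python's datetime module follows the ISO 8601 convention:
# isoweekday() numbers Monday as 1 through Sunday as 7,
# while the older weekday() counts from 0 for Monday.
d = date(2024, 1, 1)  # 1 January 2024 was a Monday
print(d.isoweekday())   # 1  (ISO: Monday is day 1)
print(d.weekday())      # 0  (zero-based Monday)
print(d.isocalendar())  # ISO year, ISO week number, and ISO weekday
```

The `isocalendar()` method exposes the full ISO week-date triple, which is what ISO 8601-aware date formats are built on.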
The Slavic, Baltic and Uralic languages (except Finnish and partially Estonian and Võro) adopted numbering but took Monday rather than Sunday as the "first day". This convention is also found in some Austronesian languages whose speakers were converted to Christianity by European missionaries.
In Slavic languages, some of the names correspond to numerals counted after Sunday: compare Russian vtornik (вторник) "Tuesday" and vtoroj (второй) "the second", chetverg (четверг) "Thursday" and chetvjortyj (четвёртый) "the fourth", pyatnitsa (пятница) "Friday" and pyatyj (пятый) "the fifth".
Thermogravimetric analysis

Thermogravimetric analysis or thermal gravimetric analysis (TGA) is a method of thermal analysis in which the mass of a sample is measured over time as the temperature changes. This measurement provides information about physical phenomena, such as phase transitions, absorption, adsorption and desorption, as well as chemical phenomena including chemisorption, thermal decomposition, and solid-gas reactions (e.g., oxidation or reduction).
Thermogravimetric analyzer
Thermogravimetric analysis (TGA) is conducted on an instrument referred to as a thermogravimetric analyzer. A thermogravimetric analyzer continuously measures mass while the temperature of a sample is changed over time. Mass, temperature, and time are considered base measurements in thermogravimetric analysis while many additional measures may be derived from these three base measurements.
A typical thermogravimetric analyzer consists of a precision balance with a sample pan located inside a furnace with a programmable control temperature. The temperature is generally increased at constant rate (or for some applications the temperature is controlled for a constant mass loss) to incur a thermal reaction. The thermal reaction may occur under a variety of atmospheres including: ambient air, vacuum, inert gas, oxidizing/reducing gases, corrosive gases, carburizing gases, vapors of liquids or "self-generated atmosphere"; as well as a variety of pressures including: a high vacuum, high pressure, constant pressure, or a controlled pressure.
The thermogravimetric data collected from a thermal reaction is compiled into a plot of mass or percentage of initial mass on the y axis versus either temperature or time on the x-axis. This plot, which is often smoothed, is referred to as a TGA curve. The first derivative of the TGA curve (the DTG curve) may be plotted to determine inflection points useful for in-depth interpretations as well as differential thermal analysis.
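As a sketch of how a DTG curve is obtained from a TGA curve, the following uses a synthetic (hypothetical) sigmoidal mass-loss step rather than real instrument data, and takes the first derivative numerically with NumPy:

```python
import numpy as np

# Sketch: derive a DTG curve (dm/dT) from synthetic TGA data.
# The sigmoidal mass-loss step centred at 300 °C is illustrative only.
T = np.linspace(25, 600, 500)                    # temperature, °C
mass = 100 - 40 / (1 + np.exp(-(T - 300) / 15))  # % of initial mass

dtg = np.gradient(mass, T)  # first derivative dm/dT, %/°C
# The inflection point of the mass-loss step appears as the DTG minimum.
T_peak = T[np.argmin(dtg)]
print(f"DTG peak temperature: {T_peak:.0f} °C")
```

The DTG minimum locates the inflection point of the mass-loss step, which is the feature used for in-depth interpretation of overlapping decomposition events.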
A TGA can be used for materials characterization through analysis of characteristic decomposition patterns. It is an especially useful technique for the study of polymeric materials, including thermoplastics, thermosets, elastomers, composites, plastic films, fibers, coatings, paints, and fuels.
Types of TGA
There are three types of thermogravimetry:
Isothermal or static thermogravimetry: In this technique, the sample weight is recorded as a function of time at a constant temperature.
Quasistatic thermogravimetry: In this technique, the sample temperature is raised in sequential steps separated by isothermal intervals, during which the sample mass reaches stability before the start of the next temperature ramp.
Dynamic thermogravimetry: In this technique, the sample is heated in an environment whose temperature is changed in a linear manner.
Applications
Thermal stability
TGA can be used to evaluate the thermal stability of a material. In a desired temperature range, if a species is thermally stable, there will be no observed mass change. Negligible mass loss corresponds to little or no slope in the TGA trace. TGA also gives the upper use temperature of a material. Beyond this temperature the material will begin to degrade.
TGA is used in the analysis of polymers. Polymers usually melt before they decompose, thus TGA is mainly used to investigate the thermal stability of polymers. Most polymers melt or degrade before 200 °C. However, there is a class of thermally stable polymers that are able to withstand temperatures of at least 300 °C in air and 500 °C in inert gases without structural changes or strength loss, which can be analyzed by TGA.
Oxidation and combustion
The simplest materials characterization is the residue remaining after a reaction. For example, a combustion reaction could be tested by loading a sample into a thermogravimetric analyzer at normal conditions. The thermogravimetric analyzer would induce combustion in the sample by heating it beyond its ignition temperature. The resultant TGA curve, plotted with the y-axis as a percentage of initial mass, would show the residue at the final point of the curve.
Oxidative mass losses are the most common observable losses in TGA.
Studying the resistance to oxidation in copper alloys is very important. For example, NASA (National Aeronautics and Space Administration) is conducting research on advanced copper alloys for their possible use in combustion engines. However, oxidative degradation can occur in these alloys as copper oxides form in atmospheres that are rich in oxygen. Resistance to oxidation is significant because NASA wants to be able to reuse shuttle materials. TGA can be used to study the static oxidation of materials such as these for practical use.
Combustion during TG analysis is identifiable by the distinct traces it leaves in TGA thermograms. One interesting example occurs with samples of as-produced, unpurified carbon nanotubes that contain a large amount of metal catalyst. Due to combustion, a TGA trace can deviate from the normal form of a well-behaved function. This phenomenon arises from a rapid temperature change. When the weight and temperature are plotted versus time, a dramatic slope change in the first-derivative plot is concurrent with the mass loss of the sample and the sudden increase in temperature seen by the thermocouple. Beyond the oxidation of carbon, the mass loss can result from particles of smoke released by burning caused by inconsistencies in the material itself, reflected in the poorly controlled weight loss.
Different weight losses on the same sample at different points can also be used as a diagnosis of the sample's anisotropy. For instance, sampling the top side and the bottom side of a sample with dispersed particles inside can be useful to detect sedimentation, as thermograms will not overlap but will show a gap between them if the particle distribution is different from side to side.
Thermogravimetric kinetics
Thermogravimetric kinetics may be explored for insight into the reaction mechanisms of thermal (catalytic or non-catalytic) decomposition involved in the pyrolysis and combustion processes of different materials.
Activation energies of the decomposition process can be calculated using the Kissinger method.
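A minimal numerical sketch of the Kissinger method follows. The activation energy Ea is taken from the slope of ln(beta/Tp^2) versus 1/Tp, where Tp is the DTG peak temperature at heating rate beta; all parameter values below are assumptions, chosen only to generate self-consistent synthetic peak temperatures:

```python
import numpy as np

R = 8.314    # gas constant, J/(mol K)
Ea = 150e3   # assumed activation energy, J/mol (for generating data)
A = 1e12     # assumed pre-exponential factor, 1/s

beta = np.array([5.0, 10.0, 20.0, 40.0])  # heating rates, K/min

# Generate peak temperatures consistent with the Kissinger equation
# ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp) by fixed-point iteration.
C = np.log(A * R / Ea)
Tp = np.full_like(beta, 600.0)
for _ in range(50):
    Tp = (Ea / R) / (C - np.log(beta) + 2 * np.log(Tp))

# Kissinger plot: slope = -Ea/R, so Ea is recovered from a linear fit.
slope, _ = np.polyfit(1 / Tp, np.log(beta / Tp**2), 1)
Ea_fit = -slope * R
print(f"recovered Ea = {Ea_fit / 1e3:.1f} kJ/mol")
```

Because the Kissinger relation is exactly linear in 1/Tp, the fit recovers the assumed activation energy; with real data the scatter of the points about the line indicates how well the single-step kinetic model holds.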
Though a constant heating rate is more common, a constant mass loss rate can illuminate specific reaction kinetics. For example, the kinetic parameters of the carbonization of polyvinyl butyral were found using a constant mass loss rate of 0.2 wt %/min.
Operation in combination with other instruments
Thermogravimetric analysis is often combined with other processes or used in conjunction with other analytical methods.
For example, the TGA instrument continuously weighs a sample as it is heated to temperatures of up to 2000 °C for coupling with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry gas analysis. As the temperature increases, various components of the sample are decomposed and the weight percentage of each resulting mass change can be measured.
Related rates

In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule, since most problems involve several variables.
Fundamentally, if a function F is defined such that F = f(x), then the derivative of the function can be taken with respect to another variable. We assume x is a function of t, i.e. x = g(t). Then F = f(g(t)), so

dF/dt = f'(g(t)) · g'(t)
Written in Leibniz notation, this is:

dF/dt = (dF/dx) · (dx/dt)
Thus, if it is known how F changes with respect to x, then we can determine how F changes with respect to t and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc.
For example, if F = f(x) = x^2, then dF/dt = 2x · (dx/dt).
Procedure
The most common way to approach related rates problems is the following:
Identify the known variables, including rates of change and the rate of change that is to be found. (Drawing a picture or representation of the problem can help to keep everything in order)
Construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found.
Differentiate both sides of the equation with respect to time (or other rate of change). Often, the chain rule is employed at this step.
Substitute the known rates of change and the known quantities into the equation.
Solve for the wanted rate of change.
Errors in this procedure are often caused by plugging in the known values for the variables before (rather than after) finding the derivative with respect to time. Doing so will yield an incorrect result, since if those values are substituted for the variables before differentiation, those variables will become constants; and when the equation is differentiated, zeroes appear in places of all variables for which the values were plugged in.
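The pitfall described above can be demonstrated symbolically. This sketch (assuming SymPy is available) uses the relation x^2 + y^2 = h^2 with h = 10 held constant, and the illustrative values x = 6, dx/dt = 3, y = 8:

```python
import sympy as sp

# Relation x(t)^2 + y(t)^2 = h^2 with h = 10 constant.
t = sp.symbols('t')
x = sp.Function('x')(t)
y = sp.Function('y')(t)

# Correct order: differentiate both sides with respect to t FIRST.
deq = sp.Eq(sp.diff(x**2 + y**2, t), 0)   # 2*x*x' + 2*y*y' = 0

# Solve for the wanted rate dy/dt, THEN substitute the known values.
dydt = sp.solve(deq, sp.diff(y, t))[0]    # -x*x'/y
val = dydt.subs(sp.diff(x, t), 3).subs({x: 6, y: 8})
print(val)  # -9/4

# Wrong order: substituting x = 6 before differentiating would give
# d/dt(36 + y^2) = 0, i.e. dy/dt = 0 -- the dx/dt contribution is lost,
# exactly the error described above.
```

Substituting first turns x into the constant 36 under the derivative, so its rate of change silently vanishes; differentiating first keeps the dx/dt term alive until the numbers go in.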
Example
A 10-meter ladder is leaning against the wall of a building, and the base of the ladder is sliding away from the building at a rate of 3 meters per second. How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall?
The distance between the base of the ladder and the wall, x, and the height of the ladder on the wall, y, represent the sides of a right triangle with the ladder as the hypotenuse, h. The objective is to find dy/dt, the rate of change of y with respect to time, t, when h, x and dx/dt, the rate of change of x, are known.
Step 1:
Step 2:
From the Pythagorean theorem, the equation

x^2 + y^2 = h^2

describes the relationship between x, y and h for a right triangle. Differentiating both sides of this equation with respect to time, t, yields

2x (dx/dt) + 2y (dy/dt) = 2h (dh/dt)
Step 3:
Solving for the wanted rate of change, dy/dt, and using dh/dt = 0 (the ladder's length h is constant) gives us

dy/dt = -(x/y) (dx/dt)
Step 4 & 5:
Using the variables from step 1 gives us:

dy/dt = -(6/y)(3)
Solving for y using the Pythagorean theorem gives:

y = √(h^2 - x^2) = √(10^2 - 6^2) = 8
Plugging in y = 8 gives:

dy/dt = -(6/8)(3) = -2.25
It is generally assumed that negative values represent the downward direction. Thus, the top of the ladder is sliding down the wall at a rate of 2.25 meters per second.
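The ladder result can be checked numerically. This sketch uses an assumed parametrization x(t) = 6 + 3t (so that t = 0 is the moment of interest) and compares a central finite-difference estimate of dy/dt against the analytic value:

```python
import math

# Ladder example: x(t) = 6 + 3t (base sliding out at 3 m/s),
# y(t) = sqrt(100 - x(t)^2) from the 10 m ladder as hypotenuse.
def y_of(t):
    x = 6 + 3 * t
    return math.sqrt(100 - x * x)

h = 1e-6
dydt = (y_of(h) - y_of(-h)) / (2 * h)  # central difference at t = 0
print(round(dydt, 4))  # -2.25
```

The finite-difference estimate agrees with the analytic -(6/8)(3) = -2.25 m/s.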
Physics examples
Because one physical quantity often depends on another, which in turn depends on others, such as time, related-rates methods have broad applications in physics. This section presents examples of related rates in kinematics and in electromagnetic induction.
Relative kinematics of two vehicles
For example, one can consider the kinematics problem where one vehicle is heading West toward an intersection at 80 miles per hour while another is heading North away from the intersection at 60 miles per hour. One can ask whether the vehicles are getting closer or further apart and at what rate at the moment when the North bound vehicle is 3 miles North of the intersection and the West bound vehicle is 4 miles East of the intersection.
Big idea: use chain rule to compute rate of change of distance between two vehicles.
Plan:
Choose coordinate system
Identify variables
Draw picture
Express c in terms of x and y via Pythagorean theorem
Express dc/dt using chain rule in terms of dx/dt and dy/dt
Substitute in x, y, dx/dt, dy/dt
Simplify.
Choose coordinate system:
Let the y-axis point North and the x-axis point East.
Identify variables:
Define y(t) to be the distance of the vehicle heading North from the origin and x(t) to be the distance of the vehicle heading West from the origin.
Express c in terms of x and y via the Pythagorean theorem:

c = √(x^2 + y^2)
Express dc/dt using the chain rule in terms of dx/dt and dy/dt:

dc/dt = (x (dx/dt) + y (dy/dt)) / √(x^2 + y^2)
Substitute in x = 4 mi, y = 3 mi, dx/dt = -80 mi/hr, dy/dt = 60 mi/hr and simplify:

dc/dt = (4 · (-80) + 3 · 60) / √(4^2 + 3^2) = (-320 + 180) / 5 = -28 mi/hr
Consequently, the two vehicles are getting closer together at a rate of 28 mi/hr.
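The arithmetic can be verified directly; a small sketch under the stated values (x = 4 mi, y = 3 mi, dx/dt = -80 mi/hr, dy/dt = 60 mi/hr):

```python
import math

# Two-vehicle example: chain rule applied to c = sqrt(x^2 + y^2)
# gives dc/dt = (x*dx/dt + y*dy/dt) / c.
x, y = 4.0, 3.0          # miles
dxdt, dydt = -80.0, 60.0  # mi/hr

c = math.hypot(x, y)                # distance between vehicles, 5 mi
dcdt = (x * dxdt + y * dydt) / c    # rate of change of that distance
print(c, dcdt)  # 5.0 -28.0
```

The negative sign confirms the distance is decreasing: the vehicles are closing at 28 mi/hr.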
Electromagnetic induction of conducting loop spinning in magnetic field
The magnetic flux through a loop of area A whose normal is at an angle θ to a magnetic field of strength B is

Φ = B A cos(θ)
Faraday's law of electromagnetic induction states that the induced electromotive force is the negative rate of change of magnetic flux through a conducting loop.
If the loop area A and magnetic field B are held constant, but the loop is rotated so that the angle θ is a known function of time, the rate of change of θ can be related to the rate of change of Φ (and therefore the electromotive force) by taking the time derivative of the flux relation:

EMF = -dΦ/dt = B A sin(θ) (dθ/dt)
If, for example, the loop is rotating at a constant angular velocity ω, so that θ = ωt, then

EMF = ω B A sin(ωt)
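This relation can be checked numerically: with flux Φ(t) = B A cos(ωt), a finite-difference estimate of -dΦ/dt should match ω A B sin(ωt). The field, area, and angular velocity below are arbitrary assumed values:

```python
import math

B, A, omega = 0.5, 0.2, 3.0   # tesla, m^2, rad/s (assumed values)

def flux(t):
    # Magnetic flux through the rotating loop: Phi = B*A*cos(omega*t)
    return B * A * math.cos(omega * t)

t = 0.7    # arbitrary instant
h = 1e-6
emf_numeric = -(flux(t + h) - flux(t - h)) / (2 * h)  # -dPhi/dt
emf_analytic = omega * A * B * math.sin(omega * t)    # omega*A*B*sin(omega*t)
print(abs(emf_numeric - emf_analytic) < 1e-6)  # True
```

The agreement confirms that the induced EMF oscillates sinusoidally, peaking when the loop's normal is perpendicular to the field (sin ωt = ±1).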
Cadaver

A cadaver, often known as a corpse, is a dead human body. Cadavers are used by medical students, physicians and other scientists to study anatomy, identify disease sites, determine causes of death, and provide tissue to repair a defect in a living human being. Students in medical school study and dissect cadavers as a part of their education. Others who study cadavers include archaeologists and arts students. In addition, a cadaver may be used in the development and evaluation of surgical instruments.
The term cadaver is used in courts of law (and, to a lesser extent, also by media outlets such as newspapers) to refer to a dead body, as well as by recovery teams searching for bodies in natural disasters. The word comes from the Latin word cadere ("to fall"). Related terms include cadaverous (resembling a cadaver) and cadaveric spasm (a muscle spasm causing a dead body to twitch or jerk). A cadaver graft (also called “postmortem graft”) is the grafting of tissue from a dead body onto a living human to repair a defect or disfigurement. Cadavers can be observed for their stages of decomposition, helping to determine how long a body has been dead.
Cadavers have been used in art to depict the human body in paintings and drawings more accurately.
Human decay
Observation of the various stages of decomposition can help determine how long a body has been dead.
Stages of decomposition
The first stage is autolysis, more commonly known as self-digestion, during which the body's cells are destroyed through the action of their own digestive enzymes. However, these enzymes are released into the cells because of cessation of the active processes in the cells, not as an active process. In other words, though autolysis resembles the active process of digestion of nutrients by live cells, the dead cells are not actively digesting themselves as is often claimed in popular literature and as the synonym of autolysis – self-digestion – seems to imply. As a result of autolysis, liquid is created that seeps between the layers of skin and results in peeling of the skin. During this stage, flies (when present) begin to lay eggs in the openings of the body: eyes, nostrils, mouth, ears, open wounds, and other orifices. Hatched larvae (maggots) of blowflies subsequently get under the skin and begin to consume the body.
The second stage of decomposition is bloating. Bacteria in the gut begin to break down the tissues of the body, releasing gas that accumulates in the intestines, which becomes trapped because of the early collapse of the small intestine. This bloating occurs largely in the abdomen, and sometimes in the mouth, tongue, and genitals. This usually happens around the second week of decomposition. Gas accumulation and bloating will continue until the body is decomposed sufficiently for the gas to escape.
The third stage is putrefaction. It is the final and longest stage. Putrefaction is where the larger structures of the body break down, and tissues liquefy. The digestive organs, brain, and lungs are the first to disintegrate. Under normal conditions, the organs are unidentifiable after three weeks. The muscles may be eaten by bacteria or devoured by animals. Eventually, sometimes after several years, all that remains is the skeleton. In acid-rich soils, the skeleton will eventually dissolve into its base chemicals.
The rate of decomposition depends on many factors including temperature and the environment. The warmer and more humid the environment, the faster the body is broken down. The presence of carrion-consuming animals will also result in exposure of the skeleton as they consume parts of the decomposing body.
History
The history of the use of cadavers is filled with controversy, scientific advancement, and new discoveries. Beginning in the 3rd century BC in ancient Greece, two physicians, Herophilus of Chalcedon and Erasistratus of Ceos, practiced the dissection of cadavers in Alexandria, where it was the dominant means of learning anatomy. After both men died, the popularity of anatomical dissection declined until the practice fell out of use entirely. It was not revived until the 12th century, became increasingly popular in the 17th century, and has been used ever since.
Even though both Herophilus and Erasistratus had permission to use cadavers for dissection, there was still a large amount of taboo surrounding the use of cadavers for anatomical purposes, and these feelings continued for hundreds of years. From the time that anatomical dissection gained its roots in the 3rd century to around the 18th century, it was associated with dishonor, immorality, and unethical behavior. Many of these notions were because of religious beliefs and esthetic taboos, and were deeply entrenched in the beliefs of the public and the church. As mentioned above, the dissection of cadavers began to once again take hold around the 12th century. At this time dissection was still seen as dishonorable; however, it was not outright banned. Instead, the church put forth certain edicts for banning and allowing certain practices. One that was monumental for scientific advancement was issued by the Holy Roman emperor Frederick II in 1231. This decree stated that a human body would be dissected once every five years for anatomical studies, and that attendance was required for all who were training to or currently practicing medicine or surgery. This led to the first sanctioned human dissection since 300 B.C., which was performed publicly by Mondino de Liuzzi. This time period created a great deal of enthusiasm in what human dissection could do for science and attracted students from all over Europe to begin studying medicine.
In light of the new discoveries and advancements that were being made, religious moderation of dissection relaxed significantly; however, the public perception of it was still negative. Because of this perception, the only legal source of cadavers was the corpses of criminals who were executed, usually by hanging. Many of the offenders whose crimes “warranted” dissection and their families even considered dissection to be more terrifying and demeaning than the crime or death penalty itself. There were many fights and sometimes even riots when relatives and friends of the deceased and soon to be dissected tried to stop the delivery of corpses from the place of hanging to the anatomists. The government at the time (17th century) took advantage of these qualms by using dissection as a threat against committing serious crimes. They even increased the number of crimes that were punished by hanging to over 200 offenses. Nevertheless, as dissection of cadavers became even more popular, anatomists were forced to find other ways to obtain cadavers.
As demand for cadavers increased at universities across the world, people began grave-robbing. These corpses were transported and put on sale for local anatomy professors to take back to their students. The public tended to look the other way when it came to grave-robbing because those affected were usually poor or part of a marginalized group. There was more outcry when affluent or prominent members of society were affected, and this led to a riot in New York most commonly referred to as the Resurrection Riot of 1788. It started when a doctor waved the arm of a cadaver at a young boy looking through the window, who then went home and told his father. Worrying that his recently deceased wife's grave had been robbed, the father went to check on it and found that it had been. This story spread and people accused local physicians and anatomists. The riot grew to 5,000 people, and by its end medical students and doctors had been beaten and six people killed. This led to legal reforms such as the Anatomy Acts, with Massachusetts the first state to pass them: in 1830 and 1833, it allowed unclaimed bodies to be used for dissection. Laws in almost every state were subsequently passed, and grave-robbing was essentially eradicated.
Although dissection became increasingly accepted over the years, it was still widely disapproved of by the American public at the beginning of the 20th century. The disapproval came mostly from religious objections and from dissection's association with unclaimed bodies, and therefore with poverty. Many people attempted to display dissection in a positive light; for example, 200 prominent New York physicians publicly said they would donate their bodies after death. This and other efforts helped only in minor ways, and public opinion was affected far more by the exposure of the corrupt funeral industry. It was found that the cost of dying was remarkably high and that a large number of funeral homes were scamming people into paying more than they had to. These exposures did not necessarily remove the stigma, but they created fear that a person and their family would be victimized by scheming funeral directors, making people reconsider body donation.
In art
Since early history, the instances of inclusion and representation of corpses in art have been numerous; for instance, as in Neo-Assyrian sculpted reliefs of floating corpses on a river (c. 640 BCE), and in Aristophanes's comedy The Frogs (405 BCE), to memento mori and cadaver monuments.
The study and teaching of anatomy through the ages would not have been possible without sketches and detailed drawings of discoveries made while working with human corpses. The artistic depiction of the placement of body parts plays a crucial role in studying anatomy and in assisting those working with the human body. For most people, these images serve as the only glance into the body they will ever get.
Andreas Vesalius worked with many young artists to illustrate his book "De Humani Corporis Fabrica", which launched the practice of labelling anatomical features to better describe them. It is believed that Vesalius used the cadavers of executed criminals in his work because of the difficulty of securing bodies for dissection. He also went to great lengths to bring an artist's sensibility to his drawings and employed other artists to assist with the illustrations.
The study of the human body was not isolated to only medical doctors and students, as many artists reflected their expertise through masterful drawings and paintings. The detailed study of human and animal anatomy, as well as the dissection of corpses, was utilized by early Italian renaissance man Leonardo da Vinci in an effort to more accurately depict the human figure through his work. He studied the anatomy from an exterior perspective as an apprentice under Andrea del Verrocchio that started in 1466. During his apprenticeship, Leonardo mastered drawing detailed versions of anatomical structures such as muscles and tendons by 1472.
His approach to the depiction of the human body was much like that of the study of architecture, providing multiple views and three-dimensional perspectives of what he witnessed in person. One of the first examples of this is his use of three-dimensional perspectives to draw a skull in 1489. After further study under Verrocchio, some of Leonardo da Vinci's anatomical work was published in his book A Treatise on Painting. Some years later, in 1516, he partnered with the professor and anatomist Marcantonio della Torre in Florence, Italy, to take his study further. The two began to conduct dissections on human corpses at the Hospital of Santa Maria Nuova and later at hospitals in Milan and Rome. Through his study, da Vinci was perhaps the first to accurately draw the natural position of the human fetus in the womb, working from the cadaver of a deceased mother and her unborn child. It is speculated that he conducted approximately 30 dissections in total. His work with cadavers allowed him to produce the first drawings of the umbilical cord, uterus, cervix and vagina, and ultimately to dispute the belief that the uterus had multiple chambers in the case of multiple births. It is reported that between 1504 and 1507 he experimented with the brain of an ox by inserting a tube into the ventricular cavities, injecting hot wax, and scraping off the brain to leave a cast of the ventricles. Da Vinci's efforts proved very helpful in the study of the brain's ventricular system. He gained an understanding of what was happening mechanically under the skin to better portray the body through art. For example, he removed the facial skin of a cadaver to more closely observe and draw the detailed muscles that move the lips, obtaining a holistic understanding of that system. He also conducted a thorough study of the foot and ankle that remains consistent with current clinical theories and practice.
His work with the shoulder also mirrors modern understanding of its movement and functions, utilizing a mechanical description likening it to ropes and pulleys. He also was one of the first to study neuroanatomy and made great advances regarding the understanding of the anatomy of the eye, optic nerves and the spine, but unfortunately his later discovered notes were disorganized and difficult to decipher due to his practice of reverse script writing (mirror writing).
For centuries artists have used their knowledge gleaned from the study of anatomy and the use of cadavers to better present a more accurate and lively representation of the human body in their artwork and mostly in paintings. It is thought that Michelangelo and/or Raphael may have also conducted dissections.
Importance in science
Cadavers are used in many different facets throughout the scientific community. One important aspect of their use is that they have provided science with a vast amount of information about the anatomy of the human body. Cadavers allowed scientists to investigate the human body on a deeper level, which resulted in the identification of particular body parts and organs. Two Greek scientists, Herophilus of Chalcedon and Erasistratus of Ceos, were the first to use cadavers, in the third century BC. Through the dissection of cadavers, Herophilus made multiple discoveries concerning the anatomy of the human body, including the distinction between the four ventricles within the brain, the identification of seven pairs of cranial nerves, the difference between sensory and motor nerves, and the discovery of the cornea, retina and choroid coat within the eye. Herophilus also discovered the valves within the human heart, while Erasistratus identified their function by testing the irreversibility of blood flow through the valves. Erasistratus also discovered and distinguished many details of the veins and arteries of the human body. Herophilus later provided descriptions of the human liver, the pancreas, and the male and female reproductive systems, based on the dissection of the human body. Cadavers allowed Herophilus to determine that the womb in which fetuses grow and develop is not bicameral. This went against the earlier notion that the womb had two chambers; Herophilus found it to have only one. He also discovered the ovaries, the broad ligaments and the tubes within the female reproductive system. During this time period, cadavers were one of the only ways to develop an understanding of the anatomy of the human body.
Galen (130–201 AD) connected the famous works of Aristotle and other Greek physicians to his own understanding of the human body. Galenic anatomy and physiology were considered the most prominent methods for teaching the study of the human body during this time period. Andreas Vesalius (1514–1564), known as the father of modern human anatomy, based his knowledge on Galen's findings and on his own dissection of human cadavers. Vesalius performed multiple dissections on cadavers so that medical students could recognize and understand how the interior parts of a human being worked. Cadavers also helped Vesalius discredit previous notions, published by the Greek physician Galen, about certain functions of the brain and human body. Vesalius concluded that Galen had never used cadavers to gain a proper understanding of human anatomy, but instead relied on knowledge inherited from his predecessors.
Importance in medical field
In the present day, cadavers are used within medicine and surgery to further knowledge on human gross anatomy. Surgeons have dissected and examined cadavers before surgical procedures on living patients to identify any possible deviations within the surgical area of interest. New types of surgical procedures can lead to numerous obstacles involved within the procedure which can be eliminated through prior knowledge from the dissection of a cadaver.
Cadavers not only provide medical students and doctors with knowledge about the different functions of the human body, but they also reveal the various ways the human body can malfunction. Galen (130–201 AD), a Greek physician, was one of the first to associate events that occurred during a person's life with the internal ramifications found after death. A simple autopsy of a cadaver can help determine the origins of deadly diseases or disorders. Autopsies can also provide information on how effective certain drugs or procedures were within the cadaver and how humans respond to certain injuries.
Appendectomies, the removal of the appendix, are performed 28,000 times a year in the United States and are still practiced on human cadavers and not with technology simulations. Gross anatomy, a common course in medical school studying the visual structures of the body, gives students the opportunity to have a hands-on learning environment. The need for cadavers has also grown outside of academic programs for research. Organizations like Science Care and the Anatomy Gifts Registry help send bodies where they are needed most.
Preserving for use in dissection
For a cadaver to be viable and ideal for anatomical study and dissection, the body must be refrigerated or the preservation process must begin within 24 hours of death. This preservation may be accomplished by embalming using a mixture of embalming fluids, or with a relatively new method called plastination. Both methods have advantages and disadvantages in regards to preparing bodies for anatomical dissection in the educational setting.
Embalming with fluids
The practice of embalming via chemical fluids has been used for centuries. The main objectives of this form of preservation are to keep the body from decomposing, help the tissues retain their color and softness, prevent both biological and environmental hazards, and preserve the anatomical structures in their natural forms. This is accomplished with a variety of chemical substances that can be separated generally into groups by their purposes. Disinfectants are used to kill any potential microbes. Preservatives are used to halt the action of decomposing organisms, deprive these organisms of nutrition, and alter chemical structures in the body to prevent decomposition. Various modifying agents are used to maintain the moisture, pH, and osmotic properties of the tissues along with anticoagulants to keep blood from clotting within the cardiovascular system. Other chemicals may also be used to keep the tissue from carrying displeasing odors or particularly unnatural colors.
Embalming practice has changed a great deal in the last few hundred years. Modern embalming for anatomical purposes no longer includes evisceration, as this disrupts the organs in ways that would be disadvantageous for the study of anatomy. As with the mixtures of chemicals, embalmers practicing today can use different methods for introducing fluids into the cadaver. Fluid can be injected into the arterial system (typically through the carotid or femoral arteries), the main body cavities, under the skin, or the cadaver can be introduced to fluids at the outer surface of the skin via immersion.
Different embalming services use different types and ratios of fluids, but typical embalming chemicals include formaldehyde, phenol, methanol, and glycerin. These fluids are combined in varying ratios depending on the source, but are generally also mixed with large amounts of water.
Chemicals and their roles in embalming
Formaldehyde is very widely used in the process of embalming. It is a fixative, and kills bacteria, fungus, and insects. It prevents decay by keeping decomposing microorganisms from surviving on and in the cadaver. It also cures the tissues it is used in so that they cannot serve as nutrients for these organisms. While formaldehyde is a good antiseptic, it has certain disadvantages as well. When used in embalming, it causes blood to clot and tissues to harden, it turns the skin gray, and its fumes are both malodorous and toxic if inhaled. However, its abilities to prevent decay and tan tissue without ruining its structural integrity have led to its continued widespread use to this day.
Phenol is a disinfectant that functions as an antibacterial and antifungal agent. It prevents the growth of mold in its liquefied form. Its disinfectant qualities rely on its ability to denature proteins and dismantle cell walls, but this unfortunately has the added side effect of drying tissues and occasionally results in a degree of discoloration.
Methanol is an additive with disinfectant properties. It helps regulate the osmotic balance of the embalming fluid, and it acts as a reasonably effective antifreeze. It has been noted to be acutely toxic to humans.
Glycerin is a wetting agent that preserves liquid in the tissues of the cadaver. While it is not itself a true disinfectant, mixing it with formaldehyde greatly increases the effectiveness of formaldehyde's disinfectant properties.
Advantages and disadvantages of using traditionally embalmed cadavers
The use of traditionally embalmed cadavers is and has been the standard for medical education. Many medical and dental institutions still show a preference for these today, even with the advent of more advanced technology like digital models or synthetic cadavers. Cadavers embalmed with fluid do present a greater health risk to anatomists than these other methods as some of the chemicals used in the embalming process are toxic, and imperfectly embalmed cadavers may carry a risk of infection.
Plastination
Gunther von Hagens invented plastination at Heidelberg University in Heidelberg, Germany in 1977. This method of cadaver preservation involves the replacement of fluid and soluble lipids in a body with plastics. The resulting preserved bodies are called plastinates.
Whole-body plastination begins with much the same method as traditional embalming; a mixture of embalming fluids and water are pumped through the cadaver via arterial injection. After this step is complete, the anatomist may choose to dissect parts of the body to expose particular anatomical structures for study. After any desired dissection is completed, the cadaver is submerged in acetone. The acetone draws the moisture and soluble fats from the body and flows in to replace them. The cadaver is then placed in a bath of the plastic or resin of the practitioner's choice and the step known as forced impregnation begins. The bath generates a vacuum that causes acetone to vaporize, drawing the plastic or resin into the cells as it leaves. Once this is done the cadaver is positioned, the plastic inside it is cured, and the specimen is ready for use.
Advantages and disadvantages of using plastinates
Plastinates are advantageous in the study of anatomy as they provide durable, non-toxic specimens that are easy to store. However, they still have not truly gained ground against the traditionally embalmed cadaver. Plastinated cadavers are not accessible for some institutions, some educators believe the experience gained during embalmed cadaver dissection is more valuable, and some simply do not have the resources to acquire or use plastinates.
Body snatching
While many cadavers were the bodies of murderers provided by the state, too few of these corpses were available to meet the demand for dissection. The first recorded body snatching was performed by four medical students who were arrested in 1319 for grave-robbing. In the 1700s most body snatchers were doctors, anatomy professors, or their students. By 1828, some anatomists were paying others to perform the exhumations. People in this profession were commonly known in the medical community as "resurrection men".
The London Borough Gang was a group of resurrection men that worked from 1802 to 1825. These men provided a number of schools with cadavers, and members of those schools would use their influence to keep the men out of jail. Members of rival gangs would often report one another to the authorities, or desecrate a graveyard to cause a public outcry, so that their rivals would be unable to operate.
Selling murder victims
From 1827 to 1828 in Scotland, a number of people were murdered and their bodies sold to medical schools for research purposes, in what became known as the West Port murders. The Anatomy Act 1832 was created to ensure that relatives of the deceased consented to the use of their kin in dissection and other scientific processes. Public response to the West Port murders was a factor in the passage of this bill, as were the acts committed by the London Burkers.
Stories appeared of people murdering others and selling the cadavers. Two of the best-known cases are those of Burke and Hare, and of Bishop, May, and Williams.
Burke and Hare – Burke and Hare ran a boarding house. When one of their tenants died, they brought him to Robert Knox's anatomy classroom in Edinburgh, where they were paid seven pounds for the body. Realizing the possible profit, they murdered 16 people by asphyxiation over the next year and sold their bodies to Knox. They were eventually caught when a tenant returned to her bed only to encounter a corpse. Hare testified against Burke in exchange for amnesty and Burke was found guilty, hanged, and publicly dissected.
London Burkers, Bishop, May and Williams – These body snatchers killed three boys, ages 10, 11 and 14 years old. The anatomist that they sold the cadavers to was suspicious. To delay their departure, the anatomist stated that he needed to break a 50-pound note and sent for the police who then arrested the men. In his confession Bishop claimed to have body-snatched 500 to 1000 bodies in his career.
Making cars safer
Prior to the development of crash test dummies, cadavers were used to make motor vehicles safer. Cadavers have helped set guidelines on the safety features of vehicles ranging from laminated windshields to seat belt airbags. The first recorded use of cadavers as crash test dummies was by Lawrence Patrick in the 1930s, after he had used his own body, and those of his students, to test the limits of the human body. His first cadaver test involved dropping a cadaver down an elevator shaft, from which he learned that the human skull can withstand up to one and a half tons of force for one second before experiencing any type of damage.
In a 1995 study, it was estimated that improvements made to cars since cadaver testing began had prevented 143,000 injuries and 4,250 deaths. Miniature accelerometers are placed on the bone of the tested area of the cadaver. Damage is then inflicted on the cadaver with various tools, including linear impactors, pendulums, and falling weights. The cadaver may also be placed on an impact sled to simulate a crash. After these tests are completed, the cadaver is examined with an x-ray for any damage and returned to the anatomy department. Cadaver use contributed to Ford's inflatable rear seat belts, introduced in the 2011 Explorer.
Public view of cadaver crash test dummies
After a New York Times article was published in 1993, the public became aware of the use of cadavers in crash testing. The article focused on Heidelberg University's use of approximately 200 adult and child cadavers. After public outcry, the university was required to prove that the families of the deceased had approved their use in testing.
Therocephalia
Therocephalia is an extinct clade of eutheriodont therapsids (mammals and their close relatives) from the Permian and Triassic periods. The therocephalians ("beast-heads") are named after their large skulls, which, along with the structure of their teeth, suggest that they were carnivores. Like other non-mammalian synapsids, therocephalians were once described as "mammal-like reptiles". Therocephalia is the group most closely related to the cynodonts, which gave rise to the mammals, and this relationship is evidenced by a variety of skeletal features. Indeed, it has been proposed that cynodonts evolved from therocephalians, which would make Therocephalia, as traditionally recognised, paraphyletic with respect to cynodonts.
The fossils of therocephalians are numerous in the Karoo of South Africa, but have also been found in Russia, China, Tanzania, Zambia, and Antarctica. Early therocephalian fossils discovered in Middle Permian deposits of South Africa support a Gondwanan origin for the group, which seems to have spread quickly across Earth. Although almost every therocephalian lineage ended during the great Permian–Triassic extinction event, a few representatives of the subgroup called Eutherocephalia survived into the Early Triassic. Some genera belonging to this group are believed to have possessed venom, which would make them the oldest tetrapods known to have such characteristics. However, the last therocephalians became extinct by the early Middle Triassic, possibly due to climate change, along with competition with cynodonts and various groups of reptiles — mostly archosaurs and their close relatives, including archosauromorphs and archosauriforms.
Anatomy and physiology
Like the Gorgonopsia and many cynodonts, most therocephalians were presumably carnivores. The earlier therocephalians were, in many respects, as primitive as the gorgonopsians, but they did show certain advanced features. There is an enlargement of the temporal opening for broader jaw adductor muscle attachment and a reduction of the phalanges (finger and toe bones) to the mammalian phalangeal formula. The presence of an incipient secondary palate in advanced therocephalians is another feature shared with mammals. The discovery of maxilloturbinal ridges in forms such as the primitive therocephalian Glanosuchus, suggests that at least some therocephalians may have been warm-blooded.
The later therocephalians included the advanced Baurioidea, which carried some theriodont characteristics to a high degree of specialization. For instance, small baurioids and the herbivorous Bauria did not have an ossified postorbital bar separating the orbit from the temporal opening—a condition typical of primitive mammals. These and other advanced features led to the long-held opinion, now rejected, that the ictidosaurs and even some early mammals arose from a baurioid therocephalian stem. Mammalian characteristics such as this seem to have evolved in parallel among a number of different therapsid groups, even within Therocephalia.
Several more specialized lifestyles have been suggested for some therocephalians. Many small forms, like ictidosuchids, have been interpreted as aquatic animals. Evidence for aquatic lifestyles includes sclerotic rings that may have stabilized the eye under the pressure of water and strongly developed cranial joints, which may have supported the skull when consuming large fish and aquatic invertebrates. One therocephalian, Nothogomphodon, had large sabre-like canine teeth and may have fed on large animals, including other therocephalians. Other therocephalians such as bauriids and nanictidopids have wide teeth with many ridges similar to those of mammals, and may have been herbivores.
Many small therocephalians have small pits on their snouts that probably supported vibrissae (whiskers). In 1994, the Russian paleontologist Leonid Tatarinov proposed that these pits were part of an electroreception system in aquatic therocephalians. However, it is more likely that these pits are enlarged versions of the ones thought to support whiskers, or holes for blood vessels in a fleshy lip. The genera Euchambersia and Ichibengops, dating from the Lopingian, particularly attract the attention of paleontologists, because the fossil skulls attributed to them have structures which suggest that these two animals had organs for delivering venom.
Classification
The therocephalians evolved as one of several lines of non-mammalian therapsids, and have a close relationship to the cynodonts, the group that includes mammals and their ancestors. They are broadly regarded as the sister group to cynodonts by most modern researchers, united together as the clade Eutheriodontia. However, some researchers have proposed that therocephalians are themselves ancestral to cynodonts, which would render Therocephalia cladistically paraphyletic relative to cynodonts. Historically, cynodonts were often proposed to descend from (or be closest to) the therocephalian family Whaitsiidae under this hypothesis; however, a 2024 study instead found support for a sister relationship between cynodonts and Eutherocephalia. The oldest known therocephalians first appear in the fossil record at the same time as other major therapsid groups, including the Gorgonopsia, which they resemble in many primitive features. For example, many early therocephalians possess long canine teeth similar to those of gorgonopsians. The therocephalians, however, outlasted the gorgonopsians, persisting into the early Middle Triassic period as small weasel-like carnivores and cynodont-like herbivores.
While common ancestry with cynodonts (and, thus, mammals) accounts for many similarities between these groups, some scientists believe that other similarities may be better attributed to convergent evolution, such as the loss of the postorbital bar in some forms, a mammalian phalangeal formula, and some form of a secondary palate in most taxa. Therocephalians and cynodonts both survived the Permian-Triassic mass extinction; but, while therocephalians soon became extinct, cynodonts underwent rapid diversification. Therocephalians experienced a decreased rate of cladogenesis, meaning that few new groups appeared after the extinction. Most Triassic therocephalian lineages originated in the Late Permian, and lasted for only a short period of time in the Triassic, going extinct during the late Anisian.
Taxonomy
Therocephalia was first named and conceived of by Robert Broom in 1903 as an order to include what he regarded as primitive theriodonts, based primarily on Scylacosaurus and Ictidosaurus. However, his original concept of Therocephalia differed strongly from the modern classification by also including various genera of gorgonopsians (including Gorgonops) and dinocephalians. From 1903 to 1907 Broom added more therocephalian genera, as well as some non-therocephalians, to this group, including the anomodont Galechirus. The latter's inclusion highlighted Broom's view of therocephalians as 'primitive' and ancestral to other therapsids, as he believed anomodonts to be descended from a therocephalian-like ancestor such as Galechirus. By 1908, however, he considered the inclusion of this and some other non-therocephalian genera in the group to be doubtful. In 1913, Broom reinstated Gorgonopsia as distinct from Therocephalia, but for many decades afterwards there was still confusion among him and other researchers over which genera belonged to which group. The group's rank also varied between order, suborder and infraorder depending on authors' preferred therapsid systematics.
At the same time, the small 'advanced' therocephalians now classified under Baurioidea were often regarded as belonging to their own subgroup of therapsids distinct from therocephalians, the Bauriamorpha. Bauriamorphs were classified separately from therocephalians for many decades, though were often inferred to have evolved from therocephalians in parallel with cynodonts, each typically from different therocephalian stock. The inclusion of baurioids under Therocephalia was only firmly established in the 1980s, namely by Kemp (1982) and Hopson and Barghusen (1986).
Various therocephalian subgroups and clades have been proposed since the group was named, although their contents and nomenclature have often been highly unstable and some previously recognized therocephalian clades have turned out to be artificial or based upon dubious taxa. This has led to some prevalent names in therocephalian literature, sometimes in use for decades, being replaced by lesser-known names that hold priority. For example, the Scaloposauridae was based on fossils with mostly juvenile characteristics and is likely represented by immature specimens from other disparate therocephalian families.
In another example, the name 'Pristerognathidae' was extensively used for a group of basal therocephalians for much of the 20th century, but it has since been recognised that the name Scylacosauridae holds precedence for this group. Furthermore, the scope of 'Pristerognathidae' was unstable, varying from an individual subgroup of early therocephalians (alongside others such as Lycosuchidae, Alopecodontidae, and Ictidosauridae) to the entirety of early therocephalians. Similarly, various names have been used in 20th-century literature for the therocephalians corresponding to the family Akidnognathidae, including Annatherapsididae, Euchambersiidae (the oldest available name) and Moschorhinidae, and members have often had a confused relationship to whaitsiids. Consensus on the name and contents of Akidnognathidae was only achieved in the 21st century, on the principle that a family-level group is established on the oldest referable genus; thus Akidnognathidae takes precedence for this group of non-whaitsioid eutherocephalians.
On the other hand, some groups previously thought to be artificial have turned out to be valid. The aberrant therocephalian family Lycosuchidae, once identified by the presence of multiple functional caniniform teeth, was proposed to represent an unnatural group based on a study of canine replacement in early therocephalians by van den Heever in 1980. However, subsequent analysis has exposed additional synapomorphies supporting the monophyly of this group (including delayed caniniform replacement), and Lycosuchidae is currently considered a valid basal clade within Therocephalia. However, most genera included in the group have since been declared dubious, and it now only includes Lycosuchus and Simorhinella.
Modern therocephalian taxonomy is instead based upon phylogenetic analyses of therocephalian species, which consistently recognises two groups of early therocephalians (the Lycosuchidae and Scylacosauridae) while more derived therocephalians form the clade Eutherocephalia. Some analyses have found scylacosaurids to be closer to eutherocephalians than to lycosuchids, and so have been united as the clade Scylacosauria, while others have suggested they are each other's sister taxa. Within Eutherocephalia, major clades corresponding to the families Akidnognathidae, Chthonosauridae, Hofmeyriidae, Whaitsiidae are recognised, along with various subclades grouped under Baurioidea. However, while individual groups of therocephalians are broadly recognised as valid, the interrelationships between them are often poorly supported. As such, there are few higher-level named clades uniting the multiple subclades, with the exceptions of Whaitsiioidea (uniting Hofmeyriidae and Whaitsiidae) and Baurioidea.
Phylogeny
Early phylogenetic analyses of therocephalians, such as that of Hopson and Barghusen (1986) and van den Heever (1994), recovered and validated many of the therocephalian subtaxa mentioned above in a phylogenetic context. However, the higher-level relationships were difficult to resolve, particularly between the subclades of Eutherocephalia (i.e. Hofmeyriidae, Akidnognathidae, Whaitsiidae and Baurioidea). For example, Hopson and Barghusen (1986) could only recover Eutherocephalia as an unresolved polytomy. Despite these shortcomings, subsequent discussions of therocephalian relationships relied almost exclusively on these analyses. Later analyses focused on the relationships of early cynodonts, namely Abdala (2007) and Botha et al. (2007), included some therocephalian taxa and supported the existence of Eutherocephalia, but also found cynodonts to be the sister taxon to the whaitsiid therocephalian Theriognathus and thus rendering Therocephalia paraphyletic.
Later phylogenetic analyses of therocephalians, initiated by Huttenlocker (2009), emphasise using a broader selection of therocephalian taxa and characters. Such analyses have reinforced Therocephalia as a sister clade to cynodonts, and the monophyly of Therocephalia has been supported by subsequent researchers.
Below is a cladogram modified from an analysis published by Christian A. Sidor, Zoe T. Kulik and Adam K. Huttenlocker in 2022, simplified to illustrate the relationships of the major recognised therocephalian subclades. It is based on the data matrix first published by Huttenlocker et al. (2011), and represents the broad topologies found by other iterations of this dataset, such as Sigurdsen et al. (2012), Huttenlocker et al. (2014), and Liu and Abdala (2022). An example of the lability of these relationships is demonstrated by Liu and Abdala (2023), who recovered an alternative topology with Chthonosauridae nested deeply within Akidnognathidae.
Below is a cladogram modified from Pusch et al. (2024) analysing the relationships of therocephalians and early cynodonts. Their analysis focused on including endocranial characteristics to help resolve the relations of therocephalians and cynodonts to supplement previous analyses that relied almost entirely on superficial cranial and dental characteristics that are subject to convergent evolution, and as such only includes taxa with available applicable data. Of these, only four therocephalians could be included. However, they each represent four major groups within therocephalian phylogeny: the two 'basal therocephalians' Lycosuchus (Lycosuchidae) and Alopecognathus (Scylacosauridae) and two derived members of Eutherocephalia, Olivierosuchus (Akidnognathidae) and Theriognathus (Whaitsiidae).
Notably, their analyses consistently found cynodonts and eutherocephalians to be sister taxa, with the basal therocephalians Lycosuchus and scylacosaurids in a more basal position, rendering therocephalians as they are traditionally conceived paraphyletic. This differs from previous proposals of a paraphyletic Therocephalia which typically regarded cynodonts as being closest to derived whaitsiid therocephalians.
Dispersion (chemistry)
A dispersion is a system in which distributed particles of one material are dispersed in a continuous phase of another material. The two phases may be in the same or different states of matter.
Dispersions are classified in a number of different ways, including how large the particles are in relation to the particles of the continuous phase, whether or not precipitation occurs, and the presence of Brownian motion. In general, dispersions of particles sufficiently large for sedimentation are called suspensions, while those of smaller particles are called colloids and solutions.
Structure and properties
Dispersions do not display any structure; i.e., the particles (or, in the case of emulsions, droplets) dispersed in the liquid or solid matrix (the "dispersion medium") are assumed to be statistically distributed. Therefore, percolation theory is usually assumed to describe the properties of dispersions appropriately.
However, percolation theory can be applied only if the system it is to describe is in or close to thermodynamic equilibrium. There have been only very few studies of the structure of dispersions (emulsions), although they are plentiful in type and in use all over the world in innumerable applications (see below).
In the following, only dispersions with a dispersed-phase diameter of less than 1 μm will be discussed. To understand the formation and properties of such dispersions (including emulsions), it must be considered that the dispersed phase exhibits a "surface" that is covered ("wetted") by the surface of the dispersion medium, the two together forming an interface. Both surfaces have to be created (which requires a huge amount of energy), and the interfacial tension (the difference in surface tensions) does not compensate for this energy input, if at all.
Experimental evidence suggests that dispersions have a structure very different from any kind of statistical distribution (which would be characteristic of a system in thermodynamic equilibrium), and instead display structures similar to self-organisation, which can be described by non-equilibrium thermodynamics. This is the reason why some liquid dispersions turn into gels or even solids when the concentration of the dispersed phase exceeds a critical concentration (which is dependent on particle size and interfacial tension). It also explains the sudden appearance of conductivity in a system of a dispersed conductive phase in an insulating matrix.
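The abrupt onset of conductivity mentioned above is the classic signature of a percolation threshold: below a critical loading of conductive filler no spanning path exists, and above it one almost always does. A minimal Monte Carlo sketch of site percolation on a square grid (the grid size and the two loadings are illustrative assumptions, not values from the text):

```python
import random

def percolates(p, n=20, rng=random.Random(0)):
    """Does a conductive path span an n x n grid in which each site is
    conductive with probability p? (site percolation, 4-neighbour;
    fixed seed for reproducibility)"""
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    # Flood-fill from every conductive site in the top row.
    frontier = [(0, j) for j in range(n) if grid[0][j]]
    seen = set(frontier)
    while frontier:
        i, j = frontier.pop()
        if i == n - 1:
            return True          # reached the bottom row: it conducts
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                frontier.append((ni, nj))
    return False

# Fraction of random grids that conduct top-to-bottom, low vs high filler loading:
low = sum(percolates(0.3) for _ in range(50)) / 50
high = sum(percolates(0.8) for _ in range(50)) / 50
print(low < 0.1 < 0.9 < high)  # True: conductivity appears abruptly between these
```

The jump from essentially never conducting at 30% loading to essentially always conducting at 80% illustrates why the transition looks "sudden" at a critical concentration.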
Dispersion description
Dispersion is a process by which (in the case of a solid dispersing in a liquid) agglomerated particles are separated from each other, and a new interface is generated between the inner surface of the liquid dispersion medium and the surface of the dispersed particles. This process is facilitated by molecular diffusion and convection.
With respect to molecular diffusion, dispersion occurs as a result of an unequal concentration of the introduced material throughout the bulk medium. When the dispersed material is first introduced into the bulk medium, the region at which it is introduced then has a higher concentration of that material than any other point in the bulk. This unequal distribution results in a concentration gradient that drives the dispersion of particles in the medium so that the concentration is constant across the entire bulk. With respect to convection, variations in velocity between flow paths in the bulk facilitate the distribution of the dispersed material into the medium.
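The gradient-driven smoothing described above can be sketched with a one-dimensional explicit finite-difference diffusion step (the grid, diffusivity, and time step are illustrative assumptions, not values from the text):

```python
def diffuse_step(conc, D=1.0, dx=1.0, dt=0.1):
    """One explicit Euler step of 1D diffusion, dc/dt = D * d2c/dx2.
    Material flows from high to low concentration, flattening the gradient.
    (D*dt/dx^2 must stay <= 0.5 for the scheme to be stable.)"""
    new = list(conc)
    for i in range(1, len(conc) - 1):
        new[i] = conc[i] + D * dt / dx**2 * (conc[i - 1] - 2 * conc[i] + conc[i + 1])
    return new

# A concentrated "drop" introduced at the centre of the medium:
c = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(50):
    c = diffuse_step(c)
# After many steps the initial spike has spread out and flattened.
print(max(c) - min(c) < 0.5)  # True
```

Each step moves material out of the high-concentration cell into its neighbours, exactly the evening-out of an unequal distribution that the paragraph describes.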
Although both transport phenomena contribute to the dispersion of a material into the bulk, the mechanism of dispersion is primarily driven by convection in cases where there is significant turbulent flow in the bulk. Diffusion is the dominant mechanism in the process of dispersion in cases of little to no turbulence in the bulk, where molecular diffusion is able to facilitate dispersion over a long period of time. These phenomena are reflected in common real-world events. The molecules in a drop of food coloring added to water will eventually disperse throughout the entire medium, where the effects of molecular diffusion are more evident. However, stirring the mixture with a spoon will create turbulent flows in the water that accelerate the process of dispersion through convection-dominated dispersion.
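The crossover between the two regimes is commonly characterized by the Péclet number, the dimensionless ratio of convective to diffusive transport rates (the number itself is not named in the text above, and the values below are illustrative assumptions):

```python
def peclet_number(velocity_m_s: float, length_m: float, diffusivity_m2_s: float) -> float:
    """Pe = u * L / D: ratio of convective to diffusive transport rates."""
    return velocity_m_s * length_m / diffusivity_m2_s

def dominant_mechanism(pe: float) -> str:
    """Pe >> 1 means convection dominates; Pe << 1 means diffusion dominates."""
    return "convection" if pe > 1.0 else "diffusion"

# Food coloring left in still water: small-molecule diffusivity ~1e-9 m^2/s,
# residual drift ~10 nm/s across a 1 cm glass (illustrative values).
pe_still = peclet_number(1e-8, 0.01, 1e-9)    # Pe = 0.1

# The same glass stirred with a spoon: bulk flow ~0.1 m/s.
pe_stirred = peclet_number(0.1, 0.01, 1e-9)   # Pe = 1e6

print(dominant_mechanism(pe_still))    # diffusion
print(dominant_mechanism(pe_stirred))  # convection
```

This matches the food-coloring example: left undisturbed the drop spreads by diffusion, while stirring raises the Péclet number by many orders of magnitude and convection takes over.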
Degree of dispersion
The term dispersion also refers to the physical property of the degree to which particles clump together into agglomerates or aggregates. While the two terms are often used interchangeably, according to ISO nanotechnology definitions, an agglomerate is a reversible collection of particles weakly bound, for example by van der Waals forces or physical entanglement, whereas an aggregate is composed of irreversibly bonded or fused particles, for example through covalent bonds. A full quantification of dispersion would involve the size, shape, and number of particles in each agglomerate or aggregate, the strength of the interparticle forces, their overall structure, and their distribution within the system. However, the complexity is usually reduced by comparing the measured size distribution of "primary" particles to that of the agglomerates or aggregates. When discussing suspensions of solid particles in liquid media, the zeta potential is most often used to quantify the degree of dispersion, with suspensions possessing a high absolute value of zeta potential being considered as well-dispersed.
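As an illustration of the zeta-potential criterion, a commonly cited rule of thumb holds that suspensions with an absolute zeta potential above roughly 30 mV are considered well-dispersed; the exact cutoff is an assumption here, not stated in the text:

```python
def is_well_dispersed(zeta_potential_mV: float, threshold_mV: float = 30.0) -> bool:
    """A high absolute zeta potential implies strong electrostatic repulsion
    between particles, which resists agglomeration; ~30 mV is a common
    rule-of-thumb cutoff, not a sharp physical limit."""
    return abs(zeta_potential_mV) >= threshold_mV

print(is_well_dispersed(-45.0))  # True: strongly charged, stays dispersed
print(is_well_dispersed(8.0))    # False: weakly charged, prone to agglomerate
```

Note that the sign of the zeta potential does not matter for stability, only its magnitude, which is why the absolute value is used.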
Types of dispersions
A solution describes a homogeneous mixture where the dispersed particles will not settle if the solution is left undisturbed for a prolonged period of time.
A colloid is a heterogeneous mixture in which the dispersed particles have, in at least one direction, a dimension roughly between 1 nm and 1 μm, or in which discontinuities are found at distances of that order.
A suspension is a heterogeneous dispersion of larger particles in a medium. Unlike solutions and colloids, if left undisturbed for a prolonged period of time, the suspended particles will settle out of the mixture.
Although suspensions are relatively simple to distinguish from solutions and colloids, it may be difficult to distinguish solutions from colloids since the particles dispersed in the medium may be too small to distinguish by the human eye. Instead, the Tyndall effect is used to distinguish solutions and colloids. Due to the various reported definitions of solutions, colloids, and suspensions provided in the literature, it is difficult to label each classification with a specific particle size range. The International Union of Pure and Applied Chemistry attempts to provide a standard nomenclature for colloids as particles in a size range having a dimension roughly between 1 nm and 1 μm.
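The rough size boundaries described above can be collected into a simple classifier; the cutoffs are the approximate IUPAC figures quoted in the text, not sharp physical limits:

```python
def classify_dispersion(particle_size_nm: float) -> str:
    """Classify a dispersion by dispersed-particle size, using the rough
    IUPAC boundaries (~1 nm and ~1 um) quoted in the text."""
    if particle_size_nm < 1.0:
        return "solution"      # will not settle; no Tyndall scattering
    if particle_size_nm <= 1000.0:
        return "colloid"       # scatters a light beam (Tyndall effect)
    return "suspension"        # large enough to settle on standing

print(classify_dispersion(0.3))     # solution (dissolved ions/molecules)
print(classify_dispersion(100.0))   # colloid
print(classify_dispersion(5000.0))  # suspension
```

In practice the Tyndall test, not a size measurement, is what distinguishes a solution from a colloid by eye, since both look homogeneous.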
In addition to the classification by particle size, dispersions can also be labeled by the combination of the dispersed phase and the medium phase that the particles are suspended in. Aerosols are liquids dispersed in a gas, sols are solids in liquids, emulsions are liquids dispersed in liquids (more specifically a dispersion of two immiscible liquids), and gels are liquids dispersed in solids.
Examples of dispersions
Milk is a commonly cited example of an emulsion, a specific type of dispersion of one liquid into another liquid where the two liquids are immiscible. The fat molecules suspended in milk provide a mode of delivery of important fat-soluble vitamins and nutrients from the mother to newborn. The mechanical, thermal, or enzymatic treatment of milk manipulates the integrity of these fat globules and results in a wide variety of dairy products.
Oxide dispersion-strengthened alloy (ODS) is an example of oxide particle dispersion into a metal medium, which improves the high temperature tolerance of the material. Therefore these alloys have several applications in the nuclear energy industry, where materials must withstand extremely high temperatures to maintain operation.
The degradation of coastal aquifers is a direct result of seawater intrusion into, and dispersion within, the aquifer following excessive use of the aquifer. When an aquifer is depleted for human use, it is naturally replenished by groundwater moving in from other areas. In the case of coastal aquifers, the water supply is replenished both from the land boundary on one side and the sea boundary on the other side. After excessive discharge, saline water from the sea boundary will enter the aquifer and disperse in the freshwater medium, threatening the viability of the aquifer for human use. Several different solutions to seawater intrusion in coastal aquifers have been proposed, including engineering methods of artificial recharge and implementing physical barriers at the sea boundary.
Chemical dispersants are used in oil spills to mitigate the effects of the spill and promote the degradation of oil particles. The dispersants break up pools of oil sitting on the surface of the water into smaller droplets that disperse into the water, which lowers the overall concentration of oil in the water and limits further contamination and impact on marine biology and coastal wildlife.
| Physical sciences | Chemical mixtures: General | null |
22438472 | https://en.wikipedia.org/wiki/Conium%20maculatum | Conium maculatum | Conium maculatum, known as hemlock (British English) or poison hemlock (American English), is a highly poisonous flowering plant in the carrot family Apiaceae, native to Europe and North Africa. It is herbaceous without woody parts and has a biennial lifecycle. A hardy plant capable of living in a variety of environments, hemlock is widely naturalised in locations outside its native range, such as parts of Australia, West Asia, and North and South America, to which it has been introduced. It is capable of spreading and thereby becoming an invasive weed.
All parts of the plant are toxic, especially the seeds and roots, and especially when ingested. Under the right conditions the plant grows quite rapidly during the growing season and can reach heights of , with a long penetrating root. The plant has a distinctive odour, usually considered unpleasant, that carries on the wind. The hollow stems are usually spotted with a dark maroon colour and become dry and brown after the plant completes its biennial lifecycle. The hollow stems remain deadly for up to three years after the plant has died.
Description
Conium maculatum is a herbaceous flowering plant that grows to tall, exceptionally . All parts of the plant are hairless (glabrous). Hemlock has a smooth, green, hollow stem, usually spotted or streaked with red or purple. The leaves are two- to four-pinnate, finely divided and lacy, overall triangular in shape, up to long and broad. Hemlock's flowers are small and white; they are loosely clustered, and each flower has five petals.
A biennial plant, hemlock produces leaves at its base the first year but no flowers. In its second year it produces white flowers in umbrella-shaped clusters.
Similar species
Hemlock can be confused with the wild carrot plant (Daucus carota, sometimes called Queen Anne's lace). Wild carrot has a hairy stem without purple markings, and grows less than tall. One can distinguish the two by hemlock's smooth texture, vivid mid-green colour, purple spotting of stems and petioles, and the typical height of its flowering stems, at least , twice the maximum for wild carrot.
The species can also be confused with harmless cow parsley (Anthriscus sylvestris, also sometimes called Queen Anne's lace).
The plant should not be confused with the North American-native Tsuga, a coniferous tree sometimes called hemlock, hemlock fir, or hemlock spruce, named for a slight similarity in leaf smell. In US English, the shorthand "hemlock" more often refers to this tree than to the plant it is actually named after. Similarly, the plant should not be confused with Cicuta (commonly known as water hemlock).
Taxonomy
The genus name Conium comes from the Ancient Greek κώνειον – kṓneion: "hemlock". This may be related to konas (meaning 'to whirl'), alluding to vertigo, one of the dizzying symptoms of ingesting the plant's poison. In the vernacular, "hemlock" most commonly refers to the species C. maculatum.
C. maculatum, also known as poison hemlock, was the first species within the genus to be described. It was first formally described by Carl Linnaeus in his 1753 publication Species Plantarum. Maculatum means 'spotted', in reference to the purple blotches characteristic of the stalks of the species.
Names
In British and Australian English the most prominent vernacular name is hemlock. In American English it is typically called poison hemlock, though this name is also used elsewhere. Less frequent names used in both America and Australia include spotted hemlock and poison parsley. Other local or infrequent names in the US include bunk, California-fern, cashes, herb-bonnet, kill-cow, Nebraska-fern, poisonroot, poison-snakeweed, St. Bennet's-herb, snakeweed, stinkweed, and wode-whistle. In Australia it is occasionally called wild carrot or wild parsnip. In Hiberno English it may be called devil's bread or devil's porridge.
Distribution and habitat
The hemlock plant is native to Europe and the Mediterranean region.
It exists in some woodland (and elsewhere) in most counties of the British Isles; in Ulster it occurs particularly in County Down, County Antrim and County Londonderry.
It has become naturalised in Asia, North America, Australia and New Zealand. It is sometimes encountered around rivers in southeast Australia and Tasmania. Infestations and human contact with the plant are sometimes newsworthy events in the U.S. due to its extreme toxicity.
Ecology
The plant is often found in poorly drained soil, particularly near streams, ditches, and other watery surfaces. It also appears on roadsides, edges of cultivated fields, and waste areas. Conium maculatum grows in quite damp soil, but also on drier rough grassland, roadsides and disturbed ground. It is used as a food plant by the larvae of some lepidoptera, including silver-ground carpet moths and particularly the poison hemlock moth (Agonopterix alstroemeriana). The latter has been widely used as a biological control agent for the plant. Hemlock grows in the spring, when much undergrowth is not in flower and may not be in leaf. All parts of the plant are poisonous.
Toxicity
Hemlock contains coniine and some similar poisonous alkaloids, and is poisonous to all mammals (and many other organisms) that eat it. Intoxication has been reported in cattle, pigs, sheep, goats, donkeys, rabbits, and horses. Ingesting more than 150–300 milligrams of coniine, approximately equivalent to six to eight hemlock leaves, can be fatal for adult humans. The seeds and roots are more toxic than the leaves. Farmers also need to ensure that the hay fed to their animals does not contain hemlock. Hemlock is most poisonous in the spring when the concentration of γ-coniceine (the precursor to other toxins) is at its peak.
Alkaloids
C. maculatum is known for being extremely poisonous. Its tissues contain a number of different alkaloids. In flower buds, the major alkaloid found is γ-coniceine. This molecule is transformed into coniine later during the fruit development. The alkaloids are volatile; as such, researchers assume that these alkaloids play an important role in attracting pollinators such as butterflies and bees.
Conium contains the piperidine alkaloids coniine, N-methylconiine, conhydrine, and gamma-coniceine (or g-coniceïne), which is the precursor of the other hemlock alkaloids.
Coniine has pharmacological properties and a chemical structure similar to nicotine. Coniine acts directly on the central nervous system through inhibitory action on nicotinic acetylcholine receptors. Coniine can be dangerous to humans and livestock. With its high potency, the ingestion of seemingly small doses can easily result in respiratory collapse and death.
The alkaloid content in C. maculatum also affects the thermoregulatory centre through peripheral vasoconstriction, resulting in hypothermia in calves. In addition, the alkaloid content was also found to stimulate the sympathetic ganglia and reduce the influence of the parasympathetic ganglia in rats and rabbits, causing an increased heart rate.
Coniine also has significant toxic effects on the kidneys. The presence of rhabdomyolysis and acute tubular necrosis has been shown in patients who died from hemlock poisoning. A fraction of these patients were also found to have acute kidney injury. Coniine is toxic for the kidneys because it leads to the constriction of the urinary bladder sphincter and eventually the accumulation of urine.
Toxicology
A short time after ingestion, the alkaloids induce potentially fatal neuromuscular dysfunction due to failure of the respiratory muscles. Acute toxicity, if not lethal, may resolve in spontaneous recovery, provided further exposure is avoided. Death can be prevented by artificial ventilation until the effects have worn off 48–72 hours later. For an adult, the ingestion of more than 100 mg (0.1 gram) of coniine (about six to eight fresh leaves, or a smaller dose of the seeds or root) may be fatal. Narcotic-like effects can be observed as soon as 30 minutes after ingestion of green leaf matter of the plant, with victims falling asleep and unconsciousness gradually deepening until death a few hours later.
The onset of symptoms is similar to that caused by curare, with an ascending muscular paralysis leading to paralysis of the respiratory muscles, causing death from oxygen deprivation.
It has been observed that poisoned animals return to feed on the plant after initial poisoning. Chronic toxicity affects only pregnant animals when they are poisoned at low levels by C. maculatum during the fetus' organ-formation period; in such cases the offspring is born with malformations, mainly palatoschisis and multiple congenital contractures (arthrogryposis). The damage to the fetus due to chronic toxicity is irreversible. Though arthrogryposis may be surgically corrected in some cases, most of the malformed animals die.
Such losses may be underestimated, at least in some regions, because of the difficulty in associating malformations with the much earlier maternal poisoning.
Since no specific antidote is available, prevention is the only way to deal with the production losses caused by the plant. Control with herbicides and grazing with less-susceptible animals (such as sheep) have been suggested. It is a common myth that C. maculatum alkaloids can enter the human food chain via milk and fowl, and scientific studies have disproven these claims.
Culture
In ancient Greece, hemlock was used to poison condemned prisoners. Conium maculatum is the plant that killed Theramenes, Socrates, Polemarchus, and Phocion. Socrates, the most famous victim of hemlock poisoning, was accused of impiety and corrupting the minds of the young men of Athens in 399 BC; his trial handed down a death sentence, which was carried out by his drinking a potent infusion of hemlock.
| Biology and health sciences | Apiales | Plants |
6341469 | https://en.wikipedia.org/wiki/Pedophilia | Pedophilia | Pedophilia (alternatively spelled paedophilia) is a psychiatric disorder in which an adult or older adolescent experiences a primary or exclusive sexual attraction to prepubescent children. Although girls typically begin the process of puberty at age 10 or 11, and boys at age 11 or 12, psychiatric diagnostic criteria for pedophilia extend the cut-off point for prepubescence to age 13. People with the disorder are often referred to as pedophiles (or paedophiles).
Pedophilia is a paraphilia. In recent versions of formal diagnostic coding systems such as the DSM-5 and ICD-11, "pedophilia" is distinguished from "pedophilic disorder." Pedophilic disorder is defined as a pattern of pedophilic arousal accompanied by either subjective distress or interpersonal difficulty, or having acted on that arousal. The DSM-5 requires that a person must be at least 16 years old, and at least five years older than the prepubescent child or children they are aroused by, for the attraction to be diagnosed as pedophilic disorder. Similarly, the ICD-11 excludes sexual behavior among post-pubertal children who are close in age. The DSM requires the arousal pattern must be present for 6 months or longer, while the ICD lacks this requirement. The ICD criteria also refrain from specifying chronological ages.
In popular usage, the word pedophilia is often applied to any sexual interest in children or the act of child sexual abuse, including any sexual interest in minors below the local age of consent or age of adulthood, regardless of their level of physical or mental development. This use conflates the sexual attraction to prepubescent children with the act of child sexual abuse and fails to distinguish between attraction to prepubescent and pubescent or post-pubescent minors. Although some people who commit child sexual abuse are pedophiles, child sexual abuse offenders are not pedophiles unless they have a primary or exclusive sexual interest in prepubescent children, and many pedophiles do not molest children.
Pedophilia was first formally recognized and named in the late 19th century. A significant amount of research in the area has taken place since the 1980s. Although mostly documented in men, there are also women who exhibit the disorder, and researchers assume available estimates underrepresent the true number of female pedophiles. No cure for pedophilia has been developed, but there are therapies that can reduce the incidence of a person committing child sexual abuse. The exact causes of pedophilia have not been conclusively established. Some studies of pedophilia in child sex offenders have correlated it with various neurological abnormalities and psychological pathologies.
Etymology and definitions
The word pedophilia comes from the Greek (paîs, paidós), meaning , and (philía), or . The term (in German) started being used in the 1830s among researchers of pederasty in Ancient Greece. It was further used in the field of forensics after the 1890s, following Richard von Krafft-Ebing's coinage of the term paedophilia erotica in the 1896 edition of Psychopathia Sexualis. Krafft-Ebing was the first researcher to use the term pedophilia to refer to a pattern of sexual attraction toward children who had not yet reached puberty, excluding pubescent minors from the pedophilic age range. In 1895, the English word pedophily was used as a translation of the German word pädophilie.
The term pedophilia was rarely used before 1945, but started appearing in medical records after 1950. From the 1950s through the 1980s, the word pedophilia was increasingly used by the popular media.
Infantophilia (or nepiophilia) is a sub-type of pedophilia; it is used to refer to a sexual preference for children under the age of 5 (especially infants and toddlers). This is sometimes referred to as nepiophilia (from the Greek (népios) meaning or , which in turn derives from ne- and epos meaning ), though this term is rarely used in academic sources. Hebephilia is defined as a primary or exclusive sexual interest in 11- to 14-year-old pubescents. The DSM-5 does not list hebephilia among the diagnoses. While evidence suggests that hebephilia is separate from pedophilia, the ICD-10 includes early pubertal age (an aspect of hebephilia) in its pedophilia definition, covering the physical development overlap between the two philias. In addition to hebephilia, some clinicians have proposed other categories that are somewhat or completely distinguished from pedophilia; these include pedohebephilia (a combination of pedophilia and hebephilia) and ephebophilia (though ephebophilia is not considered pathological).
Signs and symptoms
Development
Pedophilia emerges before or during puberty, and is stable over time. It is self-discovered, not chosen. For these reasons, pedophilia has been described as a disorder of sexual preference, phenomenologically similar to a heterosexual or homosexual orientation. These observations, however, do not exclude pedophilia from being classified as a mental disorder since pedophilic acts cause harm, and mental health professionals can sometimes help pedophiles to refrain from harming children.
In response to misinterpretations that the American Psychiatric Association considers pedophilia a sexual orientation because of wording in its printed DSM-5 manual, which distinguishes between paraphilia and what it calls "paraphilic disorder", subsequently forming a division of "pedophilia" and "pedophilic disorder", the association commented: "'[S]exual orientation' is not a term used in the diagnostic criteria for pedophilic disorder and its use in the DSM-5 text discussion is an error and should read 'sexual interest.'" They added, "In fact, APA considers pedophilic disorder a 'paraphilia,' not a 'sexual orientation.' This error will be corrected in the electronic version of DSM-5 and the next printing of the manual." They said they strongly support efforts to criminally prosecute those who sexually abuse and exploit children and adolescents, and "also support continued efforts to develop treatments for those with pedophilic disorder with the goal of preventing future acts of abuse."
Comorbidity and personality traits
Studies of pedophilia in child sex offenders often report that it co-occurs with other psychopathologies, such as low self-esteem, depression, anxiety, and personality problems. It is not clear whether these are features of the disorder itself, artifacts of sampling bias, or consequences of being identified as a sex offender. One review of the literature concluded that research on personality correlates and psychopathology in pedophiles is rarely methodologically correct, in part owing to confusion between pedophiles and child sex offenders, as well as the difficulty of obtaining a representative, community sample of pedophiles. Seto (2004) points out that pedophiles who are available from a clinical setting are likely there because of distress over their sexual preference or pressure from others. This increases the likelihood that they will show psychological problems. Similarly, pedophiles recruited from a correctional setting have been convicted of a crime, making it more likely that they will show anti-social characteristics.
Impaired self-concept and interpersonal functioning were reported in a sample of child sex offenders who met the diagnostic criteria for pedophilia by Cohen et al. (2002), which the authors suggested could contribute to motivation for pedophilic acts. The pedophilic offenders in the study had elevated psychopathy and cognitive distortions compared to healthy community controls. This was interpreted as underlying their failure to inhibit their criminal behavior. Studies in 2009 and 2012 found that non-pedophilic child sex offenders exhibited psychopathy, but pedophiles did not.
Wilson and Cox (1983) studied the characteristics of a group of pedophile club members. The most marked differences between pedophiles and controls were on the introversion scale, with pedophiles showing elevated shyness, sensitivity and depression. The pedophiles scored higher on neuroticism and psychoticism, but not enough to be considered pathological as a group. The authors caution that "there is a difficulty in untangling cause and effect. We cannot tell whether paedophiles gravitate towards children because, being highly introverted, they find the company of children less threatening than that of adults, or whether the social withdrawal implied by their introversion is a result of the isolation engendered by their preference i.e., awareness of the social [dis]approbation and hostility that it evokes" (p. 324). In a non-clinical survey, 46% of pedophiles reported that they had seriously considered suicide for reasons related to their sexual interest, 32% planned to carry it out, and 13% had already attempted it.
A review of qualitative research studies published between 1982 and 2001 concluded that child sexual abusers use cognitive distortions to meet personal needs, justifying abuse by making excuses, redefining their actions as love and mutuality, and exploiting the power imbalance inherent in all adult–child relationships. Other cognitive distortions include the idea of "children as sexual beings", uncontrollability of sexual behavior, and "sexual entitlement-bias".
Child pornography
Consumption of child pornography is a more reliable indicator of pedophilia than molesting a child, although some non-pedophiles also view child pornography. Child pornography may be used for a variety of purposes, ranging from private sexual gratification or trading with other collectors, to preparing children for sexual abuse as part of the child grooming process.
Pedophilic viewers of child pornography are often obsessive about collecting, organizing, categorizing, and labeling their child pornography collection according to age, gender, sex act and fantasy. According to FBI agent Ken Lanning, "collecting" pornography does not mean that they merely view pornography, but that they save it, and "it comes to define, fuel, and validate their most cherished sexual fantasies". Lanning states that the collection is the single best indicator of what the offender wants to do, but not necessarily of what has been or will be done. Researchers Taylor and Quayle reported that pedophilic collectors of child pornography are often involved in anonymous internet communities dedicated to extending their collections.
Causes
Although what causes pedophilia is not yet known, researchers began reporting a series of findings linking pedophilia with brain structure and function, beginning in 2002. Testing individuals from a variety of referral sources inside and outside the criminal justice system as well as controls, these studies found associations between pedophilia and lower IQs, poorer scores on memory tests, greater rates of non-right-handedness, greater rates of school grade failure over and above the IQ differences, being below average height, greater probability of having had childhood head injuries resulting in unconsciousness, and several differences in MRI-detected brain structures.
Such studies suggest that there are one or more neurological characteristics present at birth that cause or increase the likelihood of being pedophilic. Some studies have found that pedophiles are less cognitively impaired than non-pedophilic child molesters. A 2011 study reported that pedophilic child molesters had deficits in response inhibition, but no deficits in memory or cognitive flexibility. Evidence of familial transmittability "suggests, but does not prove that genetic factors are responsible" for the development of pedophilia. A 2015 study indicated that pedophilic offenders have a normal IQ.
Another study, using structural MRI, indicated that male pedophiles have a lower volume of white matter than a control group. Functional magnetic resonance imaging (fMRI) has indicated that child molesters diagnosed with pedophilia have reduced activation of the hypothalamus as compared with non-pedophilic persons when viewing sexually arousing pictures of adults. A 2008 functional neuroimaging study notes that central processing of sexual stimuli in heterosexual "paedophile forensic inpatients" may be altered by a disturbance in the prefrontal networks, which "may be associated with stimulus-controlled behaviours, such as sexual compulsive behaviours". The findings may also suggest "a dysfunction at the cognitive stage of sexual arousal processing".
Blanchard, Cantor, and Robichaud (2006) reviewed the research that attempted to identify hormonal aspects of pedophiles. They concluded that there is some evidence that pedophilic men have less testosterone than controls, but that the research is of poor quality and that it is difficult to draw any firm conclusion from it.
While not causes of pedophilia themselves, childhood abuse by adults or comorbid psychiatric illnesses—such as personality disorders and substance abuse—are risk factors for acting on pedophilic urges. Blanchard, Cantor, and Robichaud addressed comorbid psychiatric illnesses that, "The theoretical implications are not so clear. Do particular genes or noxious factors in the prenatal environment predispose a male to develop both affective disorders and pedophilia, or do the frustration, danger, and isolation engendered by unacceptable sexual desires—or their occasional furtive satisfaction—lead to anxiety and despair?" They indicated that, because they previously found mothers of pedophiles to be more likely to have undergone psychiatric treatment, the genetic possibility is more likely.
A study analyzing the sexual fantasies of 200 heterosexual men by using the Wilson Sex Fantasy Questionnaire exam determined that males with a pronounced degree of paraphilic interest (including pedophilia) had a greater number of older brothers, a high 2D:4D digit ratio (which would indicate low prenatal androgen exposure), and an elevated probability of being left-handed, suggesting that disturbed hemispheric brain lateralization may play a role in deviant attractions.
Diagnosis
DSM and ICD-11
The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR) states, "The diagnostic criteria for pedophilic disorder are intended to apply both to individuals who freely disclose this paraphilia and to individuals who deny any sexual attraction to prepubertal children (generally age 13 years or younger), despite substantial objective evidence to the contrary." The manual outlines specific criteria for use in the diagnosis of this disorder. These include the presence of sexually arousing fantasies, behaviors or urges that involve some kind of sexual activity with a prepubescent child (with the diagnostic criteria for the disorder extending the cut-off point for prepubescence to age 13) for six months or more, or that the subject has acted on these urges or is distressed as a result of having these feelings. The criteria also indicate that the subject should be 16 or older and that the child or children they fantasize about are at least five years younger than them, though ongoing sexual relationships between a 12- to 13-year-old and a late adolescent are advised to be excluded. A diagnosis is further specified by the sex of the children the person is attracted to, if the impulses or acts are limited to incest, and if the attraction is "exclusive" or "nonexclusive".
The ICD-11 defines pedophilic disorder as a "sustained, focused, and intense pattern of sexual arousal—as manifested by persistent sexual thoughts, fantasies, urges, or behaviours—involving pre-pubertal children." It also states that for a diagnosis of pedophilic disorder, "the individual must have acted on these thoughts, fantasies or urges or be markedly distressed by them. This diagnosis does not apply to sexual behaviours among pre- or post-pubertal children with peers who are close in age."
Several terms have been used to distinguish "true pedophiles" from non-pedophilic and non-exclusive offenders, or to distinguish among types of offenders on a continuum according to strength and exclusivity of pedophilic interest, and motivation for the offense (see child sexual offender types). Exclusive pedophiles are sometimes referred to as true pedophiles. They are sexually attracted to prepubescent children, and only prepubescent children. Showing no erotic interest in adults, they can only become sexually aroused while fantasizing about or being in the presence of prepubescent children, or both. Non-exclusive offenders—or "non-exclusive pedophiles"—may at times be referred to as non-pedophilic offenders, but the two terms are not always synonymous. Non-exclusive offenders are sexually attracted to both children and adults, and can be sexually aroused by both, though a sexual preference for one over the other in this case may also exist. If the attraction is a sexual preference for prepubescent children, such offenders are considered pedophiles in the same vein as exclusive offenders.
Neither the DSM nor the ICD-11 diagnostic criteria require actual sexual activity with a prepubescent youth. The diagnosis can therefore be made based on the presence of fantasies or sexual urges even if they have never been acted upon. On the other hand, a person who acts upon these urges yet experiences no distress about their fantasies or urges can also qualify for the diagnosis. Acting on sexual urges is not limited to overt sex acts for purposes of this diagnosis, and can sometimes include indecent exposure, voyeuristic or frotteuristic behaviors. The ICD-11 also considers planning or seeking to engage in these behaviors, as well as the use of child pornography, to be evidence of the diagnosis. However the DSM-5-TR, in a change from the prior edition, excludes the use of child pornography alone as meeting the criteria for "acting on sexual urges." This change is controversial due to being made for legal reasons rather than scientific. According to forensic psychologist Michael C. Seto, who was part of the DSM-5-TR workgroup, the removal of child pornography use alone was to avoid diagnosing criminal defendants convicted of child pornography offenses, but no in-person offenses, with pedophilic disorder, as this could potentially lead to such defendants being committed to mental institutions under sexually violent predator laws. Seto, who has published several research studies on pedophilia and its relationship with child pornography, objected to this reasoning by the APA, as it would only apply to a tiny minority of commitments, as well as deny help-seeking pedophiles access to clinical care due to not having an official diagnosis for insurance purposes.
In practice, the patient's behaviors need to be considered in-context with an element of clinical judgment before a diagnosis is made. Likewise, when the patient is in late adolescence, the age difference is not specified in hard numbers and instead requires careful consideration of the situation.
Debate regarding criteria
There has been debate over whether the DSM-IV-TR criteria were overinclusive or underinclusive. Its criterion A concerns sexual fantasies or sexual urges regarding prepubescent children, and its criterion B concerns acting on those urges or the urges causing marked distress or interpersonal difficulty. Several researchers discussed whether or not a "contented pedophile"—an individual who fantasizes about having sex with a child and masturbates to these fantasies, but does not commit child sexual abuse, and who does not feel subjectively distressed afterward—met the DSM-IV-TR criteria for pedophilia, since this person did not meet criterion B. Criticism also concerned someone who met criterion B, but did not meet criterion A. A large-scale survey about usage of different classification systems showed that the DSM classification is only rarely used. As an explanation, it was suggested that the underinclusiveness, as well as a lack of validity, reliability and clarity, might have led to the rejection of the DSM classification.
Ray Blanchard, an American-Canadian sexologist known for his research studies on pedophilia, addressed (in his literature review for the DSM-5) the objections to the overinclusiveness and underinclusiveness of the DSM-IV-TR, and proposed a general solution applicable to all paraphilias, namely a distinction between paraphilia and paraphilic disorder. The latter term is proposed to identify the diagnosable mental disorder which meets criteria A and B, whereas an individual who does not meet criterion B can be ascertained as having a paraphilia but not diagnosed with a disorder. Blanchard and a number of his colleagues also proposed that hebephilia become a diagnosable mental disorder under the DSM-5 to resolve the physical development overlap between pedophilia and hebephilia by combining the categories under pedophilic disorder, but with specifiers for which age range (or both) is the primary interest. The proposal for hebephilia was rejected by the American Psychiatric Association, but the distinction between paraphilia and paraphilic disorder was implemented.
The American Psychiatric Association stated that "[i]n the case of pedophilic disorder, the notable detail is what wasn't revised in the new manual. Although proposals were discussed throughout the DSM-5 development process, diagnostic criteria ultimately remained the same as in DSM-IV TR" and that "[o]nly the disorder name will be changed from pedophilia to pedophilic disorder to maintain consistency with the chapter's other listings." If hebephilia had been accepted as a DSM-5 diagnosable disorder, it would have been similar to the ICD-10 definition of pedophilia that already includes early pubescents, and would have raised the minimum age required for a person to be able to be diagnosed with pedophilia from 16 years to 18 years (with the individual needing to be at least 5 years older than the minor).
O'Donohue, however, suggests that the diagnostic criteria for pedophilia be simplified to the attraction to children alone if ascertained by self-report, laboratory findings, or past behavior. He states that any sexual attraction to children is pathological and that distress is irrelevant, noting "this sexual attraction has the potential to cause significant harm to others and is also not in the best interests of the individual." Also arguing for behavioral criteria in defining pedophilia, Howard E. Barbaree and Michael C. Seto disagreed with the American Psychiatric Association's approach in 1997 and instead recommended the use of actions as the sole criterion for the diagnosis of pedophilia, as a means of taxonomic simplification.
Treatment
There is no evidence that pedophilia can be cured. Instead, most therapies focus on helping pedophiles refrain from acting on their desires. Some therapies do attempt to cure pedophilia, but there are no studies showing that they result in a long-term change in sexual preference. Michael Seto suggests that attempts to cure pedophilia in adulthood are unlikely to succeed because its development is influenced by prenatal factors. Pedophilia appears to be difficult to alter but pedophiles can be helped to control their behavior, and future research could develop a method of prevention.
There are several common limitations to studies of treatment effectiveness. Most categorize their participants by behavior rather than erotic age preference, which makes it difficult to know the specific treatment outcome for pedophiles. Many do not select their treatment and control groups randomly. Offenders who refuse or quit treatment are at higher risk of offending, so excluding them from the treated group, while not excluding those who would have refused or quit from the control group, can bias the treated group in favor of those with lower recidivism. The effectiveness of treatment for non-offending pedophiles has not been studied.
For child molesters
Cognitive behavioral therapy
Cognitive behavioral therapy (CBT) aims to reduce attitudes, beliefs, and behaviors that may increase the likelihood of sexual offenses against children. Its content varies widely between therapists, but a typical program might involve training in self-control, social competence and empathy, and use cognitive restructuring to change views on sex with children. The most common form of this therapy is relapse prevention, where the patient is taught to identify and respond to potentially risky situations based on principles used for treating addictions.
The evidence for cognitive behavioral therapy is mixed. A 2012 Cochrane Review of randomized trials found that CBT had no effect on risk of reoffending for contact sex offenders. Meta-analyses in 2002 and 2005, which included both randomized and non-randomized studies, concluded that CBT reduced recidivism. There is debate over whether non-randomized studies should be considered informative. More research is needed.
Behavioral interventions
Behavioral treatments target sexual arousal to children, using satiation and aversion techniques to suppress sexual arousal to children and covert sensitization (or masturbatory reconditioning) to increase sexual arousal to adults. Behavioral treatments appear to have an effect on sexual arousal patterns during phallometric testing, but it is not known whether the effect represents changes in sexual interests or changes in the ability to control genital arousal during testing, nor whether the effect persists in the long term. For sex offenders with mental disabilities, applied behavior analysis has been used.
Sex drive reduction
Pharmacological interventions are used to lower the sex drive in general, which can ease the management of pedophilic feelings, but does not change sexual preference. Antiandrogens work by interfering with the activity of testosterone. Cyproterone acetate (Androcur) and medroxyprogesterone acetate (Depo-Provera) are the most commonly used. The efficacy of antiandrogens has some support, but few high-quality studies exist. Cyproterone acetate has the strongest evidence for reducing sexual arousal, while findings on medroxyprogesterone acetate have been mixed.
Gonadotropin-releasing hormone analogs such as leuprorelin (Lupron), which last longer and have fewer side-effects, are also used to reduce libido, as are selective serotonin reuptake inhibitors. The evidence for these alternatives is more limited and mostly based on open trials and case studies. All of these treatments, commonly referred to as "chemical castration", are often used in conjunction with cognitive behavioral therapy. According to the Association for the Treatment of Sexual Abusers, when treating child molesters, "anti-androgen treatment should be coupled with appropriate monitoring and counseling within a comprehensive treatment plan." These drugs may have side-effects, such as weight gain, breast development, liver damage and osteoporosis.
Historically, surgical castration was used to lower sex drive by reducing testosterone. The emergence of pharmacological methods of adjusting testosterone has made it largely obsolete, because they are similarly effective and less invasive. It is still occasionally performed in Germany, the Czech Republic, Switzerland, and a few U.S. states. Non-randomized studies have reported that surgical castration reduces recidivism in contact sex offenders. The Association for the Treatment of Sexual Abusers opposes surgical castration and the Council of Europe works to bring the practice to an end in Eastern European countries where it is still applied through the courts.
Epidemiology
Pedophilia and child molestation
The prevalence of pedophilia in the general population is not known, but is estimated to be lower than 5% among adult men. Less is known about the prevalence of pedophilia in women, but there are case reports of women with strong sexual fantasies and urges towards children. Male perpetrators account for the vast majority of sexual crimes committed against children. Among convicted offenders, 0.4% to 4% are female, and one literature review estimates that the ratio of male-to-female child molesters is 10 to 1. The true number of female child molesters may be underrepresented by available estimates, for reasons including a "societal tendency to dismiss the negative impact of sexual relationships between young boys and adult women, as well as women's greater access to very young children who cannot report their abuse", among other explanations.
The term pedophile is commonly used by the public to describe all child sexual abuse offenders. This usage is considered problematic by researchers, because many child molesters do not have a strong sexual interest in prepubescent children, and are consequently not pedophiles. There are motives for child sexual abuse that are unrelated to pedophilia, such as stress, marital problems, the unavailability of an adult partner, general anti-social tendencies, high sex drive or alcohol use. As child sexual abuse is not automatically an indicator that its perpetrator is a pedophile, offenders can be separated into two types: pedophilic and non-pedophilic (or preferential and situational). Estimates for the rate of pedophilia in detected child molesters generally range between 25% and 50%. A 2006 study found that 35% of its sample of child molesters were pedophilic. Pedophilia appears to be less common in incest offenders, especially fathers and step-fathers. According to a U.S. study on 2429 adult male sex offenders who were categorized as "pedophiles", only 7% identified themselves as exclusive, indicating that many or most child sexual abusers may fall into the non-exclusive category.
Some pedophiles do not molest children. Little is known about this population because most studies of pedophilia use criminal or clinical samples, which may not be representative of pedophiles in general. Researcher Michael Seto suggests that pedophiles who commit child sexual abuse do so because of other anti-social traits in addition to their sexual attraction. He states that pedophiles who are "reflective, sensitive to the feelings of others, averse to risk, abstain from alcohol or drug use, and endorse attitudes and beliefs supportive of norms and the laws" may be unlikely to abuse children. A 2015 study indicates that pedophiles who molested children are neurologically distinct from non-offending pedophiles. The pedophilic molesters had neurological deficits suggestive of disruptions in inhibitory regions of the brain, while non-offending pedophiles had no such deficits.
According to Abel, Mittleman, and Becker (1985) and Ward et al. (1995), there are generally large distinctions between the characteristics of pedophilic and non-pedophilic molesters. They state that non-pedophilic offenders tend to offend at times of stress; have a later onset of offending; and have fewer, often familial, victims, while pedophilic offenders often start offending at an early age; often have a larger number of victims who are frequently extrafamilial; are more inwardly driven to offend; and have values or beliefs that strongly support an offense lifestyle. One study found that pedophilic molesters had a median of 1.3 victims for those with girl victims and 4.4 for those with boy victims. Child molesters, pedophilic or not, employ a variety of methods to gain sexual access to children. Some groom their victims into compliance with attention and gifts, while others use threats, alcohol or drugs, or physical force.
History
Pedophilia is believed to have occurred in humans throughout history. The term paedophilie (in German) has been used since the late 1830s by researchers of pederasty in ancient Greece. The term "paedophilia erotica" was coined in an 1896 article by the Viennese psychiatrist Richard von Krafft-Ebing but does not enter the author's Psychopathia Sexualis until the 10th German edition. A number of authors anticipated Krafft-Ebing's diagnostic gesture. In Psychopathia Sexualis, the term appears in a section titled "Violation of Individuals Under the Age of Fourteen", which focuses on the forensic psychiatry aspect of child sexual offenders in general. Krafft-Ebing describes several typologies of offender, dividing them into psychopathological and non-psychopathological origins, and hypothesizes several apparent causal factors that may lead to the sexual abuse of children.
Krafft-Ebing mentioned paedophilia erotica in a typology of "psycho-sexual perversion". He wrote that he had only encountered it four times in his career and gave brief descriptions of each case, listing three common traits:
The individual is tainted [by heredity] (hereditär belastete).
The subject's primary attraction is to children, rather than adults.
The acts committed by the subject are typically not intercourse, but rather involve inappropriate touching or manipulating the child into performing an act on the subject.
He mentions several cases of pedophilia among adult women (provided by another physician), and also considered the abuse of boys by homosexual men to be extremely rare. Further clarifying this point, he indicated that cases of adult men who have some medical or neurological disorder and abuse a male child are not true pedophilia and that, in his observation, victims of such men tended to be older and pubescent. He also lists pseudopaedophilia as a related condition wherein "individuals who have lost libido for the adult through masturbation and subsequently turn to children for the gratification of their sexual appetite" and claimed this is much more common.
Austrian neurologist Sigmund Freud briefly wrote about the topic in his 1905 book Three Essays on the Theory of Sexuality, in a section titled The Sexually immature and Animals as Sexual objects. He wrote that exclusive pedophilia was rare and only occasionally were prepubescent children exclusive objects. He wrote that they usually were the subject of desire when a weak person "makes use of such substitutes" or when an uncontrollable instinct which will not allow delay seeks immediate gratification and cannot find a more appropriate object.
In 1908, Swiss neuroanatomist and psychiatrist Auguste Forel wrote of the phenomenon, proposing that it be referred to as "Pederosis", the "Sexual Appetite for Children". Similar to Krafft-Ebing's work, Forel made the distinction between incidental sexual abuse by persons with dementia and other organic brain conditions, and the truly preferential and sometimes exclusive sexual desire for children. However, he disagreed with Krafft-Ebing in that he felt the condition was largely ingrained and unchangeable.
The term pedophilia became the generally accepted term for the condition and saw widespread adoption in the early 20th century, appearing in many popular medical dictionaries such as the 5th Edition of Stedman's in 1918. In 1952, it was included in the first edition of the Diagnostic and Statistical Manual of Mental Disorders. This edition and the subsequent DSM-II listed the disorder as one subtype of the classification "Sexual Deviation", but no diagnostic criteria were provided. The DSM-III, published in 1980, contained a full description of the disorder and provided a set of guidelines for diagnosis. The revision in 1987, the DSM-III-R, kept the description largely the same, but updated and expanded the diagnostic criteria.
Law and forensic psychology
Definitions
Pedophilia is not a legal term, as having a sexual attraction to children without acting on it is not illegal. In law enforcement circles, the term pedophile is sometimes used informally to refer to any person who commits one or more sexually-based crimes that relate to legally underage victims. These crimes may include child sexual abuse, statutory rape, offenses involving child pornography, child grooming, stalking, and indecent exposure. One unit of the United Kingdom's Child Abuse Investigation Command is known as the "Paedophile Unit" and specializes in online investigations and enforcement work. Some forensic science texts, such as Holmes (2008), use the term to refer to offenders who target child victims, even when such children are not the primary sexual interest of the offender. FBI agent Kenneth Lanning, however, makes a point of distinguishing between pedophiles and child molesters.
Civil and legal commitment
In the United States, following Kansas v. Hendricks, sex offenders who have certain mental disorders, including pedophilia, can be subject to indefinite civil commitment under various state laws (generically called SVP laws) and the federal Adam Walsh Child Protection and Safety Act of 2006. Similar legislation exists in Canada.
In Kansas v. Hendricks, the US Supreme Court upheld as constitutional a Kansas law, the Sexually Violent Predator Act, under which Hendricks, a pedophile, was found to have a "mental abnormality" defined as a "congenital or acquired condition affecting the emotional or volitional capacity which predisposes the person to commit sexually violent offenses to the degree that such person is a menace to the health and safety of others", which allowed the State to confine Hendricks indefinitely irrespective of whether the State provided any treatment to him. In United States v. Comstock, this type of indefinite confinement was upheld for someone previously convicted on child pornography charges; this time a federal law was involved—the Adam Walsh Child Protection and Safety Act. The Walsh Act does not require a conviction on a sex offense charge, but only that the person be a federal prisoner, and one who "has engaged or attempted to engage in sexually violent conduct or child molestation and who is sexually dangerous to others", and who "would have serious difficulty in refraining from sexually violent conduct or child molestation if released".
In the US, offenders with pedophilia are more likely to be recommended for civil commitment than non-pedophilic offenders. About half of committed offenders have a diagnosis of pedophilia. Psychiatrist Michael First writes that, since not all people with a paraphilia have difficulty controlling their behavior, the evaluating clinician must present additional evidence of volitional impairment instead of recommending commitment based on pedophilia alone.
Society and culture
General
Pedophilia is one of the most stigmatized mental disorders. Among the public, common feelings include anger, fear and social rejection of pedophiles who have not committed a crime. Such attitudes could negatively impact child sexual abuse prevention by reducing pedophiles' mental stability and discouraging them from seeking help. According to sociologists Melanie-Angela Neuilly and Kristen Zgoba, social concern over pedophilia intensified greatly in the 1990s, coinciding with several sensational sex crimes (but a general decline in child sexual abuse rates). They found that the word pedophile appeared only rarely in The New York Times before 1996, with zero mentions in 1991.
Social attitudes towards child sexual abuse are extremely negative, with some surveys ranking it as morally worse than murder. Early research showed that there was a great deal of misunderstanding and unrealistic perceptions in the general public about child sexual abuse and pedophiles. A 2004 study concluded that the public was well-informed on some aspects of these subjects.
Misuse of medical terminology
The words pedophile and pedophilia are commonly used informally to describe an adult's sexual interest in pubescent or post-pubescent persons under the age of consent. The terms hebephilia or ephebophilia may be more accurate in these cases.
Another common usage of pedophilia is to refer to the act of sexual abuse itself, rather than the medical meaning, which is a preference for prepubescents on the part of the older individual (see above for an explanation of the distinction). There are also situations where the terms are misused to refer to relationships where the younger person is an adult of legal age, but is either considered too young in comparison to their older partner, or the older partner occupies a position of authority over them. Researchers state that the above uses of the term pedophilia are imprecise, or suggest that they are best avoided. Writing in Mayo Clinic Proceedings, Hall & Hall state that pedophilia "is not a criminal or legal term".
Pedophile advocacy groups
From the late 1950s to early 1990s, several pedophile membership organizations advocated age of consent reform to lower or abolish age of consent laws, as well as for the acceptance of pedophilia as a sexual orientation rather than a psychological disorder, and for the legalization of child pornography. The efforts of pedophile advocacy groups did not gain mainstream acceptance, and today those few groups that have not dissolved have only minimal membership and have ceased their activities other than through a few websites.
Non-offending pedophile support groups
In contrast to advocacy groups, there are pedophile support groups and organizations that do not support or condone sexual activities between adults and minors. Members of these groups have insight into their condition and understand the potential harm they could do, and so seek to avoid acting on their impulses.
Anti-pedophile activism
Anti-pedophile activism encompasses opposition against pedophiles, against pedophile advocacy groups, and against other phenomena that are seen as related to pedophilia, such as child pornography and child sexual abuse. Much of the direct action classified as anti-pedophile involves demonstrations against sex offenders, against pedophiles advocating for the legalization of sexual activity between adults and children, and against Internet users who solicit sex from minors.
High-profile media attention to pedophilia has led to incidents of moral panic, particularly following reports of pedophilia associated with Satanic ritual abuse and day care sex abuse. Instances of vigilantism have also been reported in response to public attention on convicted or suspected child sex offenders. In 2000, following a media campaign of "naming and shaming" suspected pedophiles in the UK, hundreds of residents took to the streets in protest against suspected pedophiles, eventually escalating to violent conduct requiring police intervention.
Earthquake early warning system

An earthquake warning system or earthquake alarm system is a system of accelerometers, seismometers, communication, computers, and alarms that is devised for rapidly notifying adjoining regions of a substantial earthquake once one begins. This is not the same as earthquake prediction, which is currently not capable of producing decisive event warnings.
Time lag and wave projection
An earthquake is caused by the release of stored elastic strain energy during rapid sliding along a fault. The sliding starts at some location and progresses away from the hypocenter in each direction along the fault surface. The speed of the progression of this fault tear is slower than, and distinct from, the speed of the resultant pressure and shear waves, with the pressure wave traveling faster than the shear wave. The pressure waves are always smaller in amplitude than the damaging shear waves, which are the most destructive to structures, particularly buildings that have a resonant period similar to those of the radiated waves. Typically, these buildings are around eight floors in height. These waves will be strongest at the ends of the slippage, and may project destructive waves well beyond the fault failure. The intensity of such remote effects is highly dependent upon local soil conditions within the region, and these effects are considered in constructing a model of the region that determines appropriate responses to specific events.
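The usable warning time is essentially the gap between the arrival of the faster pressure (P) wave and the slower, more damaging shear (S) wave, minus whatever time the network needs to detect and broadcast. A rough illustration of that gap, using assumed typical crustal velocities and an assumed processing delay (the numbers are illustrative, not those of any deployed system):

```python
# Illustrative warning-time calculation for an earthquake early warning
# system.  Velocities and processing delay are assumed typical values,
# not parameters of any real network.

VP_KM_S = 6.0  # assumed P-wave (pressure wave) speed in crustal rock
VS_KM_S = 3.5  # assumed S-wave (shear wave) speed

def warning_time(distance_km: float, processing_delay_s: float = 3.0) -> float:
    """Seconds of warning available at `distance_km` from the hypocenter,
    after subtracting an assumed detection/processing delay."""
    p_arrival = distance_km / VP_KM_S
    s_arrival = distance_km / VS_KM_S
    return max(0.0, (s_arrival - p_arrival) - processing_delay_s)

for d in (20, 50, 100, 200):
    print(f"{d:>4} km from hypocenter: ~{warning_time(d):.1f} s of warning")
```

The pattern the sketch makes visible is the system's central limitation: locations very close to the hypocenter (the "blind zone") get little or no warning, while the warning time grows roughly linearly with distance.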
Transit safety
Such systems are currently implemented to determine appropriate real-time response to an event by the train operator in urban rail systems such as BART (Bay Area Rapid Transit) and LA Metro. The appropriate response is dependent on the warning time, the local right-of-way conditions and the current speed of the train.
Deployment
As of 2024, China, Japan, Taiwan, South Korea and Israel have comprehensive, nationwide earthquake early warning systems that notify people in the affected areas via Wireless Emergency Alerts (WEA), TV alerts, radio announcements or civil defense sirens.
Countries such as Mexico, the United States and Canada have regional earthquake warning systems and notify people using the same technologies mentioned earlier. In particular, the Mexican Seismic Alert System, which covers areas of central and southern Mexico including Mexico City and Oaxaca, uses civil defense sirens, while ShakeAlert, which covers California, Oregon and Washington in the US and British Columbia in Canada, uses WEA.
Countries such as Guatemala, El Salvador, Nicaragua, Costa Rica and Romania have deployed systems that alert only specific users through the download of applications, while systems are being tested in Italy, France, Turkey, Switzerland, Chile, Peru and Indonesia.
The earliest automated earthquake pre-detection systems were installed in the 1990s; for instance, in California, the Calistoga fire station's system which automatically triggers a citywide siren to alert the entire area's residents of an earthquake. Some California fire departments use their warning systems to automatically open overhead doors of fire stations before the earthquake can disable them. While many of these efforts are governmental, several private companies also manufacture earthquake early warning systems to protect infrastructure such as elevators, gas lines and fire stations.
Canada
In 2009, an early warning system called ShakeAlarm was installed and commissioned in Vancouver, British Columbia, Canada. It was placed to protect a piece of critical transportation infrastructure called the George Massey Tunnel, which connects the north and south banks of the Fraser River. In this application the system automatically closes the gates at the tunnel entrances if there is a dangerous seismic event inbound. The success and the reliability of the system was such that as of 2015 there have been several additional installations on the west coast of Canada and the United States, and there are more being planned.
On August 29, 2024, the Canadian Earthquake Early Warning system was launched in British Columbia by Natural Resources Canada (NRCan), and is expected to be expanded to southern Quebec and eastern Ontario later in 2024. Alerts generated by this system are delivered to the public via the country's National Public Alerting System. The early warning system was developed in cooperation with the United States Geological Survey (USGS) and is based upon USGS's ShakeAlert system. While the two systems are distinct, USGS and NRCan share processing software, algorithms and real-time data.
China
The earliest earthquake warning system in China was built in the 1990s. The devastation of the 2008 Sichuan earthquake stimulated China's investment in a nationwide earthquake early warning system (EEWS). Large numbers of monitoring stations, sensors, and analytic systems were installed to improve the accuracy, responsiveness, and comprehensiveness of the earthquake data. In June 2019, the Chengdu Hi-Tech Disaster Reduction Institution, part of the national EEWS, successfully warned various townships of a magnitude 6.0 earthquake 10–27 seconds before the shockwave arrived. In 2023, the China Earthquake Administration announced that the national EEWS was completed, with 150,000 monitoring stations managed by three national centers, 31 provincial centers, and 173 prefectural and municipal centers. It is the largest seismic network of its kind in the world.
Japan
Japan's Earthquake Early Warning system was put to practical use in 2006. The system that warns the general public was installed on October 1, 2007. It was modeled partly on the Urgent Earthquake Detection and Alarm System of Japan Railways, which was designed to enable automatic braking of bullet trains.
Gravimetric data from the 2011 Tōhoku earthquake has been used to create a model for increased warning time compared to seismic models, as gravity fields travel at the speed of light, much faster than seismic waves.
Mexico
The Mexican Seismic Alert System, otherwise known as SASMEX, began operations in 1991 and began publicly issuing alerts in 1993. It is funded by the Mexico City government, with financial contributions from several states that receive the alert. Initially serving Mexico City with twelve sensors, the system now has 97 sensors and is designed to protect life and property in several central and southern Mexican states.
United States
The United States Geological Survey (USGS) began research and development of an early warning system for the West Coast of the United States in August 2006, and the system became demonstrable in August 2009. Following various developmental phases, version 2.0 went live during the fall of 2018, allowing the "sufficiently functional and tested" system to begin Phase 1 of alerting California, Oregon and Washington.
Even though ShakeAlert could alert the public beginning September 28, 2018, the messages themselves could not be distributed until the various private and public distribution partners had completed mobile apps and made changes to various emergency alerting systems. The first publicly available alerting system was the ShakeAlertLA app, released on New Year's Eve 2018 (although it only alerted for shaking in the Los Angeles area). On October 17, 2019, Cal OES announced a statewide rollout of the alert distribution system in California, using mobile apps and the Wireless Emergency Alerts (WEA) system. California refers to their system as the California Earthquake Early Warning System. A statewide alert distribution system was rolled out in Oregon on March 11, 2021 and in Washington on May 4, 2021, completing the alert system for the West Coast.
Global systems
Earthquake Network
In January 2013, Francesco Finazzi of the University of Bergamo started the Earthquake Network research project, which aims at developing and maintaining a crowdsourced earthquake warning system based on smartphone networks. Smartphones are used to detect the ground shaking induced by an earthquake, and a warning is issued as soon as an earthquake is detected. People living at a further distance from the epicenter and the detection point may be alerted before they are reached by the damaging waves of the earthquake. People can take part in the project by installing the Android application "Earthquake Network" on their smartphones; the app must be installed for a phone to receive the alerts.
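On the client side, smartphone-based detection amounts to watching the accelerometer for motion that clearly exceeds the sensor's resting signal (gravity plus noise). A minimal sketch of such a detector is below; the threshold and the sample format are illustrative assumptions, not the actual Earthquake Network algorithm:

```python
# Minimal sketch of client-side shaking detection of the kind a
# crowdsourced earthquake app might run.  The 0.02 g threshold is an
# illustrative assumption, not any project's real tuning.
import math

G = 9.81  # standard gravity, m/s^2

def exceeds_threshold(samples, threshold_g=0.02):
    """Return True if any (x, y, z) accelerometer sample (in m/s^2)
    deviates from rest (|a| == g) by more than `threshold_g` g."""
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if abs(magnitude - G) > threshold_g * G:
            return True
    return False

quiet = [(0.0, 0.0, 9.81)] * 50           # phone lying flat, at rest
shaking = quiet + [(0.5, 0.3, 10.4)]      # one strongly deviating sample
print(exceeds_threshold(quiet))    # expected: False
print(exceeds_threshold(shaking))  # expected: True
```

A single phone tripping this threshold is obviously not evidence of an earthquake (it may simply have been picked up or dropped), which is why such projects aggregate reports from many devices on a server before issuing a warning.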
MyShake
In February 2016, the Berkeley Seismological Laboratory at University of California, Berkeley (UC Berkeley) released the MyShake mobile app. The app uses accelerometers in phones that are stationary and connected to a power supply to record shaking and relay that information back to the laboratory. The system issues automated warnings of earthquakes of magnitude 4.5 or greater. UC Berkeley released a Japanese-language version of the app in May 2016. By December 2016, the app had captured nearly 400 earthquakes worldwide.
Android Earthquake Alerts System
On August 11, 2020, Google announced that its Android operating system would begin using accelerometers in devices to detect earthquakes (and send the data to the company's "earthquake detection server"). As millions of phones operate on Android, this may result in the world's largest earthquake detection network.
Initially, the system only collected earthquake data and did not issue alerts (except for on the West Coast of the United States, where it provided alerts issued by the USGS's ShakeAlert system and not from Google's own detection system). At this early stage, data collected by Android devices was only used to provide fast information on a nearby earthquake via Google Search, but it was always planned to issue alerts based on Google's detection capabilities in the future.
On April 28, 2021, Google announced the rollout of the alert system to Greece and New Zealand, the first countries to receive alerts based on Google's own detection capabilities. Google's alerts were extended to Turkey, the Philippines, Kazakhstan, Kyrgyz Republic, Tajikistan, Turkmenistan and Uzbekistan in June 2021. In September 2024, Google announced their warnings would now cover the entire United States (including areas not monitored by USGS's ShakeAlert); at the time, the earthquake alerts could be delivered to 97 other countries.
OpenEEW
On August 11, 2020, the Linux Foundation, IBM and Grillo announced the first fully open-source earthquake early-warning system, featuring instructions for a low-cost seismometer, cloud-hosted detection system, dashboard and mobile app. The project is supported by USAID, the Clinton Foundation and Arrow Electronics. Smartphone-based earthquake early-warning systems depend on a dense network of users near the earthquake rupture zone, whereas OpenEEW has focused instead on providing affordable devices that can be deployed in remote regions close to where earthquakes can begin. All components of this system are open source and available on the project's GitHub repositories.
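Detection systems like those described above typically decide that shaking has begun when short-term signal energy jumps relative to the recent background. A common way to express this in seismology is an STA/LTA (short-term average / long-term average) trigger; the sketch below is a generic illustration of that technique, not actual code from OpenEEW or any of the projects described.

```python
# Illustrative STA/LTA trigger on a stream of accelerometer samples.
# Window lengths and threshold are arbitrary example values.
def sta_lta_trigger(samples, sta_n=10, lta_n=100, threshold=4.0):
    """Return indices where the STA/LTA ratio of |acceleration| exceeds threshold."""
    triggers = []
    for i in range(lta_n, len(samples)):
        sta = sum(abs(x) for x in samples[i - sta_n:i]) / sta_n   # recent energy
        lta = sum(abs(x) for x in samples[i - lta_n:i]) / lta_n   # background energy
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers

# Quiet background noise followed by a sudden strong pulse:
quiet = [0.01] * 200
shake = [1.0] * 20
print(sta_lta_trigger(quiet + shake)[:1])  # -> [201]
```

Real deployments add filtering, per-device calibration, and network-level aggregation of many phones' triggers, so that a single jostled device does not raise an alert.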
Social media
Social networking sites such as Twitter and Facebook play a significant role during natural disasters. The United States Geological Survey (USGS) has investigated collaboration with the social networking site Twitter to allow for more rapid construction of ShakeMaps.
Hiroshima Electric Railway (https://en.wikipedia.org/wiki/Hiroshima%20Electric%20Railway)
The Hiroshima Electric Railway is a Japanese transportation company established on June 18, 1910, that operates streetcars and buses in and around Hiroshima Prefecture. It is known as Hiroden for short.
The company's rolling stock includes an eclectic range of trams manufactured from across Japan and Europe, earning it the nickname "The Moving Streetcar Museum".
Since January 2008 the company has accepted PASPY, a smart-card ticketing system.
This is the longest tram network in Japan, with .
The atomic bombing of Hiroshima by the USA took place on 6 August 1945. 185 employees of the company were killed as a result of the bomb and 108 of its 123 cars were damaged or destroyed. Within three days, the system started running again. Three trams that survived or were rebuilt after the bombing continue to run 75 years afterwards.
Railway and streetcar
One railway line (the Miyajima Line) with one route of 16.1 km,
between Hiroden-nishi-hiroshima Station and Hiroden-miyajima-guchi Station.
Its trains (trams) run through onto other lines from Nishi-Hiroshima.
Six inner-city streetcar lines with eight routes totalling 19.0 km.
Operates 271 streetcars.
The company has the longest and busiest streetcar service in Japan.
Key terminal stations
Hiroshima Station (connect to JR Hiroshima Station)
Yokogawa Station (connect to JR Yokogawa Station)
Hiroshima Port Station (connect with ferries and hydrofoils for Matsuyama, Imabari, Kure, Miyajima, Etajima and some other islands in Seto Inland Sea)
Hondori Station (connect to Astram Line Hondori Station)
Hakushima Station
Eba Station
Hiroden-nishi-hiroshima Station (connect to JR Nishi-Hiroshima Station)
Hiroden-miyajima-guchi Station (connect to JR Miyajimaguchi Station and JR Miyajima Ferry and Miyajima Matsudai Kisen ferries for Miyajima)
List of lines and routes
Hiroden Streetcar Lines and Routes
Bus services
City area
No.2: - Hiroshima Station - Fuchū
No.3: Hiroshima Station - Hacchōbori - Kamiyachō - - Kan'on
No.4: Prefecture office - Hiroshima Station - Niho/Mukainada
No.5: Ushita - Hiroshima Station -
No.6: Ushita - Hatchōbori - Kamiyachō - City Hall - Funairi - Eba
No.7: (Yokogawa Station - Tokaichi-machi - ) Kamiyachō - City Hall - University Hospital - Niho/Mukainada
This bus runs only in the morning between Yokogawa and Kamiyachō.
No.8: Yokogawa - Kan'on
No.10: Nishi-Hiroshima Station - City Hall - University Hospital
No.12: Hesaka - Hacchōbori - Miyuki-Bashi - Niho/ - Asahimachi/ - Hondōri - Miyuki-Bashi - Niho (Express)
The Asahimachi route and the express bus run only at rush hour.
No.13: Hiroshima Station - Inari-machi - Chuden-mae - City Hall
Suburb area
28 bus routes serve the suburbs. Most suburban lines depart from
North: For Asaminami and Asakita wards, and Yoshida, Toyohira, Sandan-kyō (via Chūgoku Expressway or Route 191) and Miyoshi (via Chūgoku Expressway)
For Asa Zoo and Asahigaoka, buses depart from Hiroshima Station and do not pass through the Bus Center.
West: For Koi, West ward, Saeki ward, Hatsukaichi
Some buses start from Hiroshima Station or Hacchōbori.
One through bus route to Kure (Clare Line)
Six expressway bus routes around the Chūgoku region and to Tokyo.
routes between Masuda, Hamada, Matsue, Yonago, Tottori and Tokyo.
Hiroshima Airport Limousine bus.
Operates 489 buses.
Main bus stations
Hiroshima Bus Center, the main terminal bus station in central Hiroshima
Human body temperature (https://en.wikipedia.org/wiki/Human%20body%20temperature)
Normal human body temperature (normothermia, euthermia) is the typical temperature range found in humans. The normal human body temperature range is typically stated as .
Human body temperature varies. It depends on sex, age, time of day, exertion level, health status (such as illness and menstruation), what part of the body the measurement is taken at, state of consciousness (waking, sleeping, sedated), and emotions. Body temperature is kept in the normal range by a homeostatic function known as thermoregulation, in which adjustment of temperature is triggered by the central nervous system.
Methods of measurement
Taking a human's temperature is an initial part of a full clinical examination. There are various types of medical thermometers, as well as sites used for measurement, including:
In the rectum (rectal temperature)
In the mouth (oral temperature)
Under the arm (axillary temperature)
In the ear (tympanic temperature)
On the skin of the forehead over the temporal artery
Using heat flux sensors
Variations
Temperature control (thermoregulation) is a homeostatic mechanism that keeps the organism at optimum operating temperature, as the temperature affects the rate of chemical reactions. In humans, the average internal temperature is widely accepted to be , a "normal" temperature established in the 1800s. But newer studies show that average internal temperature for men and women is . No person always has exactly the same temperature at every moment of the day. Temperatures cycle regularly up and down through the day, as controlled by the person's circadian rhythm. The lowest temperature occurs about two hours before the person normally wakes up. Additionally, temperatures change according to activities and external factors.
In addition to varying throughout the day, normal body temperature may also differ as much as from one day to the next, so that the highest or lowest temperatures on one day will not always exactly match the highest or lowest temperatures on the next day.
Normal human body temperature varies slightly from person to person and by the time of day. Consequently, each type of measurement has a range of normal temperatures. The range for normal human body temperatures, taken orally, is . This means that any oral temperature between is likely to be normal.
The normal human body temperature is often stated as . In adults a review of the literature has found a wider range of for normal temperatures, depending on the gender and location measured.
Reported values vary depending on how it is measured: oral (under the tongue): (), internal (rectal, vaginal): . A rectal or vaginal measurement taken directly inside the body cavity is typically slightly higher than oral measurement, and oral measurement is somewhat higher than skin measurement. Other places, such as under the arm or in the ear, produce different typical temperatures. While some people think of these averages as representing normal or ideal measurements, a wide range of temperatures has been found in healthy people. The body temperature of a healthy person varies during the day by about with lower temperatures in the morning and higher temperatures in the late afternoon and evening, as the body's needs and activities change. Other circumstances also affect the body's temperature. The core body temperature of an individual tends to have the lowest value in the second half of the sleep cycle; the lowest point, called the nadir, is one of the primary markers for circadian rhythms. The body temperature also changes when a person is hungry, sleepy, sick, or cold.
Natural rhythms
Body temperature normally fluctuates over the day following circadian rhythms, with the lowest levels around 4 a.m. and the highest in the late afternoon, between 4:00 and 6:00 p.m. (assuming the person sleeps at night and stays awake during the day). Therefore, an oral temperature of would, strictly speaking, be a normal, healthy temperature in the afternoon but not in the early morning. An individual's body temperature typically changes by about between its highest and lowest points each day.
Body temperature is sensitive to many hormones, so women have a temperature rhythm that varies with the menstrual cycle, called a circamensal rhythm. A woman's basal body temperature rises sharply after ovulation, as estrogen production decreases and progesterone increases. Fertility awareness programs use this change to identify when a woman has ovulated to achieve or avoid pregnancy. During the luteal phase of the menstrual cycle, both the lowest and the average temperatures are slightly higher than during other parts of the cycle. However, the amount that the temperature rises during each day is slightly lower than typical, so the highest temperature of the day is not very much higher than usual. Hormonal contraceptives both suppress the circamensal rhythm and raise the typical body temperature by about .
Temperature also may vary with the change of seasons during each year. This pattern is called a circannual rhythm. Studies of seasonal variations have produced inconsistent results. People living in different climates may have different seasonal patterns.
It has been found that physically active individuals have larger changes in body temperature throughout the day. Physically active people have been reported to have lower body temperatures than their less active peers in the early morning and similar or higher body temperatures later in the day.
With increased age, both average body temperature and the amount of daily variability in the body temperature tend to decrease. Elderly people may have a decreased ability to generate body heat during a fever, so even a somewhat elevated temperature can indicate a serious underlying cause in geriatrics. One study suggested that the average body temperature has also decreased since the 1850s. The study's authors believe the most likely explanation for the change is a reduction in inflammation at the population level due to decreased chronic infections and improved hygiene.
Measurement methods
Different methods used for measuring temperature produce different results. The temperature reading depends on which part of the body is being measured. The typical daytime temperatures among healthy adults are as follows:
Temperature in the rectum (rectal), vagina, or in the ear (tympanic) is about
Temperature in the mouth (oral) is about
Temperature under the arm (axillary) is about
Generally, oral, rectal, gut, and core body temperatures, although slightly different, are well-correlated.
Oral temperatures are influenced by drinking, chewing, smoking, and breathing with the mouth open. Mouth breathing, cold drinks or food reduce oral temperatures; hot drinks, hot food, chewing, and smoking raise oral temperatures.
Each measurement method also has different normal ranges depending on sex.
Infrared thermometer
As of 2016, reviews of infrared thermometers have found them to be of variable accuracy. This includes tympanic infrared thermometers in children.
Variations due to outside factors
Sleep disturbances also affect temperatures. Normally, body temperature drops significantly at a person's normal bedtime and throughout the night. Short-term sleep deprivation produces a higher temperature at night than normal, but long-term sleep deprivation appears to reduce temperatures. Insomnia and poor sleep quality are associated with smaller and later drops in body temperature. Similarly, waking up unusually early, sleeping in, jet lag and changes to shift work schedules may affect body temperature.
Concept
Fever
A temperature setpoint is the level at which the body attempts to maintain its temperature. When the setpoint is raised, the result is a fever. Most fevers are caused by infectious disease and can be lowered, if desired, with antipyretic medications.
An early morning temperature higher than or a late afternoon temperature higher than is normally considered a fever, assuming that the temperature is elevated due to a change in the hypothalamus's setpoint. Lower thresholds are sometimes appropriate for elderly people. The normal daily temperature variation is typically , but can be greater among people recovering from a fever.
An organism at optimum temperature is considered afebrile, meaning "without fever". If temperature is raised, but the setpoint is not raised, then the result is hyperthermia.
Hyperthermia
Hyperthermia occurs when the body produces or absorbs more heat than it can dissipate. It is usually caused by prolonged exposure to high temperatures. The heat-regulating mechanisms of the body eventually become overwhelmed and unable to deal effectively with the heat, causing the body temperature to climb uncontrollably. Hyperthermia at or above about is a life-threatening medical emergency that requires immediate treatment. Common symptoms include headache, confusion, and fatigue. If sweating has resulted in dehydration, then the affected person may have dry, red skin.
In a medical setting, mild hyperthermia is commonly called heat exhaustion or heat prostration; severe hyperthermia is called heat stroke. Heatstroke may come on suddenly, but it usually follows the untreated milder stages. Treatment involves cooling and rehydrating the body; fever-reducing drugs are useless for this condition. This may be done by moving out of direct sunlight to a cooler and shaded environment, drinking water, removing clothing that might keep heat close to the body, or sitting in front of a fan. Bathing in tepid or cool water, or even just washing the face and other exposed areas of the skin, can be helpful.
With fever, the body's core temperature rises to a higher temperature through the action of the part of the brain that controls the body temperature; with hyperthermia, the body temperature is raised without the influence of the heat control centers.
Hypothermia
In hypothermia, body temperature drops below that required for normal metabolism and bodily functions. In humans, this is usually due to excessive exposure to cold air or water, but it can be deliberately induced as a medical treatment. Symptoms usually appear when the body's core temperature drops by below normal temperature.
Basal body temperature
Basal body temperature is the lowest temperature attained by the body during rest (usually during sleep). It is generally measured immediately after awakening and before any physical activity has been undertaken, although the temperature measured at that time is somewhat higher than the true basal body temperature. In women, temperature differs at various points in the menstrual cycle, and this can be used in the long term to track ovulation both to aid conception or avoid pregnancy. This process is called fertility awareness.
Core temperature
Core temperature, also called core body temperature, is the operating temperature of an organism, specifically in deep structures of the body such as the liver, in comparison to temperatures of peripheral tissues. Core temperature is normally maintained within a narrow range so that essential enzymatic reactions can occur. Significant core temperature elevation (hyperthermia) or depression (hypothermia) over more than a brief period of time is fatal.
Temperature examination in the heart, using a catheter, is the traditional gold standard measurement used to estimate core temperature (oral temperature is affected by hot or cold drinks, ambient temperature fluctuations as well as mouth-breathing). Since catheters are highly invasive, the generally accepted alternative for measuring core body temperature is through rectal measurements. Rectal temperature is expected to be approximately higher than an oral temperature taken on the same person at the same time. Ear thermometers measure temperature from the tympanic membrane using infrared sensors and also aim to measure core body temperature, since the blood supply of this membrane is directly shared with the brain. However, this method of measuring body temperature is not as accurate as rectal measurement and has a low sensitivity for fever, missing three or four out of every ten fevers in children. Ear temperature measurement may be acceptable for observing trends in body temperature but is less useful in consistently identifying and diagnosing fever.
Until recently, direct measurement of core body temperature required either an ingestible device or surgical insertion of a probe. Therefore, a variety of indirect methods have commonly been used as the preferred alternative to these more accurate albeit more invasive methods. The rectal or vaginal temperature is generally considered to give the most accurate assessment of core body temperature, particularly in hypothermia. In the early 2000s, ingestible thermistors in capsule form were produced, allowing the temperature inside the digestive tract to be transmitted to an external receiver; one study found that these were comparable in accuracy to rectal temperature measurement. More recently, a new method using heat flux sensors has been developed; several research papers show that its accuracy is similar to that of the invasive methods.
Internal variation
Measurement within the body finds internal variation temperatures as different as for the radial artery and for the brachial artery. It has been observed that "chaos" has been "introduced into physiology by the fictitious assumption of a constant blood temperature".
Temperature variation
Hot
or more – Almost certainly death will occur; however, people have been known to survive up to .
– Normally death, or there may be serious brain damage, convulsions, and shock. Cardio-respiratory collapse will likely occur.
– Subject may turn red. They may become comatose, be in severe delirium, and convulsions can occur.
– (Medical emergency) – Fainting, severe headache, dizziness, confusion, hallucinations, delirium, and drowsiness can occur. There may also be palpitations and breathlessness.
– Fainting, dehydration, weakness, headache, breathlessness, and dizziness may occur as well as profuse sweating.
– Severe sweating and flushed skin. Fast heart rate and breathlessness. Exhaustion may accompany this. Children and people with epilepsy may suffer convulsions at this temperature.
– (Classed as hyperthermia if not caused by a fever) – Feeling hot, sweating, feeling thirsty, feeling very uncomfortable.
Normal
is a typically reported range for normal body temperature.
Cold
– Feeling cold, mild to moderate shivering. This can be a normal body temperature for sleeping.
– Threshold for hypothermia. Intense shivering, numbness and bluish/grayness of the skin. There is the possibility of heart irritability.
– Severe shivering, loss of movement of fingers, blueness, and confusion. Some behavioral changes may take place.
– Moderate to severe confusion, sleepiness, depressed reflexes, progressive loss of shivering, slow heartbeat, shallow breathing. Shivering may stop. The subject may be unresponsive to certain stimuli.
– (Medical emergency) – Hallucinations, delirium, complete confusion, extreme sleepiness that is progressively becoming comatose. Shivering is absent. Reflex may be absent or very slight.
– Comatose, very rarely conscious. No or slight reflexes. Very shallow breathing and slow heart rate. Possibility of serious heart rhythm problems.
– Severe heart rhythm disturbances are likely and breathing may stop at any time. The person may appear to be dead.
or less – Death usually occurs due to irregular heart beat or respiratory arrest; however, some patients have been known to survive with body temperatures as low as .
There are non-verbal bodily cues that can hint that an individual is experiencing a low body temperature, which can be useful for those with dysphasia or for infants. Examples of non-verbal cues of coldness include stillness and lethargy, unusual paleness of the skin among light-skinned people, and, among males, shrinkage and contraction of the scrotum.
Effect of environment
Environmental conditions, primarily temperature and humidity, affect the ability of the mammalian body to thermoregulate. The psychrometric temperature, of which the wet-bulb temperature is the main component, largely limits thermoregulation. It was thought that a wet-bulb temperature of about was the highest sustained value consistent with human life.
A 2022 study on the effect of heat on young people found that the critical wet-bulb temperature at which heat stress can no longer be compensated, Twb,crit, in young, healthy adults performing tasks at modest metabolic rates mimicking basic activities of daily life was much lower than usually assumed, at about in humid environments, and progressively decreased in hotter, dry ambient environments.
At low temperatures the body thermoregulates by generating heat, but this becomes unsustainable at extremely low temperatures.
Historical understanding
In the 19th century, most books quoted "blood heat" as 98 °F, until a study published the mean (but not the variance) of a large sample as . Subsequently, that mean was widely quoted as "37 °C or 98.4 °F" until editors realized 37 °C is equal to 98.6 °F, not 98.4 °F. The 37 °C value was set by German physician Carl Reinhold August Wunderlich in his 1868 book, which put temperature charts into widespread clinical use. Dictionaries and other sources that quoted these averages did add the word "about" to show that there is some variance, but generally did not state how wide the variance is.
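The 98.4 °F versus 98.6 °F confusion described above is a simple unit-conversion error, which can be checked directly; the helper names below are illustrative:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# Wunderlich's 37 °C mean is 98.6 °F, not the widely misquoted 98.4 °F:
print(round(c_to_f(37.0), 1))   # -> 98.6
# Conversely, the misquoted 98.4 °F corresponds to roughly 36.9 °C:
print(round(f_to_c(98.4), 1))   # -> 36.9
```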
Antozonite (https://en.wikipedia.org/wiki/Antozonite)
Antozonite (historically known as Stinkspat, Stinkfluss, Stinkstein, Stinkspar and fetid fluorite) is a radioactive fluorite variety first found in Wölsendorf, Bavaria, in 1841, and named in 1862.
It is characterized by the presence of multiple inclusions containing elemental fluorine; when the crystals are crushed or broken, this fluorine is released. It has been postulated that beta radiation from uranium inclusions continuously breaks calcium fluoride down into calcium and fluorine atoms. The fluorine atoms combine into difluoride anions, which lose their extra electrons at crystal defects to form elemental fluorine. The fluorine subsequently reacts with atmospheric oxygen and water vapor, producing ozone (whose characteristic smell, originally mistaken for a hypothetical substance called antozone, is responsible for the mineral's name) and hydrogen fluoride.
Roystonea regia (https://en.wikipedia.org/wiki/Roystonea%20regia)
Roystonea regia, commonly known as the royal palm, Cuban royal palm, or Florida royal palm, is a species of palm native to Mexico, the Caribbean, Florida, and parts of Central America. A large and attractive palm, it has been planted throughout the tropics and subtropics as an ornamental tree. Although it is sometimes called R. elata, the conserved name R. regia is now the correct name for the species. The royal palm reaches heights from tall. Populations in Cuba and Florida were long seen as separate species, but are now considered a single species.
Widely planted as an ornamental, R. regia is also used for thatch, construction timber, and in some forms of traditional medicine, although there is currently no valid scientific evidence to support the efficacy or use of any palm species for medicinal purposes. The fruit is eaten by birds and bats (which disperse the seeds) and fed to livestock. Its flowers are visited by birds and bats, and it serves as a roosting site and food source for a variety of animals. Roystonea regia is the national tree of Cuba, and has a religious role both in Santería and Christianity, where it is used in Palm Sunday observances.
Description
Roystonea regia is a large palm which reaches a height of tall, (with heights up to reported) and a stem diameter of about . (K. F. Connor reports a maximum stem diameter of .) The trunk is stout, very smooth and grey-white in colour with a characteristic bulge below a distinctive green crownshaft. Trees have about 15 leaves which can be up to long. The flowers are white with pinkish anthers. The fruit are spheroid to ellipsoid in shape, long and wide. They are green when immature, turning red and eventually purplish-black as they mature.
Root nodules containing Rhizobium bacteria have been found on R. regia trees in India. The presence of rhizobia-containing root nodules is usually associated with nitrogen fixation in legumes; this was the first record of root nodules in a monocotyledonous tree. Further evidence of nitrogen fixation was provided by the presence of nitrogenase (an enzyme used in nitrogen fixation) and leghaemoglobin, a compound which allows nitrogenase to function by reducing the oxygen concentration in the root nodule. In addition to evidence of nitrogen fixation, the nodules were also found to be producing indole acetic acid, an important plant hormone.
Taxonomy
Roystonea is placed in the subfamily Arecoideae and the tribe Roystoneae. The placement Roystonea within the Arecoideae is uncertain; a phylogeny based on plastid DNA failed to resolve the position of the genus within the Arecoideae. As of 2008, there appear to be no molecular phylogenetic studies of Roystonea and the relationship between R. regia and the rest of the genus is uncertain.
The species was first described by American naturalist William Bartram in 1791 as Palma elata based on trees growing in central Florida. In 1816 German botanist Carl Sigismund Kunth described the species Oreodoxa regia based on collections made by Alexander von Humboldt and Aimé Bonpland in Cuba. In 1825 German botanist Curt Polycarp Joachim Sprengel moved it to the genus Oenocarpus and renamed it O. regius.
The genus Oreodoxa was proposed by German botanist Carl Ludwig Willdenow in 1807 and applied by him to two species, O. acuminata (now known as Prestoea acuminata) and O. praemorsa (now Wettinia praemorsa). Although these species were transferred to other genera, the genus Oreodoxa continued to be applied to a variety of superficially similar species which were not, in fact, closely related. To address this problem, American botanist Orator F. Cook created the genus Roystonea, which he named in honour of American general Roy Stone, and renamed Kunth's species Roystonea regia.
Cook considered Floridian populations to be distinct from both the Cuban R. regia and the Puerto Rican R. borinquena, and he placed them in a new species, R. floridana, which is now considered a synonym of R. regia. In 1906 Charles Henry Wright described two new species based on collections from Georgetown, British Guiana (now Guyana) which he placed in the genus Euterpe — E. jenmanii and E. ventricosa. Both species are now considered synonyms of R. regia. The name R. regia var. hondurensis was applied by Paul H. Allen to Central American populations of the species. However, Scott Zona determined that they did not differ enough from Cuban populations to be considered a separate variety.
Based on the rules of botanical nomenclature, the oldest properly published name for a species has priority over newer names. Bartram applied the Linnaean binomial Palma elata to a "large, solitary palm with an ashen white trunk topped by a green leaf sheath [the crownshaft] and pinnate leaves" growing in central Florida. While no type collection is known, there are no other native palms that would fit Bartram's description. In 1946 Francis Harper pointed out that Bartram's name was valid and proposed a new combination, Roystonea elata. Liberty Hyde Bailey's use of the name in his 1949 revision of the genus, established its usage.
Harper's new combination immediately supplanted Cook's R. floridana, but there was disagreement as to whether Cuban and Floridian populations represented a single species or two species. Zona's revision of the genus concluded that they both belonged to the same species. According to the rules of botanical nomenclature, the correct name of the species should have been Roystonea elata. Zona pointed out, however, that the name R. regia (or Oreodoxa regia) has a history of use in horticulture that dated from at least 1838, and that the species had been propagated around the world under that name. Roystonea elata, on the other hand, had only been used since 1949, and was used much less widely. On that basis, Zona proposed that the name Roystonea regia should be conserved.
Common names
In cultivation, Roystonea regia is called the Cuban royal palm or simply the royal palm. In Cuba, the tree is called the palma real or palma criolla. In India, where it is widely cultivated, it is called vakka. In Cambodia, where it is planted as a decorative tree along avenues and in public parks, it is known as sla barang ("Western palm").
Reproduction and growth
Roystonea regia produces unisexual flowers that are pollinated by animals. European honey bees and bats are reported pollinators. Seeds are dispersed by birds and bats that feed upon the fruit.
Seed germination is adjacent ligular—during germination, as the cotyledon expands it only pushes a portion of the embryo out of the seed. As a result, the seedling develops adjacent to the seed. The embryo forms a ligule, and the plumule protrudes from this. Seedlings in cultivation are reported to begin producing a stem two years after germination, at the point where they produce their thirteenth leaf. Growth rates of seedlings averaged per year in Florida.
Distribution and habitat
Roystonea regia is found in Central America, Cuba, the Cayman Islands, Hispaniola (the Dominican Republic and Haiti), the Lesser Antilles, The Bahamas, southern Florida, and Mexico (in Veracruz, Campeche, Quintana Roo, and Yucatán) (Zona, S. 1996. Roystonea (Arecaceae: Arecoideae). Flora Neotropica 71: 1–36). William Bartram described the species from Lake Dexter, along the St. Johns River in the area of modern Lake and Volusia Counties in central Florida, an area north of its modern range, suggesting a wider distribution in the past.
Roystonea regia is most abundant in Cuba, where it occurs on hillsides and in valleys. In southern Florida, Roystonea regia occurs in strand swamps and hardwood hammocks. Royal Palm State Park in the Everglades was established due to the high concentration of the species.
Roystonea is cultivated in tropical and subtropical climates in the United States, Australia, Brazil, and parts of southern Asia as a landscape palm. It appears to naturalise with ease, and extensive naturalised populations are present in Panama, Costa Rica, and Guyana. In the United States it grows mostly in central and southern Florida, Hawaii, Puerto Rico, in South Texas in the Rio Grande Valley, and in southern California.
Ecology
The leaves of Roystonea regia are used as roosting sites by Eumops floridanus, the Florida bonneted bat, and as a retreat by the Cuban tree frog (Osteopilus septentrionalis), a non-native species in Florida. In Panama (where R. regia is introduced), its trunks are used as nesting sites by yellow-crowned parrots (Amazona ochrocephala panamensis). The flowers of R. regia are visited by pollen-collecting bees and are considered a good source of nectar. Its pollen was also found in the stomachs of Phyllonycteris poeyi, the Cuban flower bat (a pollen-feeder), and Monophyllus redmani, Leach's single leaf bat (a nectar-feeder). Artibeus jamaicensis, the Jamaican fruit bat, and Myiozetetes similis, the social flycatcher, feed on the fruit.
Roystonea regia is the host plant for the royal palm bug, Xylastodoris luteolus, in Florida. It also serves as a larval host plant for the butterflies Pyrrhocalles antiqua orientis and Asbolis capucinus in Cuba, and Brassolis astyra and B. sophorae in Brazil. It is susceptible to bud rot caused by the oomycete Phytophthora palmivora and by the fungus Thielaviopsis paradoxa.
The species is considered an invasive species in secondary forest in Panama.
Uses
Roystonea regia has been planted throughout the tropics and subtropics as an ornamental. The seed is used as a source of oil and for livestock feed. Leaves are used for thatching and the wood for construction. The roots are used as a diuretic, and for that reason they are added to tifey, a Haitian drink, by Cubans of Haitian origin. They are also used as a treatment for diabetes.
Fibres extracted from the leaf sheath of R. regia have been found to be comparable with sisal and banana fibres but lower in density, making them a potentially useful raw material for lightweight composite materials. An extract from R. regia fruit known as D-004 reduces benign prostatic hyperplasia (BPH) in rodents. D-004, a mixture of fatty acids, is being studied as a potential alternative to finasteride for the treatment of BPH.
Religious significance
Roystonea regia plays an important role in popular religion in Cuba. In Santería it is associated primarily with Shango or with his father Aggayú. It also has symbolic importance in the Palo faiths and the Abakuá fraternity. In Roman Catholicism, R. regia plays an important role in Palm Sunday observances.
Acromegaly is a disorder that results in excess growth of certain parts of the human body. It is caused by excess growth hormone (GH) after the growth plates have closed. The initial symptom is typically enlargement of the hands and feet. There may also be an enlargement of the forehead, jaw, and nose. Other symptoms may include joint pain, thicker skin, deepening of the voice, headaches, and problems with vision. Complications of the disease may include type 2 diabetes, sleep apnea, and high blood pressure.
Cause and diagnosis
Acromegaly is usually caused by the pituitary gland producing excess growth hormone. In more than 95% of cases, the excess production is due to a benign tumor, known as a pituitary adenoma. The condition is not inherited. Acromegaly is rarely due to a tumor in another part of the body. Diagnosis is by measuring growth hormone after a person has consumed a glucose solution, or by measuring insulin-like growth factor I in the blood. After diagnosis, medical imaging of the pituitary is carried out to determine if an adenoma is present. If excess growth hormone is produced during childhood, the result is the condition gigantism rather than acromegaly, and it is characterized by excessive height.
Treatment
Treatment options include surgery to remove the tumor, medications, and radiation therapy. Surgery is usually the preferred treatment; the smaller the tumor, the more likely surgery will be curative. If surgery is contraindicated or not curative, somatostatin analogues or GH receptor antagonists may be used. Radiation therapy may be used if neither surgery nor medications are completely effective. Without treatment, life expectancy is reduced by 10 years; with treatment, life expectancy is not reduced.
Epidemiology, history, and culture
Acromegaly affects about 3 per 50,000 people. It is most commonly diagnosed in middle age. Males and females are affected with equal frequency. It was first described in the medical literature by Nicolas Saucerotte in 1772. The term is from the Greek ἄκρον (akron), meaning "extremity", and μέγα (mega), meaning "large".
Signs and symptoms
Features that may result from a high level of GH or expanding tumor include:
Headaches – often severe and prolonged
Soft tissue swelling visibly resulting in enlargement of the hands, feet, nose, lips, and ears, and a general thickening of the skin
Soft tissue swelling of internal organs, notably the heart with the attendant weakening of its muscularity, and the kidneys, also the vocal cords resulting in a characteristic thick, deep voice and slowing of speech
Generalized expansion of the skull at the fontanelle
Pronounced brow protrusion, often with ocular distension (frontal bossing)
Pronounced lower jaw protrusion (prognathism) with attendant macroglossia (enlargement of the tongue) and teeth spacing
Hypertrichosis, hyperpigmentation and hyperhidrosis may occur in these people.
Skin tags
Carpal tunnel syndrome
Complications
Problems with bones and joints, including osteoarthritis, nerve compression syndrome due to bony overgrowth, and carpal tunnel syndrome
Hypertension
Diabetes mellitus
Cardiomyopathy, potentially leading to heart failure
Colorectal cancer
Sleep apnea
Thyroid nodules and thyroid cancer
Hypogonadism
Compression of the optic chiasm by the growth of pituitary adenoma leading to visual problems
Causes
Pituitary adenoma
About 98% of cases of acromegaly are due to the overproduction of growth hormone by a benign tumor of the pituitary gland called an adenoma. These tumors produce excessive growth hormone and compress surrounding brain tissues as they grow larger. In some cases, they may compress the optic nerves. Expansion of the tumor may cause headaches and visual disturbances. In addition, compression of the surrounding normal pituitary tissue can alter the production of other hormones, leading to changes in menstruation and breast discharge in women and impotence in men because of reduced testosterone production.
A marked variation in rates of GH production and the aggressiveness of the tumor occurs. Some adenomas grow slowly and symptoms of GH excess are often not noticed for many years. Other adenomas grow rapidly and invade surrounding brain areas or the sinuses, which are located near the pituitary. In general, younger people tend to have more aggressive tumors.
Most pituitary tumors arise spontaneously and are not genetically inherited. Many pituitary tumors arise from a genetic alteration in a single pituitary cell that leads to increased cell division and tumor formation. This genetic change, or mutation, is not present at birth but is acquired during life. The mutation occurs in a gene that regulates the transmission of chemical signals within pituitary cells; it permanently switches on the signal that tells the cell to divide and secrete growth hormones. The events within the cell that cause disordered pituitary cell growth and GH oversecretion currently are the subject of intensive research.
Pituitary adenomas and diffuse somatomammotroph hyperplasia may result from somatic mutations activating GNAS, which may be acquired or associated with McCune–Albright syndrome.
Other tumors
In a few people, acromegaly is caused not by pituitary tumors, but by tumors of the pancreas, lungs, and adrenal glands. These tumors also lead to an excess of GH, either because they produce GH themselves or, more frequently, because they produce GHRH (growth hormone-releasing hormone), the hormone that stimulates the pituitary to make GH. In these people, the excess GHRH can be measured in the blood and establishes that the cause of the acromegaly is not due to a pituitary defect. When these nonpituitary tumors are surgically removed, GH levels fall and the symptoms of acromegaly improve.
In people with GHRH-producing, non-pituitary tumors, the pituitary still may be enlarged and may be mistaken for a tumor. Therefore, it is important that physicians carefully analyze all "pituitary tumors" removed from people with acromegaly to not overlook the possibility that a tumor elsewhere in the body is causing the disorder.
Diagnosis
If acromegaly is suspected, laboratory investigations, followed by medical imaging if the lab tests are positive, confirm or rule out the presence of this condition.
IGF1 provides the most sensitive lab test for the diagnosis of acromegaly, and a GH suppression test following an oral glucose load, which is a very specific lab test, will confirm the diagnosis following a positive screening test for IGF1. A single value of the GH is not useful in view of its pulsatility (levels in the blood vary greatly even in healthy individuals). GH levels taken 2 hours after a 75- or 100-gram glucose tolerance test are helpful in the diagnosis: GH levels are suppressed below 1 μg/L in normal people, and levels higher than this cutoff are confirmatory of acromegaly.
Other pituitary hormones must be assessed to address the secretory effects of the tumor, as well as the mass effect of the tumor on the normal pituitary gland. They include thyroid stimulating hormone (TSH), gonadotropic hormones (FSH, LH), adrenocorticotropic hormone, and prolactin.
An MRI of the brain focusing on the sella turcica after gadolinium administration allows for clear delineation of the pituitary and the hypothalamus and the location of the tumor. A number of other overgrowth syndromes can result in similar problems.
Differential diagnosis
Pseudoacromegaly is a condition with the usual acromegaloid features but without an increase in growth hormone and IGF-1. It is frequently associated with insulin resistance. Cases have been reported due to minoxidil at an unusually high dose. It can also be caused by a selective post-receptor defect of insulin signalling, leading to the impairment of metabolic, but preservation of mitogenic, signaling.
Treatment
The goals of treatment are to reduce GH production to normal levels thereby reversing or ameliorating the signs and symptoms of acromegaly, to relieve the pressure that the growing pituitary tumor exerts on the surrounding brain areas, and to preserve normal pituitary function. Currently, treatment options include surgical removal of the tumor, drug therapy, and radiation therapy of the pituitary.
Medications
Somatostatin analogues
The primary current medical treatment of acromegaly is to use somatostatin analogues – octreotide (Sandostatin) or lanreotide (Somatuline).
These somatostatin analogues are synthetic forms of a brain hormone, somatostatin, which stops GH production. The long-acting forms of these drugs must be injected every 2 to 4 weeks for effective treatment. Most people with acromegaly respond to this medication. In many people with acromegaly, GH levels fall within one hour and headaches improve within minutes after the injection. Octreotide and lanreotide are effective for long-term treatment. Octreotide and lanreotide have also been used successfully to treat people with acromegaly caused by non-pituitary tumors.
Somatostatin analogues are also sometimes used to shrink large tumors before surgery.
Because octreotide inhibits gastrointestinal and pancreatic function, long-term use causes digestive problems such as loose stools, nausea, and gas in one-third of people. In addition, approximately 25 percent of people with acromegaly develop gallstones, which are usually asymptomatic. In some cases, octreotide treatment can cause diabetes, because somatostatin and its analogues can inhibit the release of insulin. With an aggressive adenoma that cannot be operated on, there may be resistance to octreotide, in which case a second-generation SSA, pasireotide, may be used for tumor control. However, insulin and glucose levels should be carefully monitored, as pasireotide has been associated with hyperglycemia by reducing insulin secretion.
Dopamine agonists
For those who are unresponsive to somatostatin analogues, or for whom they are otherwise contraindicated, it is possible to treat using one of the dopamine agonists, bromocriptine, or cabergoline. As tablets rather than injections, they cost considerably less. These drugs can also be used as an adjunct to somatostatin analogue therapy. They are most effective in those whose pituitary tumours also secrete prolactin. Side effects of these dopamine agonists include gastrointestinal upset, nausea, vomiting, light-headedness when standing, and nasal congestion. These side effects can be reduced or eliminated if medication is started at a very low dose at bedtime, taken with food, and gradually increased to the full therapeutic dose. Bromocriptine lowers GH and IGF-1 levels and reduces tumor size in fewer than half of people with acromegaly. Some people report improvement in their symptoms although their GH and IGF-1 levels still are elevated.
Growth hormone receptor antagonists
The latest development in the medical treatment of acromegaly is the use of growth hormone receptor antagonists. The only available member of this family is pegvisomant (Somavert). By blocking the action of the endogenous growth hormone molecules, this compound is able to control the disease activity of acromegaly in virtually everyone with acromegaly. Pegvisomant has to be administered subcutaneously by daily injections. Combinations of long-acting somatostatin analogues and weekly injections of pegvisomant seem to be as effective as daily injections of pegvisomant.
Surgery
Surgical removal of the pituitary tumor is usually effective in lowering growth hormone levels. Two surgical procedures are available for use. The first is endonasal transsphenoidal surgery, which involves the surgeon reaching the pituitary through an incision in the nasal cavity wall. The wall is reached by passing through the nostrils with microsurgical instruments. The second method is transsphenoidal surgery during which an incision is made into the gum beneath the upper lip. Further incisions are made to cut through the septum to reach the nasal cavity, where the pituitary is located. Endonasal transsphenoidal surgery is a less invasive procedure with a shorter recovery time than the older method of transsphenoidal surgery, and the likelihood of removing the entire tumor is greater with reduced side effects. Consequently, endonasal transsphenoidal surgery is the more common surgical choice.
These procedures normally relieve the pressure on the surrounding brain regions and lead to a lowering of GH levels. Surgery is most successful in people with blood GH levels below 40 ng/ml before the operation and with pituitary tumors no larger than 10 mm in diameter. Success depends on the skill and experience of the surgeon. The success rate also depends on what level of GH is defined as a cure. The best measure of surgical success is the normalization of GH and IGF-1 levels. Ideally, GH should be less than 2 ng/ml after an oral glucose load. A review of GH levels in 1,360 people worldwide immediately after surgery revealed that 60% had random GH levels below 5 ng/ml. Complications of surgery may include cerebrospinal fluid leaks, meningitis, or damage to the surrounding normal pituitary tissue, requiring lifelong pituitary hormone replacement.
Even when surgery is successful and hormone levels return to normal, people must be carefully monitored for years for possible recurrence. More commonly, hormone levels may improve, but not return completely to normal. These people may then require additional treatment, usually with medications.
Radiation therapy
Radiation therapy has been used both as a primary treatment and combined with surgery or drugs. It is usually reserved for people who have tumours remaining after surgery. These people often also receive medication to lower GH levels. Radiation therapy is given in divided doses over four to six weeks. This treatment lowers GH levels by about 50 percent over 2 to 5 years. People monitored for more than 5 years show significant further improvement. Radiation therapy causes a gradual loss of production of other pituitary hormones with time. Loss of vision and brain injury, which have been reported, are very rare complications of radiation treatments.
Selection of treatment
The initial treatment chosen should be individualized depending on the person's characteristics, such as age and tumor size. If the tumor has not yet invaded surrounding brain tissues, removal of the pituitary adenoma by an experienced neurosurgeon is usually the first choice. After surgery, a person must be monitored long-term for increasing GH levels.
If surgery does not normalize hormone levels or a relapse occurs, a doctor will usually begin additional drug therapy. The current first choice is generally octreotide or lanreotide; however, bromocriptine and cabergoline are both cheaper and easier to administer. With all of these medications, long-term therapy is necessary, because their withdrawal can lead to rising GH levels and tumor re-expansion.
Radiation therapy is generally used for people whose tumors are not completely removed by surgery, for people who are not good candidates for surgery because of other health problems, and for people who do not respond adequately to surgery and medication.
Prognosis
Life expectancy of people with acromegaly is dependent on how early the disease is detected. Life expectancy after the successful treatment of early disease is equal to that of the general population. Acromegaly can often go on for years before diagnosis, resulting in poorer outcome, and it is suggested that the better the growth hormone is controlled, the better the outcome. Upon successful surgical treatment, headaches and visual symptoms tend to resolve. One exception is sleep apnea, which is present in around 70% of cases but does not tend to resolve with successful treatment of growth hormone level. While hypertension is a complication of 40% of cases, it typically responds well to regular regimens of blood pressure medication. Diabetes that occurs with acromegaly is treated with the typical medications, but successful lowering of growth hormone levels often alleviates symptoms of diabetes. Hypogonadism without gonad destruction is reversible with treatment. Acromegaly is associated with a slightly elevated risk of cancer.
Notable people
Salvatore Baccaro (1932–1984), Italian character actor. Active in B-movies, comedies, and horror films because of his peculiar features and spontaneous charm.
Paul Benedict (1938–2008), American actor. Best known for portraying Harry Bentley, the English next-door neighbour on The Jeffersons
Mary Ann Bevan (1874–1933), an English woman, who after developing acromegaly, toured the sideshow circuit as "the ugliest woman in the world".
Eddie Carmel, born Oded Ha-Carmeili (1936–1972), Israeli-born entertainer with gigantism and acromegaly, popularly known as "The Jewish Giant".
Rondo Hatton (1894–1946), American journalist and actor. A Hollywood favorite in B-movie horror films of the 1930s and 1940s. Hatton's disfigurement, due to acromegaly, developed over time, beginning during his service in World War I.
Irwin Keyes (1952–2015), American actor. Best known for portraying Hugo Mojoloweski, George's occasional bodyguard on The Jeffersons
Richard Kiel (1939–2014), actor, "Jaws" from two James Bond movies and Mr. Larson in Happy Gilmore
Sultan Kösen, the world's tallest living man.
Neil McCarthy (1932–1985), British actor. Known for roles in Zulu, Time Bandits, and many British television series
The Great Khali (Dalip Singh Rana), Indian professional wrestler, is best known for his tenure with WWE under the ring name The Great Khali. He had his pituitary tumor removed in 2012 at age 39.
André the Giant (André Roussimoff, 1946–1993), French professional wrestler and actor, known for playing Fezzik in The Princess Bride.
Maximinus Thrax, Roman emperor (reigned 235–238). Descriptions, as well as depictions, indicate acromegaly, though the remains of his body are yet to be found.
The French Angel (Maurice Tillet, 1903–1954), Russian-born French professional wrestler, is better known by his ring name, the French Angel.
Pío Pico, the last Mexican Governor of California (1801–1894), manifested acromegaly without gigantism between at least 1847 and 1858. Some time after 1858, signs of the growth hormone-producing tumor disappeared along with all the secondary effects the tumor had caused in him. He looked normal in his 90s. His remarkable recovery is likely an example of spontaneous selective pituitary tumor apoplexy.
(Leonel) Edmundo Rivero, Argentine tango singer, composer and impresario.
Tony Robbins, motivational speaker
Antônio "Bigfoot" Silva, Brazilian kickboxer and mixed martial artist.
Carel Struycken, Dutch actor, is best known for playing Lurch in The Addams Family film trilogy, The Giant in Twin Peaks, Lwaxana Troi's silent servant Mr. Homn in Star Trek: The Next Generation, and The Moonlight Man in Gerald's Game, based on the Stephen King book.
Nikolai Valuev, Russian politician and former professional boxer
Big Show (Paul Wight), American professional wrestler and actor, known for his tenures in WCW, ECW, WWE, and currently, AEW.
It has been argued that Lorenzo de' Medici (1449–92) may have had acromegaly. Historical documents and portraits, as well as a later analysis of his skeleton, support the speculation.
Pianist and composer Sergei Rachmaninoff (1873–1943), noted for his hands that could comfortably stretch a 13th on the piano, was never diagnosed with acromegaly in his lifetime, but a medical article from 2006 suggests that he might have had it.
Matthew McGrory, (1973–2005) American actor known best for his role as Karl the Giant in the 2003 Tim Burton film Big Fish, as well as for his appearances as a member of the Wack Pack on The Howard Stern Show, where he was known as the Original Bigfoot.
Marjon van Iwaarden, Dutch singer.
Gheorghe Mureșan, Romanian former basketball player nicknamed The Giant
Chimaera monstrosa, also known as the rabbit fish or rat fish, is a northeast Atlantic and Mediterranean species of cartilaginous fish in the family Chimaeridae. The rabbit fish is known for its characteristically large head and small, tapering body. With large eyes, nostrils, and tooth plates, the head gives them a rabbit-like appearance, hence the nickname "rabbit fish". They can grow to and live for up to 30 years.
Description
The appearance of C. monstrosa shares characteristics with its distant relatives, the sharks. It characteristically has a large head and a tapering body that ends in a whip-like tail, and a short snout with an overhanging mouth. The first dorsal fin is positioned high on the back of the fish and is tall and triangular. Beginning in the mid-section of the fish, the dorsal spine runs along the length of the body and joins continuously with the upper part of the caudal fin; this spine is also mildly venomous and can cause painful stings. One feature distinguishing the fish from its close relatives is the anal fin, which is distinctly separated from the lancet-shaped caudal fin. The colour is silver-green with brown spots, overlaid with marble-white stripes running in all directions, and a distinct lateral line can be seen clearly on the head.
The rabbit fish can grow up to long, and weigh . More specifically, this chimaera species is characterized by a slow-growth rate, and a long life expectancy. In the study of one population, the theoretical asymptotic length of this fish was estimated at 78.87 cm with a yearly growth rate of 6.73% per year. With these estimates of growth, the study also suggests the maximum ages of the fish to be 30 years for males and 26 for females, with the maturity age of the sample being 13.4 years for males and 11.2 years for females.
Distribution and habitat
The geographic range of the fish has been recorded around the Mediterranean Sea and the eastern parts of the Atlantic Ocean. This range extends northwards from Morocco to the northern areas of Norway and Iceland in the northern North Sea.
Within this geographic range, the depth range of C. monstrosa is , but it is most abundant in upper to middle continental slope habitats at depths of . Within these parameters, the water temperatures of the species' habitats are most commonly in the range . There have been reports of summer inshore migration of C. monstrosa to lay eggs in depths as low as .
Diet
Chimaera monstrosa is classified as a benthophagous species. This means that its main diet comprises bottom-feeding invertebrates. This includes animals such as crabs, molluscs, octopuses, sea-worms, and sea urchins. However, studies have also shown that C. monstrosa are opportunistic feeders. Comparing the digestive tracts of individuals with varying body sizes, a study found that the diet of the species was widely diverse in relation to size. Specimens smaller than mainly fed on amphipods, while those with lengths between fed on both amphipods and decapods. Larger individuals (more than ) had a narrow diet spectrum, consuming mainly decapods. Conditioned by predator size group, significant differences in diet were observed between geographical areas and depths. This suggests that despite some degree of prey specialization according to predator size, this deep-water species can change its diet in accordance with the food-restricted environment that characterizes its habitat.
Reproduction
Chimaera monstrosa has separate sexes from birth and reproduces by internal fertilization. For reproduction, the male bears a small club-like structure with a bulbous tip armed with numerous sharp denticles, located on the top of the head; this structure is thought to be used to grasp the pectoral fin of the female during copulation. The species is oviparous, meaning that embryonic development takes place in eggs rather than inside the female. The sexes of Chimaera monstrosa segregate by water depth, with the females living at greater depths. This segregation is attributed to two main factors: the regulation of sperm in males in warmer, shallower waters, and reduced aggression between the sexes. Males live in shallower water to regulate sperm. Females prefer deeper waters of , but move up to depths of to mate with males. After mating, they migrate inshore to lay eggs in spring or summer.
Conservation
According to the IUCN Red List, Chimaera monstrosa is categorized as vulnerable. Due to its high levels of lipids, the species has gained interest in fisheries for its liver oils, which are used to manufacture dietary supplements. Aside from its value for oil, C. monstrosa is mainly taken as bycatch in fisheries and discarded.
Anchiornis is a genus of small, four-winged paravian dinosaurs, with only one known species, the type species Anchiornis huxleyi, named for its similarity to modern birds. The genus name Anchiornis derives from Greek words meaning "near bird", and huxleyi refers to Thomas Henry Huxley, a contemporary of Charles Darwin.
Anchiornis fossils have been found only in the Tiaojishan Formation of Liaoning, China, in rocks dated to the Late Jurassic, about 160 million years ago. It is known from hundreds of specimens, and given the exquisite preservation of some of these fossils, it became the first Mesozoic dinosaur species for which almost the entire life appearance could be determined, and an important source of information on the early evolution of birds.
Discovery and history
The first known fossil of Anchiornis (its type specimen) was dug up in the Yaolugou area of Jianchang County, Liaoning, China. These rocks have been difficult to date, but most studies have concluded that they belong to the Tiaojishan Formation of rocks dated to the late Jurassic period (Oxfordian age), 160.89 to 160.25 million years old. Anchiornis was studied and described by paleontologist Xu Xing and colleagues in a paper accepted to the Chinese Science Bulletin in 2009. The specimen is currently in the collection of the Institute of Vertebrate Paleontology and Paleoanthropology with the catalogue number IVPP V14378. It is an articulated skeleton missing the skull, part of the tail, and the right forelimb. The name Anchiornis huxleyi was chosen by Xu and colleagues in honor of Thomas Henry Huxley, an early proponent of biological evolution, and one of the first to propose a close evolutionary relationship between birds and dinosaurs. The generic name Anchiornis comes from combining the Ancient Greek words for "nearby" and "bird", because it was interpreted as important in filling a gap in the transition between the body plans of birds and dinosaurs.
A second specimen came to light around the same time, and was given to a team of scientists from Shenyang Normal University by a local farmer when they were excavating at the Yaolugou dig site. According to the farmer, this second specimen had been found nearby in the area of Daxishan, also from Tiaojishan Formation rocks of about the same age as the first Anchiornis. Two scientists visited the site in order to compare the new fossil with the rock types found there, and were able to confirm that the new specimen probably did come from the area the farmer described. They were able to dig up several fish fossils and a third Anchiornis fossil. The farmer's fossil underwent study which was published on September 24, 2009, in the journal Nature. It was assigned the catalogue number in the Liaoning Paleontological Museum. It is larger and much more complete than the first specimen, and preserved long wing feathers on the hands, arms, legs and feet, showing that it was a four-winged dinosaur similar to Microraptor.
While only a few specimens have been described in detail, many more have been identified and are held in both private collections and museums. One of these, a nearly complete skeleton missing the tail, also preserving extensive feather remains, was reported in 2010. This fossil also showed evidence that Anchiornis had a feathered crest on its head, and was used to determine the animal's life coloration. It is housed in the Beijing Museum of Natural History with the specimen number BMNHC PH828. Another specimen from the same fossil quarry as the type specimen was found by a local fossil dealer and sold to the Yizhou Fossil & Geology Park and catalogued there as YTGP-T5199. This fossil, a nearly complete skeleton, was prepared and studied by scientists at the Geology Park and identified as an Anchiornis. It was then used for a scanning electron microscope study of Anchiornis feather microstructure. The study also examined the well-preserved melanosomes of the feathers to determine their color. The scientists involved in the study found that the coloration found for this specimen was different than the color reported for BMNHC PH828, and they noted that the BMNHC specimen may not in fact be Anchiornis, as it was described before similar species from the same formation had been discovered.
The Shandong Tianyu Museum of Nature in Pingyi County, China, for example, was reported to hold 255 specimens of Anchiornis in its collections in 2010. Among their collection is a very well preserved fossil with visible color patterns, catalogued as STM 0-214. While this specimen has yet to be fully described, it was photographed for a 2011 article in National Geographic and was used in a study of Anchiornis covert feathers and wing anatomy the following year.
Description
Anchiornis huxleyi was a small, bipedal theropod dinosaur with a triangular skull bearing several details in common with dromaeosaurids, troodontids, and primitive avialans. Like other early paravians, Anchiornis was small, about the size of a crow. It had long, wing-bearing arms, long legs, and a long tail. Like all paravians, it was covered in feathers, though it also had scales on certain parts of the body. The wings, legs, and tail supported long but relatively narrow vaned feathers. Two types of simpler, downy (plumaceous) feathers covered the rest of the body, as in Sinornithosaurus: down feathers made up of filaments attached at their bases, and more complex down feathers with barbs attached along a central quill. Long, simple feathers covered almost the entire head and neck, torso, upper legs, and the first half of the tail. The rest of the tail bore pennaceous tail feathers (rectrices). Long feathers on the head (crown) may have formed a crest. While the first specimen of Anchiornis preserved only faint traces of feathers around the preserved portion of the body, many more well-preserved fossils have since been found. Studies of Anchiornis specimens using laser fluorescence have revealed not only more details of the feathers, but also of the skin and muscle tissue. Taken together, this evidence has given scientists a nearly complete picture of Anchiornis anatomy. Additional studies indicate that Anchiornis had body plumage that consisted of short quills with long and independent, flexible barbs. These barbs stuck out from the quills at low angles on two opposing blades. This also gave each feather an overall forked shape and resulted in the theropod possessing a softer textured and "shaggier" appearing plumage than is seen in modern birds. 'Shaggy' contour feathers probably influenced thermoregulatory and water repellence abilities, and, in combination with open-vaned wing feathers, would have decreased aerodynamic efficiency.
The holotype, belonging to a subadult or young adult individual, measured long and weighed . The largest specimens measured in total body length, nearly in wingspan and weighed about .
Wings
Like other early paravians, Anchiornis had large wings made up of pennaceous feathers attached to the arm and hand. The wing of Anchiornis was composed of 11 primary feathers and 10 secondary feathers. The primary feathers in Anchiornis were about as long as the secondaries, and formed a rounded wing. The wing feathers had curved but symmetrical central quills, were relatively small and thin, and had rounded tips, all indicating poor aerodynamic ability. In the related dinosaurs Microraptor and Archaeopteryx, the longest wing feathers were closest to the tip of the wing, making the wings appear relatively long and pointed. However, in Anchiornis, the longest wing feathers were those nearest the wrist, making the wing broadest in the middle and tapering near the tip for a more rounded, less flight-adapted profile.
Like other maniraptorans, Anchiornis had a propatagium, a flap of skin connecting the wrist to the shoulder and rounding out the front edge of the wing. In Anchiornis, this part of the wing was covered in covert feathers which smoothed the wing and covered the gaps between the larger primary and secondary feathers. However, unlike modern birds, the covert feathers of Anchiornis were not arranged in tracts or rows. The arrangement of the covert feathers was also more primitive in Anchiornis than in birds and more advanced paravians. In modern birds, the coverts usually cover only the upper portion of the wing, with most of the wing surface made up of uncovered flight feathers. In some Anchiornis fossils, on the other hand, several layers of covert feathers seem to extend down to cover most of the wing's surface, so that the wing is essentially made of multiple layers of feathers, rather than a layer of broad feathers with only their bases hidden by layers of coverts. This multi-layered wing arrangement might have helped strengthen the wing, considering that the primary and secondary feathers themselves were narrow and weak.
The wing included three clawed fingers; however, unlike in some more primitive theropods, the longest two fingers were not separate, but were bound together by the skin and other tissue forming the wing, so Anchiornis was functionally two-fingered. These bound fingers were incorporated into a post-patagium, or flap of skin and other tissues that helped support the bases of the main wing feathers. Like the toes, the skin around the bottom of the fingers was covered in tiny, rounded scales. Unlike the toes, the flesh around the underside of the finger bones was twice as thick as the bones themselves and lacked distinct pads; instead, the fingers were straight and smooth without any major creases at the joints. Scales and skin around the fingers are very rarely preserved in fossils of early pennaraptorans; the only notable exceptions are Anchiornis and Caudipteryx, the latter of which had similarly thick, scaly fingers associated with its wings.
Legs
In addition to the front wings, Anchiornis had long, vaned feathers on the hind legs. This has led many scientists to call Anchiornis a four-winged dinosaur, along with similar animals like Microraptor and Sapeornis. However, the feathers on the hind legs in Anchiornis did not have the shape or arrangement expected from flight feathers, and it is likely that their primary role was in display rather than flight.
Anchiornis had very long legs, which is usually an indication that an animal is a strong runner. However, the extensive leg feathers indicate that this may be a vestigial trait, as running animals tend to have reduced, not increased, hair or feathers on their legs. Like most paravians, Anchiornis had four toes on the foot, with the third and fourth toes the longest. The first toe, or hallux, was not reversed as in perching species. The hindwings of Anchiornis were also shorter than those of Microraptor, and were made up of 12 to 13 flight feathers anchored to the tibia (lower leg) and 10 to 11 to the tarsus (upper foot). Also unlike Microraptor, the hindwing feathers were longest closer to the body, with the foot feathers being short and directed downward, almost perpendicular to the foot bones.
Unlike many other paravians, the feet of Anchiornis (except for the claws) were completely covered in feathers, though these were much shorter than the ones making up the hindwing. Some specimens have preserved scales on the toes, tarsus, and even lower leg (tibia), suggesting that scales existed beneath the feathers. The undersides of the toes were formed into fleshy pads with distinct creases at the joints. The foot pads were covered in small, pebble-like scales. Scales were also present on the top of the feet, but these are very hard to see in all known fossils.
Color
In 2010, a team of scientists examined numerous points among the feathers of an extremely well-preserved Anchiornis specimen in the Beijing Museum of Natural History to survey the distribution of melanosomes, the pigment cells that give feathers their color. By studying the types of melanosomes and comparing them with those of modern birds, the scientists were able to map the specific colors and patterning present on this Anchiornis when it was alive. Though this technique had been used and described for isolated bird feathers and portions of other dinosaurs (such as the tail of Sinosauropteryx), Anchiornis became the first Mesozoic dinosaur for which almost the entire life coloration was known (note that the tail of this specimen was not preserved). The study found that most of the body feathers of this Anchiornis specimen were gray and black. The crown feathers were mainly rufous with a gray base and front, and the face had rufous speckles among predominantly black head feathers. The forewing and hindwing feathers were white with black tips. The coverts (shorter feathers covering the bases of the long wing feathers) were gray, contrasting with the mainly white main wings. The larger coverts of the wing were also white with gray or black tips, forming rows of darker dots along the mid-wing. These took the form of dark stripes or even rows of dots on the outer wing (primary feather coverts) but a more uneven array of speckles on the inner wing (secondary coverts). The shanks of the legs were gray other than the long hindwing feathers, and the feet and toes were black.
In 2015, a second Anchiornis fossil at the Yizhou Fossil & Geology Park was subjected to a similar study that included a survey of melanosome shapes across all the feathers. In contrast to the 2010 study, only gray-black type melanosomes were found. Even when the crown feathers were examined, none of the rounder, rufous-type melanosomes were seen. The scientists who conducted this second study suggested several possible explanations for this discrepancy. First, the different preservation of melanosomes or different investigative techniques might have influenced the results of the original study. Second, because the Beijing Museum specimen was smaller, it is possible that the rufous color was replaced as these animals aged. Third, it is possible that there were regional differences or even different species of Anchiornis which had different color patterns in their plumage.
Classification
When it was first discovered, the scientists who studied Anchiornis conducted a phylogenetic analysis and concluded that it was an early member of the group Avialae, along with Archaeopteryx. Members of Avialae, called avialans, are all more closely related to modern birds than they are to dromaeosaurid and troodontid dinosaurs, though the earliest and most primitive members of all three groups are extremely similar to each other, which makes it difficult to sort out exactly which of these three main paravian branches they belong to.
The second specimen of Anchiornis was more complete than the first, and preserved several features which led Hu Dongyu and his colleagues to reclassify Anchiornis as a troodontid. Several more studies using similar analyses have also found Anchiornis to be a troodontid, though there have been exceptions. One study found Anchiornis to be a member of Archaeopterygidae, and it, along with Archaeopteryx, was considered more primitive than dromaeosaurids, troodontids, or avialans. In 2015, Sankar Chatterjee placed Anchiornis along with Microraptor and other four-winged paravians in a group he called "Tetrapterygidae", just outside the Avialae, though this was not supported with a phylogenetic analysis. More comprehensive studies suggested that Anchiornis may have been an avialan after all, though new finds and updated versions of the same study later reversed this finding, concluding that Anchiornis was most likely a basal member of the clade Paraves, just outside the clade that includes dromaeosaurids, troodontids, and avialans.
In a 2017 re-evaluation of the Haarlem Archaeopteryx specimen, Anchiornis was found to group with other genera such as Eosinopteryx and Xiaotingia, and was placed alongside these relatives in the family Anchiornithidae.
Paleobiology
Anchiornis is notable for its proportionally long forelimbs, which measured 80% of the total length of the hindlimbs. This is similar to the condition in early avians such as Archaeopteryx, and the authors pointed out that long forelimbs are necessary for flight. Anchiornis also had a more avian wrist than other non-avialan theropods. The authors initially speculated that it would have been possible for Anchiornis to fly or glide. However, further finds showed that the wings of Anchiornis, while well-developed, were short compared to those of later species like Microraptor, with relatively short primary feathers that had rounded, symmetrical tips, unlike the pointed, aerodynamically proportioned feathers of Microraptor. A 2016 study of potential flight performance in early paravians concluded that while juvenile Anchiornis specimens may have been able to use their wings to assist running up an incline, and could possibly have achieved flapping flight if a very high-angle flapping wing stroke was used, the larger adult specimens would not have gained any aerodynamic benefit from their wings: they were simply too heavy compared to their total wing area. The same study found that flapping the wings while running would have resulted in a small (10%) increase in running speed. Similarly, use of the wings during leaping would have resulted in a 15 to 20% increase in height and distance. Notably, Anchiornis seems to have lacked an ossified breastbone (sternum); the sternum may have been made of cartilage rather than bone, as in more primitive theropods.
Anchiornis had hindleg proportions more like those of more primitive theropod dinosaurs than avialans, with long legs that would normally indicate a fast-running lifestyle. However, the legs, feet, and toes of Anchiornis were covered in feathers, including long feathers on the lower legs similar to those in the hindwings of Microraptor, which may have slowed its running speed. In modern birds, especially those that live on the ground, the lower legs tend to show reduction or even loss of feathers. The hind wings of Anchiornis were smaller and made of more curved, symmetrical feathers than those of Microraptor, suggesting that they were used mainly for display rather than flight. However, they might still have granted the animal some kind of aerodynamic advantage, even if their primary purpose was for display or some other function.
The skeletal structure of Anchiornis is similar to Eosinopteryx, which appears to have been a capable runner due to its uncurved rear claws and absence of flight feathers on its tail or lower legs. Anchiornis shared a similar body plan and the same ecosystem as Eosinopteryx, suggesting different niches and a complex picture for the origin of flight.
Like many modern birds, Anchiornis exhibited a complex pattern of coloration with different colors in speckled patterns across the body and wings, or "within- and among-feather plumage coloration." In modern birds, such color patterning is used in communication and display, either to members of the same species (e.g. for mating or territorial threat display) or to threaten and warn off competing or predatory species.
Feeding
A 2018 study reported gastric pellets in association with Anchiornis specimens; some of the Anchiornis were even preserved with pellets still inside their bodies. Anchiornis is the earliest theropod known to have produced pellets. The pellets contained lizard bones and ptycholepid fish scales.
Tardigrade

Tardigrades, known colloquially as water bears or moss piglets, are a phylum of eight-legged segmented micro-animals. They were first described by the German zoologist Johann August Ephraim Goeze in 1773, who called them kleiner Wasserbär ('little water bear'). In 1776, the Italian biologist Lazzaro Spallanzani named them Tardigrada, which means 'slow walker'.
They live in diverse regions of Earth's biosphere: mountaintops, the deep sea, tropical rainforests, and the Antarctic. Tardigrades are among the most resilient animals known, with individual species able to survive extreme conditions – such as exposure to extreme temperatures, extreme pressures (both high and low), air deprivation, radiation, dehydration, and starvation – that would quickly kill most other forms of life. Tardigrades have survived exposure to outer space.
There are about 1,500 known species in the phylum Tardigrada, a part of the superphylum Ecdysozoa. The earliest known fossil is from the Cambrian, some 500 million years ago. They lack several of the Hox genes found in arthropods, and the middle region of the body corresponding to an arthropod's thorax and abdomen. Instead, most of their body is homologous to an arthropod's head.
Tardigrades are usually about long when fully grown. They are short and plump, with four pairs of legs, each ending in claws (usually four to eight) or sticky pads. Tardigrades are prevalent in mosses and lichens and can readily be collected and viewed under a low-power microscope, making them accessible to students and amateur scientists. Their clumsy crawling and their well-known ability to survive life-stopping events have brought them into science fiction and popular culture including items of clothing, statues, soft toys and crochet patterns.
Description
Body structure
Tardigrades have a short plump body with four pairs of hollow unjointed legs. Most range from in length, although the largest species may reach . The body cavity is a haemocoel, an open circulatory system, filled with a colourless fluid. The body covering is a cuticle that is replaced when the animal moults; it contains hardened (sclerotised) proteins and chitin but is not calcified. Each leg ends in one or more claws according to the species; in some species, the claws are modified as sticky pads. In marine species, the legs are telescopic. There are no lungs, gills, or blood vessels, so tardigrades rely on diffusion through the cuticle and body cavity for gas exchange.
Nervous system and senses
The tardigrade nervous system has a pair of ventral nerve cords with a pair of ganglia serving each pair of legs. The nerve cords end near the mouth at a pair of subpharyngeal (or suboesophageal) ganglia. These are connected by paired commissures (either side of the tube from the mouth to the pharynx) to the dorsally located cerebral ganglion or 'brain'. Also in the head are two eyespots in the brain, and several sensory cirri and pairs of hollow antenna-like clavae which may be chemoreceptors.
The tardigrade Dactylobiotus dispar can be trained by classical conditioning to curl up into the defensive 'tun' state in response to a blue light associated with a small electric shock, an aversive stimulus. This demonstrates that tardigrades are capable of learning.
Locomotion
Although the body is flexible and fluid-filled, locomotion does not operate mainly hydrostatically. Instead, as in arthropods, the muscles (sometimes just one or a few cells) work in antagonistic pairs that make each leg step backwards and forwards; there are also some flexors that work against hydrostatic pressure of the haemocoel. The claws help to stop the legs sliding during walking, and are used for gripping.
Feeding and excretion
Tardigrades feed by sucking animal or plant cell fluids, or on detritus. A pair of stylets pierce the prey; the pharynx muscles then pump the fluids from the prey into the gut. A pair of salivary glands secrete a digestive fluid into the mouth, and produce replacement stylets each time the animal moults. Non-marine species have excretory Malpighian tubules where the intestine joins the hindgut. Some species have excretory or other glands between or at the base of the legs.
Reproduction and life cycle
Most tardigrade species have separate male and female animals, which copulate by a variety of methods. The females lay eggs; those of Austeruseus faeroensis are spherical, 80 μm in diameter, with a knobbled surface. In other species the eggs can be ovoid, as in Hypsibius annulatus, or may be spherical with pyramidal or bottle-shaped surface ornamentation. Some species appear to have no males, suggesting that parthenogenesis is common.
Both sexes have a single gonad (an ovary or testis) located above the intestine. A pair of ducts run from the testis, opening through a single gonopore in front of the anus. Females have a single oviduct opening either just above the anus or directly into the rectum, which forms a cloaca.
The male may place his sperm into the cloaca, or may penetrate the female's cuticle and place the sperm straight into her body cavity, for it to fertilise the eggs directly in the ovary. A third mechanism in species such as H. annulatus is for the male to place the sperm under the female's cuticle; when she moults, she lays eggs into the cast cuticle, where they are fertilised. Courtship occurs in some aquatic tardigrades, with the male stroking his partner with his cirri to stimulate her to lay eggs; fertilisation is then external.
Up to 30 eggs are laid, depending on the species. Terrestrial tardigrade eggs have drought-resistant shells. Aquatic species either glue their eggs to a substrate or leave them in a cast cuticle. The eggs hatch within 14 days, the hatchlings using their stylets to open their egg shells.
Ecology and life history
Tardigrades as a group are cosmopolitan, living in many environments on land, in freshwater, and in the sea. Their eggs and resistant life-cycle stages (cysts and tuns) are small and durable enough to enable long-distance transport, whether on the feet of other animals or by the wind.
Individual species have more specialised distributions, many being both regional and limited to a single type of habitat, such as mountains. Some species have wide distributions: for instance, Echiniscus lineatus is pantropical. Halobiotus is restricted to cold Holarctic seas. Species such as Borealibius and Echiniscus lapponicus have a discontinuous distribution, being both polar and on tall mountains. This could be a result of long-distance transport by the wind, or the remains of an ancient geographic range when the climate was colder. A small percentage of species may be cosmopolitan.
The majority of species live in damp habitats such as on lichens, liverworts, and mosses, and directly in soil and leaf litter. In freshwater and the sea they live on and in the bottom, such as in between particles or around seaweeds. More specialised habitats include hot springs and as parasites or commensals of marine invertebrates. In soil there can be as many as 300,000 per square metre; on mosses they can reach a density of over 2 million per square metre.
Tardigrades are host to many microbial symbionts and parasites. In glacial environments, the bacterial genera Flavobacterium, Ferruginibacter, and Polaromonas are common in tardigrades' microbiomes. Many tardigrades are predatory; Milnesium lagniappe includes other tardigrades such as Macrobiotus acadianus among its prey. Tardigrades consume prey such as nematodes, and are themselves preyed upon by soil arthropods including mites, spiders, and cantharid beetle larvae.
With the exception of 62 exclusively freshwater species, all non-marine tardigrades are found in terrestrial environments. Because the majority of the marine species belong to Heterotardigrada, the most ancestral class, the phylum evidently has a marine origin.
Environmental tolerance
Tardigrades are not considered universally extremophilic because they are not adapted to exploit the extreme conditions in which their environmental tolerance has been measured, only to endure them. Their chances of dying increase the longer they are exposed to these extreme environments, whereas true extremophiles thrive there.
Dehydrated 'tun' state
Tardigrades are capable of suspending their metabolism, going into a state of cryptobiosis. Terrestrial and freshwater tardigrades are able to tolerate long periods when water is not available, such as when the moss or pond they are living in dries out, by drawing their legs in and forming a desiccated cyst, the cryptobiotic 'tun' state, where no metabolic activity takes place. In this state, they can go without food or water for several years. Further, in that state they become highly resistant to environmental stresses, including temperatures from as low as to as much as (at least for short periods of time), lack of oxygen, vacuum, ionising radiation, and high pressure.
Surviving other stresses
Marine tardigrades such as Halobiotus crispae alternate each year (cyclomorphosis) between an active summer morph and a hibernating winter morph (a pseudosimplex) that can resist freezing and low salinity, but which remains active throughout. Reproduction however takes place only in the summer morph.
Tardigrades can survive impacts up to about , and momentary shock pressures up to about .
Exposure to space
Tardigrades have survived exposure to space. In 2007, dehydrated tardigrades were taken on the FOTON-M3 mission and exposed to vacuum, or to both vacuum and solar ultraviolet, for 10 days. Back on Earth, more than 68% of the subjects protected from ultraviolet were reanimated by rehydration, and many produced viable embryos.
In contrast, hydrated samples exposed to vacuum and solar ultraviolet survived poorly, with only three individuals of Milnesium tardigradum surviving. The space vacuum did not much affect egg-laying in either Richtersius coronifer or M. tardigradum, whereas UV radiation reduced egg-laying in M. tardigradum. In 2011, tardigrades flew to the International Space Station on the STS-134 mission, showing that they could survive microgravity and cosmic radiation, and should be suitable model organisms.
In 2019, a capsule containing tardigrades in a cryptobiotic state was on board the Israeli lunar lander Beresheet which crashed on the Moon.
Damage protection proteins
Tardigrades' ability to remain desiccated for long periods of time was thought to depend on high levels of the sugar trehalose, common in organisms that survive desiccation. However, tardigrades do not synthesize enough trehalose for this function. Instead, tardigrades produce intrinsically disordered proteins in response to desiccation. Three of these are specific to tardigrades and have been called tardigrade-specific proteins. These may protect membranes from damage by associating with the polar heads of lipid molecules. The proteins may also form a glass-like matrix that protects cytoplasm from damage during desiccation.
Anhydrobiosis in response to desiccation has a complex molecular basis; in Hypsibius exemplaris, 1,422 genes are upregulated during the process. Of those, 406 are specific to tardigrades, 55 being intrinsically disordered and the others globular with unknown functions.
Tardigrades possess a cold shock protein; Maria Kamilari and colleagues propose (2019) that this may serve "as a RNA-chaperone involved in regulation of translation [of RNA code to proteins] following freezing."
Tardigrade DNA is protected from radiation by the Dsup ("damage suppressor") protein. The Dsup proteins of Ramazzottius varieornatus and H. exemplaris promote survival by binding to nucleosomes and protecting chromosomal DNA from hydroxyl radicals. The Dsup protein of R. varieornatus confers resistance to ultraviolet-C by upregulating DNA repair genes.
Some of these proteins are of interest to biomedical research. Potential is seen in Dsup's ability to protect against DNA damage; in the ability of CAHS (cytosolic-abundant heat-soluble) and LEA (late embryogenesis abundant) proteins to protect from desiccation; and in some CAHS proteins that could serve to prevent programmed cell death (apoptosis).
Taxonomic history
In 1773, Johann August Ephraim Goeze named the tardigrade kleiner Wasserbär, meaning 'little water-bear' in German (today, Germans often call them Bärtierchen, 'little bear-animal'). The name water bear comes from the way they walk, reminiscent of a bear's gait. The name Tardigrada means 'slow walker' and was given by Lazzaro Spallanzani in 1776. In 1834, C.A.S. Schulze gave the first formal description of a tardigrade, Macrobiotus hufelandi, in a work subtitled "a new animal from the crustacean class, capable of reviving after prolonged asphyxia and dryness". This was soon followed by descriptions of species including Echiniscus testudo, Milnesium tardigradum, Hypsibius dujardini, and Ramazzottius oberhaeuseri by L.M.F. Doyère in 1840. All four of these are now the nominal species for higher tardigrade taxa. The zoologist Hartmut Greven wrote that "The unanimous opinion of all later researchers is that Doyère's 1842 dissertation is an indisputable milestone in tardigradology".
Ferdinand Richters worked on the taxonomy of tardigrades from 1900 to 1913, with studies of Nordic, Arctic, marine, and South American species; he described many species at this time, and in 1926 proposed the class Eutardigrada. In 1927, Ernst Marcus created the class Heterotardigrada, and in 1929 he published a monograph on tardigrades which Greven describes as "comprehensive" and "unsurpassed today". In 1937 Gilbert Rahm, studying the fauna of Japan's hot springs, distinguished the class Mesotardigrada, with a single species Thermozodium esakii; its validity is now doubted.
In 1962, Giuseppe Ramazzotti proposed the phylum Tardigrada.
In 2019, Noemi Guil and colleagues proposed to promote the order Apochela to the new class Apotardigrada. There are some 1,488 described species of tardigrades, organised into 160 genera and 36 families.
Evolution
Evolutionary history
Tardigrade fossils are rare. The only known specimens are those from mid-Cambrian deposits in Siberia (in the Orsten fauna) and a few specimens in amber from the Cretaceous of North America and the Neogene of Dominica. The Siberian fossils differ from living tardigrades in several ways. They have three pairs of legs rather than four, they have a simplified head morphology, and they have no posterior head appendages, but they share with modern tardigrades their columnar cuticle construction. Scientists think they represent a stem group of living tardigrades.
Multiple lines of evidence show that tardigrades are secondarily miniaturised from a larger ancestor, probably a lobopodian, perhaps resembling the mid-Cambrian Aysheaia, which many analyses place close to the divergence of the tardigrade lineage. An alternative hypothesis derives Tactopoda from a clade encompassing dinocaridids and Opabinia. The enigmatic panarthropod Sialomorpha, found in 30-million-year-old Dominican amber, while not a tardigrade, shows some apparent affinities. A 2023 morphological analysis concluded that luolishaniids, a group of Cambrian lobopodians, might be the tardigrades' closest known relatives.
The oldest remains of modern tardigrades are those of Milnesium swolenskyi, belonging to the living genus Milnesium, known from a Late Cretaceous (Turonian) specimen of New Jersey amber, around 90 mya. Another fossil species, Beorn leggi, belonging to the family Hypsibiidae, is known from a Late Campanian (~72 mya) specimen of Canadian amber. The related hypsibioidean Aerobius dactylus was found in the same amber piece. The youngest known fossil tardigrade genus, Paradoryphoribius, was discovered in amber dated to about 16 mya.
Morphological and molecular phylogenetics studies have attempted to define how tardigrades relate to other ecdysozoan groups; alternative placements have been proposed within the Panarthropoda. The Tactopoda hypothesis holds that Tardigrada are sister to Arthropoda; the Antennopoda hypothesis is that Tardigrada are sister to (Onychophora + Arthropoda); and the Lobopodia (sensu Smith & Goldstein 2017) hypothesis is that Tardigrada are sister to Onychophora. The relationships have been debated on the basis of conflicting evidence.
Genomics
Tardigrade genomes vary widely in size. Hypsibius exemplaris (part of the Hypsibius dujardini group) has a compact genome of 100 megabase pairs and a generation time of about two weeks; it can be cultured indefinitely and cryopreserved. The genome of Ramazzottius varieornatus, one of the most stress-tolerant species of tardigrades, is about half as big, at 55 Mb. About 1.6% of its genes are the result of horizontal gene transfer from other species, a modest proportion that does not imply any dramatic effect on the genome.
Genomic studies across different tardigrade groups help reconstruct the evolution of their genome, such as the relationship of tardigrade body segments to those of other Panarthropoda. A 2023 review concludes that despite the diversity of body plan among the Panarthropoda, the tardigrade body plan maps best with "a simple one-to-one alignment of anterior segments". Such studies may eventually reveal how they miniaturised themselves from larger ecdysozoans.
Tardigrades lack several of the Hox genes found in arthropods, and a large intermediate region of the body axis. In insects, this corresponds to the entire thorax and abdomen. Practically the whole body, except for the last pair of legs, is made up of just the segments that are homologous to the head region in arthropods. This implies that tardigrades evolved from an ancestral ecdysozoan with a longer body and more segments.
Phylogeny
In 2012, the phylogeny of the phylum was studied using molecular markers (ribosomal RNA), finding that the Heterotardigrada and Arthrotardigrada seemed to be paraphyletic.
In 2018, a report integrating multiple morphological and molecular studies concluded that while the Arthrotardigrada appear to be paraphyletic, the Heterotardigrada is an accepted clade. All the lower-level taxa have been much reorganised, but the major groupings remain in place.
In culture and society
Early 20th century beginnings
Possibly the first appearance of tardigrades in non-scientific literature is in the short story "Bathybia" by the geologist and explorer Douglas Mawson. Published in the 1908 book Aurora Australis and printed in the Antarctic, it deals with an expedition to the South Pole where the team encounters giant mushrooms and arthropods. The team watches a giant tardigrade fighting a similarly enormous rotifer; another giant water bear bites a man's toe, rendering him comatose for half an hour with its anaesthetic bite. Finally, a four-foot-long tardigrade, waking from hibernation, scares the narrator from his sleep, and he realizes it was all a dream.
Popularity
Tardigrades are common in mosses and lichens on walls and roofs, and can readily be collected and viewed under a low-power microscope. If they are dry, they can be reanimated on a microscope slide by adding a little water, making them accessible to beginning students and amateur scientists. Current Biology attributed their popularity to "their clumsy crawling [which] is about as adorable as can be." The zoologists James F. Fleming and Kazuharu Arakawa called them "a charismatic phylum". They have been famous for their ability to survive life-stopping events such as being dried out since Spallanzani first resuscitated them from some dry sediment in a gutter in the 18th century. In 2015, the astrophysicist and science communicator Neil deGrasse Tyson described Earth as "the planet of the tardigrades", and they were nominated for the American Name Society's Name of the Year Award. Live Science notes that they are popular enough to appear on merchandise like clothes, earrings, and keychains, with crochet patterns for people to make their own tardigrade. A Dutch artist created statues for St Eusebius' Church, Arnhem, of microscopic organisms including a tardigrade and a coronavirus.
From science to popular culture
The tardigrades' traits, including their ability to survive extreme conditions, have earned them a place in science fiction and other pop culture. The musician Cosmo Sheldrake imagines himself as a robust tardigrade in his 2015 "Tardigrade Song". He sings "If I were a tardigrade ... Pressure wouldn't squash me and fire couldn't burn ... I can live life in vacuums for years with no drink (A ha)".
The biologists Mark Blaxter and Arakawa Kazuharu describe tardigrades' transition to science fiction and fantasy as resulting in "rare but entertaining walk-on parts". They note that in the 2015 sci-fi horror film Harbinger Down, the protagonists have to deal with tardigrades that have mutated through Cold War experiments into intelligent and deadly shapeshifters.
In the 2017 series Star Trek: Discovery, the alien "Ripper" creature is a huge but, as The Routledge Handbook of Star Trek writes, "generally recognisable" version of a terrestrial tardigrade. The protagonist, the xeno-anthropologist Michael Burnham, explains that the Ripper can "incorporate foreign DNA into its own genome via horizontal gene transfer. When Ripper borrows DNA from the mycelium [of its symbiotic fungi], he's granted an all-access travel pass". The scholar of science in popular culture Lisa Meinecke, in Fighting for the Future: Essays on Star Trek: Discovery, writes that the animal shares some of the real tardigrade's characteristics, including "its physical resilience to extreme environmental" stresses. She adds that while taking on fungal DNA is "ostensibly grounded" in science, it equally carries a "mystical impetus of what [the French philosophers] Deleuze and Guattari call a becoming", an entanglement of species that changes those involved "and ties together all life". The border of that symbiosis is the "Outsider or Anomalous", which stabilises the system and embodies its future possibilities. The characters Burnham and Stamets see that the tardigrade plays this 'Outsider' role.
| Biology and health sciences | Ecdysozoa | null |
19818280 | https://en.wikipedia.org/wiki/Deuterostome | Deuterostome | Deuterostomes (from Greek: ) are bilaterian animals of the superphylum Deuterostomia (), typically characterized by their anus forming before the mouth during embryonic development. Deuterostomia is further divided into four phyla: Chordata, Echinodermata, Hemichordata, and the extinct Vetulicolia known from Cambrian fossils. The extinct clade Cambroernida is thought to be a member of Deuterostomia.
In deuterostomes, the developing embryo's first opening (the blastopore) becomes the anus and cloaca, while the mouth is formed at a different site later on. This was initially the group's distinguishing characteristic, but deuterostomy has since been discovered among protostomes as well. The deuterostomes are also known as enterocoelomates, because their coelom develops through pouching of the gut, enterocoely.
Deuterostomia's sister clade is Protostomia, animals that develop mouth first and whose digestive tract development is more varied. Protostomia includes the ecdysozoans and spiralians, as well as the extinct Kimberella.
Deuterostomia and Protostomia, together with their outgroup Xenacoelomorpha, constitute the large infrakingdom Bilateria, i.e. animals with bilateral symmetry and three germ layers.
Systematics
History of classification
Initially, Deuterostomia included the phyla Brachiopoda, Bryozoa, Chaetognatha, and Phoronida based on morphological and embryological characteristics. However, Deuterostomia was redefined in 1995 based on DNA molecular sequence analyses, leading to the removal of the lophophorates, which were later combined with other protostome animals to form the superphylum Lophotrochozoa. The arrow worms may also be deuterostomes, but molecular studies have placed them in the protostomes more often. Genetic studies have also revealed that deuterostomes have more than 30 genes not found in any other animal groups, but which yet are present in some marine algae and prokaryotes. This could mean that these are ancient genes that were lost in other organisms, or that a common ancestor acquired them through horizontal gene transfer.
Taxonomy
A consensus phylogeny of the deuterostomes is:
Superphylum Deuterostomia
Phylum Chordata
Subphylum Cephalochordata (lancelets)
Clade Olfactores
Subphylum Tunicata (tunicates)
Subphylum Vertebrata
Superclass Agnatha (jawless fish)
Infraphylum Gnathostomata (jawed fish)
Class Chondrichthyes (cartilaginous fish)
Superclass Osteichthyes (bony fish - includes tetrapods)
Clade Ambulacraria
Phylum Hemichordata
Class Enteropneusta (acorn worms)
Class Planctosphaeroidea
Class Pterobranchia
Phylum Echinodermata
Subphylum Asterozoa
Class Asteroidea (starfish)
Class Ophiuroidea (brittle stars)
Subphylum Blastozoa †
Subphylum Crinozoa (sea lilies and extinct relatives)
Subphylum Echinozoa
Class Echinoidea (sea urchins)
Class Holothuroidea (sea cucumbers)
There is a possibility that Ambulacraria is the sister clade to Xenacoelomorpha, and could form the Xenambulacraria group.
Characteristics
In deuterostomes, the developing embryo's first opening, the blastopore, becomes the anus, while the gut eventually tunnels through the embryo until it reaches the other side, forming an opening that becomes the mouth. This distinguishes them from protostomes, which have a variety of patterns of development.
In both deuterostomes and protostomes, a zygote first develops into a hollow ball of cells, called a blastula. In deuterostomes, the early divisions occur parallel or perpendicular to the polar axis. This is called radial cleavage, and also occurs in certain protostomes, such as the lophophorates.
Most deuterostomes display indeterminate cleavage, in which the developmental fate of the cells in the developing embryo is not determined by the identity of the parent cell. Thus, if the first four cells are separated, each can develop into a complete small larva; and if a cell is removed from the blastula, the other cells will compensate. This is the source of identical twins.
The mesoderm forms as evaginations of the developed gut that pinch off to form the coelom. This process is called enterocoely.
Another feature present in both the Hemichordata and Chordata is pharyngotremy — the presence of spiracles or gill slits into the pharynx, which is also found in some primitive fossil echinoderms (mitrates).
A hollow nerve cord is found in all chordates, including tunicates (in the larval stage). Some hemichordates also have a tubular nerve cord. In the early embryonic stage, it looks like the hollow nerve cord of chordates.
Both the hemichordates and the chordates have a thickening of the aorta, homologous to the chordate heart, which contracts to pump blood. This suggests a presence in the deuterostome ancestor of the three groups, with the echinoderms having secondarily lost it.
The highly modified nervous system of echinoderms obscures much about their ancestry, but several facts suggest that all present deuterostomes evolved from a common ancestor that had pharyngeal gill slits, a hollow nerve cord, circular and longitudinal muscles and a segmented body.
Origins and evolution
Bilateria, one of the five major lineages of animals, is split into two groups; the protostomes and deuterostomes. Deuterostomes consist of chordates (which include the vertebrates) and ambulacrarians. It seems likely that Kimberella was a member of the protostomes. That implies that the protostome and deuterostome lineages split long before Kimberella appeared, and hence well before the start of the Cambrian, i.e. during the earlier part of the Ediacaran Period (circa 635-539 Mya, around the end of global Marinoan glaciation in the late Neoproterozoic). It has been proposed that the ancestral deuterostome, before the chordate/ambulacrarian split, could have been a chordate-like animal with a terminal anus and pharyngeal openings but no gill slits, with an active suspension-feeding strategy.
The last common ancestor of the deuterostomes had lost all innexin diversity.
Fossil record
Deuterostomes have a rich fossil record with thousands of fossil species being found throughout the Phanerozoic. There are also a few earlier fossils that may represent deuterostomes, but these remain debated. The earliest of these disputed fossils are the tunicate-like organisms Burykhia and Ausia from the Ediacaran period. While these may in fact be tunicates, others have interpreted them as cnidarians or sponges, and as such their true affinity remains uncertain. Another Ediacaran fossil, Arkarua, may represent the earliest echinoderm, while Yanjiahella from the early Cambrian (Fortunian) period is another notable stem group echinoderm.
Fossils of one major deuterostome group, the echinoderms (whose modern members include sea stars, sea urchins and crinoids), are quite common from the start of Stage 3 of the Cambrian, starting with forms such as Helicoplacus. Two other Cambrian Stage 3 (521-514 mya) species, Haikouichthys and Myllokunmingia from the Chengjiang biota, are the earliest body fossils of fish, whereas Pikaia, discovered much earlier but from the Mid Cambrian Burgess Shale, is now regarded as a primitive chordate. The Mid Cambrian fossil Rhabdotubus johanssoni has been interpreted as a pterobranch hemichordate, whereas Spartobranchus is an acorn worm from the Burgess Shale, providing proof that all main lineages were already well established 508 mya.
On the other hand, fossils of early chordates are very rare, as non-vertebrate chordates have no bone tissue or teeth, and fossils of no Post-Cambrian non-vertebrate chordates are known aside from the Permian-aged Paleobranchiostoma, trace fossils of the Ordovician colonial tunicate Catellocaula, and various Jurassic-aged and Tertiary-aged spicules tentatively attributed to ascidians. Fossils of Echinodermata are very common after the Cambrian. Fossils of Hemichordata are less common, except for graptolites until the Lower Carboniferous.
Phylogeny
The deuterostomes are considered to be monophyletic. The ancestral deuterostome was most likely a benthic worm that possessed a cartilaginous skeleton, a central nervous system, and gill slits. Approximate dates for clades are given in millions of years ago (mya).
| Biology and health sciences | General classifications | Animals |
19818410 | https://en.wikipedia.org/wiki/Phoronid | Phoronid | Phoronids (scientific name Phoronida, sometimes called horseshoe worms) are a small phylum of marine animals that filter-feed with a lophophore (a "crown" of tentacles), and build upright tubes of chitin to support and protect their soft bodies. They live in most of the oceans and seas, including the Arctic Ocean but excluding the Antarctic Ocean, and between the intertidal zone and about 400 meters down. Most adult phoronids are 2 cm long and about 1.5 mm wide, although the largest are 50 cm long.
The name of the group comes from its type genus: Phoronis.
Overview
The bottom end of the body is an ampulla (a flask-like swelling), which anchors the animal in the tube and enables it to retract its body very quickly when threatened. When the lophophore is extended at the top of the body, cilia (little hairs) on the sides of the tentacles draw food particles to the mouth, which is inside and slightly to one side of the base of the lophophore. Unwanted material can be excluded by closing a lid above the mouth or be rejected by the tentacles, whose cilia can switch into reverse. The food then moves down to the stomach, which is in the ampulla. Solid wastes are moved up the intestine and out through the anus, which is outside and slightly below the lophophore.
A blood vessel leads up the middle of the body from the stomach to a circular vessel at the base of the lophophore, and from there a single blind vessel runs up each tentacle. A pair of blood vessels near the body wall lead downward from the lophophore ring to the stomach and also to blind branches throughout the body. There is no heart, but the major vessels can contract in waves to move the blood. Phoronids do not ventilate their trunks with oxygenated water, but rely on respiration through the lophophore. The blood contains hemoglobin, which is unusual in such small animals and seems to be an adaptation to anoxic and hypoxic environments. The blood of Phoronis architecta carries twice as much oxygen per gram of body weight as a human's. Two metanephridia filter the body fluid, returning any useful products and dumping the remaining soluble wastes through a pair of pores beside the anus.
One species builds colonies by budding or by splitting into top and bottom sections, and all phoronids reproduce sexually from spring to autumn. The eggs of most species form free-swimming actinotroch larvae, which feed on plankton. An actinotroch settles to the seabed after about 20 days and then undergoes a radical change in 30 minutes: the larval tentacles are replaced by the adult lophophore; the anus moves from the bottom to just outside the lophophore; and this changes the gut from upright to a U-bend, with the stomach at the bottom of the body. One species forms a "slug-like" larva, and the larvae of a few species are not known. Phoronids live for about one year.
Some species live separately, in vertical tubes embedded in soft sediment, while others form tangled masses buried in or encrusting rocks and shells. Species able to bore into materials like limestone and dead corals do so by chemical secretions. In some habitats populations of phoronids reach tens of thousands of individuals per square meter. The actinotroch larvae are familiar among plankton, and sometimes account for a significant proportion of the zooplankton biomass. Predators include fish, gastropods (snails), and nematodes (tiny roundworms). One phoronid species is unpalatable to many epibenthic predators. Various parasites infest phoronids' body cavities, digestive tract and tentacles. It is unknown whether phoronids have any significance for humans. The International Union for Conservation of Nature (IUCN) has not listed any phoronid species as endangered.
As of 2010 there are no indisputable body fossils of phoronids. There is good evidence that phoronids created trace fossils found in the Silurian, Devonian, Permian, Jurassic and Cretaceous periods, and possibly in the Ordovician and Triassic. Phoronids, brachiopods and bryozoans (ectoprocts) have collectively been called lophophorates, because all use lophophores to feed. From about the 1940s to the 1990s, family trees based on embryological and morphological features placed lophophorates among or as a sister group to the deuterostomes, a super-phylum which includes chordates and echinoderms. While a minority adhere to this view, most researchers now regard phoronids as members of the protostome super-phylum Lophotrochozoa. Although analysts using molecular phylogeny are confident that members of Lophotrochozoa are more closely related to each other than to non-members, the relationships between members are mostly unclear. Some analyses regard phoronids and brachiopods as sister-groups, while others place phoronids as a sub-group within brachiopoda.
Comparison of similar phyla
Description
Body structure
Most adult phoronids are 2 to 20 cm long and about 1.5 mm wide,
although the largest are 50 cm long. Their skins have no cuticle but secrete rigid tubes of chitin, similar to the material used in arthropods' exoskeletons, and sometimes reinforced with sediment particles and other debris. Most species' tubes are erect, but those of Phoronis vancouverensis are horizontal and tangled. Phoronids can move within their tubes but never leave them. The bottom end of the body is an ampulla (a flask-like swelling in a tube-like structure), which anchors the animal in the tube and enables it to retract its body when threatened, reducing the body to 20 percent of its maximum length. Longitudinal muscles retract the body very quickly, while circular muscles slowly extend the body by compressing the internal fluid.
For feeding and respiration each phoronid has at the top end a lophophore, a "crown" of tentacles with which the animal filter-feeds. In small species the "crown" is a simple circle, in medium-size species it is bent into the shape of a horseshoe with tentacles on the outer and inner sides, and in the largest species the ends of the horseshoe wind into complex spirals. These more elaborate shapes increase the area available for feeding and respiration. The tentacles are hollow, held upright by fluid pressure, and can be moved individually by muscles.
The mouth is inside the base of the crown of tentacles but to one side. The gut runs from the mouth to one side of the stomach, in the bottom of the ampulla. The intestine runs from the stomach, up the other side of the body, and exits at the anus, outside and a little below the crown of tentacles. The gut and intestine are both supported by two mesenteries (partitions that run the length of the body) connected to the body wall, and another mesentery connects the gut to the intestine.
The body is divided into coeloms, compartments lined with mesothelium. The main body cavity, under the crown of tentacles, is called the metacoelom, and the tentacles and their base share the mesocoelom. Above the mouth is the epistome, a hollow lid which can close the mouth. The cavity in the epistome is sometimes called the protocoelom, although other authors disagree that it is a coelom and Ruppert, Fox and Barnes think it is built by a different process.
The tube comprises a three-layered organic inner cylinder, and an agglutinated external layer.
Feeding, circulation and excretion
When the lophophore is extended, cilia (little hairs) on the sides of the tentacles draw water down between the tentacles and out at the base of the lophophore. Shorter cilia on the inner sides of the tentacles flick food particles into a groove in a circle under and just inside the tentacles, and cilia in the groove push the particles into the mouth. Phoronids direct their lophophores into the water current, and quickly reorient to maximize the food-catching area when currents change. Their diet includes algae, diatoms, flagellates, peridinians, small invertebrate larvae, and detritus. Unwanted material can be excluded by closing the epistome (lid above the mouth) or be rejected by the tentacles, whose cilia can switch into reverse. The gut uses cilia and muscles to move food towards the stomach and secretes enzymes that digest some of the food, but the stomach digests the majority of the food. Phoronids also absorb amino acids (the building blocks of proteins) through their skins, mainly in summer. Solid wastes are moved up the intestine and out through the anus, which is outside and slightly below the lophophore.
A blood vessel starts from the peritoneum (the membrane that loosely encloses the stomach), with blind capillaries supplying the stomach. The blood vessel leads up the middle of the body to a circular vessel at the base of the lophophore, and from there a single blind vessel runs up each tentacle. A pair of blood vessels near the body wall lead downward from the lophophore ring, and in most species these are combined into one a little below the lophophore ring. The downward vessel(s) leads back to the peritoneum, and also to blind branches throughout the body. There is no heart, but muscles in the major vessels contract in waves to move the blood. Unlike many animals that live in tubes, phoronids do not ventilate their trunks with oxygenated water, but rely on respiration by the lophophore, which extends above hypoxic sediments. The blood has hemocytes containing hemoglobin, which is unusual in such small animals and seems to be an adaptation to anoxic and hypoxic environments. The blood of Phoronis architecta carries as much oxygen per cm3 as that of most vertebrates; the blood's volume in cm3 per gm of body weight is twice that of a human.
Podocytes on the walls of the blood vessels perform first-stage filtration of soluble wastes into the main coelom's fluid. Two metanephridia, each with a funnel-like intake, filter the fluid a second time, returning any useful products to the coelom and dumping the remaining wastes through a pair of nephridiopores beside the anus.
Nervous system and movement
There is a nervous center between the mouth and anus, and a nerve ring at the base of the lophophore. The ring supplies nerves to the tentacles and, just under the skin, to the body-wall muscles. Phoronis ovalis has two nerve trunks under the skin, whereas other species have one. The trunk(s) have giant axons (nerves that transmit signals very fast) which co-ordinate the retraction of the body when danger threatens.
Except for retracting the body into the tube, phoronids have limited and slow movement: partial emerging from the tube; bending the body when extended; and the lophophore's flicking of food into the mouth.
Reproduction and lifecycle
Only the smallest species of horseshoe worms, Phoronis ovalis, naturally builds colonies by budding or by splitting into top and bottom sections which then grow into full bodies. In experiments, other species have split successfully, but only when both parts have enough gonadal (reproductive) tissue. All phoronids breed sexually from spring to autumn. Some species are hermaphroditic (have both male and female reproductive organs) but cross-fertilize (fertilize the eggs of other members), while others are dioecious (have separate sexes). The gametes (sperms and ova) are produced in the swollen gonads, around the stomach. The gametes swim through the metacoelom to the metanephridia. Sperm exit by the nephridiopores and some are captured by the lophophores of individuals of the same species. Species that lay small fertilized eggs release them into the water as plankton, while species with larger eggs brood them either in the body's tube or stuck in the center of the lophophore by adhesive. The brooded eggs are released to feed on plankton when they develop into larvae.
Development of the eggs is a mixture of deuterostome and protostome characteristics. Early divisions of the egg are holoblastic (the cells divide completely) and radial (they gradually form a stack of circles). The process is regulative (the fate of each cell depends on interaction with other cells, not on a rigid program in each cell), and experiments that divided early embryos produced complete larvae. Mesoderm is formed from mesenchyme originating from the archenteron.
The coelom is formed by schizocoely, and the blastopore (a dent in the embryo) becomes the mouth.
The slug-like larva of Phoronis ovalis, the only known species with a lecithotrophic (non-feeding) larva, lacks tentacles and swims for about 4 days, creeps on the seabed for 3 to 4 days, then bores into a carbonate floor. Nothing is known about three species. The remaining species develop free-swimming actinotroch larvae, which feed on plankton. The actinotroch is an upright cylinder with the anus at the bottom and fringed with cilia. At the top is a lobe or hood, under which are: a ganglion, connected to a patch of cilia outside the apex of the hood; a pair of protonephridia (smaller and simpler than the metanephridia in the adult); the mouth; and feeding tentacles that encircle the mouth. After swimming for about 20 days, the actinotroch settles on the seabed and undergoes a catastrophic metamorphosis (radical change) in 30 minutes: the hood and larval tentacles are absorbed and the juvenile body forms from the larva's metasomal sack.
The adult lophophore is created around the mouth, and by growing a ventral side that is extremely long compared to the dorsal side, the gut develops a U-bend so that the anus is just under and outside the lophophore. Finally the adult phoronid builds a tube.
Phoronids live for about one year.
Ecology
Phoronids live in all the oceans and seas including the Arctic and excepting the Antarctic Ocean, and appear between the intertidal zone and about 400 meters down. Some occur separately, in vertical tubes embedded in soft sediment such as sand, mud, or fine gravel. Others form tangled masses of many individuals buried in or encrusting rocks and shells. In some habitats populations of phoronids reach tens of thousands of individuals per square meter. The actinotroch larvae are familiar among plankton, and sometimes account for a significant proportion of the zooplankton biomass.
Phoronis australis bores into the wall of the tube of a cerianthid anemone, Ceriantheomorphe brasiliensis, and uses this as a foundation for building its own tube. One cerianthid can house up to 100 phoronids. In this unequal relationship, the anemone experiences no significant benefits nor harm, while the phoronid benefits from: a foundation for its tube; food (both animals are filter-feeders); and protection, as the cerianthid withdraws into its tube when danger threatens, and this alerts the phoronid to retract into its own tube.
Although predators of phoronids are not well known, they include fish, gastropods (snails), and nematodes (tiny roundworms). Phoronopsis viridis, which reaches densities of 26,500 per square meter on tidal flats in California (USA), is unpalatable to many epibenthic predators, including fish and crabs. The unpalatability is strongest in the top section, including the lophophore, which is exposed to predators when phoronids feed. When the lophophores were removed in an experiment, the phoronids were more palatable, but this effect reduced over 12 days as the lophophores regenerated. These broadly effective defenses, which appear unusual among invertebrates inhabiting soft sediment, may be important in allowing Phoronopsis viridis to reach high densities. Some parasites infest phoronids: progenetic metacercariae and cysts of trematodes in phoronids' coelomic cavities; unidentified gregarines in phoronids' digestive tract; and an ancistrocomid ciliate parasite, Heterocineta, in the tentacles.
It is unknown whether phoronids have any significance for humans. The International Union for Conservation of Nature (IUCN) has not listed any phoronid species as endangered.
Evolutionary history
Fossil record
As of 2016 there are no indisputable body fossils of phoronids. Researching the Lower Cambrian Chengjiang fossils, in 1997 Chen and Zhou interpreted Iotuba chengjiangensis as a phoronid since it had tentacles and a U-shaped gut, and in 2004 Chen interpreted Eophoronis as a phoronid. However, in 2006 Conway Morris regarded Iotuba and Eophoronis as synonyms for the same genus, which in his opinion looked like the priapulid Louisella. In 2009 Balthasar and Butterfield found in western Canada two specimens from about 505 million years ago of a new fossil, Lingulosacculus nuda, which had two shells like those of brachiopods but not mineralized. In the authors' opinion, the U-shaped gut extended beyond the hinge line and outside the smaller shell. This would have precluded the attachment of muscles to close and open the shells, and the 50% of the animal's length beyond the hinge line would have needed longitudinal muscles and also a cuticle for protection. Hence they suggest that Lingulosacculus may have been a member of a phoronid stem group within the linguliform brachiopods. Another alternative is that Eccentrotheca lies somewhere in the phoronid stem lineage.
There is good evidence that species of Phoronis created the trace fossils of the ichnogenus Talpina, which have been found in the Devonian, Jurassic and Cretaceous periods. The Talpina animal bored into calcareous algae, corals, echinoid tests (shells), mollusc shells and the rostra of belemnites. Hederellids or Hederelloids are fossilized tubes, usually curved and between 0.1 and 1.8 mm wide, found from the Silurian to the Permian, and possibly in the Ordovician and Triassic. Their branching colonies may have been made by phoronids.
Family tree
Phoronids, brachiopods and bryozoans (ectoprocts) are collectively called lophophorates, because all feed using lophophores. From about the 1940s to the 1990s, family trees based on embryological and morphological features placed lophophorates among or as a sister group to the deuterostomes, a super-phylum that includes chordates and echinoderms. In the early development of their embryos, deuterostomes form the anus before the mouth, while protostomes form the mouth first.
Nielsen (2002) views the phoronids and brachiopods as affiliated with the deuterostome pterobranchs, which also filter-feed by tentacles, because the current-driving cells of the lophophores of all three have one cilium per cell, while lophophores of bryozoans, which he regards as protostomes, have multiple cilia per cell. Helmkampf, Bruchhaus and Hausdorf (2008) summarise several authors' embryological and morphological analyses which doubt or disagree that phoronids and brachiopods are deuterostomes:
While deuterostomes have three coelomic cavities, lophophorates such as phoronids and brachiopods have only two.
Pterobranchs may be a sub-group of enteropneusts ("acorn worms"). This suggests that the ancestral deuterostome looks more like a mobile worm-like enteropneust than a sessile colonial pterobranch. The fact that lophophorates and pterobranchs both use tentacles for feeding is probably not a synapomorphy of lophophorates and deuterostomes, but evolved independently as convergent adaptations to a sessile lifestyle.
The mesoderm does not form by enterocoely in phoronids and bryozoans, but does in deuterostomes, while there are disagreements about whether brachiopods form the mesoderm by enterocoely.
Relationships of Phoronida to other protostomes
From 1988 onwards analyses based on molecular phylogeny, which compares biochemical features such as similarities in DNA, have placed phoronids and brachiopods among the Lophotrochozoa, a protostome super-phylum that includes molluscs, annelids and flatworms but excludes the other main protostome super-phylum Ecdysozoa, whose members include arthropods. Cohen wrote, "This inference, if true, undermines virtually all morphology–based reconstructions of phylogeny made during the past century or more."
While analyses by molecular phylogeny are confident that members of Lophotrochozoa are more closely related to each other than to non-members, the relationships between members are mostly unclear. The Lophotrochozoa are generally divided into: Lophophorata (animals that have lophophores), including Phoronida and Brachiopoda; Trochozoa (animals many of which have trochophore larvae), including molluscs, annelids, echiurans, sipunculans and nemerteans; and some other phyla (such as Platyhelminthes, Gastrotricha, Gnathostomulida, Micrognathozoa, and Rotifera).
Molecular phylogeny indicates that Phoronida are closely related to Brachiopoda, but Bryozoa (Ectoprocta) are not closely related to this group, despite using a similar lophophore for feeding and respiration. This implies that the traditional definition "Lophophorata" is not monophyletic. Recently the term "Lophophorata" has been applied only to the Phoronida and Brachiopoda, and Halanych (2004) thinks this change will cause confusion. Some analyses regard Phoronida and Brachiopoda as sister-groups, while others place Phoronida as a sub-group within Brachiopoda, implying that Brachiopoda is paraphyletic. Cohen and Weydman's analysis (2005) concludes that phoronids are a sub-group of inarticulate brachiopods (those in which the hinge between the two valves has no teeth or sockets) and sister-group of the other inarticulate sub-groups. The authors also suggest that the ancestors of molluscs and the brachiopod+phoronid clade diverged between 900 Ma and 560 Ma, most probably about 685 Ma.
Taxonomy
The phylum has two genera, with no class or order names. Zoologists have given the larvae, usually called an actinotroch, a separate genus name from the adults.
In 1999 Temereva and Malakhov described Phoronis svetlanae. In 2000 Temereva described a new species, Phoronopsis malakhovi, while Emig regards it as a synonym for Phoronopsis harmeri. Santagata thinks Phoronis architecta is a different species from both Phoronis psammophila and Phoronis muelleri, and that "[the phoronids'] species diversity is currently underestimated". In 2009 Temereva described what may be larvae of Phoronopsis albomaculata and Phoronopsis californica. She wrote that, while there are 12 undisputed adult phoronid species, 25 morphological types of larvae have been identified.
| Biology and health sciences | Lophotrochozoa | Animals |