| id | url | title | text | topic | section | sublist |
|---|---|---|---|---|---|---|
47532 | https://en.wikipedia.org/wiki/Cumulus%20cloud | Cumulus cloud | Cumulus clouds are clouds that have flat bases and are often described as puffy, cotton-like, or fluffy in appearance. Their name derives from the Latin cumulus, meaning "heap" or "pile". Cumulus clouds are low-level clouds, generally less than 2,000 m (6,600 ft) in altitude unless they are the more vertical cumulus congestus form. Cumulus clouds may appear by themselves, in lines, or in clusters.
Cumulus clouds are often precursors of other types of clouds, such as cumulonimbus, when influenced by weather factors such as instability, humidity, and temperature gradient. Normally, cumulus clouds produce little or no precipitation, but they can grow into the precipitation-bearing cumulus congestus or cumulonimbus clouds. Cumulus clouds can be formed from water vapour, supercooled water droplets, or ice crystals, depending upon the ambient temperature. They come in many distinct subforms and generally cool the earth by reflecting the incoming solar radiation.
Cumulus clouds are part of the larger category of free-convective cumuliform clouds, which include cumulonimbus clouds. The latter genus-type is sometimes categorized separately as cumulonimbiform due to its more complex structure that often includes a cirriform or anvil top. There are also cumuliform clouds of limited convection that comprise stratocumulus (low-étage), altocumulus (middle-étage) and cirrocumulus (high-étage). These last three genus-types are sometimes classified separately as stratocumuliform.
Formation
Cumulus clouds form via atmospheric convection as air warmed by the surface begins to rise. As the air rises, the temperature drops (following the lapse rate), causing the relative humidity (RH) to rise. If convection reaches a certain level the RH reaches one hundred percent, and the "wet-adiabatic" phase begins. At this point a positive feedback ensues: since the RH is above 100%, water vapor condenses, releasing latent heat, warming the air and spurring further convection.
In this phase, water vapor condenses on various nuclei present in the air, forming the cumulus cloud. This creates the characteristic flat-bottomed puffy shape associated with cumulus clouds. The height of the cloud (from its bottom to its top) depends on the temperature profile of the atmosphere and on the presence of any inversions. During the convection, surrounding air is entrained (mixed) with the thermal and the total mass of the ascending air increases.
Rain forms in a cumulus cloud via a process involving two non-discrete stages. The first stage occurs after the droplets coalesce onto the various nuclei. Langmuir writes that surface tension in the water droplets provides a slightly higher pressure on the droplet, raising the vapor pressure by a small amount. The increased pressure results in those droplets evaporating and the resulting water vapor condensing on the larger droplets. Due to the extremely small size of the evaporating water droplets, this process becomes largely meaningless after the larger droplets have grown to around 20 to 30 micrometres, and the second stage takes over. In the accretion phase, the raindrop begins to fall, and other droplets collide and combine with it to increase the size of the raindrop. Langmuir was able to develop a formula which predicted that the droplet radius would grow unboundedly within a discrete time period.
Description
The liquid water density within a cumulus cloud has been found to change with height above the cloud base rather than being approximately constant throughout the cloud. In one particular study, the concentration was found to be zero at cloud base. As altitude increased, the concentration rapidly increased to the maximum concentration near the middle of the cloud. The maximum concentration was found to be anything up to 1.25 grams of water per kilogram of air. The concentration slowly dropped off as altitude increased to the height of the top of the cloud, where it immediately dropped to zero again.
Cumulus clouds can form in lines stretching over long distances, called cloud streets. These cloud streets cover vast areas and may be broken or continuous. They form when wind shear causes horizontal circulation in the atmosphere, producing the long, tubular cloud streets. They generally form during high-pressure systems, such as after a cold front.
The height at which the cloud forms depends on the amount of moisture in the thermal that forms the cloud. Humid air will generally result in a lower cloud base. In temperate areas, the base of cumulus clouds is usually low above ground level, though it can form at considerably higher altitudes; in arid and mountainous areas, the cloud base can be much higher still.
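The dependence of cloud-base height on moisture can be made concrete with a standard rule of thumb: the lifting condensation level rises by roughly 125 m for every degree Celsius of spread between surface temperature and dew point. The sketch below applies that approximation in Python; the rule and the sample values are illustrative assumptions, not figures from this article.

```python
# Illustrative sketch: estimating a cumulus cloud base from the
# surface temperature/dew-point spread, using the common ~125 m
# per degree C approximation (an assumption, not from this article).

def cumulus_base_height_m(temp_c: float, dew_point_c: float) -> float:
    """Approximate lifting condensation level (cloud base) in metres."""
    spread = temp_c - dew_point_c  # drier air -> larger spread
    return 125.0 * spread          # ~125 m of lift per degree C of spread

# Humid day: small spread, low cloud base.
print(cumulus_base_height_m(25.0, 20.0))  # 625.0 m
# Dry day: large spread, much higher cloud base.
print(cumulus_base_height_m(35.0, 5.0))   # 3750.0 m
```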
Cumulus clouds can be composed of ice crystals, water droplets, supercooled water droplets, or a mixture of them.
One study examined growing, isolated cumulus clouds in temperate regions that were not precipitating, recording their cloud-base heights and droplet concentrations. The droplets were very small, ranging down to around 5 micrometres in diameter. Although smaller droplets may have been present, the measurements were not sensitive enough to detect them. The smallest droplets were found in the lower portions of the clouds, with the percentage of large droplets (around 20 to 30 micrometres) rising dramatically in the upper regions of the cloud. The droplet size distribution was slightly bimodal in nature, with peaks at the small and large droplet sizes and a slight trough in the intermediate size range. The skew was roughly neutral. Furthermore, large droplet size is roughly inversely proportional to the droplet concentration per unit volume of air.
In places, cumulus clouds can have "holes" where there are no water droplets. These can occur when winds tear the cloud and incorporate the environmental air or when strong downdrafts evaporate the water.
Subforms
Cumulus clouds come in four distinct species, cumulus humilis, mediocris, congestus, and fractus. These species may be arranged into the variety, cumulus radiatus; and may be accompanied by up to seven supplementary features, cumulus pileus, velum, virga, praecipitatio, arcus, pannus, and tuba.
The species Cumulus fractus is ragged in appearance and can form in clear air as a precursor to cumulus humilis and larger cumulus species-types; or it can form in precipitation as the supplementary feature pannus (also called scud) which can also include stratus fractus of bad weather. Cumulus humilis clouds look like puffy, flattened shapes. Cumulus mediocris clouds look similar, except that they have some vertical development. Cumulus congestus clouds have a cauliflower-like structure and tower high into the atmosphere, hence their alternate name "towering cumulus". The variety Cumulus radiatus forms in radial bands called cloud streets and can comprise any of the four species of cumulus.
Cumulus supplementary features are most commonly seen with the species congestus. Cumulus virga clouds are cumulus clouds producing virga (precipitation that evaporates while aloft), and cumulus praecipitatio produce precipitation that reaches the Earth's surface. Cumulus pannus comprise shredded clouds that normally appear beneath the parent cumulus cloud during precipitation. Cumulus arcus clouds have a gust front, and cumulus tuba clouds have funnel clouds or tornadoes. Cumulus pileus clouds refer to cumulus clouds that have grown so rapidly as to force the formation of pileus over the top of the cloud. Cumulus velum clouds have an ice crystal veil over the growing top of the cloud.
There are also cumulus cataractagenitus, which are formed by waterfalls.
Forecast
Cumulus humilis clouds usually indicate fair weather. Cumulus mediocris clouds are similar, except that they have some vertical development, which implies that they can grow into cumulus congestus or even cumulonimbus clouds, which can produce heavy rain, lightning, severe winds, hail, and even tornadoes. Cumulus congestus clouds, which appear as towers, will often grow into cumulonimbus storm clouds. They can produce precipitation. Glider pilots often pay close attention to cumulus clouds, as they can be indicators of rising air drafts or thermals underneath that can suck the plane high into the sky—a phenomenon known as cloud suck.
Effects on climate
Due to their reflectivity, clouds cool the earth, an effect largely caused by stratocumulus clouds. At the same time, they warm the earth by reflecting back radiation emitted from the surface, an effect largely caused by cirrus clouds; on average the cooling outweighs the warming, producing a net loss of heat. Cumulus clouds, on the other hand, have a variable effect on heating the Earth's surface. The more vertical cumulus congestus species and cumulonimbus genus of clouds grow high into the atmosphere, carrying moisture with them, which can lead to the formation of cirrus clouds. Researchers have speculated that this might even produce a positive feedback, where the increasing upper-atmospheric moisture further warms the earth, resulting in an increasing number of cumulus congestus clouds carrying more moisture into the upper atmosphere.
Relation to other clouds
Cumulus clouds are a genus of free-convective low-level cloud along with the related limited-convective cloud stratocumulus. These clouds form from near ground level at all latitudes. Stratus clouds are also low-level. In the middle level are the alto- clouds, which consist of the limited-convective stratocumuliform cloud altocumulus and the stratiform cloud altostratus. Mid-level clouds form at altitudes that vary with latitude, lowest in polar areas and highest in the tropics. The high-level cloud, cirrocumulus, is a stratocumuliform cloud of limited convection. The other clouds in this level are cirrus and cirrostratus. High clouds likewise form at altitudes that increase from high latitudes toward the tropics. Cumulonimbus clouds, like cumulus congestus, extend vertically rather than remaining confined to one level.
Cirrocumulus clouds
Cirrocumulus clouds form in patches and cannot cast shadows. They commonly appear in regular, rippling patterns or in rows of clouds with clear areas between. Cirrocumulus are, like other members of the cumuliform and stratocumuliform categories, formed via convective processes. Significant growth of these patches indicates high-altitude instability and can signal the approach of poorer weather. The ice crystals in the bottoms of cirrocumulus clouds tend to be in the form of hexagonal cylinders. They are not solid, but instead tend to have stepped funnels coming in from the ends. Towards the top of the cloud, these crystals have a tendency to clump together. These clouds do not last long, and they tend to change into cirrus because as the water vapor continues to deposit on the ice crystals, they eventually begin to fall, destroying the upward convection. The cloud then dissipates into cirrus. Cirrocumulus clouds come in four species which are common to all three genus-types that have limited-convective or stratocumuliform characteristics: stratiformis, lenticularis, castellanus, and floccus. They are iridescent when the constituent supercooled water droplets are all about the same size.
Altocumulus clouds
Altocumulus clouds are mid-level clouds whose height range varies with latitude, being lowest in polar areas and highest in the tropics. They can produce precipitation and are commonly composed of a mixture of ice crystals, supercooled water droplets, and water droplets in temperate latitudes. In studies, however, the liquid water concentration was almost always significantly greater than the concentration of ice crystals, and the maximum concentration of liquid water tended to be at the top of the cloud while the ice concentrated itself at the bottom. The ice crystals in the base of altocumulus clouds and in the virga were found to be dendrites or conglomerations of dendrites, while needles and plates resided more towards the top. Altocumulus clouds can form via convection or via the forced uplift caused by a warm front.
Stratocumulus clouds
A stratocumulus cloud is another type of stratocumuliform cloud. Like cumulus clouds, they form at low levels and via convection. However, unlike cumulus clouds, their growth is almost completely retarded by a strong inversion. As a result, they flatten out like stratus clouds, giving them a layered appearance. These clouds are extremely common, covering on average around twenty-three percent of the Earth's oceans and twelve percent of the Earth's continents. They are less common in tropical areas and commonly form after cold fronts. Additionally, stratocumulus clouds reflect a large amount of the incoming sunlight, producing a net cooling effect. Stratocumulus clouds can produce drizzle, which stabilizes the cloud by warming it and reducing turbulent mixing.
Cumulonimbus clouds
Cumulonimbus clouds are the final form of growing cumulus clouds. They form when cumulus congestus clouds develop a strong updraft that propels their tops higher and higher into the atmosphere until they reach the tropopause. Cumulonimbus clouds, commonly called thunderheads, can produce high winds, torrential rain, lightning, gust fronts, waterspouts, funnel clouds, and tornadoes. They commonly have anvil clouds.
Horseshoe clouds
A short-lived horseshoe cloud may occur when a horseshoe vortex deforms a cumulus cloud.
Extraterrestrial
Cumuliform and stratocumuliform clouds have been detected on most other planets in the Solar System. On Mars, the Viking Orbiter detected cirrocumulus and stratocumulus clouds forming via convection, primarily near the polar icecaps. The Galileo space probe detected massive cumulonimbus clouds near the Great Red Spot on Jupiter. Cumuliform clouds have also been detected on Saturn; in 2008, the Cassini spacecraft determined that cumulus clouds near Saturn's south pole were part of a vast cyclone. The Keck Observatory detected whitish cumulus clouds on Uranus. Like Uranus, Neptune has methane cumulus clouds. Venus, however, does not appear to have cumulus clouds.
| Physical sciences | Clouds | null |
47535 | https://en.wikipedia.org/wiki/Haptophyte | Haptophyte | The haptophytes, classified either as the Haptophyta, Haptophytina or Prymnesiophyta (named for Prymnesium), are a clade of algae.
The names Haptophyceae or Prymnesiophyceae are sometimes used instead. This ending implies classification at the class rank rather than as a division. Although the phylogenetics of this group has become much better understood in recent years, there remains some dispute over which rank is most appropriate.
Characteristics
The chloroplasts are pigmented similarly to those of the heterokonts, but the structure of the rest of the cell is different, so it may be that they are a separate line whose chloroplasts are derived from similar red algal endosymbionts. Haptophyte chloroplasts contain chlorophylls a, c1, and c2 but lack chlorophyll b. For carotenoids, they have beta-, alpha-, and gamma- carotenes. Like diatoms and brown algae, they also have fucoxanthin, an oxidized isoprenoid derivative that is likely the most important driver of their brownish-yellow color.
The cells typically have two slightly unequal flagella, both of which are smooth, and a unique organelle called a haptonema, which is superficially similar to a flagellum but differs in the arrangement of microtubules and in its use. The name comes from the Greek hapsis, touch, and nema, round thread. The mitochondria have tubular cristae.
Most haptophytes reportedly produce chrysolaminarin rather than starch as their major storage polysaccharide, but some Pavlovaceae produce paramylon. The chain length of the chrysolaminarin is reportedly short (polymers of 20–50 glycosides, unlike the 300+ of comparable amylose), and it is located in cytoplasmic membrane-bound vacuoles.
Significance
The best-known haptophytes are coccolithophores, which make up 673 of the 762 described haptophyte species, and have an exoskeleton of calcareous plates called coccoliths. Coccolithophores are some of the most abundant marine phytoplankton, especially in the open ocean, and are extremely abundant as microfossils, forming chalk deposits. Other planktonic haptophytes of note include Chrysochromulina and Prymnesium, which periodically form toxic marine algal blooms, and Phaeocystis, blooms of which can produce unpleasant foam which often accumulates on beaches.
Haptophytes are economically important, as species such as Pavlova lutheri and Isochrysis sp. are widely used in the aquaculture industry to feed oyster and shrimp larvae. They contain a large amount of polyunsaturated fatty acids such as docosahexaenoic acid (DHA), stearidonic acid and alpha-linolenic acid. Tisochrysis lutea contains betaine lipids and phospholipids.
Classification
The haptophytes were first placed in the class Chrysophyceae (golden algae), but ultrastructural data have provided evidence to classify them separately. Both molecular and morphological evidence supports their division into five orders; coccolithophores make up the Isochrysidales and Coccolithales. Very small (2-3μm) uncultured pico-prymnesiophytes are ecologically important.
Haptophytes have been discussed as being closely related to the cryptomonads.
Haptophytes are closely related to the SAR clade.
Subphylum Haptophytina Cavalier-Smith 2015 [Haptophyta Hibberd 1976 sensu Ruggerio et al. 2015]
Clade Rappemonada Kim et al. 2011
Class Rappephyceae Cavalier-Smith 2015
Order Rappemonadales
Family Rappemonadaceae
Clade Haptomonada (Margulis & Schwartz 1998) [Haptophyta Hibberd 1976 emend. Edvardsen & Eikrem 2000; Prymnesiophyta Green & Jordan, 1994; Prymnesiomonada; Prymnesiida Hibberd 1976; Haptophyceae Christensen 1962 ex Silva 1980; Haptomonadida; Patelliferea Cavalier-Smith 1993]
Class Pavlovophyceae Cavalier-Smith 1986 [Pavlovophycidae Cavalier-Smith 1986]
Order Pavlovales Green 1976
Family Pavlovaceae Green 1976
Class Prymnesiophyceae Christensen 1962 emend. Cavalier-Smith 1996 [Haptophyceae s.s.; Prymnesiophycidae Cavalier-Smith 1986; Coccolithophyceae Casper 1972 ex Rothmaler 1951]
Family †Eoconusphaeraceae Kristan-Tollmann 1988 [Conusphaeraceae]
Family †Goniolithaceae Deflandre 1957
Family †Lapideacassaceae Black, 1971
Family †Microrhabdulaceae Deflandre 1963
Family †Nannoconaceae Deflandre 1959
Family †Polycyclolithaceae Forchheimer 1972 emend Varol, 1992
Family †Lithostromationaceae Deflandre 1959
Family †Rhomboasteraceae Bown, 2005
Family Braarudosphaeraceae Deflandre 1947
Family Ceratolithaceae Norris 1965 emend Young & Bown 2014 [Triquetrorhabdulaceae Lipps 1969 - cf Young & Bown 2014]
Family Alisphaeraceae Young et al., 2003
Family Papposphaeraceae Jordan & Young 1990 emend Andruleit & Young 2010
Family Umbellosphaeraceae Young et al., 2003 [Umbellosphaeroideae]
Order †Discoasterales Hay 1977
Family †Discoasteraceae Tan 1927
Family †Heliolithaceae Hay & Mohler 1967
Family †Sphenolithaceae Deflandre 1952
Family †Fasciculithaceae Hay & Mohler 1967
Order Phaeocystales Medlin 2000
Family Phaeocystaceae Lagerheim 1896
Order Prymnesiales Papenfuss 1955 emend. Edvardsen and Eikrem 2000
Family Chrysochromulinaceae Edvardsen, Eikrem & Medlin 2011
Family Prymnesiaceae Conrad 1926 ex Schmidt 1931
Subclass Calcihaptophycidae
Order Isochrysidales Pascher 1910 [Prinsiales Young & Bown 1997]
Family †Prinsiaceae Hay & Mohler 1967 emend. Young & Bown, 1997
Family Isochrysidaceae Parke 1949 [Chrysotilaceae; Marthasteraceae Hay 1977]
Family Noëlaerhabdaceae Jerkovic 1970 emend. Young & Bown, 1997 [Gephyrocapsaceae Black 1971]
Order †Eiffellithales Rood, Hay & Barnard 1971 (loxolith; imbricating murolith)
Family †Chiastozygaceae Rood, Hay & Barnard 1973 [Ahmuellerellaceae Reinhardt, 1965]
Family †Eiffellithaceae Reinhardt 1965
Family †Rhagodiscaceae Hay 1977
Order Stephanolithiales Bown & Young 1997 (protolith; non-imbrication murolith)
Family Parhabdolithaceae Bown 1987
Family †Stephanolithiaceae Black 1968 emend. Black 1973
Order Zygodiscales Young & Bown 1997 [Crepidolithales]
Family Helicosphaeraceae Black 1971
Family Pontosphaeraceae Lemmermann 1908
Family †Zygodiscaceae Hay & Mohler 1967
Order Syracosphaerales Ostenfeld 1899 emend. Young et al., 2003 [Rhabdosphaerales Ostenfeld 1899]
Family Calciosoleniaceae Kamptner 1927
Family Syracosphaeraceae Lohmann, 1902 [Halopappiaceae Kamptner 1928] (caneolith & cyrtolith; murolith)
Family Rhabdosphaeraceae Haeckel, 1894 (planolith)
Order †Watznaueriales Bown 1987 (imbricating placolith)
Family †Watznaueriaceae Rood, Hay & Barnard 1971
Order †Arkhangelskiales Bown & Hampton 1997
Family †Arkhangelskiellaceae Bukry 1969
Family †Kamptneriaceae Bown & Hampton 1997
Order †Podorhabdales Rood 1971 [Biscutales Aubry 2009; Prediscosphaerales Aubry 2009] (non-imbricating or radial placolith)
Family †Axopodorhabdaceae Wind & Wise 1977 [Podorhabdaceae Noel 1965]
Family †Biscutaceae Black, 1971
Family †Calyculaceae Noel 1973
Family †Cretarhabdaceae Thierstein 1973
Family †Mazaganellaceae Bown 1987
Family †Prediscosphaeraceae Rood et al., 1971 [Deflandriaceae Black 1968]
Family †Tubodiscaceae Bown & Rutledge 1997
Order Coccolithales Schwartz 1932 [Coccolithophorales]
Family Reticulosphaeraceae Cavalier-Smith 1996 [Reticulosphaeridae]
Family Calcidiscaceae Young & Bown 1997
Family Coccolithaceae Poche 1913 emend. Young & Bown, 1997 [Coccolithophoraceae]
Family Pleurochrysidaceae Fresnel & Billard 1991
Family Hymenomonadaceae Senn 1900 [Ochrosphaeraceae Schussnig 1930]
| Biology and health sciences | Other organisms | null |
47544 | https://en.wikipedia.org/wiki/Carrying%20capacity | Carrying capacity | The carrying capacity of an environment is the maximum population size of a biological species that can be sustained by that specific environment, given the food, habitat, water, and other resources available. The carrying capacity is defined as the environment's maximal load, which in population ecology corresponds to the population equilibrium, when the number of deaths in a population equals the number of births (as well as immigration and emigration). Carrying capacity of the environment implies that resource extraction does not exceed the rate of regeneration of the resources and that the wastes generated are within the assimilating capacity of the environment. The effect of carrying capacity on population dynamics is modelled with a logistic function. Carrying capacity is applied to the maximum population an environment can support in ecology, agriculture and fisheries. The term carrying capacity has been applied to a few different processes in the past before finally being applied to population limits in the 1950s. The notion of carrying capacity for humans is covered by the notion of sustainable population.
An early detailed examination of global limits was published in the 1972 book Limits to Growth, which has prompted follow-up commentary and analysis, including much criticism. A 2012 review in Nature by 22 international researchers expressed concerns that the Earth may be "approaching a state shift" in which the biosphere may become less hospitable to human life and in which human carrying capacity may diminish. This concern that humanity may be passing beyond "tipping points" for safe use of the biosphere has increased in subsequent years. Recent estimates of Earth's carrying capacity run between two billion and four billion people, depending on how optimistic researchers are about international cooperation to solve collective action problems.
Origins
In terms of population dynamics, the term 'carrying capacity' was not explicitly used in 1838 by the Belgian mathematician Pierre François Verhulst when he first published his equations based on research on modelling population growth.
The origins of the term "carrying capacity" are uncertain, with sources variously stating that it was originally used "in the context of international shipping" in the 1840s, or that it was first used during 19th-century laboratory experiments with micro-organisms. A 2008 review finds the first use of the term in English was an 1845 report by the US Secretary of State to the US Senate. It then became a term used generally in biology in the 1870s, being most developed in wildlife and livestock management in the early 1900s. It had become a staple term in ecology used to define the biological limits of a natural system related to population size in the 1950s.
Neo-Malthusians and eugenicists popularised the use of the words to describe the number of people the Earth can support in the 1950s, although American biostatisticians Raymond Pearl and Lowell Reed had already applied it in these terms to human populations in the 1920s.
Hadwen and Palmer (1923) defined carrying capacity as the density of stock that could be grazed for a definite period without damage to the range.
It was first used in the context of wildlife management by the American Aldo Leopold in 1933, and a year later by the American Paul Lester Errington, a wetlands specialist. They used the term in different ways, Leopold largely in the sense of grazing animals (differentiating between a 'saturation level', an intrinsic level of density a species would live in, and carrying capacity, the most animals which could be in the field) and Errington defining 'carrying capacity' as the number of animals above which predation would become 'heavy' (this definition has largely been rejected, including by Errington himself). The important and popular 1953 textbook on ecology by Eugene Odum, Fundamentals of Ecology, popularised the term in its modern meaning as the equilibrium value of the logistic model of population growth.
Mathematics
The specific reason why a population stops growing is known as a limiting or regulating factor.
The difference between the birth rate and the death rate is the natural increase. If the population of a given organism is below the carrying capacity of a given environment, this environment could support a positive natural increase; should it find itself above that threshold the population typically decreases. Thus, the carrying capacity is the maximum number of individuals of a species that an environment can support in the long run.
Population size decreases above carrying capacity due to a range of factors depending on the species concerned, but can include insufficient space, food supply, or sunlight. The carrying capacity of an environment varies for different species.
In the standard ecological algebra as illustrated in the simplified Verhulst model of population dynamics, carrying capacity is represented by the constant K:

$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$

where

$N$ is the population size,
$r$ is the intrinsic rate of natural increase,
$K$ is the carrying capacity of the local environment, and
$\frac{dN}{dt}$, the derivative of $N$ with respect to time $t$, is the rate of change in population with time.

Thus, the equation relates the growth rate of the population $N$ to the current population size, incorporating the effect of the two constant parameters $r$ and $K$. (Note that decrease is negative growth.) The choice of the letter $K$ came from the German Kapazitätsgrenze (capacity limit).
This equation is a modification of the original Verhulst model:

$\frac{dN}{dt} = rN - \alpha N^2$

In this equation, the carrying capacity $K$ is $\frac{r}{\alpha}$.
When the Verhulst model is plotted into a graph, the population change over time takes the form of a sigmoid curve, reaching its highest level at $K$. This is the logistic growth curve and it is calculated with:

$f(x) = \frac{L}{1 + e^{-k(x - x_0)}}$

where

$e$ is the natural logarithm base (also known as Euler's number),
$x_0$ is the value of the sigmoid's midpoint,
$L$ is the curve's maximum value, and
$k$ is the logistic growth rate or steepness of the curve.
The logistic growth curve depicts how population growth rate and carrying capacity are inter-connected. As illustrated in the logistic growth curve model, when the population size is small, the population increases exponentially. However, as population size nears carrying capacity, the growth decreases and reaches zero at $N = K$.
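As a minimal sketch of how this model behaves, the following Python snippet integrates the Verhulst equation with a simple Euler step; the parameter values are arbitrary illustrations.

```python
# Minimal sketch: Euler integration of the Verhulst (logistic) model
#   dN/dt = r * N * (1 - N / K)
# Parameter values are arbitrary illustrations.

r = 0.5      # intrinsic rate of natural increase (per unit time)
K = 1000.0   # carrying capacity
N = 10.0     # initial population size
dt = 0.1     # time step

for _ in range(1200):
    # Near-exponential growth while N << K; the growth rate
    # approaches zero as N approaches K.
    N += r * N * (1 - N / K) * dt

print(round(N))  # ~1000: the population settles at the carrying capacity
```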
What determines a specific system's carrying capacity involves a limiting factor; this may be available supplies of food or water, nesting areas, space, or the amount of waste that can be absorbed without degrading the environment and decreasing carrying capacity.
Population ecology
Carrying capacity is a commonly used concept for biologists when trying to better understand biological populations and the factors which affect them. When addressing biological populations, carrying capacity can be seen as a stable dynamic equilibrium, taking into account extinction and colonization rates. In population biology, logistic growth assumes that population size fluctuates above and below an equilibrium value.
Numerous authors have questioned the usefulness of the term when applied to actual wild populations. Although useful in theory and in laboratory experiments, carrying capacity as a method of measuring population limits in the environment is less useful as it sometimes oversimplifies the interactions between species.
Agriculture
It is important for farmers to calculate the carrying capacity of their land so they can establish a sustainable stocking rate. For example, calculating the carrying capacity of a paddock in Australia is done in Dry Sheep Equivalents (DSEs). A single DSE is a 50 kg Merino wether, dry ewe or non-pregnant ewe, which is maintained in a stable condition. Not only sheep are calculated in DSEs; the carrying capacity for other livestock is also calculated using this measure. A 200 kg weaned calf of a British-style breed gaining 0.25 kg/day is 5.5 DSE, but if the same weight of the same type of calf were gaining 0.75 kg/day, it would be measured at 8 DSE. Cattle are not all the same; their DSEs can vary depending on breed, growth rate, weight, whether it is a cow ('dam'), steer or ox ('bullock' in Australia), and whether it is weaning, pregnant or 'wet' (i.e. lactating).
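As an illustration of how such a stocking-rate calculation works in practice, the sketch below totals the DSE demand of a hypothetical herd, using the example ratings quoted above; the herd numbers and the paddock's rated capacity are invented inputs, since real values depend on the paddock.

```python
# Sketch of a stocking-rate check in Dry Sheep Equivalents (DSE).
# The DSE ratings come from the examples in the text; herd numbers
# and the paddock's rated capacity are hypothetical.

herd = {
    "dry Merino wether": (1.0, 300),               # (DSE each, head)
    "200 kg calf gaining 0.25 kg/day": (5.5, 40),
    "200 kg calf gaining 0.75 kg/day": (8.0, 25),
}

total_dse = sum(dse * head for dse, head in herd.values())

paddock_ha = 150.0
capacity_dse_per_ha = 4.0  # hypothetical rated carrying capacity

supply = paddock_ha * capacity_dse_per_ha
print(f"herd demand: {total_dse} DSE, paddock supply: {supply} DSE")
print("sustainable" if total_dse <= supply else "overstocked")
```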
In other parts of the world different units are used for calculating carrying capacities. In the United Kingdom the paddock is measured in LU, livestock units, although different schemes exist for this. New Zealand uses either LU, EE (ewe equivalents) or SU (stock units). In the US and Canada the traditional system uses animal units (AU). A French/Swiss unit is Unité de Gros Bétail (UGB).
In some European countries such as Switzerland the pasture (alm or alp) is traditionally measured in Stoß, with one Stoß equaling four Füße (feet). A more modern European system is Großvieheinheit (GV or GVE), corresponding to 500 kg in liveweight of cattle. In extensive agriculture 2 GV/ha is a common stocking rate, in intensive agriculture, when grazing is supplemented with extra fodder, rates can be 5 to 10 GV/ha. In Europe average stocking rates vary depending on the country, in 2000 the Netherlands and Belgium had a very high rate of 3.82 GV/ha and 3.19 GV/ha respectively, surrounding countries have rates of around 1 to 1.5 GV/ha, and more southern European countries have lower rates, with Spain having the lowest rate of 0.44 GV/ha.
This system can also be applied to natural areas. Grazing megaherbivores at roughly 1 GV/ha is considered sustainable in central European grasslands, although this varies widely depending on many factors. In ecology it is theoretically (i.e. cyclic succession, patch dynamics, Megaherbivorenhypothese) taken that a grazing pressure of 0.3 GV/ha by wildlife is enough to hinder afforestation in a natural area. Because different species have different ecological niches, with horses for example grazing short grass, cattle longer grass, and goats or deer preferring to browse shrubs, niche differentiation allows a terrain to have slightly higher carrying capacity for a mixed group of species, than it would if there were only one species involved.
Some niche market schemes mandate lower stocking rates than can maximally be grazed on a pasture. In order to market one's meat products as 'biodynamic', a lower Großvieheinheit of 1 to 1.5 (2.0) GV/ha is mandated, with some farms having an operating structure using only 0.5 to 0.8 GV/ha.
The Food and Agriculture Organization has introduced three international units to measure carrying capacity: FAO Livestock Units for North America, FAO Livestock Units for sub-Saharan Africa, and Tropical Livestock Units.
Another rougher and less precise method of determining the carrying capacity of a paddock is simply by looking objectively at the condition of the herd. In Australia, the national standardized system for rating livestock conditions is done by body condition scoring (BCS). An animal in a very poor condition is scored with a BCS of 0, and an animal which is extremely healthy is scored at 5; animals may be scored between these two numbers in increments of 0.25. At least 25 animals of the same type must be scored to provide a statistically representative number, and scoring must take place monthly; if the average falls, this may be due to a stocking rate above the paddock's carrying capacity or too little fodder. This method is less direct for determining stocking rates than looking at the pasture itself, because the changes in the condition of the stock may lag behind changes in the condition of the pasture.
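A sketch of that monthly check, with invented sample scores:

```python
# Sketch of a monthly body condition score (BCS) check. Scores run
# 0-5 in 0.25 increments; at least 25 animals of the same type must
# be scored. The sample data are invented for illustration.

import statistics

last_month = [3.0, 3.25, 2.75, 3.0] * 7   # 28 animals
this_month = [2.75, 2.5, 3.0, 2.75] * 7   # 28 animals

assert len(this_month) >= 25, "score at least 25 animals of the same type"

drop = statistics.mean(last_month) - statistics.mean(this_month)
if drop > 0:
    # A falling average may reflect a stocking rate above the
    # paddock's carrying capacity, or too little fodder.
    print(f"average BCS fell by {drop:.2f}: review stocking rate")
```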
Fisheries
In fisheries, carrying capacity is used in the formulae to calculate sustainable yields for fisheries management. The maximum sustainable yield (MSY) is defined as "the highest average catch that can be continuously taken from an exploited population (=stock) under average environmental conditions". MSY was originally calculated as half of the carrying capacity, but has been refined over the years, now being seen as roughly 30% of the population, depending on the species or population. Because a population brought below its carrying capacity by fishing will find itself in the exponential phase of growth, as seen in the Verhulst model, harvesting an amount of fish at or below MSY is a surplus yield which can be sustainably taken without reducing the population size at equilibrium, keeping the population at its maximum recruitment. However, annual fishing can be seen as a modification of r in the equation; i.e., the environment has been modified, which means that the population size at equilibrium with annual fishing is slightly below what K would be without it.
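For the logistic model described earlier, the surplus yield at population size N is h(N) = rN(1 - N/K), which peaks at N = K/2, consistent with the original "half of carrying capacity" rule. A minimal sketch, with arbitrary parameter values:

```python
# Sketch: surplus yield under the logistic model. The sustainable
# harvest at population N equals the natural growth
#   h(N) = r * N * (1 - N / K),
# which is maximized at N = K/2, giving MSY = r * K / 4.
# Parameter values are arbitrary illustrations.

r, K = 0.4, 10_000.0

def surplus_yield(n: float) -> float:
    return r * n * (1 - n / K)

print(surplus_yield(K / 2))  # 1000.0, the MSY (= r * K / 4)
print(surplus_yield(3000))   # 840.0: a lower, also sustainable yield
```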
Note that mathematically and in practical terms, MSY is problematic. If mistakes are made and even a tiny amount of fish are harvested each year above the MSY, population dynamics imply that the total population will eventually decrease to zero. The actual carrying capacity of the environment may fluctuate in the real world, which means that practically, MSY may actually vary from year to year (annual sustainable yields and maximum average yield attempt to take this into account). Other similar concepts are optimum sustainable yield and maximum economic yield; these are both harvest rates below MSY.
These calculations are used to determine fishing quotas.
Humans
Human carrying capacity is a function of how people live and the technology at their disposal. The two great economic revolutions that marked human history up to 1900—the agricultural and industrial revolutions—greatly increased the Earth's human carrying capacity, allowing human population to grow from 5 to 10 million people in 10,000 BCE to 1.5 billion in 1900. The immense technological improvements of the past 100 years—in applied chemistry, physics, computing, genetic engineering, and more—have further increased Earth's human carrying capacity, at least in the short term. Without the Haber-Bosch process for fixing nitrogen, modern agriculture could not support 8 billion people. Without the Green Revolution of the 1950s and 60s, famine might have culled large numbers of people in poorer countries during the last three decades of the twentieth century.
Recent technological successes, however, have come at grave environmental costs. Climate change, ocean acidification, and the huge dead zones at the mouths of many of the world's great rivers are a function of the scale of contemporary agriculture and the many other demands 8 billion people make on the planet. Scientists now speak of humanity exceeding or threatening to exceed 9 planetary boundaries for safe use of the biosphere. Humanity's unprecedented ecological impacts threaten to degrade the ecosystem services that people and the rest of life depend on—potentially decreasing Earth's human carrying capacity. The signs that we have crossed this threshold are increasing.
The fact that degrading Earth's essential services is obviously possible, and happening in some cases, suggests that 8 billion people may be above Earth's human carrying capacity. But human carrying capacity is always a function of a certain number of people living a certain way. This was encapsulated by Paul Ehrlich and John Holdren's (1972) IPAT equation: environmental impact (I) = population (P) x affluence (A) x the technologies used to accommodate human demands (T). IPAT has found spectacular confirmation in recent decades within climate science, where the Kaya identity for explaining changes in emissions is essentially IPAT with two technology factors broken out for ease of use.
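A sketch of the Kaya decomposition mentioned here; the factor structure is the standard one (emissions = population x GDP per capita x energy intensity x carbon intensity), but every number below is invented purely for illustration.

```python
# Sketch of the Kaya identity, the emissions form of IPAT:
#   CO2 = P * (GDP / P) * (energy / GDP) * (CO2 / energy)
# All numbers below are invented for illustration only.

population = 8.0e9            # people
gdp_per_capita = 1.2e4        # $ per person per year (affluence, A)
energy_intensity = 5.0e6      # J per $ of GDP (technology factor 1)
carbon_intensity = 6.0e-11    # tonnes CO2 per J (technology factor 2)

emissions = (population * gdp_per_capita
             * energy_intensity * carbon_intensity)
print(f"{emissions:.2e} tonnes CO2 per year")

# The identity makes the levers explicit: halving either technology
# factor offsets a doubling of affluence, all else being equal.
```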
This suggests to technological optimists that new technological discoveries (or the deployment of existing ones) could continue to increase Earth's human carrying capacity, as it has in the past. Yet technology has unexpected side effects, as we have seen with stratospheric ozone depletion, excessive nitrogen deposition in the world's rivers and bays, and global climate change. This suggests that 8 billion people may be sustainable for a few generations, but not over the long term, and the term ‘carrying capacity’ implies a population that is sustainable indefinitely. It is possible, too, that efforts to anticipate and manage the impacts of powerful new technologies, or to divide up the efforts needed to keep global ecological impacts within sustainable bounds among more than 200 nations all pursuing their own self-interest, may prove too complicated to achieve over the long haul.
One issue with applying carrying capacity to any species is that ecosystems are not constant and change over time, therefore changing the resources available. Research has shown that sometimes the presence of human populations can increase local biodiversity, demonstrating that human habitation does not always lead to deforestation and decreased biodiversity. Another issue to consider when applying carrying capacity, especially to humans, is that measuring food resources is arbitrary. This is due to choosing what to consider (e.g., whether or not to include plants that are not available every year), how to classify what is considered (e.g., classifying edible plants that are not usually eaten as food resources or not), and determining if caloric values or nutritional values are privileged. Additional layers to this for humans are their cultural differences in taste (e.g., some consume flying termites) and individual choices on what to invest their labor into (e.g., fishing vs. farming), both of which vary over time. This leads to the need to determine whether or not to include all food resources or only those the population considered will consume. Carrying capacity measurements over large areas also assumes homogeneity in the resources available but this does not account for how resources and access to them can greatly vary within regions and populations. They also assume that the populations in the region only rely on that region’s resources even though humans exchange resources with others from other regions and there are few, if any, isolated populations. Variations in standards of living which directly impact resource consumption are also not taken into account. These issues show that while there are limits to resources, a more complex model of how humans interact with their ecosystem needs to be used to understand them.
Recent warnings that humanity may have exceeded Earth's carrying capacity
Between 1900 and 2020, Earth's human population increased from 1.6 billion to 7.8 billion (a 390% increase). This growth greatly increased human resource demands, generating significant environmental degradation.
Millennium ecosystem assessment
The Millennium Ecosystem Assessment (MEA) of 2005 was a massive, collaborative effort to assess the state of Earth's ecosystems, involving more than 1,300 experts worldwide. Their first two of four main findings were the following. The first finding is:

Over the past 50 years, humans have changed ecosystems more rapidly and extensively than in any comparable period of time in human history, largely to meet rapidly growing demands for food, fresh water, timber, fiber, and fuel. This has resulted in a substantial and largely irreversible loss in the diversity of life on Earth.

The second of the four main findings is:

The changes that have been made to ecosystems have contributed to substantial net gains in human well-being and economic development, but these gains have been achieved at growing costs in the form of the degradation of many ecosystem services, increased risks of nonlinear changes, and the exacerbation of poverty for some groups of people. These problems, unless addressed, will substantially diminish the benefits that future generations obtain from ecosystems.

According to the MEA, these unprecedented environmental changes threaten to reduce the Earth's long-term human carrying capacity. “The degradation of ecosystem services could grow significantly worse during the first half of this [21st] century,” they write, serving as a barrier to improving the lives of poor people around the world.
Critiques of Carrying Capacity with Relation to Humans
Humans and human culture are highly adaptable and have overcome problems that once seemed insurmountable. This is not to say that carrying capacity should not be considered and thought about, but that it should be taken with some skepticism when presented as concretely evidenced proof. Many biologists, ecologists, and social scientists have disposed of the term altogether because its generalizations gloss over the complexity of interactions that take place at the micro and macro levels. Carrying capacity in a human environment is subject to change at any time due to the highly adaptable nature of human society and culture: if resources, time, and energy are put into an issue, a solution may well present itself. This should not be used as an excuse to overexploit or take advantage of the land or resources that are available; nonetheless, pessimism is not obligatory, as technological, social, and institutional adaptations could be accelerated, especially in a time of need, to solve problems, or in this case, increase carrying capacity. There are, of course, resources on this Earth that are limited and that will run out if overused or used without proper oversight and checks and balances; if left unchecked, overconsumption and exploitation of land and resources are likely to occur.
Ecological footprint accounting
Ecological Footprint accounting measures the demands people make on nature and compares them to available supplies, for both individual countries and the world as a whole. Developed originally by Mathis Wackernagel and William Rees, it has been refined and applied in a variety of contexts over the years by Global Footprint Network (GFN). On the demand side, the Ecological Footprint measures how fast a population uses resources and generates wastes, with a focus on five main areas: carbon emissions (or carbon footprint), land devoted to direct settlement, timber and paper use, food and fiber use, and seafood consumption. It converts these into per capita or total hectares used. On the supply side, national or global biocapacity represents the productivity of ecological assets in a particular nation or the world as a whole; this includes “cropland, grazing land, forest land, fishing grounds, and built-up land.” Again the various metrics to capture biocapacity are translated into the single term of hectares of available land. As Global Footprint Network (GFN) states:

Each city, state or nation’s Ecological Footprint can be compared to its biocapacity, or that of the world. If a population’s Ecological Footprint exceeds the region’s biocapacity, that region runs a biocapacity deficit. Its demand for the goods and services that its land and seas can provide—fruits and vegetables, meat, fish, wood, cotton for clothing, and carbon dioxide absorption—exceeds what the region’s ecosystems can regenerate. In more popular communications, this is called “an ecological deficit.” A region in ecological deficit meets demand by importing, liquidating its own ecological assets (such as overfishing), and/or emitting carbon dioxide into the atmosphere. If a region’s biocapacity exceeds its Ecological Footprint, it has a biocapacity reserve.

According to the GFN's calculations, humanity has been using resources and generating wastes in excess of sustainability since approximately 1970: currently humanity uses Earth's resources at approximately 170% of capacity. This implies that humanity is well over Earth's human carrying capacity for our current levels of affluence and technology use. According to Global Footprint Network:

In 2024, [Earth Overshoot Day] fell on August 1. Earth Overshoot Day marks the date when humanity has exhausted nature’s budget for the year. For the rest of the year, we are maintaining our ecological deficit by drawing down local resource stocks and accumulating carbon dioxide in the atmosphere. We are operating in overshoot.

The concept of ‘ecological overshoot’ can be seen as equivalent to exceeding human carrying capacity. According to the most recent calculations from Global Footprint Network, most of the world's residents live in countries in ecological overshoot.
This includes countries with dense populations (such as China, India, and the Philippines), countries with high per capita consumption and resource use (France, Germany, and Saudi Arabia), and countries with both high per capita consumption and large numbers of people (Japan, the United Kingdom, and the United States).
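The "Earth Overshoot Day" arithmetic follows directly from the ratio of biocapacity to footprint; a sketch using the roughly 170% figure quoted above:

```python
# Sketch: deriving an "overshoot day" from the footprint ratio.
# With demand at ~170% of Earth's biocapacity (the figure quoted
# above), the year's regenerative budget runs out partway through.

import datetime

footprint_ratio = 1.70  # demand as a multiple of Earth's biocapacity

days_of_budget = 365 / footprint_ratio  # ~215 days
overshoot_day = (datetime.date(2024, 1, 1)
                 + datetime.timedelta(days=round(days_of_budget) - 1))
print(overshoot_day)  # early August, close to the reported 2024 date
```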
Planetary Boundaries Framework
According to its developers, the planetary boundaries framework defines “a safe operating space for humanity based on the intrinsic biophysical processes that regulate the stability of the Earth system.” Human civilization has evolved in the relative stability of the Holocene epoch; crossing planetary boundaries for safe levels of atmospheric carbon, ocean acidity, or one of the other stated boundaries could send the global ecosystem spiraling into novel conditions that are less hospitable to life—possibly reducing global human carrying capacity. This framework, developed in an article published in 2009 in Nature and then updated in two articles published in 2015 in Science and in 2018 in PNAS, identifies nine stressors of planetary support systems that need to stay within critical limits to preserve stable and safe biospheric conditions. Climate change and biodiversity loss are seen as especially crucial, since on their own, they could push the Earth system out of the Holocene state: “transitions between time periods in Earth history have often been delineated by substantial shifts in climate, the biosphere, or both.”
The scientific consensus is that humanity has exceeded three to five of the nine planetary boundaries for safe use of the biosphere and is pressing hard on several more. By itself, crossing one of the planetary boundaries does not prove humanity has exceeded Earth's human carrying capacity; perhaps technological improvements or clever management might reduce this stressor and bring us back within the biosphere's safe operating space. But when several boundaries are crossed, it becomes harder to argue that carrying capacity has not been breached. Because having fewer people helps reduce all nine planetary stressors, the more boundaries are crossed, the clearer it appears that reducing human numbers is part of what is needed to get back within a safe operating space. Population growth regularly tops the list of causes of humanity's increasing impact on the natural environment in Earth system science literature. Recently, planetary boundaries developer Will Steffen and co-authors ranked global population change as the leading indicator of the influence of socio-economic trends on the functioning of the Earth system in the modern era, post-1750.
| Biology and health sciences | Ecology | Biology |
47548 | https://en.wikipedia.org/wiki/Daylight%20saving%20time | Daylight saving time | Daylight saving time (DST), also referred to as daylight saving(s), daylight savings time, daylight time (United States and Canada), or summer time (United Kingdom, European Union, and others), is the practice of advancing clocks to make better use of the longer daylight available during summer so that darkness falls at a later clock time. The typical implementation of DST is to set clocks forward by one hour in spring or late winter, and to set clocks back by one hour to standard time in the autumn (or fall in North American English, hence the mnemonic: "spring forward and fall back").
Overview
Around 34 percent of the world's countries use DST. Some countries observe it only in some regions. In Canada, all of Yukon, most of Saskatchewan, and parts of Nunavut, Ontario, British Columbia and Quebec do not observe DST. It is observed by four Australian states and one territory. In the United States, it is observed by all states except Hawaii and Arizona (within the latter, however, the Navajo Nation does observe it).
Historically, several ancient societies adopted seasonal changes to their timekeeping to make better use of daylight; Roman timekeeping even included changes to water clocks to accommodate this. However, these were changes to the time divisions of the day rather than setting the whole clock forward. In a satirical letter to the editor of the Journal de Paris in 1784, Benjamin Franklin suggested that if Parisians could only wake up earlier in the summer they would economize on candle and oil usage, but he did not propose changing the clocks. In 1895, New Zealand entomologist and astronomer George Hudson made the first realistic proposal, to the Wellington Philosophical Society, to change clocks by two hours every spring, but it was not implemented until 1928, and then in a different form. In 1907, William Willett proposed the adoption of British Summer Time as a way to save energy; although seriously considered by Parliament, it was not implemented until 1916.
The first implementation of DST was by Port Arthur (today merged into Thunder Bay), in Ontario, Canada, in 1908, but only locally, not nationally. The first nation-wide implementations were by the German and Austro-Hungarian Empires, both starting on 30 April 1916. Since then, many countries have adopted DST at various times, particularly since the 1970s energy crisis.
Rationale
Industrialized societies usually follow a clock-based schedule for daily activities that does not change throughout the course of the year. The time of day that individuals begin and end work or school, and the coordination of mass transit, for example, usually remain constant year-round. In contrast, an agrarian society's daily routines for work and personal conduct are more likely governed by the length of daylight hours and by solar time, which change seasonally because of the Earth's axial tilt. North and south of the tropics, daylight lasts longer in that hemisphere's summer and is shorter in that hemisphere's winter, with the effect becoming greater the farther one moves away from the equator. DST is of little use for locations near the Equator, because these regions see only a small variation in daylight over the course of the year.
After synchronously resetting all clocks in a region to one hour ahead of standard time in spring in anticipation of longer daylight hours, individuals following a clock-based schedule will be awakened an hour earlier in the solar day than they would have otherwise. They will begin and complete daily work routines an hour earlier; in most cases, they will have an extra hour of daylight available to them after their workday activities.
The clock shift is partly motivated by practicality. At the summer solstice, in American temperate latitudes, for example, the sun rises around 04:30 standard time and sets around 19:30. Since most people are asleep at 04:30, it is seen as practical to treat 04:30 as if it were 05:30, thereby allowing people to wake closer to the sunrise and be active in the evening light, as the sun under DST sets an hour later (20:30). The longer evening daylight hours are attractive to golfers, for example, while farmers traditionally expressed dislike for having to be out working while dew is still heavy.
Proponents of daylight saving time argue that most people prefer more daylight hours after the typical "nine to five" workday. Supporters have also argued that DST decreases energy consumption by reducing the need for lighting and heating, but the actual effect on overall energy use is heavily disputed. Evaluating that effect requires going beyond the energy demand for lighting to also consider the energy used for heating or cooling buildings.
Variation within a time zone
The effect of daylight saving time also varies according to how far east or west the location is within its time zone, with locations farther east inside the time zone benefiting more from DST than locations farther west in the same time zone. In spite of a width spanning thousands of kilometers, all of China is located within a single time zone per government mandate, minimizing any potential benefit of daylight saving time there.
History
Ancient civilizations adjusted daily schedules to the sun more flexibly than DST does, often dividing daylight into 12 hours regardless of day length, so that each daylight hour became progressively longer during spring and shorter during autumn. For example, the Romans kept time with water clocks that had different scales for different months of the year; at Rome's latitude, the third hour from sunrise (hora tertia) started at 09:02 solar time and lasted 44 minutes at the winter solstice, but at the summer solstice it started at 06:58 and lasted 75 minutes. From the 14th century onward, equal-length civil hours supplanted unequal ones, so civil time no longer varied by season. Unequal hours are still used in a few traditional settings, such as monasteries of Mount Athos and in Jewish ceremonies.
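The unequal-hours scheme is easy to reproduce: divide the interval from sunrise to sunset into twelve parts, whatever its length. A sketch in Python, with illustrative (not historical) sunrise and sunset times chosen to roughly match the Roman figures above:

```python
# Sketch: Roman-style "unequal hours" divide daylight into 12 parts,
# so an hour's length varies with the season. The sunrise/sunset
# times below are illustrative, not historical data.

from datetime import datetime, timedelta

def daylight_hour(sunrise: datetime, sunset: datetime) -> timedelta:
    """Length of one of the twelve daylight hours."""
    return (sunset - sunrise) / 12

winter = daylight_hour(datetime(2024, 12, 21, 7, 35),
                       datetime(2024, 12, 21, 16, 40))
summer = daylight_hour(datetime(2024, 6, 21, 4, 35),
                       datetime(2024, 6, 21, 19, 45))
print(winter)  # ~45 minutes per daylight hour at the winter solstice
print(summer)  # ~76 minutes per daylight hour at the summer solstice
```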
Benjamin Franklin published the proverb "early to bed and early to rise makes a man healthy, wealthy, and wise", and published a letter in the Journal de Paris when he was an American envoy to France (1776–1785) suggesting that Parisians economize on candles by rising earlier to use morning sunlight. This 1784 satire proposed taxing window shutters, rationing candles, and waking the public by ringing church bells and firing cannons at sunrise. Despite common misconception, Franklin did not actually propose DST; 18th-century Europe did not even keep precise schedules. However, this changed as rail transport and communication networks required a standardization of clocks unknown in Franklin's day.
In 1810, the Spanish National Assembly Cortes of Cádiz issued a regulation that moved certain meeting times forward by one hour from 1 May to 30 September in recognition of seasonal changes, but it did not change the clocks. It also acknowledged that private businesses were in the practice of changing their opening hours to suit daylight conditions, but they did so of their own volition.
New Zealand entomologist George Hudson first proposed modern DST. His shift-work job gave him spare time to collect insects and led him to value after-hours daylight. In 1895, he presented a paper to the Wellington Philosophical Society proposing a two-hour daylight-saving shift, and considerable interest was expressed in Christchurch; he followed up with an 1898 paper. Many publications credit the DST proposal to prominent English builder and outdoorsman William Willett, who independently conceived DST in 1907 during a pre-breakfast ride when he observed how many Londoners slept through a large part of a summer day. Willett also was an avid golfer who disliked cutting short his round at dusk. His solution was to advance the clock during the summer, and he published the proposal two years later. Liberal Party member of parliament Robert Pearce took up the proposal, introducing the first Daylight Saving Bill to the British House of Commons on 12 February 1908. A select committee was set up to examine the issue, but Pearce's bill did not become law and several other bills failed in the following years. Willett lobbied for the proposal in the UK until his death in 1915.
Port Arthur, Ontario, Canada, was the first city in the world to enact DST, on 1 July 1908. It was followed by Orillia, Ontario, where DST was introduced by William Sword Frost while mayor from 1911 to 1912. The first states to adopt DST nationally were those of the German Empire and its World War I ally Austria-Hungary, commencing on 30 April 1916, as a way to conserve coal during wartime. Britain, most of its allies, and many European neutrals soon followed. Russia and a few other countries waited until the next year, and the United States adopted daylight saving in 1918. Most jurisdictions abandoned DST in the years after the war ended in 1918, with exceptions including Canada, the United Kingdom, France, Ireland, and the United States. It became common during World War II (some countries adopted double summer time), was standardized in the US by federal law in 1966, and was widely adopted in Europe from the 1970s as a result of the 1970s energy crisis. Since then, the world has seen many enactments, adjustments, and repeals.
It is a common myth in the United States that DST was first implemented for the benefit of farmers. In reality, farmers have been one of the strongest lobbying groups against DST since it was first implemented. The factors that influence farming schedules, such as morning dew and dairy cattle's readiness to be milked, are ultimately dictated by the sun, so the clock change introduces unnecessary challenges.
DST was first implemented in the US with the Standard Time Act of 1918, a wartime measure for seven months during World War I in the interest of adding more daylight hours to conserve energy resources. Year-round DST, or "War Time", was implemented again during World War II. After the war, local jurisdictions were free to choose if and when to observe DST until the Uniform Time Act which standardized DST in 1966. Permanent daylight saving time was enacted for the winter of 1974, but there were complaints of children going to school in the dark and working people commuting and starting their work day in pitch darkness during the winter, and it was repealed a year later.
Year-round daylight time has been adopted by the Canadian province of Saskatchewan, except Lloydminster and area.
Procedure
The relevant authorities usually schedule clock changes to occur at (or soon after) midnight and on a weekend, in order to lessen disruption to weekday schedules. A one-hour change is usual, but twenty-minute and two-hour changes have been used in the past. Notable exceptions today include Lord Howe Island, with a thirty-minute change, and the Troll research station in Antarctica, which since 2016 has shifted two hours directly between CEST and GMT.
In all countries that observe daylight saving time seasonally (i.e., during summer and not winter), the clock is advanced from standard time to daylight saving time in the spring, and it is turned back from daylight saving time to standard time in the autumn.
For a midnight change in spring, a digital display of local time would appear to jump from 23:59:59.9 to 01:00:00.0. For the same clock in autumn, the local time would appear to repeat the hour preceding midnight, i.e. it would jump from 23:59:59.9 to 23:00:00.0.
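To make the transitions concrete, the following minimal Python sketch (assuming Python 3.9+ for the standard-library zoneinfo module, and using the 2024 United States transition dates) converts UTC instants on either side of each transition into US Eastern local time, showing the skipped hour in spring and the repeated hour in autumn:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

eastern = ZoneInfo("America/New_York")

# Spring 2024: at 07:00 UTC, US Eastern clocks jump from 02:00 EST to 03:00 EDT.
for utc in (datetime(2024, 3, 10, 6, 59, tzinfo=timezone.utc),
            datetime(2024, 3, 10, 7, 0, tzinfo=timezone.utc)):
    print(utc.astimezone(eastern))  # 01:59 EST, then 03:00 EDT

# Autumn 2024: at 06:00 UTC, clocks fall back from 02:00 EDT to 01:00 EST,
# so the hour from 01:00 to 01:59 occurs twice on local clocks.
for utc in (datetime(2024, 11, 3, 5, 59, tzinfo=timezone.utc),
            datetime(2024, 11, 3, 6, 0, tzinfo=timezone.utc)):
    print(utc.astimezone(eastern))  # 01:59 EDT, then 01:00 EST
```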
In most countries that observe seasonal daylight saving time, clocks revert in winter to "standard time". An exception exists in Ireland, where its winter clock has the same offset (UTC+00:00) and legal name as that in Britain (Greenwich Mean Time)—but while its summer clock also has the same offset as Britain's (UTC+01:00), its legal name is confusingly called Irish Standard Time as opposed to British Summer Time.
Since 2019, Morocco has observed daylight saving time every month but Ramadan. During the holy month (the date of which is determined by the lunar calendar and thus moves annually with regard to the Gregorian calendar), the country's civil clocks observe Western European Time (UTC+00:00, which geographically overlaps most of the nation). At the close of that month, its clocks are turned forward again to Western European Summer Time (UTC+01:00).
The time at which to change clocks differs across jurisdictions. Members of the European Union conduct a coordinated change, changing all zones at the same instant, at 01:00 Coordinated Universal Time (UTC), which means it changes at 02:00 Central European Time (CET), equivalent to 03:00 Eastern European Time (EET). As a result, the time differences across European time zones remain constant. In North America, coordination differs: each jurisdiction changes at 02:00 by its own local clock, which temporarily creates an imbalance with the neighboring time zone (until that zone adjusts its clocks one hour later). For example, for one hour in the spring, Mountain Time is two hours ahead of Pacific Time instead of the usual one hour, and for one hour in the autumn it is briefly zero hours ahead.
The dates on which clocks change vary with location and year; consequently, the time differences between regions also vary throughout the year. For example, Central European Time is usually six hours ahead of North American Eastern Time, except for a few weeks in March and October/November, while the United Kingdom and mainland Chile could be five hours apart during the northern summer, three hours during the southern summer, and four hours for a few weeks per year. Since 1996, European Summer Time has been observed from the last Sunday in March to the last Sunday in October; previously the rules were not uniform across the European Union. Starting in 2007, most of the United States and Canada observed DST from the second Sunday in March to the first Sunday in November, almost two-thirds of the year. Moreover, the beginning and ending dates are roughly reversed between the northern and southern hemispheres because spring and autumn are displaced six months. For example, mainland Chile observes DST from the second Saturday in October to the second Saturday in March, with transitions at the local clock's 24:00. In some countries, clocks are governed by regional jurisdictions within the country such that some jurisdictions change and others do not; this is currently the case in Australia, Canada, and the United States.
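Rules like "the second Sunday in March" reduce to simple calendar arithmetic. The sketch below is illustrative only; the helper nth_weekday is a hypothetical name, not a standard-library function:

```python
import calendar
from datetime import date

def nth_weekday(year: int, month: int, weekday: int, n: int) -> date:
    """Date of the n-th given weekday (Monday=0 .. Sunday=6) in a month."""
    first = date(year, month, 1)
    offset = (weekday - first.weekday()) % 7  # days until first such weekday
    return date(year, month, 1 + offset + 7 * (n - 1))

# US rules since 2007: DST from the second Sunday in March
# to the first Sunday in November.
print(nth_weekday(2025, 3, calendar.SUNDAY, 2))   # 2025-03-09
print(nth_weekday(2025, 11, calendar.SUNDAY, 1))  # 2025-11-02
```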
From year to year, the dates on which to change clocks may also move for political or social reasons. The Uniform Time Act of 1966 formalized the United States' period of daylight saving time observation as lasting six months (it was previously declared locally); this period was extended to seven months in 1986, and then to eight months in 2005. The 2005 extension was motivated in part by lobbyists from the candy industry, seeking to increase profits by including Halloween (31 October) within the daylight saving time period. In recent history, Australian state jurisdictions not only changed at different local times but sometimes on different dates. For example, in 2008 most states there that observed daylight saving time changed clocks forward on 5 October, but Western Australia changed on 26 October.
Politics, religion and sport
The concept of daylight saving has caused controversy since its early proposals. Winston Churchill argued that it enlarges "the opportunities for the pursuit of health and happiness among the millions of people who live in this country" and pundits have dubbed it "Daylight Slaving Time". Retailing, sports, and tourism interests have historically favored daylight saving, while agricultural and evening-entertainment interests (and some religious groups) have opposed it; energy crises and war prompted its initial adoption.
Willett's 1907 proposal illustrates several political issues. It attracted many supporters, including Arthur Balfour, Churchill, David Lloyd George, Ramsay MacDonald, King Edward VII (who used half-hour DST or "Sandringham time" at Sandringham), the managing director of Harrods, and the manager of the National Bank Ltd. However, the opposition proved stronger, including Prime Minister H. H. Asquith, William Christie (the Astronomer Royal), George Darwin, Napier Shaw (director of the Meteorological Office), many agricultural organizations, and theatre-owners. After many hearings, a parliamentary committee vote narrowly rejected the proposal in 1909. Willett's allies introduced similar bills every year from 1911 through 1914, to no avail. People in the US demonstrated even more skepticism; Andrew Peters introduced a DST bill to the House of Representatives in May 1909, but it soon died in committee.
Germany and its allies led the way in introducing DST during World War I on 30 April 1916, aiming to alleviate hardships due to wartime coal shortages and air-raid blackouts. The political equation changed in other countries; the United Kingdom used DST first on 21 May 1916. US retailing and manufacturing interests—led by Pittsburgh industrialist Robert Garland—soon began lobbying for DST, but railroads opposed the idea. The US' 1917 entry into the war overcame objections, and DST started in 1918.
The end of World War I brought a change in DST use. Farmers continued to dislike DST, and many countries repealed it—like Germany itself, which dropped DST from 1919 to 1939 and from 1950 to 1979. Britain proved an exception; it retained DST nationwide but adjusted transition dates over the years for several reasons, including special rules during the 1920s and 1930s to avoid clock shifts on Easter mornings. Under a European Community directive, summer time now begins annually on the last Sunday in March, which may be Easter Sunday (as in 2016). In the US, Congress repealed DST after 1919. President Woodrow Wilson—an avid golfer like Willett—vetoed the repeal twice, but his second veto was overridden. Only a few US cities retained DST locally, including New York (so that its financial exchanges could maintain an hour of arbitrage trading with London), and Chicago and Cleveland (to keep pace with New York). Wilson's successor as president, Warren G. Harding, opposed DST as a "deception", reasoning that people should instead get up and go to work earlier in the summer. He ordered District of Columbia federal employees to start work at 8 am rather than 9 am during the summer of 1922. Some businesses followed suit, though many others did not; the experiment was not repeated.
Since Germany's adoption of DST in 1916, the world has seen many enactments, adjustments, and repeals of DST, with similar politics involved. The history of time in the United States features DST during both world wars, but no standardization of peacetime DST until 1966. St. Paul and Minneapolis, Minnesota, kept different clocks for two weeks in May 1965: the capital city decided to switch to daylight saving time, while Minneapolis opted to follow the later date set by state law. In the mid-1980s, Clorox and 7-Eleven provided the primary funding for the Daylight Saving Time Coalition behind the 1987 extension to US DST. Both senators from Idaho, Larry Craig and Mike Crapo, voted for it based on the premise that fast-food restaurants sell more French fries (made from Idaho potatoes) during DST.
A referendum on the introduction of daylight saving took place in Queensland, Australia, in 1992, after a three-year trial of daylight saving. It was defeated with a 54.5% "no" vote, with regional and rural areas strongly opposed, and those in the metropolitan southeast in favor.
In 2003, the United Kingdom's Royal Society for the Prevention of Accidents supported a proposal to observe year-round daylight saving time, but it has been opposed by some industries, by some postal workers and farmers, and particularly by those living in the northern regions of the UK.
In 2005, the Sporting Goods Manufacturers Association and the National Association of Convenience Stores successfully lobbied for the 2007 extension to US DST.
In December 2008, the Daylight Saving for South East Queensland (DS4SEQ) political party was officially registered in Queensland, advocating the implementation of a dual-time-zone arrangement for daylight saving in South East Queensland, while the rest of the state maintained standard time. DS4SEQ contested the March 2009 Queensland state election with 32 candidates and received one percent of the statewide primary vote, equating to around 2.5% across the 32 electorates contested. After a three-year trial, more than 55% of Western Australians voted against DST in 2009, with rural areas strongly opposed. Queensland Independent member Peter Wellington introduced the Daylight Saving for South East Queensland Referendum Bill 2010 into the Queensland parliament on 14 April 2010, after being approached by the DS4SEQ political party, calling for a referendum at the next state election on the introduction of daylight saving into South East Queensland under a dual-time-zone arrangement. The Queensland parliament rejected Wellington's bill on 15 June 2011.
Russia declared in 2011 that it would stay in DST all year long (UTC+4:00) and Belarus followed with a similar declaration. (The Soviet Union had operated under permanent "summer time" from 1930 to at least 1982.) Russia's plan generated widespread complaints due to the dark of winter-time mornings, and thus was abandoned in 2014. The country changed its clocks to standard time (UTC+3:00) on 26 October 2014, intending to stay there permanently.
In the United States, Arizona (with the exception of the Navajo Nation), Hawaii, and the five populated territories (American Samoa, Guam, Puerto Rico, the Northern Mariana Islands, and the US Virgin Islands) do not participate in daylight saving time. Indiana only began participating in daylight saving time as recently as 2006. Since 2018, Florida Republican Senator Marco Rubio has repeatedly filed bills to extend daylight saving time permanently into winter, without success.
Mexico observed summertime daylight saving time starting in 1996. In late 2022, the nation's clocks "fell back" for the last time, in restoration of permanent standard time.
Religion
Some religious groups and individuals have opposed DST on religious grounds. For religious Muslims and Jews it makes religious practices such as prayer and fasting more difficult or inconvenient.
Some Muslim countries, such as Morocco, have temporarily abandoned DST during Ramadan.
In Israel, DST has been a point of contention between the religious and the secular, resulting in fluctuations over the years and a shorter DST period than in the EU and US. Religious Jews prefer a shorter DST period because DST delays scheduled morning prayers, which then conflict with standard working and business hours. Additionally, DST is ended before Yom Kippur (a 25-hour fast that begins and ends at sunset, much of which is spent praying in synagogue), since under DST the fast would end later in the evening, which many feel makes the day more difficult.
In the US, Orthodox Jewish groups have opposed extensions to DST, as well as a 2022 bipartisan bill that would make DST permanent, saying it will "interfere with the ability of members of our community to engage in congregational prayers and get to their places of work on time."
Effects
Effects on electricity consumption
Proponents of DST generally argue that it saves energy, promotes outdoor leisure activity in the evening (in summer) and is therefore good for physical and psychological health, reduces traffic accidents, reduces crime, or is good for business. Opponents argue that the evidence for actual energy savings is inconclusive.
Although energy conservation remains a stated goal, energy usage patterns have changed greatly since DST was first adopted. Electricity use is strongly affected by geography, climate, and economics, so the results of a study conducted in one place may not apply to another country or climate.
A 2017 meta-analysis of 44 studies found that DST leads to electricity savings of 0.3% during the days when DST applies. Several studies have suggested that DST increases motor fuel consumption, but a 2008 United States Department of Energy report found no significant increase in motor gasoline consumption due to the 2007 United States extension of DST. An early goal of DST was to reduce evening usage of incandescent lighting, once a primary use of electricity.
Economic effects
It has been argued that clock shifts correlate with decreased economic efficiency, and that in 2000 the daylight-saving effect implied an estimated one-day loss of $31 billion on US stock exchanges. Others have asserted that the observed results depend on methodology and have disputed the findings, though the original authors have rebutted the points raised by those disputing them.
Effects on health
Clock shifts have measurable adverse effects on human health. They have been shown to disrupt human circadian rhythms, negatively affecting health in the process, and the yearly DST clock shifts can increase health risks such as heart attacks and traffic accidents.
A 2017 study in the American Economic Journal: Applied Economics estimated that "the transition into DST caused over 30 deaths at a social cost of $275 million annually", primarily by increasing sleep deprivation.
A correlation between clock shifts and an increase in traffic accidents has been observed in North America and the UK, but not in Finland or Sweden. Four reports have found that this effect is smaller than the overall reduction in traffic fatalities. According to data shared by Titan Casket, hospitals see a 24% increase in heart attacks and a 6% increase in fatal crashes each year when the time changes. In 2018, the European Parliament, reviewing a possible abolition of DST, approved a more in-depth evaluation of the disruption of the human body's circadian rhythms; it provided evidence suggesting an association between DST clock shifts and a modest increase in the occurrence of acute myocardial infarction, especially in the first week after the spring shift. However, a Netherlands study found, contrary to the majority of investigations, little or no effect. Year-round standard time (not year-round DST) is proposed by some as the preferred option for public health and safety. Clock shifts have been found to increase the risk of heart attack by 10 percent, and to disrupt sleep and reduce its efficiency. Effects on seasonal adaptation of the circadian rhythm can be severe and last for weeks.
Effects on social relations
DST hurts prime-time television broadcast ratings, drive-ins and other theaters. Artificial outdoor lighting has a marginal and sometimes even contradictory influence on crime and fear of crime.
Later sunsets from DST are thought to affect behavior; for example, increasing participation in after-school sports programs or outdoor afternoon sports such as golf, and attendance at professional sporting events. Advocates of daylight saving time argue that having more hours of daylight between the end of a typical workday and evening induces people to consume other goods and services.
In 2022, a publication of three replicating studies (within individuals, between individuals, and across societies) demonstrated that sleep loss reduces the human motivation to help others; its fMRI findings link this to "deactivation of key nodes within the social cognition brain network that facilitates prosociality." Furthermore, through analysis of over three million real-world charitable donations, the authors found that the sleep loss inflicted by the transition to daylight saving time reduced altruistic giving compared with controls (states not implementing DST). They conclude that the effects on civil society are "non-trivial".
Another study, which also examined sleep loss due to the shift to daylight saving time in the spring, analyzed archival data on judicial punishments imposed by US federal courts and found that sleep-deprived judges imposed more severe penalties.
Inconvenience
DST's clock shifts have the disadvantage of complexity. People must remember to change their clocks; this can be time-consuming, particularly for mechanical clocks that cannot be moved backward safely. People who work across time zone boundaries need to keep track of multiple DST rules, as not all locations observe DST or observe it the same way. The length of the calendar day becomes variable; it is no longer always 24 hours. Disruption to meetings, travel, broadcasts, billing systems, and records management is common, and can be expensive. During an autumn transition from 02:00 to 01:00, a clock shows local times from 01:00:00 through 01:59:59 twice, possibly leading to confusion.
Many farmers oppose DST, particularly dairy farmers as the milking patterns of their cows do not change with the time, and others whose hours are set by the sun. There is concern for schoolchildren who are out in the darkness during the morning due to late sunrises.
Remediation
Some clock-shift problems could be avoided by adjusting clocks continuously or at least more gradually—for example, Willett at first suggested weekly 20-minute transitions—but this would add complexity and has never been implemented. DST inherits and can magnify the disadvantages of standard time. For example, when reading a sundial, one must compensate for it along with time zone and natural discrepancies. Also, sun-exposure guidelines such as avoiding the sun within two hours of noon become less accurate when DST is in effect.
Terminology
As explained by Richard Meade in the English Journal of the (American) National Council of Teachers of English, the form daylight savings time (with an "s") was already much more common than the older form daylight saving time in American English by 1978 ("the change has been virtually accomplished"). Nevertheless, dictionaries such as Merriam-Webster's, American Heritage, and Oxford, which describe actual usage rather than prescribing it (and therefore also list the newer form), still list the older form first, because it remains very common in print and is preferred by many editors ("Although daylight saving time is considered correct, daylight savings time (with an "s") is commonly used"). The first two words are sometimes hyphenated (daylight-saving(s) time). Merriam-Webster's also lists the forms daylight saving, daylight savings (both without "time"), and daylight time. The Oxford Dictionary of American Usage and Style explains the development and current situation as follows: "Although the singular form daylight saving time is the original one, dating from the early 20th century—and is preferred by some usage critics—the plural form is now extremely common in AmE. [...] The rise of daylight savings time appears to have resulted from the avoidance of a miscue: when saving is used, readers might puzzle momentarily over whether saving is a gerund (the saving of daylight) or a participle (the time for saving). [...] Using savings as the adjective—as in savings account or savings bond—makes perfect sense. More than that, it ought to be accepted as the better form."
In Britain, Willett's 1907 proposal used the term daylight saving, but by 1911 the term summer time had replaced daylight saving time in draft legislation. The same or similar expressions are used in many other languages: Sommerzeit in German, zomertijd in Dutch, kesäaika in Finnish, horario de verano or hora de verano in Spanish, and heure d'été in French.
The name of local time typically changes when DST is observed. American English replaces standard with daylight: for example, Pacific Standard Time (PST) becomes Pacific Daylight Time (PDT). In the United Kingdom, the standard term for UK time when advanced by one hour is British Summer Time (BST), and British English typically inserts summer into other time zone names, e.g. Central European Time (CET) becomes Central European Summer Time (CEST).
In North American English, people use the mnemonic "spring forward, fall back" (also "spring ahead ...", "spring up ...", and "... fall behind") to remember the direction in which to shift the clocks.
Computing
Changes to DST rules cause problems in existing computer installations. For example, the 2007 change to DST rules in North America required that many computer systems be upgraded, with the greatest onus on e-mail and calendar programs. The upgrades required a significant effort by corporate information technologists.
Some applications standardize on UTC to avoid problems with clock shifts and time zone differences. Likewise, most modern operating systems internally handle and store all times as UTC and only convert to local time for display. However, even if UTC is used internally, the systems still require external leap second updates and time zone information to correctly calculate local time as needed. Many systems in use today base their date/time calculations on data derived from the tz database, also known as zoneinfo.
IANA time zone database
The tz database maps a name to the named location's historical and predicted clock shifts. This database is used by many computer software systems, including most Unix-like operating systems, Java, and the Oracle RDBMS; HP's "tztab" database is similar but incompatible. When temporal authorities change DST rules, zoneinfo updates are installed as part of ordinary system maintenance. In Unix-like systems the TZ environment variable specifies the location name, as in TZ=':America/New_York'. In many of those systems there is also a system-wide setting that is applied if the TZ environment variable is not set: this setting is controlled by the contents of the /etc/localtime file, which is usually a symbolic link or hard link to one of the zoneinfo files. Internal time is stored in time-zone-independent Unix time; the TZ setting is used by each of potentially many simultaneous users and processes to independently localize the display of times.
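As an illustration of that separation, here is a minimal Python sketch (assuming Python 3.9+, whose standard zoneinfo module reads the installed tz database): a single zone-independent instant, stored as Unix time, is localized independently for display in two tz-database zones:

```python
import time
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # reads the installed tz (zoneinfo) database

# One absolute instant, stored zone-independently as Unix time.
ts = time.time()
instant = datetime.fromtimestamp(ts, tz=timezone.utc)

# Localized independently for display, using tz database location names.
for name in ("America/New_York", "Europe/Paris"):
    print(name, instant.astimezone(ZoneInfo(name)).strftime("%Y-%m-%d %H:%M %Z"))
```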
Older or stripped-down systems may support only the TZ values required by POSIX, which specify at most one start and end rule explicitly in the value. For example, TZ='EST5EDT,M3.2.0/02:00,M11.1.0/02:00' specifies time for the eastern United States starting in 2007. Such a TZ value must be changed whenever DST rules change, and the new value applies to all years, mishandling some older timestamps.
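For example, such a POSIX value can be exercised directly with Python's standard os and time modules; this is only a sketch, and time.tzset() is available on Unix-like systems only:

```python
import os
import time

# Install the POSIX rule quoted above, then have the C library re-read it.
os.environ["TZ"] = "EST5EDT,M3.2.0/02:00,M11.1.0/02:00"
time.tzset()  # Unix-like systems only

print(time.strftime("%Y-%m-%d %H:%M %Z"))  # "EDT" in summer, "EST" in winter
print(time.tzname)                         # ('EST', 'EDT')
```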
Opposition to clock changes
A move to permanent daylight saving time (staying on summer hours all year with no clock shifts) is sometimes advocated and is currently implemented in some jurisdictions, such as Argentina, Belarus, Iceland, Kyrgyzstan, Morocco, Namibia, Saskatchewan, Singapore, Syria, Turkey, Turkmenistan, Uzbekistan and Yukon. Although Saskatchewan follows Central Standard Time, its capital city Regina experiences solar noon close to 13:00, in effect putting the city on permanent daylight time. Similarly, Yukon is classified as being in the Mountain Time Zone but observes what amounts to permanent Pacific Daylight Time, to align with the Pacific time zone in summer; local solar noon in the capital Whitehorse occurs nearer to 14:00, in effect putting Whitehorse on "double daylight time".
The United Kingdom and Ireland put clocks forward by an extra hour during World War II and experimented with year-round summer time between 1968 and 1971. Russia switched to permanent DST from 2011 to 2014, but the move proved unpopular because of the extremely late winter sunrises; in 2014, Russia switched permanently back to standard time. However, the change to permanent DST has proven popular in Turkey, with the Minister of Energy and Natural Resources saying the practice saves "millions in energy costs and reduces depression and anxiety levels associated with short exposure to daylight".
In September 2018, the European Commission proposed to end seasonal clock changes as of 2019. Member states would have the option of observing either daylight saving time all year round or standard time all year round. In March 2019, the European Parliament approved the commission's proposal, while deferring implementation from 2019 until 2021. In response to this proposition, the European Sleep Research Society stated that "installing permanent Central European Time (CET, standard time or 'wintertime') is the best option for public health." To date, the decision has not been confirmed by the Council of the European Union. The council has asked the commission to produce a detailed assessment of its effects, but the commission considers that the onus is on the member states to find a common position in council. As a result, progress on the issue is effectively blocked.
In the United States, several states have enacted legislation to implement permanent DST, but the bills would require Congress to change federal law in order to take effect. The Uniform Time Act of 1966 permits states to opt out of DST and observe permanent standard time, but it does not permit permanent DST. Florida senator Marco Rubio in particular has promoted changing the federal law to implement permanent DST, with the support of the Florida Chamber of Commerce, which seeks to boost evening revenue. In 2022, Rubio's "Sunshine Protection Act" passed the United States Senate without committee review by unanimous consent, with many senators afterward stating they were unaware of the vote or its topic. The bill was stopped in the US House, where questions were raised as to whether permanent DST or standard time would be more beneficial.
Advocates cite the same advantages as normal DST without the problems associated with the twice yearly clock shifts. Additional benefits have also been cited, including safer roadways, boosting the tourism industry, and energy savings. Detractors cite the relatively late sunrises, particularly in winter, that year-round DST entails.
Some experts in circadian rhythms and sleep health recommend year-round standard time as the preferred option for public health and safety. However, some experts state that permanent daylight saving time is still a better option when compared to annual clock changes. Several chronobiology societies have published position papers against adopting DST permanently. A paper by the Society for Research on Biological Rhythms states: "based on comparisons of large populations living in DST or ST or on western versus eastern edges of time zones, the advantages of permanent ST outweigh switching to DST annually or permanently." The World Federation of Societies for Chronobiology recommended "reassigning countries and regions to their actual sun-clock based time zones" and held the position of being "against the switching between DST and Standard Time and even more so against adopting DST permanently." The American Academy of Sleep Medicine (AASM) holds the position that "seasonal time changes should be abolished in favor of a fixed, national, year-round standard time," and that "standard time is a better option than daylight saving time for our health, mood and well-being." Their position was endorsed by 20 other organizations, including the American College of Chest Physicians, National Safety Council, and National PTA.
Current public opinion polls show mixed results. Surveys reported between 2021 and 2022 by the National Sleep Foundation, YouGov, CBS, and Monmouth University indicate more Americans would prefer permanent DST. A 2019 survey by the National Opinion Research Center and a 2021 survey by the Associated Press indicate more Americans would prefer permanent Standard Time. The National Sleep Foundation, YouGov, and Monmouth University polls leaned significantly in favor of seeing daylight saving time made permanent. The Monmouth University poll reported 44% preferring year-round DST and 13% preferring year-round standard time. The NORC at the University of Chicago found 79% of those interviewed to be in favor of permanent DST during the Oil Crisis in December 1973; 42% of poll takers supported it the following February.
Low Earth orbit
A low Earth orbit (LEO) is an orbit around Earth with a period of 128 minutes or less (making at least 11.25 orbits per day) and an eccentricity less than 0.25. Most of the artificial objects in outer space are in LEO, peaking in number at an altitude around , while the farthest in LEO, before medium Earth orbit (MEO), have an altitude of 2,000 kilometers, about one-third of the radius of Earth and near the beginning of the inner Van Allen radiation belt.
The term LEO region is used for the area of space below an altitude of 2,000 km (about one-third of Earth's radius). Objects in orbits that pass through this zone, even if they have an apogee further out or are sub-orbital, are carefully tracked, since they present a collision risk to the many LEO satellites.
No human spaceflights other than the lunar missions of the Apollo program (1968–1972) have taken place beyond LEO. All space stations to date have operated within LEO.
Defining characteristics
A wide variety of sources define LEO in terms of altitude. The altitude of an object in an elliptic orbit can vary significantly along the orbit. Even for circular orbits, the altitude above ground can vary by as much as (especially for polar orbits) due to the oblateness of Earth's spheroid figure and local topography. While definitions based on altitude are inherently ambiguous, most of them fall within the range specified by an orbit period of 128 minutes because, according to Kepler's third law, this corresponds to a semi-major axis of about 8,400 km. For circular orbits, this in turn corresponds to an altitude of about 2,000 km above the mean radius of Earth, which is consistent with some of the upper altitude limits in some LEO definitions.
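These figures can be checked with Kepler's third law, a = (GM·T²/4π²)^(1/3); the short Python sketch below assumes only the conventional values of Earth's gravitational parameter and mean radius:

```python
import math

MU = 3.986004418e14   # Earth's standard gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

T = 128 * 60          # a 128-minute orbital period, in seconds
a = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)   # Kepler's third law

print(f"semi-major axis: {a / 1e3:,.0f} km")                      # ~8,400 km
print(f"circular-orbit altitude: {(a - R_EARTH) / 1e3:,.0f} km")  # ~2,000 km
```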
The LEO region is defined by some sources as the region in space that LEO orbits occupy. Some highly elliptical orbits may pass through the LEO region near their lowest altitude (or perigee) but are not in a LEO orbit because their highest altitude (or apogee) exceeds 2,000 km. Sub-orbital objects can also reach the LEO region but are not in a LEO orbit because they re-enter the atmosphere. The distinction between LEO orbits and the LEO region is especially important for analysis of possible collisions between objects which may not themselves be in LEO but could collide with satellites or debris in LEO orbits.
Orbital characteristics
The mean orbital velocity needed to maintain a stable low Earth orbit is about 7.8 km/s (28,000 km/h). However, this depends on the exact altitude of the orbit. Calculated for a circular orbit at an altitude of 200 km, the orbital velocity is 7.79 km/s, but for a higher orbit at 1,500 km the velocity is reduced to 7.12 km/s. The launch vehicle's delta-v needed to achieve low Earth orbit starts around 9.4 km/s.
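These speeds follow from the circular-orbit relation v = sqrt(GM/r). Below is a minimal sketch of that calculation; the value of GM is the conventional one, and the two altitudes are simply illustrative samples:

```python
import math

MU = 3.986004418e14   # Earth's standard gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

for alt_km in (200, 1500):            # illustrative sample altitudes
    r = R_EARTH + alt_km * 1e3        # orbital radius from Earth's center
    v = math.sqrt(MU / r)             # circular-orbit speed
    print(f"{alt_km:>5} km altitude: {v / 1e3:.2f} km/s")  # 7.79, then 7.12
```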
The pull of gravity in LEO is only slightly less than on the Earth's surface, because the distance to LEO from the Earth's surface is much less than the Earth's radius. However, an object in orbit is in permanent free fall around Earth; in the rotating frame of the orbit, the gravitational force is balanced by the centrifugal force. As a result, spacecraft in orbit continue to stay in orbit, and people inside or outside such craft continuously experience weightlessness.
Objects in LEO orbit Earth between the denser part of the atmosphere and below the inner Van Allen radiation belt. They encounter atmospheric drag from gases in the thermosphere (approximately 80–600 km above the surface) or exosphere (approximately 600 km and higher), depending on orbit height. Satellites in orbits that reach altitudes below decay quickly due to atmospheric drag.
Equatorial low Earth orbits (ELEO) are a subset of LEO. These orbits, with low orbital inclination, allow rapid revisit times over low-latitude locations on Earth. Prograde equatorial LEOs also have lower delta-v launch requirements because they take advantage of the Earth's rotation. Other useful LEO orbits, including polar orbits and Sun-synchronous orbits, have higher inclinations to the equator and provide coverage of higher latitudes on Earth. Some of the first generation of Starlink satellites used polar orbits, which provide coverage everywhere on Earth; later Starlink constellations orbit at a lower inclination and provide more coverage of populated areas.
Higher orbits include medium Earth orbit (MEO), sometimes called intermediate circular orbit (ICO), and further above, geostationary orbit (GEO). Orbits higher than low orbit can lead to early failure of electronic components due to intense radiation and charge accumulation.
In 2017, "very low Earth orbits" (VLEO) began to be seen in regulatory filings. These orbits, below about , require the use of novel technologies for orbit raising because they operate in orbits that would ordinarily decay too soon to be economically useful.
Use
A low Earth orbit requires the lowest amount of energy for satellite placement. It provides high bandwidth and low communication latency. Satellites and space stations in LEO are more accessible for crew and servicing.
Since it requires less energy to place a satellite into a LEO, and a satellite there needs less powerful amplifiers for successful transmission, LEO is used for many communication applications, such as the Iridium phone system. Some communication satellites instead use much higher geostationary orbits, moving at the same angular velocity as the Earth so as to appear stationary above one location on the planet.
Disadvantages
Unlike geosynchronous satellites, satellites in low orbit have a small field of view and can only observe and communicate with a fraction of the Earth at a given time. This means that a large network (or constellation) of satellites is required to provide continuous coverage.
Satellites at lower altitudes of orbit are in the atmosphere and suffer from rapid orbital decay, requiring either periodic re-boosting to maintain stable orbits, or the launching of replacements for those that re-enter the atmosphere. The effects of adding such quantities of vaporized metals to Earth's stratosphere are potentially of concern but currently unknown.
Examples
The International Space Station is in LEO about above the Earth's surface. The station’s orbit decays by about and consequently needs re-boosting a few times a year.
The Iridium telecom satellites orbit at about .
Earth observation satellites, also known as remote sensing satellites, including spy satellites and other Earth imaging satellites, use LEO as they are able to see the surface of the Earth more clearly by being closer to it. A majority of artificial satellites are placed in LEO. Satellites can also take advantage of consistent lighting of the surface below via Sun-synchronous LEO orbits at an altitude of about and near polar inclination. Envisat (2002–2012) is one example.
The Hubble Space Telescope orbits at about above Earth.
Satellite internet constellations such as Starlink.
The Chinese Tiangong space station was launched in April 2021 and currently orbits between above the Earth's surface.
The gravimetry mission GRACE-FO orbits at about as did its predecessor, GRACE.
Former
GOCE (2009-2013), an ESA gravimetry mission, orbited at about 255 km (158 mi).
Super Low Altitude Test Satellite (2017-2019), nicknamed Tsubame, orbited at , the lowest altitude ever among Earth observation satellites.
In fiction
In the film 2001: A Space Odyssey, Earth's transit station ("Space Station V") "orbited 300 km above Earth".
Space debris
The LEO environment is becoming congested with space debris because of the frequency of object launches. This has caused growing concern in recent years, since collisions at orbital velocities can be dangerous or deadly. Collisions can produce additional space debris, creating a domino effect known as Kessler syndrome. NASA's Orbital Debris Program tracks over 25,000 objects larger than 10 cm diameter in LEO, while the estimated number between 1 and 10 cm is 500,000, and the number of particles bigger than 1 mm exceeds 100 million. The particles travel at speeds up to , so even a small impact can severely damage a spacecraft.
Appian Way
The Appian Way (Latin and Italian: Via Appia) is one of the earliest and strategically most important Roman roads of the ancient republic. It connected Rome to Brindisi, in southeast Italy. Its importance is indicated by its common name, recorded by Statius, of Appia longarum... regina viarum ("the Appian Way, the queen of the long roads").
The road is named after Appius Claudius Caecus, the Roman censor who, during the Samnite Wars, began and completed the first section as a military road to the south in 312 BC.
In July 2024, the Appian Way was added to the UNESCO World Heritage List.
Origins
The need for roads
The Appian Way was a Roman road that the republic used, beginning in 312 BC, as a main route for military supplies during its conquest of southern Italy, and for improvements in communication.
The Appian Way was the first long road built specifically to transport troops outside the smaller region of greater Rome (this was essential to the Romans). The few roads outside the early city were Etruscan and went mainly to Etruria. By the late Republic, the Romans had expanded over most of Italy and were masters of road construction. Their roads began at Rome, where the master itinerarium, or list of destinations along the roads, was located, and extended to the borders of their domain – hence the expression, "All roads lead to Rome".
The Samnite Wars
Romans had an affinity for the people of Campania, who, like themselves, traced their backgrounds to the Etruscans. The Samnite Wars were instigated by the Samnites when Rome attempted to ally itself with the city of Capua in Campania. The Italic speakers in Latium had long ago been subdued and incorporated into the Roman state; they were responsible for changing Rome from a primarily Etruscan to a primarily Italic state.
Dense populations of sovereign Samnites remained in the mountains north of Capua, which is just north of the Greek city of Neapolis. Around 343 BC, Rome and Capua attempted to form an alliance. The Samnites reacted with military force.
The barrier of the Pontine Marshes
Between Capua and Rome lay the Pontine Marshes (Pomptinae paludes), a swamp infested with malaria. A tortuous coastal road wound between Ostia at the mouth of the Tiber and Neapolis. The Via Latina followed its ancient and scarcely more accessible path along the foothills of Monti Laziali and Monti Lepini, which are visible towering over the former marsh.
In the First Samnite War (343–341 BC) the Romans found they could not support or resupply troops in the field against the Samnites across the marsh. A revolt of the Latin League drained their resources further. They gave up the attempted alliance and settled with Samnium.
Colonization to the southeast
The Romans were only biding their time while they looked for a solution. The first answer was the colonia, a "cultivation" of settlers from Rome, who would maintain a permanent base of operations. The Second Samnite War (327–304 BC) erupted when Rome attempted to place a colony at Cales in 334 BC and again at Fregellae in 328 BC on the other side of the marshes. The Samnites, now a major power after defeating the Greeks of Tarentum, occupied Neapolis to try to ensure its loyalty. The Neapolitans appealed to Rome, which sent an army and expelled the Samnites from Neapolis.
Appius Claudius' beginning of the works
In 312 BC, Appius Claudius Caecus became censor at Rome. He was of the gens Claudia, who were patricians descended from the Sabines taken into the early Roman state. He had been given the name of the founding ancestor of the gens, Appius Claudius (Attus Clausus in Sabine). He was a populist, i.e., an advocate of the common people. A man of discernment and perception, in the years of his success he was said to have lost his outer vision and thus acquired the name Caecus, "blind".
Without waiting to be told what to do by the Senate, Appius Claudius began bold public works to address the supply problem. An aqueduct (the Aqua Appia) secured the water supply of the city of Rome. By far the best known project was the road, which ran across the Pontine Marshes to the coast northwest of Naples, where it turned north to Capua. On it, any number of fresh troops could be sped to the theatre of operations, and supplies could be moved en masse to Roman bases without hindrance by either enemy or terrain. It is no surprise that, after his term as censor, Appius Claudius became consul twice, subsequently held other offices, and was a respected consultant to the state even during his later years.
Success of the road
The road achieved its purpose. The outcome of the Second Samnite War was at last favorable to Rome. In a series of blows the Romans reversed their fortunes, bringing Etruria to the table in 311 BC, the very year of their revolt, and Samnium in 304 BC. The road was the main factor that allowed them to concentrate their forces with sufficient rapidity and to keep them adequately supplied, whereafter they became a formidable opponent.
Construction
The main part of the Appian Way was started and finished in 312 BC.
The road began as a leveled dirt road upon which small stones and mortar were laid. Gravel was laid upon this, and the whole was finally topped with tight-fitting, interlocking stones to provide a flat surface. The historian Procopius said that the stones fitted together so securely and closely that they appeared to have grown together rather than to have been fitted together. The road was cambered in the middle (for water runoff) and had ditches on either side, protected by retaining walls.
Between Rome and Lake Albano
The road began in the Forum Romanum, passed through the Servian Wall at the Porta Capena, went through a cutting in the clivus Martis, and left the city. For this stretch of the road, the builders used the Via Latina. The building of the Aurelian Wall centuries later required the placing of another gate, the Porta Appia. Outside of Rome the new Via Appia went through well-to-do suburbs along the Via Norba, the ancient track to the Alban hills, where Norba was situated. The road at the time was a via glarea, a gravel road. The Romans then built a high-quality road, with layers of cemented stone over a layer of small stones, cambered for drainage, with ditches on either side, low retaining walls on sunken portions, and dirt pathways for sidewalks. The Via Appia is believed to have been the first Roman road to feature the use of lime cement. The materials were volcanic rock. The surface was said to have been so smooth that the joints could not be distinguished. The Roman section still exists and is lined with monuments of all periods, although the cement has eroded out of the joints, leaving a very rough surface.
Across the marsh
The road concedes nothing to the Alban hills, but goes straight through them over cuts and fills. The gradients are steep. Then it enters the former Pontine Marshes. A stone causeway of about led across stagnant and foul-smelling pools blocked from the sea by sand dunes. Appius Claudius planned to drain the marsh, taking up earlier attempts, but he failed. The causeway and its bridges subsequently needed constant repair. In 162 BC, Marcus Cornelius Cathegus had a canal constructed along the road to relieve the traffic and provide an alternative when the road was being repaired. Romans preferred using the canal.
Along the coast
The Via Appia picked up the coastal road at Tarracina (Terracina). However, the Romans straightened it somewhat with cuttings, which form cliffs today. From there the road swerved north to Capua, where, for the time being, it ended. The Caudine Forks were not far to the north. The itinerary was Aricia (Ariccia), Tres Tabernae, Forum Appii, Tarracina, Fundi (Fondi), Formiae (Formia), Minturnae (Minturno), Suessa, Casilinum and Capua, but some of these were colonies added after the Samnite Wars. The distance was . The original road had no milestones, as they were not yet in use. A few survive from later times, including a first milestone near the Porta Appia.
Extension to Beneventum
The Third Samnite War (298–290 BC) is perhaps misnamed. It was an all-out attempt by all the neighbors of Rome: Italics, Etruscans and Gauls, to check the power of Rome. The Samnites were the leading people of the conspiracy. Rome dealt the northerners a crushing blow at the Battle of Sentinum in Umbria in 295. The Samnites fought on alone. Rome now placed 13 colonies in Campania and Samnium. It must have been during this time that they extended the Via Appia 35 miles beyond Capua past the Caudine Forks to a place the Samnites called Maloenton, "passage of the flocks". The itinerary added Calatia, Caudium and Beneventum (not yet called that). Here also ended the Via Latina.
Extension to Apulia and Calabria
By 290 BC, the sovereignty of the Samnites had ended. The heel of Italy lay open to the Romans. The dates are somewhat uncertain and there is considerable variation in the sources, but during the Third Samnite War the Romans seem to have extended the road to Venusia, where they placed a colony of 20,000 men. After that they were at Tarentum.
Roman expansion alarmed Tarentum, the leading city of the Greek presence (Magna Graecia) in southern Italy. They hired the mercenary King Pyrrhus of Epirus in neighboring Greece to fight the Romans on their behalf. In 280 BC the Romans suffered a defeat at the hands of Pyrrhus at the Battle of Heraclea on the coast west of Tarentum. The battle was costly for both sides, prompting Pyrrhus to remark "One more such victory and I am lost." Making the best of it, the Roman army turned on Greek Rhegium and effected a massacre of Pyrrhian partisans there.
Rather than pursue the Romans, Pyrrhus went straight for Rome along the Via Appia and then the Via Latina. He knew that if he continued on the Via Appia he could be trapped in the marsh. Wary of such entrapment on the Via Latina as well, he withdrew without fighting after encountering opposition at Anagni. Wintering in Campania, he withdrew to Apulia in 279 BC, where, pursued by the Romans, he won a second costly victory at the Battle of Asculum. Withdrawing from Apulia for a Sicilian interlude, he returned to Apulia in 275 BC and started for Campania up the Roman road.
Supplied by that same road, the Romans successfully defended the region against Pyrrhus, crushing his army in a two-day fight at the Battle of Beneventum in 275 BC. The Romans renamed the town from "Maleventum" ("site of bad events") to Beneventum ("site of good events") as a result. Pyrrhus withdrew to Greece, where he died in a street fight in Argos in 272 BC. Tarentum fell to the Romans that same year, who proceeded to consolidate their rule over all of Italy.
The Romans pushed the Via Appia to the port of Brundisium in 264 BC. The itinerary from Beneventum was now Aeculanum, Venusia, Silvium, Tarentum, Uria and Brundisium. The Roman Republic was, for the time being, the government of Italy. Appius Claudius died in 273 BC; although the road was extended a number of times, no one has ever tried to displace his name from it.
Rediscovery
The Appian Way's path across today's regions Lazio and Campania has always been well known, but the exact position of the part located in Apulia (the original one, not the extension by Trajan) was long unknown, since there were no visible remains of the Appian Way in that region.
In the first half of the 20th century, the professor of ancient Roman topography Giuseppe Lugli managed to discover, with the then-innovative technique of photogrammetry, what was probably the route of the Appian Way from Gravina in Puglia (Silvium) up to Taranto. When analysing aerophotogrammetric shots of the area, Lugli noticed a path named la Tarantina, whose direction was still largely influenced by the centuriation; this, according to Lugli, was the path of the Appian Way. This path, as well as the part located in today's Apulia region, was still in use in the Middle Ages. A further piece of evidence for Lugli's proposed path is the presence of a number of archaeological remains in that region, among them the ancient settlement of Jesce.
By studying the distances given in the Antonine Itinerary, Lugli also assigned the Appian Way stations Blera and Sublupatia (which also occurs on the Tabula Peutingeriana) respectively to the areas Murgia Catena and Taverna (between masseria (estate farmhouse) S. Filippo and masseria S. Pietro). However, the toponym Murgia Catena defined too large an area, not allowing a clear localization of the Appian Way station. More recently Luciano Piepoli, based on the distances given in the Antonine Itinerary and on newer archeological findings, has suggested that Silvium should be Santo Staso, an area very close to Gravina in Puglia, Blera should be masseria Castello, and Sublupatia should be masseria Caione.
Main branches
Since the latter stretch of the Appian Way was very difficult to traverse, several branches were created over time; finally, the emperor Trajan built the Via Traiana, a branch of the Via Appia from Beneventum that reached Brundisium via Canusium and Barium rather than via Tarentum. This was commemorated by an arch at Beneventum.
From Brundisium, travellers could cross the Adriatic Sea through the Strait of Otranto towards Albania, either landing at present-day Durrës and continuing along the Via Egnatia, or landing near the ancient town of Apollonia and continuing towards present-day Rrogozhinë in central Albania.
Notable historical events along the road
Crucifixion of Spartacus' army
In 73 BC, a slave revolt (known as the Third Servile War) began against the Romans under Spartacus, an ex-gladiator of Capua. Slaves accounted for roughly one in three people in Italy.
Spartacus defeated many Roman armies in a conflict that lasted for over two years. While trying to escape from Italy at Brundisium, he unwittingly moved his forces into the historic trap in Apulia and Calabria. The Romans were well acquainted with the region. Legions were brought home from abroad and Spartacus was pinned between armies. The ex-slave army was defeated at the Siler River by Marcus Licinius Crassus. Pompey's armies captured and killed several thousand rebels who escaped from the battle, and Crassus captured several thousand more. The Romans judged that the slaves had forfeited their right to live. In 71 BC, 6,000 slaves were crucified along the Via Appia from Rome to Capua.
World War II, Battle of Anzio
In January 1944, during World War II, the Allies fell into the same trap that Pyrrhus had withdrawn to avoid, in the Pomptine fields, the successor to the Pontine Marshes. The marsh remained, despite many efforts to drain it, until engineers working for Benito Mussolini finally succeeded (even so, the fields were infested with malarial mosquitoes until the advent of DDT in the 1950s).
Hoping to break a stalemate at Monte Cassino, the Allies landed on the coast of Italy at the Anzio–Nettuno area – ancient Antium – midway between Ostia and Terracina. They found the place undefended. They intended to move along the line of the Via Appia to take Rome, outflanking Monte Cassino, but they did not do so quickly enough. The Germans occupied the Monti Laziali and Monti Lepini along the track of the old Via Latina, from which they rained down shells on Anzio. Even though the Allies expanded into all of the Pomptine region, they gained no ground. The Germans counterattacked down the Via Appia from the Alban hills on a front four miles wide, but could not retake Anzio. The battle lasted for four months, one side being supplied by sea, the other by land through Rome. In May 1944, the Allies broke out of Anzio and took Rome. The German forces escaped to the north of Florence.
1960 Summer Olympics
For the 1960 Summer Olympics, the Appian Way served as part of the men's marathon course, which was won by Abebe Bikila of Ethiopia.
Main sights
Via Appia Antica
After the fall of the Western Roman Empire, the road fell out of use; Pope Pius VI ordered its restoration. A new Appian Way was built in parallel with the old one in 1784, as far as the Alban Hills region. The new road is the Via Appia Nuova ("New Appian Way"), as opposed to the old section, now known as the Via Appia Antica. The old Appian Way close to Rome is now a free tourist attraction. It was extensively restored for Rome's Millennium and Great Jubilee celebrations. The first are still heavily used by cars, buses and coaches, but from then on traffic is very light and the ruins can be explored on foot in relative safety. The Church of Domine Quo Vadis is in the second mile of the road. Along or close to the part of the road closest to Rome, there are three catacombs of Roman and early Christian origin and one of Jewish origin.
The construction of Rome's ring road, the Grande Raccordo Anulare or GRA, in 1951 caused the Appian Way to be cut in two. More recent improvements to the GRA have rectified this through the construction of a tunnel under the Appia, so that it is now possible to follow the Appia on foot for about from its beginning near the Baths of Caracalla.
Many parts of the original road beyond Rome's environs have been preserved, and some are now used by cars (for example, in the area of Velletri). The road inspired the last movement of Ottorino Respighi's Pini di Roma. To this day the Via Appia contains the longest stretch of straight road in Europe, totaling .
Monuments along the Via Appia
1st to 4th mile
Porta Appia (Porta San Sebastiano), the gate of the Aurelian Walls
Church of Domine Quo Vadis
Tomb of Priscilla
Catacomb of Callixtus
Hypogeum of Vibia
San Sebastiano fuori le mura
Catacombs of St Sebastian
Vigna Randanini Jewish catacombs
Circus of Maxentius
Tomb of Caecilia Metella
Roman baths of Capo di Bove
Tomb of Hilarus Fuscus
5th mile
Mausoleum of the Orazi and Curiazi
Villa dei Quintili, with nympheum, theatre, and baths
Mausoleum of Casal Rotondo
6th mile and beyond
Minucia tomb
Torre Selce
Temple of Hercules
Berrettia di Prete (tomb and later church)
Mausoleum of Gallienus
Tres Tabernae
Villa of Publius Clodius Pulcher (in the Villa Santa Caterina, owned by the Pontifical North American College), 14th mile
Villa of Pompey
Bridges along the road
There are the remains of several Roman bridges along the road, including the Ponte di Tre Ponti, Ponte di Vigna Capoccio, Viadotta di Valle Ariccia, Ponte Alto and Ponte Antico.
| Technology | Ground transportation networks | null |
47571 | https://en.wikipedia.org/wiki/American%20pickerel | American pickerel | The American pickerel (Esox americanus) is a medium-sized species of North American freshwater predatory fish belonging to the pike family. The genus Esox is placed in family Esocidae in order Esociformes.
Two subspecies are sometimes recognised:
Redfin pickerel, sometimes called the brook pickerel, E. americanus americanus Gmelin, 1789;
Grass pickerel, E. americanus vermiculatus Lesueur, 1846.
Lesueur originally classified the grass pickerel as E. vermiculatus, but it is now considered a subspecies of E. americanus.
There is no widely accepted English common collective name for the two E. americanus subspecies; "American pickerel" is a translation of the French systematic name brochet d'Amérique.
Description
The two subspecies are very similar, but the grass pickerel lacks the redfin's distinctive orange to red fin coloration. The former's fins have dark leading edges and amber to dusky coloration. In addition, the light areas between the dark bands are generally wider on the grass pickerel and narrower on the redfin pickerel. Record size grass and redfin pickerels can weigh around and reach lengths of around . Redfin and grass pickerels are typically smaller than chain pickerels, which can be much larger.
Distribution and habitats
The redfin and grass pickerels occur primarily in sluggish, vegetated waters of pools, lakes and wetlands, and are carnivorous predators feeding on smaller fish. However, larger fishes, such as the striped bass (Morone saxatilis), bowfin (Amia calva) and gray weakfish (Cynoscion regalis), prey on the pickerels in turn when the latter venture into larger rivers or estuaries.
The pickerels reproduce by scattering spherical, sticky eggs in shallow, heavily vegetated waters. The eggs hatch in 11–15 days; the adult pickerels guard neither the eggs nor the young.
Both subspecies are native to the freshwater bodies of North America, and are not to be confused with their more aggressive big cousin, the northern pike. The redfin pickerel's range extends from the Saint Lawrence basin in Quebec down to the Gulf Coast, from Mississippi to Florida; while the grass pickerel's range is further west, extending from the Great Lakes Basin, from Ontario to Michigan, down to the western Gulf Coast, from eastern Texas to Mississippi.
Fishing
The E. americanus subspecies are not as highly prized as a game fish as their larger cousins, the northern pike and muskellunge, but they are nevertheless caught by anglers. McClane's Standard Fishing Encyclopedia describes ultralight tackle as a sporty if overlooked method to catch these small but voracious pikes.
| Biology and health sciences | Esociformes | Animals |
47578 | https://en.wikipedia.org/wiki/Muskellunge | Muskellunge | The muskellunge (Esox masquinongy), often shortened to muskie, musky, ski, or lunge, is a species of large freshwater predatory fish native to North America. It is the largest member of the pike family, Esocidae.
Origin of name
The name "muskellunge" originates from the Ojibwe words maashkinoozhe meaning "great fish", mji-gnoozhe, maskinoše, or mashkinonge, meaning "bad pike", "big pike", or "ugly pike" respectively. The Algonquin word maskinunga is borrowed into the Canadian French words masquinongé or maskinongé. In English, before settling on the common name "muskellunge", there were at least 94 common names applied to this species, including but not limited to: muskelunge, muscallonge, muskallonge, milliganong, maskinonge, maskalonge, mascalonge, maskalung, muskinunge and masquenongez.
Description
Muskellunge closely resemble other esocids such as the northern pike (Esox lucius) and American pickerel (E. americanus) in both appearance and behavior. Like the northern pike and other aggressive pikes, the body plan is typical of ambush predators with an elongated body, flat head, and dorsal, pelvic, and anal fins set far back on the body. Muskellunge are typically long and weigh , though some have reached up to and almost . Martin Arthur Williamson caught a muskellunge with a weight of in November 2000 on Georgian Bay. The fish are a light silver, brown, or green, with dark vertical stripes on the flank, which may tend to break up into spots. In some cases, markings may be absent altogether, especially in fish from turbid waters. This is in contrast to northern pike, which have dark bodies with light markings. A reliable method to distinguish the two similar species is by counting the sensory pores on the underside of the mandible. A muskie will have seven or more per side, while the northern pike never has more than six. The lobes of the caudal (tail) fin in muskellunge come to a sharper point, while those of northern pike are more generally rounded. In addition, unlike pike, muskies have no scales on the lower half of their opercula.
Anglers seek large muskies as trophies or for sport. In places where muskie are not native, such as in Maine, anglers are encouraged not to release the fish back into the water because of their negative impact on native populations of trout and other smaller fish species.
Habitat
Muskellunge are found in oligotrophic and mesotrophic lakes and large rivers from northern Michigan, northern Wisconsin, and northern Minnesota through the Great Lakes region, Chautauqua Lake in western New York, north into Canada, throughout most of the St Lawrence River drainage, and northward throughout the upper Mississippi valley, although the species also extends as far south as Chattanooga in the Tennessee River valley. Also, a small population is found in the Broad River in South Carolina. Several North Georgia reservoirs also have healthy stocked populations of muskie. They are also found in the Red River drainage of the Hudson Bay basin. Muskie were introduced to the western Saint John River in the late 1960s and have now spread to many connecting waterways in northern Maine. The Pineview Reservoir is one of three Utah locations where the hybrid tiger muskellunge is found.
They prefer clear waters where they lurk along weed edges, rock outcrops, or other structures to rest. A fish forms two distinct home ranges in summer: a shallow range and a deeper one. The shallow range is generally much smaller than the deeper range due to shallow water heating up. A muskie continually patrols the ranges in search of available food in the appropriate conditions of water temperature.
Diet
Muskies are ambush predators who swiftly bite their prey and then swallow it head first. Muskellunge are the top predator in any body of water where they occur and will eat larger prey than most other freshwater fish. They eat all varieties of fish present in their ecosystem (including other muskellunge), along with the occasional insect, muskrat, rat, mouse, frog, or duck. They are capable of taking prey up to two-thirds of their body length due to their large stomachs. There have been reports of large muskellunge attacking small dogs and even humans, although most such reports are greatly exaggerated.
Length and weight
As muskellunge grow longer they increase in weight, but the relationship between length and weight is not linear. It can be expressed by a power-law equation of the form

W = cL^b

where W is weight and L is length. The exponent b is close to 3.0 for all species, and c is a constant for each species. For muskellunge, b = 3.325, higher than for many common species, and c = .
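To make the arithmetic concrete, here is a minimal Python sketch of the power law. Only the exponent b = 3.325 comes from the text above; the constant c is elided in the source, so the value below is a placeholder, not the published coefficient.

    def muskellunge_weight(length, c=1.0e-5, b=3.325):
        # Estimate weight from length via the power law W = c * L**b.
        # b = 3.325 is the exponent quoted for muskellunge; c is a
        # placeholder, since the source elides the real constant.
        return c * length ** b

    # Doubling the length multiplies the weight by 2**3.325, about 10x,
    # which is why long muskies are disproportionately heavy.
    print(muskellunge_weight(100.0) / muskellunge_weight(50.0))  # ~10.02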
According to the International Game Fish Association (IGFA) the largest muskellunge on record was caught by Cal Johnson in Lac Courte Oreilles (recognized as Lake Courte Oreilles by the association), Hayward, Wisconsin, United States, on July 24, 1949. The fish weighed and was in length, and in girth.
Behavior
Muskellunge are sometimes gregarious, forming small schools in distinct territories. Muskellunge feeding behavior is directly synchronized with the lunar cycle. During both full and new moons, an increase in feeding activity can be attributed to the increase of moonlight, as it most similarly simulates daytime feeding. They spawn in mid- to late spring, somewhat later than northern pike, over shallow, vegetated areas. A rock or sand bottom is preferred for spawning so the eggs do not sink into the mud and suffocate. The males arrive first and attempt to establish dominance over a territory. Spawning may last from five to 10 days and occurs mainly at night. The eggs are negatively buoyant and slightly adhesive; they adhere to plants and the bottom of the lake. Soon afterward, they are abandoned by the adults. Those embryos which are not eaten by fish, insects, or crayfish hatch within two weeks. The larvae live on yolk until the mouth is fully developed, when they begin to feed on copepods and other zooplankton. They soon begin to prey upon fish. Juveniles generally attain a length of by November of their first year.
Predators
Adult muskellunge are apex predators where they occur naturally. Only humans and (rarely) large birds of prey such as bald eagles (Haliaeetus leucocephalus) pose a threat to an adult, but juveniles are consumed by other muskies, northern pike, bass, trout, and occasionally birds of prey. The muskellunge's low reproductive rate and slow growth render populations highly vulnerable to overfishing. This has prompted some jurisdictions to institute artificial propagation programs in an attempt to sustain populations despite otherwise unsustainably high rates of angling effort and habitat destruction.
Subspecies and hybrids
Though interbreeding with other pike species can complicate the classification of some individuals, zoologists usually recognize up to three subspecies of muskellunge.
The Great Lakes Muskellunge or Spotted Muskellunge (E. m. masquinongy) is the most common variety in the Great Lakes basin and surrounding area. The spots on the body form oblique rows.
The Chautauqua Muskellunge or Barred Muskellunge (E. m. ohioensis) is known from the Ohio River system, Chautauqua Lake, Lake Ontario, and the St Lawrence River.
The Clear Muskellunge (E. m. immaculatus) is most common in the inland lakes of Wisconsin, Minnesota, northwestern Ontario, and southeastern Manitoba.
The tiger muskellunge (E. masquinongy × lucius or E. lucius × masquinongy) is a hybrid of the muskie and northern pike. Hybrids are sterile, although females sometimes unsuccessfully engage in spawning motions. Some hybrids are artificially produced and planted for anglers to catch. Tiger muskies grow faster than pure muskies, but do not attain the ultimate size of their pure relatives, as the tiger muskie does not live as long.
Attacks on humans
Although very rare, muskie attacks on humans do occur.
| Biology and health sciences | Esociformes | Animals |
47579 | https://en.wikipedia.org/wiki/Pollock | Pollock | Pollock or pollack is the common name used for either of the two species of North Atlantic marine fish in the genus Pollachius. Pollachius pollachius is referred to as "pollock" in North America, Ireland and the United Kingdom, while Pollachius virens is usually known as saithe or coley in Great Britain and Ireland (derived from the older name coalfish). Other names for P. pollachius include the Atlantic pollock, European pollock, lieu jaune, and lythe or lithe; while P. virens is also known as Boston blue (distinct from bluefish), silver bill, or saithe.
Species
The recognized species in this genus are:
Pollachius pollachius (Linnaeus, 1758) (pollack)
Pollachius virens (Linnaeus, 1758) (coalfish)
Description
Both species can grow to . P. virens can weigh up to and P. pollachius can weigh up to . P. virens has a strongly defined, silvery lateral line running down the sides. Above the lateral line, the colour is a greenish black. The belly is white, while P. pollachius has a distinctly crooked lateral line, grayish to golden belly, and a dark brown back. P. pollachius also has a strong underbite. It can be found in water up to deep over rocks and anywhere in the water column.
As food
Atlantic pollock is largely considered to be a whitefish. Traditionally a popular food fish in countries such as Norway, in the United Kingdom it was long consumed mainly as a cheaper and versatile alternative to cod and haddock. In recent years, however, pollock has become more popular due to overfishing of cod and haddock, and it can now be found in most supermarkets as fresh fillets or prepared freezer items. It is used minced in fish fingers, as an ingredient in imitation crab meat, and to make fish and chips.
Because of its slightly grey colour, pollock is often prepared, as in Norway, as fried fish balls, or if juvenile-sized, breaded with oatmeal and fried, as in Shetland. Year-old fish are traditionally split, salted, and dried over a peat hearth in Orkney, where their texture becomes wooden. Coalfish can also be salted and smoked and achieve a salmon-like orange color (although it is not closely related to the salmon), as is the case in Germany, where the fish is commonly sold as Seelachs or sea salmon.
In 2009, UK supermarket Sainsbury's briefly renamed Atlantic pollock "colin" in a bid to boost ecofriendly sales of the fish as an alternative to cod. Sainsbury's, which said the new name was derived from the French for cooked pollock (colin), launched the product under the banner "Colin and chips can save British cod."
Pollock is regarded as a "low-mercury fish" – a woman weighing can safely eat up to per week, and a child weighing can safely eat up to .
Other fish called pollock
One member of the genus Gadus is also commonly referred to as pollock: the Alaska pollock or walleye pollock (Gadus chalcogrammus), including the form known as the Norway pollock. They are also members of the family Gadidae but not members of the genus Pollachius.
| Biology and health sciences | Acanthomorpha | Animals |
47588 | https://en.wikipedia.org/wiki/Neurosis | Neurosis | Neurosis (plural: neuroses) is a term mainly used today by followers of Freudian thinking to describe mental disorders caused by past anxiety, often that has been repressed. In recent history, the term has been used to refer to anxiety-related conditions more generally.
The term "neurosis" is no longer used in condition names or categories by the World Health Organization's International Classification of Diseases (ICD) or the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM). According to the American Heritage Medical Dictionary of 2007, the term is "no longer used in psychiatric diagnosis".
Neurosis is distinguished from psychosis, which refers to a loss of touch with reality. Its descendant term, neuroticism, refers to a personality trait of being prone to anxiousness and mental collapse. The term "neuroticism" is also no longer used for DSM or ICD conditions; however, it is a common name for one of the Big Five personality traits. A similar concept is included in the ICD-11 as the condition "negative affectivity".
History
A broad condition (1769–1879)
The term neurosis was coined by Scottish doctor William Cullen to refer to "disorders of sense and motion" caused by a "general affection of the nervous system". The term is derived from the Greek word neuron (νεῦρον, 'nerve') and the suffix -osis (-ωσις, 'diseased' or 'abnormal condition'). It was first used in print in Cullen's System of Nosology, first published in Latin in 1769.
Cullen used the term to describe various nervous disorders and symptoms that could not be explained physiologically. Physical features, however, were almost inevitably present, and physical diagnostic tests, such as exaggerated knee-jerks, loss of the gag reflex and dermatographia, were used into the 20th century.
French psychiatrist Philippe Pinel's Nosographie philosophique ou La méthode de l'analyse appliquée à la médecine (1798) was greatly inspired by Cullen. It divided medical conditions into five categories, with one being "neurosis". This was divided into four basic types of mental disorder: melancholia, mania, dementia, and idiotism.
Morphine was first isolated from opium in 1805 by German chemist Friedrich Sertürner. After the publication of his third paper on the topic in 1817, morphine became more widely known and was used to treat neuroses and other kinds of mental distress. Sertürner himself became addicted to the substance and later warned: "I consider it my duty to attract attention to the terrible effects of this new substance I called morphium in order that calamity may be averted."
German psychologist Johann Friedrich Herbart used the term repression in 1824, in a discussion of unconscious ideas competing to get into consciousness.
The tranquilising properties of potassium bromide were noted publicly by British doctor Charles Locock in 1857. Over the coming decades, this and other bromides were used in great quantities to calm people with neuroses. This led to many cases of bromism.
French psychiatrist Henri Legrand du Saulle used exposure therapy to treat phobias.
American doctor Weir Mitchell first published an account of his rest cure for non-psychotic mental disorders in 1875. His 1877 book "Fat and Blood: and how to make them" gave a fuller explanation. The cure originally involved women being isolated in bed, only communicating with a nurse trained to talk about unchallenging topics, a fattening diet of milk, plus massage and the application of electricity. Eventually, the cure advocated by the Mitchell family had less strict isolation and diet, and was followed by men as well as women. "Fat and Blood" was revised and reprinted for many decades.
Breuer, Freud and contemporaries (1880-1939)
Austrian physician Josef Breuer first used psychoanalysis to treat hysteria in 1880–1882. Bertha Pappenheim was treated for a variety of symptoms that began when her father suddenly fell seriously ill in mid-1880 during a family holiday in Ischl. His illness was a turning point in her life. While sitting up at night at his sickbed she was suddenly tormented by hallucinations and a state of anxiety. At first the family did not react to these symptoms, but in November 1880, Breuer, a friend of the family, began to treat her. He encouraged her, sometimes under light hypnosis, to narrate stories, which led to partial improvement of the clinical picture, although her overall condition continued to deteriorate.
According to Breuer, the slow and laborious progress of her "remembering work" in which she recalled individual symptoms after they had occurred, thus "dissolving" them, came to a conclusion on 7 June 1882 after she had reconstructed the first night of hallucinations in Ischl. "She has fully recovered since that time" were the words with which Breuer concluded his case report. Accounts differ on the success of Pappenheim's treatment by Breuer. She did not speak about this episode in her later life, and vehemently opposed any attempts at psychoanalytic treatment of people in her care. Breuer was not quick to publish about this case.
(Subsequent research has suggested Pappenheim may have had one of a number of neurological illnesses, including temporal lobe epilepsy, tuberculous meningitis, and encephalitis. Whatever the nature of her condition, she went on to run an orphanage, and then founded and led the Jüdischer Frauenbund (League of Jewish Women) for twenty years.)
The term psychoneurosis was coined by Scottish psychiatrist Thomas Clouston for his 1883 book Clinical Lectures on Mental Diseases. It describes a condition covering what is today considered the schizophrenia and autism spectrums (a combination of symptoms that would soon become better known as dementia praecox).
French neurologist Jean-Martin Charcot came to believe that psychological trauma was a cause of some cases of hysteria. He wrote in his book Leçons sur les maladies du système nerveux (1885–1887), published in English as Clinical Lectures on the Diseases of the Nervous System:

Quite recently male hysteria has been studied by Messrs. Putnam [1884] and Walton [1883] in America, principally as it occurs after injuries, and especially after railway accidents. They have recognised, like Mr. Page, [1885] who in England has also paid attention to this subject, that many of those nervous accidents described under the name of Railway-spine, and which according to them would be better described as Railway-brain, are in fact, whether occurring in man or woman, simply manifestations of hysteria.

Charcot documented around two dozen cases where psychological trauma appears to have caused hysteria. In some cases, the results are described like the modern concept of PTSD.
Austrian psychiatrist Sigmund Freud was a student of Charcot in 1885–6. In 1893 Freud credited Charcot with being the source of "all the modern advances made in the understanding and knowledge of hysteria."
French psychiatrist Pierre Janet released his book L'automatisme psychologique (Psychological automatism) in 1889, its third chapter detailing his understanding of hypnosis and the unconscious. At this time, he claimed that the main aspect of psychological trauma is dissociation (a disconnection of the conscious mind from reality). (Freud would later claim Janet as a major influence.)
In 1891, Thomas Clouston published Neuroses of Development, which covered a wide range of physical and mental developmental conditions.

Breuer came to mentor Freud. The pair released the paper "Ueber den psychischen Mechanismus hysterischer Phänomene. (Vorläufige Mittheilung.)" (known in English as "On the physical mechanism of hysterical phenomena: preliminary communication") in January 1893. It opens with:

A chance observation has led us, over a number of years, to investigate a great variety of different forms and symptoms of hysteria, with a view to discovering their precipitating cause: the event which provoked the first occurrence, often many years earlier, of the phenomenon in question. In the great majority of cases it is not possible to establish the point of origin by a simple interrogation of the patient, however thoroughly it may be carried out. This is in part because what is in question is often some experience which the patient dislikes discussing; but principally because he is genuinely unable to recollect it and often has no suspicion of the causal connection between the precipitating event and the pathological phenomenon. As a rule it is necessary to hypnotize the patient and to arouse his memories under hypnosis of the time at which the symptom made its first appearance; when this has been done, it becomes possible to demonstrate the connection in the clearest and most convincing fashion...

It is of course obvious that in cases of 'traumatic' hysteria what provokes the symptoms is the accident. The causal connection is equally evident in hysterical attacks when it is possible to gather from the patient's utterances that in each attack he is hallucinating the same event which provoked the first one. The situation is more obscure in the case of other phenomena.

Our experiences have shown us, however, that the most various symptoms, which are ostensibly spontaneous and, as one might say, idiopathic products of hysteria, are just as strictly related to the precipitating trauma as the phenomena to which we have just alluded and which exhibit the connection quite clearly.

This paper was reprinted and supplemented with case studies in the pair's 1895 book Studien über Hysterie (Studies on Hysteria). Of the book's five case studies, the most famous became that of Breuer's patient Bertha Pappenheim (given the pseudonym "Anna O."). This book established the field of psychoanalysis.
French neurologist Paul Oulmont was mentored by Charcot. In his 1894 book Thérapeutique des névroses (Therapy of neuroses), he lists the neuroses as being hysteria, neurasthenia, exophthalmic goitre, epilepsy, migraine, Sydenham's chorea, Parkinson's disease and tetany.
The fifth edition of German psychiatrist Emil Kraepelin's popular psychiatry textbook in 1896 gave "neuroses" a well-accepted definition:

In the following presentation we want to summarize a group of disease states as general neuroses, which are accompanied by more or less pronounced nervous dysfunctions. What is common to these manifestations of insanity is that we are constantly dealing with the morbid processing of vital stimuli; what they also have in common is the occurrence of more transitory, peculiar manifestations of illness, sometimes in the physical, sometimes in the psychic area. These attacks of fluctuations in mental balance are therefore not independent illnesses, but only the occasional increase in a persistent illness...

It seems useful to me, for the time being, to distinguish between two main forms of general neuroses, epileptic and hysterical insanity.

Pierre Janet published the two-volume work Névroses et Idées Fixes (Neuroses and Fixations) in 1898. According to Janet, neuroses could be usefully divided into hysterias and psychasthenias. Hysterias induced such symptoms as anaesthesia, visual field narrowing, paralyses, and unconscious acts. Psychasthenias involved the ability to adjust to one's surroundings, similar to the later concepts of adjustment disorder and executive functions.
Janet founded the French "Société de psychologie" in 1901. This later became the "Société française de psychologie", and continues today as France's main psychology body.
Barbiturates are a class of highly addictive sedative drugs. The first barbiturate, barbital, was synthesized in 1902 by German chemists Emil Fischer and Joseph von Mering and was first marketed as "Veronal" in 1904. The similar barbiturate phenobarbital was brought to market in 1912 under the name "Luminal". Barbiturates became popular drugs in many countries to reduce neurotic anxiety and displaced the use of bromides.
Janet published the book Les Obsessions et la Psychasthénie (The Obsessions and the Psychasthenias) in 1903. Janet followed this with the books The Major Symptoms of Hysteria in 1907, and Les Névroses (The Neuroses) in 1909.
According to Janet, one cause of neurosis is a traumatic event whose mental force is stronger than what the person can counter using their normal coping mechanisms.
The Swiss psychiatrist Paul Charles Dubois published the book Les psychonévroses et leur traitement moral in 1904, which was translated into English as "Psychic Treatment of Nervous Disorders (The Psychoneuroses and Their Moral Treatment)" in 1905. Dubois believed that neurosis could be successfully treated by listening carefully to patients and rationally convincing them of the truth, an approach he called "rational psychotherapy" and an early precursor of cognitive behavioural therapy. He also followed Weir Mitchell's rest cure, though with a broad fattening diet and other modifications.
Meanwhile, Freud developed a number of different theories of neurosis. The most impactful one was that it referred to mental disorders caused by the brain's defence against past psychological trauma. This redefined the general understanding and use of the word. It came to replace the concept of "hysteria".
He held the First Congress for Freudian Psychology in Salzburg in April 1908. Subsequent Congresses continue today.
Progressive muscle relaxation (PMR) was first developed by American psychiatrist and physiologist Edmund Jacobson. This began at Harvard University in 1908. PMR involves learning to relieve the tension in specific muscle groups by first tensing and then relaxing each muscle group. When the muscle tension is released, attention is directed towards the differences felt during tension and relaxation so that the patient learns to recognize the contrast between the states. This reduces anxiety and the effect of phobias.
Freud published the detailed case study "Bemerkungen über einen Fall von Zwangsneurose" ( | Biology and health sciences | Mental disorder | null |
47592 | https://en.wikipedia.org/wiki/Waveform | Waveform | In electronics, acoustics, and related fields, the waveform of a signal is the shape of its graph as a function of time, independent of its time and magnitude scales and of any displacement in time. Periodic waveforms repeat regularly at a constant period. The term can also be used for non-periodic or aperiodic signals, like chirps and pulses.
In electronics, the term is usually applied to time-varying voltages, currents, or electromagnetic fields. In acoustics, it is usually applied to steady periodic sounds — variations of pressure in air or other media. In these cases, the waveform is an attribute that is independent of the frequency, amplitude, or phase shift of the signal.
The waveform of an electrical signal can be visualized in an oscilloscope or any other device that can capture and plot its value at various times, with suitable scales in the time and value axes. The electrocardiograph is a medical device to record the waveform of the electric signals that are associated with the beating of the heart; that waveform has important diagnostic value. Waveform generators, that can output a periodic voltage or current with one of several waveforms, are a common tool in electronics laboratories and workshops.
The waveform of a steady periodic sound affects its timbre. Synthesizers and modern keyboards can generate sounds with many complicated waveforms.
Common periodic waveforms
Simple examples of periodic waveforms include the following, where t is time, λ is wavelength, A is amplitude and φ is phase:
Sine wave: The amplitude of the waveform follows a trigonometric sine function with respect to time.
Square wave: This waveform is commonly used to represent digital information. A square wave of constant period contains odd harmonics that decrease at −6 dB/octave.
Triangle wave: It contains odd harmonics that decrease at −12 dB/octave.
Sawtooth wave: This looks like the teeth of a saw. Found often in time bases for display scanning. It is used as the starting point for subtractive synthesis, as a sawtooth wave of constant period contains odd and even harmonics that decrease at −6 dB/octave.
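For illustration, the four shapes above can be generated directly from a phase variable. This is a minimal NumPy sketch with our own function names; since these are time-domain signals, a period T stands in for the wavelength λ, and A and φ are the amplitude and phase from the list above. It is one straightforward parameterization, not a standard library API.

    import numpy as np

    def sine(t, T=1.0, A=1.0, phi=0.0):
        # A sin(2*pi*t/T + phi)
        return A * np.sin(2 * np.pi * t / T + phi)

    def square(t, T=1.0, A=1.0):
        # Sign of the sine: +A for the first half-period, -A for the second.
        return A * np.sign(np.sin(2 * np.pi * t / T))

    def triangle(t, T=1.0, A=1.0):
        # Linear rise and fall between -A and +A, peaking at t = T/4 like the sine.
        return (4 * A / T) * np.abs(((t - T / 4) % T) - T / 2) - A

    def sawtooth(t, T=1.0, A=1.0):
        # Ramps from -A to +A once per period, then drops instantly.
        return 2 * A * (t / T - np.floor(t / T + 0.5))

    t = np.linspace(0.0, 2.0, 1000)  # two periods at T = 1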
The Fourier series describes the decomposition of periodic waveforms, such that any periodic waveform can be formed by the sum of a (possibly infinite) set of fundamental and harmonic components. Finite-energy non-periodic waveforms can be analyzed into sinusoids by the Fourier transform.
Other periodic waveforms are often called composite waveforms and can often be described as a combination of a number of sinusoidal waves or other basis functions added together.
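The harmonic roll-offs quoted above can be checked numerically. In this sketch (under the same assumptions as before), a square wave is built as a sum of odd harmonics with amplitudes falling as 1/n, that is, −6 dB/octave; a truncated Fourier series approaches the ±1 plateaus away from the jumps.

    import numpy as np

    def fourier_square(t, T=1.0, n_terms=50):
        # Sum odd harmonics n = 1, 3, 5, ... with 1/n amplitudes.
        result = np.zeros_like(t)
        for n in range(1, 2 * n_terms, 2):
            result += np.sin(2 * np.pi * n * t / T) / n
        return (4 / np.pi) * result

    t = np.linspace(0.0, 1.0, 1000)
    approx = fourier_square(t)  # close to +1/-1 except near t = 0, T/2, T

Weighting the same odd harmonics by 1/n² with alternating signs yields the triangle wave instead, matching its steeper −12 dB/octave roll-off.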
| Physical sciences | Waves | Physics |
47607 | https://en.wikipedia.org/wiki/Suspension%20bridge | Suspension bridge | A suspension bridge is a type of bridge in which the deck is hung below suspension cables on vertical suspenders. The first modern examples of this type of bridge were built in the early 1800s. Simple suspension bridges, which lack vertical suspenders, have a long history in many mountainous parts of the world.
Besides the bridge type most commonly called suspension bridges, covered in this article, there are other types of suspension bridges. The type covered here has cables suspended between towers, with vertical suspender cables that transfer the live and dead loads of the deck below, upon which traffic crosses. This arrangement allows the deck to be level or to arc upward for additional clearance. Like other suspension bridge types, this type often is constructed without the use of falsework.
The suspension cables must be anchored at each end of the bridge, since any load applied to the bridge is transformed into tension in these main cables. The main cables continue beyond the pillars to deck-level supports, and further continue to connections with anchors in the ground. The roadway is supported by vertical suspender cables or rods, called hangers. In some circumstances, the towers may sit on a bluff or canyon edge where the road may proceed directly to the main span. Otherwise, the bridge will typically have two smaller spans, running between either pair of pillars and the highway, which may be supported by suspender cables or their own trusswork. In cases where trusswork supports the spans, there will be very little arc in the outboard main cables.
History
The earliest suspension bridges were ropes slung across a chasm, with a deck possibly at the same level or hung below the ropes such that the rope had a catenary shape.
Precursors
The Tibetan siddha and bridge-builder Thangtong Gyalpo originated the use of iron chains in his version of simple suspension bridges. In 1433, Gyalpo built eight bridges in eastern Bhutan. The last surviving chain-linked bridge of Gyalpo's was the Thangtong Gyalpo Bridge in Duksum en route to Trashi Yangtse, which was finally washed away in 2004. Gyalpo's iron chain bridges did not include a suspended-deck bridge, which is the standard on all modern suspension bridges today. Instead, both the railing and the walking layer of Gyalpo's bridges used wires. The stress points that carried the screed were reinforced by the iron chains. Before the use of iron chains it is thought that Gyalpo used ropes from twisted willows or yak skins. He may have also used tightly bound cloth.
The Inca used rope bridges, documented as early as 1615. It is not known when they were first made. Queshuachaca is considered the last remaining Inca rope bridge and is rebuilt annually.
Chain bridges
The first iron chain suspension bridge in the Western world was the Jacob's Creek Bridge (1801) in Westmoreland County, Pennsylvania, designed by inventor James Finley. Finley's bridge was the first to incorporate all of the necessary components of a modern suspension bridge, including a suspended deck which hung by trusses. Finley patented his design in 1808, and published it in the Philadelphia journal, The Port Folio, in 1810.
Early British chain bridges included the Dryburgh Abbey Bridge (1817) and the 137 m Union Bridge (1820), with spans rapidly increasing to 176 m with the Menai Bridge (1826), "the first important modern suspension bridge". The first chain bridge in the German-speaking territories was the Chain Bridge in Nuremberg. The Sagar Iron Suspension Bridge, with a 200-foot span (also termed Beose Bridge), was constructed near Sagar, India during 1828–1830 by Duncan Presgrave, Mint and Assay Master. The Clifton Suspension Bridge (designed in 1831, completed in 1864 with a 214 m central span) is similar to the Sagar bridge. It is one of the longest of the parabolic arc chain type. The current Marlow suspension bridge was designed by William Tierney Clark and was built between 1829 and 1832, replacing a wooden bridge further downstream which collapsed in 1828. It is the only suspension bridge across the non-tidal Thames. The Széchenyi Chain Bridge (designed in 1840, opened in 1849), spanning the River Danube in Budapest, was also designed by William Clark and is a larger-scale version of Marlow Bridge.
An interesting variation is Thornewill and Warham's Ferry Bridge in Burton-on-Trent, Staffordshire (1889), where the chains are not attached to abutments as is usual, but instead are attached to the main girders, which are thus in compression. Here, the chains are made from flat wrought iron plates, eight inches (203 mm) wide by an inch and a half (38 mm) thick, riveted together.
Wire-cable
The first wire-cable suspension bridge was the Spider Bridge at Falls of Schuylkill (1816), a modest and temporary footbridge built following the collapse of James Finley's nearby Chain Bridge at Falls of Schuylkill (1808). The footbridge's span was 124 m, although its deck was only 0.45 m wide.
Development of wire-cable suspension bridges dates to the temporary simple suspension bridge at Annonay built by Marc Seguin and his brothers in 1822. It spanned only 18 m. The first permanent wire cable suspension bridge was Guillaume Henri Dufour's Saint Antoine Bridge in Geneva of 1823, with two 40 m spans. The first with cables assembled in mid-air in the modern method was Joseph Chaley's Grand Pont Suspendu in Fribourg, in 1834.
In the United States, the first major wire-cable suspension bridge was the Wire Bridge at Fairmount in Philadelphia, Pennsylvania. Designed by Charles Ellet Jr. and completed in 1842, it had a span of 109 m. Ellet's Niagara Falls suspension bridge (1847–48) was abandoned before completion. It was used as scaffolding for John A. Roebling's double decker railroad and carriage bridge (1855).
The Otto Beit Bridge (1938–1939) was the first modern suspension bridge outside the United States built with parallel wire cables.
Structure
Bridge main components
The main components are two towers or pillars, two suspension cables, four suspension-cable anchors, multiple suspender cables, and the bridge deck.
Structural analysis
The main cables of a suspension bridge will form a catenary when hanging under their own weight only. When supporting the deck, the cables will instead form a parabola, assuming the weight of the cables is small compared to the weight of the deck. One can see the shape from the constant increase of the gradient of the cable with linear (deck) distance, this increase in gradient at each connection with the deck providing a net upward support force. Combined with the relatively simple constraints placed upon the actual deck, that makes the suspension bridge much simpler to design and analyze than a cable-stayed bridge in which the deck is in compression.
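The distinction between the two curves follows from a short force balance, sketched here in standard textbook notation rather than taken from the source: let H be the (constant) horizontal component of cable tension, w the deck load per unit horizontal length, and \mu g the cable's own weight per unit arc length. Balancing vertical forces on a cable element gives

    H \, y'' = w
    \quad\Rightarrow\quad
    y = \frac{w}{2H}\,x^{2}
    \qquad \text{(uniform deck load: parabola)}

    H \, y'' = \mu g \sqrt{1 + (y')^{2}}
    \quad\Rightarrow\quad
    y = \frac{H}{\mu g}\cosh\!\left(\frac{\mu g}{H}\,x\right)
    \qquad \text{(self-weight only: catenary)}

In the first case every horizontal slice of deck adds the same vertical load, so the cable's gradient grows linearly with horizontal distance, which is exactly the "constant increase of the gradient" described above.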
Comparison with cable-stayed bridge
Cable-stayed bridges and suspension bridges may appear to be similar, but are quite different in principle and in their construction.
In suspension bridges, large main cables (normally two) hang between the towers and are anchored at each end to the ground. The main cables, which are free to move on bearings in the towers, bear the load of the bridge deck. Before the deck is installed, the cables are under tension from their own weight. Along the main cables smaller cables or rods connect to the bridge deck, which is lifted in sections. As this is done, the tension in the cables increases, as it does with the live load of traffic crossing the bridge. The tension on the main cables is transferred to the ground at the anchorages and by downwards compression on the towers.
In cable-stayed bridges, the towers are the primary load-bearing structures that transmit the bridge loads to the ground. A cantilever approach is often used to support the bridge deck near the towers, but lengths further from them are supported by cables running directly to the towers. By design, all static horizontal forces of the cable-stayed bridge are balanced so that the supporting towers do not tend to tilt or slide and so must only resist horizontal forces from the live loads.
Advantages
Longer main spans are achievable than with any other type of bridge.
Less material may be required than other bridge types, even at spans they can achieve, leading to a reduced construction cost.
Except for installation of the initial temporary cables, little or no access from below is required during construction and so a waterway can remain open while the bridge is built above.
They may be better able to withstand earthquake movements than heavier and more rigid bridges.
Bridge decks can have deck sections replaced in order to widen traffic lanes for larger vehicles or add additional width for separated cycling/pedestrian paths.
Disadvantages
Considerable stiffness or aerodynamic profiling may be required to prevent the bridge deck from vibrating under high winds.
The relatively low deck stiffness compared to other (non-suspension) types of bridges makes it more difficult to carry heavy rail traffic in which high concentrated live loads occur.
Some access below may be required during construction to lift the initial cables or to lift deck units. That access can often be avoided in cable-stayed bridge construction.
Variations
Underspanned
In an underspanned suspension bridge, also called under-deck cable-stayed bridge, the main cables hang entirely below the bridge deck, but are still anchored into the ground in a similar way to the conventional type. Very few bridges of this nature have been built, as the deck is inherently less stable than when suspended below the cables. Examples include the Pont des Bergues of 1834 designed by Guillaume Henri Dufour; James Smith's Micklewood Bridge; and a proposal by Robert Stevenson for a bridge over the River Almond near Edinburgh.
Roebling's Delaware Aqueduct (begun 1847) consists of three sections supported by cables. The timber structure essentially hides the cables; and from a quick view, it is not immediately apparent that it is even a suspension bridge.
Suspension cable types
The main suspension cables in older bridges were often made from a chain or linked bars, but modern bridge cables are made from multiple strands of wire. This not only adds strength but improves reliability (often called redundancy in engineering terms), because the failure of a few flawed strands among the hundreds used poses very little threat of failure, whereas a single bad link or eyebar can cause failure of an entire bridge. (The failure of a single eyebar was found to be the cause of the collapse of the Silver Bridge over the Ohio River.) Another reason is that as spans increased, engineers were unable to lift larger chains into position, whereas wire strand cables can be spun one by one in mid-air from a temporary walkway.
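The redundancy argument can be made concrete with a toy reliability model; the numbers below are illustrative assumptions, not engineering data. A chain of eyebars is a series system that fails if any one link fails, while a wire bundle is a parallel system that merely loses the strength of the broken wires.

    # Toy reliability comparison (illustrative numbers only).
    p = 1e-4       # assumed chance that any single element is fatally flawed
    links = 100    # eyebar links in a chain: a series system
    wires = 10000  # wires in a main cable: a parallel system

    # The chain fails if ANY of its links fails.
    p_chain_fails = 1 - (1 - p) ** links       # about 1%

    # The cable only loses the fraction of wires that are flawed.
    expected_flawed = wires * p                # about 1 wire in 10,000
    strength_loss_fraction = expected_flawed / wires  # = p, about 0.01%

    print(p_chain_fails, strength_loss_fraction)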
Suspender-cable terminations
Poured sockets are used to make a high strength, permanent cable termination. They are created by inserting the suspender wire rope (at the bridge deck supports) into the narrow end of a conical cavity which is oriented in-line with the intended direction of strain. The individual wires are splayed out inside the cone or 'capel', and the cone is then filled with molten lead-antimony-tin (Pb80Sb15Sn5) solder.
Deck structure types
Most suspension bridges have open truss structures to support the roadbed, particularly owing to the unfavorable effects of using plate girders, discovered from the 1940 collapse of the Tacoma Narrows Bridge. In the 1960s, developments in bridge aerodynamics allowed the re-introduction of plate structures as shallow box girders, first seen on the Severn Bridge, built 1961–1966. In the picture of the Yichang Bridge, note the very sharp entry edge and sloping undergirders in the suspension bridge shown. This enables this type of construction to be used without the danger of vortex shedding and consequent aeroelastic effects, such as those that destroyed the original Tacoma Narrows Bridge.
Forces
Three kinds of forces operate on any bridge: the dead load, the live load, and the dynamic load. Dead load refers to the weight of the bridge itself. Like any other structure, a bridge has a tendency to collapse simply because of the gravitational forces acting on the materials of which the bridge is made. Live load refers to traffic that moves across the bridge as well as normal environmental factors such as changes in temperature, precipitation, and winds. Dynamic load refers to environmental factors that go beyond normal weather conditions, factors such as sudden gusts of wind and earthquakes. All three factors must be taken into consideration when building a bridge.
Use other than road and rail
The principles of suspension used on a large scale also appear in contexts less dramatic than road or rail bridges. Light cable suspension may prove less expensive and seem more elegant for a cycle or footbridge than strong girder supports. An example of this is the Nescio Bridge in the Netherlands, and the Roebling-designed 1904 Riegelsville suspension pedestrian bridge across the Delaware River in Pennsylvania. The longest pedestrian suspension bridge, which spans the River Paiva in the Arouca Geopark, Portugal, opened in April 2021. The 516-metre bridge hangs 175 metres above the river.
Where such a bridge spans a gap between two buildings, there is no need to construct towers, as the buildings can anchor the cables. Cable suspension may also be augmented by the inherent stiffness of a structure that has much in common with a tubular bridge.
Construction sequence (wire strand cable type)
Typical suspension bridges are constructed using a sequence generally described as follows. Depending on length and size, construction may take anywhere from a year and a half (construction on the original Tacoma Narrows Bridge took only 19 months) to more than a decade (the Akashi Kaikyō Bridge's construction began in May 1986 and the bridge opened in May 1998, a total of twelve years).
Where the towers are founded on underwater piers, caissons are sunk and any soft bottom is excavated for a foundation. If the bedrock is too deep to be exposed by excavation or the sinking of a caisson, pilings are driven to the bedrock or into overlying hard soil, or a large concrete pad to distribute the weight over less resistant soil may be constructed, first preparing the surface with a bed of compacted gravel. (Such a pad footing can also accommodate the movements of an active fault, and this has been implemented on the foundations of the cable-stayed Rio-Antirio bridge.) The piers are then extended above water level, where they are capped with pedestal bases for the towers.
Where the towers are founded on dry land, deep foundation excavation or pilings are used.
From the tower foundation, towers of single or multiple columns are erected using high-strength reinforced concrete, stonework, or steel. Concrete is used most frequently in modern suspension bridge construction due to the high cost of steel.
Large devices called saddles, which will carry the main suspension cables, are positioned atop the towers. Typically of cast steel, they can also be manufactured using riveted forms, and are equipped with rollers to allow the main cables to shift under construction and normal loads.
Anchorages are constructed, usually in tandem with the towers, to resist the tension of the cables and form as the main anchor system for the entire structure. These are usually anchored in good quality rock but may consist of massive reinforced concrete deadweights within an excavation. The anchorage structure will have multiple protruding open eyebolts enclosed within a secure space.
Temporary suspended walkways, called catwalks, are then erected using a set of guide wires hoisted into place via winches positioned atop the towers. These catwalks follow the curve set by bridge designers for the main cables, in a path mathematically described as a catenary arc. Typical catwalks are usually between eight and ten feet wide and are constructed using wire grate and wood slats.
Gantries are placed upon the catwalks, which will support the main cable spinning reels. Then, cables attached to winches are installed, and in turn, the main cable spinning devices are installed.
High-strength wire (typically 4 or 6 gauge galvanized steel wire) is pulled in a loop by pulleys on the traveler, with one end affixed at an anchorage. When the traveler reaches the opposite anchorage the loop is placed over an open anchor eyebar. Along the catwalk, workers also pull the cable wires to their desired tension. This continues until a bundle, called a "cable strand", is completed and temporarily bundled using stainless steel wire. This process is repeated until the final cable strand is completed. Workers then remove the individual wraps on the cable strands (during the spinning process, the shape of the main cable closely resembles a hexagon), and the entire cable is then compressed by a traveling hydraulic press into a closely packed cylinder and tightly wrapped with additional wire to form the final circular cross-section. The wire used in suspension bridge construction is a galvanized steel wire that has been coated with corrosion inhibitors.
At specific points along the main cable (each being the exact distance horizontally in relation to the next), devices called "cable bands" are installed to carry steel wire ropes called suspender cables. Each suspender cable is engineered and cut to a precise length and looped over the cable bands. In some bridges, where the towers are close to or on the shore, the suspender cables may be applied only to the central span. Early suspender cables were fitted with zinc jewels and a set of steel washers, which formed the support for the deck. Modern suspender cables carry a shackle-type fitting.
Special lifting hoists attached to the suspenders or from the main cables are used to lift prefabricated sections of the bridge deck to the proper level, provided that the local conditions allow the sections to be carried below the bridge by barge or other means. Otherwise, a traveling cantilever derrick may be used to extend the deck one section at a time starting from the towers and working outward. If the addition of the deck structure extends from the towers the finished portions of the deck will pitch upward rather sharply, as there is no downward force in the center of the span. Upon completion of the deck, the added load will pull the main cables into an arc mathematically described as a parabola, while the arc of the deck will be as the designer intended – usually a gentle upward arc for added clearance if over a shipping channel, or flat in other cases such as a span over a canyon. Arched suspension spans also give the structure more rigidity and strength.
With the completion of the primary structure, various details such as lighting, handrails, finish painting and paving are installed or completed.
Longest spans
Suspension bridges are typically ranked by the length of their main span. These are the ten bridges with the longest spans, followed by the length of the span and the year the bridge opened for traffic:
Other examples
(Chronological)
Union Bridge (England/Scotland, 1820), the longest span (137 m) from 1820 to 1826. The oldest suspension bridge in the world still carrying road traffic.
Roebling's Delaware Aqueduct (USA, 1847), the oldest wire suspension bridge still in service in the United States.
John A. Roebling Suspension Bridge (USA, 1866), then the longest wire suspension bridge in the world at 1,057 feet (322 m) main span.
Brooklyn Bridge (USA, 1883), the first steel-wire suspension bridge.
Bear Mountain Bridge (USA, 1924), the longest suspension span (497 m) from 1924 to 1926. The first suspension bridge to have a concrete deck. The construction methods pioneered in building it would make possible several much larger projects to follow.
Benjamin Franklin Bridge (USA, 1926), replaced Bear Mountain Bridge as the longest span at 1,750 feet between the towers. Includes an active subway line and never-used trolley stations on the span.
San Francisco–Oakland Bay Bridge (USA, 1936; eastern span replaced in 2013). The replacement eastern portion is a self-anchored suspension bridge, the longest of its type in the world; it took the place of a cantilever bridge.
Golden Gate Bridge (USA, 1937), the longest suspension bridge from 1937 to 1964. It was also the world's tallest bridge from 1937 to 1993, and remains the tallest bridge in the United States.
Mackinac Bridge (USA, 1957), the longest suspension bridge between anchorages in the Western hemisphere.
Si Du River Bridge (China, 2009), the highest bridge in the world, with its deck around 500 meters above the surface of the river.
Rod El Farag Axis Bridge (Egypt, 2019), a modern Egyptian steel wire-cable suspension bridge crossing the River Nile. It holds the Guinness World Record for the widest suspension bridge in the world, at 67.3 metres, and has a span of 540 metres.
Notable collapses
Broughton Suspension Bridge (England) was an iron chain bridge built in 1826. One of Europe's first suspension bridges, it collapsed in 1831 due to mechanical resonance induced by troops marching in step. As a result of the incident, the British Army issued an order that troops should "break step" when crossing a bridge.
Silver Bridge (USA) was an eyebar chain highway bridge, built in 1928, that collapsed in late 1967, killing forty-six people. The bridge had a low-redundancy design that was difficult to inspect. The collapse inspired legislation to ensure that older bridges were regularly inspected and maintained. Following the collapse a bridge of similar design was immediately closed and eventually demolished. A second similarly-designed bridge had been built with a higher margin of safety and remained in service until 1991.
The Tacoma Narrows Bridge (USA, 1940) was vulnerable to structural vibration in sustained and moderately strong winds due to its plate-girder deck structure. Wind caused a phenomenon called aeroelastic flutter that led to its collapse only months after completion. The collapse was captured on film. There were no human deaths in the collapse; several drivers escaped their cars on foot and reached the anchorages before the span dropped.
Yarmouth suspension bridge (England) was built in 1829 and collapsed in 1845, killing 79 people.
Peace River Suspension Bridge (Canada), completed in 1943, collapsed in October 1957 after the soil supporting the north anchor failed.
Kutai Kartanegara Bridge (Indonesia) over the Mahakam River, in Kutai Kartanegara Regency, East Kalimantan province, on the Indonesian island of Borneo, was begun in 1995, completed in 2001, and collapsed in 2011. Dozens of vehicles on the bridge fell into the Mahakam River; 24 people died, 31 were seriously injured, 8 sustained minor injuries, and 12 were reported missing, with the injured treated at the Aji Muhammad Parikesit Regional Hospital. Research findings indicate that the collapse was largely caused by the construction failure of the vertical hanger clamps; poor maintenance, fatigue in the cable-hanger materials, material quality, and bridge loads exceeding capacity may also have contributed. The bridge was rebuilt at the same location beginning in 2013 and completed in 2015 with a through arch design.
On 30 October 2022, Jhulto Pul, a pedestrian suspension bridge over the Machchhu River in the city of Morbi, Gujarat, India collapsed, leading to the deaths of at least 141 people.
| Technology | Transport infrastructure | null |
47610 | https://en.wikipedia.org/wiki/Great%20Belt%20Bridge | Great Belt Bridge | The Great Belt Bridge () or Great Belt fixed link () is a multi-element fixed link crossing the Great Belt strait between the Danish islands of Zealand and Funen. It consists of a road suspension bridge and a railway tunnel between Zealand and the small island Sprogø in the middle of the Great Belt, and a box-girder bridge for both road and rail traffic between Sprogø and Funen. The total length is .
The term Great Belt Bridge commonly refers to the suspension bridge, although it may also be used to mean the box-girder bridge or the link in its entirety. Officially named the East Bridge, the suspension bridge was designed by the Danish firms COWI and Ramboll, and the architecture firm Dissing+Weitling. It has the world's sixth-longest main span (). At the time of the opening of the bridge it was the second longest, beaten by the Akashi Kaikyō Bridge opened a few months previously.
Together with the New Little Belt Bridge, the Great Belt link provides a continuous road and rail connection between Copenhagen and the Danish mainland. The link replaced the Great Belt ferry service, which had been the primary means of crossing the Great Belt. After more than 50 years of debate, the Danish government decided in 1986 to construct a link; it opened to rail traffic in 1997 and to road traffic in 1998. At an estimated cost of DKK 21.4 billion (EUR 2.8 billion) (1988 prices), the link is the largest construction project in Danish history. It has reduced travel times significantly; a crossing that previously took one hour by ferry now takes ten minutes. This link, together with the Øresund Bridge (built 1995–1999) and the Little Belt Bridge, has enabled driving from mainland Europe to Sweden through Denmark.
Operation and maintenance are performed by A/S Storebælt under Sund & Bælt. Construction and maintenance are financed by tolls on vehicles and trains. Cyclists are not permitted to use the bridge, but bicycles may be transported by train or bus.
History
The Great Belt ferries entered service between the coastal towns of Korsør and Nyborg in 1883, connecting the railway lines on either side of the Belt. In 1957, road traffic was moved to the Halsskov–Knudshoved route, about 1.5 kilometres to the north and close to the fixed link.
Construction drafts for a fixed link were presented as early as the 1850s, with several suggestions appearing in the following decades. The Danish State Railways, responsible for the ferry service, presented plans for a bridge in 1934. Cost estimates for bridges over the Øresund (152 million DKK) and the Storebælt (257 million DKK) were drawn up around 1936. In 1948, the Ministry for Public Works (now the Ministry of Transport) established a commission to investigate the implications of a fixed link.
The first law concerning a fixed link was enacted in 1973, but the project was put on hold in 1978 as the Venstre (Liberal) party demanded postponing public spending. Political agreement to restart work was reached in 1986, with a construction law () being passed in 1987.
The design was carried out by the engineering firms COWI and Ramboll together with Dissing+Weitling architecture practice.
Construction of the link commenced in 1988. In 1991, Finland sued Denmark at the International Court of Justice, on the grounds that Finnish-built mobile offshore drilling units would be unable to pass beneath the bridge. The two countries negotiated a financial compensation of 90 million Danish kroner, and Finland withdrew the lawsuit in 1992.
A European Court of Justice ruling in 1993 found that a contractual condition requiring use of local labour and local materials in constructing the bridge was incompatible with the principles of the EEC Treaty.
The link is estimated to create a value of 379 billion DKK over its first 50 years of use.
In 2022, the bridge was crossed as part of the route of Stage 2 of the 2022 Tour de France.
Construction
The construction of the fixed link became the biggest building project in the history of Denmark. In order to connect Halsskov on Zealand with Knudshoved on Funen, to its west, a two-track railway and a four-lane motorway had to be built, via the small island of Sprogø in the middle of the Great Belt. The project comprised three different tasks: the East Bridge for road transport, the East Tunnel for rail transport and the West Bridge for road and rail transport combined. The construction work was carried out by Sundlink Contractors, a consortium of Skanska, Hochtief, Højgaard & Schultz (which built the West Bridge) and Monberg & Thorsen (which built the section under the Great Belt). The work of lifting and placing the elements was carried out by Ballast Nedam using a floating crane.
East Bridge
Built between 1991 and 1998 at a cost of US$950 million, the East Bridge (Østbroen) is a suspension bridge between Halsskov and Sprogø. It is long, with a free span of . The East Bridge had been planned for completion in time to be the longest bridge in the world, but construction delays meant that the Akashi Kaikyō Bridge opened two months earlier.
The vertical clearance for ships is , meaning the world's largest cruise ship, an Oasis-class cruise ship, just fits under with its smokestack folded. At above sea level, the two pylons of the East Bridge are the highest points on self-supporting structures in Denmark. Some radio masts, such as Tommerup transmitter, are taller.
To keep the main cables tensioned, an anchorage structure on each side of the span is placed below the road deck. After 15 years in service, the cables showed no rust. They were scheduled for a 15 million DKK paint job, but because cables had corroded on other bridges, the decision was made instead to install a 70 million DKK sealed dehumidifying system in the cables. This was carried out by the UK engineering firm Spencer Group, with help from Danish subcontractors Davai, who provided the manpower, and Belvent A/S, who provided the dehumidification system.
Nineteen concrete pillars (12 on the Zealand side, seven by Sprogø), apart, carry the road deck outside the span.
West Bridge
The West Bridge (Vestbroen) is a box girder bridge between Sprogø and Knudshoved. It is long, and has a vertical clearance for ships of . It is actually two separate, adjacent bridges: the northern one carries rail traffic and the southern one road traffic. The pillars of the two bridges rest on common foundations below sea level. The West Bridge was built between 1988 and 1994; its road/rail deck comprises 63 sections, supported by 62 pillars.
East Tunnel
The twin bored tunnel tubes of the East Tunnel (Østtunnelen) are each long. There are 31 connecting tunnels between the two main tunnels, at intervals. The equipment that is necessary for train operation in the tunnels is installed in the connecting tunnels, which also serve as emergency escape routes.
There were delays and cost overruns in the tunnel construction. The plan was to open it in 1993, giving the trains a head start of three years over road traffic, but train traffic started in 1997 and road traffic in 1998. During construction the sea bed gave way and one of the tunnels was flooded. The water continued to rise and reached the end at Sprogø, where it continued into the (still dry) other tunnel. The water damaged two of the four tunnel boring machines, but no workers were injured. Only by placing a clay blanket on the sea bed was it possible to dry out the tunnels. The two damaged machines were repaired and the majority of the tunnelling was undertaken from the Sprogø side. The machines on the Zealand side tunnelled through difficult ground and made little progress. A major fire on one of the Zealand machines in June 1994 stopped these drives and the tunnels were completed by the two Sprogø machines.
A total of 320 compressed air workers were involved in 9,018 pressure exposures in the four tunnel-boring machines. The project had a decompression sickness incidence of 0.14% with two workers having long-term residual symptoms.
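As a rough consistency check of those figures (plain arithmetic, not an additional source), the quoted incidence corresponds to
$$9{,}018 \times 0.0014 \approx 12.6,$$
i.e. roughly 13 cases of decompression sickness among the recorded pressure exposures.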
Traffic implications
Prior to the opening of the link, an average of 8,000 cars used the ferries across the Great Belt every day. Traffic across the strait increased 127 percent over the first year after the link's opening due to the so-called traffic leap: new traffic generated by the greater convenience and lower price of crossing the Great Belt. A simple before-and-after estimate follows from those two figures, as shown below.
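Arithmetic only; the resulting figure is an inference, not a number given in this article:
$$8{,}000 \times (1 + 1.27) \approx 18{,}200 \ \text{vehicles per day after the first year.}$$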
In 2021, an average of 34,100 vehicles used the link each day. On August 7, 2022, a record 61,528 vehicles crossed the bridge in 24 hours. The increase in traffic is caused partly by general traffic growth and partly by the diversion of traffic from ferry routes and other services.
The fixed link has produced considerable savings in travel time between eastern and western Denmark. Previously, it took approximately 90 minutes on average to cross the Great Belt in a car with transfer by ferry, including the waiting time at the ports. It took considerably longer during peak periods, such as weekends and holidays. With the opening of the link, the journey is now between 10 and 15 minutes.
By train the time savings are significant as well. The journey has been reduced by 60 minutes, and there are many more seats available because more carriages may be added to a train that does not have to fit on a ferry. The seating capacity offered by DSB across the Great Belt on an ordinary Wednesday has risen from 11,060 seats to 37,490 seats. On Fridays the seating capacity exceeds 40,000 seats.
The shortest travel times are: Copenhagen–Odense 1 hour 15 minutes, Copenhagen–Aarhus 2 hours 30 minutes, Copenhagen–Aalborg 3 hours 55 minutes and Copenhagen–Esbjerg 2 hours 35 minutes.
Flights between Copenhagen and Odense, and between Copenhagen and Esbjerg have ceased, and the train now has the largest market share between Copenhagen and Aarhus.
Together with the Øresund Bridge, and the two Little Belt bridges, the link provides a direct fixed connection between western Continental Europe and northern Scandinavia, eventually connecting all parts of the European Union except Ireland, Malta, Cyprus, and outlying islands. Most people from Zealand still prefer to take the ferry between Puttgarden and Rødby, as it is a much shorter distance and provides a needed break for those travelling a long distance.
For freight trains, the fixed links are a large improvement between Sweden and Germany, and between Sweden and the UK. The Sweden-to-Germany ferry system is still used to some extent owing to limited rail capacity, with heavy passenger traffic over the bridges and some single track stretches in southern Denmark and northern Germany.
The Great Belt was used by now defunct night passenger trains between Copenhagen and Germany, which were too long to fit on the ferries. Day trains on the Copenhagen-Hamburg route first continued to use the Fehmarn Belt ferries, utilising short diesel trains, but now also use the Great Belt route, which potentially allows longer trains to be used, increasing capacity.
By 2028, the Fehmarn Belt Fixed Link is expected to be complete with much of the international traffic being shifted from the Great Belt Fixed Link. This more direct route will reduce the rail journey from Hamburg to Copenhagen from 4:45 to 3:30 hours.
Toll charge
In 2019, the vehicle tolls were:
Environmental effects
Environmental considerations have been an integral part of the project and were of decisive significance for the choice of alignment and the determination of the design. Great Belt A/S established an environmental monitoring programme in 1988 and initiated co-operation with authorities and external consultants on defining the environmental concerns during the construction work and the professional requirements of the monitoring programme. This co-operation resulted in a report, published at the beginning of 1997, on the state of the environment in the Great Belt. The conclusion of the report was that the marine environment was at least as good as before construction work began.
With regard to water flow, the link must comply with the so-called zero solution. This has been achieved by deepening parts of the Great Belt so that the cross section available to the water flow has been increased. This excavation compensates for the blocking effect caused by the bridge pylons and approach ramps. The conclusion of the report is that water flows are now almost at the level they were before the bridge was built.
The fixed link has generated increased road traffic volume, which has meant increased air pollution. However, there have been significant savings in energy consumption from the switch from ferries to the fixed link. Train and car ferries consume much energy for propulsion, high-speed ferries consume large amounts of energy at high speeds, and air transport is highly energy-consuming. Domestic air travel over the Great Belt was greatly reduced after the opening of the bridge, with the former air travellers now using trains and private cars.
The larger energy consumption by ferries as opposed to via the fixed link is most clearly seen when comparing short driving distances from areas immediately east or west of the link. For more extended driving distances the difference in energy consumption is smaller, but any transport within Denmark across the link shows very clear energy savings.
During 2009, seven large wind turbines, reportedly Vestas 3 MW machines totalling 21 MW of capacity, were erected in the sea north of Sprogø to contribute to the electrical demand of the Great Belt link. Their hub heights are at about the same level as the road deck of the suspension bridge. Part of the project's purpose was to showcase offshore wind power at the December 2009 Copenhagen climate meeting.
Accidents
During construction 479 work-related accidents were reported, of which 53 resulted in serious injuries or death. Seven workers died as a result of work-related accidents.
The West Bridge has been struck by sea traffic twice. While the link was still under construction on 14 September 1993, the ferry M/F Romsø drifted off course in bad weather and hit the West Bridge. At 19:17 on 3 March 2005, the 3,500-ton freighter MV Karen Danielsen crashed into the West Bridge 800 metres from Funen. All traffic across the bridge was halted, effectively cutting Denmark in two. The bridge was re-opened shortly after midnight, after the freighter was pulled free and inspectors had found no structural damage to the bridge.
The East Bridge has so far escaped collision, although on 16 May 2001 the bridge was closed for 10 minutes as the Cambodian 27,000-ton bulk carrier Bella headed straight for one of the anchorage structures. The ship was deflected by a swift response from the navy.
On 5 June 2006, a maintenance vehicle burst into flames in the east-bound railway tunnel at about 21:30. Nobody was hurt; its crew of three fled to the other tunnel and escaped. The fire was put out shortly before midnight, and the vehicle was removed from the tunnel the next day. Train service resumed on 6 June at reduced speed, and normal service was restored on 12 June.
On 2 January 2019, eight people were killed in a train accident on the West Bridge. A passenger train was hit by a semi-trailer that fell off a freight train travelling in the opposite direction.
In 2023, a 57-year-old truck driver was arrested after traffic on the bridge was disrupted by spilled potatoes. Police stated that they were investigating whether the potatoes had been spilled deliberately or by accident.
Operations
In 2009, a study characterized the rail tunnel (together with other major projects like the Channel Tunnel between England and France) as financially non-viable.
| Technology | Bridges | null |
47612 | https://en.wikipedia.org/wiki/Hexactinellid | Hexactinellid | Hexactinellid sponges are sponges with a skeleton made of four- and/or six-pointed siliceous spicules, often referred to as glass sponges. They are usually classified along with other sponges in the phylum Porifera, but some researchers consider them sufficiently distinct to deserve their own phylum, Symplasma. Some experts believe glass sponges are the longest-lived animals on earth; these scientists tentatively estimate a maximum age of up to 15,000 years.
Biology
Glass sponges are relatively uncommon and are mostly found at depths from below sea level. Although the species Oopsacas minuta has been found in shallow water, others have been found much deeper. They are found in all oceans of the world, although they are particularly common in Antarctic and Northern Pacific waters.
They are more-or-less cup-shaped animals, ranging from in height, with sturdy skeletons made of glass-like silica spicules, fused to form a lattice. In some glass sponges, such as members of the genus Euplectella, these structures are aided by a protein called glassin, which helps accelerate the production of silica from the silicic acid absorbed from the surrounding seawater. The body is relatively symmetrical, with a large central cavity that, in many species, opens to the outside through a sieve formed from the skeleton. Some species of glass sponges are capable of fusing together to create reefs or bioherms. They are generally pale in colour, ranging from white to orange.
Much of the body is composed of syncytial tissue: extensive regions of multinucleate cytoplasm. The epidermal cells characteristic of other sponges are absent, being replaced by a syncytial net of amoebocytes, through which the spicules penetrate. Unlike other sponges, they do not possess the ability to contract.
Their body comprises three parts: the inner and outer peripheral trabecular networks, and the choanosome, which is used for feeding. The choanosome acts as the sponge's mouth, while the inner and outer canals that meet at it serve as passages for food, forming the sponge's feeding path.
Hexactinellids can grow to a range of sizes, with the average maximum length estimated at around 32 centimeters; some grow past that and continue extending up to 1 meter long. The estimated life expectancy for hexactinellids that reach around 1 meter is approximately 200 years (Plyes).
Glass sponges possess a unique system for rapidly conducting electrical impulses across their bodies, making it possible for them to respond quickly to external stimuli. In the case of Rhabdocalyptus dawsoni, the sponge uses electrical signalling to detect outside stimuli, such as suspended sediment, and then sends a signal through its body that halts active feeding. A second glass sponge species studied in the same experiments showed that the electrical conduction system in this class of sponges has a species-specific threshold for how much outside stimulus, such as sediment, it can endure before feeding stops. Species like "Venus' flower basket" have a tuft of fibers that extends outward like an inverted crown at the base of their skeleton. These fibers are long and about the thickness of a human hair.
Syncytia
Bodies of glass sponges differ from those of other sponges in various other ways. For example, most of their cytoplasm is not divided into separate cells by membranes but forms a syncytium, a continuous mass of cytoplasm with many nuclei (e.g., Reiswig and Mackie, 1983); it is held suspended like a cobweb on a scaffolding-like framework made of silica spicules. The remaining cells are connected to the syncytium by bridges of cytoplasmic "rivers" that transport nuclei, organelles ("organs" within cells) and other substances. Instead of choanocytes, these bridges carry further syncytia, known as choanosyncytia, which form bell-shaped chambers where water enters via perforations. The insides of these chambers are lined with "collar bodies", each consisting of a collar and flagellum but without a nucleus of its own. The motion of the flagella sucks water through passages in the "cobweb" and expels it via the open ends of the bell-shaped chambers.
Some types of cells have a single nucleus and membrane each but are connected to other single-nucleus cells and to the main syncytium by "bridges" made of cytoplasm. The sclerocytes that build spicules have multiple nuclei, and in glass sponge larvae they are connected to other tissues by cytoplasm bridges; such connections between sclerocytes have not so far been found in adults, but this may simply reflect the difficulty of investigating such small-scale features. The bridges are controlled by "plugged junctions" that apparently permit some substances to pass while blocking others.
This physiology allows a greater flow of ions and electrical signals throughout the organism, with around 75% of the sponge tissue fused in this way. Glass sponges also play a notable role in the nutrient cycles of deep-sea environments. One species, Vazella pourtalesii, hosts an abundance of symbiotic microbes that aid nitrification and denitrification in the communities where the sponges are present. These interactions help the sponges survive in the low-oxygen conditions of the depths.
Longevity
These creatures are long-lived, but the exact age is hard to measure; one study based on modelling gave an estimated age of a specimen of Scolymastra joubini as 23,000 years (with a range from 13,000 to 40,000 years). However, due to changes in sea levels since the Last Glacial Maximum, its maximum age is thought to be no more than 15,000 years, hence its listing of c. 15,000 years in the AnAge Database. The shallow-water occurrence of hexactinellids is rare worldwide. In the Antarctic, two species occur as shallow as 33 meters under the ice. In the Mediterranean, one species occurs as shallow as in a cave with deep-water upwelling (Boury-Esnault & Vacelet, 1994).
Reefs
The sponges form reefs (called sponge reefs) off the coasts of British Columbia, southeast Alaska and Washington state, which are studied in the Sponge Reef Project. The species Sarostegia oculata almost always hosts symbiotic zoanthids, which cause the hexactinellid sponge to imitate the appearance and structure of coral reefs. Only 33 species of this sponge had been reported in the South Atlantic before 2017, when the Shinkai 6500 submersible undertook an expedition through the Rio Grande Rise. Reefs discovered in Hecate Strait, British Columbia, have grown up to 7 kilometres long and 20 metres high. Prior to these discoveries, sponge reefs were thought to have died out in the Jurassic period.
Glass sponges have also been recorded on the wrecks of HMCS Saskatchewan and HMCS Cape Breton off the coast of Vancouver Island. Species of zoanthid that rely on hexactinellids have also been found off the coast of the Japanese island of Minami-Torishima. Unidentified zoanthids have also been found in Australian waters; if these prove to be the same species found off Minami-Torishima, it could indicate that the associated hexactinellids occur throughout the Pacific Ocean.
Conservation
Most hexactinellids live in deep waters that are not impacted by human activities; however, there are glass sponge reefs off the coast of British Columbia. The Canadian government designated 2,140 km² of Hecate Strait and Queen Charlotte Sound as a marine protected area, which contains four glass sponge reefs. The regulations prohibit bottom-contact fishing within 200 meters of the sponge reefs. Although human activities affect only a small portion of glass sponges, the sponges are still subject to the threat of climate change. Experiments using the species Aphrocallistes vastus have shown that increases in temperature and acidification can lead to weakened skeletal strength and stiffness. In 1995, an Antarctic ice shelf collapsed due to climate change; since then, studies of the area have shown that hexactinellid reefs have been increasing in size despite the changing climate.
Evolution and taxonomy
The earliest known hexactinellids are from the earliest Cambrian or latest Neoproterozoic; Helicolocellus is a possible hexactinellid relative from the late Ediacaran. They are fairly common as fossils relative to demosponges, but this is thought to be, at least in part, because their spicules are sturdier than spongin and fossilize better. Like almost all sponges, the hexactinellids draw water in through a series of small pores by the whip-like beating of a series of hairs or flagella in chambers which, in this group, line the sponge wall.
The class is divided into two subclasses and several orders:
Class Hexactinellida
Subclass Amphidiscophora
Order Amphidiscosida
Order †Hemidiscosa
Order †Reticulosa
Subclass Hexasterophora
Incertae sedis
Dactylocalycidae Gray, 1867
Order Lychniscosida
Order Lyssacinosida
Order Sceptrulophora
| Biology and health sciences | Porifera | Animals |
47628 | https://en.wikipedia.org/wiki/Bomb | Bomb | A bomb is an explosive weapon that uses the exothermic reaction of an explosive material to provide an extremely sudden and violent release of energy. Detonations inflict damage principally through ground- and atmosphere-transmitted mechanical stress, the impact and penetration of pressure-driven projectiles, pressure damage, and explosion-generated effects. Bombs have been utilized since the 11th century starting in East Asia.
The term bomb is not usually applied to explosive devices used for civilian purposes such as construction or mining, although the people using the devices may sometimes refer to them as a "bomb". The military use of the term "bomb", or more specifically aerial bomb action, typically refers to airdropped, unpowered explosive weapons most commonly used by air forces and naval aviation. Other military explosive weapons not classified as "bombs" include shells, depth charges (used in water), or land mines. In unconventional warfare, other names can refer to a range of offensive weaponry. For instance, in recent asymmetric conflicts, homemade bombs called "improvised explosive devices" (IEDs) have been employed by irregular forces to great effect.
The word comes from the Latin bombus, which in turn comes from the Greek βόμβος (romanized: bombos), an onomatopoetic term meaning 'booming' or 'buzzing'.
History
Gunpowder bombs are mentioned in sources from the 11th century onward. In 1000 AD, a soldier by the name of Tang Fu (唐福) demonstrated a design of gunpowder pots (a proto-bomb which spews fire) and gunpowder caltrops, for which he was richly rewarded. In the same year, Xu Dong wrote that trebuchets hurled bombs that were like "flying fire", suggesting that they were incendiaries. The military text Wujing Zongyao of 1044 mentions bombs such as the "ten-thousand fire flying sand magic bomb", "burning heaven fierce fire unstoppable bomb", and "thunderclap bomb" (pilipao). However, these were soft-shell bombs and did not use metal casings.
Bombs made of cast iron shells packed with explosive gunpowder date to 13th-century China. Explosive bombs were used in East Asia in 1221 by a Jurchen Jin army against a Chinese Song city. The term "thunder crash bomb" for this explosive weapon seems to have been coined during a Jin dynasty (1115–1234) naval battle in 1231 against the Mongols.
The History of Jin (金史) (compiled by 1345) states that in 1232, as the Mongol general Subutai (1176–1248) descended on the Jin stronghold of Kaifeng, the defenders had a "thunder crash bomb" which "consisted of gunpowder put into an iron container ... then when the fuse was lit (and the projectile shot off) there was a great explosion the noise whereof was like thunder, audible for more than thirty miles, and the vegetation was scorched and blasted by the heat over an area of more than half a mou. When hit, even iron armour was quite pierced through."
The Song Dynasty (960–1279) official Li Zengbo wrote in 1257 that arsenals should have several hundred thousand iron bomb shells available and that when he was in Jingzhou, about one to two thousand were produced each month for dispatch of ten to twenty thousand at a time to Xiangyang and Yingzhou. The Ming Dynasty text Huolongjing describes the use of poisonous gunpowder bombs, including the "wind-and-dust" bomb.
During the Mongol invasions of Japan, the Mongols used the explosive "thunder-crash bombs" against the Japanese. Archaeological evidence of the "thunder-crash bombs" has been discovered in an underwater shipwreck off the shore of Japan by the Kyushu Okinawa Society for Underwater Archaeology. X-rays by Japanese scientists of the excavated shells confirmed that they contained gunpowder.
Shock
Explosive shock waves can cause situations such as body displacement (i.e., people being thrown through the air), dismemberment, internal bleeding and ruptured eardrums.
Shock waves produced by explosive events have two distinct components, the positive and negative wave. The positive wave shoves outward from the point of detonation, followed by the trailing vacuum space "sucking back" towards the point of origin as the shock bubble collapses. The greatest defense against shock injuries is distance from the source of the shock. As a point of reference, the overpressure at the Oklahoma City bombing was estimated in the range of .
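Overpressure comparisons across different charge sizes are commonly made with Hopkinson–Cranz cube-root scaling. The sketch below is a minimal illustration of that rule of thumb; the function names are hypothetical and the example numbers are illustrative, with no connection to the Oklahoma City estimate above.

    # Minimal sketch of Hopkinson-Cranz cube-root scaling: two charges produce
    # similar overpressure at distances proportional to the cube root of their
    # TNT-equivalent masses. Function names are illustrative, not a real API.

    def scaled_distance(standoff_m: float, charge_kg_tnt: float) -> float:
        """Scaled distance Z = R / W**(1/3), in m/kg^(1/3)."""
        return standoff_m / charge_kg_tnt ** (1.0 / 3.0)

    def equivalent_standoff(ref_standoff_m: float, ref_charge_kg: float,
                            new_charge_kg: float) -> float:
        """Distance at which new_charge_kg gives roughly the same
        overpressure that ref_charge_kg gives at ref_standoff_m."""
        z = scaled_distance(ref_standoff_m, ref_charge_kg)
        return z * new_charge_kg ** (1.0 / 3.0)

    # Example: if 1 kg of TNT produces a given overpressure at 5 m,
    # 1,000 kg produces roughly the same overpressure at about 50 m.
    print(equivalent_standoff(5.0, 1.0, 1000.0))  # ~50.0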
Heat
A thermal wave is created by the sudden release of heat caused by an explosion. Military bomb tests have documented temperatures of up to 2,480 °C (4,500 °F). While capable of inflicting severe to catastrophic burns and causing secondary fires, thermal wave effects are considered very limited in range compared to shock and fragmentation. This rule has been challenged, however, by military development of thermobaric weapons, which employ a combination of negative shock wave effects and extreme temperature to incinerate objects within the blast radius.
Fragmentation
Fragmentation is produced by the acceleration of shattered pieces of bomb casing and adjacent physical objects. The use of fragmentation in bombs dates to the 14th century, and appears in the Ming Dynasty text Huolongjing. The fragmentation bombs were filled with iron pellets and pieces of broken porcelain. Once the bomb explodes, the resulting fragments are capable of piercing the skin and blinding enemy soldiers.
While conventionally viewed as small metal shards moving at supersonic or hypersonic speeds, fragmentation can occur on a massive scale and travel extensive distances. When the SS Grandcamp exploded in the Texas City disaster on April 16, 1947, one fragment of the blast was a two-ton anchor, hurled nearly two miles inland and embedded in the parking lot of the Pan American refinery.
Effects on living things
To people who are close to a blast incident, such as bomb disposal technicians, soldiers wearing body armor, deminers, or individuals wearing little to no protection, there are four types of blast effects on the human body: overpressure (shock), fragmentation, impact, and heat. Overpressure refers to the sudden and drastic rise in ambient pressure that can damage the internal organs, possibly leading to permanent damage or death. Fragmentation can also include sand, debris and vegetation from the area surrounding the blast source. This is very common in anti-personnel mine blasts. The projection of materials poses a potentially lethal threat caused by cuts in soft tissues, as well as infections, and injuries to the internal organs. When the overpressure wave impacts the body it can induce violent levels of blast-induced acceleration. Resulting injuries may range from minor to unsurvivable. Immediately following this initial acceleration, deceleration injuries can occur when a person impacts directly against a rigid surface or obstacle after being set in motion by the force of the blast. Finally, injury and fatality can result from the explosive fireball as well as incendiary agents projected onto the body. Personal protective equipment, such as a bomb suit or demining ensemble, as well as helmets, visors and foot protection, can dramatically reduce the four effects, depending upon the charge, proximity and other variables.
Types
Experts commonly distinguish between civilian and military bombs. The latter are almost always mass-produced weapons, developed and constructed to a standard design out of standard components and intended to be deployed in a standard manner. IEDs, by contrast, are divided into three basic categories by size and delivery method. Type 76 IEDs are hand-carried parcel or suitcase bombs; type 80 are "suicide vests" worn by a bomber; and type 3 devices are vehicles laden with explosives that act as large-scale stationary or self-propelled bombs, also known as VBIEDs (vehicle-borne IEDs).
Improvised explosive materials are typically unstable and subject to spontaneous, unintentional detonation triggered by a wide range of environmental effects, ranging from impact and friction to electrostatic shock. Even subtle motion, change in temperature, or the nearby use of cellphones or radios can trigger an unstable or remote-controlled device. Any interaction with explosive materials or devices by unqualified personnel should be considered a grave and immediate risk of death or dire injury. The safest response to finding an object believed to be an explosive device is to get as far away from it as possible.
Atomic bombs are based on the theory of nuclear fission: when a large atomic nucleus splits, it releases a massive amount of energy. Thermonuclear weapons (colloquially known as "hydrogen bombs") use the energy from an initial fission explosion to create an even more powerful fusion explosion.
The term "dirty bomb" refers to a specialized device that relies on a comparatively low explosive yield to scatter harmful material over a wide area. Most commonly associated with radiological or chemical materials, dirty bombs seek to kill or injure and then to deny access to a contaminated area until a thorough clean-up can be accomplished. In the case of urban settings, this clean-up may take extensive time, rendering the contaminated zone virtually uninhabitable in the interim.
The power of large bombs is typically measured in kilotons (kt) or megatons (Mt) of TNT equivalent. The most powerful bombs ever used in combat were the two atomic bombs dropped by the United States on Hiroshima and Nagasaki, and the most powerful ever tested was the Tsar Bomba. The most powerful non-nuclear bomb is the Russian "Father of All Bombs" (officially, Aviation Thermobaric Bomb of Increased Power, ATBIP), followed by the United States Air Force's MOAB (officially Massive Ordnance Air Blast, commonly known as the "Mother of All Bombs").
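For scale, the kiloton is defined as the energy released by one thousand tons of TNT (a standard definition, with values rounded here):
$$1\ \text{kt TNT} \equiv 4.184 \times 10^{12}\ \text{J},$$
so the Tsar Bomba's roughly 50 Mt test yield corresponds to about $2 \times 10^{17}$ J.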
Below is a list of different types of bombs based on the fundamental explosive mechanism they employ.
Compressed gas
Relatively small explosions can be produced by pressurizing a container until catastrophic failure, as with a dry ice bomb. Technically, devices that create explosions of this type cannot be classified as "bombs" by the definition presented at the top of this article. However, the explosions created by these devices can cause property damage, injury, or death. Flammable liquids, gases, and gas mixtures dispersed in these explosions may also ignite if exposed to a spark or flame.
Low explosive
The simplest and oldest bombs store energy in the form of a low explosive. Black powder is an example of a low explosive. Low explosives typically consist of a mixture of an oxidizing salt, such as potassium nitrate (saltpeter), with solid fuel, such as charcoal or aluminium powder. These compositions deflagrate upon ignition, producing hot gas. Under normal circumstances, this deflagration occurs too slowly to produce a significant pressure wave; low explosives, therefore, must generally be used in large quantities or confined in a container with a high burst pressure to be useful as a bomb.
High explosive
A high explosive bomb is one that employs a process called "detonation" to rapidly convert high-energy molecules into very low-energy products. Detonation is distinct from deflagration in that the chemical reaction propagates faster than the speed of sound (often many times faster) in an intense shock wave. Therefore, the pressure wave produced by a high explosive is not significantly increased by confinement, as detonation occurs so quickly that the resulting plasma does not expand much before all the explosive material has reacted. This has led to the development of plastic explosive. A casing is still employed in some high explosive bombs, but with the purpose of fragmentation. Most high explosive bombs consist of an insensitive secondary explosive that must be detonated with a blasting cap containing a more sensitive primary explosive.
Thermobaric
A thermobaric bomb is a type of explosive that utilizes oxygen from the surrounding air to generate an intense, high-temperature explosion, and in practice the blast wave typically produced by such a weapon is of a significantly longer duration than that produced by a conventional condensed explosive. The fuel-air bomb is one of the best-known types of thermobaric weapons.
Nuclear fission
Nuclear fission type atomic bombs utilize the energy present in very heavy atomic nuclei, such as U-235 or Pu-239. In order to release this energy rapidly, a certain amount of the fissile material must be very rapidly consolidated while being exposed to a neutron source. If consolidation occurs slowly, repulsive forces drive the material apart before a significant explosion can occur. Under the right circumstances, rapid consolidation can provoke a chain reaction that can proliferate and intensify by many orders of magnitude within microseconds. The energy released by a nuclear fission bomb may be tens of thousands of times greater than a chemical bomb of the same mass.
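An order-of-magnitude sketch of that comparison, using standard nuclear-physics values supplied here as assumptions: each U-235 fission releases about 200 MeV ($\approx 3.2 \times 10^{-11}$ J), so complete fission of one kilogram would yield
$$\frac{1000}{235} \times 6.02 \times 10^{23} \times 3.2 \times 10^{-11}\ \text{J} \approx 8 \times 10^{13}\ \text{J},$$
roughly 20 kt of TNT equivalent. Real devices fission only a fraction of their material, which is why practical whole-weapon ratios are far smaller than this idealized figure.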
Nuclear fusion
A thermonuclear weapon is a type of nuclear bomb that releases energy through the combination of fission and fusion of the light atomic nuclei of deuterium and tritium. With this type of bomb, a thermonuclear detonation is triggered by the detonation of a fission-type nuclear bomb contained within a material containing high concentrations of deuterium and tritium. Weapon yield is typically increased with a tamper that increases the duration and intensity of the reaction through inertial confinement and neutron reflection. Nuclear fusion bombs can have arbitrarily high yields, making them hundreds or thousands of times more powerful than nuclear fission bombs.
A pure fusion weapon is a hypothetical nuclear weapon that does not require a primary fission stage to start a fusion reaction.
Antimatter
Antimatter bombs can theoretically be constructed, but antimatter is very costly to produce and hard to store safely.
Other
Aerial bomb – designed to be dropped from a military aircraft (or even any aircraft) and carried on hardpoints or in bomb bays
Delay-action bomb – explodes some time after impact, as opposed to before or on impact
Dummy bomb – harmless bomb that has been fully disabled or has had its explosive contents removed, often used for training or display
Glide bomb – features flight control surfaces, allowing it to glide fairly long distances to its target
General-purpose bomb – aerial bomb designed to be suitable for multiple purposes rather than a single specialized role
Incendiary bomb – designed to set targets ablaze
Cluster bomb – releases additional submunitions, often smaller bombs, upon detonation
Anti-runway penetration bomb – designed to destroy runways and aprons
Bunker buster – capable of penetrating hardened or fortified surfaces before detonating
Concrete bomb – contains dense, inert material (typically concrete) instead of explosives, using the kinetic energy of the falling bomb to destroy the target
Improvised explosive device – classification of bombs produced in unconventional ways or using unconventional materials; includes explosives such as the barrel bomb, nail bomb, pipe bomb, pressure cooker bomb, fertilizer bomb, and Molotov cocktail
Delivery
The first air-dropped bombs were used by the Austrians in the 1849 siege of Venice. Two hundred unmanned balloons carried small bombs, although few bombs actually hit the city.
The first bombing from a fixed-wing aircraft took place in 1911 when the Italians dropped bombs by hand on the Turkish lines in what is now Libya, during the Italo-Turkish War. The first large scale dropping of bombs took place during World War I starting in 1915 with the German Zeppelin airship raids on London, England, and the same war saw the invention of the first heavy bombers. One Zeppelin raid on 8 September 1915 dropped of high explosives and incendiary bombs, including one bomb that weighed .
During World War II bombing became a major military feature, and a number of novel delivery methods were introduced. These included Barnes Wallis's bouncing bomb, designed to bounce across water, avoiding torpedo nets and other underwater defenses, until it reached a dam, ship, or other destination, where it would sink and explode. By the end of the war, planes such as the Allied forces' Avro Lancaster were delivering with accuracy from , ten-ton earthquake bombs (also invented by Barnes Wallis) named "Grand Slam", which, unusually for the time, were dropped from high altitude in order to gain high speed and would, upon impact, penetrate and explode deep underground ("camouflet"), creating massive caverns or craters and affecting targets too large or difficult to destroy with other types of bomb.
Modern military bomber aircraft are designed around a large-capacity internal bomb bay, while fighter-bombers usually carry bombs externally on pylons or bomb racks or on multiple ejection racks, which enable mounting several bombs on a single pylon. Some bombs are equipped with a parachute, such as the World War II "parafrag" (an fragmentation bomb), the Vietnam War-era daisy cutters, and the bomblets of some modern cluster bombs. Parachutes slow the bomb's descent, giving the dropping aircraft time to get to a safe distance from the explosion. This is especially important with air-burst nuclear weapons (especially those dropped from slower aircraft or with very high yields), and in situations where the aircraft releases a bomb at low altitude. A number of modern bombs are also precision-guided munitions, and may be guided after they leave an aircraft by remote control, or by autonomous guidance.
Aircraft may also deliver bombs in the form of warheads on guided missiles, such as long-range cruise missiles, which can also be launched from warships.
A hand grenade is delivered by being thrown. Grenades can also be projected by other means, such as being launched from the muzzle of a rifle (as in the rifle grenade), using a grenade launcher (such as the M203), or by attaching a rocket to the explosive grenade (as in a rocket-propelled grenade (RPG)).
A bomb may also be positioned in advance and concealed.
A bomb destroying a rail track just before a train arrives will usually cause the train to derail. In addition to the damage to vehicles and people, a bomb exploding in a transport network often damages, and is sometimes mainly intended to damage, the network itself. This applies to railways, bridges, runways, and ports, and, to a lesser extent (depending on circumstances), to roads.
In the case of suicide bombing, the bomb is often carried by the attacker on their body, or in a vehicle driven to the target.
The Blue Peacock nuclear mines, which were also termed "bombs", were planned to be positioned during wartime and be constructed such that, if disturbed, they would explode within ten seconds.
The explosion of a bomb may be triggered by a detonator or a fuse. Detonators are triggered by clocks, remote controls such as cell phones, or some kind of sensor, such as pressure (altitude), radar, vibration, or contact. Detonators vary in how they work; they can be electrical, fire-fuze, or blast-initiated, among others.
Blast seat
In forensic science, the point of detonation of a bomb is referred to as its blast seat, seat of explosion, blast hole or epicenter. Depending on the type, quantity and placement of explosives, the blast seat may be either spread out or concentrated (i.e., an explosion crater).
Other types of explosions, such as dust or vapor explosions, do not cause craters or even have definitive blast seats.
| Technology | Weapons | null |
47641 | https://en.wikipedia.org/wiki/Standard%20Model | Standard Model | The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.
Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain why there is more matter than anti-matter, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses.
The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.
Historical background
In 1928, Paul Dirac introduced the Dirac equation, which implied the existence of antimatter.
In 1954, Yang Chen-Ning and Robert Mills extended the concept of gauge theory for abelian groups, e.g. quantum electrodynamics, to nonabelian groups to provide an explanation for strong interactions. In 1957, Chien-Shiung Wu demonstrated parity was not conserved in the weak interaction.
In 1961, Sheldon Glashow combined the electromagnetic and weak interactions. In 1964, Murray Gell-Mann and George Zweig introduced quarks and that same year Oscar W. Greenberg implicitly introduced color charge of quarks. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form.
In 1970, Sheldon Glashow, John Iliopoulos, and Luciano Maiani introduced the GIM mechanism, predicting the charm quark. In 1973, David Gross and Frank Wilczek, and independently David Politzer, discovered that non-Abelian gauge theories, like the color theory of the strong force, have asymptotic freedom. In 1976, Martin Perl discovered the tau lepton at SLAC. In 1977, a team led by Leon Lederman at Fermilab discovered the bottom quark.
The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions, i.e. the quarks and leptons.
After the neutral weak currents caused by Z boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W± and Z0 bosons were discovered experimentally in 1983; and the ratio of their masses was found to be as the Standard Model predicted.
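The tree-level relation behind that prediction can be written compactly; as a short worked check, using rounded mass values that are assumptions here rather than figures from this article:
$$\frac{m_W}{m_Z} = \cos\theta_W, \qquad \frac{80.4\ \text{GeV}}{91.2\ \text{GeV}} \approx 0.88,$$
consistent with the measured weak mixing angle, $\sin^2\theta_W \approx 0.22$.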
The theory of the strong interaction (i.e. quantum chromodynamics, QCD), to which many contributed, acquired its modern form in 1973–74 when asymptotic freedom was proposed (a development that made QCD the main focus of theoretical research) and experiments confirmed that the hadrons were composed of fractionally charged quarks.
The term "Standard Model" was introduced by Abraham Pais and Sam Treiman in 1975, with reference to the electroweak theory with four quarks. Steven Weinberg, has since claimed priority, explaining that he chose the term Standard Model out of a sense of modesty and used it in 1973 during a talk in Aix-en-Provence in France.
Particle content
The Standard Model includes members of several classes of elementary particles, which in turn can be distinguished by other characteristics, such as color charge.
All particles can be summarized as follows:
Fermions
The Standard Model includes 12 elementary particles of spin , known as fermions. Fermions respect the Pauli exclusion principle, meaning that two identical fermions cannot simultaneously occupy the same quantum state in the same atom. Each fermion has a corresponding antiparticle: a particle with the same properties except for opposite charges. Fermions are classified based on how they interact, which is determined by the charges they carry, into two groups: quarks and leptons. Within each group, pairs of particles that exhibit similar physical behaviors are then grouped into generations (see the table). Each member of a generation has a greater mass than the corresponding particle of prior generations. Thus, there are three generations of quarks and leptons. As first-generation particles do not decay, they comprise all of ordinary (baryonic) matter. Specifically, all atoms consist of electrons orbiting around the atomic nucleus, ultimately constituted of up and down quarks. On the other hand, second- and third-generation charged particles decay with very short half-lives and can only be observed in high-energy environments. Neutrinos of all generations also do not decay, and pervade the universe, but rarely interact with baryonic matter.
There are six quarks: up, down, charm, strange, top, and bottom. Quarks carry color charge, and hence interact via the strong interaction. The color confinement phenomenon results in quarks being strongly bound together such that they form color-neutral composite particles called hadrons; quarks cannot individually exist and must always bind with other quarks. Hadrons can contain either a quark-antiquark pair (mesons) or three quarks (baryons). The lightest baryons are the nucleons: the proton and neutron. Quarks also carry electric charge and weak isospin, and thus interact with other fermions through electromagnetism and weak interaction. The six leptons consist of the electron, electron neutrino, muon, muon neutrino, tau, and tau neutrino. The leptons do not carry color charge, and do not respond to strong interaction. The charged leptons carry an electric charge of −1 e, while the three neutrinos carry zero electric charge. Thus, the neutrinos' motions are influenced by only the weak interaction and gravity, making them difficult to observe.
Gauge bosons
The Standard Model includes four kinds of gauge bosons of spin 1; bosons are quantum particles with integer spin. The gauge bosons are defined as force carriers, as they are responsible for mediating the fundamental interactions. The Standard Model explains the four fundamental forces as arising from interactions in which fermions exchange virtual force-carrier particles, which thereby mediate the forces. At a macroscopic scale, this manifests as a force. Because they have integer spin, gauge bosons do not follow the Pauli exclusion principle that constrains fermions, and so have no theoretical limit on their spatial density. The types of gauge bosons are described below.
Electromagnetism: Photons mediate the electromagnetic force, responsible for interactions between electrically charged particles. The photon is massless and is described by the theory of quantum electrodynamics (QED).
Strong Interactions: Gluons mediate the strong interactions, which binds quarks to each other by influencing the color charge, with the interactions being described in the theory of quantum chromodynamics (QCD). They have no mass, and there are eight distinct gluons, with each being denoted through a color-anticolor charge combination (e.g. red–antigreen). As gluons have an effective color charge, they can also interact amongst themselves.
Weak Interactions: The W+, W−, and Z gauge bosons mediate the weak interactions between all fermions and are responsible for radioactivity. They are massive, with the Z boson being more massive than the W±. The weak interactions involving the W± act only on left-handed particles and right-handed antiparticles. The W± bosons carry an electric charge of +1 or −1 and couple to the electromagnetic interaction. The electrically neutral Z boson interacts with both left-handed particles and right-handed antiparticles. These three gauge bosons, along with the photon, are grouped together as collectively mediating the electroweak interaction.
Gravity: It is currently unexplained in the Standard Model, as the hypothetical mediating particle graviton has been proposed, but not observed. This is due to the incompatibility of quantum mechanics and Einstein's theory of general relativity, regarded as being the best explanation for gravity. In general relativity, gravity is explained as being the geometric curving of spacetime.
The Feynman diagram calculations, which are a graphical representation of the perturbation theory approximation, invoke "force mediating particles", and when applied to analyze high-energy scattering experiments are in reasonable agreement with the data. However, perturbation theory (and with it the concept of a "force-mediating particle") fails in other situations. These include low-energy quantum chromodynamics, bound states, and solitons. The interactions between all the particles described by the Standard Model are summarized by the diagrams on the right of this section.
Higgs boson
The Higgs particle is a massive scalar elementary particle theorized by Peter Higgs (and others) in 1964, when he showed that Goldstone's 1962 theorem (generic continuous symmetry, which is spontaneously broken) provides a third polarisation of a massive vector field. Hence, Goldstone's original scalar doublet, the massive spin-zero particle, was proposed as the Higgs boson, and is a key building block in the Standard Model. It has no intrinsic spin, and for that reason is classified as a boson with spin-0.
The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons) are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself.
Because the Higgs boson is a very massive particle and also decays almost immediately when created, only a very high-energy particle accelerator can observe and record it. Experiments to confirm and determine the nature of the Higgs boson using the Large Hadron Collider (LHC) at CERN began in early 2010 and were performed at Fermilab's Tevatron until its closure in late 2011. Mathematical consistency of the Standard Model requires that any mechanism capable of generating the masses of elementary particles must become visible at energies above ; therefore, the LHC (designed to collide two proton beams) was built to answer the question of whether the Higgs boson actually exists.
On 4 July 2012, two of the experiments at the LHC (ATLAS and CMS) both reported independently that they had found a new particle with a mass of about (about 133 proton masses, on the order of ), which is "consistent with the Higgs boson". On 13 March 2013, it was confirmed to be the searched-for Higgs boson.
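As a quick arithmetic check of the "133 proton masses" figure, taking a Higgs mass of about 125 GeV and a proton mass of about 0.938 GeV (rounded standard values, assumed here):
$$\frac{125\ \text{GeV}}{0.938\ \text{GeV}} \approx 133.$$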
Theoretical aspects
Construction of the Standard Model Lagrangian
Technically, quantum field theory provides the mathematical framework for the Standard Model, in which a Lagrangian controls the dynamics and kinematics of the theory. Each kind of particle is described in terms of a dynamical field that pervades space-time.
The construction of the Standard Model proceeds following the modern method of constructing most field theories: by first postulating a set of symmetries of the system, and then by writing down the most general renormalizable Lagrangian from its particle (field) content that observes these symmetries.
The global Poincaré symmetry is postulated for all relativistic quantum field theories. It consists of the familiar translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity. The local SU(3) × SU(2) × U(1) gauge symmetry is an internal symmetry that essentially defines the Standard Model. Roughly, the three factors of the gauge symmetry give rise to the three fundamental interactions. The fields fall into different representations of the various symmetry groups of the Standard Model (see table). Upon writing the most general Lagrangian, one finds that the dynamics depends on 19 parameters, whose numerical values are established by experiment.
Quantum chromodynamics sector
The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, which is a Yang–Mills gauge theory with SU(3) symmetry, generated by $T^a = \lambda^a/2$. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by

$$\mathcal{L}_{\text{QCD}} = \overline{\psi}\, i\gamma^\mu D_\mu \psi - \tfrac{1}{4} G^a_{\mu\nu} G^{a\,\mu\nu},$$

where $\psi$ is a three-component column vector of Dirac spinors, each element of which refers to a quark field with a specific color charge (i.e. red, blue, and green), and summation over flavor (i.e. up, down, strange, etc.) is implied.

The gauge covariant derivative of QCD is defined by $D_\mu \equiv \partial_\mu - i g_s \tfrac{1}{2} \lambda^a A^a_\mu$, where
$\gamma^\mu$ are the Dirac matrices,
$A^a_\mu$ is the 8-component ($a = 1, 2, \dots, 8$) SU(3) gauge field,
$\lambda^a$ are the 3 × 3 Gell-Mann matrices, generators of the SU(3) color group,
$G^a_{\mu\nu}$ represents the gluon field strength tensor, and
$g_s$ is the strong coupling constant.

The QCD Lagrangian is invariant under local SU(3) gauge transformations; i.e., transformations of the form $\psi \rightarrow \psi' = U\psi$, where $U = e^{i g_s \lambda^a \phi^a(x)}$ is a 3 × 3 unitary matrix with determinant 1, making it a member of the group SU(3), and $\phi^a(x)$ is an arbitrary function of spacetime.
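The group properties invoked here can be checked numerically. Below is a minimal sketch (assuming NumPy and SciPy; not from the article) that exponentiates a random real combination of the Gell-Mann matrices — a "gauge function" evaluated at one spacetime point — and verifies that the result is unitary with determinant 1, i.e. an element of SU(3):

```python
# Minimal numerical sketch (assumes NumPy and SciPy; not from the article):
# build U = exp(i g_s * lambda^a phi^a) at one spacetime point and check
# that it is unitary with determinant 1, i.e. an element of SU(3).
import numpy as np
from scipy.linalg import expm

sqrt3 = np.sqrt(3.0)
gell_mann = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]),
    np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]),
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / sqrt3,
]

g_s = 1.0                                  # coupling; any real value works here
rng = np.random.default_rng(0)
phi = rng.normal(size=8)                   # arbitrary gauge function values

H = g_s * sum(p * lam for p, lam in zip(phi, gell_mann))   # Hermitian, traceless
U = expm(1j * H)

print(np.allclose(U.conj().T @ U, np.eye(3)))   # unitary -> True
print(np.isclose(np.linalg.det(U), 1.0))        # det 1   -> True
```

Because the Gell-Mann matrices are Hermitian and traceless, the exponential is guaranteed to be unitary with unit determinant; the numerical check simply confirms it.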
Electroweak sector
The electroweak sector is a Yang–Mills gauge theory with the symmetry group U(1) × SU(2)L,

$$\mathcal{L}_{\text{EW}} = \overline{Q}_{Lj}\, i\gamma^\mu D_\mu Q_{Lj} + \overline{u}_{Rj}\, i\gamma^\mu D_\mu u_{Rj} + \overline{d}_{Rj}\, i\gamma^\mu D_\mu d_{Rj} + \overline{L}_{Lj}\, i\gamma^\mu D_\mu L_{Lj} + \overline{e}_{Rj}\, i\gamma^\mu D_\mu e_{Rj} - \tfrac{1}{4} W^a_{\mu\nu} W^{a\,\mu\nu} - \tfrac{1}{4} B_{\mu\nu} B^{\mu\nu},$$

where the subscript $j$ sums over the three generations of fermions; $Q_{Lj}$, $u_{Rj}$, and $d_{Rj}$ are the left-handed doublet, right-handed singlet up type, and right-handed singlet down type quark fields; and $L_{Lj}$ and $e_{Rj}$ are the left-handed doublet and right-handed singlet lepton fields.

The electroweak gauge covariant derivative is defined as $D_\mu \equiv \partial_\mu - i g' \tfrac{1}{2} Y_W B_\mu - i g \tfrac{1}{2} \vec{\tau}_L \cdot \vec{W}_\mu$, where
$B_\mu$ is the U(1) gauge field,
$Y_W$ is the weak hypercharge – the generator of the U(1) group,
$\vec{W}_\mu$ is the 3-component SU(2) gauge field,
$\vec{\tau}_L$ are the Pauli matrices – infinitesimal generators of the SU(2) group – with subscript L to indicate that they only act on left-chiral fermions,
$g'$ and $g$ are the U(1) and SU(2) coupling constants respectively,
$W^{a\,\mu\nu}$ ($a = 1, 2, 3$) and $B^{\mu\nu}$ are the field strength tensors for the weak isospin and weak hypercharge fields.
Notice that the addition of fermion mass terms into the electroweak Lagrangian is forbidden, since terms of the form $m\overline{\psi}\psi$ do not respect gauge invariance. Neither is it possible to add explicit mass terms for the U(1) and SU(2) gauge fields. The Higgs mechanism is responsible for the generation of the gauge boson masses, and the fermion masses result from Yukawa-type interactions with the Higgs field.
Higgs sector
In the Standard Model, the Higgs field is an SU(2) doublet of complex scalar fields with four degrees of freedom:

$$\varphi = \begin{pmatrix} \varphi^+ \\ \varphi^0 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} \varphi_1 + i\varphi_2 \\ \varphi_3 + i\varphi_4 \end{pmatrix},$$

where the superscripts + and 0 indicate the electric charge of the components. The weak hypercharge of both components is 1. Before symmetry breaking, the Higgs Lagrangian is

$$\mathcal{L}_H = (D_\mu \varphi)^\dagger (D^\mu \varphi) - V(\varphi),$$

where $D_\mu$ is the electroweak gauge covariant derivative defined above and $V(\varphi)$ is the potential of the Higgs field. The square of the covariant derivative leads to three- and four-point interactions between the electroweak gauge fields $W^a_\mu$ and $B_\mu$ and the scalar field $\varphi$. The scalar potential is given by

$$V(\varphi) = -\mu^2 \varphi^\dagger \varphi + \lambda \left(\varphi^\dagger \varphi\right)^2,$$

where $\mu^2 > 0$, so that $\varphi$ acquires a non-zero vacuum expectation value, which generates masses for the electroweak gauge fields (the Higgs mechanism), and $\lambda > 0$, so that the potential is bounded from below. The quartic term describes self-interactions of the scalar field $\varphi$.

The minimum of the potential is degenerate with an infinite number of equivalent ground state solutions, which occurs when $\varphi^\dagger \varphi = \tfrac{\mu^2}{2\lambda}$. It is possible to perform a gauge transformation on $\varphi$ such that the ground state is transformed to a basis where $\varphi_1 = \varphi_2 = \varphi_4 = 0$ and $\varphi_3 = \tfrac{\mu}{\sqrt{\lambda}}$. This breaks the symmetry of the ground state. The expectation value of $\varphi$ now becomes

$$\langle \varphi \rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix},$$

where $v \equiv \tfrac{\mu}{\sqrt{\lambda}}$ has units of mass and sets the scale of electroweak physics. This is the only dimensional parameter of the Standard Model and has a measured value of ~246 GeV.

After symmetry breaking, the masses of the W and Z are given by $m_W = \tfrac{1}{2} g v$ and $m_Z = \tfrac{1}{2} \sqrt{g^2 + g'^2}\, v$, which can be viewed as predictions of the theory. The photon remains massless. The mass of the Higgs boson is $m_H = \sqrt{2\mu^2} = \sqrt{2\lambda}\, v$. Since $\mu$ and $\lambda$ are free parameters, the Higgs boson's mass could not be predicted beforehand and had to be determined experimentally.
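As a rough numerical illustration of these tree-level relations (a sketch using approximate coupling values, not figures quoted in the article), one can plug the measured vev and couplings into the formulas above and compare with the observed boson masses:

```python
# Rough tree-level sketch (coupling values are approximate, not from the
# article): the measured vev v plus the gauge couplings g and g' fix the
# W and Z masses and the weak mixing angle.
import math

v = 246.22        # Higgs vacuum expectation value, GeV
g = 0.652         # SU(2) coupling, approximate value at the Z scale
g_prime = 0.357   # U(1) hypercharge coupling, approximate

m_W = 0.5 * g * v                               # m_W = g v / 2
m_Z = 0.5 * math.hypot(g, g_prime) * v          # m_Z = sqrt(g^2 + g'^2) v / 2
sin2_theta_W = g_prime**2 / (g**2 + g_prime**2)

print(f"m_W ~ {m_W:.1f} GeV   (measured ~ 80.4 GeV)")
print(f"m_Z ~ {m_Z:.1f} GeV   (measured ~ 91.2 GeV)")
print(f"sin^2(theta_W) ~ {sin2_theta_W:.3f} (measured ~ 0.231)")
```

The small residual differences from the measured masses come from the rounded couplings and from neglecting higher-order corrections.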
Yukawa sector
The Yukawa interaction terms are:

$$\mathcal{L}_{\text{Yukawa}} = (Y_u)_{mn} (\overline{Q}_L)_m \tilde{\varphi} (u_R)_n + (Y_d)_{mn} (\overline{Q}_L)_m \varphi (d_R)_n + (Y_e)_{mn} (\overline{L}_L)_m \varphi (e_R)_n + \mathrm{h.c.},$$

where $Y_u$, $Y_d$, and $Y_e$ are matrices of Yukawa couplings, with the $mn$ term giving the coupling of the generations $m$ and $n$, and h.c. means Hermitian conjugate of the preceding terms. The fields $Q_L$ and $L_L$ are the left-handed quark and lepton doublets. Likewise, $u_R$, $d_R$, and $e_R$ are the right-handed up-type quark, down-type quark, and lepton singlets. Finally, $\varphi$ is the Higgs doublet and $\tilde{\varphi} = i\tau_2 \varphi^*$ is its charge conjugate state.
The Yukawa terms are invariant under the SU(2) × U(1) gauge symmetry of the Standard Model and generate masses for all fermions after spontaneous symmetry breaking.
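After symmetry breaking, each Yukawa term reduces to a Dirac mass term with the standard tree-level relation $m_f = y_f v/\sqrt{2}$. A minimal sketch (approximate masses, illustrative only) inverts this relation to show how widely the couplings vary between generations:

```python
# Illustrative sketch (approximate masses in GeV; not from the article):
# invert m_f = y_f v / sqrt(2) to estimate each fermion's Yukawa coupling.
import math

v = 246.22
masses = {"electron": 0.000511, "muon": 0.1057, "tau": 1.777,
          "bottom": 4.18, "top": 172.7}

for name, m in masses.items():
    y = math.sqrt(2) * m / v
    print(f"{name:8s} y ~ {y:.2e}")
# the top coupling comes out near 1; the electron's near 3e-6
```

The enormous spread of these couplings, from order 1 down to order 10⁻⁶, is itself left unexplained by the Standard Model.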
Fundamental interactions
The Standard Model describes three of the four fundamental interactions in nature; only gravity remains unexplained. In the Standard Model, such an interaction is described as an exchange of bosons between the objects affected, such as a photon for the electromagnetic force and a gluon for the strong interaction. Those particles are called force carriers or messenger particles.
Gravity
Despite being perhaps the most familiar fundamental interaction, gravity is not described by the Standard Model, due to contradictions that arise when combining general relativity, the modern theory of gravity, and quantum mechanics. However, gravity is so weak at microscopic scales that it is essentially unmeasurable. The graviton is postulated to be the mediating particle, but has not yet been proved to exist.
Electromagnetism
Electromagnetism is the only long-range force in the Standard Model. It is mediated by photons and couples to electric charge. Electromagnetism is responsible for a wide range of phenomena including atomic electron shell structure, chemical bonds, electric circuits and electronics. Electromagnetic interactions in the Standard Model are described by quantum electrodynamics.
Weak nuclear force
The weak interaction is responsible for various forms of particle decay, such as beta decay. It is weak and short-range because its mediating particles, the W and Z bosons, have mass. W bosons have electric charge and mediate interactions that change the particle type (referred to as flavor) and charge. Interactions mediated by W bosons are charged current interactions. Z bosons are neutral and mediate neutral current interactions, which do not change particle flavor. Thus Z bosons are similar to the photon, aside from being massive and interacting with the neutrino. The weak interaction is also the only interaction to violate parity and CP. Parity violation is maximal for charged current interactions, since the W boson interacts exclusively with left-handed fermions and right-handed antifermions.
In the Standard Model, the weak force is understood in terms of the electroweak theory, which states that the weak and electromagnetic interactions become united into a single electroweak interaction at high energies.
Strong nuclear force
The strong nuclear force is responsible for hadronic and nuclear binding. It is mediated by gluons, which couple to color charge. Since gluons themselves have color charge, the strong force exhibits confinement and asymptotic freedom. Confinement means that only color-neutral particles can exist in isolation; at low energies, therefore, quarks exist only inside hadrons, never in isolation. Asymptotic freedom means that the strong force becomes weaker as the energy scale increases. The strong force overpowers, at the relevant scales, the electrostatic repulsion of protons within nuclei and of quarks within hadrons.
While quarks are bound in hadrons by the fundamental strong interaction, which is mediated by gluons, nucleons are bound by an emergent phenomenon termed the residual strong force or nuclear force. This interaction is mediated by mesons, such as the pion. The color charges inside the nucleon cancel out, meaning most of the gluon and quark fields cancel out outside of the nucleon. However, some residue "leaks" out, appearing as the exchange of virtual mesons, which causes the attractive force between nucleons. The (fundamental) strong interaction is described by quantum chromodynamics, which is a component of the Standard Model.
Tests and predictions
The Standard Model predicted the existence of the W and Z bosons, gluon, top quark and charm quark, and predicted many of their properties before these particles were observed. The predictions were experimentally confirmed with good precision.
The Standard Model also predicted the existence of the Higgs boson, which was found in 2012 at the Large Hadron Collider, the final fundamental particle predicted by the Standard Model to be experimentally confirmed.
Challenges
Self-consistency of the Standard Model (currently formulated as a non-abelian gauge theory quantized through path-integrals) has not been mathematically proved. While regularized versions useful for approximate computations (for example lattice gauge theory) exist, it is not known whether they converge (in the sense of S-matrix elements) in the limit that the regulator is removed. A key question related to the consistency is the Yang–Mills existence and mass gap problem.
Experiments indicate that neutrinos have mass, which the classic Standard Model did not allow. To accommodate this finding, the classic Standard Model can be modified to include neutrino mass, although it is not obvious exactly how this should be done.
If one insists on using only Standard Model particles, this can be achieved by adding a non-renormalizable interaction of leptons with the Higgs boson. On a fundamental level, such an interaction emerges in the seesaw mechanism where heavy right-handed neutrinos are added to the theory.
This is natural in the left-right symmetric extension of the Standard Model and in certain grand unified theories. As long as new physics appears below or around 10¹⁴ GeV, the neutrino masses can be of the right order of magnitude.
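A back-of-the-envelope check of that statement, as a sketch with illustrative numbers (a Dirac mass taken near the electroweak scale; nothing here is quoted from the article):

```python
# Back-of-the-envelope seesaw estimate, m_nu ~ m_D^2 / M_R (illustrative
# numbers, not from the article): a Dirac mass near the electroweak scale
# and right-handed neutrinos near 10^14 GeV give m_nu of order 0.1 eV.
m_D = 100.0    # Dirac mass, GeV
M_R = 1e14     # heavy right-handed Majorana mass, GeV

m_nu = m_D**2 / M_R                    # in GeV
print(f"m_nu ~ {m_nu * 1e9:.2f} eV")   # ~ 0.10 eV
```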
Theoretical and experimental research has attempted to extend the Standard Model into a unified field theory or a theory of everything, a complete theory explaining all physical phenomena including constants. Inadequacies of the Standard Model that motivate such research include:
The model does not explain gravitation, although physical confirmation of a theoretical particle known as a graviton would account for it to a degree. Though it addresses strong and electroweak interactions, the Standard Model does not consistently explain the canonical theory of gravitation, general relativity, in terms of quantum field theory. The reason for this is, among other things, that quantum field theories of gravity generally break down before reaching the Planck scale. As a consequence, we have no reliable theory for the very early universe.
Some physicists consider it to be ad hoc and inelegant, requiring 19 numerical constants whose values are unrelated and arbitrary. Although the Standard Model, as it now stands, can explain why neutrinos have masses, the specifics of neutrino mass are still unclear. It is believed that explaining neutrino mass will require an additional 7 or 8 constants, which are also arbitrary parameters.
The Higgs mechanism gives rise to the hierarchy problem if some new physics (coupled to the Higgs) is present at high energy scales. In these cases, in order for the weak scale to be much smaller than the Planck scale, severe fine tuning of the parameters is required; there are, however, other scenarios that include quantum gravity in which such fine tuning can be avoided. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles.
The model is inconsistent with the emerging Lambda-CDM model of cosmology. Contentions include the absence of an explanation in the Standard Model of particle physics for the observed amount of cold dark matter (CDM) and its contributions to dark energy, which are many orders of magnitude too large. It is also difficult to accommodate the observed predominance of matter over antimatter (matter/antimatter asymmetry). The isotropy and homogeneity of the visible universe over large distances seems to require a mechanism like cosmic inflation, which would also constitute an extension of the Standard Model.
Currently, no proposed theory of everything has been widely accepted or verified.
| Physical sciences | Physics | null |
47651 | https://en.wikipedia.org/wiki/Reproducibility | Reproducibility | Reproducibility, closely related to replicability and repeatability, is a major principle underpinning the scientific method. For the findings of a study to be reproducible means that results obtained by an experiment or an observational study or in a statistical analysis of a data set should be achieved again with a high degree of reliability when the study is replicated. There are different kinds of replication but typically replication studies involve different researchers using the same methodology. Only after one or several such successful replications should a result be recognized as scientific knowledge.
With a narrower scope, reproducibility has been defined in computational sciences as having the following quality: the results should be documented by making all data and code available in such a way that the computations can be executed again with identical results.
In recent decades, there has been a rising concern that many published scientific results fail the test of reproducibility, evoking a reproducibility or replication crisis.
History
The first to stress the importance of reproducibility in science was the Anglo-Irish chemist Robert Boyle, in England in the 17th century. Boyle's air pump was designed to generate and study vacuum, which at the time was a very controversial concept. Indeed, distinguished philosophers such as René Descartes and Thomas Hobbes denied the very possibility of vacuum existence. Historians of science Steven Shapin and Simon Schaffer, in their 1985 book Leviathan and the Air-Pump, describe the debate between Boyle and Hobbes, ostensibly over the nature of vacuum, as fundamentally an argument about how useful knowledge should be gained. Boyle, a pioneer of the experimental method, maintained that the foundations of knowledge should be constituted by experimentally produced facts, which can be made believable to a scientific community by their reproducibility. By repeating the same experiment over and over again, Boyle argued, the certainty of fact will emerge.
The air pump, which in the 17th century was a complicated and expensive apparatus to build, also led to one of the first documented disputes over the reproducibility of a particular scientific phenomenon. In the 1660s, the Dutch scientist Christiaan Huygens built his own air pump in Amsterdam, the first one outside the direct management of Boyle and his assistant at the time Robert Hooke. Huygens reported an effect he termed "anomalous suspension", in which water appeared to levitate in a glass jar inside his air pump (in fact suspended over an air bubble), but Boyle and Hooke could not replicate this phenomenon in their own pumps. As Shapin and Schaffer describe, "it became clear that unless the phenomenon could be produced in England with one of the two pumps available, then no one in England would accept the claims Huygens had made, or his competence in working the pump". Huygens was finally invited to England in 1663, and under his personal guidance Hooke was able to replicate anomalous suspension of water. Following this Huygens was elected a Foreign Member of the Royal Society. However, Shapin and Schaffer also note that "the accomplishment of replication was dependent on contingent acts of judgment. One cannot write down a formula saying when replication was or was not achieved".
The philosopher of science Karl Popper noted briefly in his famous 1934 book The Logic of Scientific Discovery that "non-reproducible single occurrences are of no significance to science". The statistician Ronald Fisher wrote in his 1935 book The Design of Experiments, which set the foundations for the modern scientific practice of hypothesis testing and statistical significance, that "we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us statistically significant results". Such assertions express a common dogma in modern science that reproducibility is a necessary condition (although not necessarily sufficient) for establishing a scientific fact, and in practice for establishing scientific authority in any field of knowledge. However, as noted above by Shapin and Schaffer, this dogma is not well formulated quantitatively (unlike statistical significance, for instance), and therefore it is not explicitly established how many times a fact must be replicated to be considered reproducible.
Terminology
Replicability and repeatability are related terms broadly or loosely synonymous with reproducibility (for example, among the general public), but they are often usefully differentiated in more precise senses, as follows.
Two major steps are naturally distinguished in connection with reproducibility of experimental or observational studies:
When the result is checked by obtaining new data, the term replicability is often used, and the new study is a replication or replicate of the original one. When the same results are obtained by analyzing the data set of the original study again with the same procedures, many authors use the term reproducibility in a narrow, technical sense coming from its use in computational research.
Repeatability is related to the repetition of the experiment within the same study by the same researchers.
Reproducibility in the original, wide sense is only acknowledged if a replication performed by an independent researcher team is successful.
The terms reproducibility and replicability sometimes appear even in the scientific literature with reversed meaning, as different research fields settled on their own definitions for the same terms.
Measures of reproducibility and repeatability
In chemistry, the terms reproducibility and repeatability are used with a specific quantitative meaning. In inter-laboratory experiments, a concentration or other quantity of a chemical substance is measured repeatedly in different laboratories to assess the variability of the measurements. Then, the standard deviation of the difference between two values obtained within the same laboratory is called repeatability. The standard deviation of the difference between two measurements from different laboratories is called reproducibility.
These measures are related to the more general concept of variance components in metrology.
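As a concrete illustration of these definitions, the following sketch (hypothetical measurements; NumPy assumed) estimates the two standard deviations from replicate measurements made in several laboratories, via a one-way variance-component decomposition:

```python
# Sketch with hypothetical data (NumPy assumed): estimate the repeatability
# and reproducibility standard deviations from replicate measurements made
# in several laboratories, via a one-way variance-component decomposition.
import numpy as np

# rows = laboratories, columns = replicate measurements of the same sample
data = np.array([
    [10.1, 10.3, 10.2],
    [10.6, 10.4, 10.5],
    [ 9.9, 10.0, 10.1],
    [10.4, 10.6, 10.5],
])
n = data.shape[1]                       # replicates per laboratory
lab_means = data.mean(axis=1)

s_r2 = data.var(axis=1, ddof=1).mean()  # pooled within-lab (repeatability) variance
# between-lab variance component; each lab mean carries s_r^2 / n of noise
s_L2 = max(lab_means.var(ddof=1) - s_r2 / n, 0.0)

s_r = np.sqrt(s_r2)                     # repeatability standard deviation
s_R = np.sqrt(s_r2 + s_L2)              # reproducibility standard deviation
print(f"s_r ~ {s_r:.3f}, s_R ~ {s_R:.3f}")
```

By construction s_R ≥ s_r, since reproducibility adds between-laboratory variation on top of within-laboratory variation.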
Reproducible research
Reproducible research method
The term reproducible research refers to the idea that scientific results should be documented in such a way that their deduction is fully transparent. This requires a detailed description of the methods used to obtain the data, and making the full dataset and the code to calculate the results easily accessible. This is the essential part of open science.
To make any research project computationally reproducible, general practice involves all data and files being clearly separated, labelled, and documented. All operations should be fully documented and automated as much as practicable, avoiding manual intervention where feasible. The workflow should be designed as a sequence of smaller steps that are combined so that the intermediate outputs from one step directly feed as inputs into the next step. Version control should be used as it lets the history of the project be easily reviewed and allows for the documenting and tracking of changes in a transparent manner.
A basic workflow for reproducible research involves data acquisition, data processing and data analysis. Data acquisition primarily consists of obtaining primary data from a primary source such as surveys, field observations, experimental research, or obtaining data from an existing source. Data processing involves the processing and review of the raw data collected in the first stage, and includes data entry, data manipulation and filtering and may be done using software. The data should be digitized and prepared for data analysis. Data may be analysed with the use of software to interpret or visualise statistics or data to produce the desired results of the research such as quantitative results including figures and tables. The use of software and automation enhances the reproducibility of research methods.
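A minimal sketch of such a three-stage workflow (file names, column names, and the filtering rule are hypothetical) might look as follows; each stage writes its intermediate output to disk so that the whole chain can be re-run and audited end to end:

```python
# Minimal sketch of an acquisition -> processing -> analysis chain (file and
# column names are hypothetical). Each stage writes its output to disk, so
# every intermediate artifact is preserved and the run can be repeated.
import csv
import statistics

def acquire(path="raw_survey.csv"):
    """Step 1: load the primary data exactly as collected."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def process(rows, out_path="processed.csv"):
    """Step 2: filter out missing values and save the cleaned table."""
    cleaned = [r for r in rows if r["value"] not in ("", "NA")]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=cleaned[0].keys())
        writer.writeheader()
        writer.writerows(cleaned)
    return cleaned

def analyze(rows):
    """Step 3: compute the summary statistics reported in the study."""
    values = [float(r["value"]) for r in rows]
    return {"n": len(values),
            "mean": statistics.mean(values),
            "sd": statistics.stdev(values)}

if __name__ == "__main__":
    print(analyze(process(acquire())))
```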
There are systems that facilitate such documentation, like the R Markdown language or the Jupyter notebook. The Open Science Framework provides a platform and useful tools to support reproducible research.
Reproducible research in practice
Psychology has seen a renewal of internal concerns about irreproducible results (see the entry on replicability crisis for empirical results on success rates of replications). Researchers showed in a 2006 study that, of 141 authors of a publication from the American Psychological Association (APA) empirical articles, 103 (73%) did not respond with their data over a six-month period. In a follow-up study published in 2015, it was found that 246 out of 394 contacted authors of papers in APA journals did not share their data upon request (62%). In a 2012 paper, it was suggested that researchers should publish data along with their works, and a dataset was released alongside as a demonstration. In 2017, an article published in Scientific Data suggested that this may not be sufficient and that the whole analysis context should be disclosed.
In economics, concerns have been raised in relation to the credibility and reliability of published research. In other sciences, reproducibility is regarded as fundamental and is often a prerequisite to research being published; in the economic sciences, however, it is not seen as a priority of the greatest importance. Most peer-reviewed economic journals do not take any substantive measures to ensure that published results are reproducible, although the top economics journals have been moving to adopt mandatory data and code archives. There are few or no incentives for researchers to share their data, and authors would have to bear the costs of compiling data into reusable forms. Economic research is often not reproducible as only a portion of journals have adequate disclosure policies for datasets and program code, and even if they do, authors frequently do not comply with them or they are not enforced by the publisher. A study of 599 articles published in 37 peer-reviewed journals revealed that while some journals have achieved significant compliance rates, a significant portion have only partially complied, or not complied at all. At the article level, the average compliance rate was 47.5%; at the journal level, the average compliance rate was 38%, ranging from 13% to 99%.
A 2018 study published in the journal PLOS ONE found that 14.4% of a sample of public health statistics researchers had shared their data or code or both.
There have been initiatives to improve reporting and hence reproducibility in the medical literature for many years, beginning with the CONSORT initiative, which is now part of a wider initiative, the EQUATOR Network.
This group has recently turned its attention to how better reporting might reduce waste in research, especially biomedical research.
Reproducible research is key to new discoveries in pharmacology. A Phase I discovery will be followed by Phase II reproductions as a drug develops towards commercial production. In recent decades Phase II success has fallen from 28% to 18%. A 2011 study found that 65% of medical studies were inconsistent when re-tested, and only 6% were completely reproducible.
Noteworthy irreproducible results
Hideyo Noguchi became famous for correctly identifying the bacterial agent of syphilis, but also claimed that he could culture this agent in his laboratory. Nobody else has been able to produce this latter result.
In March 1989, University of Utah chemists Stanley Pons and Martin Fleischmann reported the production of excess heat that could only be explained by a nuclear process ("cold fusion"). The report was astounding given the simplicity of the equipment: it was essentially an electrolysis cell containing heavy water and a palladium cathode which rapidly absorbed the deuterium produced during electrolysis. The news media reported on the experiments widely, and it was a front-page item on many newspapers around the world (see science by press conference). Over the next several months others tried to replicate the experiment, but were unsuccessful.
Nikola Tesla claimed as early as 1899 to have used a high-frequency current to light gas-filled lamps from over 25 miles (40 km) away without using wires. In 1904 he built Wardenclyffe Tower on Long Island to demonstrate means to send and receive power without connecting wires. The facility was never fully operational and was not completed due to economic problems, so no attempt to reproduce his first result was ever carried out.
Other examples in which contrary evidence has refuted the original claim:
N-rays, a hypothesized form of radiation subsequently found to be illusory
Polywater, a hypothesized polymerized form of water found to be just water with common contaminations
Stimulus-triggered acquisition of pluripotency, revealed to be the result of fraud
GFAJ-1, a bacterium that could purportedly incorporate arsenic into its DNA in place of phosphorus
MMR vaccine controversy — a study in The Lancet claiming the MMR vaccine caused autism was revealed to be fraudulent
Schön scandal — semiconductor "breakthroughs" revealed to be fraudulent
Power posing — a social psychology phenomenon that went viral after being the subject of a very popular TED talk, but was unable to be replicated in dozens of studies
| Physical sciences | Science basics | Basics and measurement |
47671 | https://en.wikipedia.org/wiki/Toxic%20heavy%20metal | Toxic heavy metal | A toxic heavy metal is a common but misleading term for a metal-like element noted for its potential toxicity. Not all heavy metals are toxic and some toxic metals are not heavy. Elements often discussed as toxic include cadmium, mercury and lead, all of which appear in the World Health Organization's list of 10 chemicals of major public concern. Other examples include chromium and nickel, thallium, bismuth, arsenic, antimony and tin.
These toxic elements are found naturally in the earth. They become concentrated as a result of human-caused activities and can enter plant and animal (including human) tissues via inhalation, diet, and manual handling. Then, they can bind to and interfere with the functioning of vital cellular components. The toxic effects of arsenic, mercury, and lead were known to the ancients, but methodical studies of the toxicity of some heavy metals appear to date from only 1868. In humans, heavy metal poisoning is generally treated by the administration of chelating agents. Some elements otherwise regarded as toxic heavy metals are essential, in small quantities, for human health.
Controversial terminology
The International Union of Pure and Applied Chemistry (IUPAC), which standardizes nomenclature, says the term "heavy metals" is both "meaningless and misleading". The IUPAC report focuses on the legal and toxicological implications of describing "heavy metals" as toxins when there is no scientific evidence to support a connection. The density implied by the adjective "heavy" has almost no biological consequences, and pure metals are rarely the biologically active substance.
This characterization has been echoed by numerous reviews. The most widely used toxicology textbook, Casarett and Doull's Toxicology, uses "toxic metal", not "heavy metals". Nevertheless, many scientific and science-related articles continue to use "heavy metal" as a term for toxic substances.
Major and minor metal toxins
Metals with multiple toxic effects include arsenic (As), beryllium (Be), cadmium (Cd), chromium (Cr), lead (Pb), mercury (Hg), and nickel (Ni).
Elements that are nutritionally essential for animal or plant life but which are considered toxic metals in high doses or other forms include cobalt (Co), copper (Cu), iron (Fe), magnesium (Mg), manganese (Mn), molybdenum (Mo), selenium (Se), and zinc (Zn).
Contamination sources
Toxic metals are found naturally in the earth, and become concentrated as a result of human activities or, in some cases, geochemical processes, such as accumulation in peat soils that are then released when drained for agriculture. Common sources include fertilisers; aging water supply infrastructure; and microplastics floating in the world's oceans. Arsenic is thought to be used in connection with coloring dyes. Rat poison used in grain and mash stores may be another source of arsenic.
The geographical extent of sources may be very large. For example, up to one-sixth of China's arable land might be affected by heavy metal contamination.
Lead is the most prevalent heavy metal contaminant. As a component of tetraethyl lead, Pb(C2H5)4, it was used extensively in gasoline during the 1930s–1970s. Lead levels in the aquatic environments of industrialised societies have been estimated to be two to three times those of pre-industrial levels. Although the use of leaded gasoline was largely phased out in North America by 1996, soils next to roads built before this time retain high lead concentrations. Lead (from lead(II) azide or lead styphnate used in firearms) gradually accumulates at firearms training grounds, contaminating the local environment and exposing range employees to a risk of lead poisoning.
Entry routes
Toxic metals enter plant, animal and human tissues via air inhalation, diet, and manual handling. Welding, galvanizing, brazing, and soldering expose workers to fumes that may be inhaled and result in metal fume fever. Motor vehicle emissions are a major source of airborne contaminants including arsenic, cadmium, cobalt, nickel, lead, antimony, vanadium, zinc, platinum, palladium and rhodium. Water sources (groundwater, lakes, streams and rivers) can be polluted by toxic metals leaching from industrial and consumer waste; acid rain can exacerbate this process by releasing toxic metals trapped in soils. Transport through soil can be facilitated by the presence of preferential flow paths (macropores) and dissolved organic compounds. Plants are exposed to toxic metals through the uptake of water; animals eat these plants; ingestion of plant- and animal-based foods is the largest source of toxic metals in humans. Absorption through skin contact, for example from contact with soil or metal-containing toys and jewelry, is another potential source of toxic metal contamination. Toxic metals can bioaccumulate in organisms as they are hard to metabolize.
Detrimental effects
Toxic metals "can bind to vital cellular components, such as structural proteins, enzymes, and nucleic acids, and interfere with their functioning". Symptoms and effects can vary according to the metal or metal compound, and the dose involved. Broadly, long-term exposure to toxic heavy metals can have carcinogenic, central and peripheral nervous system, and circulatory effects. For humans, typical presentations associated with exposure to any of the "classical" toxic heavy metals, or chromium (another toxic heavy metal) or arsenic (a metalloid), are shown in the table.
History
The toxic effects of arsenic, mercury and lead were known to the ancients but methodical studies of the overall toxicity of heavy metals appear to date from only 1868. In that year, Wanklyn and Chapman speculated on the adverse effects of the heavy metals "arsenic, lead, copper, zinc, iron and manganese" in drinking water. They noted an "absence of investigation" and were reduced to "the necessity of pleading for the collection of data". In 1884, Blake described an apparent connection between toxicity and the atomic weight of an element. The following sections provide historical thumbnails for the "classical" toxic heavy metals (arsenic, mercury and lead) and some more recent examples (chromium and cadmium).
Arsenic
Arsenic, as realgar (As4S4) and orpiment (As2S3), was known in ancient times. Strabo (64 or 63 BCE – c. 24 CE), a Greek geographer and historian, wrote that only slaves were employed in realgar and orpiment mines, since they would inevitably die from the toxic effects of the fumes given off from the ores. Arsenic-contaminated beer poisoned over 6,000 people in the Manchester area of England in 1900, and is thought to have killed at least 70 victims. Clare Boothe Luce, American ambassador to Italy from 1953 to 1956, suffered from arsenic poisoning. Its source was traced to flaking arsenic-laden paint on the ceiling of her bedroom. She may also have eaten food contaminated by arsenic in flaking ceiling paint in the embassy dining room. Ground water contaminated by arsenic, as of 2014, "is still poisoning millions of people in Asia".
Mercury
The first emperor of unified China, Qin Shi Huang, it is reported, died of ingesting mercury pills that were intended to give him eternal life. The phrase "mad as a hatter" is likely a reference to mercury poisoning among milliners (so-called "mad hatter disease"), as mercury-based compounds were once used in the manufacture of felt hats in the 18th and 19th centuries. Historically, gold amalgam (an alloy with mercury) was widely used in gilding, leading to numerous casualties among the workers. It is estimated that during the construction of Saint Isaac's Cathedral alone, 60 workers died from the gilding of the main dome. Outbreaks of methylmercury poisoning occurred in several places in Japan during the 1950s due to industrial discharges of mercury into rivers and coastal waters. The best-known instances were in Minamata and Niigata. In Minamata alone, more than 600 people died due to what became known as Minamata disease. More than 21,000 people filed claims with the Japanese government, of which almost 3000 became certified as having the disease. In 22 documented cases, pregnant women who consumed contaminated fish showed mild or no symptoms but gave birth to infants with severe developmental disabilities. Since the Industrial Revolution, mercury levels have tripled in many near-surface seawaters, especially around Iceland and Antarctica.
Lead
The adverse effects of lead were known to the ancients. In the 2nd century BC the Greek botanist Nicander described the colic and paralysis seen in lead-poisoned people. Dioscorides, a Greek physician who is thought to have lived in the 1st century CE, wrote that lead "makes the mind give way". Lead was used extensively in Roman aqueducts from about 500 BC to 300 AD. Julius Caesar's engineer, Vitruvius, reported, "water is much more wholesome from earthenware pipes than from lead pipes. For it seems to be made injurious by lead, because white lead is produced by it, and this is said to be harmful to the human body." During the Mongol period in China (1271−1368 AD), lead pollution due to silver smelting in the Yunnan region exceeded contamination levels from modern mining activities by nearly four times. In the 17th and 18th centuries, people in Devon were afflicted by a condition referred to as Devon colic; this was discovered to be due to the imbibing of lead-contaminated cider. In 2013, the World Health Organization estimated that lead poisoning resulted in 143,000 deaths, and "contribute[d] to 600,000 new cases of children with intellectual disabilities", each year. In the U.S. city of Flint, Michigan, lead contamination in drinking water has been an issue since 2014. The source of the contamination has been attributed to "corrosion in the lead and iron pipes that distribute water to city residents". In 2015, the lead concentration of drinking water in north-eastern Tasmania, Australia, reached a level over 50 times the prescribed national drinking water guidelines. The source of the contamination was attributed to "a combination of dilapidated drinking water infrastructure, including lead jointed pipelines, end-of-life polyvinyl chloride pipes and household plumbing".
Chromium
Chromium(III) compounds and chromium metal are not considered a health hazard, while the toxicity and carcinogenic properties of chromium(VI) have been known since at least the late 19th century. In 1890, Newman described the elevated cancer risk of workers in a chromate dye company. Chromate-induced dermatitis was reported in aircraft workers during World War II. In 1963, an outbreak of dermatitis, ranging from erythema to exudative eczema, occurred amongst 60 automobile factory workers in England. The workers had been wet-sanding chromate-based primer paint that had been applied to car bodies. In Australia, chromium was released from the Newcastle Orica explosives plant on August 8, 2011. Up to 20 workers at the plant were exposed, as were 70 nearby homes in Stockton. The town was notified only three days after the release, and the accident sparked a major public controversy, with Orica criticised for playing down the extent and possible risks of the leak, and the state government attacked for its slow response to the incident.
Cadmium
Cadmium exposure is a phenomenon of the early 20th century and onwards. In Japan in 1910, the Mitsui Mining & Smelting Company began discharging cadmium into the Jinzū River as a byproduct of mining operations. Residents in the surrounding area subsequently consumed rice grown in cadmium-contaminated irrigation water. They experienced softening of the bones and kidney failure. The origin of these symptoms was not clear; possibilities raised at the time included "a regional or bacterial disease or lead poisoning". In 1955, cadmium was identified as the likely cause, and in 1961 the source was directly linked to mining operations in the area. In February 2010, cadmium was found in Walmart-exclusive Miley Cyrus jewelry. Walmart continued to sell the jewelry until May, when covert testing organised by the Associated Press confirmed the original results. In June 2010, cadmium was detected in the paint used on promotional drinking glasses for the movie Shrek Forever After, sold by McDonald's Restaurants, triggering a recall of 12 million glasses.
Remediation
Human
In humans, heavy metal poisoning is generally treated by the administration of chelating agents. These are chemical compounds, such as CaNa2EDTA (calcium disodium ethylenediaminetetraacetate), that convert heavy metals to chemically inert forms that can be excreted without further interaction with the body. Chelates are not without side effects and can also remove beneficial metals from the body. Vitamin and mineral supplements are sometimes co-administered for this reason.
Environment
Soils contaminated by heavy metals can be remediated by one or more of the following technologies: isolation; immobilization; toxicity reduction; physical separation; or extraction. Isolation involves the use of caps, membranes or below-ground barriers in an attempt to quarantine the contaminated soil. Immobilization aims to alter the properties of the soil so as to hinder the mobility of the heavy contaminants. Toxicity reduction attempts to oxidise or reduce the toxic heavy metal ions, via chemical or biological means into less toxic or mobile forms. Physical separation involves the removal of the contaminated soil and the separation of the metal contaminants by mechanical means. Extraction is an on or off-site process that uses chemicals, high-temperature volatization, or electrolysis to extract contaminants from soils. The process or processes used will vary according to contaminant and the characteristics of the site.
Benefits
Some elements otherwise regarded as toxic heavy metals are essential, in small quantities, for human health. These elements include vanadium, manganese, iron, cobalt, copper, zinc, selenium, strontium and molybdenum. A deficiency of these essential metals may increase susceptibility to heavy metal poisoning.
Selenium is the most toxic of the heavy metals that are essential for mammals. Selenium is normally excreted and only becomes toxic when the intake exceeds the excretory capacity.
| Physical sciences | Periodic table | Chemistry |
47687 | https://en.wikipedia.org/wiki/Cosmic%20ray | Cosmic ray | Cosmic rays or astroparticles are high-energy particles or clusters of particles (primarily represented by protons or atomic nuclei) that move through space at nearly the speed of light. They originate from the Sun, from outside of the Solar System in our own galaxy, and from distant galaxies. Upon impact with Earth's atmosphere, cosmic rays produce showers of secondary particles, some of which reach the surface, although the bulk are deflected off into space by the magnetosphere or the heliosphere.
Cosmic rays were discovered by Victor Hess in 1912 in balloon experiments, for which he was awarded the 1936 Nobel Prize in Physics.
Direct measurement of cosmic rays, especially at lower energies, has been possible since the launch of the first satellites in the late 1950s. Particle detectors similar to those used in nuclear and high-energy physics are used on satellites and space probes for research into cosmic rays.
Data from the Fermi Space Telescope (2013) have been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernova explosions of stars. Based on observations of neutrinos and gamma rays from blazar TXS 0506+056 in 2018, active galactic nuclei also appear to produce cosmic rays.
Etymology
The term ray (as in optical ray) seems to have arisen from an initial belief, due to their penetrating power, that cosmic rays were mostly electromagnetic radiation. Nevertheless, following wider recognition of cosmic rays as being various high-energy particles with intrinsic mass, the term "rays" remained consistent with then-known particles such as cathode rays, canal rays, alpha rays, and beta rays. Meanwhile, "cosmic" ray photons, which are quanta of electromagnetic radiation (and so have no intrinsic mass), are known by their common names, such as gamma rays or X-rays, depending on their photon energy.
Composition
Of primary cosmic rays, which originate outside of Earth's atmosphere, about 99% are the bare nuclei of common atoms (stripped of their electron shells), and about 1% are solitary electrons (that is, one type of beta particle). Of the nuclei, about 90% are simple protons (i.e., hydrogen nuclei); 9% are alpha particles, identical to helium nuclei; and 1% are the nuclei of heavier elements, called HZE ions. These fractions vary significantly over the energy range of cosmic rays. A very small fraction are stable particles of antimatter, such as positrons or antiprotons. The precise nature of this remaining fraction is an area of active research. As of 2019, an active search from Earth orbit for anti-alpha particles had found no unequivocal evidence.
Upon striking the atmosphere, cosmic rays violently burst atoms into other bits of matter, producing large amounts of pions and muons (produced from the decay of charged pions, which have a short half-life) as well as neutrinos. The neutron composition of the particle cascade increases at lower elevations, reaching between 40% and 80% of the radiation at aircraft altitudes.
Of secondary cosmic rays, the charged pions produced by primary cosmic rays in the atmosphere swiftly decay, emitting muons. Unlike pions, these muons do not interact strongly with matter, and can travel through the atmosphere to penetrate even below ground level. The rate of muons arriving at the surface of the Earth is such that about one per second passes through a volume the size of a person's head. Together with natural local radioactivity, these muons are a significant cause of the ground level atmospheric ionisation that first attracted the attention of scientists, leading to the eventual discovery of the primary cosmic rays arriving from beyond our atmosphere.
Energy
Cosmic rays attract great interest practically, due to the damage they inflict on microelectronics and life outside the protection of an atmosphere and magnetic field, and scientifically, because the energies of the most energetic ultra-high-energy cosmic rays have been observed to approach 3 × 10²⁰ eV (slightly greater than 21 million times the design energy of particles accelerated by the Large Hadron Collider, 14 TeV). One can show that such enormous energies might be achieved by means of the centrifugal mechanism of acceleration in active galactic nuclei. At about 50 J, the highest-energy ultra-high-energy cosmic rays (such as the Oh-My-God particle recorded in 1991) have energies comparable to the kinetic energy of a baseball. As a result of these discoveries, there has been interest in investigating cosmic rays of even greater energies. Most cosmic rays, however, do not have such extreme energies; the energy distribution of cosmic rays peaks at about 0.3 GeV.
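A quick arithmetic check of these comparisons (using the commonly quoted figure of about 3 × 10²⁰ eV for the most energetic events and the 14 TeV LHC design energy):

```python
# Quick arithmetic check of the comparisons above, using the commonly
# quoted ~3e20 eV figure and the 14 TeV LHC design energy (illustrative).
eV = 1.602176634e-19            # joules per electronvolt

E_uhecr = 3e20                  # most energetic cosmic rays observed, eV
E_lhc = 14e12                   # LHC design collision energy, eV

print(E_uhecr / E_lhc)          # ~2.1e7, i.e. roughly 21 million
print(E_uhecr * eV)             # ~48 J
print(0.5 * 0.145 * 25**2)      # ~45 J: a 145 g baseball at ~90 km/h (25 m/s)
```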
History
After the discovery of radioactivity by Henri Becquerel in 1896, it was generally believed that atmospheric electricity, ionization of the air, was caused only by radiation from radioactive elements in the ground or the radioactive gases or isotopes of radon they produce. Measurements of increasing ionization rates at increasing heights above the ground during the decade from 1900 to 1910 could be explained as due to absorption of the ionizing radiation by the intervening air.
Discovery
In 1909, Theodor Wulf developed an electrometer, a device to measure the rate of ion production inside a hermetically sealed container, and used it to show higher levels of radiation at the top of the Eiffel Tower than at its base. However, his paper published in Physikalische Zeitschrift was not widely accepted. In 1911, Domenico Pacini observed simultaneous variations of the rate of ionization over a lake, over the sea, and at a depth of 3 metres from the surface. Pacini concluded from the decrease of radioactivity underwater that a certain part of the ionization must be due to sources other than the radioactivity of the Earth.
In 1912, Victor Hess carried three enhanced-accuracy Wulf electrometers to an altitude of 5,300 metres in a free balloon flight. He found the ionization rate increased to twice the rate at ground level. Hess ruled out the Sun as the radiation's source by making a balloon ascent during a near-total eclipse. With the moon blocking much of the Sun's visible radiation, Hess still measured rising radiation at rising altitudes. He concluded that "The results of the observations seem most likely to be explained by the assumption that radiation of very high penetrating power enters from above into our atmosphere." In 1913–1914, Werner Kolhörster confirmed Victor Hess's earlier results by measuring the increased ionization rate at an altitude of 9 km. Hess received the Nobel Prize in Physics in 1936 for his discovery.
Identification
Bruno Rossi wrote in 1964:
In the late 1920s and early 1930s the technique of self-recording electroscopes carried by balloons into the highest layers of the atmosphere or sunk to great depths under water was brought to an unprecedented degree of perfection by the German physicist Erich Regener and his group. To these scientists we owe some of the most accurate measurements ever made of cosmic-ray ionization as a function of altitude and depth.
Ernest Rutherford stated in 1931 that "thanks to the fine experiments of Professor Millikan and the even more far-reaching experiments of Professor Regener, we have now got for the first time, a curve of absorption of these radiations in water which we may safely rely upon".
In the 1920s, the term cosmic ray was coined by Robert Millikan who made measurements of ionization due to cosmic rays from deep under water to high altitudes and around the globe. Millikan believed that his measurements proved that the primary cosmic rays were gamma rays; i.e., energetic photons. And he proposed a theory that they were produced in interstellar space as by-products of the fusion of hydrogen atoms into the heavier elements, and that secondary electrons were produced in the atmosphere by Compton scattering of gamma rays. In 1927, while sailing from Java to the Netherlands, Jacob Clay found evidence, later confirmed in many experiments, that cosmic ray intensity increases from the tropics to mid-latitudes, which indicated that the primary cosmic rays are deflected by the geomagnetic field and must therefore be charged particles, not photons. In 1929, Bothe and Kolhörster discovered charged cosmic-ray particles that could penetrate 4.1 cm of gold. Charged particles of such high energy could not possibly be produced by photons from Millikan's proposed interstellar fusion process.
In 1930, Bruno Rossi predicted a difference between the intensities of cosmic rays arriving from the east and the west that depends upon the charge of the primary particles—the so-called "east–west effect". Three independent experiments found that the intensity is, in fact, greater from the west, proving that most primaries are positive. During the years from 1930 to 1945, a wide variety of investigations confirmed that the primary cosmic rays are mostly protons, and the secondary radiation produced in the atmosphere is primarily electrons, photons and muons. In 1948, observations with nuclear emulsions carried by balloons to near the top of the atmosphere showed that approximately 10% of the primaries are helium nuclei (alpha particles) and 1% are nuclei of heavier elements such as carbon, iron, and lead.
During a test of his equipment for measuring the east–west effect, Rossi observed that the rate of near-simultaneous discharges of two widely separated Geiger counters was larger than the expected accidental rate. In his report on the experiment, Rossi wrote "... it seems that once in a while the recording equipment is struck by very extensive showers of particles, which causes coincidences between the counters, even placed at large distances from one another." In 1937, Pierre Auger, unaware of Rossi's earlier report, detected the same phenomenon and investigated it in some detail. He concluded that high-energy primary cosmic-ray particles interact with air nuclei high in the atmosphere, initiating a cascade of secondary interactions that ultimately yield a shower of electrons, and photons that reach ground level.
Soviet physicist Sergei Vernov was the first to use radiosondes to perform cosmic ray readings with an instrument carried to high altitude by a balloon. On 1 April 1935, he took measurements at heights up to 13.6 kilometres using a pair of Geiger counters in an anti-coincidence circuit to avoid counting secondary ray showers.
Homi J. Bhabha derived an expression for the probability of scattering positrons by electrons, a process now known as Bhabha scattering. His classic paper, jointly with Walter Heitler, published in 1937 described how primary cosmic rays from space interact with the upper atmosphere to produce particles observed at the ground level. Bhabha and Heitler explained the cosmic ray shower formation by the cascade production of gamma rays and positive and negative electron pairs.
Energy distribution
Measurements of the energy and arrival directions of the ultra-high-energy primary cosmic rays by the techniques of density sampling and fast timing of extensive air showers were first carried out in 1954 by members of the Rossi Cosmic Ray Group at the Massachusetts Institute of Technology. The experiment employed eleven scintillation detectors arranged within a circle 460 metres in diameter on the grounds of the Agassiz Station of the Harvard College Observatory. From that work, and from many other experiments carried out all over the world, the energy spectrum of the primary cosmic rays is now known to extend beyond 10²⁰ eV. A huge air shower experiment called the Auger Project is currently operated at a site on the Pampas of Argentina by an international consortium of physicists. The project was first led by James Cronin, winner of the 1980 Nobel Prize in Physics from the University of Chicago, and Alan Watson of the University of Leeds, and later by scientists of the international Pierre Auger Collaboration. Their aim is to explore the properties and arrival directions of the very highest-energy primary cosmic rays. The results are expected to have important implications for particle physics and cosmology, due to a theoretical Greisen–Zatsepin–Kuzmin limit to the energies of cosmic rays from long distances (about 160 million light years) which occurs above 10²⁰ eV because of interactions with the remnant photons from the Big Bang origin of the universe. Currently the Pierre Auger Observatory is undergoing an upgrade to improve its accuracy and find evidence for the yet unconfirmed origin of the most energetic cosmic rays.
High-energy gamma rays (photons above 50 MeV) were finally discovered in the primary cosmic radiation by an MIT experiment carried on the OSO-3 satellite in 1967. Components of both galactic and extra-galactic origins were separately identified at intensities much less than 1% of the primary charged particles. Since then, numerous satellite gamma-ray observatories have mapped the gamma-ray sky. The most recent is the Fermi Observatory, which has produced a map showing a narrow band of gamma ray intensity produced in discrete and diffuse sources in our galaxy, and numerous point-like extra-galactic sources distributed over the celestial sphere.
Modulation
The solar cycle causes variations in the magnetic field of the solar wind through which cosmic rays propagate to Earth.
This results in a modulation of the arriving fluxes at lower energies, as detected indirectly by the globally distributed neutron monitor network.
Sources
Early speculation on the sources of cosmic rays included a 1934 proposal by Baade and Zwicky suggesting cosmic rays originated from supernovae. A 1948 proposal by Horace W. Babcock suggested that magnetic variable stars could be a source of cosmic rays. Subsequently, Sekido et al. (1951) identified the Crab Nebula as a source of cosmic rays. Since then, a wide variety of potential sources for cosmic rays began to surface, including supernovae, active galactic nuclei, quasars, and gamma-ray bursts.
Later experiments have helped to identify the sources of cosmic rays with greater certainty. In 2009, a paper presented at the International Cosmic Ray Conference by scientists at the Pierre Auger Observatory in Argentina showed ultra-high energy cosmic rays originating from a location in the sky very close to the radio galaxy Centaurus A, although the authors specifically stated that further investigation would be required to confirm Centaurus A as a source of cosmic rays. However, no correlation was found between the incidence of gamma-ray bursts and cosmic rays, causing the authors to set upper limits as low as 3.4 × 10⁻⁶ erg·cm⁻² on the flux of cosmic rays from gamma-ray bursts.
In 2009, supernovae were said to have been "pinned down" as a source of cosmic rays, a discovery made by a group using data from the Very Large Telescope. This analysis, however, was disputed in 2011 with data from PAMELA, which revealed that "spectral shapes of [hydrogen and helium nuclei] are different and cannot be described well by a single power law", suggesting a more complex process of cosmic ray formation. In February 2013, though, research analyzing data from Fermi revealed through an observation of neutral pion decay that supernovae were indeed a source of cosmic rays, with each explosion producing roughly 3 × 10⁴² – 3 × 10⁴³ J of cosmic rays.
Supernovae do not produce all cosmic rays, however, and the proportion of cosmic rays that they do produce is a question which cannot be answered without deeper investigation. To explain the actual process in supernovae and active galactic nuclei that accelerates the stripped atoms, physicists use shock front acceleration as a plausibility argument.
In 2017, the Pierre Auger Collaboration published the observation of a weak anisotropy in the arrival directions of the highest energy cosmic rays. Since the Galactic Center is in the deficit region, this anisotropy can be interpreted as evidence for the extragalactic origin of cosmic rays at the highest energies. This implies that there must be a transition energy from galactic to extragalactic sources, and there may be different types of cosmic-ray sources contributing to different energy ranges.
Types
Cosmic rays can be divided into two types:
galactic cosmic rays (GCR) and extragalactic cosmic rays, i.e., high-energy particles originating outside the solar system, and
solar energetic particles, high-energy particles (predominantly protons) emitted by the sun, primarily in solar eruptions.
However, the term "cosmic ray" is often used to refer to only the extrasolar flux.
Cosmic rays originate as primary cosmic rays, which are those originally produced in various astrophysical processes. Primary cosmic rays are composed mainly of protons and alpha particles (99%), with a small amount of heavier nuclei (≈1%) and an extremely minute proportion of positrons and antiprotons. Secondary cosmic rays, produced when primary cosmic rays interact with and decay in an atmosphere, include photons, hadrons, and leptons such as electrons, positrons, muons, and pions. The latter three of these were first detected in cosmic rays.
Primary cosmic rays
Primary cosmic rays mostly originate from outside the Solar System and sometimes even outside the Milky Way. When they interact with Earth's atmosphere, they are converted to secondary particles. The mass ratio of helium to hydrogen nuclei, 28%, is similar to the primordial elemental abundance ratio of these elements, 24%. The remaining fraction is made up of the other heavier nuclei that are typical nucleosynthesis end products, primarily lithium, beryllium, and boron. These nuclei appear in cosmic rays in far greater abundance (≈1%) than in the solar atmosphere, where they are many orders of magnitude less abundant (by number) than helium. Cosmic rays composed of charged nuclei heavier than helium are called HZE ions. Due to the high charge and heavy nature of HZE ions, their contribution to an astronaut's radiation dose in space is significant even though they are relatively scarce.
This abundance difference is a result of the way in which secondary cosmic rays are formed. Carbon and oxygen nuclei collide with interstellar matter to form lithium, beryllium, and boron, an example of cosmic ray spallation. Spallation is also responsible for the abundances of scandium, titanium, vanadium, and manganese ions in cosmic rays produced by collisions of iron and nickel nuclei with interstellar matter.
At high energies the composition changes and heavier nuclei have larger abundances in some energy ranges. Current experiments aim at more accurate measurements of the composition at high energies.
Primary cosmic ray antimatter
Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe. Rather, they appear to consist of only these two elementary particles, newly made in energetic processes.
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of the positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, at an energy of a few hundred GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak at about 10 GeV. These results have been interpreted as possibly arising from positron production in annihilation events of massive dark matter particles.
Cosmic ray antiprotons also have a much higher average energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.
There is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an experimental upper limit on the antihelium-to-helium flux ratio.
Secondary cosmic rays
When cosmic rays enter the Earth's atmosphere, they collide with atoms and molecules, mainly oxygen and nitrogen. The interaction produces a cascade of lighter particles, a so-called air shower of secondary radiation that rains down, including x-rays, protons, alpha particles, pions, muons, electrons, neutrinos, and neutrons. All of the secondary particles produced by the collision continue onward on paths within about one degree of the primary particle's original path.
Typical particles produced in such collisions are neutrons and charged mesons such as positive or negative pions and kaons. Some of these subsequently decay into muons and neutrinos, which are able to reach the surface of the Earth. Some high-energy muons even penetrate for some distance into shallow mines, and most neutrinos traverse the Earth without further interaction. Others decay into photons, subsequently producing electromagnetic cascades. Hence, next to photons, electrons and positrons usually dominate in air showers. These particles as well as muons can be easily detected by many types of particle detectors, such as cloud chambers, bubble chambers, water-Cherenkov, or scintillation detectors. The observation of a secondary shower of particles in multiple detectors at the same time is an indication that all of the particles came from that event.
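The coincidence criterion described above (many detector stations firing within a narrow time window) can be illustrated with a toy search. The station names, timestamps, window width, and station threshold below are all invented for the example; real arrays additionally require the relative arrival times to be consistent with a shower front.
<syntaxhighlight lang="python">
def find_coincidences(hits, window_ns=500, min_stations=3):
    """Group detector hits into candidate air-shower events.

    hits: list of (timestamp_ns, station_id) tuples.
    A candidate event is at least min_stations distinct stations
    firing within window_ns of the cluster's first hit."""
    hits = sorted(hits)
    events = []
    i = 0
    while i < len(hits):
        t0, _ = hits[i]
        # Collect all hits within the window of the first hit.
        j = i
        while j < len(hits) and hits[j][0] - t0 <= window_ns:
            j += 1
        stations = {station for _, station in hits[i:j]}
        if len(stations) >= min_stations:
            events.append(hits[i:j])
            i = j  # consume the whole cluster
        else:
            i += 1
    return events

# Invented example data: three stations firing within 300 ns (a shower
# candidate) plus two isolated single-station hits (background muons).
hits = [(1000, "A"), (1150, "B"), (1300, "C"),
        (90000, "B"), (250000, "D")]
print(find_coincidences(hits))  # one event containing stations A, B, C
</syntaxhighlight>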
Cosmic rays impacting other planetary bodies in the Solar System are detected indirectly by observing high-energy gamma-ray emissions with gamma-ray telescopes. These are distinguished from radioactive decay processes by their higher energies, above about 10 MeV.
Cosmic-ray flux
The flux of incoming cosmic rays at the upper atmosphere is dependent on the solar wind, the Earth's magnetic field, and the energy of the cosmic rays. At distances of ≈94 AU from the Sun, the solar wind undergoes a transition, called the termination shock, from supersonic to subsonic speeds. The region between the termination shock and the heliopause acts as a barrier to cosmic rays, decreasing the flux at lower energies (≤ 1 GeV) by about 90%. However, the strength of the solar wind is not constant, and hence it has been observed that cosmic ray flux is inversely correlated with solar activity.
In addition, the Earth's magnetic field acts to deflect cosmic rays from its surface, giving rise to the observation that the flux is apparently dependent on latitude, longitude, and azimuth angle.
The combined effects of all of the factors mentioned contribute to the flux of cosmic rays at Earth's surface. The following table gives the approximate rates at which particles of various energies reach the planet, as inferred from the lower-energy radiation reaching the ground.
{| class="wikitable"
|+ Relative particle energies and rates of cosmic rays
|-
!scope="col"| Particle energy (eV)
!scope="col"| Particle rate (m⁻²·s⁻¹)
|-
!scope="row"| 10⁹ (GeV)
| ≈10⁴
|-
!scope="row"| 10¹² (TeV)
| ≈1
|-
!scope="row"| 10¹⁶ (10 PeV)
| ≈10⁻⁷ (a few times a year per m²)
|-
!scope="row"| 10²⁰ (100 EeV)
| ≈3×10⁻¹⁶ (once a century per km²)
|}
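The rates in the table are quoted in whichever unit is most natural at each energy; converting them all to a common per-square-metre-per-second figure is simple unit arithmetic, sketched below using only the rough rates given above.
<syntaxhighlight lang="python">
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 s

# Rough integral rates quoted above for cosmic rays above a given energy.
rates = {
    "1 GeV":   ("per m^2 per second", 1.0e4),    # ~10^4 m^-2 s^-1
    "1 TeV":   ("per m^2 per second", 1.0),      # ~1 m^-2 s^-1
    "10 PeV":  ("per m^2 per year", 3.0),        # a few times a year
    "100 EeV": ("per km^2 per century", 1.0),    # once a century
}

def to_per_m2_per_s(unit, value):
    """Convert the human-friendly rates above to m^-2 s^-1."""
    if unit == "per m^2 per second":
        return value
    if unit == "per m^2 per year":
        return value / SECONDS_PER_YEAR
    if unit == "per km^2 per century":
        return value / (1.0e6 * 100 * SECONDS_PER_YEAR)
    raise ValueError(unit)

for energy, (unit, value) in rates.items():
    print(f">{energy}: {to_per_m2_per_s(unit, value):.2e} m^-2 s^-1")
</syntaxhighlight>
This kind of conversion makes clear why detection above ~1 PeV must rely on huge ground arrays rather than direct detection: the highest-energy particles arrive about twenty orders of magnitude less often than GeV particles.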
In the past, it was believed that the cosmic ray flux remained fairly constant over time. However, recent research suggests one-and-a-half- to two-fold millennium-timescale changes in the cosmic ray flux in the past forty thousand years.
The energy density of cosmic rays in interstellar space is comparable to that of other deep-space energy fields: it averages about one electron-volt per cubic centimetre of interstellar space (≈1 eV/cm3), which is comparable to the energy density of visible starlight at 0.3 eV/cm3, the galactic magnetic field energy density (assumed 3 microgauss) at ≈0.25 eV/cm3, and the cosmic microwave background (CMB) radiation energy density at ≈0.25 eV/cm3.
Detection methods
There are two main classes of detection methods. First, the direct detection of primary cosmic rays in space or at high altitude by balloon-borne instruments. Second, the indirect detection of secondary particles, i.e., the extensive air showers produced at higher energies. While there have been proposals and prototypes for space- and balloon-borne detection of air showers, currently operating experiments for high-energy cosmic rays are ground based. Generally, direct detection is more accurate than indirect detection. However, the flux of cosmic rays decreases with energy, which hampers direct detection for the energy range above 1 PeV. Both direct and indirect detection are realized by several techniques.
Direct detection
Direct detection is possible by all kinds of particle detectors at the ISS, on satellites, or high-altitude balloons. However, there are constraints in weight and size limiting the choices of detectors.
An example for the direct detection technique is a method based on nuclear tracks developed by Robert Fleischer, P. Buford Price, and Robert M. Walker for use in high-altitude balloons. In this method, sheets of clear plastic, like 0.25 mm Lexan polycarbonate, are stacked together and exposed directly to cosmic rays in space or high altitude. The nuclear charge causes chemical bond breaking or ionization in the plastic. At the top of the plastic stack the ionization is less, due to the high cosmic ray speed. As the cosmic ray speed decreases due to deceleration in the stack, the ionization increases along the path. The resulting plastic sheets are "etched" or slowly dissolved in warm caustic sodium hydroxide solution, that removes the surface material at a slow, known rate. The caustic sodium hydroxide dissolves the plastic at a faster rate along the path of the ionized plastic. The net result is a conical etch pit in the plastic. The etch pits are measured under a high-power microscope (typically 1600× oil-immersion), and the etch rate is plotted as a function of the depth in the stacked plastic.
This technique yields a unique curve for each atomic nucleus from 1 to 92, allowing identification of both the charge and energy of the cosmic ray that traverses the plastic stack. The more extensive the ionization along the path, the higher the charge. In addition to its uses for cosmic-ray detection, the technique is also used to detect nuclei created as products of nuclear fission.
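Charge identification from etch pits works because ionization along the track scales roughly as Z²/β², and therefore rises as the nucleus decelerates in the stack. The following toy model illustrates only that scaling; the calibration constant, slowing-down profile, and depth grid are invented, and this is not the actual Fleischer–Price–Walker analysis.
<syntaxhighlight lang="python">
def ionization(z, beta):
    """Relative ionization density, ~ Z^2 / beta^2 (toy model)."""
    return z ** 2 / beta ** 2

def etch_rate_profile(z, beta0, depths, slowdown=0.002):
    """Toy etch rate vs. depth in the plastic stack.

    beta0: entry speed (v/c); slowdown: invented per-mm velocity loss.
    The etch rate is taken as proportional to ionization, with an
    invented calibration constant k."""
    k = 1.0e-3
    profile = []
    for d in depths:
        beta = max(beta0 - slowdown * d, 0.05)  # nucleus slows with depth
        profile.append((d, k * ionization(z, beta)))
    return profile

depths = range(0, 101, 20)  # depth in mm through the stacked sheets
for z, name in [(2, "He"), (26, "Fe")]:
    prof = etch_rate_profile(z, beta0=0.9, depths=depths)
    print(name, [f"{rate:.2f}" for _, rate in prof])
# The iron curve lies far above the helium curve at every depth and
# rises faster, which is why each Z yields a distinguishable curve.
</syntaxhighlight>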
Indirect detection
There are several ground-based methods of detecting cosmic rays currently in use, which can be divided in two main categories: the detection of secondary particles forming extensive air showers (EAS) by various types of particle detectors, and the detection of electromagnetic radiation emitted by EAS in the atmosphere.
Extensive air shower arrays made of particle detectors measure the charged particles which pass through them. EAS arrays can observe a broad area of the sky and can be active more than 90% of the time. However, they are less able to segregate background effects from cosmic rays than air Cherenkov telescopes. Most state-of-the-art EAS arrays employ plastic scintillators. Water (liquid or frozen) is also used as a detection medium through which particles pass, producing Cherenkov radiation that makes them detectable. Therefore, several arrays use water/ice-Cherenkov detectors as an alternative or in addition to scintillators.
By the combination of several detectors, some EAS arrays have the capability to distinguish muons from lighter secondary particles (photons, electrons, positrons). The fraction of muons among the secondary particles is one traditional way to estimate the mass composition of the primary cosmic rays.
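As a toy version of this muon-fraction estimate, one can compare the counted muons with all charged particles in an event against threshold values. The cut values below are invented placeholders; real composition studies compare measured fractions against detailed air-shower simulations.
<syntaxhighlight lang="python">
def muon_fraction(n_muons, n_charged):
    """Fraction of muons among detected charged secondaries."""
    return n_muons / n_charged

def rough_primary_class(frac, light_max=0.10, heavy_min=0.20):
    """Crude classifier: at fixed energy, heavier primaries produce
    more muon-rich showers. The cut values are invented placeholders."""
    if frac < light_max:
        return "light primary (e.g., proton)"
    if frac > heavy_min:
        return "heavy primary (e.g., iron)"
    return "intermediate / ambiguous"

# Invented event counts for two showers of similar size.
for n_mu, n_ch in [(40, 500), (150, 500)]:
    f = muon_fraction(n_mu, n_ch)
    print(f"muon fraction {f:.2f} -> {rough_primary_class(f)}")
</syntaxhighlight>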
A historic method of secondary particle detection, still used for demonstration purposes, involves the use of cloud chambers to detect the secondary muons created when a pion decays. Cloud chambers in particular can be built from widely available materials and can be constructed even in a high-school laboratory. Bubble chambers can also be used to detect cosmic ray particles.
More recently, the CMOS devices in pervasive smartphone cameras have been proposed as a practical distributed network to detect air showers from ultra-high-energy cosmic rays. The first app to exploit this proposition was the CRAYFIS (Cosmic RAYs Found in Smartphones) experiment. In 2017, the CREDO (Cosmic-Ray Extremely Distributed Observatory) Collaboration released the first version of its completely open source app for Android devices. Since then the collaboration has attracted the interest and support of many scientific institutions, educational institutions, and members of the public around the world. Future research has to show in what aspects this new technique can compete with dedicated EAS arrays.
The first detection method in the second category is called the air Cherenkov telescope, designed to detect low-energy (<200 GeV) cosmic rays by analyzing their Cherenkov radiation: light emitted when shower particles travel faster than the speed of light in their medium, the atmosphere. While these telescopes are extremely good at distinguishing between background radiation and that of cosmic-ray origin, they can only function well on clear, moonless nights, have very small fields of view, and are only active for a few percent of the time.
A second method detects the light from nitrogen fluorescence, caused by the excitation of atmospheric nitrogen by the particles of an air shower. This method is the most accurate for cosmic rays at the highest energies, in particular when combined with EAS arrays of particle detectors. Like the detection of Cherenkov light, this method is restricted to clear nights.
Another method detects radio waves emitted by air showers. This technique has a high duty cycle, similar to that of particle detectors. Its accuracy has improved in recent years, as shown by various prototype experiments, and it may become an alternative to the detection of atmospheric Cherenkov light and fluorescence light, at least at high energies.
Effects
Changes in atmospheric chemistry
Cosmic rays ionize nitrogen and oxygen molecules in the atmosphere, which leads to a number of chemical reactions. Cosmic rays are also responsible for the continuous production of a number of unstable isotopes, such as carbon-14, in the Earth's atmosphere through the reaction:

n + ¹⁴N → ¹⁴C + p
Cosmic rays kept the level of carbon-14 in the atmosphere roughly constant (70 tons) for at least the past 100,000 years, until the beginning of above-ground nuclear weapons testing in the early 1950s. This fact is used in radiocarbon dating.
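Radiocarbon dating rests on simple exponential-decay arithmetic. The sketch below uses the standard carbon-14 half-life of 5,730 years; the 25% sample ratio is an invented example.
<syntaxhighlight lang="python">
import math

T_HALF = 5730.0  # carbon-14 half-life in years

def radiocarbon_age(ratio):
    """Age in years from the remaining 14C fraction N/N0.

    From N = N0 * exp(-lambda * t) with lambda = ln(2) / T_HALF:
        t = -T_HALF * ln(ratio) / ln(2)"""
    return -T_HALF * math.log(ratio) / math.log(2)

# Invented example: a sample retaining 25% of its original 14C
# is exactly two half-lives old.
print(radiocarbon_age(0.25))  # -> 11460.0 years
</syntaxhighlight>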
[Table not reproduced: reaction products of primary cosmic rays, with each radioisotope's half-life and production reaction.]
Role in ambient radiation
Cosmic rays constitute a fraction of the annual radiation exposure of human beings on the Earth, averaging 0.39 mSv out of a total of 3 mSv per year (13% of total background) for the Earth's population. However, the background radiation from cosmic rays increases with altitude, from 0.3 mSv per year for sea-level areas to 1.0 mSv per year for higher-altitude cities, raising cosmic radiation exposure to a quarter of total background radiation exposure for the populations of those cities. Airline crews flying long-distance high-altitude routes can be exposed to 2.2 mSv of extra radiation each year due to cosmic rays, nearly doubling their total exposure to ionizing radiation.
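The dose figures quoted above reduce to straightforward arithmetic, reproduced below with the same numbers; the percentages printed match the fractions stated in the text.
<syntaxhighlight lang="python">
# Annual effective doses quoted above, in millisieverts (mSv).
total_background = 3.0    # average total background dose
cosmic_average   = 0.39   # average cosmic component
cosmic_sea_level = 0.3    # cosmic component at sea level
cosmic_high_alt  = 1.0    # cosmic component in higher-altitude cities
aircrew_extra    = 2.2    # extra annual dose for long-haul aircrew

print(f"Average: cosmic rays are {cosmic_average / total_background:.0%} of background")

# Replacing the sea-level cosmic component with the high-altitude one:
high_alt_total = total_background - cosmic_sea_level + cosmic_high_alt
print(f"High altitude: {cosmic_high_alt / high_alt_total:.0%} of background")

print(f"Aircrew: +{aircrew_extra} mSv on top of {total_background} mSv background")
</syntaxhighlight>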
Figures are for the time before the Fukushima Daiichi nuclear disaster. Human-made values by UNSCEAR are from the Japanese National Institute of Radiological Sciences, which summarized the UNSCEAR data.
Effect on electronics
Cosmic rays have sufficient energy to alter the states of circuit components in electronic integrated circuits, causing transient errors to occur (such as corrupted data in electronic memory devices or incorrect performance of CPUs), often referred to as "soft errors". This has been a problem in electronics at extremely high altitude, such as in satellites, but with transistors becoming smaller and smaller, it is becoming an increasing concern in ground-level electronics as well. Studies by IBM in the 1990s suggest that computers typically experience about one cosmic-ray-induced error per 256 megabytes of RAM per month. To alleviate this problem, the Intel Corporation has proposed a cosmic ray detector that could be integrated into future high-density microprocessors, allowing the processor to repeat the last command following a cosmic-ray event. ECC memory is used to protect data against data corruption caused by cosmic rays.
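The IBM estimate quoted above scales linearly with memory size and exposure time, which the sketch below makes explicit; the 16 GB machine is hypothetical, and the 1990s per-capacity rate should not be taken as representative of modern DRAM.
<syntaxhighlight lang="python">
ERRORS_PER_256MB_PER_MONTH = 1.0  # IBM 1990s estimate quoted above

def expected_soft_errors(ram_gb, months):
    """Expected cosmic-ray-induced soft errors, assuming the error
    rate scales linearly with memory size and exposure time."""
    blocks_of_256mb = ram_gb * 1024 / 256
    return ERRORS_PER_256MB_PER_MONTH * blocks_of_256mb * months

# Hypothetical modern desktop: 16 GB of non-ECC RAM for one year.
print(expected_soft_errors(16, 12))  # -> 768.0 expected errors
</syntaxhighlight>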
In 2008, data corruption in a flight control system caused an Airbus A330 airliner to twice plunge hundreds of feet, resulting in injuries to multiple passengers and crew members. Cosmic rays were investigated among other possible causes of the data corruption, but were ultimately ruled out as being very unlikely.
In August 2020, scientists reported that ionizing radiation from environmental radioactive materials and cosmic rays may substantially limit the coherence times of qubits if they are not shielded adequately, which may be critical for realizing fault-tolerant superconducting quantum computers in the future.
Significance to aerospace travel
Galactic cosmic rays are one of the most important barriers standing in the way of plans for interplanetary travel by crewed spacecraft. Cosmic rays also pose a threat to electronics placed aboard outgoing probes. In 2010, a malfunction aboard the Voyager 2 space probe was credited to a single flipped bit, probably caused by a cosmic ray. Strategies such as physical or magnetic shielding for spacecraft have been considered in order to minimize the damage to electronics and human beings caused by cosmic rays.
On 31 May 2013, NASA scientists reported that a possible crewed mission to Mars may involve a greater radiation risk than previously believed, based on the amount of energetic particle radiation detected by the RAD on the Mars Science Laboratory while traveling from the Earth to Mars in 2011–2012.
Flying high, passengers and crews of jet airliners are exposed to at least 10 times the cosmic ray dose that people at sea level receive. Aircraft flying polar routes near the geomagnetic poles are at particular risk.
Role in lightning
Cosmic rays have been implicated in the triggering of electrical breakdown in lightning. It has been proposed that essentially all lightning is triggered through a relativistic process, or "runaway breakdown", seeded by cosmic ray secondaries. Subsequent development of the lightning discharge then occurs through "conventional breakdown" mechanisms.
Postulated role in climate change
A role for cosmic rays in climate was suggested by Edward P. Ney in 1959 and by Robert E. Dickinson in 1975. It has been postulated that cosmic rays may have been responsible for major climatic change and mass extinction in the past. According to Adrian Mellott and Mikhail Medvedev, 62-million-year cycles in biological marine populations correlate with the motion of the Earth relative to the galactic plane and increases in exposure to cosmic rays. The researchers suggest that this and gamma ray bombardments deriving from local supernovae could have affected cancer and mutation rates, and might be linked to decisive alterations in the Earth's climate, and to the mass extinctions of the Ordovician.
Danish physicist Henrik Svensmark has controversially argued that because solar variation modulates the cosmic ray flux on Earth, it would consequently affect the rate of cloud formation and hence be an indirect cause of global warming. Svensmark is one of several scientists outspokenly opposed to the mainstream scientific assessment of global warming, leading to concerns that the proposition that cosmic rays are connected to global warming could be ideologically biased rather than scientifically based. Other scientists have vigorously criticized Svensmark for sloppy and inconsistent work: one example is adjustment of cloud data that understates error in lower cloud data, but not in high cloud data; another example is "incorrect handling of the physical data" resulting in graphs that do not show the correlations they claim to show. Despite Svensmark's assertions, galactic cosmic rays have shown no statistically significant influence on changes in cloud cover, and have been demonstrated in studies to have no causal relationship to changes in global temperature.
Possible mass extinction factor
A handful of studies conclude that a nearby supernova or series of supernovas caused the Pliocene marine megafauna extinction event by substantially increasing radiation levels to hazardous amounts for large seafaring animals.
Research and experiments
There are a number of cosmic-ray research initiatives, listed below.
Ground-based
Akeno Giant Air Shower Array
Chicago Air Shower Array
CHICOS
CLOUD
CRIPT
GAMMA
GRAPES-3
HAWC
HEGRA
High Energy Stereoscopic System
High Resolution Fly's Eye Cosmic Ray Detector
IceCube
KASCADE
MAGIC
MARIACHI
Milagro
NMDB
Pierre Auger Observatory
QuarkNet
Spaceship Earth
Telescope Array Project
Tunka experiment
VERITAS
Washington Large Area Time Coincidence Array
Satellite
ACE (Advanced Composition Explorer)
Alpha Magnetic Spectrometer
Cassini–Huygens
Fermi Gamma-ray Space Telescope
HEAO 1, HEAO 2, HEAO 3
Interstellar Boundary Explorer
Langton Ultimate Cosmic-Ray Intensity Detector
PAMELA
Solar and Heliospheric Observatory
Voyager 1 and Voyager 2
Balloon-borne
Advanced Thin Ionization Calorimeter
BESS
Cosmic Ray Energetics and Mass (CREAM)
HEAT (High Energy Antimatter Telescope)
PERDaix
TIGER
TRACER (cosmic ray detector)
SUV
A sport utility vehicle (SUV) is a car classification that combines elements of road-going passenger cars with features from off-road vehicles, such as raised ground clearance and four-wheel drive.
There is no commonly agreed-upon definition of an SUV, and usage of the term varies between countries. Thus, it is "a loose term that traditionally covers a broad range of vehicles with four-wheel drive." Some definitions claim that an SUV must be built on a light truck chassis; however, broader definitions consider any vehicle with off-road design features to be an SUV. A crossover SUV is often defined as an SUV built with a unibody construction (as with passenger cars); however, the designations are increasingly blurred because of the capabilities of the vehicles, the labelling by marketers, and the electrification of new models.
The predecessors to SUVs date back to military and low-volume models from the late 1930s, and the four-wheel-drive station wagons and carryalls that began to be introduced in 1949. Some SUVs produced today use unibody construction; however, in the past, more SUVs used body-on-frame construction. During the late 1990s and early 2000s, the popularity of SUVs significantly increased, often at the expense of the popularity of large sedans and station wagons. SUVs accounted for 45.9% of the world's passenger car market in 2021.
SUVs have been criticized for a variety of environmental and safety-related reasons. They generally have poorer fuel efficiency and require more resources to manufacture than smaller vehicles, contributing more to climate change and environmental degradation. Between 2010 and 2018, SUVs were the second-largest contributor to the global increase in carbon emissions worldwide. Their higher center of gravity increases their risk of rollovers. Their higher front-end profile makes them at least twice as likely to kill pedestrians they hit. Additionally, the psychological sense of security they provide influences drivers to drive less cautiously, and may in turn lead drivers of smaller vehicles to opt for SUVs themselves for the same sense of security, further increasing pedestrian fatality rates.
Definitions
There is no universally accepted definition of the sport utility vehicle. Dictionaries, automotive experts, and journalists use varying wordings and defining characteristics, in addition to regional variations of usage by both the media and the general public. The auto industry also has not settled on one definition of the SUV.
American English
Automotive websites' descriptions of SUVs range from specifically "combining car-like appointments and wagon practicality with steadfast off-road capability" with "chair-height seats and picture-window visibility" to the more general "nearly anything with available all-wheel drive and raised ground clearance". It is also suggested that the term "SUV" has replaced "jeep" as a general term for off-road vehicles.
American dictionary definitions for SUVs include:
"rugged automotive vehicle similar to a station wagon but built on a light-truck chassis"
"automobile similar to a station wagon but built on a light truck frame"
"large vehicle that is designed to be used on rough surfaces but that is often used on city roads or highways"
"passenger vehicle similar to a station wagon but with the chassis of a small truck and, usually, four-wheel drive"
British English
In British English, the terms "4x4" (pronounced "four-by-four"), "jeep", "four-wheel drive", or "off-road vehicle" are generally used instead of "sport utility vehicle". The sardonic term "Chelsea tractor" is also commonly used, due to the perceived popularity of the vehicles with urban residents of Chelsea, London, and their likeness to vehicles used by farmers.
The Collins English Dictionary defines a sport utility vehicle as a "powerful vehicle with four-wheel drive that can be driven over rough ground. The abbreviation SUV is often used."
Other countries
In Europe, the term SUV is generally used for road-oriented vehicles, described as "J-segment" by the European Commission. "Four-by-four" or the brand name of the vehicle is typically used for off-road-oriented vehicles. Similarly, in New Zealand, vehicles designed for off-road use are typically referred to as "four-wheel drives" instead of SUVs.
Government regulations
In the United States, many government regulations simply have categories for "off-highway vehicles" which are loosely defined and often result in SUVs (along with pick-up trucks and minivans) being classified as light trucks. For example, corporate average fuel economy (CAFE) regulations previously included "permit greater cargo-carrying capacity than passenger carrying volume" in the definition for trucks, resulting in cars with removable rear seats, like the PT Cruiser, being classified as light trucks.
This classification as trucks allowed SUVs to be regulated less strictly than passenger cars under the Energy Policy and Conservation Act for fuel economy, and the Clean Air Act for emissions. However, from 2004 onwards, the United States Environmental Protection Agency (EPA) began to hold sport utility vehicles to the same tailpipe emissions standards as cars for criteria pollutants, though not greenhouse gas emissions standards as they were not set until 2010. In 2011, the CAFE regulations were changed to classify small, two-wheel-drive SUVs as passenger cars.
However, the licensing and traffic enforcement regulations in the United States vary from state to state, and an SUV may be classified as a car in some states but as a truck in others. For industry production statistics, SUVs are counted in the light truck product segment.
In India, all SUVs are classified in the "Utility Vehicle" category per the Society of Indian Automobile Manufacturers (SIAM) definitions and carry a 27% excise tax. Those exceeding specified thresholds for length, engine displacement, and ground clearance are subject to a 30% excise duty.
In Australia, SUV sales were helped by having lower import duties than passenger cars. Up until January 2010, SUVs were subject to a 5% import tariff, compared with 10% for passenger cars.
Higher parking fee
In February 2024, voters in Paris mandated a triple parking charge rate for SUVs, citing environmental impact and street capacity; this followed similar decisions in Lyon and Tübingen with similar ordinances being considered by London, Brussels and Amsterdam.
Characteristics
Chassis
Many years after most passenger cars had transitioned to unibody construction, most SUVs continued to use a separate body-on-frame method, due to being based on the chassis from a light truck, commercial vehicle, pickup truck, or off-road vehicle.
The first mass-produced unibody four-wheel-drive passenger car was the Russian 1955 GAZ-M20 Pobeda M-72, which could be considered the first crossover car. The 1977 Lada Niva was the first off-road vehicle to use both a unibody construction and a coil-sprung independent front suspension. The relatively compact Niva is considered a predecessor to the crossover SUV and combines a hatchback-like passenger car body with full-time four-wheel drive, low-range gearing, and lockable center differential.
Nonetheless, unibody SUVs remained rare until the 1984 Jeep Cherokee (XJ) was introduced and became a sales success. The introduction of the 1993 Jeep Grand Cherokee resulted in many of Jeep's SUV models using unibody construction, with many other brands following suit since the mid-1990s. Today, most SUVs in production use a unibody construction and relatively few models continue to use body-on-frame construction.
Body style
SUVs are typically of a two-box design similar to a station wagon. The engine compartment is in the front, followed by a combined passenger/cargo area (unlike a sedan, which has a separate trunk/boot compartment).
Up until approximately 2010, many SUV models were available in two-door body styles. Since then, manufacturers began to discontinue the two-door models as the four-door models became more popular.
A few two-door SUVs remain available, such as the body-on-frame Suzuki Jimny, Mahindra Thar, Toyota Land Cruiser Prado, Ford Bronco, and Jeep Wrangler as well as the Range Rover Evoque crossover SUV.
Safety
SUVs typically have high ground clearance and a tall body. This results in a high center of mass, which makes SUVs more prone to rollover accidents. In 2003, SUVs were reported to be 2.5 times more likely to roll over in a crash than regular cars, and SUV roofs were more likely to cave in on occupants than those of other cars, resulting in increased harm to passengers.
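Rollover propensity is commonly summarized by the static stability factor (SSF) used by U.S. regulators, defined as the track width divided by twice the center-of-gravity height; lower values mean a vehicle tips more easily. The dimensions below are invented round numbers chosen only to show the car-versus-SUV contrast.
<syntaxhighlight lang="python">
def static_stability_factor(track_width_m, cg_height_m):
    """Static stability factor: SSF = T / (2 * H).

    Higher values resist untripped rollover better; typical passenger
    cars score noticeably higher than tall SUVs."""
    return track_width_m / (2.0 * cg_height_m)

# Invented, but roughly representative, dimensions in metres:
car = static_stability_factor(track_width_m=1.5, cg_height_m=0.51)
suv = static_stability_factor(track_width_m=1.6, cg_height_m=0.72)
print(f"car SSF ~ {car:.2f}, SUV SSF ~ {suv:.2f}")  # ~1.47 vs. ~1.11
</syntaxhighlight>
The comparison shows why a modest increase in center-of-gravity height outweighs a slightly wider track: the SSF of the tall vehicle drops well below that of the car.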
Between 1991 and 2001, the United States saw a 150% increase in sport-utility vehicle rollover deaths. In 2001, though roll-overs constituted just 3% of vehicle crashes overall, they caused over 30% of occupant fatalities in crashes; and in crashes where the vehicle did roll over, SUV occupants in the early 2000s were nearly three times as likely to be killed as other car passengers. Vehicles with a high center of gravity do sometimes fail the moose test of maneuverability conducted by Swedish consumer magazine Teknikens Värld, for example, the 1997 Mercedes-Benz A-Class and 2011 Jeep Grand Cherokee.
The increasing popularity of SUVs in the 1990s and early 2000s was partly due to buyers perceiving that SUVs provide greater safety for occupants, due to their larger size and raised ride height. Regarding the safety of other road users, SUVs are exempted from the U.S. regulation requiring a passenger car bumper to protect a specified height range above the ground. This often increases the damage to the other car in a collision with an SUV, because the impact occurs at a higher location on the other car. In 2000–2001, 60% of fatal side-impact collisions involved an SUV as the other vehicle, an increase from 30% in 1980–1981.
The introduction of electronic stability control (ESC) and rollover mitigation, as well as increased analysis of the risks of a rollover, led the IIHS to report in 2015 that "the rollover death rate of 5 per million registered vehicle years for 2011 models is less than a quarter of what it was for 2004 models. With ESC dramatically reducing rollover risk, the inherent advantages offered by SUVs' greater size, weight, and height emerge more clearly. Today's SUVs have the lowest driver death rate of any vehicle type."
The high danger for cyclists and pedestrians of being seriously injured or even killed by SUV drivers has caused some public protests against SUVs in urban areas. In 2020, a study by the U.S.-based IIHS found that, in a sample of 79 crashes from three urban areas in Michigan, SUVs caused more serious injuries than cars when impacts occurred at higher speeds. The IIHS noted the sample size of the study was small and that more research is needed. The popularity of SUVs contributed to an increase in pedestrian fatalities in the U.S. during the 2010s, alongside other factors such as distracted and drunk driving.
A 2021 study by the University of Illinois Springfield showed that SUVs are 8 times more likely to kill children in a collision than passenger cars, and multiple times more lethal to adult pedestrians and cyclists.
Environmental impact
SUVs generally have poorer fuel efficiency than smaller cars, and thus contribute more to environmental degradation and global warming.
SUVs emit about 700 megatonnes of carbon dioxide per year, a gas that is linked to global warming. According to the International Energy Agency, from 2010 SUVs have been the second-largest contributor to the increase in global emissions, second only to the power sector.
SUVs were responsible for all of the 3.3 million barrels a day growth in oil demand from passenger cars between 2010 and 2018, whereas efficiency improvements in smaller cars saved over 2 million barrels a day, with electric cars reducing oil demand by under 100,000 barrels a day.
Whereas SUVs can be electrified, or converted to run on a variety of alternative fuels including hydrogen, their manufacturing emissions will always be larger than those of smaller electric cars. On average, SUVs consume about a quarter more energy than medium-size cars. Furthermore, the vast majority of these vehicles are not converted to use alternative fuels.
Types of SUV
Crossover SUV
The "crossover SUV" segment (also known as "CUVs" or simply "crossovers") has become increasingly popular since around 2010. Crossovers are often based on a platform shared with a passenger car, as a result, they typically have better comfort and fuel economy, but less off-road capability (many crossovers are sold without all-wheel drive) than pickup truck-based SUVs.
The difference between crossovers and other SUVs is sometimes defined as a crossover being built using a unibody platform (the type used by most passenger cars), while an SUV is built using a body-on-frame platform (the type used by off-road vehicles and light trucks). However, these definitions are often blurred in practice, since unibody vehicles are also often referred to as SUVs. Also, crossover is a relatively recent term and early unibody SUVs (such as the 1984 Jeep Cherokee) are rarely called crossovers. Due to these inconsistencies, the term SUV is often used as a catch-all for both crossovers and SUVs.
Outside of the United States, the term crossover tends to be used for C-segment (compact) or smaller vehicles, with large unibody vehicles—such as the Mercedes-Benz GLS-Class, BMW X7, and Range Rover—usually referred to as SUVs rather than crossovers. In the United Kingdom, a crossover is sometimes defined as a hatchback model with raised ride height and SUV-like styling features.
Mini SUV
The smallest size class of SUVs is the "mini SUV". In Japan, SUVs small enough to meet the kei car dimensional and engine limits—such as the Mitsubishi Pajero Mini—are included in the kei car category and therefore attract lower taxes.
Many recent vehicles labeled as mini SUVs are technically subcompact crossovers and are built on the platform of a subcompact (also called supermini or B-segment) passenger car.
Compact SUV
The "compact SUV" is the next bigger-size class after mini SUVs.
Many recent vehicles labeled as compact SUVs are technically compact crossovers and are built on the platform of a compact (C-segment) passenger car.
Mid-size SUV
The next larger size is called the "mid-size SUV". Some mid-size SUVs are based on platforms shared with passenger cars and therefore, are crossovers. Other mid-size SUVs are based on compact or mid-size pickups.
Full-size SUV
Full-size SUVs are the largest size of commonly produced SUVs. Some, such as the Ford Expedition and Chevrolet Tahoe, are marketed for their off-road capabilities, while others, such as the Lincoln Navigator and Cadillac Escalade, are marketed as luxury vehicles. While a few full-size SUVs are built on dedicated platforms, most share their platforms with full-size pickup trucks.
Extended-length SUV
Some North American SUVs are available as a long-bodied version of a full-size SUV, which is called an "extended-length SUV" like the Ford Expedition EL and the Chevrolet Suburban. The additional length is used to provide extra space for rear passengers or cargo. As per the full-size SUVs they are based on, most extended-length SUVs are built on dedicated platforms, full-sized pickups (1⁄2 ton), or heavy-duty pickups (3⁄4 ton or more).
Extended-length SUVs are mostly sold in North America but may also be exported to other markets in small numbers.
Coupe SUV
Some SUVs or crossovers with sloping rear rooflines are marketed as "coupe crossover SUVs" or "coupe SUVs", even though they have four side doors for passenger access to the seats and rear hatches for cargo area access.
History
1930s to 1948
Just before and during World War II, prototypes and low-volume production examples of military cars with sedan or station wagon-type bodies and rugged, off-road capable four-wheel drive chassis began to appear around the world. These early models included the 1936 Kurogane Type 95 from Japan, the 1938 GAZ-61 from Russia as well as the 1941 Volkswagen Kommandeurswagen and 1936 Opel Geländesportwagen from Germany. An early predecessor to the design of modern SUVs was the 1940 Humber Heavy Utility, a four-wheel-drive off-road vehicle built on the chassis of the Humber Super Snipe passenger car.
The most prohibitive initial factors to the potential civilian popularity of an SUV-like car were their cost and the availability of certain critical parts. Before the war, adding four-wheel drive to a car almost doubled its cost. Compared to a common rear-wheel-drive vehicle, any 4WD (four-wheel-drive) vehicle needed many essential extra components, including a transfer case, a second differential, and constant-velocity joints for the driven front axle. These were expensive due to the precision manufacturing required for gears and other specialized parts. Before World War II, they were produced in the United States by only a few specialized firms with limited production capacity. Due to the increase in demand for parts for the war effort, in the spring of 1942 Ford, Dodge, and Chevrolet joined in fabricating these parts in mass quantities, boosting their production more than 100-fold.
An early usage of the term was the 1947 Crosley CC Four Sport Utility model, which used a convertible wagon body style and is therefore unrelated to the design of later SUVs.
1949 to 1970s
Several models of carryall wagons began to be offered with four-wheel drive, beginning in 1949 when the Willys Jeep Station Wagon introduced the option of four-wheel drive. Four-wheel-drive versions of the Chevrolet Suburban were introduced for 1955, followed by the International Harvester Travelall in 1956 (credited as being the first full-size SUV) and the Power Wagon Town Wagon in 1957.
Developed as a competitor to the Jeep CJ, the compact International Scout was introduced in 1961, offering either two- or four-wheel drive and a variety of engine options. The Harvester Scout provided many other options designed to appeal to a wide range of customers for numerous uses as well. The 1963 Jeep Wagoneer (SJ) introduced a sophisticated station wagon body design that was more carlike than any other four-wheel-drive vehicle on the market. The 1967 Toyota Land Cruiser FJ55 station wagon was the first comfort-oriented version of the Land Cruiser off-road vehicle. The two-door Chevrolet K5 Blazer (and related GMC K5 Jimmy) were introduced for 1969, and the two-door International Scout II was introduced in 1971. The first European luxury off-road vehicle was the 1970 Range Rover Classic, which was marketed as a luxury car for both on-road and off-road usage.
In 1972, the Subaru Leone 4WD wagon was introduced in Japan; it was not designed as an off-road vehicle, but as a four-wheel-drive version of a front-wheel-drive passenger car. Some argue that this was the first SUV. It was also classified as a commercial vehicle in its home market, just like later SUVs.
The first relevant usage of the term SUV was in advertising brochures for the full-sized 1974 Jeep Cherokee (SJ), which used the wording "sport(s) utility vehicle" as a description for the vehicle. The 1966 Ford Bronco included a "sport utility" model; however, in this case it was used for the two-door pickup truck version.
The VAZ-2121 (now designated Lada Niva Legend) became the first mass-market 4WD unibody car in some markets in 1977. The AMC Eagle was introduced in the North American market in 1979 and is often called the first mass-market "crossover", although that term had not been coined at the time. In contrast to truck- or utility-vehicle-based designs and the Niva, which was purpose-built for rural areas, American Motors Corporation (AMC) utilized a long-serving existing car platform and designed a new automatic full-time AWD system. It was the first with "SUV styling on a raised passenger-car platform combined with AWD." Four Wheeler magazine described the AMC Eagle as "the beginning of a new generation of cars".
1980s to 1990s
The compact-sized 1984 Jeep Cherokee (XJ) is often credited as the first SUV in the modern understanding of the term. The use of unibody construction was unique at the time for a four-wheel drive and also reduced the weight of the new Cherokee. It also appealed to urban families due to having a more compact size (compared to the full-size Wagoneer and previous generation Cherokee SJ models) as well as a plush interior resembling a station wagon. As the new Cherokee became a major sales success, the term "sport utility vehicle" began to be used in the national press for the first time. "The advent and immediate success of AMC/Jeep's compact four-door Cherokee turned the truck industry upside down."
The U.S. corporate average fuel economy (CAFE) standard was introduced in 1975 to reduce fuel usage, but included relaxed regulations for "light trucks" to avoid businesses paying extra taxes for work vehicles. This created a loophole that manufacturers increasingly exploited since the 1980s oil glut (which started an era of cheap gasoline), whereby SUVs were designed to be classified as light trucks despite their primary use as passenger vehicles to receive tax concessions and less stringent fuel economy requirements. This enabled manufacturers to sell more profitable, larger, more polluting vehicles, instead of the smaller, less polluting, less profitable cars, that the CAFE regulations intended.
For example, the United States Environmental Protection Agency agreed to classify the new Jeep Cherokee as a light truck following lobbying from its manufacturer; the Cherokee was then marketed by the company as a passenger vehicle. This increased the SUV boom as other manufacturers introduced their own SUVs in response to the compact Cherokee taking sales from their regular cars.
In 1994 the U.S. Environmental Protection Agency began classifying vehicles by "market class". For SUVs in 1994 they included three Jeep models, the Cherokee, Grand Cherokee and Wrangler. Two Ford models were the Bronco and the Explorer. Six General Motors models including the GMC Jimmy, the Yukon, and the Suburban 1500; the Chevrolet Suburban 1500, and the Blazer (1500 and S10); the Geo Tracker (Convertible or Van); and finally the Oldsmobile Bravada. Eleven Japanese models classified as SUVs were the Toyota 4Runner and Land Cruiser; the Honda Passport; the Nissan Pathfinder; the Mazda Navajo; the Mitsubishi Montero; the Isuzu Amigo, Rodeo, and Trooper; and the Suzuki Samurai and Sidekick. From Europe the three Land Rover models, the Range Rover, the Defender and the Discovery were classified as SUVs.
By late 1996 Consumers Digest magazine was calling the trend an "SUV craze", and by 1999 the U.S. sales of SUVs and light trucks for the first time exceeded sales of regular passenger cars.
2000s
By 2003, there were 76 million SUVs and light trucks on U.S. roads, representing approximately 35% of the vehicles on the road.
Car manufacturers were keen to promote SUV sales over other types of cars due to higher profits in the segment. An SUV could be sold at a substantial profit margin (the Ford Excursion being a particularly profitable example), while compact cars were often sold at a loss of a few hundred dollars per car. As a result, several manufacturing plants were converted from car production to SUV production (such as the General Motors plant in Arlington, Texas in 1996), and many long-running U.S. sedan models were discontinued.
From the mid-2000s until 2010, U.S. sales of SUVs and other light trucks experienced a dip due to increasing fuel prices and then a declining economy. From 2008 until 2010, General Motors closed four assembly plants that were producing SUVs and trucks. Sales of SUVs and light trucks sales began to recover in 2010, as fuel prices decreased and the North American economy improved.
2010s to 2020s
In 2019, the International Energy Agency (IEA) reported that the global number of SUVs and crossovers on the road multiplied by six since 2010—from 35 million to 200 million vehicles, and their market share has grown to 40 percent of worldwide new light-vehicle sales at the end of the decade.
By 2013, small and compact SUVs had increased to become the third-largest market segment. Since the early 2000s, new versions have been introduced to appeal to a wider audience, such as crossovers and other small SUVs. Larger SUVs also remained popular, with sales of General Motors' large SUV models increasing significantly in 2013.
In 2015, global sales of SUVs overtook the "lower medium car" segment to become the largest market segment, accounting for 22.9% of "light vehicle" sales in 2015. The following year, worldwide SUV sales experienced further growth of 22%. The world's fastest-growing SUV markets in 2014–2015 were: China (+47.9%), Italy (+48.6%), Spain (+42%), Portugal (+54.8%), and Thailand (+56.4%). The SUV segment further grew to 26% of the global passenger car market in 2016, then to 36.8% of the market in Q1–Q3 of 2017.
In the U.S. at the end of 2016, sales of SUVs and light-duty trucks had surpassed traditional car sales for the year by over 3 million units. Manufacturers continued to phase out the production of sedan models, replacing them with new models of SUVs.
Luxury brands have increasingly introduced SUV or crossover models in the 2010s. For example: Rolls-Royce Cullinan, Bentley Bentayga, Aston Martin DBX, Maserati Levante, Lamborghini Urus, and Ferrari Purosangue.
In 2019 SUVs made up 47.4% of U.S. sales compared to only 22.1% for sedans.
Motorsport
SUVs have competed in various off-road racing competitions, such as the Dakar Rally, Baja 1000, FIA Cross-Country Rally World Cup, King of the Hammers, and Australasian Safari. SUVs have also competed in the Trophée Andros ice-racing series.
Nicknames
Several derogatory or pejorative terms for SUVs are based on the combination of an affluent suburb name and "tractor", particularly for expensive vehicles from luxury brands. Examples include "Toorak Tractor" (Melbourne, Australia), "Chelsea Tractor" (London, England) and "Remuera Tractor" (Auckland, New Zealand). These terms relate to the theory that four-wheel drive capabilities are not required by affluent SUV owners, and that the SUV is purchased as a status symbol rather than for practical reasons.
In Norway, the term børstraktor ('stock exchange tractor') serves a similar purpose. In the Netherlands, SUVs are sometimes called "P.C. Hooft-tractors", after the exclusive P.C. Hooftstraat shopping street in Amsterdam.
Commercial SUVs
A commercial SUV is an SUV or crossover that is used for commercial purposes. The category is very similar to panel trucks, since the Chevrolet Suburban (an SUV) had panel-truck versions that were used for commercial purposes.
The first SUV-like vehicle that had commercial versions was the Chevrolet Suburban panel truck. Panel trucks by American manufacturers were built until the late 1970s.
While panel trucks from European manufacturers were rare, commercial versions of off-road vehicles were very common; Land Rover manufactured commercial versions of the Land Rover and the Defender. Commercial SUVs are factory-built, and most are not independent conversions, which means they can be bought from dealerships and showrooms.
Examples of SUVs used as commercial vehicles in Europe include: Citroen C5 Aircross Commercial SUV, the Land Rover Discovery, the Dacia Duster Flika, and the Mitsubishi Pajero.
Coral
Corals are colonial marine invertebrates within the subphylum Anthozoa of the phylum Cnidaria. They typically form compact colonies of many identical individual polyps. Coral species include the important reef builders that inhabit tropical oceans and secrete calcium carbonate to form a hard skeleton.
A coral "group" is a colony of very many genetically identical polyps. Each polyp is a sac-like animal typically only a few millimeters in diameter and a few centimeters in height. A set of tentacles surround a central mouth opening. Each polyp excretes an exoskeleton near the base. Over many generations, the colony thus creates a skeleton characteristic of the species which can measure up to several meters in size. Individual colonies grow by asexual reproduction of polyps. Corals also breed sexually by spawning: polyps of the same species release gametes simultaneously overnight, often around a full moon. Fertilized eggs form planulae, a mobile early form of the coral polyp which, when mature, settles to form a new colony.
Although some corals are able to catch plankton and small fish using stinging cells on their tentacles, most corals obtain the majority of their energy and nutrients from photosynthetic unicellular dinoflagellates of the genus Symbiodinium that live within their tissues. These are commonly known as zooxanthellae and give the coral color. Such corals require sunlight and grow in clear, shallow water, though corals in the genus Leptoseris have been found at considerably greater depths. Corals are major contributors to the physical structure of the coral reefs that develop in tropical and subtropical waters, such as the Great Barrier Reef off the coast of Australia. These corals are increasingly at risk of bleaching events, in which polyps expel the zooxanthellae in response to stress such as high water temperature or toxins.
Other corals do not rely on zooxanthellae and can live globally in much deeper water, such as the cold-water genus Lophelia, which can survive at great depths. Some have been found as far north as the Darwin Mounds, northwest of Cape Wrath, Scotland, and others off the coast of Washington state and the Aleutian Islands.
Taxonomy
The classification of corals has been discussed for millennia, owing to having similarities to both plants and animals. Aristotle's pupil Theophrastus described the red coral, korallion, in his book on stones, implying it was a mineral, but he described it as a deep-sea plant in his Enquiries on Plants, where he also mentions large stony plants that reveal bright flowers when under water in the Gulf of Heroes. Pliny the Elder stated boldly that several sea creatures including sea nettles and sponges "are neither animals nor plants, but are possessed of a third nature (tertia natura)". Petrus Gyllius copied Pliny, introducing the term zoophyta for this third group in his 1535 book On the French and Latin Names of the Fishes of the Marseilles Region; it is popularly but wrongly supposed that Aristotle created the term. Gyllius further noted, following Aristotle, how hard it was to define what was a plant and what was an animal. The Babylonian Talmud refers to coral among a list of types of trees, and the 11th-century French commentator Rashi describes it as "a type of tree (מין עץ) that grows underwater that goes by the (French) name 'coral'."
The Persian polymath Al-Biruni (d.1048) classified sponges and corals as animals, arguing that they respond to touch. Nevertheless, people believed corals to be plants until the eighteenth century when William Herschel used a microscope to establish that coral had the characteristic thin cell membranes of an animal.
Presently, corals are classified as species of animals within the sub-classes Hexacorallia and Octocorallia of the class Anthozoa in the phylum Cnidaria. Hexacorallia includes the stony corals and these groups have polyps that generally have a 6-fold symmetry. Octocorallia includes blue coral and soft corals and species of Octocorallia have polyps with an eightfold symmetry, each polyp having eight tentacles and eight mesenteries. The group of corals is paraphyletic because the sea anemones are also in the sub-class Hexacorallia.
Systematics
The delineation of coral species is challenging, as hypotheses based on morphological traits contradict hypotheses formed via molecular tree-based processes. As of 2020, there are 2,175 identified coral species, 237 of which are currently endangered, making the accurate delineation of coral species critically important in efforts to curb extinction. Adaptation and differentiation continue to occur in coral species in response to the dangers posed by the climate crisis. Corals are colonial modular organisms formed by asexually produced and genetically identical modules called polyps. Polyps are connected by living tissue to produce the full organism. The living tissue allows for inter-module communication (interaction between each polyp), which is expressed in the colony morphologies produced by corals and is one of the main identifying characteristics for a species of coral.
There are two main classifications for corals: hard coral (scleractinian and stony coral) which form reefs by a calcium carbonate base, with polyps that bear six stiff tentacles, and soft coral (Alcyonacea and ahermatypic coral) which are pliable and formed by a colony of polyps with eight feather-like tentacles. These two classifications arose from differentiation in gene expressions in their branch tips and bases that arose through developmental signaling pathways such as Hox, Hedgehog, Wnt, BMP etc.
Scientists typically select Acropora as a research model since it is the most diverse genus of hard coral, with over 120 species. Most species within this genus have dimorphic polyps: axial polyps grow rapidly and have lighter coloration, while radial polyps are small and darker in coloration. In the Acropora genus, gamete synthesis and photosynthesis occur at the basal polyps, while growth occurs mainly at the radial polyps. Growth at the site of the radial polyps encompasses two processes: asexual reproduction via mitotic cell proliferation, and skeletal deposition of calcium carbonate via extracellular matrix (ECM) proteins acting as differentially expressed (DE) signaling genes between branch tips and bases. These processes lead to colony differentiation, which is the most accurate distinguisher between coral species. In the Acropora genus, colony differentiation occurs through the up-regulation and down-regulation of DE genes.
Systematic studies of soft coral species have faced challenges due to a lack of taxonomic knowledge. Researchers have not found enough variability within the genus to confidently delineate similar species, due to a low rate in mutation of mitochondrial DNA.
Environmental factors, such as rising temperatures and acidity in the oceans, contribute to the loss of coral species. Various coral species have heat shock proteins (HSPs) that also fall into the category of DE genes across species. These HSPs help corals combat the increased temperatures they are facing, which otherwise lead to protein denaturing, growth loss, and eventually coral death. Approximately 33% of coral species are on the International Union for Conservation of Nature's endangered species list and at risk of species loss. Ocean acidification (falling pH levels in the oceans) is threatening the continued growth and differentiation of coral species. Mutation rates of Vibrio shilonii, the reef pathogen responsible for coral bleaching, heavily outweigh the typical reproduction rates of coral colonies when pH levels fall. Thus, corals are unable to adapt their HSPs and other protective genes to the increase in temperature and decrease in pH at a rate competitive with these pathogens, resulting in species loss.
Anatomy
For most of their lives, corals are sessile animals that form colonies of genetically identical polyps. Each polyp varies from millimeters to centimeters in diameter, and colonies can be formed from many millions of individual polyps. The polyps of stony corals, also known as hard corals, produce a skeleton composed of calcium carbonate to strengthen and protect the organism. This is deposited by the polyps and by the coenosarc, the living tissue that connects them. The polyps sit in cup-shaped depressions in the skeleton known as corallites. Colonies of stony coral are markedly variable in appearance; a single species may adopt an encrusting, plate-like, bushy, columnar or massive solid structure, the various forms often being linked to different types of habitat, with variations in light level and water movement being significant.
The body of the polyp may be roughly compared in structure to a sac, the wall of which is composed of two layers of cells. The outer layer is known technically as the ectoderm, the inner layer as the endoderm. Between ectoderm and endoderm is a supporting layer of gelatinous substance termed mesoglea, secreted by the cell layers of the body wall. The mesoglea can contain skeletal elements derived from cells that migrated from the ectoderm.
The sac-like body built up in this way is attached to a hard surface, which in hard corals consists of the cup-shaped depressions in the skeleton known as corallites. At the center of the upper end of the sac lies the only opening, the mouth, surrounded by a circle of tentacles which resemble glove fingers. The tentacles are organs which serve both for tactile sense and for the capture of food. Polyps extend their tentacles, particularly at night; the tentacles often contain coiled stinging cells (cnidocytes) which pierce, poison, and firmly hold living prey, paralyzing or killing it. Polyp prey includes plankton such as copepods and fish larvae. Longitudinal muscular fibers formed from the cells of the ectoderm allow the tentacles to contract to convey food to the mouth. Similarly, circularly disposed muscular fibers formed from the endoderm permit the tentacles to be protracted, or thrust out, once they are contracted. In both stony and soft corals, the polyps can be retracted by contracting these muscle fibers, with stony corals relying on their hard skeleton and cnidocytes for defense. Soft corals generally secrete terpenoid toxins to ward off predators.
In most corals, the tentacles are retracted by day and spread out at night to catch plankton and other small organisms. Shallow-water species of both stony and soft corals can be zooxanthellate, the corals supplementing their plankton diet with the products of photosynthesis produced by these symbionts. The polyps interconnect by a complex and well-developed system of gastrovascular canals, allowing significant sharing of nutrients and symbionts.
The external form of the polyp varies greatly. The column may be long and slender, or may be so short in the axial direction that the body becomes disk-like. The tentacles may number many hundreds or may be very few, in rare cases only one or two. They may be simple and unbranched, or feathery in pattern. The mouth may be level with the surface of the peristome, or may be projecting and trumpet-shaped.
Soft corals
Soft corals have no solid exoskeleton as such. However, their tissues are often reinforced by small supportive elements known as sclerites, made of calcium carbonate. The polyps of soft corals have eight-fold symmetry, which is reflected in the "Octo" of Octocorallia.
Soft corals vary considerably in form, and most are colonial. A few soft corals are stolonate, but the polyps of most are connected by sheets of tissue called coenosarc, and in some species these sheets are thick and the polyps deeply embedded in them. Some soft corals encrust other sea objects or form lobes. Others are tree-like or whip-like and have a central axial skeleton embedded at their base in the matrix of the supporting branch. These branches are composed of a fibrous protein called gorgonin or of a calcified material.
Stony corals
The polyps of stony corals have six-fold symmetry. In stony corals, the tentacles are cylindrical and taper to a point, but in soft corals they are pinnate with side branches known as pinnules. In some tropical species, these are reduced to mere stubs and in some, they are fused to give a paddle-like appearance.
Coral skeletons are biocomposites (mineral plus organics) of calcium carbonate, in the form of calcite or aragonite. In scleractinian corals, "centers of calcification" and fibers are clearly distinct structures, differing with respect to both the morphology and the chemical composition of the crystalline units. The organic matrices extracted from diverse species are acidic and comprise proteins, sulphated sugars, and lipids; they are species specific. The soluble organic matrices of the skeletons make it possible to differentiate zooxanthellate and non-zooxanthellate specimens.
Ecology
Feeding
Polyps feed on a variety of small organisms, from microscopic zooplankton to small fish. The polyp's tentacles immobilize or kill prey using stinging cells called nematocysts. These cells carry venom which they rapidly release in response to contact with another organism. A dormant nematocyst discharges in response to nearby prey touching the trigger (Cnidocil). A flap (operculum) opens and its stinging apparatus fires the barb into the prey. The venom is injected through the hollow filament to immobilise the prey; the tentacles then manoeuvre the prey into the stomach. Once the prey is digested the stomach reopens allowing the elimination of waste products and the beginning of the next hunting cycle.
Intracellular symbionts
Many corals, as well as other cnidarian groups such as sea anemones, form a symbiotic relationship with a class of dinoflagellate algae, zooxanthellae of the genus Symbiodinium, which can form as much as 30% of the tissue of a polyp. Typically, each polyp harbors one species of alga, and coral species show a preference for particular Symbiodinium types. Young corals are not born with zooxanthellae but acquire the algae from the surrounding environment, including the water column and local sediment. The main benefit of the zooxanthellae is their ability to photosynthesize, which supplies corals with the products of photosynthesis, including glucose, glycerol, and amino acids, which the corals can use for energy. Zooxanthellae also benefit corals by aiding in calcification of the coral skeleton and in waste removal. In addition to the soft tissue, microbiomes are also found in the coral's mucus and (in stony corals) the skeleton, with the latter showing the greatest microbial richness.
The zooxanthellae benefit from a safe place to live and consume the polyp's carbon dioxide, phosphate and nitrogenous waste. Stressed corals will eject their zooxanthellae, a process that is becoming increasingly common due to the strain placed on corals by rising ocean temperatures. Mass ejections are known as coral bleaching because the algae contribute to coral coloration; some colors, however, are due to host coral pigments, such as green fluorescent proteins (GFPs). Ejection increases the polyp's chance of surviving short-term stress, and if the stress subsides the polyp can regain algae, possibly of a different species, at a later time. If the stressful conditions persist, the polyp eventually dies. Zooxanthellae are located within the coral's cytoplasm, and the algae's photosynthetic activity can raise the internal pH of the coral; this behavior indicates that the zooxanthellae are responsible to some extent for the metabolism of their host corals. Stony Coral Tissue Loss Disease has been associated with the breakdown of host-zooxanthellae physiology. Moreover, Vibrio bacteria are known to have virulence traits used for host coral tissue damage and photoinhibition of algal symbionts. Therefore, both corals and their symbiotic microorganisms could have evolved to harbour traits resistant to disease and its transmission.
Reproduction
Corals can be both gonochoristic (unisexual) and hermaphroditic, each of which can reproduce sexually and asexually. Reproduction also allows coral to settle in new areas. Reproduction is coordinated by chemical communication.
Sexual
Corals predominantly reproduce sexually. About 25% of hermatypic corals (reef-building stony corals) form single-sex (gonochoristic) colonies, while the rest are hermaphroditic. It is estimated that more than 67% of corals are simultaneous hermaphrodites.
Broadcasters
About 75% of all hermatypic corals "broadcast spawn" by releasing gametes—eggs and sperm—into the water where they meet and fertilize to spread offspring. Corals often synchronize their time of spawning. This reproductive synchrony is essential so that male and female gametes can meet. Spawning frequently takes place in the evening or at night, and can occur as infrequently as once a year, and within a window of 10–30 minutes.
Synchronous spawning is very typical on the coral reef, and often, all corals spawn on the same night even when multiple species are present. Synchronous spawning may form hybrids and is perhaps involved in coral speciation.
Environmental cues that influence the release of gametes into the water vary from species to species. The cues involve temperature change, lunar cycle, day length, and possibly chemical signalling.
Other factors that affect the rhythmicity of organisms in marine habitats include salinity, mechanical forces, and pressure or magnetic field changes.
Mass coral spawning often occurs at night on days following a full moon. A full moon is equivalent to four to six hours of continuous dim light exposure, which can cause light-dependent reactions in proteins. Corals contain light-sensitive cryptochromes, proteins whose light-absorbing flavin structures are sensitive to different types of light. This allows corals such as Dipsastraea speciosa to detect and respond to changes in sunlight and moonlight.
Moonlight itself may actually suppress coral spawning. The most immediate cue to cause spawning appears to be the dark portion of the night between sunset and moonrise.
Over the lunar cycle, moonrise shifts progressively later, occurring after sunset in the days following the full moon. The resulting dark period between daylight and moonlight removes the suppressive effect of moonlight and enables corals to spawn.
The spawning event can be visually dramatic, clouding the usually clear water with gametes. Once released, gametes fertilize at the water's surface and form a microscopic larva called a planula, typically pink and elliptical in shape. A typical coral colony needs to release several thousand larvae per year to overcome the odds against formation of a new colony.
Studies suggest that light pollution desynchronizes spawning in some coral species.
In areas such as the Red Sea, as many as 10 out of 50 species may be showing spawning asynchrony, compared to 30 years ago. The establishment of new corals in the area has decreased and in some cases ceased. The area was previously considered a refuge for corals because mass bleaching events due to climate change had not been observed there. Coral restoration techniques for coral reef management are being developed to increase fertilization rates, larval development, and settlement of new corals.
Brooders
Brooding species are most often ahermatypic (not reef-building) and live in areas of high current or wave action. Brooders release only sperm, which is negatively buoyant and sinks onto the waiting egg carriers that harbor unfertilized eggs for weeks. Synchronous spawning events sometimes occur even with these species. After fertilization, the corals release planulae that are ready to settle.
Planulae
The time from spawning to larval settlement is usually two to three days but can occur immediately or take up to two months. Broadcast-spawned planula larvae develop at the water's surface before descending to seek a hard surface on the benthos to which they can attach and begin a new colony. The larvae often need a biological cue to induce settlement, such as specific crustose coralline algae species or microbial biofilms. High failure rates afflict many stages of this process, and even though thousands of eggs are released by each colony, few new colonies form. During settlement, larvae are inhibited by physical barriers such as sediment, as well as by chemical (allelopathic) barriers. Each larva metamorphoses into a single polyp that eventually develops into a juvenile and then an adult by asexual budding and growth.
Asexual
Within a coral head, the genetically identical polyps reproduce asexually, either by budding (gemmation) or by dividing, whether longitudinally or transversely.
Budding involves splitting a smaller polyp from an adult. As the new polyp grows, it forms its body parts. The distance between the new and adult polyps grows, and with it, the coenosarc (the common body of the colony). Budding can be intratentacular, from its oral discs, producing same-sized polyps within the ring of tentacles, or extratentacular, from its base, producing a smaller polyp.
Division forms two polyps that each become as large as the original. Longitudinal division begins when a polyp broadens and then divides its coelenteron (body), effectively splitting along its length. The mouth divides and new tentacles form. The two polyps thus created then generate their missing body parts and exoskeleton. Transversal division occurs when polyps and the exoskeleton divide transversally into two parts. This means one has the basal disc (bottom) and the other has the oral disc (top); the new polyps must separately generate the missing pieces.
Asexual reproduction offers the benefits of high reproductive rate, delaying senescence, and replacement of dead modules, as well as geographical distribution.
Colony division
Whole colonies can reproduce asexually, forming two colonies with the same genotype. The possible mechanisms include fission, bailout and fragmentation. Fission occurs in some corals, especially among the family Fungiidae, where the colony splits into two or more colonies during early developmental stages. Bailout occurs when a single polyp abandons the colony and settles on a different substrate to create a new colony. Fragmentation involves individuals broken from the colony during storms or other disruptions. The separated individuals can start new colonies.
Coral microbiomes
Corals are one of the more common examples of an animal host whose symbiosis with microalgae can turn to dysbiosis, visibly detected as bleaching. Coral microbiomes have been examined in a variety of studies, which demonstrate how oceanic environmental variations, most notably temperature, light, and inorganic nutrients, affect the abundance and performance of the microalgal symbionts, as well as the calcification and physiology of the host.
Studies have also suggested that resident bacteria, archaea, and fungi additionally contribute to nutrient and organic matter cycling within the coral, with viruses also possibly playing a role in structuring the composition of these members, thus providing one of the first glimpses at a multi-domain marine animal symbiosis. The gammaproteobacterium Endozoicomonas is emerging as a central member of the coral's microbiome, with flexibility in its lifestyle. Given the recent mass bleaching occurring on reefs, corals will likely continue to be a useful and popular system for symbiosis and dysbiosis research.
Astrangia poculata, the northern star coral, is a temperate stony coral widely documented along the eastern coast of the United States. The coral can live with and without zooxanthellae (algal symbionts), making it an ideal model organism for studying microbial community interactions associated with symbiotic state. However, the ability to develop primers and probes that more specifically target key microbial groups has been hindered by the lack of full-length 16S rRNA sequences, since sequences produced by the Illumina platform are of insufficient length (approximately 250 base pairs) for the design of primers and probes. In 2019, Goldsmith et al. demonstrated that Sanger sequencing was capable of reproducing the biologically relevant diversity detected by deeper next-generation sequencing, while also producing longer sequences useful to the research community for probe and primer design.
Holobionts
Reef-building corals are well-studied holobionts that include the coral itself together with its symbiont zooxanthellae (photosynthetic dinoflagellates), as well as its associated bacteria and viruses. Co-evolutionary patterns exist for coral microbial communities and coral phylogeny.
It is known that the coral's microbiome and symbionts influence host health; however, the historic influence of each member on the others is not well understood. Scleractinian corals have been diversifying for longer than many other symbiotic systems, and their microbiomes are known to be partially species-specific. It has been suggested that Endozoicomonas, a commonly highly abundant bacterium in corals, has exhibited codiversification with its host. This hints at an intricate set of relationships between the members of the coral holobiont that have been developing as these members evolve.
A study published in 2018 revealed evidence of phylosymbiosis between corals and their tissue and skeleton microbiomes. The coral skeleton, which represents the most diverse of the three coral microbiomes, showed the strongest evidence of phylosymbiosis. Coral microbiome composition and richness were found to reflect coral phylogeny. For example, interactions between bacterial and eukaryotic coral phylogeny influence the abundance of Endozoicomonas, a highly abundant bacterium in the coral holobiont. However, host-microbial cophylogeny appears to influence only a subset of coral-associated bacteria.
Reefs
Many corals in the order Scleractinia are hermatypic, meaning that they are involved in building reefs. Most such corals obtain some of their energy from zooxanthellae in the genus Symbiodinium. These are symbiotic photosynthetic dinoflagellates which require sunlight; reef-forming corals are therefore found mainly in shallow water. They secrete calcium carbonate to form hard skeletons that become the framework of the reef. However, not all reef-building corals in shallow water contain zooxanthellae, and some deep water species, living at depths to which light cannot penetrate, form reefs but do not harbour the symbionts.
There are various types of shallow-water coral reef, including fringing reefs, barrier reefs and atolls; most occur in tropical and subtropical seas. They are very slow-growing, adding perhaps one centimetre (0.4 in) in height each year. The Great Barrier Reef is thought to have been laid down about two million years ago. Over time, corals fragment and die, sand and rubble accumulate between the corals, and the shells of clams and other molluscs decay to form a gradually evolving calcium carbonate structure. Coral reefs are extremely diverse marine ecosystems hosting over 4,000 species of fish, massive numbers of cnidarians, molluscs, crustaceans, and many other animals.
Evolution
At certain times in the geological past, corals were very abundant. Like modern corals, their ancestors built reefs, some of which now lie as great structures in sedimentary rocks. Fossils of fellow reef-dwellers (algae, sponges, and the remains of many echinoids, brachiopods, bivalves, gastropods, and trilobites) appear along with coral fossils. This makes some corals useful index fossils. Coral fossils are not restricted to reef remnants, and many solitary fossils are found elsewhere, such as Cyclocyathus, which occurs in England's Gault clay formation.
Early corals
Corals first appeared in the Cambrian period. Fossils are extremely rare until the Ordovician period, 100 million years later, when heliolitid, rugose, and tabulate corals became widespread. Paleozoic corals often contained numerous endobiotic symbionts.
Tabulate corals occur in limestones and calcareous shales of the Ordovician period, with a gap in the fossil record due to extinction events at the end of the Ordovician. Corals reappeared some millions of years later, during the Silurian period, when tabulate corals often formed low cushions or branching masses of calcite alongside rugose corals. Tabulate coral numbers began to decline during the middle of the Silurian period.
Rugose or horn corals became dominant by the middle of the Silurian period, and during the Devonian, corals flourished with more than 200 genera. The rugose corals existed in solitary and colonial forms, and were also composed of calcite. Both rugose and tabulate corals became extinct in the Permian–Triassic extinction event (along with 85% of marine species), and there is a gap of tens of millions of years until new forms of coral evolved in the Triassic.
Modern corals
The currently ubiquitous stony corals, the Scleractinia, appeared in the Middle Triassic to fill the niche vacated by the extinct rugose and tabulate orders, and are not closely related to the earlier forms. Unlike the corals prevalent before the Permian extinction, which formed skeletons of a form of calcium carbonate known as calcite, modern stony corals form skeletons composed of aragonite. Their fossils are found in small numbers in rocks from the Triassic period, and become common in the Jurassic and later periods. Although they are geologically younger than the tabulate and rugose corals, the aragonite of their skeletons is less readily preserved, and their fossil record is accordingly less complete.
Status
Threats
Coral reefs are under stress around the world. In particular, coral mining, agricultural and urban runoff, pollution (organic and inorganic), overfishing, blast fishing, disease, and the digging of canals and access into islands and bays are localized threats to coral ecosystems. Broader threats are sea temperature rise, sea level rise and pH changes from ocean acidification, all associated with greenhouse gas emissions. In 1998, 16% of the world's reefs died as a result of increased water temperature.
Approximately 10% of the world's coral reefs are dead. About 60% of the world's reefs are at risk due to human-related activities. The threat to reef health is particularly strong in Southeast Asia, where 80% of reefs are endangered. Over 50% of the world's coral reefs may be destroyed by 2030; as a result, most nations protect them through environmental laws.
In the Caribbean and tropical Pacific, direct contact between coral and roughly 40–70% of common seaweeds causes bleaching and death of the coral via transfer of lipid-soluble metabolites. Seaweed and algae proliferate given adequate nutrients and limited grazing by herbivores such as parrotfish.
Sufficiently large changes in water temperature or salinity can kill some species of coral. Under such environmental stresses, corals expel their Symbiodinium; without them, coral tissues reveal the white of their skeletons, an event known as coral bleaching.
Submarine springs found along the coast of Mexico's Yucatán Peninsula produce water with a naturally low pH (relatively high acidity) providing conditions similar to those expected to become widespread as the oceans absorb carbon dioxide. Surveys discovered multiple species of live coral that appeared to tolerate the acidity. The colonies were small and patchily distributed and had not formed structurally complex reefs such as those that compose the nearby Mesoamerican Barrier Reef System.
Coral health
To assess the threat level to corals, scientists developed a coral imbalance ratio: log(average abundance of disease-associated taxa / average abundance of healthy-associated taxa). The lower the ratio, the healthier the microbial community. This ratio was developed after the microbial mucus of corals was collected and studied.
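The computation itself is simple. Below is a minimal sketch in Python; the abundance values, taxon groupings, and the choice of base-10 logarithm are illustrative assumptions, not taken from any specific published pipeline.

import math

def coral_imbalance_ratio(disease_abundances, healthy_abundances):
    """Log-ratio of mean disease-associated to mean healthy-associated
    taxon abundances; lower (more negative) values suggest a healthier
    microbial community. Base-10 log is an assumption."""
    mean_disease = sum(disease_abundances) / len(disease_abundances)
    mean_healthy = sum(healthy_abundances) / len(healthy_abundances)
    return math.log10(mean_disease / mean_healthy)

# Hypothetical relative abundances from a coral mucus sample:
print(coral_imbalance_ratio([0.02, 0.01, 0.03], [0.20, 0.15, 0.25]))
# Negative output (about -1.0) indicates healthy-associated taxa dominate.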
Climate change impacts
Increasing sea surface temperatures in tropical regions over the last century have caused major coral bleaching, death, and therefore shrinking coral populations. Although corals are able to adapt and acclimate, it is uncertain whether this evolutionary process will happen quickly enough to prevent a major reduction of their numbers. Climate change also causes more frequent and more severe storms that can destroy coral reefs.
Annual growth bands in some corals, such as the deep sea bamboo corals (Isididae), may be among the first signs of the effects of ocean acidification on marine life. The growth rings allow geologists to construct year-by-year chronologies, a form of incremental dating, which underlie high-resolution records of past climatic and environmental changes using geochemical techniques.
Certain species form communities called microatolls, which are colonies whose top is dead and mostly above the water line, but whose perimeter is mostly submerged and alive. Average tide level limits their height. By analyzing the various growth morphologies, microatolls offer a low-resolution record of sea level change. Fossilized microatolls can also be dated using radiocarbon dating. Such methods can help to reconstruct Holocene sea levels.
Though corals have large sexually reproducing populations, their evolution can be slowed by abundant asexual reproduction. Gene flow is variable among coral species. According to the biogeography of coral species, gene flow cannot be counted on as a dependable source of adaptation, as corals are very stationary organisms. Coral longevity might also factor into their adaptivity.
However, adaptation to climate change has been demonstrated in many cases, usually due to a shift in coral and zooxanthellae genotypes. These shifts in allele frequency have progressed toward more tolerant types of zooxanthellae. Scientists have found that a certain zooxanthella of scleractinian corals is becoming more common where sea temperature is high. Symbionts able to tolerate warmer water seem to photosynthesise more slowly, implying an evolutionary trade-off.
In the Gulf of Mexico, where sea temperatures are rising, cold-sensitive staghorn and elkhorn coral have shifted in location.
Not only have the symbionts and specific species been shown to shift, but there also seems to be a certain growth rate favored by selection: slower-growing but more heat-tolerant corals have become more common. The changes in temperature and acclimation are complex. Some reefs in current shadows represent refugia that may help corals adjust to environmental disparities, even if temperatures there may eventually rise more quickly than in other locations. This separation of populations by climatic barriers causes the realized niche to shrink greatly in comparison to the old fundamental niche.
Geochemistry
Corals are shallow-water, colonial organisms that integrate oxygen and trace elements into their skeletal aragonite (a polymorph of calcium carbonate) crystalline structures as they grow. Geochemical anomalies within the crystalline structures of corals are functions of temperature, salinity, and oxygen isotopic composition. Such geochemical analysis can help with climate modeling. The ratio of oxygen-18 to oxygen-16 (δ18O), for example, is a proxy for temperature.
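For reference, δ18O follows the standard per-mil delta notation, expressing the deviation of a sample's isotope ratio from that of a reference standard (VPDB is the usual standard for carbonates):

\delta^{18}\mathrm{O} \;=\; \left( \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{per mil}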
Strontium/calcium ratio anomaly
Dates can be assigned to coral geochemical anomalies by correlating strontium/calcium minima with sea surface temperature (SST) maxima in data collected from the NINO 3.4 SSTA record.
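The underlying idea is that skeletal Sr/Ca varies approximately linearly, and inversely, with SST. The sketch below inverts such a calibration; the coefficients are illustrative placeholders, since real studies fit them per site and per species.

def srca_to_sst(sr_ca, a=10.5, b=-0.06):
    """Invert a linear Sr/Ca palaeothermometer of the form
    Sr/Ca = a + b * SST, with Sr/Ca in mmol/mol and SST in deg C.
    a and b are placeholder values, not a published calibration."""
    return (sr_ca - a) / b

# Lower Sr/Ca corresponds to warmer water because b < 0:
print(srca_to_sst(8.9))  # about 26.7 deg C with these placeholders
print(srca_to_sst(9.2))  # about 21.7 deg C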
Oxygen isotope anomaly
By comparing coral strontium/calcium minima with sea surface temperature maxima recorded in the NINO 3.4 SSTA data, time can be correlated with coral strontium/calcium and δ18O variations. To confirm the accuracy of the annual relationship between Sr/Ca and δ18O variations, a perceptible association with annual coral growth rings is used to confirm the age conversion. Geochronology is established by blending the Sr/Ca data, growth rings, and stable isotope data. The El Nino-Southern Oscillation (ENSO) is directly related to climate fluctuations that influence the coral δ18O ratio through local salinity variations associated with the position of the South Pacific Convergence Zone (SPCZ), and it can therefore be used for ENSO modeling.
Sea surface temperature and sea surface salinity
The global moisture budget is primarily influenced by tropical sea surface temperatures through the position of the Intertropical Convergence Zone (ITCZ). The Southern Hemisphere has a unique meteorological feature positioned in the southwestern Pacific Basin, the South Pacific Convergence Zone (SPCZ), which maintains a perennial position there. During warm ENSO periods, the SPCZ reverses orientation, extending from the equator south through the Solomon Islands, Vanuatu, and Fiji toward the French Polynesian islands, and due east toward South America, affecting the geochemistry of corals in tropical regions.
Geochemical analysis of coral skeletons can link seawater δ18O ratio anomalies in corals to the sea surface salinity (SSS) and sea surface temperature (SST) of tropical oceans, using El Nino 3.4 SSTA data. ENSO phenomena can then be related to variations in SSS and SST, which helps to model tropical climate activity.
Limited climate research on current species
Climate research on live coral species is limited to a few studied species. Porites corals provide a stable foundation for geochemical interpretation and are much simpler to sample physically than Platygyra, whose complex skeletal structure makes physical sampling difficult; Porites also provides one of the only multidecadal living coral records used for coral paleoclimate modeling.
Protection
Marine protected areas, biosphere reserves, marine parks, national monuments, world heritage status, fishery management and habitat protection can protect reefs from anthropogenic damage.
Many governments now prohibit removal of coral from reefs, and inform coastal residents about reef protection and ecology. While local action such as habitat restoration and herbivore protection can reduce local damage, the longer-term threats of acidification, temperature change and sea-level rise remain a challenge.
Protecting networks of diverse and healthy reefs, not only climate refugia, helps ensure the greatest chance of genetic diversity, which is critical for corals to adapt to new climates. A variety of conservation methods applied across marine and terrestrial threatened ecosystems makes coral adaptation more likely and effective.
To eliminate destruction of corals in their indigenous regions, projects have been started to grow corals in non-tropical countries.
Relation to humans
Local economies near major coral reefs benefit from an abundance of fish and other marine creatures as a food source. Reefs also provide recreational scuba diving and snorkeling tourism. These activities can damage coral but international projects such as Green Fins that encourage dive and snorkel centres to follow a Code of Conduct have been proven to mitigate these risks.
Jewelry
Corals' many colors give them appeal for necklaces and other jewelry. Intensely red coral is prized as a gemstone; although sometimes called fire coral, it is not the same as the stinging hydrozoan known as fire coral. Red coral is very rare because of overharvesting. In general, it is inadvisable to give coral as a gift, since corals are in decline from stressors such as climate change, pollution, and unsustainable fishing.
Always considered a precious mineral, "the Chinese have long associated red coral with auspiciousness and longevity because of its color and its resemblance to deer antlers (so by association, virtue, long life, and high rank)". It reached its height of popularity during the Manchu or Qing Dynasty (1644–1911), when it was almost exclusively reserved for the emperor's use, either in the form of coral beads (often combined with pearls) for court jewelry or as decorative penjing (decorative miniature mineral trees). Coral was known as shanhu in Chinese. The "early-modern 'coral network' [began in] the Mediterranean Sea [and found its way] to Qing China via the English East India Company". There were strict rules regarding its use in a code established by the Qianlong Emperor in 1759.
Medicine
In medicine, chemical compounds from corals can potentially be used to treat cancer, neurological diseases, inflammation (including arthritis), pain, bone loss, and high blood pressure, among other therapeutic uses. Coral skeletons, e.g. those of Isididae, are being researched for their potential near-future use in bone grafting in humans.
Coral Calx, known as Praval Bhasma in Sanskrit, is widely used in the traditional system of Indian medicine as a supplement in the treatment of a variety of bone metabolic disorders associated with calcium deficiency. In classical times, ingestion of pulverized coral, which consists mainly of the weak base calcium carbonate, was recommended by Galen and Dioscorides for calming stomach ulcers.
Construction
Coral reefs in places such as the East African coast are used as a source of building material. Ancient (fossil) coral limestone, notably including the Coral Rag Formation of the hills around Oxford (England), was once used as a building stone, and can be seen in some of the oldest buildings in that city including the Saxon tower of St Michael at the Northgate, St. George's Tower of Oxford Castle, and the medieval walls of the city.
Shoreline protection
Healthy coral reefs absorb 97 percent of a wave's energy, which buffers shorelines from currents, waves, and storms, helping to prevent loss of life and property damage. Coastlines protected by coral reefs are also more stable in terms of erosion than those without.
Local economies
Coastal communities near coral reefs rely heavily on them. Worldwide, more than 500 million people depend on coral reefs for food, income, coastal protection, and more. The total economic value of coral reef services in the United States – including fisheries, tourism, and coastal protection – is more than $3.4 billion a year.
Aquaria
The saltwater fishkeeping hobby has expanded, over recent years, to include reef tanks, fish tanks that include large amounts of live rock on which coral is allowed to grow and spread. These tanks are either kept in a natural-like state, with algae (sometimes in the form of an algae scrubber) and a deep sand bed providing filtration, or as "show tanks", with the rock kept largely bare of the algae and microfauna that would normally populate it, in order to appear neat and clean.
The most popular kind of coral kept is soft coral, especially zoanthids and mushroom corals, which are especially easy to grow and propagate in a wide variety of conditions, because they originate in enclosed parts of reefs where water conditions vary and lighting may be less reliable and direct. More serious fishkeepers may keep small polyp stony coral, which is from open, brightly lit reef conditions and therefore much more demanding, while large polyp stony coral is a sort of compromise between the two.
Aquaculture
Coral aquaculture, also known as coral farming or coral gardening, is the cultivation of corals for commercial purposes or coral reef restoration. Aquaculture is showing promise as a potentially effective tool for restoring coral reefs, which have been declining around the world. The process bypasses the early growth stages of corals when they are most at risk of dying. Coral fragments known as "seeds" are grown in nurseries then replanted on the reef. Coral is farmed by coral farmers who live locally to the reefs and farm for reef conservation or for income. It is also farmed by scientists for research, by businesses for the supply of the live and ornamental coral trade and by private aquarium hobbyists.
| Biology and health sciences | Cnidarians | null |
47710 | https://en.wikipedia.org/wiki/Cotton%20gin | Cotton gin | A cotton gin—meaning "cotton engine"—is a machine that quickly and easily separates cotton fibers from their seeds, enabling much greater productivity than manual cotton separation. The separated seeds may be used to grow more cotton or to produce cottonseed oil.
Handheld roller gins had been used in the Indian subcontinent since at earliest AD 500 and then in other regions. The Indian worm-gear roller gin was invented sometime around the 16th century and has, according to Lakwete, remained virtually unchanged up to the present time. A modern mechanical cotton gin was created by American inventor Eli Whitney in 1793 and patented in 1794.
Whitney's gin used a combination of a wire screen and small wire hooks to pull the cotton through, while brushes continuously removed the loose cotton lint to prevent jams. It revolutionized the cotton industry in the United States, but also inadvertently led to the growth of slavery in the American South. Whitney's gin made cotton farming more profitable and efficient, so plantation owners expanded their plantations and used more of their slaves to pick cotton. Whitney never invented a machine to harvest cotton: it still had to be picked by hand. The invention has thus been identified as an inadvertent contributing factor to the outbreak of the American Civil War. Modern automated cotton gins use multiple powered cleaning cylinders and saws, and offer far higher productivity than their hand-powered precursors.
Purpose
Cotton fibers are produced in the seed pods ("bolls") of the cotton plant where the fibers ("lint") in the bolls are tightly interwoven with seeds. To make the fibers usable, the seeds and fibers must first be separated, a task which had been previously performed manually, with production of cotton requiring hours of labor for the separation. Many simple seed-removing devices had been invented, but until the innovation of the cotton gin, most required significant operator attention and worked only on a small scale.
Mechanism
Whitney's gin is made with two rotating cylinders. The first cylinder has lines of teeth around the circumference, and angled against this cylinder is a metal plate with small holes, "ginning ribs", through which the teeth can fit with minimal gaps. The teeth grip the cotton fibers as the mechanism rotates, dragging them through these small holes. The seeds are too big to fit through the holes, and are thus removed from the rotating cotton by the metal plate, before they fall into a collecting pot. On the other side of the first cylinder, there is a second cylinder, also rotating, with brushes attached. This second cylinder wipes the cotton from the first, and deposits it into the collecting bucket.
History
A single-roller cotton gin came into use in India by the 5th century. An improvement invented in India was the two-roller gin, known as the "churka", "charki", or "wooden-worm-worked roller".
Early cotton gins
The earliest versions of the cotton gin consisted of a single roller made of iron or wood and a flat piece of stone or wood. The earliest evidence of the cotton gin is found in the fifth century, in the form of Buddhist paintings depicting a single-roller gin in the Ajanta Caves in western India. These early gins were difficult to use and required a great deal of skill. A narrow single roller was necessary to expel the seeds from the cotton without crushing the seeds. The design was similar to that of a mealing stone, which was used to grind grain. The early history of the cotton gin is ambiguous, because archeologists likely mistook the cotton gin's parts for other tools.
Medieval and Early Modern India
Between the 12th and 14th centuries, dual-roller gins appeared in India and China. The Indian version of the dual-roller gin was prevalent throughout the Mediterranean cotton trade by the 16th century. This mechanical device was, in some areas, driven by waterpower.
The worm gear roller gin, which was invented in the Indian subcontinent during the early Delhi Sultanate era of the 13th to 14th centuries, came into use in the Mughal Empire sometime around the 16th century, and is still used in the Indian subcontinent through to the present day. Another innovation, the incorporation of the crank handle in the cotton gin, first appeared sometime during the late Delhi Sultanate or the early Mughal Empire. The incorporation of the worm gear and crank handle into the roller cotton gin led to greatly expanded Indian cotton textile production during the Mughal era.
It was reported that, with an Indian cotton gin, which is half machine and half tool, one man and one woman could clean 28 pounds of cotton per day. With a modified Forbes version, one man and a boy could produce 250 pounds per day. If oxen were used to power 16 of these machines, and a few people's labor was used to feed them, they could produce as much work as 750 people did formerly.
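As a rough back-of-envelope comparison of these reported figures (a sketch in Python that simply takes the numbers above at face value; the implied hand rate in the last line is an inference, not a figure from the source):

# Throughput comparison using the figures reported above (pounds per day).
churka_rate = 28 / 2          # lb per worker-day, basic Indian gin (2 workers)
forbes_rate = 250 / 2         # lb per worker-day, modified Forbes gin (2 workers)
ox_powered_total = 16 * 250   # lb per day for sixteen ox-driven machines

print(f"churka: {churka_rate:.0f} lb per worker-day")
print(f"Forbes version: {forbes_rate:.0f} lb per worker-day")
# The claim that 16 machines equal the work of 750 people implies roughly:
print(f"implied hand rate: {ox_powered_total / 750:.1f} lb per person-day")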
United States
The Indian roller cotton gin, known as the churka or charkha, was introduced to the United States in the mid-18th century, when it was adopted in the southern United States. The device was adopted for cleaning long-staple cotton but was not suitable for the short-staple cotton that was more common in certain states such as Georgia. Several modifications were made to the Indian roller gin by Mr. Krebs in 1772 and Joseph Eve in 1788, but their uses remained limited to the long-staple variety, up until Eli Whitney's development of a short-staple cotton gin in 1793.
Eli Whitney's patent
Eli Whitney (1765–1825) applied for a patent on his cotton gin on October 28, 1793; the patent was granted on March 14, 1794, but was not validated until 1807. Whitney's patent was assigned patent number 72X. There is slight controversy over whether the idea of the modern cotton gin and its constituent elements is correctly attributed to Eli Whitney. The popular image of Whitney inventing the cotton gin is attributed to an article on the subject written in the early 1870s and later reprinted in 1910 in The Library of Southern Literature. In this article, the author claimed that Catharine Littlefield Greene suggested to Whitney the use of a brush-like component instrumental in separating out the seeds and cotton. Greene's alleged role in the invention of the gin has not been verified independently.
Whitney's cotton gin model consisted of a wooden cylinder covered by rows of slender wires which caught the fibers of the cotton bolls. Each row of wires passed through the bars of a comb-like grid, pulling the cotton fibers through the grid as it did so. The comb-like teeth of the grids were closely spaced, preventing the seeds, fragments of the hard dried calyx of the original cotton flower, and sticks and other debris attached to the fibers from passing through. A series of brushes on a second rotating cylinder then brushed the now-cleaned fibers loose from the wires, preventing the mechanism from jamming.
Many contemporary inventors attempted to develop a design that would process short staple cotton, and Hodgen Holmes, Robert Watkins, William Longstreet, and John Murray had all been issued patents for improvements to the cotton gin by 1796. However, the evidence indicates Whitney did invent the saw gin, for which he is famous. Although he spent many years in court attempting to enforce his patent against planters who made unauthorized copies, a change in patent law ultimately made his claim legally enforceable – too late for him to make much money from the device in the single year remaining before the patent expired.
McCarthy's gin
While Whitney's gin facilitated the cleaning of seeds from short-staple cotton, it damaged the fibers of extra-long staple cotton (Gossypium barbadense). In 1840, Fones McCarthy received a patent for a "Smooth Cylinder Cotton-gin", a roller gin. McCarthy's gin was marketed for use with both short-staple and extra-long staple cotton but was particularly useful for processing long-staple cotton. After McCarthy's patent expired in 1861, McCarthy-type gins were manufactured in Britain and sold around the world. McCarthy's gin was adopted for cleaning the Sea Island variety of extra-long staple cotton grown in Florida, Georgia and South Carolina. It cleaned cotton several times faster than the older gins and, when powered by one horse, produced 150 to 200 pounds of lint a day. The McCarthy gin used a reciprocating knife to detach seed from the lint. Vibration caused by the reciprocating motion limited the speed at which the gin could operate. In the middle of the 20th century, gins using a rotating blade replaced those using a reciprocating blade. These descendants of the McCarthy gin are the only gins now used for extra-long staple cotton in the United States.
Munger system gin
For a decade and a half after the end of the Civil War in 1865, a number of innovative features became widely used for ginning in the United States. They included steam power instead of animal power, an automatic feeder to assure that the gin stand ran smoothly, a condenser to make the clean cotton coming out of the gin easier to handle, and indoor presses so that cotton no longer had to be carried across the gin yard to be baled. Then, in 1879, while he was running his father's gin in Rutersville, Texas, Robert S. Munger invented additional system ginning techniques. Robert and his wife, Mary Collett, later moved to Mexia, Texas, built a system gin, and obtained related patents.
The Munger System Ginning Outfit (or system gin) integrated all the ginning operation machinery, thus assuring the cotton would flow through the machines smoothly. Such system gins use air to move cotton from machine to machine. Munger's motivation for his inventions included improving employee working conditions in the gin. However, the selling point for most gin owners was the accompanying cost savings while producing cotton both more speedily and of higher quality.
By the 1960s, many other advances had been made in ginning machinery, but the manner in which cotton flowed through the gin machinery continued to be the Munger system.
Economic historian William H. Phillips referred to the development of system ginning as "The Munger Revolution" in cotton ginning. He wrote:
"The Munger innovations were the culmination of what geographer Charles S. Aiken has termed the second ginning revolution, in which the privately owned plantation gins were replaced by large-scale public ginneries. This revolution, in turn, led to a major restructuring of the cotton gin industry, as the small, scattered gin factories and shops of the nineteenth century gave way to a dwindling number of large twentieth-century corporations designing and constructing entire ginning operations."
One of the few (and perhaps only) examples of a Munger gin left in existence is on display at Frogmore Plantation in Louisiana.
Effects in the United States
Prior to the introduction of the mechanical cotton gin, cotton had required considerable labor to clean and separate the fibers from the seeds. With Eli Whitney's gin, cotton became a tremendously profitable business, creating many fortunes in the Antebellum South. Cities such as New Orleans, Louisiana; Mobile, Alabama; Charleston, South Carolina; and Galveston, Texas became major shipping ports, deriving substantial economic benefit from cotton raised throughout the South. Additionally, the greatly expanded supply of cotton created strong demand for textile machinery and improved machine designs that replaced wooden parts with metal. This led to the invention of many machine tools in the early 19th century.
The invention of the cotton gin caused massive growth in the production of cotton in the United States, concentrated mostly in the South. Cotton production expanded from 750,000 bales in 1830 to 2.85 million bales in 1850. As a result, the region became even more dependent on plantations that used black slave labor, with plantation agriculture becoming the largest sector of its economy. While it took a single laborer about ten hours to separate a single pound of fiber from the seeds, a team of two or three slaves using a cotton gin could produce around fifty pounds of cotton in just one day. The number of slaves rose in concert with the increase in cotton production, increasing from around 700,000 in 1790 to around 3.2 million in 1850. The invention of the cotton gin led to increased demands for slave labor in the American South, reversing the economic decline that had occurred in the region during the late 18th century. The cotton gin thus "transformed cotton as a crop and the American South into the globe's first agricultural powerhouse".
The invention of the cotton gin led to an increase in the use of slaves on Southern plantations. Because of that inadvertent effect on American slavery, which ensured that the South's economy developed in the direction of plantation-based agriculture (while encouraging the growth of the textile industry elsewhere, such as in the North), the invention of the cotton gin is frequently cited as one of the indirect causes of the American Civil War.
Modern cotton gins
In modern cotton production, cotton arrives at industrial cotton gins either in trailers, in compressed rectangular "modules" weighing up to 10 metric tons each, or in polyethylene-wrapped round modules, similar to a bale of hay, produced during the picking process by the most recent generation of cotton pickers. Trailer cotton (i.e. cotton not compressed into modules) arriving at the gin is sucked in via a wide pipe that is swung over the cotton. This pipe is usually manually operated but is increasingly automated in modern cotton plants. The need for trailers to haul the product to the gin has been drastically reduced since the introduction of modules. If the cotton is shipped in modules, the module feeder breaks the modules apart using spiked rollers and extracts the largest pieces of foreign material from the cotton. The module feeder's loose cotton is then sucked into the same starting point as the trailer cotton.
The cotton then enters a dryer, which removes excess moisture. The cylinder cleaner uses six or seven rotating, spiked cylinders to break up large clumps of cotton. Finer foreign material, such as soil and leaves, passes through rods or screens for removal. The stick machine uses centrifugal force to remove larger foreign matter, such as sticks and burrs, while the cotton is held by rapidly rotating saw cylinders.
The gin stand uses the teeth of rotating saws to pull the cotton through a series of "ginning ribs", which pull the fibers from the seeds, which are too large to pass through the ribs. The cleaned seed is then removed from the gin via an auger conveyor system. The seed is reused for planting or is sent to an oil mill to be further processed into cottonseed oil and cottonseed meal. The lint cleaners again use saws and grid bars, this time to separate immature seeds and any remaining foreign matter from the fibers. The bale press then compresses the cotton into bales for storage and shipping. Modern gins can process up to 15 tonnes (33,000 lb) of cotton per hour.
Modern cotton gins create a substantial amount of cotton gin residue (CGR), consisting of sticks, leaves, dirt, immature bolls, and cottonseed. Research is currently under way to investigate the use of this waste in producing ethanol. Fluctuations in the chemical composition of the residue make it difficult to create a consistent ethanol process, but there is potential to further maximize the utilization of waste in cotton production.
| Technology | Farm and garden machinery | null |
47713 | https://en.wikipedia.org/wiki/Direct%20current | Direct current | Direct current (DC) is one-directional flow of electric charge. An electrochemical cell is a prime example of DC power. Direct current may flow through a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. The electric current flows in a constant direction, distinguishing it from alternating current (AC). A term formerly used for this type of current was galvanic current.
The abbreviations AC and DC are often used to mean simply alternating and direct, as when they modify current or voltage.
Direct current may be converted from an alternating current supply by use of a rectifier, which contains electronic elements (usually) or electromechanical elements (historically) that allow current to flow only in one direction. Direct current may be converted into alternating current via an inverter.
Direct current has many uses, from the charging of batteries to large power supplies for electronic systems, motors, and more. Very large quantities of electrical energy provided via direct current are used in the smelting of aluminum and in other electrochemical processes. It is also used for some railways, especially in urban areas. High-voltage direct current is used to transmit large amounts of power from remote generation sites or to interconnect alternating current power grids.
History
Direct current was produced in 1800 by Italian physicist Alessandro Volta's battery, his Voltaic pile. The nature of how current flowed was not yet understood. French physicist André-Marie Ampère conjectured that current travelled in one direction from positive to negative. When French instrument maker Hippolyte Pixii built the first dynamo electric generator in 1832, he found that as the magnet used passed the loops of wire each half turn, it caused the flow of electricity to reverse, generating an alternating current. At Ampère's suggestion, Pixii later added a commutator, a type of "switch" where contacts on the shaft work with "brush" contacts to produce direct current.
The late 1870s and early 1880s saw electricity starting to be generated at power stations. These were initially set up to power arc lighting (a popular type of street lighting) running on very high voltage (usually higher than 3,000 volts) direct current or alternating current. This was followed by the widespread use of low-voltage direct current for indoor electric lighting in businesses and homes after inventor Thomas Edison launched his incandescent-bulb-based electric "utility" in 1882. Because of the significant advantages of alternating current over direct current in using transformers to raise and lower voltages to allow much longer transmission distances, direct current was replaced over the next few decades by alternating current in power delivery. In the mid-1950s, high-voltage direct current transmission was developed, and it is now an option instead of long-distance high-voltage alternating current systems. For long-distance undersea cables (e.g. between countries, such as NorNed), DC is the only technically feasible option. For applications requiring direct current, such as third-rail power systems, alternating current is distributed to a substation, which utilizes a rectifier to convert the power to direct current.
Various definitions
The term DC is used to refer to power systems that use only one electrical polarity of voltage or current, and to refer to the constant, zero-frequency, or slowly varying local mean value of a voltage or current. For example, the voltage across a DC voltage source is constant, as is the current through a DC current source. The DC solution of an electric circuit is the solution where all voltages and currents are constant. Any stationary voltage or current waveform can be decomposed into a sum of a DC component and a zero-mean time-varying component; the DC component is defined to be the expected value, or the average value of the voltage or current over all time.
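Written out, the DC component of a voltage waveform v(t) is its long-run time average (a standard definition):

V_{\mathrm{DC}} \;=\; \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} v(t)\,\mathrm{d}t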
Although DC stands for "direct current", DC often refers to "constant polarity". Under this definition, DC voltages can vary in time, as seen in the raw output of a rectifier or the fluctuating voice signal on a telephone line.
Some forms of DC (such as that produced by a voltage regulator) have almost no variations in voltage, but may still have variations in output power and current.
Circuits
A direct current circuit is an electrical circuit that consists of any combination of constant voltage sources, constant current sources, and resistors. In this case, the circuit voltages and currents are independent of time. A particular circuit voltage or current does not depend on the past value of any circuit voltage or current. This implies that the system of equations that represent a DC circuit do not involve integrals or derivatives with respect to time.
If a capacitor or inductor is added to a DC circuit, the resulting circuit is not, strictly speaking, a DC circuit. However, most such circuits have a DC solution. This solution gives the circuit voltages and currents when the circuit is in DC steady state. Such a circuit is represented by a system of differential equations. The solution to these equations usually contains a time-varying (transient) part as well as a constant (steady-state) part; it is this steady-state part that is the DC solution. There are some circuits that do not have a DC solution. Two simple examples are a constant current source connected to a capacitor and a constant voltage source connected to an inductor.
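As a minimal sketch of a DC solution, consider a hypothetical series circuit of a constant voltage source, a resistor, and a capacitor (all values below are assumed for illustration). In DC steady state the capacitor carries no current, so the circuit current is zero and the full source voltage appears across the capacitor, while the transient part decays with time constant RC.

    # DC steady state of a series R-C circuit driven by a constant source.
    # All component values are illustrative assumptions.
    V, R, C = 10.0, 1_000.0, 1e-6    # volts, ohms, farads

    i_steady = 0.0        # a capacitor blocks DC: no steady-state current flows
    v_cap = V             # all of the source voltage ends up across the capacitor
    tau = R * C           # the transient part decays as exp(-t/tau)
    print(f"steady state: I = {i_steady} A, V_C = {v_cap} V, tau = {tau * 1e3:.1f} ms")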
In electronics, it is common to refer to a circuit that is powered by a DC voltage source such as a battery or the output of a DC power supply as a DC circuit even though what is meant is that the circuit is DC powered.
In a DC circuit, a power source (e.g. a battery or charged capacitor) has a positive and a negative terminal, and likewise the load has a positive and a negative terminal. To complete the circuit, positive charge flows from the positive terminal of the power source, through the load to the load's negative terminal, and from there back to the negative terminal of the source. If either the positive or negative terminal is disconnected, the circuit is not complete and no charge flows.
In some DC circuit applications, polarity does not matter: the positive and negative connections may be swapped and the circuit will still be complete and the load will still function normally. In most DC applications, however, polarity does matter, and connecting the circuit backwards will result in the load not working properly.
Applications
Domestic and commercial buildings
DC is commonly found in many extra-low voltage applications and some low-voltage applications, especially where these are powered by batteries or solar power systems (since both can produce only DC).
Most electronic circuits or devices require a DC power supply.
Domestic DC installations usually have different types of sockets, connectors, switches, and fixtures from those suitable for alternating current. This is mostly due to the lower voltages used, resulting in higher currents to produce the same amount of power.
It is usually important with a DC appliance to observe polarity, unless the device has a diode bridge to correct for this.
Automotive
Most automotive applications use DC. An automotive battery provides power for engine starting, lighting, the ignition system, the climate controls, and the infotainment system among others. The alternator is an AC device which uses a rectifier to produce DC for battery charging. Most highway passenger vehicles use nominally 12 V systems. Many heavy trucks, farm equipment, and earth-moving equipment with diesel engines use 24 volt systems. In some older vehicles, 6 V was used, such as in the original classic Volkswagen Beetle. At one point a 42 V electrical system was considered for automobiles, but this found little use. To save weight and wire, often the metal frame of the vehicle is connected to one pole of the battery and used as the return conductor in a circuit. Often the negative pole is the chassis "ground" connection, but positive ground may be used in some wheeled or marine vehicles.
In a battery electric vehicle, there are usually two separate DC systems. The "low voltage" DC system typically operates at 12 V, and serves the same purpose as in an internal combustion engine vehicle. The "high voltage" system operates at 300–400 V (depending on the vehicle), and provides the power for the traction motors. Increasing the voltage for the traction motors reduces the current flowing through them, increasing efficiency.
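As an illustrative calculation (the motor power and cable resistance below are assumed values, not figures from this article), current scales as I = P/V for a fixed power draw, and resistive cable loss scales as I²R, which is why traction power is carried at a few hundred volts rather than 12 V.

    # Current and cable loss for the same power at different system voltages.
    # The traction power and cable resistance are hypothetical assumptions.
    P = 150_000.0      # W, assumed traction power draw
    R_cable = 0.01     # ohms, assumed round-trip cable resistance

    for V in (12.0, 400.0):
        I = P / V                    # current needed at this voltage
        loss = I ** 2 * R_cable      # resistive loss in the cabling
        print(f"{V:6.0f} V: {I:9.1f} A, cable loss {loss / 1e3:8.1f} kW")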
Telecommunication
Telephone exchange communication equipment uses a standard −48 V DC power supply. The negative polarity is achieved by grounding the positive terminal of the power supply system and the battery bank. This is done to prevent electrolysis depositions. Telephone installations have a battery system to ensure power is maintained for subscriber lines during power interruptions.
Other devices may be powered from the telecommunications DC system using a DC-DC converter to provide any convenient voltage.
Many telephones connect to a twisted pair of wires, and use a bias tee to internally separate the AC component of the voltage between the two wires (the audio signal) from the DC component of the voltage between the two wires (used to power the phone).
High-voltage power transmission
High-voltage direct current (HVDC) electric power transmission systems use DC for the bulk transmission of electrical power, in contrast with the more common alternating current systems. For long-distance transmission, HVDC systems may be less expensive and suffer lower electrical losses.
Other
Applications using fuel cells (mixing hydrogen and oxygen together with a catalyst to produce electricity and water as byproducts) also produce only DC.
Light aircraft electrical systems are typically 12 V or 24 V DC similar to automobiles.
| Physical sciences | Electrical circuits | null |
47716 | https://en.wikipedia.org/wiki/High-voltage%20direct%20current | High-voltage direct current | A high-voltage direct current (HVDC) electric power transmission system uses direct current (DC) for electric power transmission, in contrast with the more common alternating current (AC) transmission systems. Most HVDC links use voltages between 100 kV and 800 kV.
HVDC lines are commonly used for long-distance power transmission, since they require fewer conductors and incur less power loss than equivalent AC lines. HVDC also allows power transmission between AC transmission systems that are not synchronized. Since the power flow through an HVDC link can be controlled independently of the phase angle between source and load, it can stabilize a network against disturbances due to rapid changes in power. HVDC also allows the transfer of power between grid systems running at different frequencies, such as 50 and 60 Hz. This improves the stability and economy of each grid, by allowing the exchange of power between previously incompatible networks.
The modern form of HVDC transmission uses technology developed extensively in the 1930s in Sweden (ASEA) and in Germany. Early commercial installations included one in the Soviet Union in 1951 between Moscow and Kashira, and a 100 kV, 20 MW system between Gotland and mainland Sweden in 1954. Before the Chinese project of 2019, the longest HVDC link in the world was the Rio Madeira link in Brazil, which consists of two bipoles of ±600 kV, 3150 MW each, connecting Porto Velho in the state of Rondônia to the São Paulo area with a length of more than .
High voltage transmission
High voltage is used for electric power transmission to reduce the energy lost in the resistance of the wires. For a given quantity of power transmitted, doubling the voltage will deliver the same power at only half the current:

P = V × I, hence I = P / V.
Since the energy lost as heat in the wires is directly proportional to the square of the current, using half the current at double the voltage reduces the line losses by a factor of 4. While energy lost in transmission can also be reduced by decreasing the resistance by increasing the conductor size, larger conductors are heavier and more expensive.
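A short numeric check of this factor-of-4 claim (the transmitted power and line resistance are assumed for illustration only):

    # Doubling the voltage halves the current and quarters the I^2*R line loss.
    # The power and line resistance below are illustrative assumptions.
    P = 1_000e6       # W transmitted
    R_line = 10.0     # ohms, assumed total line resistance

    losses = []
    for V in (400e3, 800e3):
        I = P / V                       # half the current at double the voltage
        losses.append(I ** 2 * R_line)
        print(f"{V / 1e3:4.0f} kV: loss = {losses[-1] / 1e6:6.2f} MW "
              f"({100 * losses[-1] / P:.2f}% of transmitted power)")
    print(f"loss ratio = {losses[1] / losses[0]:.2f}")   # 0.25, a factor of 4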
High voltage cannot readily be used for lighting or motors, so transmission-level voltages must be reduced for end-use equipment. Transformers are used to change the voltage levels in alternating current (AC) transmission circuits, but cannot pass DC current. Transformers made AC voltage changes practical, and AC generators were more efficient than those using DC. These advantages led to early low-voltage DC transmission systems being supplanted by AC systems around the turn of the 20th century.
Practical conversion of current between AC and DC became possible with the development of power electronics devices such as mercury-arc valves and, starting in the 1970s, power semiconductor devices including thyristors, integrated gate-commutated thyristors (IGCTs), MOS-controlled thyristors (MCTs) and insulated-gate bipolar transistors (IGBT).
History
Electromechanical systems
The first long-distance transmission of electric power was demonstrated using direct current in 1882 at Miesbach-Munich Power Transmission, but only 1.5 kW was transmitted. An early method of HVDC transmission was developed by the Swiss engineer René Thury and his method, the Thury system, was put into practice by 1889 in Italy by the Acquedotto De Ferrari-Galliera company. This system used series-connected motor-generator sets to increase the voltage. Each set was insulated from electrical ground and driven by insulated shafts from a prime mover. The transmission line was operated in a constant-current mode, with up to 5,000 volts across each machine, some machines having double commutators to reduce the voltage on each commutator. This system transmitted 630 kW at 14 kV DC over a distance of . The Moutiers–Lyon system transmitted 8,600 kW of hydroelectric power a distance of , including of underground cable. This system used eight series-connected generators with dual commutators for a total voltage of 150 kV between the positive and negative poles, and operated from 1906 until 1936. Fifteen Thury systems were in operation by 1913. Other Thury systems operating at up to 100 kV DC worked into the 1930s, but the rotating machinery required high maintenance and had high energy loss.
Various other electromechanical devices were tested during the first half of the 20th century with little commercial success. One technique attempted for conversion of direct current from a high transmission voltage to lower utilization voltage was to charge series-connected batteries, then reconnect the batteries in parallel to serve distribution loads. While at least two commercial installations were tried around the turn of the 20th century, the technique was not generally useful owing to the limited capacity of batteries, difficulties in switching between series and parallel configurations, and the inherent energy inefficiency of a battery charge/discharge cycle.
Mercury arc valves
First proposed in 1914, the grid controlled mercury-arc valve became available during the period 1920 to 1940 for the rectifier and inverter functions associated with DC transmission. Starting in 1932, General Electric tested mercury-vapor valves and a 12 kV DC transmission line, which also served to convert 40 Hz generation to serve 60 Hz loads, at Mechanicville, New York. In 1941, a 60 MW, ±200 kV, buried cable link, known as the Elbe-Project, was designed for the city of Berlin using mercury arc valves but, owing to the collapse of the German government in 1945, the project was never completed. The nominal justification for the project was that, during wartime, a buried cable would be less conspicuous as a bombing target. The equipment was moved to the Soviet Union and was put into service there as the Moscow–Kashira HVDC system. The Moscow–Kashira system and the 1954 connection by Uno Lamm's group at ASEA between the mainland of Sweden and the island of Gotland marked the beginning of the modern era of HVDC transmission.
Mercury arc valves were common in systems designed up to 1972, the last mercury arc HVDC system (the Nelson River Bipole 1 system in Manitoba, Canada) having been put into service in stages between 1972 and 1977. Since then, all mercury arc systems have been either shut down or converted to use solid-state devices. The last HVDC system to use mercury arc valves was the Inter-Island HVDC link between the North and South Islands of New Zealand, which used them on one of its two poles. The mercury arc valves were decommissioned on 1 August 2012, ahead of the commissioning of replacement thyristor converters.
Thyristor valves
The development of thyristor valves for HVDC began in the late 1960s. The first complete HVDC scheme based on thyristors was the Eel River scheme in Canada, which was built by General Electric and went into service in 1972.
Since 1977, new HVDC systems have used solid-state devices, in most cases thyristors. Like mercury arc valves, thyristors require connection to an external AC circuit in HVDC applications to turn them on and off. HVDC using thyristors is also known as line-commutated converter (LCC) HVDC.
On March 15, 1979, a 1,920 MW thyristor-based direct current connection between Cabora Bassa and Johannesburg was energized. The conversion equipment was built in 1974 by Allgemeine Elektricitäts-Gesellschaft AG (AEG), with Brown, Boveri & Cie (BBC) and Siemens as partners in the project. Service interruptions of several years were a result of a civil war in Mozambique. The transmission voltage of ±533 kV was the highest in the world at the time.
Capacitor-commutated converters
Line-commutated converters have some limitations in their use for HVDC systems. This results from requiring a period of reverse voltage to effect the turn-off. An attempt to address these limitations is the capacitor-commutated converter (CCC), which has series capacitors inserted into the AC line connections. CCC has remained only a niche application because of the advent of voltage-source converters (VSCs), which more directly address turn-off issues.
Voltage-source converters
Widely used in motor drives since the 1980s, voltage-source converters (VSCs) started to appear in HVDC in 1997 with the experimental Hellsjön–Grängesberg project in Sweden. By the end of 2011, this technology had captured a significant proportion of the HVDC market.
The development of higher-rated insulated-gate bipolar transistors (IGBTs), gate turn-off thyristors (GTOs), and integrated gate-commutated thyristors (IGCTs) has made HVDC systems more economical and reliable. This is because modern IGBTs incorporate a short-circuit failure mode: should an IGBT fail, it fails into a mechanically shorted state. Therefore, modern VSC HVDC converter stations are designed with sufficient redundancy to guarantee operation over their entire service lives. The manufacturer ABB Group calls this concept HVDC Light, while Siemens calls a similar concept HVDC PLUS (Power Link Universal System) and Alstom calls its product based upon this technology HVDC MaxSine. They have extended the use of HVDC down to blocks as small as a few tens of megawatts and overhead lines as short as a few dozen kilometers. There are several different variants of VSC technology: most installations built until 2012 use pulse-width modulation in a circuit that is effectively an ultra-high-voltage motor drive. More recent installations, including HVDC PLUS and HVDC MaxSine, are based on variants of a converter called a modular multilevel converter (MMC).
Multilevel converters have the advantage that they allow harmonic filtering equipment to be reduced or eliminated altogether. By way of comparison, AC harmonic filters of typical line-commutated converter stations cover nearly half of the converter station area.
With time, voltage-source converter systems will probably replace all installed simple thyristor-based systems, including the highest DC power transmission applications.
Comparison with AC
Advantages
A long-distance, point-to-point HVDC transmission scheme generally has lower overall investment cost and lower losses than an equivalent AC transmission scheme. Although HVDC conversion equipment at the terminal stations is costly, the total DC transmission-line costs over long distances are lower than for an AC line of the same distance. HVDC requires less conductor per unit distance than an AC line, as there is no need to support three phases and there is no skin effect. AC systems use a higher peak voltage for the same power, increasing insulator costs.
Depending on voltage level and construction details, HVDC transmission losses are quoted at 3.5% per , about 50% less than the 6.7% of AC lines at the same voltage. This is because direct current transfers only active power and thus causes lower losses than alternating current, which transfers both active and reactive power. Transmitting AC power over long distances inevitably introduces a phase shift φ between voltage and current, which reduces the effective (real) power, P = V·I·cos φ. Since DC has no phase, no such reduction occurs in the DC case.
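The contrast can be illustrated numerically: AC delivers real power P = V·I·cos φ, while DC (with no phase shift) delivers P = V·I. The RMS voltage, current, and phase angle in this Python sketch are assumed values.

    # Real power for AC with a phase shift vs. DC (phase shift of zero).
    # Voltage, current, and the 25-degree phase angle are assumed values.
    import math

    V_rms, I_rms = 400e3, 1000.0
    for phi_deg in (0.0, 25.0):      # 0 degrees corresponds to the DC case
        P = V_rms * I_rms * math.cos(math.radians(phi_deg))
        print(f"phase shift {phi_deg:4.1f} deg: real power = {P / 1e6:6.1f} MW")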
HVDC transmission may also be selected for other technical benefits. HVDC can transfer power between separate AC networks. HVDC power flow between separate AC systems can be automatically controlled to support either network during transient conditions, but without the risk that a major power-system collapse in one network will lead to a collapse in the second. The controllability feature is also useful where control of energy trading is needed.
Specific applications where HVDC transmission technology provides benefits include:
Undersea-cable transmission schemes (e.g. the North Sea Link, the NorNed cable between Norway and the Netherlands, Italy's SAPEI cable between Sardinia and the mainland, the Basslink between the Australian mainland and Tasmania, and the Baltic Cable between Sweden and Germany).
Endpoint-to-endpoint long-haul bulk power transmission without intermediate taps, usually to connect a remote generating plant to the main grid (e.g. the Nelson River DC Transmission System in Canada).
Increasing the capacity of an existing transmission line in situations where additional wires are difficult or expensive to install.
Power transmission and stabilization between unsynchronized AC networks, with the extreme example being an ability to transfer power between countries that use AC at different frequencies.
Stabilizing a predominantly AC power grid, without increasing prospective short-circuit current.
Integration of renewable resources such as wind into the main transmission grid. HVDC overhead lines for onshore wind integration projects and HVDC cables for offshore projects have been proposed in North America and Europe for both technical and economic reasons. DC grids with multiple VSCs are one of the technical solutions for pooling offshore wind energy and transmitting it to load centers located far away onshore.
Cable systems
Long undersea or underground high-voltage cables have a high electrical capacitance compared with overhead transmission lines, since the live conductors within the cable are surrounded by a relatively thin layer of insulation (the dielectric) and a metal sheath. The geometry is that of a long coaxial capacitor, and the total capacitance increases with the length of the cable. This capacitance is in a parallel circuit with the load. Where alternating current is used for cable transmission, additional current must flow in the cable to charge this cable capacitance. Equivalently, the capacitance causes a phase shift between voltage and current, reducing the real power transmitted. Additional energy losses also occur as a result of dielectric losses in the cable insulation. For a sufficiently long AC cable, the entire current-carrying ability of the conductor would be needed to supply the charging current alone. This cable capacitance issue limits the length and power-carrying ability of AC power cables.
However, if direct current is used, the cable capacitance is charged only when the cable is first energized or if the voltage level changes; there is no additional current required. DC powered cables are limited only by their temperature rise and Ohm's law. Although some leakage current flows through the dielectric insulator, this effect is also present in AC systems and is small compared to the cable's rated current.
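To make the capacitance limit concrete, the charging current of a cable of capacitance C at voltage V and frequency f is I = 2πfCV, growing linearly with length. The Python sketch below uses assumed, typical-order values to estimate the length at which the charging current alone consumes the cable's entire current rating; under DC this term vanishes.

    # AC charging current of a long cable: I_charge = 2*pi*f*C*V.
    # All numeric values are assumptions chosen only for illustration.
    import math

    C_per_km = 0.2e-6   # F/km, assumed capacitance per kilometre
    V = 220e3           # V, assumed conductor-to-sheath voltage
    f = 50.0            # Hz
    I_rated = 1000.0    # A, assumed thermal rating of the conductor

    for length_km in (50, 100, 200):
        I_charge = 2 * math.pi * f * C_per_km * length_km * V
        print(f"{length_km:4d} km: charging current ~ {I_charge:7.1f} A")

    L_max = I_rated / (2 * math.pi * f * C_per_km * V)
    print(f"charging current alone reaches the rating at ~ {L_max:.0f} km")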
Overhead line systems
The capacitive effect of long underground or undersea cables in AC transmission applications also applies to AC overhead lines, although to a much lesser extent. Nevertheless, for a long AC overhead transmission line, the current flowing just to charge the line capacitance can be significant, and this reduces the capability of the line to carry useful current to the load at the remote end. Another factor that reduces the useful current-carrying ability of AC lines is the skin effect, which causes a nonuniform distribution of current over the cross-sectional area of the conductor. Transmission line conductors operating with direct current suffer from neither constraint. Therefore, for the same conductor losses (or heating effect), a given conductor can carry more power to the load when operating with HVDC than AC.
Finally, depending upon the environmental conditions and the performance of overhead line insulation operating with HVDC, it may be possible for a given transmission line to operate with a constant HVDC voltage that is approximately the same as the peak AC voltage for which it is designed and insulated. The power delivered in an AC system is defined by the root mean square (RMS) of an AC voltage, but RMS is only about 71% of the peak voltage. Therefore, if the HVDC line can operate continuously with an HVDC voltage that is the same as the peak voltage of the AC equivalent line, then for a given current (where HVDC current is the same as the RMS current in the AC line), the power transmission capability when operating with HVDC is approximately 40% higher than the capability when operating with AC.
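A quick check of the figures in this paragraph: for a sinusoid, V_rms = V_peak/√2 ≈ 0.707·V_peak, so a DC line run at the AC peak voltage, carrying the same current as the AC RMS current, delivers √2 ≈ 1.414 times the power.

    # RMS vs. peak for a sine wave, and the resulting HVDC power advantage.
    import math

    rms_over_peak = 1 / math.sqrt(2)     # ~0.707, i.e. about 71%
    power_gain = math.sqrt(2) - 1        # ~0.414, i.e. roughly 40% more power
    print(f"RMS is {100 * rms_over_peak:.0f}% of the peak voltage")
    print(f"DC at peak voltage carries ~{100 * power_gain:.0f}% more power")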
Asynchronous connections
Because HVDC allows power transmission between unsynchronized AC distribution systems, it can help increase system stability, by preventing cascading failures from propagating from one part of a wider power transmission grid to another. Changes in load that would cause portions of an AC network to become unsynchronized and to separate, would not similarly affect a DC link, and the power flow through the DC link would tend to stabilize the AC network. The magnitude and direction of power flow through a DC link can be directly controlled and changed as needed to support the AC networks at either end of the DC link.
Disadvantages
The disadvantages of HVDC are in conversion, switching, control, availability, and maintenance.
HVDC is less reliable and has lower availability than alternating current (AC) systems, mainly due to the extra conversion equipment. Single-pole systems have availability of about 98.5%, with about a third of the downtime unscheduled due to faults. Fault-tolerant bipole systems provide high availability for 50% of the link capacity, but availability of the full capacity is about 97% to 98%.
The required converter stations are expensive and have limited overload capacity. At smaller transmission distances, the losses in the converter stations may be bigger than in an AC transmission line for the same distance. The cost of the converters may not be offset by reductions in line construction cost and power line loss.
Operating an HVDC scheme requires many spare parts to be kept, often exclusively for one system, as HVDC systems are less standardized than AC systems and technology changes more quickly.
In contrast to AC systems, realizing multi-terminal systems is complex (especially with line-commutated converters), as is expanding existing schemes to multi-terminal systems. Controlling power flow in a multi-terminal DC system requires good communication between all the terminals; power flow must be actively regulated by the converter control system instead of relying on the inherent impedance and phase angle properties of an AC transmission line. Multi-terminal systems are therefore rare; only two are in service: the Quebec – New England Transmission between Radisson, Sandy Pond, and Nicolet, and the Sardinia–mainland Italy link, which was modified in 1989 to also provide power to the island of Corsica.
High-voltage DC circuit breaker
HVDC circuit breakers are difficult to build because of arcing: under AC, the voltage inverts and in doing so crosses zero volts dozens of times a second. An AC arc will self-extinguish at one of these zero-crossing points because there cannot be an arc where there is no potential difference. DC will never cross zero volts and never self-extinguish, so arc distance and duration are far greater with DC than with AC at the same voltage. This means some mechanism must be included in the circuit breaker to force current to zero and extinguish the arc, otherwise arcing and contact wear would be too great to allow reliable switching.
In November 2012, ABB announced the first ultrafast HVDC circuit breaker. Mechanical circuit breakers are too slow for use in HVDC grids, although they have been used for years in other applications. Conversely, semiconductor breakers are fast enough but have a high resistance when conducting, wasting energy and generating heat in normal operation. The ABB breaker combines semiconductor and mechanical breakers to produce a hybrid breaker with both a fast break time and a low resistance in normal operation.
Costs
Generally, vendors of HVDC systems, such as GE Vernova, Siemens and ABB, do not specify pricing details of particular projects; such costs are typically proprietary information between the supplier and the client. Costs vary widely depending on the specifics of the project (such as power rating, circuit length, overhead vs. cabled route, land costs, site seismology, and AC network improvements required at either terminal). A detailed analysis of DC vs. AC transmission costs may be required in situations where there is no obvious technical advantage to DC, and economical reasoning alone drives the selection.
However, some practitioners have provided some information:
An April 2010 announcement for a 2,000 MW line between Spain and France is estimated at €700 million. This includes the cost of a tunnel through the Pyrenees.
Conversion process
Converter
At the heart of an HVDC converter station, the equipment that performs the conversion between AC and DC is referred to as the converter. Almost all HVDC converters are inherently capable of converting from AC to DC (rectification) and from DC to AC (inversion), although in many HVDC systems, the system as a whole is optimized for power flow in only one direction. Irrespective of how the converter itself is designed, the station that is operating (at a given time) with power flow from AC to DC is referred to as the rectifier and the station that is operating with power flow from DC to AC is referred to as the inverter.
Early HVDC systems used electromechanical conversion (the Thury system) but all HVDC systems built since the 1940s have used electronic converters. Electronic converters for HVDC are divided into two main categories:
Line-commutated converters
Voltage-sourced converters
Line-commutated converters
Most of the HVDC systems in operation today are based on line-commutated converters (LCCs).
The basic LCC configuration uses a three-phase bridge rectifier known as a six-pulse bridge, containing six electronic switches, each connecting one of the three phases to one of the two DC rails. A complete switching element is usually referred to as a valve, irrespective of its construction. However, with a phase change only every 60°, considerable harmonic distortion is produced at both the DC and AC terminals when this arrangement is used.
An enhancement of this arrangement uses 12 valves in a twelve-pulse bridge. The AC is split into two separate three-phase supplies before transformation. One of the sets of supplies is then configured to have a star (wye) secondary, and the other a delta secondary, establishing a 30° phase difference between the two sets of three phases. With twelve valves connecting each of the two sets of three phases to the two DC rails, there is a phase change every 30°, and harmonics are considerably reduced. For this reason, the twelve-pulse system has become standard on most line-commutated converter HVDC systems built since the 1970s.
With line-commutated converters, the converter has only one degree of freedom: the firing angle, which represents the time delay between the voltage across a valve becoming positive (at which point the valve would start to conduct if it were made from diodes) and the thyristors being turned on. The DC output voltage of the converter steadily becomes less positive as the firing angle is increased: firing angles of up to 90° correspond to rectification and result in positive DC voltages, while firing angles above 90° correspond to inversion and result in negative DC voltages. The practical upper limit for the firing angle is about 150–160°, because above this the valve would have insufficient turn-off time.
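For an idealized line-commutated converter (neglecting commutation overlap), the mean DC output voltage follows Vd = Vd0·cos α, where α is the firing angle. This minimal Python sketch, with an assumed per-unit Vd0, tabulates the sign change at 90° described above.

    # Ideal LCC DC voltage vs. firing angle: Vd = Vd0 * cos(alpha).
    # Vd0 is an assumed per-unit value; commutation overlap is ignored.
    import math

    Vd0 = 1.0    # per-unit DC voltage at alpha = 0
    for alpha in (0, 30, 60, 90, 120, 150):
        Vd = Vd0 * math.cos(math.radians(alpha))
        if Vd > 1e-9:
            mode = "rectifying"
        elif Vd < -1e-9:
            mode = "inverting"
        else:
            mode = "boundary"
        print(f"alpha = {alpha:3d} deg: Vd = {Vd:+.3f} pu ({mode})")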
Early LCC systems used mercury-arc valves, which were rugged but required high maintenance. Because of this, many mercury-arc HVDC systems were built with bypass switchgear across each six-pulse bridge so that the HVDC scheme could be operated in six-pulse mode for short maintenance periods. The last mercury arc system was shut down in 2012.
The thyristor valve was first used in HVDC systems in 1972. The thyristor is a solid-state semiconductor device similar to the diode, but with an extra control terminal that is used to switch the device on at a particular instant during the AC cycle. Because the voltages in HVDC systems, up to 800 kV in some cases, far exceed the breakdown voltages of the thyristors used, HVDC thyristor valves are built using large numbers of thyristors in series. Additional passive components such as grading capacitors and resistors need to be connected in parallel with each thyristor in order to ensure that the voltage across the valve is evenly shared between the thyristors. The thyristor plus its grading circuits and other auxiliary equipment is known as a thyristor level.
Each thyristor valve will typically contain tens or hundreds of thyristor levels, each operating at a different (high) potential with respect to earth. The command information to turn on the thyristors therefore cannot simply be sent using a wire connection; it needs to be isolated. The isolation method can be magnetic but is usually optical. Two optical methods are used: indirect and direct optical triggering. In the indirect optical triggering method, low-voltage control electronics send light pulses along optical fibers to the high-side control electronics, which derive their power from the voltage across each thyristor. The alternative direct optical triggering method dispenses with most of the high-side electronics, instead using light pulses from the control electronics to switch light-triggered thyristors (LTTs).
In a line-commutated converter, the DC current (usually) cannot change direction; it flows through a large inductance and can be considered almost constant. On the AC side, the converter behaves approximately as a current source, injecting both grid-frequency and harmonic currents into the AC network. For this reason, a line-commutated converter for HVDC is also considered a current-source inverter.
Voltage-sourced converters
Because thyristors can only be turned on (not off) by control action, the control system has only one degree of freedom – when to turn on the thyristor. This is an important limitation in some circumstances.
With some other types of semiconductor devices such as the insulated-gate bipolar transistor (IGBT), both turn-on and turn-off can be controlled, giving a second degree of freedom. As a result, they can be used to make self-commutated converters. In such converters, the electric polarity of DC voltage is usually fixed and the DC voltage, being smoothed by a large capacitance, can be considered constant. For this reason, an HVDC converter using IGBTs is usually referred to as a voltage-sourced converter. The additional controllability gives many advantages, notably the ability to switch the IGBTs on and off many times per cycle in order to improve the harmonic performance. Being self-commutated, the converter no longer relies on synchronous machines in the AC system for its operation. A voltage-sourced converter can therefore feed power to an AC network consisting only of passive loads, something which is impossible with LCC HVDC.
HVDC systems based on voltage-sourced converters normally use the six-pulse connection because the converter produces much less harmonic distortion than a comparable LCC and the twelve-pulse connection is unnecessary.
Most of the VSC HVDC systems built until 2012 were based on the two-level converter, which can be thought of as a six-pulse bridge in which the thyristors have been replaced by IGBTs with inverse-parallel diodes and the DC smoothing reactors have been replaced by DC smoothing capacitors. Such converters derive their name from the discrete, two voltage levels at the AC output of each phase that correspond to the electrical potentials of the positive and negative DC terminals. Pulse-width modulation (PWM) is usually used to improve the harmonic distortion of the converter.
Some HVDC systems have been built with three-level converters, but today most new VSC HVDC systems are being built with some form of multilevel converter, most commonly the modular multilevel converter (MMC), in which each valve consists of a number of independent converter submodules, each containing its own storage capacitor. The IGBTs in each submodule either bypass the capacitor or connect it into the circuit, allowing the valve to synthesize a stepped voltage with very low levels of harmonic distortion.
Converter transformers
At the AC side of each converter, a bank of transformers, often three physically separated single-phase transformers, isolates the station from the AC supply, provides a local earth, and ensures the correct eventual DC voltage. The output of these transformers is then connected to the converter.
Converter transformers for LCC HVDC schemes are quite specialized because of the high levels of harmonic currents that flow through them, and because the secondary winding insulation experiences a permanent DC voltage, which affects the design of the insulating structure inside the tank. In LCC systems, the transformers also need to provide the 30° phase shift required for harmonic cancellation.
Converter transformers for VSC HVDC systems are usually simpler and more conventional in design than those for LCC HVDC systems.
Reactive power
A major drawback of HVDC systems using line-commutated converters is that the converters inherently consume reactive power. The AC current flowing into the converter from the AC system lags behind the AC voltage so that, irrespective of the direction of active power flow, the converter always absorbs reactive power, behaving in the same way as a shunt reactor. The reactive power absorbed is at least under ideal conditions and can be higher than this when the converter is operating at higher than usual firing or extinction angle, or reduced DC voltage.
Although at HVDC converter stations connected directly to power stations some of the reactive power may be provided by the generators themselves, in most cases the reactive power consumed by the converter must be provided by banks of shunt capacitors connected at the AC terminals of the converter. The shunt capacitors are usually connected directly to the grid voltage but in some cases may be connected to a lower voltage via a tertiary winding on the converter transformer.
Since the reactive power consumed depends on the active power being transmitted, the shunt capacitors usually need to be subdivided into a number of switchable banks (typically four per converter) in order to prevent a surplus of reactive power being generated at low transmitted power.
The shunt capacitors are almost always provided with tuning reactors and, where necessary, damping resistors so that they can perform a dual role as harmonic filters.
VSCs, on the other hand, can either produce or consume reactive power on demand, with the result that usually no separate shunt capacitors are needed (other than those required purely for filtering).
Harmonics and filtering
All power electronic converters generate some degree of harmonic distortion on the AC and DC systems to which they are connected, and HVDC converters are no exception.
With the recently developed modular multilevel converter (MMC), levels of harmonic distortion may be practically negligible, but with line-commutated converters and simpler types of VSCs, considerable harmonic distortion may be produced on both the AC and DC sides of the converter. As a result, harmonic filters are nearly always required at the AC terminals of such converters, and in HVDC transmission schemes using overhead lines, may also be required on the DC side.
Filters for line-commutated converters
The basic building-block of a line-commutated HVDC converter is the six-pulse bridge. This arrangement produces very high levels of harmonic distortion by acting as a current source injecting harmonic currents of order 6n±1 into the AC system and generating harmonic voltages of order 6n superimposed on the DC voltage.
It is very costly to provide harmonic filters capable of suppressing such harmonics, so a variant known as the twelve-pulse bridge (consisting of two six-pulse bridges in series with a 30° phase shift between them) is nearly always used. With the twelve-pulse arrangement, harmonics are still produced but only at orders 12n±1 on the AC side and 12n on the DC side. The task of suppressing such harmonics is still challenging, but manageable.
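The characteristic AC-side orders can be enumerated directly; this short sketch shows how the twelve-pulse connection eliminates every other pair of harmonics (the 5th, 7th, 17th, 19th, ...) produced by a six-pulse bridge.

    # Characteristic AC-side harmonic orders: 6n +/- 1 vs. 12n +/- 1.
    def characteristic_harmonics(pulse_number, n_max=4):
        orders = []
        for n in range(1, n_max + 1):
            orders += [pulse_number * n - 1, pulse_number * n + 1]
        return orders

    print("6-pulse :", characteristic_harmonics(6))     # 5, 7, 11, 13, 17, 19, 23, 25
    print("12-pulse:", characteristic_harmonics(12))    # 11, 13, 23, 25, 35, 37, 47, 49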
Line-commutated converters for HVDC are usually provided with combinations of harmonic filters designed to deal with the 11th and 13th harmonics on the AC side, and 12th harmonic on the DC side. Sometimes, high-pass filters may be provided to deal with 23rd, 25th, 35th, 37th... on the AC side and 24th, 36th... on the DC side. Sometimes, the AC filters may also need to provide damping at lower-order, noncharacteristic harmonics such as 3rd or 5th harmonics.
The task of designing AC harmonic filters for HVDC converter stations is complex and computationally intensive, since in addition to ensuring that the converter does not produce an unacceptable level of voltage distortion on the AC system, it must be ensured that the harmonic filters do not resonate with some component elsewhere in the AC system. A detailed knowledge of the harmonic impedance of the AC system, at a wide range of frequencies, is needed in order to design the AC filters.
DC filters are required only for HVDC transmission systems involving overhead lines. Voltage distortion is not a problem in its own right, since consumers do not connect directly to the DC terminals of the system, so the main design criterion for the DC filters is to ensure that the harmonic currents flowing in the DC lines do not induce interference in nearby open-wire telephone lines. With the rise in digital mobile telecommunications systems, which are much less susceptible to interference, DC filters are becoming less important for HVDC systems.
Filters for voltage-sourced converters
Some types of voltage-sourced converters may produce such low levels of harmonic distortion that no filters are required at all. However, converter types such as the two-level converter, used with pulse-width modulation (PWM), still require some filtering, albeit less than on line-commutated converter systems.
With such converters, the harmonic spectrum is generally shifted to higher frequencies than with line-commutated converters. This usually allows the filter equipment to be smaller. The dominant harmonic frequencies are sidebands of the PWM frequency and multiples thereof. In HVDC applications, the PWM frequency is typically around 1 to 2 kHz.
Configurations
Monopole
In a monopole configuration one of the terminals of the rectifier is connected to earth ground. The other terminal, at high voltage relative to ground, is connected to a transmission line. The earthed terminal may be connected to the corresponding connection at the inverting station by means of a second conductor.
If no metallic return conductor is installed, current flows in the earth (or water) between two electrodes. This arrangement is a type of single-wire earth return system.
The electrodes are usually located some tens of kilometers from the stations and are connected to the stations via a medium-voltage electrode line. The design of the electrodes themselves depends on whether they are located on land, on the shore or at sea. For the monopolar configuration with earth return, the earth current flow is unidirectional, which means that the design of one of the electrodes (the cathode) can be relatively simple, although the design of the anode electrode is quite complex.
For long-distance transmission, earth return can be considerably cheaper than alternatives using a dedicated neutral conductor, but it can lead to problems such as:
Electrochemical corrosion of long buried metal objects such as pipelines
Underwater earth-return electrodes in seawater may produce chlorine or otherwise affect water chemistry
An unbalanced current path may result in a net magnetic field, which can affect magnetic navigational compasses for ships passing over an underwater cable.
These effects can be eliminated with installation of a metallic return conductor between the two ends of the monopolar transmission line. Since one terminal of the converters is connected to earth, the return conductor need not be insulated for the full transmission voltage which makes it less costly than the high-voltage conductor. The decision of whether or not to use a metallic return conductor is based upon economic, technical and environmental factors.
Modern monopolar systems for pure overhead lines typically carry 1.5 GW. If underground or underwater cables are used, the typical value is 600 MW.
Most monopolar systems are designed for future bipolar expansion. Transmission line towers may be designed to carry two conductors, even if only one is used initially for the monopole transmission system. The second conductor is either unused, used as electrode line or connected in parallel with the other (as in case of Baltic Cable).
Symmetrical monopole
An alternative is to use two high-voltage conductors, operating at about half of the DC voltage, with only a single converter at each end. In this arrangement, known as the symmetrical monopole, the converters are earthed only via a high impedance and there is no earth current. The symmetrical monopole arrangement is uncommon with line-commutated converters (the NorNed interconnector being a rare example) but is very common with Voltage Sourced Converters when cables are used.
Bipolar
In bipolar transmission a pair of conductors is used, each at a high potential with respect to ground, in opposite polarity. Since these conductors must be insulated for the full voltage, transmission line cost is higher than a monopole with a return conductor. However, there are a number of advantages to bipolar transmission which can make it an attractive option.
Under normal load, negligible earth-current flows, as in the case of monopolar transmission with a metallic earth-return. This reduces earth return loss and environmental effects.
When a fault develops in a line, with earth return electrodes installed at each end of the line, approximately half the rated power can continue to flow using the earth as a return path, operating in monopolar mode.
Since for a given total power rating each conductor of a bipolar line carries only half the current of monopolar lines, the cost of the second conductor is reduced compared to a monopolar line of the same rating.
In very adverse terrain, the second conductor may be carried on an independent set of transmission towers, so that some power may continue to be transmitted even if one line is damaged.
A bipolar system may also be installed with a metallic earth return conductor.
Bipolar systems may carry as much as 4 GW at voltages of ±660 kV with a single converter per pole, as on the Ningdong–Shandong project in China. With a power rating of 2,000 MW per twelve-pulse converter, the converters for that project were (as of 2010) the most powerful HVDC converters ever built. Even higher powers can be achieved by connecting two or more twelve-pulse converters in series in each pole, as is used in the ±800 kV Xiangjiaba–Shanghai project in China, which uses two twelve-pulse converter bridges in each pole, each rated at 400 kV DC and 1,600 MW.
Submarine cable installations initially commissioned as a monopole may be upgraded with additional cables and operated as a bipole.
A bipolar scheme can be implemented so that the polarity of one or both poles can be changed. This allows the operation as two parallel monopoles. If one conductor fails, transmission can still continue at reduced capacity. Losses may increase if ground electrodes and lines are not designed for the extra current in this mode. To reduce losses in this case, intermediate switching stations may be installed, at which line segments can be switched off or parallelized. This was done at Inga–Shaba HVDC.
Back to back
A back-to-back station (or B2B for short) is a plant in which both converters are in the same area, usually in the same building. The length of the direct current line is kept as short as possible. HVDC back-to-back stations are used for
coupling of electricity grids of different frequencies (as in Japan and South America; and the GCC interconnector between Saudi Arabia (60 Hz) and the rest of the GCC countries (50 Hz), completed in 2009)
coupling two networks of the same nominal frequency but no fixed phase relationship (as until 1995/96 in Etzenricht, Dürnrohr, Vienna, and the Vyborg HVDC scheme).
different frequency and phase number (for example, as a replacement for traction current converter plants)
The DC voltage in the intermediate circuit can be selected freely at HVDC back-to-back stations because of the short conductor length. The DC voltage is usually selected to be as low as possible, in order to build a small valve hall and to reduce the number of thyristors connected in series in each valve. For this reason, at HVDC back-to-back stations, valves with the highest available current rating (in some cases, up to 4,500 A) are used.
Multi-terminal systems
The most common configuration of an HVDC link consists of two converter stations connected by an overhead power line or undersea cable.
Multi-terminal HVDC links, connecting more than two points, are rare. The configuration of multiple terminals can be series, parallel, or hybrid (a mixture of series and parallel). Parallel configuration tends to be used for large capacity stations, and series for lower capacity stations. An example is the 2,000 MW Quebec – New England Transmission system opened in 1992, which is currently the largest multi-terminal HVDC system in the world.
Multi-terminal systems are difficult to realize using line-commutated converters because reversals of power are effected by reversing the polarity of DC voltage, which affects all converters connected to the system. With voltage-sourced converters, power reversal is achieved instead by reversing the direction of current, making parallel-connected multi-terminal systems much easier to control. For this reason, multi-terminal systems are expected to become much more common in the near future.
China is expanding its grid to keep up with increased power demand, while addressing environmental targets. China Southern Power Grid started a three-terminal VSC HVDC pilot project in 2011. The project has design ratings of ±160 kV/200 MW–100 MW–50 MW and will be used to bring wind power generated on Nanao island into the mainland Guangdong power grid through a combination of HVDC land cables, sea cables, and overhead lines. The project was put into operation on December 19, 2013.
In India, the multi-terminal North-East Agra project is planned for commissioning in 2015–2017. It is rated 6,000 MW, and it transmits power on a ±800 kV bipolar line from two converter stations, at Biswanath Chariali and Alipurduar, in the east to a converter at Agra, a distance of .
Other arrangements
From 1993, Cross-Skagerrak consisted of three poles, of which two were switched in parallel and the third used opposite polarity with a higher transmission voltage. This configuration ended in 2014, when poles 1 and 2 were rebuilt to work as a bipole and pole 3 (LCC) was paired as a bipole with a new pole 4 (VSC). This is the first HVDC transmission in which LCC and VSC poles cooperate in a bipole.
A similar arrangement was the HVDC Inter-Island in New Zealand after a capacity upgrade in 1992, in which the two original converters (using mercury-arc valves) were parallel-switched feeding the same pole and a new third (thyristor) converter installed with opposite polarity and higher operation voltage. This configuration ended in 2012 when the two old converters were replaced with a single, new, thyristor converter.
A scheme patented in 2004 is intended for conversion of existing AC transmission lines to HVDC. Two of the three circuit conductors are operated as a bipole. The third conductor is used as a parallel monopole, equipped with reversing valves (or parallel valves connected in reverse polarity). This allows heavier currents to be carried by the bipole conductors, and full use of the installed third conductor for energy transmission. High currents can be circulated through the line conductors even when load demand is low, for removal of ice. No tripole conversions are in operation, although a transmission line in India has been converted to bipole HVDC (HVDC Sileru–Barsoor).
Corona discharge
Corona discharge is the creation of ions in a fluid (such as air) by the presence of a strong electric field. Electrons are torn from neutral air, and either the positive ions or the electrons are attracted to the conductor while the charged particles drift. This effect can cause considerable power loss, create audible and radio-frequency interference, generate toxic compounds such as oxides of nitrogen and ozone, and cause arcing.
Both AC and DC transmission lines can generate coronas, in the former case in the form of oscillating particles, in the latter a constant wind. Due to the space charge formed around the conductors, an HVDC system may have about half the loss per unit length of a high voltage AC system carrying the same amount of power. With monopolar transmission the choice of polarity of the energized conductor leads to a degree of control over the corona discharge. In particular, the polarity of the ions emitted can be controlled, which may have an environmental impact on ozone creation. Negative coronas generate considerably more ozone than positive coronas, and generate it further downwind of the power line, creating the potential for health effects. The use of a positive voltage will reduce the ozone impacts of monopole HVDC power lines.
Applications
Overview
The controllability of current flow through HVDC rectifiers and inverters, their application in connecting unsynchronized networks, and their use in efficient submarine cables mean that HVDC interconnectors are often used at national or regional boundaries for the exchange of power (in North America, HVDC connections divide much of Canada and the United States into several electrical regions that cross national borders, although the purpose of these connections is still to connect unsynchronized AC grids to each other). Offshore wind farms also require undersea cables, and their turbines are unsynchronized. For very long-distance connections between two locations, such as power transmission from a large hydroelectric power plant at a remote site to an urban area, HVDC transmission systems may be appropriate; several schemes of this kind have been built. For interconnectors to Siberia, Canada, India, and the Scandinavian North, the decreased line costs of HVDC also make it applicable; see the List of HVDC projects. Other applications are noted throughout this article.
AC network interconnectors
AC transmission lines can interconnect only synchronized AC networks with the same frequency with limits on the allowable phase difference between the two ends of the line. Many areas that wish to share power have unsynchronized networks. The power grids of the UK, Northern Europe and continental Europe are not united into a single synchronized network. Japan has 50 Hz and 60 Hz networks. Continental North America, while operating at 60 Hz throughout, is divided into regions which are unsynchronized: East, West, Texas, Quebec, and Alaska. Brazil and Paraguay, which share the enormous Itaipu Dam hydroelectric plant, operate on 60 Hz and 50 Hz respectively. However, HVDC systems make it possible to interconnect unsynchronized AC networks, and also add the possibility of controlling AC voltage and reactive power flow.
A generator connected to a long AC transmission line may become unstable and fall out of synchronization with a distant AC power system. An HVDC transmission link may make it economically feasible to use remote generation sites. Wind farms located off-shore may use HVDC systems to collect power from multiple unsynchronized generators for transmission to the shore by an underwater cable.
In general, however, an HVDC power line will interconnect two AC regions of the power-distribution grid. Machinery to convert between AC and DC power adds a considerable cost in power transmission. The conversion from AC to DC is known as rectification, and from DC to AC as inversion. Above a certain break-even distance (about for submarine cables, and perhaps for overhead cables), the lower cost of the HVDC electrical conductors outweighs the cost of the electronics.
The conversion electronics also present an opportunity to effectively manage the power grid by means of controlling the magnitude and direction of power flow. An additional advantage of the existence of HVDC links, therefore, is potential increased stability in the transmission grid.
Renewable electricity superhighways
A number of studies have highlighted the potential benefits of very wide area super grids based on HVDC since they can mitigate the effects of intermittency by averaging and smoothing the outputs of large numbers of geographically dispersed wind farms or solar farms. Czisch's study concludes that a grid covering the fringes of Europe could bring 100% renewable power (70% wind, 30% biomass) at close to today's prices. There has been debate over the technical feasibility of this proposal and the political risks involved in energy transmission across a large number of international borders.
The construction of such green power superhighways is advocated in a white paper that was released by the American Wind Energy Association and the Solar Energy Industries Association in 2009. Clean Line Energy Partners is developing four HVDC lines in the U.S. for long-distance electric power transmission.
In January 2009, the European Commission proposed €300 million to subsidize the development of HVDC links between Ireland, Britain, the Netherlands, Germany, Denmark, and Sweden, as part of a wider €1.2 billion package supporting links to offshore wind farms and cross-border interconnectors throughout Europe. Meanwhile, the recently founded Union of the Mediterranean has embraced a Mediterranean Solar Plan to import large amounts of concentrated solar power into Europe from North Africa and the Middle East. A Japan–Taiwan–Philippines HVDC interconnector was proposed in 2020. The purpose of this interconnector is to facilitate cross-border renewable power trading with Indonesia and Australia, in preparation for the future Asian Pacific Super Grid.
Advancements in UHVDC
UHVDC (ultrahigh-voltage direct-current) is shaping up to be the latest technological front in high voltage DC transmission technology. UHVDC is defined as DC voltage transmission of above 800 kV (HVDC is generally just 100 to 800 kV).
One of the problems with current UHVDC supergrids is that, although their losses are lower than those of AC transmission or of DC transmission at lower voltages, they still suffer from power loss as the length is extended. A typical loss for 800 kV lines is 2.6% over . Increasing the transmission voltage on such lines reduces the power loss, but until recently, the interconnectors required to bridge the segments were prohibitively expensive. However, with advances in manufacturing, it is becoming more and more feasible to build UHVDC lines.
In 2010, ABB Group built the world's first 800 kV UHVDC line in China. The 1,100 kV Zhundong–Wannan UHVDC line, with a capacity of 12 GW, was completed in 2018. As of 2020, at least thirteen UHVDC transmission lines in China have been completed.
While the majority of recent UHVDC deployment is in China, the technology has also been deployed in South America as well as other parts of Asia. In India, an 800 kV, 6 GW line between Raigarh and Pugalur is expected to be completed in 2019. In Brazil, the 800 kV, 4 GW Xingu–Estreito line was completed in 2017, and the 800 kV, 4 GW Xingu–Rio line was completed in 2019, both built to transmit energy from the Belo Monte Dam. As of 2020, no UHVDC line (≥ 800 kV) exists in Europe or North America.
A 1,100 kV, 12 GW link in China was completed in 2019. With such dimensions, intercontinental connections become possible, which could help to deal with the fluctuations of wind power and photovoltaics.
| Technology | Electricity transmission and distribution | null |
47719 | https://en.wikipedia.org/wiki/Coulomb | Coulomb | The coulomb (symbol: C) is the unit of electric charge in the International System of Units (SI). It is defined to be equal to the electric charge delivered by a 1 ampere current in 1 second. It is used to define the elementary charge e.
Definition
The SI defines the coulomb as "the quantity of electricity carried in 1 second by a current of 1 ampere". The value of the elementary charge e is defined to be exactly 1.602176634 × 10⁻¹⁹ C. The number of elementary charges in one coulomb is the reciprocal of that numerical value, approximately 6.241509 × 10¹⁸, so one coulomb is not an integer multiple of the elementary charge.
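These figures can be verified in a couple of lines; a minimal sketch using the exact SI value of e:

```python
# Number of elementary charges in one coulomb, from the exact SI value of e.
e = 1.602176634e-19    # elementary charge in coulombs (exact since 2019)

charges_per_coulomb = 1 / e
print(f"1 C = {charges_per_coulomb:.9e} elementary charges")
# about 6.241509074e18 -- not an integer, so one coulomb is not an
# integer multiple of the elementary charge
```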
The ampere was previously defined in terms of the force between two current-carrying wires. Using that earlier definition of the ampere, the coulomb was originally defined as the charge transported in one second by a current of one ampere (1 C = 1 A × 1 s).
The 2019 redefinition of the ampere and other SI base units fixed the numerical value of the elementary charge when expressed in coulombs and therefore fixed the value of the coulomb when expressed as a multiple of the fundamental charge.
SI prefixes
Like other SI units, the coulomb can be modified by adding a prefix that multiplies it by a power of 10.
Conversions
The magnitude of the electrical charge of one mole of elementary charges (approximately 6.02214076 × 10²³, the Avogadro number) is known as a faraday unit of charge (closely related to the Faraday constant). One faraday equals 96485.33212 C. In terms of the Avogadro constant (NA), one coulomb is equal to approximately 1.036 × 10⁻⁵ mol × NA elementary charges.
A capacitor of one farad holds one coulomb of charge for each volt across it (Q = CV).
One ampere hour equals 3600 C, hence 1 mA·h = 3.6 C.
One statcoulomb (statC), the obsolete CGS electrostatic unit of charge (esu), is approximately 3.3356 × 10⁻¹⁰ C, or about one-third of a nanocoulomb.
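The conversions above amount to a few lines of arithmetic; here is a minimal sketch using the exact SI values of e and the Avogadro constant (the 5 V capacitor case is an arbitrary illustration):

```python
# Checking the charge-unit conversions quoted above.
e = 1.602176634e-19    # elementary charge in coulombs (exact)
N_A = 6.02214076e23    # Avogadro constant per mole (exact)

faraday = e * N_A      # charge of one mole of elementary charges
print(f"1 faraday = {faraday:.3f} C")            # ~96485.332 C

print(f"1 A·h = {1 * 3600} C, 1 mA·h = {3600 / 1000} C")

capacitance, voltage = 1.0, 5.0                  # farads, volts
print(f"1 F at 5 V holds Q = C*V = {capacitance * voltage} C")

print(f"1 statC ≈ {3.335641e-10} C")             # about a third of a nanocoulomb
```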
In everyday terms
The charges in static electricity from rubbing materials together are typically a few microcoulombs.
The amount of charge that travels through a lightning bolt is typically around 15 C, although for large bolts this can be up to 350 C.
The amount of charge that travels through a typical alkaline AA battery from being fully charged to discharged is about 5 kC = 5000 C ≈ 1400 mA·h.
A typical smartphone battery can hold about 10.8 kC ≈ 3000 mA·h.
Name and history
By 1878, the British Association for the Advancement of Science had defined the volt, ohm, and farad, but not the coulomb. In 1881, the International Electrical Congress, now the International Electrotechnical Commission (IEC), approved the volt as the unit for electromotive force, the ampere as the unit for electric current, and the coulomb as the unit of electric charge.
At that time, the volt was defined as the potential difference [i.e., what is nowadays called the "voltage (difference)"] across a conductor when a current of one ampere dissipates one watt of power.
The coulomb (later "absolute coulomb" or "abcoulomb" for disambiguation) was part of the EMU system of units. The "international coulomb" based on laboratory specifications for its measurement was introduced by the IEC in 1908. The entire set of "reproducible units" was abandoned in 1948 and the "international coulomb" became the modern coulomb.
| Physical sciences | Electromagnetism | null |
47742 | https://en.wikipedia.org/wiki/Brooklyn%20Bridge | Brooklyn Bridge | The Brooklyn Bridge is a hybrid cable-stayed/suspension bridge in New York City, spanning the East River between the boroughs of Manhattan and Brooklyn. Opened on May 24, 1883, the Brooklyn Bridge was the first fixed crossing of the East River. It was also the longest suspension bridge in the world at the time of its opening, with a main span of 1,595.5 feet (486.3 m) and a deck 127 feet (38.7 m) above Mean High Water. The span was originally called the New York and Brooklyn Bridge or the East River Bridge but was officially renamed the Brooklyn Bridge in 1915.
Proposals for a bridge connecting Manhattan and Brooklyn were first made in the early 19th century, which eventually led to the construction of the current span, designed by John A. Roebling. The project's chief engineer, his son Washington Roebling, contributed further design work, assisted by the latter's wife, Emily Warren Roebling. Construction started in 1870 and was overseen by the New York Bridge Company, which in turn was controlled by the Tammany Hall political machine. Numerous controversies and the novelty of the design prolonged the project over thirteen years. After opening, the Brooklyn Bridge underwent several reconfigurations, having carried horse-drawn vehicles and elevated railway lines until 1950. To alleviate increasing traffic flows, additional bridges and tunnels were built across the East River. Following gradual deterioration, the Brooklyn Bridge was renovated several times, including in the 1950s, 1980s, and 2010s.
The Brooklyn Bridge is the southernmost of four vehicular bridges directly connecting Manhattan Island and Long Island, with the Manhattan Bridge, the Williamsburg Bridge, and the Queensboro Bridge to the north. Only passenger vehicles and pedestrian and bicycle traffic are permitted. A major tourist attraction since its opening, the Brooklyn Bridge has become an icon of New York City. Over the years, the bridge has been used as the location of various stunts and performances, as well as several crimes, attacks and vandalism. The Brooklyn Bridge is designated a National Historic Landmark, a New York City landmark, and a National Historic Civil Engineering Landmark.
Description
The Brooklyn Bridge, an early example of a steel-wire suspension bridge, uses a hybrid cable-stayed/suspension bridge design, with both vertical and diagonal suspender cables. Its stone towers are neo-Gothic, with characteristic pointed arches. The New York City Department of Transportation (NYCDOT), which maintains the bridge, says that its original paint scheme was "Brooklyn Bridge Tan" and "Silver", but other accounts state that it was originally entirely "Rawlins Red".
Deck
To provide sufficient clearance for shipping in the East River, the Brooklyn Bridge incorporates long approach viaducts on either end to raise it from low ground on both shores. Including approaches, the Brooklyn Bridge is a total of 6,016 feet (1,834 m) long when measured between the curbs at Park Row in Manhattan and Sands Street in Brooklyn. A separate measurement of 5,989 feet (1,825 m) is sometimes given; this is the distance from the curb at Centre Street in Manhattan.
Suspension span
The main span between the two suspension towers is 1,595.5 feet (486.3 m) long and 85 feet (26 m) wide. The bridge "elongates and contracts between the extremes of temperature from 14 to 16 inches". Navigational clearance is 127 feet (38.7 m) above Mean High Water (MHW). A 1909 Engineering Magazine article said that, at the center of the span, the height above MHW could fluctuate noticeably due to temperature and traffic loads, while more rigid spans had a lower maximum deflection.
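The quoted 14-to-16-inch movement is consistent with ordinary linear thermal expansion. The sketch below assumes a textbook steel expansion coefficient and a roughly 55 °C annual temperature swing; both numbers are illustrative assumptions, not figures from the bridge's engineers.

```python
# Order-of-magnitude check of the bridge's quoted thermal movement.
# alpha and delta_t are textbook assumptions, not bridge-specific data.

alpha = 1.2e-5      # linear expansion coefficient of steel, per deg C
span_ft = 1595.5    # main span length in feet
delta_t = 55.0      # assumed annual temperature swing in deg C (~100 deg F)

delta_l_in = alpha * span_ft * delta_t * 12    # convert feet to inches
print(f"Estimated elongation: {delta_l_in:.1f} inches")   # ~12.6 inches
```

The estimate lands at the same order as the quoted 14-to-16-inch range.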
The side spans, between each suspension tower and each side's suspension anchorage, are 930 feet (280 m) long. At the time of construction, engineers had not yet discovered the aerodynamics of bridge construction, and bridge designs were not tested in wind tunnels. John Roebling designed the Brooklyn Bridge's truss system to be six to eight times as strong as he thought it needed to be. As such, the open truss structure supporting the deck is, by its nature, subject to fewer aerodynamic problems. However, due to a supplier's fraudulent substitution of inferior-quality wire in the initial construction, the bridge was reappraised at the time as being only four times as strong as necessary.
The main span and side spans are supported by a structure containing trusses that run parallel to the roadway, each of which is deep. Originally there were six trusses, but two were removed during a late-1940s renovation. The trusses allow the Brooklyn Bridge to hold a total load of , a design consideration from when it originally carried heavier elevated trains. These trusses are held up by suspender ropes, which hang downward from each of the four main cables. Crossbeams run between the trusses at the top, and diagonal and vertical stiffening beams run on the outside and inside of each roadway.
An elevated pedestrian-only promenade runs in between the two roadways and above them. It typically runs below the level of the crossbeams, except at the areas surrounding each tower. Here, the promenade rises to just above the level of the crossbeams, connecting to a balcony that slightly overhangs the two roadways. The path is generally wide. The iron railings were produced by Janes & Kirtland, a Bronx iron foundry that also made the United States Capitol dome and the Bow Bridge in Central Park.
Approaches
Each of the side spans is reached by an approach ramp. The approach ramp from the Brooklyn side is shorter than the approach ramp from the Manhattan side. The approaches are supported by Renaissance-style arches made of masonry; the arch openings themselves were filled with brick walls, with small windows within. The approach ramps contain nine arch or iron-girder bridges across side streets in Manhattan and Brooklyn.
Underneath the Manhattan approach, a series of brick slopes or "banks" was developed into a skate park, the Brooklyn Banks, in the late 1980s. The park uses the approach's support pillars as obstacles. In the mid-2010s, the Brooklyn Banks were closed to the public because the area was being used as a storage site during the bridge's renovation. The skateboarding community has attempted to save the banks on multiple occasions; after the city destroyed the smaller banks in the 2000s, the city government agreed to keep the larger banks for skateboarding. When the NYCDOT removed the bricks from the banks in 2020, skateboarders started an online petition. In the 2020s, local resident Rosa Chang advocated for the space under the Manhattan approach to be converted into a recreational area known as Gotham Park. Some of the space under the Manhattan approach reopened in May 2023 as a park called the Arches; this was followed in November 2024 by another section of parkland.
Cables
The Brooklyn Bridge contains four main cables, which descend from the tops of the suspension towers and help support the deck. Two are located to the outside of the bridge's roadways, while two are in the median of the roadways. Each main cable measures 15.75 inches (40 cm) in diameter and contains 5,282 parallel, galvanized steel wires wrapped closely together in a cylindrical shape. These wires are bundled into 19 individual strands, with 278 wires to a strand. This was the first use of bundling in a suspension bridge and took several months for workers to tie together. Since the 2000s, the main cables have also supported a series of 24-watt LED lighting fixtures, referred to as "necklace lights" due to their shape.
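The wire counts quoted above are internally consistent, as a quick check shows:

```python
# Consistency check on the main-cable wire counts quoted above.
strands_per_cable = 19
wires_per_strand = 278
cables = 4

wires_per_cable = strands_per_cable * wires_per_strand
print(f"Wires per cable: {wires_per_cable}")    # 19 * 278 = 5282, as stated
print(f"Wires in all four cables: {cables * wires_per_cable}")
```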
In addition, either 1,088, 1,096, or 1,520 galvanized steel wire suspender cables hang downward from the main cables. Another 400 cable stays extend diagonally from the towers. The vertical suspender cables and diagonal cable stays hold up the truss structure around the bridge deck. The bridge's suspenders originally used wire rope, which was replaced in the 1980s with galvanized steel made by Bethlehem Steel. The vertical suspender cables measure long, and the diagonal stays measure long.
Anchorages
Each side of the bridge contains an anchorage for the main cables. The anchorages are trapezoidal limestone structures located slightly inland of the shore, measuring at the base and at the top. Each anchorage weighs . The Manhattan anchorage rests on a foundation of bedrock while the Brooklyn anchorage rests on clay.
The anchorages both have four anchor plates, one for each of the main cables, which are located near ground level and parallel to the ground. The anchor plates measure , with a thickness of and weigh each. Each anchor plate is connected to the respective main cable by two sets of nine eyebars, each of which is about long and up to thick. The chains of eyebars curve downward from the cables toward the anchor plates, and the eyebars vary in size depending on their position.
The anchorages also contain numerous passageways and compartments. Starting in 1876, in order to fund the bridge's maintenance, the New York City government made the large vaults under the bridge's Manhattan anchorage available for rent, and they were in constant use during the early 20th century. The vaults were used to store wine, as they were kept at a consistent temperature due to a lack of air circulation. The Manhattan vault was called the "Blue Grotto" because of a shrine to the Virgin Mary next to an opening at the entrance. The vaults were closed for public use in the late 1910s and 1920s during World War I and Prohibition but were reopened thereafter. When New York magazine visited one of the cellars in 1978, it discovered a "fading inscription" on a wall reading: "Who loveth not wine, women and song, he remaineth a fool his whole life long." Leaks found within the vault's spaces necessitated repairs during the late 1980s and early 1990s. By the late 1990s, the chambers were being used to store maintenance equipment.
Towers
The bridge's two suspension towers rise 276.5 feet (84.3 m) above the high water line. They are built of limestone, granite, and Rosendale cement. The limestone was quarried at the Clark Quarry in Essex County, New York. The granite blocks were quarried and shaped on Vinalhaven Island, Maine, under a contract with the Bodwell Granite Company, and delivered from Maine to New York by schooner. The Manhattan tower contains of masonry, while the Brooklyn tower has of masonry. There are 56 LED lamps mounted onto the towers.
Each tower contains a pair of Gothic Revival pointed arches, through which the roadways run. The arch openings are tall and wide. The tops of the towers are located above the floor of each arch opening, while the floors of the openings are above mean water level, giving the towers a total height of above mean high water.
Caissons
The towers rest on underwater caissons made of southern yellow pine and filled with cement. Inside both caissons were spaces for construction workers. The Manhattan side's caisson is slightly larger, measuring and located below high water, while the Brooklyn side's caisson measures and is located below high water. The caissons were designed to hold at least the weight of the towers which would exert a pressure of when fully built, but the caissons were over-engineered for safety. During an accident on the Brooklyn side, when air pressure was lost and the partially-built towers dropped full-force down, the caisson sustained an estimated pressure of with only minor damage. Most of the timber used in the bridge's construction, including in the caissons, came from mills at Gascoigne Bluff on St. Simons Island, Georgia.
The Brooklyn side's caisson, which was built first, originally had a height of and a ceiling composed of five layers of timber, each layer tall. Ten more layers of timber were later added atop the ceiling, and the entire caisson was wrapped in tin and wood for further protection against flooding. The thickness of the caisson's sides was at both the bottom and the top. The caisson had six chambers: two each for dredging, supply shafts, and airlocks.
The caisson on the Manhattan side was slightly different because it had to be installed at a greater depth. To protect against the increased air pressure at that depth, the Manhattan caisson had 22 layers of timber on its roof, seven more than its Brooklyn counterpart had. The Manhattan caisson also had fifty pipes for sand removal, a fireproof iron-boilerplate interior, and different airlocks and communication systems.
History
Planning
Proposals for a bridge between the then-separate cities of Brooklyn and New York had been suggested as early as 1800. At the time, the only travel between the two cities was by a number of ferry lines. Engineers presented various designs, such as chain or link bridges, though these were never built because of the difficulties of constructing a high enough fixed-span bridge across the extremely busy East River. There were also proposals for tunnels under the East River, but these were considered prohibitively expensive. German immigrant engineer John Augustus Roebling proposed building a suspension bridge over the East River in 1857. He had previously designed and constructed shorter suspension bridges, such as Roebling's Delaware Aqueduct in Lackawaxen, Pennsylvania, and the Niagara Suspension Bridge. In 1867, Roebling erected what became the John A. Roebling Suspension Bridge over the Ohio River between Cincinnati, Ohio, and Covington, Kentucky.
In February 1867, the New York State Senate passed a bill that allowed the construction of a suspension bridge from Brooklyn to Manhattan. Two months later, the New York and Brooklyn Bridge Company was incorporated with a board of directors (later converted to a board of trustees). There were twenty trustees in total: eight each appointed by the mayors of New York and Brooklyn, as well as the mayors of each city and the auditor and comptroller of Brooklyn. The company was tasked with constructing what was then known as the New York and Brooklyn Bridge. Alternatively, the span was just referred to as the "Brooklyn Bridge", a name originating in a January 25, 1867, letter to the editor sent to the Brooklyn Daily Eagle. The act of incorporation, which became law on April 16, 1867, authorized the cities of New York (now Manhattan) and Brooklyn to subscribe to $5 million in capital stock, which would fund the bridge's construction.
Roebling was subsequently named the chief engineer of the work and, by September 1867, had presented a master plan. According to the plan, the bridge would be longer and taller than any suspension bridge previously built. It would incorporate roadways and elevated rail tracks, whose tolls and fares would provide the means to pay for the bridge's construction. It would also include a raised promenade that served as a leisurely pathway. The proposal received much acclaim in both cities, and residents predicted that the New York and Brooklyn Bridge's opening would have as much of an impact as the Suez Canal, the first transatlantic telegraph cable or the first transcontinental railroad. By early 1869, however, some individuals started to criticize the project, saying either that the bridge was too expensive, or that the construction process was too difficult.
To allay concerns about the design of the New York and Brooklyn Bridge, Roebling set up a "Bridge Party" in March 1869, where he invited engineers and members of U.S. Congress to see his other spans. Following the bridge party in April, Roebling and several engineers conducted final surveys. During the process, it was determined that the main span would have to be raised from above MHW, requiring several changes to the overall design. In June 1869, while conducting these surveys, Roebling sustained a crush injury to his foot when a ferry pinned it against a piling. After amputation of his crushed toes, he developed a tetanus infection that left him incapacitated and resulted in his death the following month. Washington Roebling, John Roebling's 32-year-old son, was then hired to fill his father's role. Tammany Hall leader William M. Tweed also became involved in the bridge's construction because, as a major landowner in New York City, he had an interest in the project's completion. The New York and Brooklyn Bridge Company—later known simply as the New York Bridge Company—was actually overseen by Tammany Hall, and it approved Roebling's plans and designated him as chief engineer of the project.
Construction
Caissons
Construction of the Brooklyn Bridge began on January 2, 1870. The first work entailed the construction of two caissons, upon which the suspension towers would be built. The Brooklyn side's caisson was built at the Webb & Bell shipyard in Greenpoint, Brooklyn, and was launched into the river on March 19, 1870. Compressed air was pumped into the caisson, and workers entered the space to dig the sediment until it sank to the bedrock. As one sixteen-year-old from Ireland, Frank Harris, described the fearful experience: "The six of us were working naked to the waist in the small iron chamber with the temperature of about 80 degrees Fahrenheit: In five minutes the sweat was pouring from us, and all the while we were standing in icy water that was only kept from rising by the terrific pressure. No wonder the headaches were blinding." Once the caisson had reached the desired depth, it was to be filled in with vertical brick piers and concrete. However, due to the unexpectedly high concentration of large boulders atop the riverbed, the Brooklyn caisson took several months to sink to the desired depth. Furthermore, in December 1870, its timber roof caught fire, delaying construction further. The "Great Blowout", as the fire was called, delayed construction for several months, since the holes in the caisson had to be repaired. On March 6, 1871, the repairs were finished, and the caisson had reached its final depth of 44 feet 6 inches (13.6 m); it was filled with concrete five days later. Overall, about 264 individuals were estimated to have worked in the caisson every day, but because of high worker turnover, the final total was thought to be about 2,500 men. In spite of this, only a few workers were paralyzed. At its final depth, the caisson's air pressure was .
The Manhattan side's caisson was the next structure to be built. To ensure that it would not catch fire like its counterpart had, the Manhattan caisson was lined with fireproof plate iron. It was launched from Webb & Bell's shipyard on May 11, 1871, and maneuvered into place that September. Due to the extreme underwater air pressure inside the much deeper Manhattan caisson, many workers became sick with "the bends"—decompression sickness—during this work, despite the incorporation of airlocks (which were believed to help with decompression sickness at the time). This condition was unknown at the time and was first called "caisson disease" by the project physician, Andrew Smith. Between January 25 and May 31, 1872, Smith treated 110 cases of decompression sickness, while three workers died from the disease. When iron probes underneath the Manhattan caisson found the bedrock to be even deeper than expected, Washington Roebling halted construction due to the increased risk of decompression sickness. After the Manhattan caisson reached a depth of 78 feet 6 inches (23.9 m), where the air pressure was correspondingly high, Washington deemed the sandy subsoil overlying the bedrock beneath to be sufficiently firm, and subsequently infilled the caisson with concrete in July 1872.
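The difference in severity between the two caissons follows from hydrostatic pressure, P = ρgh: the air pressure needed to keep water out of the working chamber grows linearly with depth. A rough sketch using the working depths cited above and a fresh-water approximation:

```python
# Approximate gauge pressure at each caisson's working depth (P = rho*g*h).
# Depths are those cited above; density and g are standard constants.

RHO = 1000.0    # water density in kg/m^3 (fresh-water approximation)
G = 9.81        # gravitational acceleration in m/s^2

for name, depth_ft in (("Brooklyn", 44.5), ("Manhattan", 78.5)):
    depth_m = depth_ft * 0.3048
    gauge_pa = RHO * G * depth_m
    print(f"{name}: {gauge_pa / 6895:.0f} psi gauge "
          f"({gauge_pa / 101325:.2f} atm above atmospheric)")
```

Roughly doubling the depth doubles the excess pressure the workers breathed, consistent with the far higher rate of decompression sickness in the Manhattan caisson.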
Washington Roebling himself suffered a paralyzing injury as a result of caisson disease shortly after ground was broken for the Brooklyn tower foundation. His debilitating condition left him unable to supervise the construction in person, so he designed the caissons and other equipment from his apartment, directing "the completion of the bridge through a telescope from his bedroom." His wife, Emily Warren Roebling, not only provided written communications between her husband and the engineers on site, but also understood mathematics, calculations of catenary curves, strengths of materials, bridge specifications, and the intricacies of cable construction. She spent the next 11 years helping supervise the bridge's construction, taking over much of the chief engineer's duties, including day-to-day supervision and project management.
Towers
After the caissons were completed, piers were constructed on top of each of them upon which masonry towers would be built. The towers' construction was a complex process that took four years. Since the masonry blocks were heavy, the builders transported them to the base of the towers using a pulley system with a continuous -diameter steel wire rope, operated by steam engines at ground level. The blocks were then carried up on a timber track alongside each tower and maneuvered into the proper position using a derrick atop the towers. The blocks sometimes vibrated the ropes because of their weight, but only once did a block fall.
Construction on the suspension towers started in mid-1872, and by the time work was halted for the winter in late 1872, parts of each tower had already been built. By mid-1873, there was substantial progress on the towers' construction. The Brooklyn side's tower had reached a height of above mean high water (MHW), while the tower on the Manhattan side had reached above MHW. The arches of the Brooklyn tower were completed by August 1874. The tower was substantially finished by December 1874 with the erection of saddle plates for the main cables at the top of the tower. However, the ornamentation on the Brooklyn tower could not be completed until the Manhattan tower was finished. The last stone on the Brooklyn tower was raised in June 1875 and the Manhattan tower was completed in July 1876. The saddle plates atop both towers were also raised in July 1876. The work was dangerous: by 1876, three workers had died having fallen from the towers, while nine other workers were killed in other accidents.
In 1875, while the towers were being constructed, the project had depleted its original $5 million budget. Two bridge commissioners, one each from Brooklyn and Manhattan, petitioned New York state lawmakers to allot another $8 million for construction. Ultimately, the legislators passed a law authorizing the allotment with the condition that the cities would buy the stock of Brooklyn Bridge's private stockholders.
Work proceeded concurrently on the anchorages on each side. The Brooklyn anchorage broke ground in January 1873 and was substantially completed in August 1875. The Manhattan anchorage was built in less time: begun in May 1875, it was mostly completed in July 1876. The anchorages could not be fully completed until the main cables were spun, at which point further height would be added to each anchorage.
Cables
The first temporary wire was stretched between the towers on August 15, 1876, using chrome steel provided by the Chrome Steel Company of Brooklyn. The wire was then stretched back across the river, and the two ends were spliced to form a traveler, a lengthy loop of wire connecting the towers, which was driven by a steam hoisting engine at ground level. The wire was one of two that were used to create a temporary footbridge for workers while cable spinning was ongoing. The next step was to send an engineer across the completed traveler wire in a boatswain's chair slung from the wire, to ensure it was safe enough. The bridge's master mechanic, E.F. Farrington, was selected for this task, and an estimated crowd of 10,000 people on both shores watched him cross. A second traveler wire was then stretched across the span, a task that was completed by August 30. The temporary footbridge, located some above the elevation of the future deck, was completed in February 1877.
By December 1876, a steel contract for the permanent cables still had not been awarded. There was disagreement over whether the bridge's cables should use the as-yet-untested Bessemer steel or the well-proven crucible steel. Until a permanent contract was awarded, the builders ordered 30 tons of wire in the interim, 10 tons each from three companies, including Washington Roebling's own steel mill in Brooklyn. In the end, it was decided to use number 8 Birmingham gauge (approximately 4 mm or 0.165 inches in diameter) crucible steel, and a request for bids was distributed, to which eight companies responded. In January 1877, a contract for crucible steel was awarded to J. Lloyd Haigh, who was associated with bridge trustee Abram Hewitt, whom Roebling distrusted.
The spinning of the wires required the manufacture of large coils of wire, which were galvanized but not oiled when they left the factory. The coils were delivered to a yard near the Brooklyn anchorage. There they were dipped in linseed oil, hoisted to the top of the anchorage, dried out and spliced into a single wire, and finally coated with red zinc for further galvanizing. There were thirty-two drums at the anchorage yard, eight for each of the four main cables. Each drum had a capacity of of wire. The first experimental wire for the main cables was stretched between the towers on May 29, 1877, and spinning began two weeks later. All four main cables were being strung by that July. During that time, the temporary footbridge was unofficially opened to members of the public, who could receive a visitor's pass; by August 1877 several thousand visitors from around the world had used the footbridge. The visitor passes ceased that September after a visitor had an epileptic seizure and nearly fell off.
As the wires were being spun, work also commenced on the demolition of buildings on either side of the river for the Brooklyn Bridge's approaches; this work was mostly complete by September 1877. The following month, initial contracts were awarded for the suspender wires, which would hang down from the main cables and support the deck. By May 1878, the main cables were more than two-thirds complete. However, the following month, one of the wires slipped, killing two people and injuring three others. In 1877, Hewitt wrote a letter urging against the use of Bessemer steel in the bridge's construction. Bids had been submitted for both crucible steel and Bessemer steel; John A. Roebling's Sons submitted the lowest bid for Bessemer steel, but at Hewitt's direction, the contract was awarded to Haigh.
A subsequent investigation discovered that Haigh had substituted inferior quality wire in the cables. Of eighty rings of wire that were tested, only five met standards, and it was estimated that Haigh had earned $300,000 from the deception. At this point, it was too late to replace the cables that had already been constructed. Roebling determined that the poorer wire would leave the bridge only four times as strong as necessary, rather than six to eight times as strong. The inferior-quality wire was allowed to remain and 150 extra wires were added to each cable. To avoid public controversy, Haigh was not fired, but instead was required to personally pay for higher-quality wire. The contract for the remaining wire was awarded to the John A. Roebling's Sons, and by October 5, 1878, the last of the main cables' wires went over the river.
Nearing completion
After the suspender wires had been placed, workers began erecting steel crossbeams to support the roadway as part of the bridge's overall superstructure. Construction on the bridge's superstructure started in March 1879, but, as with the cables, the trustees initially disagreed on whether the steel superstructure should be made of Bessemer or crucible steel. That July, the trustees decided to award a contract for Bessemer steel to the Edgemoor (or Edge Moor) Iron Works, based in Philadelphia, to be delivered by 1880. The trustees later passed another resolution for an additional order of Bessemer steel. However, by February 1880 the steel deliveries had not started. That October, the bridge trustees questioned Edgemoor's president about the delay in steel deliveries. Despite Edgemoor's assurances that the contract would be fulfilled, the deliveries still had not been completed by November 1881. Brooklyn mayor Seth Low, who became part of the board of trustees in 1882, became the chairman of a committee tasked to investigate Edgemoor's failure to fulfill the contract. When questioned, Edgemoor's president stated that the delays were the fault of another contractor, the Cambria Iron Company, which was manufacturing the eyebars for the bridge trusses; at that point, the contract was supposed to be complete by October 1882.
Further complicating the situation, Washington Roebling had failed to appear at the trustees' meeting in June 1882, since he had gone to Newport, Rhode Island. After the news media discovered this, most of the newspapers called for Roebling to be fired as chief engineer, except for the Daily State Gazette of Trenton, New Jersey, and the Brooklyn Daily Eagle. Some of the longstanding trustees, including Henry C. Murphy, James S. T. Stranahan, and William C. Kingsley, were willing to vouch for Roebling, since construction progress on the Brooklyn Bridge was still ongoing. However, Roebling's behavior was considered suspect among the younger trustees who had joined the board more recently.
Construction on the bridge itself was noted in formal reports that Murphy presented each month to the mayors of New York and Brooklyn. For example, Murphy's report in August 1882 noted that the month's progress included 114 intermediate chords erected within a week, as well as 72 diagonal stays, 60 posts, and numerous floor beams, bridging trusses, and stay bars. By early 1883, the Brooklyn Bridge was considered mostly completed and was projected to open that June. Contracts for bridge lighting were awarded by February 1883, and a toll scheme was approved that March.
Opposition
There was substantial opposition to the bridge's construction from shipbuilders and merchants located to the north, who argued that the bridge would not provide sufficient clearance underneath for ships. In May 1876, these groups, led by Abraham Miller, filed a lawsuit in the United States District Court for the Southern District of New York against the cities of New York and Brooklyn.
In 1879, an Assembly Sub-Committee on Commerce and Navigation began an investigation into the Brooklyn Bridge. A seaman who had been hired to determine the height of the span testified to the committee about the difficulties that ship masters would experience in bringing their ships under the bridge when it was completed. Another witness, Edward Wellman Serrell, a civil engineer, said that the calculations of the bridge's assumed strength were incorrect. The Supreme Court decided in 1883 that the Brooklyn Bridge was a lawful structure.
Opening
The New York and Brooklyn Bridge was opened for use on May 24, 1883. Thousands of people attended the opening ceremony, and many ships were present in the East River for the occasion. Officially, Emily Warren Roebling was the first to cross the bridge. The bridge opening was also attended by U.S. president Chester A. Arthur and New York mayor Franklin Edson, who crossed the bridge and shook hands with Brooklyn mayor Seth Low at the Brooklyn end. Abram Hewitt gave the principal address.
Though Washington Roebling was unable to attend the ceremony (and rarely visited the site again), he held a celebratory banquet at his house on the day of the bridge opening. Further festivities included a performance by a band, gunfire from ships, and a fireworks display. On that first day, a total of 1,800 vehicles and 150,300 people crossed the span. Less than a week after the Brooklyn Bridge opened, ferry crews reported a sharp drop in patronage, while the bridge's toll operators were processing over a hundred people a minute. However, cross-river ferries continued to operate until 1942.
The bridge had cost about $15.5 million in 1883 dollars to build, of which Brooklyn paid two-thirds. The bonds to fund the construction would not be paid off until 1956. An estimated 27 men died during its construction. Since the New York and Brooklyn Bridge was the only bridge across the East River at that time, it was also called the East River Bridge. Until the construction of the nearby Williamsburg Bridge in 1903, the New York and Brooklyn Bridge was the longest suspension bridge in the world, 20% longer than any built previously.
At the time of opening, the Brooklyn Bridge was not complete; the proposed public transit across the bridge was still being tested, while the Brooklyn approach was being completed. On May 30, 1883, six days after the opening, a woman falling down a stairway at the Brooklyn approach caused a stampede which resulted in at least twelve people being crushed and killed. In subsequent lawsuits, the Brooklyn Bridge Company was acquitted of negligence. However, the company did install emergency phone boxes and additional railings, and the trustees approved a fireproofing plan for the bridge. Public transit service began with the opening of the New York and Brooklyn Bridge Railway, a cable car service, on September 25, 1883. On May 17, 1884, one of the circus master P. T. Barnum's most famous attractions, Jumbo the elephant, led a parade of 21 elephants over the Brooklyn Bridge. This helped to lessen doubts about the bridge's stability while also promoting Barnum's circus.
1880s to 1910s
Patronage across the Brooklyn Bridge increased in the years after it opened; a million people paid to cross in the first six months. The bridge carried 8.5 million people in 1884, its first full year of operation; this number doubled to 17 million in 1885 and again to 34 million in 1889. Many of these people were cable car passengers. Additionally, about 4.5 million pedestrians a year were crossing the bridge for free by 1892.
The first proposal to make changes to the bridge was sent in only two and a half years after it opened, when Linda Gilbert suggested glass steam-powered elevators and an observatory be added to the bridge and a fee charged for use, which would in part fund the bridge's upkeep and in part fund her prison reform charity. This proposal was considered but not acted upon. Numerous other proposals were made during the first fifty years of the bridge's life. Trolley tracks were added in the center lanes of both roadways in 1898, allowing trolleys to use the bridge as well. That year, the formerly separate City of Brooklyn was unified with New York City, and the Brooklyn Bridge fell under city control.
Concerns about the Brooklyn Bridge's safety were raised around the turn of the century. In 1898, traffic backups due to a dead horse caused one of the truss chords to buckle. There were more significant worries after twelve suspender cables snapped in 1901, though a thorough investigation found no other defects. After the 1901 incident, five inspectors were hired to examine the bridge each day, a service that cost $250,000 a year. The Brooklyn Rapid Transit Company, which operated routes across the Brooklyn Bridge, issued a notice in 1905 saying that the bridge had reached its transit capacity.
By 1890, due to the popularity of the Brooklyn Bridge, there were proposals to construct other bridges across the East River between Manhattan and Long Island. Although a second deck for the Brooklyn Bridge was proposed, it was thought to be infeasible because doing so would overload the bridge's structural capacity. The first new bridge across the East River, the Williamsburg Bridge, opened upstream in 1903 and connected Williamsburg, Brooklyn, with the Lower East Side of Manhattan. This was followed by the Queensboro Bridge between Queens and Manhattan in March 1909, and the Manhattan Bridge between Brooklyn and Manhattan in December 1909. Several subway, railroad, and road tunnels were also constructed, which helped to accelerate the development of Manhattan, Brooklyn, and Queens.
1910s to 1940s
Though carriages and cable-car customers had paid tolls ever since the bridge's opening, pedestrians were spared from the tolls originally. By the first decade of the 20th century, pedestrians were also paying tolls. Tolls on all four bridges across the East River—the Brooklyn Bridge, as well as the Manhattan, Williamsburg, and Queensboro bridges to the north—were abolished in July 1911 as part of a populist policy initiative headed by New York City mayor William Jay Gaynor. The city government passed a bill to officially name the structure the "Brooklyn Bridge" in January 1915.
Ostensibly in an attempt to reduce traffic on nearby city streets, Grover Whalen, the commissioner of Plant and Structures, banned motor vehicles from the Brooklyn Bridge on July 6, 1922. The real reason for the ban was an incident the same year where two cables slipped due to high traffic loads. Both Whalen and Roebling called for the renovation of the Brooklyn Bridge and the construction of a parallel bridge, though the parallel bridge was never built. Whalen's successor William Wirt Mills announced in 1924 that a new wood-block pavement would be installed, permitting motor vehicles to use the bridge again; motor traffic was again allowed on the bridge starting on May 12, 1925.
As part of an experiment, starting in November 1946, the Manhattan-bound roadway carried Brooklyn-bound traffic during the evening rush hours. The experiment ended after two months due to complaints about congestion.
Mid- to late 20th century
Upgrades
The first major upgrade to the Brooklyn Bridge commenced in 1948, when a contract to entirely reconstruct the approach ramps was awarded to David B. Steinman. The renovation was expected to double the capacity of the bridge's roadways to nearly 6,000 cars per hour, at a projected cost of $7 million. The renovation included the demolition of both the elevated and the trolley tracks on the roadways, the removal of trusses separating the inner elevated tracks from the existing vehicle lanes and the widening of each roadway from two to three lanes, as well as the construction of a new steel-and-concrete floor. In addition, new ramps were added to Adams Street, Cadman Plaza, and the Brooklyn Queens Expressway (BQE) on the Brooklyn side, and to Park Row on the Manhattan side. The bridge was briefly closed to all traffic for the first time ever in January 1950, and the trolley tracks closed that March to allow the widening work to occur. During the construction project, one roadway at a time was closed, allowing reduced traffic flows to cross the bridge in one direction only.
The widened south roadway was completed in May 1951, followed by the north roadway in October 1953. The restoration was finished in May 1954 with the completion of the reconstructed elevated promenade. While the rebuilding of the span was ongoing, a fallout shelter was constructed beneath the Manhattan approach in anticipation of the Cold War. The abandoned space in one of the masonry arches was stocked with emergency survival supplies for a potential nuclear attack by the Soviet Union; these supplies remained in place half a century later. In addition, defensive barriers were added to the bridge as a safeguard against sabotage.
Simultaneous with the rebuilding of the Brooklyn Bridge, a double-decked viaduct for the BQE was being built through an existing steel overpass of the bridge's Brooklyn approach ramp. The segment of the BQE from Brooklyn Bridge south to Atlantic Avenue opened in June 1954, but the direct ramp from the northbound BQE to the Manhattan-bound Brooklyn Bridge did not open until 1959. The city also widened the Adams Street approach in Brooklyn, between the bridge and Fulton Street, between 1954 and 1955. Subsequently, Boerum Place from Fulton Street south to Atlantic Avenue was also widened. This required the demolition of the old Kings County courthouse. The towers were cleaned in 1958 and the Brooklyn anchorage was repaired the next year.
On the Manhattan side, the city approved a controversial rebuilding of the Manhattan entrance plaza in 1953. The project, which would add a grade-separated junction over Park Row, was hotly contested because it would require the demolition of 21 structures, including the old New York World Building. The reconstruction also necessitated the relocation of 410 families on Park Row. In December 1956, the city started a two-year renovation of the plaza. This required the closure of one roadway at a time, as was done during the rebuilding of the bridge itself. Work on redeveloping the area around the Manhattan approach started in the mid-1960s. At the same time, plans were announced for direct ramps to the elevated FDR Drive to alleviate congestion at the approach. The ramp from FDR Drive to the Brooklyn Bridge was opened in 1968, followed by the ramp from the bridge to FDR Drive the next year. A single ramp from the Manhattan-bound Brooklyn Bridge to northbound Park Row was constructed in 1970. A repainting of the bridge was announced two years later in advance of its 90th anniversary.
Deterioration and late-20th century repair
The Brooklyn Bridge gradually deteriorated due to age and neglect. While it had 200 full-time dedicated maintenance workers before World War II, that number dropped to five by the late 20th century, and the city as a whole only had 160 bridge maintenance workers. In 1974, heavy vehicles such as vans and buses were banned from the bridge to prevent further erosion of the concrete roadway. A report in The New York Times four years later noted that the cables were visibly fraying and the pedestrian promenade had holes in it. The city began planning to replace all the Brooklyn Bridge's cables at a cost of $115 million, as part of a larger project to renovate all four toll-free East River spans. By 1980, the Brooklyn Bridge was in such dire condition that it faced imminent closure. In some places, half of the strands in the cables were broken.
In June 1981, two of the diagonal stay cables snapped, killing a pedestrian. Subsequently, the anchorages were found to have developed rust, and an emergency cable repair was necessitated less than a month later after another cable developed slack. Following the incident, the city accelerated the timetable of its proposed cable replacement, and it commenced a $153 million rehabilitation of the Brooklyn Bridge in advance of the 100th anniversary. As part of the project, the bridge's original suspender cables installed by J. Lloyd Haigh were replaced by Bethlehem Steel in 1986, marking the cables' first replacement since construction. In addition, the staircase at Washington Street in Brooklyn was renovated, the stairs from Tillary and Adams Streets were replaced with a ramp, and the short flights of steps from the promenade to each tower's balcony were removed. In a smaller project, the bridge was floodlit at night starting in 1982 to highlight its architectural features.
Additional problems persisted, and in 1993, high levels of lead were discovered near the bridge's towers. Further emergency repairs were undertaken in mid-1999 after small concrete shards began falling from the bridge into the East River. The concrete deck had been installed during the 1950s renovations and had a lifespan of about 60 years. The Park Row exit from the bridge's westbound lanes was closed as a safety measure after the September 11, 2001, attacks on the nearby World Trade Center. That section of Park Row had been closed off since it ran right underneath 1 Police Plaza, the headquarters of the New York City Police Department (NYPD). In early 2003, to save money on electricity, the NYCDOT turned off the bridge's "necklace lights" at night. They were turned back on later that year after several private entities made donations to fund the lights.
21st century
After the 2007 collapse of the I-35W bridge in Minneapolis, public attention focused on the condition of bridges across the U.S. The New York Times reported that the Brooklyn Bridge approach ramps had received a "poor" rating during an inspection in 2007. However, a NYCDOT spokesman said that the poor rating did not indicate a dangerous state but rather implied it required renovation. In 2010, the NYCDOT began renovating the approaches and deck, as well as repainting the suspension span. Work included widening two approach ramps from one to two lanes by re-striping a new prefabricated ramp; raising clearance over the eastbound BQE at York Street; seismic retrofitting; replacement of rusted railings and safety barriers; and road deck resurfacing. The work necessitated detours for four years. At the time, the project was scheduled to be completed in 2014, but completion was later delayed to 2015, then again to 2017. The project's cost also increased from $508 million in 2010 to $811 million in 2016.
In August 2016, the NYCDOT announced that it would conduct a seven-month, $370,000 study to verify if the bridge could support a heavier upper deck that consisted of an expanded bicycle and pedestrian path. By then, about 10,000 pedestrians and 3,500 cyclists used the pathway on an average weekday. Work on the pedestrian entrance on the Brooklyn side was underway by 2017. The NYCDOT also indicated in 2016 that it planned to reinforce the Brooklyn Bridge's foundations to prevent it from sinking, as well as repair the masonry arches on the approach ramps, which had been damaged by Hurricane Sandy four years earlier. In July 2018, the New York City Landmarks Preservation Commission approved a further renovation of the Brooklyn Bridge's suspension towers and approach ramps. That December, the federal government gave the city $25 million in funding, which would pay for a $337 million rehabilitation of the bridge approaches and the suspension towers. Work started in late 2019 and was scheduled to be completed in four years. This restoration included removing bricks from the arches and putting fresh concrete behind them, using mortar from the same upstate quarries as the original mortar. The granite arches were also cleaned, revealing the original gray color of the stone, which had long been hidden by grime. Additionally, 56 LED lamps were installed on the bridge at a cost of $2.4 million.
In early 2020, City Council speaker Corey Johnson and the nonprofit Van Alen Institute hosted an international contest to solicit plans for the redesign of the bridge's walkway. Ultimately, in January 2021, the city decided to install a two-way protected bike path on the Manhattan-bound roadway, replacing the leftmost vehicular lane. The bike lane would allow the existing promenade to be used exclusively by pedestrians. Work on the bike lane started in June 2021, and the new path was completed on September 14, 2021. Despite the addition of the bike path, the bridge's walkway was still frequently overcrowded, prompting the city to propose in mid-2023 that street vendors be banned from the bridge and others citywide. All vendors were banned from the bridge at the beginning of January 2024. The same month, the bridge's new LED lights were illuminated for the first time.
A plan for congestion pricing in New York City was approved in mid-2023, allowing the Metropolitan Transportation Authority to toll drivers who enter Manhattan south of 60th Street. Congestion pricing was implemented in January 2025. Most traffic to and from FDR Drive is exempt from the toll, but all other Manhattan-bound drivers pay a toll, which varies based on the time of day.
Usage
Vehicular traffic
Horse-drawn carriages have been allowed to use the Brooklyn Bridge's roadways since its opening. Originally, each of the two roadways carried two lanes of traffic in a single direction, and the lanes were relatively narrow. In July 1922, motor vehicles were banned from the bridge; the ban lasted until May 1925.
After 1950, the main roadway carried six lanes of automobile traffic, three in each direction. It was then reduced to five lanes with the addition of a two-way bike lane on the Manhattan-bound side in 2021. Because of the roadway's posted height and weight restrictions, commercial vehicles and buses are prohibited from using the Brooklyn Bridge. The weight restriction also nominally bars heavy passenger vehicles such as pickup trucks and SUVs, though this is not often enforced in practice.
On the Brooklyn side, vehicles can enter the bridge from Tillary/Adams Streets to the south, Sands/Pearl Streets to the west, and exit 28B of the eastbound Brooklyn-Queens Expressway. In Manhattan, cars can enter from both the northbound and southbound FDR Drive, as well as Park Row to the west, Chambers/Centre Streets to the north, and Pearl Street to the south. However, the exit from the bridge to northbound Park Row was closed after the September 11 attacks because of increased security concerns: that section of Park Row ran under One Police Plaza, the NYPD headquarters.
Exit list
Vehicular access to the bridge is provided by a complex series of ramps on both sides of the bridge. There are two entrances to the bridge's pedestrian promenade on either side. The current configuration was constructed from the mid-1950s up until the early 1970s. After the September 11 attacks, the ramp onto Park Row was closed to public traffic, and there are no plans to reopen it.
Rail traffic
Formerly, rail traffic operated on the Brooklyn Bridge as well. Cable cars and elevated railroads used the bridge until 1944, while trolleys ran until 1950.
Cable cars and elevated railroads
The New York and Brooklyn Bridge Railway, a cable car service, began operating on September 25, 1883; it ran on the inner lanes of the bridge, between terminals at the Manhattan and Brooklyn ends. Since Washington Roebling believed that steam locomotives would put excessive loads upon the structure of the Brooklyn Bridge, the cable car line was designed as a steam/cable-hauled hybrid. The cars were powered from a generating station under the Brooklyn approach. The cable cars could not only regulate their speed on the steep upward and downward approaches, but also maintain a constant interval between each other. There were 24 cable cars in total.
Initially, the service ran with single-car trains, but patronage soon grew so much that by October 1883, two-car trains were in use. The line carried three million people in the first six months, nine million in 1884, and nearly 20 million in 1885 following the opening of the Brooklyn Union Elevated Railroad. Accordingly, the track layout was rearranged and more trains were ordered. At the same time, there were highly controversial plans to extend the elevated railroads onto the Brooklyn Bridge, under the pretext of extending the bridge itself. After disputes, the trustees agreed to build two elevated routes to the bridge on the Brooklyn side. Patronage continued to increase, and in 1888, the tracks were lengthened and even more cars were constructed to allow for four-car cable car trains. Electric wires for the trolleys were added by 1895, allowing for the potential future decommissioning of the steam/cable system. The terminals were rebuilt once more in July 1895, and, following the implementation of new electric cars in late 1896, the steam engines were dismantled and sold.
Following the unification of the cities of New York and Brooklyn in 1898, the New York and Brooklyn Bridge Railway ceased to be a separate entity that June and the Brooklyn Rapid Transit Company (BRT) assumed control of the line. The BRT started running through-services of elevated trains, which ran from Park Row Terminal in Manhattan to points in Brooklyn via the Sands Street station on the Brooklyn side. Before reaching Sands Street (at Tillary Street for Fulton Street Line trains, and at Bridge Street for Fifth Avenue Line and Myrtle Avenue Line trains), elevated trains bound for Manhattan were uncoupled from their steam locomotives. The elevated trains were then coupled to the cable cars, which would pull the passenger carriages across the bridge.
The BRT did not run any elevated train through services from 1899 to 1901. Due to increased patronage after the opening of the Interborough Rapid Transit Company (IRT)'s first subway line, the Park Row station was rebuilt in 1906. In the early 20th century, there were plans for Brooklyn Bridge elevated trains to run underground to the BRT's proposed Chambers Street station in Manhattan, though the connection was never opened. The overpass across William Street was closed in 1913 to make way for the proposed connection. In 1929, the overpass was reopened after it became clear that the connection would not be built.
After the IRT's Joralemon Street Tunnel and the Williamsburg Bridge tracks opened in 1908, the Brooklyn Bridge no longer held a monopoly on rail service between Manhattan and Brooklyn, and cable service ceased. New subway lines from the IRT and from the BRT's successor Brooklyn–Manhattan Transit Corporation (BMT), built in the 1910s and 1920s, posed significant competition to the Brooklyn Bridge rail services. With the opening of the Independent Subway System in 1932 and the subsequent unification of all three companies into a single entity in 1940, the elevated services started to decline, and the Park Row and Sands Street stations were greatly reduced in size. The Fifth Avenue and Fulton Street services across the Brooklyn Bridge were discontinued in 1940 and 1941 respectively, and the elevated tracks were abandoned permanently with the withdrawal of Myrtle Avenue services in 1944.
Trolleys
A plan for trolley service across the Brooklyn Bridge was presented in 1895. Two years later, the Brooklyn Bridge trustees agreed to a plan under which trolleys could run across the bridge under ten-year contracts. Trolley service, which began in 1898, ran on what are now the two middle lanes of each roadway (shared with other traffic). When cable service was withdrawn in 1908, the trolley tracks on the Brooklyn side were rebuilt to alleviate congestion. Trolley service on the middle lanes continued until the elevated lines stopped using the bridge in 1944, when the trolleys moved to the protected center tracks. On March 5, 1950, the streetcars also stopped running, and the bridge was redesigned exclusively for automobile traffic.
Walkway
The Brooklyn Bridge has an elevated promenade open to pedestrians in the center of the bridge, located above the automobile lanes. The promenade is usually located below the height of the girders, except at the approach ramps leading to each tower's balcony. The path is generally wide, though obstacles such as protruding cables, benches, and stairways constrain it, creating "pinch points" at certain locations. The path narrows where the main cables descend to the level of the promenade. Further exacerbating the situation, these "pinch points" are some of the most popular places to take pictures. As a result, in 2016, the NYCDOT announced that it planned to double the promenade's width.
A center line was painted to separate cyclists from pedestrians in 1971, creating one of the city's first dedicated bike lanes. Initially, the northern side of the promenade was used by pedestrians and the southern side by cyclists. In 2000, these were swapped, with cyclists taking the northern side and pedestrians taking the southern side. On September 14, 2021, the DOT closed off the innermost car lane on the Manhattan-bound side with protective barriers and fencing to create a new bike path. Cyclists are now prohibited from the upper pedestrian promenade.
Pedestrian access to the bridge from the Brooklyn side is from either the median of Adams Street at its intersection with Tillary Street or a staircase near Prospect Street between Cadman Plaza East and West. In Manhattan, the pedestrian walkway is accessible from crosswalks at the intersection of the bridge and Centre Street, or through a staircase leading to Park Row.
Emergency use
While the bridge has always permitted the passage of pedestrians, the promenade facilitates movement when other means of crossing the East River have become unavailable. During transit strikes by the Transport Workers Union in 1980 and 2005, people commuting to work used the bridge; they were joined by Mayors Ed Koch and Michael Bloomberg, who crossed as a gesture to the affected public. Pedestrians also walked across the bridge as an alternative to suspended subway services following the 1965, 1977, and 2003 blackouts, and after the September 11 attacks.
During the 2003 blackout, many people crossing the bridge reported a swaying motion. The higher-than-usual pedestrian load caused this swaying, which was amplified by the tendency of pedestrians to synchronize their footfalls with the sway. Several engineers expressed concern about how this would affect the bridge, although others noted that the bridge did withstand the event and that the redundancies in its design—the inclusion of the three support systems (suspension system, diagonal stay system, and stiffening truss)—make it "probably the best secured bridge against such movements going out of control". In designing the bridge, John Roebling had stated that the bridge would sag but not fall, even if one of these structural systems were to fail altogether.
Notable events
Stunts
There have been several notable jumpers from the Brooklyn Bridge. The first person was Robert Emmet Odlum, brother of women's rights activist Charlotte Odlum Smith, on May 19, 1885. He struck the water at an angle and died shortly afterwards from internal injuries. Steve Brodie supposedly dropped from underneath the bridge in July 1886 and was briefly arrested for it, though there is some doubt about whether he actually jumped. Larry Donovan made a slightly higher jump from the railing a month afterward. The first known person to jump from the bridge with the intention of suicide was Francis McCarey in 1892. A lesser-known early jumper was James Duffy of County Cavan, Ireland, who on April 15, 1895, asked several men to watch him jump from the bridge. Duffy jumped and was not seen again. Additionally, the cartoonist Otto Eppers jumped and survived in 1910, and was then tried and acquitted for attempted suicide. The Brooklyn Bridge has since developed a reputation as a suicide bridge because of the number of people who jump from it intending to kill themselves, though exact statistics are difficult to find.
Other notable feats have taken place on or near the bridge. In 1919, Giorgio Pessi piloted what was then one of the world's largest airplanes, the Caproni Ca.5, under the bridge. In 1993, bridge jumper Thierry Devaux illegally performed eight acrobatic bungee jumps above the East River close to the Brooklyn tower.
Crimes and terrorism
On March 1, 1994, Lebanese-born Rashid Baz opened fire on a van carrying members of the Chabad-Lubavitch Orthodox Jewish Movement, striking 16-year-old student Ari Halberstam and three others traveling on the bridge. Halberstam died five days later from his wounds, and Baz was later convicted of murder. He was apparently acting out of revenge for the Hebron massacre of Palestinian Muslims a few days prior to the incident. After initially classifying the killing as one committed out of road rage, the Justice Department reclassified the case in 2000 as a terrorist attack. The entrance ramp to the bridge on the Manhattan side was dedicated as the Ari Halberstam Memorial Ramp in 1995.
Several potential attacks or disasters have also been averted. In 1979, police disarmed a stick of dynamite placed under the Brooklyn approach, and an artist in Manhattan was arrested that year after another bombing attempt. In 2003, truck driver Iyman Faris was sentenced to about 20 years in prison for providing material support to Al-Qaeda, after an earlier plot to destroy the bridge by cutting through its support wires with blowtorches was thwarted.
Arrests
At 9:00 a.m. on May 19, 1977, artist Jack Bashkow climbed one of the towers for Bridging, a "media sculpture" by the performance group Art Corporation of America Inc. Seven artists climbed the largest bridges connected to Manhattan "to replace violence and fear in mass media for one day". When each of the artists had reached the tops of the bridges, they ignited bright-yellow flares at the same moment, resulting in rush hour traffic disruption, media attention, and the arrest of the climbers, though the charges were later dropped. Called "the first social-sculpture to use mass-media as art" by conceptual artist Joseph Beuys, the event was on the cover of the New York Post, received international attention, and received ABC Eyewitness News' 1977 Best News of the Year award. John Halpern documented the incident in the film Bridging, 1977. Halpern attempted another "bridging" "social sculpture" in 1979, when he planted a radio receiver, gunpowder and fireworks in a bucket atop one of the towers. The piece was later discovered by police, leading to his arrest for possessing a bomb.
On October 1, 2011, more than 700 protesters with the Occupy Wall Street movement were arrested while attempting to march across the bridge on the roadway. Protesters disputed the police account of the events and claimed that the arrests were the result of being trapped on the bridge by the NYPD. The majority of the arrests were subsequently dismissed.
On July 22, 2014, the two American flags on the flagpoles atop each tower were found to have been replaced by bleached-white American flags. Initially, cannabis activism was suspected as a motive, but on August 12, 2014, two Berlin artists claimed responsibility for hoisting the two white flags, having switched out the original flags with their replicas. The artists said that the flags were meant to celebrate "the beauty of public space" and the anniversary of the death of German-born John Roebling, and they denied that it was an "anti-American statement".
Anniversary celebrations
The 50th-anniversary celebrations on May 24, 1933, included a ceremony featuring an airplane show, ships, and fireworks, as well as a banquet. During the centennial celebrations on May 24, 1983, a flotilla of ships visited the harbor, officials held parades, and Grucci Fireworks held a fireworks display that evening. For the centennial, the Brooklyn Museum exhibited a selection of the original drawings made for the bridge's construction, including those by Washington Roebling. Media coverage of the centennial was declared "the public relations triumph of 1983" by Inc.
The 125th anniversary of the bridge's opening was celebrated by a five-day event on May 22–26, 2008, which included a live performance by the Brooklyn Philharmonic, a special lighting of the bridge's towers, and a fireworks display. Other events included a film series, historical walking tours, information tents, a series of lectures and readings, a bicycle tour of Brooklyn, a miniature golf course featuring Brooklyn icons, and other musical and dance performances. Just before the anniversary celebrations, artist Paul St George installed the Telectroscope, a video link on the Brooklyn side of the bridge that connected to a matching device on London's Tower Bridge. A renovated pedestrian connection to Dumbo, Brooklyn, was also reopened before the anniversary celebrations.
Impact
At the time of construction, contemporaries marveled at what technology was capable of, and the bridge became a symbol of the era's optimism. John Perry Barlow wrote in the late 20th century of the "literal and genuinely religious leap of faith" embodied in the bridge's construction, saying that the "Brooklyn Bridge required of its builders faith in their ability to control technology".
Historical designations and plaques
The Brooklyn Bridge has been listed as a National Historic Landmark since January 29, 1964, and was subsequently added to the National Register of Historic Places on October 15, 1966. The bridge has also been a New York City designated landmark since August 24, 1967, and was designated a National Historic Civil Engineering Landmark in 1972. In addition, it was placed on UNESCO's list of tentative World Heritage Sites in 2017.
A bronze plaque is attached to the Manhattan anchorage, which was constructed on the site of the Samuel Osgood House at 1 Cherry Street in Manhattan. Named after Samuel Osgood, a Massachusetts politician and lawyer, it was built in 1770 and served as the first U.S. presidential mansion. The Osgood House was demolished in 1856.
Another plaque on the Manhattan side of the pedestrian promenade, installed by the city in 1975, indicates the bridge's status as a city landmark.
Culture
The Brooklyn Bridge has had an impact on idiomatic American English. For example, references to "selling the Brooklyn Bridge" are frequent in American culture, sometimes presented as a historical reality but more often as an expression meaning an idea that strains credulity. George C. Parker and William McCloundy were two early 20th-century con men who may have perpetrated this scam successfully, particularly on new immigrants, although the author of The Brooklyn Bridge: A Cultural History wrote, "No evidence exists that the bridge has ever been sold to a 'gullible outlander'".
As a tourist attraction, the Brooklyn Bridge is a popular site for clusters of love locks, wherein couples inscribe a date and their initials onto a lock, attach it to the bridge, and throw the key into the water as a sign of their love. The practice is illegal in New York City and the NYPD can give violators a $100 fine. NYCDOT workers periodically remove the love locks from the bridge at a cost of $100,000 per year.
To highlight the Brooklyn Bridge's cultural status, the city proposed building a Brooklyn Bridge museum near the bridge's Brooklyn end in the 1970s. Though the museum was ultimately not constructed, as many as 10,000 drawings and documents relating to the bridge were found in a carpenter shop in Williamsburg in 1976. These documents were given to the New York City Municipal Archives, where they are normally kept, though a selection of them was displayed at the Whitney Museum of American Art after they were discovered.
Media
The bridge is often featured in wide shots of the New York City skyline in television and film and has been depicted in numerous works of art. Fictional works have used the Brooklyn Bridge as a setting; for instance, the dedication of a portion of the bridge, and the bridge itself, were key components in the 2001 film Kate & Leopold. Furthermore, the Brooklyn Bridge has also served as an icon of America, with mentions in numerous songs, books, and poems. Among the most notable of these works is that of American Modernist poet Hart Crane, who used the Brooklyn Bridge as a central metaphor and organizing structure for his second book of poetry, The Bridge (1930).
The Brooklyn Bridge has also been lauded for its architecture. One of the first positive reviews was "The Bridge As A Monument", a Harper's Weekly piece written by architecture critic Montgomery Schuyler and published a week after the bridge's opening. In the piece, Schuyler wrote: "It so happens that the work which is likely to be our most durable monument, and to convey some knowledge of us to the most remote posterity, is a work of bare utility; not a shrine, not a fortress, not a palace, but a bridge." Architecture critic Lewis Mumford cited the piece as the impetus for serious architectural criticism in the U.S. Writing in the 1920s, he recalled the bridge as a source of "joy and inspiration" in his childhood and a profound influence in his adolescence. Later critics would regard the Brooklyn Bridge as a work of art, as opposed to an engineering feat or a means of transport. Not all critics appreciated the bridge, however. Henry James, writing in the early 20th century, cited the bridge as an ominous symbol of the city's transformation into a "steel-souled machine room".
The construction of the Brooklyn Bridge is detailed in numerous media sources, including David McCullough's 1972 book The Great Bridge and Ken Burns's 1981 documentary Brooklyn Bridge. It is also described in Seven Wonders of the Industrial World, a BBC docudrama series with an accompanying book, as well as Chief Engineer: Washington Roebling, The Man Who Built the Brooklyn Bridge, a biography published in 2017.
| Technology | Transport infrastructure | null |
47769 | https://en.wikipedia.org/wiki/Transistor%E2%80%93transistor%20logic | Transistor–transistor logic | Transistor–transistor logic (TTL) is a logic family built from bipolar junction transistors. Its name signifies that transistors perform both the logic function (the first "transistor") and the amplifying function (the second "transistor"), as opposed to earlier resistor–transistor logic (RTL) and diode–transistor logic (DTL).
TTL integrated circuits (ICs) were widely used in applications such as computers, industrial controls, test equipment and instrumentation, consumer electronics, and synthesizers.
After their introduction in integrated circuit form in 1963 by Sylvania Electric Products, TTL integrated circuits were manufactured by several semiconductor companies. The 7400 series by Texas Instruments became particularly popular. TTL manufacturers offered a wide range of logic gates, flip-flops, counters, and other circuits. Variations of the original TTL circuit design offered higher speed or lower power dissipation to allow design optimization. TTL devices were originally made in ceramic and plastic dual in-line packages and in flat-pack form. Some TTL chips are now also made in surface-mount technology packages.
TTL became the foundation of computers and other digital electronics. Even after Very-Large-Scale Integration (VLSI) CMOS integrated circuit microprocessors made multiple-chip processors obsolete, TTL devices still found extensive use as glue logic interfacing between more densely integrated components.
History
TTL was invented in 1961 by James L. Buie of TRW, which declared it "particularly suited to the newly developing integrated circuit design technology." The original name for TTL was transistor-coupled transistor logic (TCTL). The first commercial integrated-circuit TTL devices were manufactured by Sylvania in 1963, called the Sylvania Universal High-Level Logic family (SUHL). The Sylvania parts were used in the controls of the Phoenix missile. TTL became popular with electronic systems designers after Texas Instruments introduced the 5400 series of ICs, with military temperature range, in 1964 and the later 7400 series, specified over a narrower range and with inexpensive plastic packages, in 1966.
The Texas Instruments 7400 family became an industry standard. Compatible parts were made by Motorola, AMD, Fairchild, Intel, Intersil, Signetics, Mullard, Siemens, SGS-Thomson, Rifa, National Semiconductor, and many other companies, even in the Eastern Bloc (Soviet Union, GDR, Poland, Czechoslovakia, Hungary, Romania — for details see 7400 series). Not only did others make compatible TTL parts, but compatible parts were made using many other circuit technologies as well. At least one manufacturer, IBM, produced non-compatible TTL circuits for its own use; IBM used the technology in the IBM System/38, IBM 4300, and IBM 3081.
The term "TTL" is applied to many successive generations of bipolar logic, with gradual improvements in speed and power consumption over about two decades. The most recently introduced family 74Fxx is still sold today (as of 2019), and was widely used into the late 90s. 74AS/ALS Advanced Schottky was introduced in 1985. As of 2008, Texas Instruments continues to supply the more general-purpose chips in numerous obsolete technology families, albeit at increased prices. Typically, TTL chips integrate no more than a few hundred transistors each. Functions within a single package generally range from a few logic gates to a microprocessor bit-slice. TTL also became important because its low cost made digital techniques economically practical for tasks previously done by analog methods.
The Kenbak-1, ancestor of the first personal computers, used TTL for its CPU instead of a microprocessor chip, which was not available in 1971. The Datapoint 2200 from 1970 used TTL components for its CPU and was the basis for the 8008 and later the x86 instruction set. The 1973 Xerox Alto and 1981 Star workstations, which introduced the graphical user interface, used TTL circuits integrated at the level of arithmetic logic units (ALUs) and bitslices, respectively. Most computers used TTL-compatible "glue logic" between larger chips well into the 1990s. Until the advent of programmable logic, discrete bipolar logic was used to prototype and emulate microarchitectures under development.
Implementation
Fundamental TTL gate
TTL inputs are the emitters of bipolar transistors. In the case of NAND inputs, the inputs are the emitters of multiple-emitter transistors, functionally equivalent to multiple transistors where the bases and collectors are tied together. The output is buffered by a common emitter amplifier.
Both inputs at logical one. When all the inputs are held at high voltage, the base–emitter junctions of the multiple-emitter transistor are reverse-biased. Unlike in DTL, a small "collector" current (approximately 10 μA) is drawn by each of the inputs, because the transistor is in reverse-active mode. An approximately constant current flows from the positive rail, through the resistor, and into the base of the multiple-emitter transistor. This current passes through the base–emitter junction of the output transistor, allowing it to conduct and pulling the output voltage low (logical zero).
One input at logical zero. Note that the base–collector junction of the multiple-emitter transistor and the base–emitter junction of the output transistor are in series between the bottom of the resistor and ground. If one input voltage becomes zero, the corresponding base–emitter junction of the multiple-emitter transistor is in parallel with these two junctions. A phenomenon called current steering means that when two voltage-stable elements with different threshold voltages are connected in parallel, the current flows through the path with the smaller threshold voltage. That is, current flows out of this input and into the zero (low) voltage source. As a result, no current flows through the base of the output transistor, causing it to stop conducting, and the output voltage becomes high (logical one). During the transition the input transistor is briefly in its active region, so it draws a large current away from the base of the output transistor and thus quickly discharges its base. This critical advantage of TTL over DTL speeds up the transition compared with a diode input structure.
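The behavior just described reduces to a NAND function with voltage thresholds. The following Python sketch is a purely behavioral model (not an electrical simulation), using the standard 0.8 V and 2 V input thresholds discussed under interfacing considerations below; the specific voltages in the test calls are illustrative.

```python
# Behavioral model of a two-input TTL NAND gate (not a circuit simulation).
V_IL_MAX = 0.8  # highest input voltage read as logical 0
V_IH_MIN = 2.0  # lowest input voltage read as logical 1

def logic_level(v: float) -> int:
    """Classify an input voltage as 0 or 1; reject the undefined region."""
    if v <= V_IL_MAX:
        return 0
    if v >= V_IH_MIN:
        return 1
    raise ValueError(f"{v} V lies in the undefined input region")

def ttl_nand(v_a: float, v_b: float) -> int:
    # Current steering: any low input diverts base current away from the
    # output transistor, so the output rises to logical 1; only two high
    # inputs let the output transistor conduct and pull the output low.
    return 0 if logic_level(v_a) == 1 and logic_level(v_b) == 1 else 1

print(ttl_nand(3.4, 3.4))  # both inputs high -> 0
print(ttl_nand(0.2, 3.4))  # one input low   -> 1
```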
The main disadvantage of TTL with a simple output stage is the relatively high output resistance at output logical "1", which is completely determined by the output collector resistor. It limits the number of inputs that can be connected (the fanout). An advantage of the simple output stage is the high voltage level (up to VCC) of the output logical "1" when the output is not loaded.
Open collector wired logic
A common variation omits the collector resistor of the output transistor, making an open-collector output. This allows the designer to fabricate wired logic by connecting the open-collector outputs of several logic gates together and providing a single external pull-up resistor. If any of the logic gates becomes logic low (transistor conducting), the combined output will be low. Examples of this type of gate are the 7401 and 7403 series. Open-collector outputs of some gates have a higher maximum voltage, such as 15 V for the 7426, useful when driving non-TTL loads.
TTL with a "totem-pole" output stage
To solve the problem of the simple output stage's high output resistance, the second schematic adds a "totem-pole" ("push–pull") output stage. It consists of the two n-p-n transistors V3 and V4, the "lifting" diode V5, and the current-limiting resistor R3. It is driven by applying the same current-steering idea as above.
When V2 is "off", V4 is "off" as well and V3 operates in active region as a voltage follower producing high output voltage (logical "1").
When V2 is "on", it activates V4, driving low voltage (logical "0") to the output. Again there is a current-steering effect: the series combination of V2's C-E junction and V4's B-E junction is in parallel with the series of V3 B-E, V5's anode-cathode junction, and V4 C-E. The second series combination has the higher threshold voltage, so no current flows through it, i.e. V3 base current is deprived. Transistor V3 turns "off" and it does not impact on the output.
In the middle of the transition, the resistor R3 limits the current flowing directly through the series-connected transistor V3, diode V5, and transistor V4, which are all conducting. It also limits the output current when the output is at logical "1" and is short-circuited to ground. The strength of the gate may be increased without proportionally affecting the power consumption by removing the pull-up and pull-down resistors from the output stage.
The main advantage of TTL with a "totem-pole" output stage is the low output resistance at output logical "1". It is determined by the upper output transistor V3 operating in active region as an emitter follower. The resistor R3 does not increase the output resistance since it is connected in the V3 collector and its influence is compensated by the negative feedback. A disadvantage of the "totem-pole" output stage is the decreased voltage level (no more than 3.5 V) of the output logical "1" (even if the output is unloaded). The reasons for this reduction are the voltage drops across the V3 base–emitter and V5 anode–cathode junctions.
Interfacing considerations
Like DTL, TTL is a current-sinking logic since a current must be drawn from inputs to bring them to a logic 0 voltage level. The driving stage must absorb up to 1.6 mA from a standard TTL input while not allowing the voltage to rise to more than 0.4 volts. The output stage of the most common TTL gates is specified to function correctly when driving up to 10 standard input stages (a fanout of 10). TTL inputs are sometimes simply left floating to provide a logical "1", though this usage is not recommended.
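The fanout figure follows from simple current arithmetic. As a quick check, here is a minimal sketch assuming the commonly quoted 16 mA low-state sink capability of a standard TTL output (a typical datasheet figure, not stated in the text above):

```python
# Fanout arithmetic for standard TTL.
I_OL = 16e-3   # amperes a standard output can sink when low (assumed datasheet value)
I_IL = 1.6e-3  # amperes each driven-low input must have absorbed (from the text)

print(int(I_OL / I_IL))  # -> 10, the specified fanout of 10 standard inputs
```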
Standard TTL circuits operate with a 5-volt power supply. A TTL input signal is defined as "low" when between 0 V and 0.8 V with respect to the ground terminal, and "high" when between 2 V and VCC (5 V). If a voltage signal ranging between 0.8 V and 2.0 V is sent into the input of a TTL gate, there is no certain response from the gate, and the input is therefore considered "uncertain" (precise logic levels vary slightly between sub-types and by temperature). TTL outputs are typically restricted to narrower limits of between 0.0 V and 0.4 V for a "low" and between 2.4 V and VCC for a "high", providing at least 0.4 V of noise immunity. Standardization of the TTL levels is so ubiquitous that complex circuit boards often contain TTL chips made by many different manufacturers, selected for availability and cost, with compatibility assured. Two circuit-board units off the same assembly line on different successive days or weeks might have a different mix of brands of chips in the same positions on the board; repair is possible with chips manufactured years later than the original components. Within usefully broad limits, logic gates can be treated as ideal Boolean devices without concern for electrical limitations. The 0.4 V noise margins are adequate because of the low output impedance of the driver stage; that is, a large amount of noise power superimposed on the output is needed to drive an input into an undefined region.
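The quoted noise immunity can be verified directly from these four limits; a minimal check in Python:

```python
# Noise margins derived from the TTL input and output level limits above.
V_OL_MAX = 0.4  # worst-case output "low"
V_IL_MAX = 0.8  # highest input voltage still read as "low"
V_IH_MIN = 2.0  # lowest input voltage still read as "high"
V_OH_MIN = 2.4  # worst-case output "high"

print(V_IL_MAX - V_OL_MAX)  # 0.4 V margin in the low state
print(V_OH_MIN - V_IH_MIN)  # 0.4 V margin in the high state
```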
In some cases (e.g., when the output of a TTL logic gate needs to be used for driving the input of a CMOS gate), the voltage level of the "totem-pole" output stage at output logical "1" can be increased closer to VCC by connecting an external resistor between the V4 collector and the positive rail. It pulls up the V5 cathode and cuts off the diode. However, this technique actually converts the sophisticated "totem-pole" output into a simple output stage having significant output resistance when driving a high level (determined by the external resistor).
Packaging
Like most integrated circuits of the period 1963–1990, commercial TTL devices are usually packaged in dual in-line packages (DIPs), usually with 14 to 24 pins, for through-hole or socket mounting. Epoxy plastic (PDIP) packages were often used for commercial temperature range components, while ceramic packages (CDIP) were used for military temperature range parts.
Beam-lead chip dies without packages were made for assembly into larger arrays as hybrid integrated circuits. Parts for military and aerospace applications were packaged in flatpacks, a form of surface-mount package, with leads suitable for welding or soldering to printed circuit boards. Today, many TTL-compatible devices are available in surface-mount packages, which are available in a wider array of types than through-hole packages.
TTL is particularly well suited to bipolar integrated circuits because additional inputs to a gate merely required additional emitters on a shared base region of the input transistor. If individually packaged transistors were used, the cost of all the transistors would discourage one from using such an input structure. But in an integrated circuit, the additional emitters for extra gate inputs add only a small area.
At least one computer manufacturer, IBM, built its own flip chip integrated circuits with TTL; these chips were mounted on ceramic multi-chip modules.
Comparison with other logic families
TTL devices consume substantially more power than equivalent CMOS devices at rest, but power consumption does not increase with clock speed as rapidly as for CMOS devices. Compared to contemporary ECL circuits, TTL uses less power and has easier design rules but is substantially slower. Designers can combine ECL and TTL devices in the same system to achieve best overall performance and economy, but level-shifting devices are required between the two logic families. TTL is less sensitive to damage from electrostatic discharge than early CMOS devices.
Due to the output structure of TTL devices, the output impedance is asymmetrical between the high and low state, making them unsuitable for driving transmission lines. This drawback is usually overcome by buffering the outputs with special line-driver devices where signals need to be sent through cables. ECL, by virtue of its symmetric low-impedance output structure, does not have this drawback.
The TTL "totem-pole" output structure often has a momentary overlap when both the upper and lower transistors are conducting, resulting in a substantial pulse of current drawn from the power supply. These pulses can couple in unexpected ways between multiple integrated circuit packages, resulting in reduced noise margin and lower performance. TTL systems usually have a decoupling capacitor for every one or two IC packages, so that a current pulse from one TTL chip does not momentarily reduce the supply voltage to another.
Since the mid-1980s, several manufacturers have supplied CMOS logic equivalents with TTL-compatible input and output levels, usually bearing part numbers similar to the equivalent TTL component and with the same pinouts. For example, the 74HCT00 series provides many drop-in replacements for bipolar 7400 series parts, but uses CMOS technology.
Sub-types
Successive generations of technology produced compatible parts with improved power consumption or switching speed, or both. Although vendors uniformly marketed these various product lines as TTL with Schottky diodes, some of the underlying circuits, such as used in the LS family, could rather be considered DTL.
Variations of and successors to the basic TTL family, which has a typical gate propagation delay of 10 ns and a power dissipation of 10 mW per gate, for a power–delay product (PDP) or switching energy of about 100 pJ, include the following (a quick numerical check of these figures appears after the list):
Low-power TTL (L), which traded switching speed (33 ns) for a reduction in power consumption (1 mW) (now essentially replaced by CMOS logic)
High-speed TTL (H), with faster switching than standard TTL (6 ns) but significantly higher power dissipation (22 mW)
Schottky TTL (S), introduced in 1969, which used Schottky diode clamps at gate inputs to prevent charge storage and improve switching time. These gates operated more quickly (3 ns) but had higher power dissipation (19 mW)
Low-power Schottky TTL (LS) – used the higher resistance values of low-power TTL and the Schottky diodes to provide a good combination of speed (9.5 ns) and reduced power consumption (2 mW), and PDP of about 20 pJ. Probably the most common type of TTL, these were used as glue logic in microcomputers, essentially replacing the former H, L, and S sub-families.
Fast (F) and Advanced-Schottky (AS) variants of LS from Fairchild and TI, respectively, circa 1985, with "Miller-killer" circuits to speed up the low-to-high transition. These families achieved PDPs of 10 pJ and 4 pJ, respectively, the lowest of all the TTL families.
Low-voltage TTL (LVTTL) for 3.3-volt power supplies and memory interfacing.
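As noted above, the switching-energy figures follow from multiplying each family's typical gate delay by its per-gate power dissipation. The loop below reproduces the numbers quoted in the list (LVTTL is omitted because no figures are given for it):

```python
# Power-delay product (switching energy) check for the figures quoted above.
families = {            # family: (typical gate delay in ns, power per gate in mW)
    "standard TTL": (10.0, 10.0),
    "L":            (33.0, 1.0),
    "H":            (6.0, 22.0),
    "S":            (3.0, 19.0),
    "LS":           (9.5, 2.0),
}
for name, (delay_ns, power_mw) in families.items():
    pdp_pj = delay_ns * power_mw   # 1 ns * 1 mW = 1 pJ
    print(f"{name}: {pdp_pj:.0f} pJ")
# standard TTL -> 100 pJ and LS -> 19 pJ, matching the "about 100 pJ"
# and "about 20 pJ" figures in the text.
```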
Most manufacturers offer commercial and extended temperature ranges: for example Texas Instruments 7400 series parts are rated from 0 to 70 °C, and 5400 series devices over the military-specification temperature range of −55 to +125 °C.
Special quality levels and high-reliability parts are available for military and aerospace applications.
Radiation-hardened devices (for example from the SNJ54 series) are offered for space applications.
Applications
Before the advent of VLSI devices, TTL integrated circuits were a standard method of construction for the processors of minicomputers and midrange mainframe computers, such as the DEC VAX and Data General Eclipse; however, some computer families were based on proprietary components (e.g. Fairchild CTL), while supercomputers and high-end mainframes used emitter-coupled logic. TTL devices were also used for equipment such as machine-tool numerical controls, printers, and video display terminals and, as microprocessors became more functional, for "glue logic" applications such as address decoders and bus drivers, which tie together the function blocks realized in VLSI elements. The Gigatron TTL is a more recent (2018) example of a processor built entirely with TTL integrated circuits.
Analog applications
While originally designed to handle logic-level digital signals, a TTL inverter can be biased as an analog amplifier. Connecting a resistor between the output and the input biases the TTL element as a negative feedback amplifier. Such amplifiers may be useful to convert analog signals to the digital domain but would not ordinarily be used where analog amplification is the primary purpose. TTL inverters can also be used in crystal oscillators where their analog amplification ability is significant.
A TTL gate may operate inadvertently as an analog amplifier if the input is connected to a slowly changing input signal that traverses the unspecified region from 0.8 V to 2 V. The output can be erratic when the input is in this range. A slowly changing input like this can also cause excess power dissipation in the output circuit. If such an analog input must be used, there are specialized TTL parts with Schmitt trigger inputs available that will reliably convert the analog input to a digital value, effectively operating as a one-bit A-to-D converter.
Serial signaling
TTL serial refers to single-ended serial communication using raw transistor voltage levels: "low" for 0 and "high" for 1. UART over TTL serial is a common debug interface for embedded devices. Handheld devices such as graphing calculators, GPS receivers, and fishfinders also commonly use UART with TTL levels. TTL serial is only a de facto standard: there are no strict electrical guidelines. Driver–receiver modules interface between TTL and longer-range serial standards; one example is the MAX232, which converts to and from RS-232.
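As an illustration, a host typically drives such a TTL-level UART through a USB-to-serial adapter. The sketch below uses the third-party pyserial library; the device path, baud rate, and command string are assumptions for the example.

```python
# Minimal UART-over-TTL-serial sketch using the pyserial library.
# The port name, baud rate, and command are assumptions; a USB-to-TTL
# adapter typically appears as a device like this on Linux.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0) as port:
    port.write(b"version\r\n")   # send a hypothetical command to the device
    reply = port.readline()      # read one line of the response
    print(reply.decode(errors="replace"))
```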
Differential TTL is TTL serial carried over a differential pair with complement levels, providing much enhanced noise tolerance. Both RS-422 and RS-485 signals can be produced using TTL levels.
ccTalk is based on TTL voltage levels.
| Technology | Semiconductors | null |
47772 | https://en.wikipedia.org/wiki/Instruction%20set%20architecture | Instruction set architecture | In computer science, an instruction set architecture (ISA) is an abstract model that generally defines how software controls the CPU in a computer or a family of computers. A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation of that ISA.
In general, an ISA defines the supported instructions, data types, registers, the hardware support for managing main memory, fundamental features (such as memory consistency, addressing modes, and virtual memory), and the input/output model of implementations of the ISA.
An ISA specifies the behavior of machine code running on implementations of that ISA in a fashion that does not depend on the characteristics of that implementation, providing binary compatibility between implementations. This enables multiple implementations of an ISA that differ in characteristics such as performance, physical size, and monetary cost (among other things), but that are capable of running the same machine code, so that a lower-performance, lower-cost machine can be replaced with a higher-cost, higher-performance machine without having to replace software. It also enables the evolution of the microarchitectures of the implementations of that ISA, so that a newer, higher-performance implementation of an ISA can run software that runs on previous generations of implementations.
If an operating system maintains a standard and compatible application binary interface (ABI) for a particular ISA, machine code will run on future implementations of that ISA and operating system. However, if an ISA supports running multiple operating systems, it does not guarantee that machine code for one operating system will run on another operating system, unless the first operating system supports running machine code built for the other operating system.
An ISA can be extended by adding instructions or other capabilities, or adding support for larger addresses and data values; an implementation of the extended ISA will still be able to execute machine code for versions of the ISA without those extensions. Machine code using those extensions will only run on implementations that support those extensions.
The binary compatibility that they provide makes ISAs one of the most fundamental abstractions in computing.
Overview
An instruction set architecture is distinguished from a microarchitecture, which is the set of processor design techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but they have radically different internal designs.
The concept of an architecture, distinct from the design of a specific machine, was developed by Fred Brooks at IBM during the design phase of System/360.
Some virtual machines that support bytecode as their ISA, such as Smalltalk, the Java virtual machine, and Microsoft's Common Language Runtime, implement it by translating the bytecode for commonly used code paths into native machine code. In addition, these virtual machines execute less frequently used code paths by interpretation (see: Just-in-time compilation). Transmeta implemented the x86 instruction set atop VLIW processors in this fashion.
Classification of ISAs
An ISA may be classified in a number of different ways. A common classification is by architectural complexity. A complex instruction set computer (CISC) has many specialized instructions, some of which may only be rarely used in practical programs. A reduced instruction set computer (RISC) simplifies the processor by efficiently implementing only the instructions that are frequently used in programs, while the less common operations are implemented as subroutines, having their resulting additional processor execution time offset by infrequent use.
Other types include very long instruction word (VLIW) architectures, and the closely related and explicitly parallel instruction computing (EPIC) architectures. These architectures seek to exploit instruction-level parallelism with less hardware than RISC and CISC by making the compiler responsible for instruction issue and scheduling.
Architectures with even less complexity have been studied, such as the minimal instruction set computer (MISC) and one-instruction set computer (OISC). These are theoretically important types, but have not been commercialized.
Instructions
Machine language is built up from discrete statements or instructions. Depending on the processing architecture, a given instruction may specify:
opcode (the instruction to be performed) e.g. add, copy, test
any explicit operands:
registers
literal/constant values
addressing modes used to access memory
More complex operations are built up by combining these simple instructions, which are executed sequentially, or as otherwise directed by control flow instructions.
Instruction types
Examples of operations common to many instruction sets include:
Data handling and memory operations
Set a register to a fixed constant value.
Copy data from a memory location or a register to a memory location or a register (a machine instruction is often called move; however, the term is misleading). They are used to store the contents of a register, the contents of another memory location or the result of a computation, or to retrieve stored data to perform a computation on it later. They are often called load or store operations.
Read or write data from hardware devices.
Arithmetic and logic operations
Add, subtract, multiply, or divide the values of two registers, placing the result in a register, possibly setting one or more condition codes in a status register.
Increment or decrement a value in a register or memory location, in some ISAs, saving an operand fetch in trivial cases.
Perform bitwise operations, e.g., taking the conjunction and disjunction of corresponding bits in a pair of registers, taking the negation of each bit in a register.
Compare two values in registers (for example, to see if one is less, or if they are equal).
Floating-point instructions for arithmetic on floating-point numbers.
Control flow operations
Branch to another location in the program and execute instructions there.
Conditionally branch to another location if a certain condition holds.
Indirectly branch to another location.
Call another block of code, while saving the location of the next instruction as a point to return to.
Coprocessor instructions
Load/store data to and from a coprocessor or exchanging with CPU registers.
Perform coprocessor operations.
Complex instructions
Processors may include "complex" instructions in their instruction set. A single "complex" instruction does something that may take many instructions on other computers. Such instructions are typified by instructions that take multiple steps, control multiple functional units, or otherwise appear on a larger scale than the bulk of simple instructions implemented by the given processor. Some examples of "complex" instructions include:
transferring multiple registers to or from memory (especially the stack) at once
moving large blocks of memory (e.g. string copy or DMA transfer)
complicated integer and floating-point arithmetic (e.g. square root, or transcendental functions such as logarithm, sine, cosine, etc.)
SIMD instructions, a single instruction performing an operation on many homogeneous values in parallel, possibly in dedicated SIMD registers
performing an atomic test-and-set instruction or other read–modify–write atomic instruction
instructions that perform ALU operations with an operand from memory rather than a register
Complex instructions are more common in CISC instruction sets than in RISC instruction sets, but RISC instruction sets may include them as well. RISC instruction sets generally do not include ALU operations with memory operands, or instructions to move large blocks of memory, but most RISC instruction sets include SIMD or vector instructions that perform the same arithmetic operation on multiple pieces of data at the same time. SIMD instructions have the ability of manipulating large vectors and matrices in minimal time. SIMD instructions allow easy parallelization of algorithms commonly involved in sound, image, and video processing. Various SIMD implementations have been brought to market under trade names such as MMX, 3DNow!, and AltiVec.
Instruction encoding
On traditional architectures, an instruction includes an opcode that specifies the operation to perform, such as add contents of memory to register—and zero or more operand specifiers, which may specify registers, memory locations, or literal data. The operand specifiers may have addressing modes determining their meaning or may be in fixed fields. In very long instruction word (VLIW) architectures, which include many microcode architectures, multiple simultaneous opcodes and operands are specified in a single instruction.
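As a concrete illustration of fixed-field encoding, the Python sketch below packs a 4-bit opcode and three 4-bit register specifiers into a 16-bit word; this format is invented for the example and matches no real ISA.

```python
# Toy 16-bit instruction format: 4-bit opcode plus three 4-bit register fields.
# The layout is invented for illustration, not taken from any real ISA.
def encode(opcode: int, rd: int, rs1: int, rs2: int) -> int:
    assert all(0 <= field < 16 for field in (opcode, rd, rs1, rs2))
    return (opcode << 12) | (rd << 8) | (rs1 << 4) | rs2

def decode(word: int):
    return (word >> 12) & 0xF, (word >> 8) & 0xF, (word >> 4) & 0xF, word & 0xF

ADD = 0x1                      # hypothetical opcode value
word = encode(ADD, 3, 1, 2)    # add r3, r1, r2
print(hex(word))               # -> 0x1312
print(decode(word))            # -> (1, 3, 1, 2)
```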
Some exotic instruction sets do not have an opcode field, such as transport triggered architectures (TTA), only operand(s).
Most stack machines have "0-operand" instruction sets in which arithmetic and logical operations lack any operand specifier fields; only instructions that push operands onto the evaluation stack or that pop operands from the stack into variables have operand specifiers. The instruction set carries out most ALU actions with postfix (reverse Polish notation) operations that work only on the expression stack, not on data registers or arbitrary main memory cells. This can be very convenient for compiling high-level languages, because most arithmetic expressions can be easily translated into postfix notation.
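The translation is easy to demonstrate: the postfix form of (a + b) × c maps directly onto a 0-operand instruction sequence, as in this small Python interpreter (the operation names are illustrative):

```python
# Evaluating reverse Polish (postfix) code on an expression stack,
# mirroring how a 0-operand stack machine executes (a + b) * c.
def run(program, env):
    stack = []
    for op in program:
        if op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:                       # any other token: push a variable's value
            stack.append(env[op])
    return stack.pop()

# (a + b) * c  ->  a b add c mul
print(run(["a", "b", "add", "c", "mul"], {"a": 2, "b": 3, "c": 4}))  # -> 20
```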
Conditional instructions often have a predicate field—a few bits that encode the specific condition to cause an operation to be performed rather than not performed. For example, a conditional branch instruction will transfer control if the condition is true, so that execution proceeds to a different part of the program, and not transfer control if the condition is false, so that execution continues sequentially. Some instruction sets also have conditional moves, so that the move will be executed, and the data stored in the target location, if the condition is true, and not executed, and the target location not modified, if the condition is false. Similarly, IBM z/Architecture has a conditional store instruction. A few instruction sets include a predicate field in every instruction; this is called branch predication.
Number of operands
Instruction sets may be categorized by the maximum number of operands explicitly specified in instructions.
(In the examples that follow, a, b, and c are (direct or calculated) addresses referring to memory cells, while reg1 and so on refer to machine registers. An executable sketch contrasting these operand styles appears after the list.)
C = A+B
0-operand (zero-address machines), so-called stack machines: All arithmetic operations take place using the top one or two positions on the stack: push a, push b, add, pop c.
C = A+B needs four instructions. For stack machines, the terms "0-operand" and "zero-address" apply to arithmetic instructions, but not to all instructions, as 1-operand push and pop instructions are used to access memory.
1-operand (one-address machines), so-called accumulator machines, include early computers and many small microcontrollers: most instructions specify a single right operand (that is, a constant, a register, or a memory location), with the implicit accumulator as the left operand (and the destination if there is one): load a, add b, store c.
C = A+B needs three instructions.
2-operand — many CISC and RISC machines fall under this category:
CISC — move A to C; then add B to C.
C = A+B needs two instructions. This effectively 'stores' the result without an explicit store instruction.
CISC — Often machines are limited to one memory operand per instruction: load a,reg1; add b,reg1; store reg1,c; This requires a load/store pair for any memory movement regardless of whether the add result is an augmentation stored to a different place, as in C = A+B, or the same memory location: A = A+B.
C = A+B needs three instructions.
RISC — Requiring explicit memory loads, the instructions would be: load a,reg1; load b,reg2; add reg1,reg2; store reg2,c.
C = A+B needs four instructions.
3-operand, allowing better reuse of data:
CISC — It becomes either a single instruction: add a,b,c
C = A+B needs one instruction.
CISC — Or, on machines limited to two memory operands per instruction, move a,reg1; add reg1,b,c;
C = A+B needs two instructions.
RISC — arithmetic instructions use registers only, so explicit 2-operand load/store instructions are needed: load a,reg1; load b,reg2; add reg1+reg2->reg3; store reg3,c;
C = A+B needs four instructions.
Unlike 2-operand or 1-operand, this leaves all three values a, b, and c in registers available for further reuse.
more operands—some CISC machines permit a variety of addressing modes that allow more than 3 operands (registers or memory accesses), such as the VAX "POLY" polynomial evaluation instruction.
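As promised above, the following Python sketch walks the same computation, C = A+B, through three of these styles; memory is modeled as a dictionary and registers as plain variables, so the instruction counts, not the mechanics, are the point.

```python
# C = A + B expressed in three operand styles; each commented step stands
# for one machine instruction.
mem = {"a": 2, "b": 3, "c": 0}

# 0-operand (stack machine): push a; push b; add; pop c  -> 4 instructions
stack = []
stack.append(mem["a"])                   # push a
stack.append(mem["b"])                   # push b
stack.append(stack.pop() + stack.pop())  # add
mem["c"] = stack.pop()                   # pop c

# 1-operand (accumulator machine): load a; add b; store c  -> 3 instructions
acc = mem["a"]                           # load a
acc += mem["b"]                          # add b
mem["c"] = acc                           # store c

# 3-operand RISC: load a,reg1; load b,reg2; add; store  -> 4 instructions
reg1 = mem["a"]                          # load a,reg1
reg2 = mem["b"]                          # load b,reg2
reg3 = reg1 + reg2                       # add reg1,reg2->reg3
mem["c"] = reg3                          # store reg3,c
# reg1, reg2, and reg3 all remain available for further reuse.

print(mem["c"])  # 5 in every style; only the instruction counts differ
```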
Due to the large number of bits needed to encode the three registers of a 3-operand instruction, RISC architectures that have 16-bit instructions are invariably 2-operand designs, such as the Atmel AVR, TI MSP430, and some versions of ARM Thumb. RISC architectures that have 32-bit instructions are usually 3-operand designs, such as the ARM, AVR32, MIPS, Power ISA, and SPARC architectures.
Each instruction specifies some number of operands (registers, memory locations, or immediate values) explicitly. Some instructions give one or both operands implicitly, such as by being stored on top of the stack or in an implicit register. If some of the operands are given implicitly, fewer operands need be specified in the instruction. When a "destination operand" explicitly specifies the destination, an additional operand must be supplied. Consequently, the number of operands encoded in an instruction may differ from the mathematically necessary number of arguments for a logical or arithmetic operation (the arity). Operands are either encoded in the "opcode" representation of the instruction, or else are given as values or addresses following the opcode.
Register pressure
Register pressure measures the availability of free registers at any point in time during the program execution. Register pressure is high when a large number of the available registers are in use; thus, the higher the register pressure, the more often the register contents must be spilled into memory. Increasing the number of registers in an architecture decreases register pressure but increases the cost.
While embedded instruction sets such as Thumb suffer from extremely high register pressure because they have small register sets, general-purpose RISC ISAs like MIPS and Alpha enjoy low register pressure. CISC ISAs like x86-64 offer low register pressure despite having smaller register sets. This is due to the many addressing modes and optimizations (such as sub-register addressing, memory operands in ALU instructions, absolute addressing, PC-relative addressing, and register-to-register spills) that CISC ISAs offer.
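Register pressure can be caricatured with a simple counting model: whenever more values are simultaneously live than there are architectural registers, the excess must be spilled to memory. The liveness profile below is invented for the example, and real compilers use far more sophisticated allocation.

```python
# Simplified register-pressure model: count the values that must be
# spilled when live values exceed the architectural register count.
def spills(live_counts, num_regs):
    """live_counts: live values at each program point; returns total spills."""
    return sum(max(0, live - num_regs) for live in live_counts)

live = [2, 5, 9, 12, 7, 3]   # hypothetical liveness profile of a program
for regs in (8, 16, 32):
    print(regs, "registers ->", spills(live, regs), "spilled values")
# More architectural registers -> fewer spills, i.e. lower register pressure.
```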
Instruction length
The size or length of an instruction varies widely, from as little as four bits in some microcontrollers to many hundreds of bits in some VLIW systems. Processors used in personal computers, mainframes, and supercomputers have minimum instruction sizes between 8 and 64 bits. The longest possible instruction on x86 is 15 bytes (120 bits). Within an instruction set, different instructions may have different lengths. In some architectures, notably most reduced instruction set computers (RISC), instructions are a fixed length, typically corresponding with that architecture's word size. In other architectures, instructions have variable length, typically integral multiples of a byte or a halfword. Some, such as ARM with its Thumb extension, have mixed variable encoding, that is, two fixed encodings, usually 32-bit and 16-bit, where instructions cannot be mixed freely but must be switched between on a branch (or exception boundary in ARMv8).
Fixed-length instructions are less complicated to handle than variable-length instructions for several reasons (not having to check whether an instruction straddles a cache line or virtual memory page boundary, for instance), and are therefore somewhat easier to optimize for speed.
Code density
In early 1960s computers, main memory was expensive and very limited, even on mainframes. Minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the size of the instructions needed to perform a particular task, the code density, was an important characteristic of any instruction set. It remained important on the initially-tiny memories of minicomputers and then microprocessors. Density remains important today, for smartphone applications, applications downloaded into browsers over slow Internet connections, and in ROMs for embedded applications. A more general advantage of increased density is improved effectiveness of caches and instruction prefetch.
Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named Complex Instruction Set Computers, CISC). However, more typical, or frequent, "CISC" instructions merely combine a basic ALU operation, such as "add", with the access of one or more operands in memory (using addressing modes such as direct, indirect, indexed, etc.). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment, etc. Software-implemented instruction sets may have even more complex and powerful instructions.
Reduced instruction-set computers, RISC, were first widely implemented during a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory location into a register. A RISC instruction set normally has a fixed instruction length, whereas a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories.
Certain embedded RISC ISAs like Thumb and AVR32 typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which is then unpacked at the decode stage and executed as two instructions.
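The packing itself is plain bit manipulation, as the following Python sketch shows; the halfword order and the example encodings are illustrative assumptions.

```python
# Packing two 16-bit instruction halfwords into one 32-bit word and
# unpacking them again at decode, as in the scheme described above.
# Halfword order and the example encodings are illustrative choices.
def pack(first: int, second: int) -> int:
    assert 0 <= first < 1 << 16 and 0 <= second < 1 << 16
    return (second << 16) | first    # first halfword in the low 16 bits

def unpack(word: int):
    return word & 0xFFFF, (word >> 16) & 0xFFFF

word = pack(0xB500, 0x4770)            # two example 16-bit encodings
print(hex(word))                       # -> 0x4770b500
print([hex(h) for h in unpack(word)])  # -> ['0xb500', '0x4770']
```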
Minimal instruction set computers (MISC) are commonly a form of stack machine, where there are few separate instructions (8–32), so that multiple instructions can be fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA or in a multi-core form. The code density of MISC is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task.
There has been research into executable compression as a mechanism for improving code density. The mathematics of Kolmogorov complexity describes the challenges and limits of this.
In practice, code density is also dependent on the compiler. Most optimizing compilers have options that control whether to optimize code generation for execution speed or for code density. For instance GCC has the option -Os to optimize for small machine code size, and -O3 to optimize for execution speed at the cost of larger machine code.
Representation
The instructions constituting a program are rarely specified using their internal, numeric form (machine code); they may be specified by programmers using an assembly language or, more commonly, may be generated from high-level programming languages by compilers.
Design
The design of instruction sets is a complex issue. There were two broad stages in the history of the microprocessor. The first was the CISC (Complex Instruction Set Computer), which had many different instructions. In the 1970s, however, researchers at places like IBM found that many instructions in the set could be eliminated. The result was the RISC (Reduced Instruction Set Computer), an architecture that uses a smaller set of instructions. A simpler instruction set may offer the potential for higher speeds, reduced processor size, and reduced power consumption. However, a more complex set may optimize common operations, improve memory and cache efficiency, or simplify programming.
Some instruction set designers reserve one or more opcodes for some kind of system call or software interrupt. For example, MOS Technology 6502 uses 00H, Zilog Z80 uses the eight codes C7, CF, D7, DF, E7, EF, F7, FFH, while Motorola 68000 uses codes in the range A000..AFFFH.
Fast virtual machines are much easier to implement if an instruction set meets the Popek and Goldberg virtualization requirements.
The NOP slide used in immunity-aware programming is much easier to implement if the "unprogrammed" state of the memory is interpreted as a NOP.
On systems with multiple processors, non-blocking synchronization algorithms are much easier to implement if the instruction set includes support for something such as "fetch-and-add", "load-link/store-conditional" (LL/SC), or "atomic compare-and-swap".
Instruction set implementation
A given instruction set can be implemented in a variety of ways. All ways of implementing a particular instruction set provide the same programming model, and all implementations of that instruction set are able to run the same executables. The various ways of implementing an instruction set give different tradeoffs between cost, performance, power consumption, size, etc.
When designing the microarchitecture of a processor, engineers use blocks of "hard-wired" electronic circuitry (often designed separately) such as adders, multiplexers, counters, registers, ALUs, etc. Some kind of register transfer language is then often used to describe the decoding and sequencing of each instruction of an ISA using this physical microarchitecture.
There are two basic ways to build a control unit to implement this description (although many designs use middle ways or compromises):
Some computer designs "hardwire" the complete instruction set decoding and sequencing (just like the rest of the microarchitecture).
Other designs employ microcode routines or tables (or both) to do this, using ROMs or writable RAMs (writable control store), PLAs, or both.
Some microcoded CPU designs with a writable control store use it to allow the instruction set to be changed (for example, the Rekursiv processor and the Imsys Cjip).
CPUs designed for reconfigurable computing may use field-programmable gate arrays (FPGAs).
An ISA can also be emulated in software by an interpreter. Naturally, due to the interpretation overhead, this is slower than directly running programs on the emulated hardware, unless the hardware running the emulator is an order of magnitude faster. Today, it is common practice for vendors of new ISAs or microarchitectures to make software emulators available to software developers before the hardware implementation is ready.
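A software emulator of this kind is, at its core, a fetch-decode-execute loop. The following toy accumulator ISA (opcodes invented for illustration, far simpler than any real instruction set) sketches the idea:

#include <stdint.h>
#include <stdio.h>

/* A toy accumulator ISA: each instruction is one opcode byte plus one
 * operand byte. Opcodes are invented for this sketch only. */
enum { OP_LOADI = 0, OP_ADDI = 1, OP_PRINT = 2, OP_HALT = 3 };

void run(const uint8_t *code) {
    int32_t acc = 0;
    for (size_t pc = 0; ; pc += 2) {        /* fetch-decode-execute loop */
        uint8_t op = code[pc], arg = code[pc + 1];
        switch (op) {
        case OP_LOADI: acc = arg;            break;
        case OP_ADDI:  acc += arg;           break;
        case OP_PRINT: printf("%d\n", acc);  break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    const uint8_t program[] = { OP_LOADI, 40, OP_ADDI, 2,
                                OP_PRINT, 0,  OP_HALT, 0 };
    run(program);  /* prints 42 */
    return 0;
}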
Often the details of the implementation have a strong influence on the particular instructions selected for the instruction set. For example, many implementations of the instruction pipeline only allow a single memory load or memory store per instruction, leading to a load–store architecture (RISC). For another example, some early ways of implementing the instruction pipeline led to a delay slot.
The demands of high-speed digital signal processing have pushed in the opposite direction—forcing instructions to be implemented in a particular way. For example, to perform digital filters fast enough, the MAC instruction in a typical digital signal processor (DSP) must use a kind of Harvard architecture that can fetch an instruction and two data words simultaneously, and it requires a single-cycle multiply–accumulate multiplier.
| Technology | Computer architecture concepts | null |
47789 | https://en.wikipedia.org/wiki/Quality%20of%20life | Quality of life | Quality of life (QOL) is defined by the World Health Organization as "an individual's perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards and concerns".
Standard indicators of the quality of life include wealth, employment, the environment, physical and mental health, education, recreation and leisure time, social belonging, religious beliefs, safety, security and freedom. QOL has a wide range of contexts, including the fields of international development, healthcare, politics and employment. Health related QOL (HRQOL) is an evaluation of QOL and its relationship with health.
Engaged theory
One approach, called the engaged theory, outlined in the journal Applied Research in Quality of Life, posits four domains in assessing quality of life: ecology, economics, politics and culture. In the domain of culture, for example, it includes the following subdomains of quality of life:
Beliefs and ideas
Creativity and recreation
Enquiry and learning
Gender and generations
Identity and engagement
Memory and projection
Well-being and health
Under this conception, other frequently related concepts include freedom, human rights, and happiness. However, since happiness is subjective and difficult to measure, other measures are generally given priority. It has also been shown that happiness, as much as it can be measured, does not necessarily increase correspondingly with the comfort that results from increasing income. As a result, standard of living should not be taken to be a measure of happiness. Also, sometimes considered related is the concept of human security, though the latter may be considered at a more basic level and for all people.
Quantitative measurement
Unlike per capita GDP or standard of living, both of which can be measured in financial terms, it is harder to make objective or long-term measurements of the quality of life experienced by nations or other groups of people. Researchers have begun in recent times to distinguish two aspects of personal well-being: emotional well-being, in which respondents are asked about the quality of their everyday emotional experiences (the frequency and intensity of their experiences of, for example, joy, stress, sadness, anger and affection), and life evaluation, in which respondents are asked to think about their life in general and evaluate it against a scale. These and other systems and scales of measurement have been in use for some time. Research has attempted to examine the relationship between quality of life and productivity.
There are many different methods of measuring quality of life in terms of health care, wealth, and material goods. However, it is much more difficult to measure meaningful expression of one's desires. One way to do so is to evaluate the scope of how individuals have fulfilled their own ideals. Quality of life can simply mean happiness, the subjective state of mind. Under that view, citizens of a developing country may report greater contentment, since they are satisfied with the basic necessities of health care, education and child protection.
According to ecological economist Robert Costanza:
Human Development Index
Perhaps the most commonly used international measure of development is the Human Development Index (HDI), which combines measures of life expectancy, education, and standard of living in an attempt to quantify the options available to individuals within a given society. The HDI is used by the United Nations Development Programme in its Human Development Report. Since 2010, the Human Development Report has also included an Inequality-adjusted Human Development Index (IHDI). While the original HDI remains useful, the report stated that "the IHDI is the actual level of human development (accounting for inequality), while the original HDI can be viewed as an index of 'potential' human development (or the maximum level of HDI) that could be achieved if there was no inequality."
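As a sketch of how the three dimensions combine, under the post-2010 methodology the HDI is the geometric mean of three dimension indices, each normalized to the range 0 to 1; the sample index values below are invented for illustration:

#include <math.h>
#include <stdio.h>

/* Post-2010 HDI methodology: the geometric mean of three dimension
 * indices (health, education, income), each already normalized to [0, 1].
 * Compile with -lm for the math library. */
double hdi(double health, double education, double income) {
    return cbrt(health * education * income);
}

int main(void) {
    /* Invented example indices, not any real country's figures. */
    printf("HDI = %.3f\n", hdi(0.90, 0.75, 0.80));  /* ~0.814 */
    return 0;
}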
World Happiness Report
The World Happiness Report is a landmark survey on the state of global happiness. It ranks 156 countries by their happiness levels, reflecting growing global interest in using happiness and subjective well-being as indicators of the quality of human development. Its growing influence has allowed governments, communities and organizations to use the data to record happiness and to shape policies aimed at better lives. The reports review the state of happiness in the world today and show how the science of happiness explains personal and national variations in happiness.
Also developed by the United Nations and published along with the HDI, this report combines both objective and subjective measures to rank countries by happiness, which is deemed the ultimate outcome of a high quality of life. It uses surveys from Gallup, real GDP per capita, healthy life expectancy, having someone to count on, perceived freedom to make life choices, freedom from corruption, and generosity to derive the final score. Happiness is already recognized as an important concept in global public policy. The World Happiness Report indicates that some regions have in recent years been experiencing progressive inequality of happiness.
Other measures
The Physical Quality of Life Index (PQLI) is a measure developed by sociologist M. D. Morris in the 1970s, based on basic literacy, infant mortality, and life expectancy. Although not as complex as other measures, and now essentially replaced by the Human Development Index, the PQLI is notable for Morris's attempt to show a "less fatalistic pessimistic picture" by focusing on three areas where global quality of life was generally improving at the time, while ignoring gross national product and other possible indicators that were not improving.
The Happy Planet Index, introduced in 2006, is unique among quality of life measures in that, in addition to standard determinants of well-being, it uses each country's ecological footprint as an indicator. As a result, European and North American nations do not dominate this measure. The 2012 list is instead topped by Costa Rica, Vietnam and Colombia.
In 2010, Gallup researchers trying to find the world's happiest countries found Denmark to be at the top of the list. For the period 2014–2016, Norway surpassed Denmark at the top of the list.
A 2010 study by two Princeton University professors looked at 1,000 randomly selected U.S. residents over an extended period. It concluded that their life evaluations – that is, their considered evaluations of their life against a stated scale of one to ten – rise steadily with income. On the other hand, their reported quality of emotional daily experiences (their reported experiences of joy, affection, stress, sadness, or anger) levels off after a certain income level (approximately $75,000 per year in 2010); income above $75,000 does not lead to more experiences of happiness nor to further relief of unhappiness or stress. Below this income level, respondents reported decreasing happiness and increasing sadness and stress, implying that the pain of life's misfortunes, including disease, divorce, and being alone, is exacerbated by poverty.
Gross national happiness (GNH) and other subjective measures of happiness are being used by the governments of Bhutan and the United Kingdom. The World Happiness Report, issued by Columbia University, is a meta-analysis of happiness globally and provides an overview of countries and grassroots activists using GNH. The OECD issued a guide for the use of subjective well-being metrics in 2013. In the U.S., cities and communities are using a GNH metric at a grassroots level.
The Social Progress Index measures the extent to which countries provide for the social and environmental needs of their citizens. Fifty-two indicators in the areas of basic human needs, foundations of wellbeing, and opportunity show the relative performance of nations. The index uses outcome measures when there is sufficient data available or the closest possible proxies.
The Day-Reconstruction Method was another way of measuring happiness, in which researchers asked their subjects to recall various things they did on the previous day and describe their mood during each activity. Though simple and approachable, the method depends on memory; experiments confirmed that the answers people give are similar to those of subjects who repeatedly relived each episode. The method eventually declined, as it called for more effort and thoughtful responses, which often included interpretations and outcomes that do not occur to people who are asked to record every action in their daily lives.
The Digital Quality of Life Index is a yearly study on digital well-being across 121 countries, created by Surfshark. It indexes each country according to five pillars that impact a population's digital quality of life: internet affordability, internet quality, electronic infrastructure, electronic security, and electronic government.
Livability
The term quality of life is also used by politicians and economists to measure the livability of a given city or nation. Two widely known measures of livability are the Economist Intelligence Unit's Where-to-be-born Index and Mercer's Quality of Living Reports. These two measures calculate the livability of countries and cities around the world, respectively, through a combination of subjective life-satisfaction surveys and objective determinants of quality of life such as divorce rates, safety, and infrastructure. Such measures relate more broadly to the population of a city, state, or country, not to individual quality of life. Livability has a long history and tradition in urban design, and neighborhoods design standards such as LEED-ND are often used in an attempt to influence livability.
Crimes
Some crimes against property (e.g., graffiti and vandalism) and some "victimless crimes" have been referred to as "quality-of-life crimes". American sociologist James Q. Wilson encapsulated this argument as the broken windows theory, which asserts that relatively minor problems left unattended (such as litter, graffiti, or public urination by homeless individuals) send a subliminal message that disorder, in general, is being tolerated, and as a result, more serious crimes will end up being committed (the analogy being that a broken window left broken shows an image of general dilapidation).
Wilson's theories have been used to justify the implementation of zero tolerance policies by many prominent American mayors, most notably Oscar Goodman in Las Vegas, Richard Riordan in Los Angeles, Rudolph Giuliani in New York City and Gavin Newsom in San Francisco. Such policies refuse to tolerate even minor crimes; proponents argue that this will improve the quality of life of local residents. However, critics of zero tolerance policies believe that such policies neglect investigation on a case-by-case basis and may lead to unreasonably harsh penalties for crimes.
In healthcare
Within the field of healthcare, quality of life is often regarded in terms of how a certain ailment affects a patient on an individual level. This may be a debilitating weakness that is not life-threatening; life-threatening illness that is not terminal; terminal illness; the predictable, natural decline in the health of an elder; an unforeseen mental/physical decline of a loved one; or chronic, end-stage disease processes. Researchers at the University of Toronto's Quality of Life Research Unit define quality of life as "The degree to which a person enjoys the important possibilities of his or her life" (UofT). Their Quality of Life Model is based on the categories "being", "belonging", and "becoming"; respectively who one is, how one is connected to one's environment, and whether one achieves one's personal goals, hopes, and aspirations.
Experience sampling studies show substantial between-person variability in within-person associations between somatic symptoms and quality of life. Hecht and Shiel measure quality of life as "the patient's ability to enjoy normal life activities" since life quality is strongly related to wellbeing without suffering from sickness and treatment.
In international development
Quality of life has been deemed an important concept in the field of international development because it allows development to be analyzed on a measure that is generally accepted as more comprehensive than standard of living. Within development theory, however, there are varying ideas concerning what constitutes desirable change for a particular society. The different ways that quality of life is defined by institutions, therefore, shape how these organizations work for its improvement as a whole.
Organisations such as the World Bank, for example, declare a goal of "working for a world free of poverty", with poverty defined as a lack of basic human needs, such as food, water, shelter, freedom, access to education, healthcare, or employment. In other words, poverty is defined as a low quality of life. Using this definition, the World Bank works towards improving quality of life through the stated goal of lowering poverty and helping people afford a better quality of life.
Other organizations, however, may also work towards improved global quality of life using a slightly different definition and substantially different methods. Many NGOs do not focus at all on reducing poverty on a national or international scale, but rather attempt to improve the quality of life for individuals or communities. One example would be sponsorship programs that provide material aid for specific individuals. Although many organizations of this type may still talk about fighting poverty, the methods are significantly different.
Improving quality of life involves action not only by NGOs but also by governments. Global health has the potential to achieve greater political presence if governments were to incorporate aspects of human security into foreign policy. Stressing individuals' basic rights to health, food, shelter, and freedom addresses prominent inter-sectoral problems negatively impacting today's society, and may lead to greater action and resources. Integration of global health concerns into foreign policy may be hampered by approaches that are shaped by the overarching roles of defense and diplomacy.
| Biology and health sciences | Health and fitness: General | Health |
47795 | https://en.wikipedia.org/wiki/Voyager%20program | Voyager program | The Voyager program is an American scientific program that employs two interstellar probes, Voyager 1 and Voyager 2. They were launched in 1977 to take advantage of a favorable planetary alignment to explore the two gas giants, Jupiter and Saturn, and potentially also the ice giants, Uranus and Neptune, flying near them while collecting data for transmission back to Earth. After Voyager 1 successfully completed its flyby of Saturn and its moon Titan, it was decided to send Voyager 2 on flybys of Uranus and Neptune.
After the planetary flybys were complete, it was decided to keep the probes in operation to explore interstellar space and the outer regions of the solar system. On 25 August 2012, data from Voyager 1 indicated that it had entered interstellar space. On 5 November 2018, data from Voyager 2 indicated that it also had entered interstellar space, and on 4 November 2019 scientists reported that the Voyager 2 probe had officially reached the interstellar medium (ISM), a region of outer space beyond the influence of the solar wind, as Voyager 1 had in 2012. In August 2018, NASA confirmed, based on results by the New Horizons spacecraft, the existence of a "hydrogen wall" at the outer edges of the Solar System that was first detected in 1992 by the two Voyager spacecraft.
As of 2024, the Voyagers are still in operation beyond the outer boundary of the heliosphere in interstellar space. Voyager 1 is moving at about 17 km/s (10.5 miles per second) relative to the Sun, and Voyager 2 at about 15 km/s, with both probes continuing to recede from the Sun and from Earth as of May 25, 2024.
The two Voyagers are the only human-made objects to date that have passed into interstellar space — a record they will hold until at least the 2040s — and Voyager 1 is the farthest human-made object from Earth.
History
Mariner Jupiter-Saturn
The two Voyager space probes were originally conceived as part of the Planetary Grand Tour planned during the late 1960s and early 70s that aimed to explore Jupiter, Saturn, Saturn's moon, Titan, Uranus, Neptune, and Pluto. The mission originated from the Grand Tour program, conceptualized by Gary Flandro, an aerospace engineer at the Jet Propulsion Laboratory, in 1964, which leveraged a rare planetary alignment occurring once every 175 years. This alignment allowed a craft to reach all outer planets using gravitational assists. The mission was to send several pairs of probes and gained momentum in 1966 when it was endorsed by NASA's Jet Propulsion Laboratory. However, in December 1971, the Grand Tour mission was canceled when funding was redirected to the Space Shuttle program.
In 1972, a scaled-down (four planets, two identical spacecraft) mission was proposed, utilizing a spacecraft derived from the Mariner series, initially intended to be Mariner 11 and Mariner 12. The gravity-assist technique, successfully demonstrated by Mariner 10, would be used to achieve significant velocity changes by maneuvering through an intermediate planet's gravitational field, minimizing the flight time to Saturn. The spacecraft were then moved into a separate program named Mariner Jupiter-Saturn (also Mariner Jupiter-Saturn-Uranus, MJS, or MJSU), part of the Mariner program, later renamed because it was thought that the design of the two space probes had progressed sufficiently beyond that of the Mariner family to merit a separate name.
Voyager probes
On March 4, 1977, NASA announced a competition to rename the mission, believing the existing name was not appropriate as the mission had differed significantly from previous Mariner missions. Voyager was chosen as the new name, referencing an earlier suggestion by William Pickering, who had proposed the name Navigator. Due to the name change occurring close to launch, the probes were still occasionally referred to as Mariner 11 and Mariner 12, or even Voyager 11 and Voyager 12.
Two mission trajectories were established: JST, aimed at Jupiter, Saturn, and an enhanced Titan flyby, and JSX, which served as a contingency plan. JST focused on the Titan flyby, while JSX provided a flexible mission plan: if JST succeeded, JSX could proceed with the Grand Tour, but in case of failure, JSX could be redirected for a separate Titan flyby, forfeiting the Grand Tour opportunity. The second probe, now Voyager 2, followed the JSX trajectory, granting it the option to continue on to Uranus and Neptune. Upon Voyager 1 completing its main objectives at Saturn, Voyager 2 received a mission extension, enabling it to proceed to Uranus and Neptune and diverge from the originally planned JST trajectory.
The probes would be launched in August or September 1977, with their main objective being to compare the characteristics of Jupiter and Saturn, such as their atmospheres, magnetic fields, particle environments, ring systems, and moons. They would fly by planets and moons in either a JST or JSX trajectory. After completing their flybys, the probes would communicate with Earth, relaying vital data using their magnetometers, spectrometers, and other instruments to detect interstellar, solar, and cosmic radiation. Their radioisotope thermoelectric generators (RTGs) would limit the maximum communication time with the probes to roughly a decade. Following their primary missions, the probes would continue to drift into interstellar space.
Voyager 2 was the first to be launched. Its trajectory was designed to allow flybys of Jupiter, Saturn, Uranus, and Neptune. Voyager 1 was launched after Voyager 2, but along a shorter and faster trajectory that was designed to provide an optimal flyby of Saturn's moon Titan, which was known to be quite large and to possess a dense atmosphere. This encounter sent Voyager 1 out of the plane of the ecliptic, ending its planetary science mission. Had Voyager 1 been unable to perform the Titan flyby, the trajectory of Voyager 2 could have been altered to explore Titan, forgoing any visit to Uranus and Neptune. Voyager 1 was not launched on a trajectory that would have allowed it to continue to Uranus and Neptune, but could have continued from Saturn to Pluto without exploring Titan.
During the 1990s, Voyager 1 overtook the slower deep-space probes Pioneer 10 and Pioneer 11 to become the most distant human-made object from Earth, a record that it will keep for the foreseeable future. The New Horizons probe, which had a higher launch velocity than Voyager 1, is travelling more slowly due to the extra speed Voyager 1 gained from its flybys of Jupiter and Saturn. Voyager 1 and Pioneer 10 are the most widely separated human-made objects anywhere since they are travelling in roughly opposite directions from the Solar System.
In December 2004, Voyager 1 crossed the termination shock, where the solar wind is slowed to subsonic speed, and entered the heliosheath, where the solar wind is compressed and made turbulent due to interactions with the interstellar medium. On 10 December 2007, Voyager 2 also reached the termination shock, about 10 AU closer to the Sun than where Voyager 1 first crossed it, indicating that the Solar System is asymmetrical.
In 2010 Voyager 1 reported that the outward velocity of the solar wind had dropped to zero, and scientists predicted it was nearing interstellar space. In 2011, data from the Voyagers determined that the heliosheath is not smooth, but filled with giant magnetic bubbles, theorized to form when the magnetic field of the Sun becomes warped at the edge of the Solar System.
In June 2012, scientists at NASA reported that Voyager 1 was very close to entering interstellar space, indicated by a sharp rise in high-energy particles from outside the Solar System. In September 2013, NASA announced that Voyager 1 had crossed the heliopause on 25 August 2012, making it the first spacecraft to enter interstellar space.
In December 2018, NASA announced that Voyager 2 had crossed the heliopause on 5 November 2018, making it the second spacecraft to enter interstellar space.
Voyager 1 and Voyager 2 continue to monitor conditions in the outer expanses of the Solar System. The Voyager spacecraft are expected to be able to operate science instruments through 2020, when limited power will require instruments to be deactivated one by one. Sometime around 2025, there will no longer be sufficient power to operate any science instruments.
In July 2019, a revised power management plan was implemented to better manage the two probes' dwindling power supply.
Spacecraft design
Each Voyager spacecraft is now lighter than it was at launch because of fuel usage, and part of each spacecraft's mass is its complement of scientific instruments. The identical Voyager spacecraft use three-axis-stabilized guidance systems that use gyroscopic and accelerometer inputs to their attitude control computers to point their high-gain antennas towards the Earth and their scientific instruments towards their targets, sometimes with the help of a movable instrument platform for the smaller instruments and the electronic photography system.
The diagram shows the high-gain antenna (HGA), a parabolic dish attached to the hollow decagonal electronics container, along with a spherical tank that contains the hydrazine monopropellant fuel.
The Voyager Golden Record is attached to one of the bus sides. The angled square panel to the right is the optical calibration target and excess heat radiator. The three radioisotope thermoelectric generators (RTGs) are mounted end-to-end on the lower boom.
The scan platform comprises: the Infrared Interferometer Spectrometer (IRIS) (largest camera at top right); the Ultraviolet Spectrometer (UVS) just above the IRIS; the two Imaging Science Subsystem (ISS) vidicon cameras to the left of the UVS; and the Photopolarimeter System (PPS) under the ISS.
Only five investigation teams are still supported, though data is collected for two additional instruments.
The Flight Data Subsystem (FDS) and a single eight-track digital tape recorder (DTR) provide the data handling functions.
The FDS configures each instrument and controls instrument operations. It also collects engineering and science data and formats the data for transmission. The DTR is used to record high-rate Plasma Wave Subsystem (PWS) data, which is played back every six months.
The Imaging Science Subsystem, made up of a wide-angle and a narrow-angle camera, is a modified version of the slow-scan vidicon camera designs used in the earlier Mariner flights. It consists of two television-type cameras, each with eight filters in a commandable filter wheel mounted in front of the vidicons. One has a low-resolution wide-angle lens with an aperture of f/3 (the wide-angle camera), while the other uses a higher-resolution narrow-angle f/8.5 lens (the narrow-angle camera).
Three spacecraft were built, Voyager 1 (VGR 77-1), Voyager 2 (VGR 77-3), and test spare model (VGR 77-2).
Scientific instruments
Computers and data processing
There are three different computer types on the Voyager spacecraft, two of each kind, sometimes used for redundancy. They are proprietary, custom-built computers built from CMOS and TTL medium-scale integrated circuits and discrete components, mostly from the 7400 series by Texas Instruments. The total number of words among the six computers is about 32K. Voyager 1 and Voyager 2 have identical computer systems.
The Computer Command System (CCS), the central controller of the spacecraft, has two 18-bit word, interrupt-type processors with 4096 words each of non-volatile plated-wire memory. During most of the Voyager mission the two CCS computers on each spacecraft were used non-redundantly to increase the command and processing capability of the spacecraft. The CCS is nearly identical to the system flown on the Viking spacecraft.
The Flight Data System (FDS) is two 16-bit word machines with modular memories and 8198 words each.
The Attitude and Articulation Control System (AACS) is two 18-bit word machines with 4096 words each.
Unlike the other on-board instruments, the operation of the cameras for visible light is not autonomous, but rather it is controlled by an imaging parameter table contained in one of the on-board digital computers, the Flight Data Subsystem (FDS). More recent space probes, since about 1990, usually have completely autonomous cameras.
The computer command subsystem (CCS) controls the cameras. The CCS contains fixed computer programs such as command decoding, fault detection, and correction routines, antenna-pointing routines, and spacecraft sequencing routines. This computer is an improved version of the one that was used in the Viking orbiter. The hardware in both custom-built CCS subsystems in the Voyagers is identical. There is only a minor software modification for one of them that has a scientific subsystem that the other lacks.
According to the Guinness Book of Records, the CCS holds the record for the "longest period of continual operation for a computer", having run continuously since 20 August 1977.
The Attitude and Articulation Control Subsystem (AACS) controls the spacecraft orientation (its attitude). It keeps the high-gain antenna pointing towards the Earth, controls attitude changes, and points the scan platform. The custom-built AACS systems on both craft are identical.
It has been erroneously reported on the Internet that the Voyager space probes were controlled by a version of the RCA 1802 (RCA CDP1802 "COSMAC" microprocessor), but such claims are not supported by the primary design documents. The CDP1802 microprocessor was used later in the Galileo space probe, which was designed and built years later. The digital control electronics of the Voyagers were not based on a microprocessor integrated-circuit chip.
Communications
The uplink communications are executed via S-band microwave communications. The downlink communications are carried out by an X-band microwave transmitter on board the spacecraft, with an S-band transmitter as a back-up. All long-range communications to and from the two Voyagers have been carried out using their high-gain antennas. The high-gain antenna has a beamwidth of 0.5° for X-band, and 2.3° for S-band. (The low-gain antenna has a 7 dB gain and 60° beamwidth.)
Because of the inverse-square law in radio communications, the digital data rates used in the downlinks from the Voyagers have been continually decreasing the farther they get from the Earth. For example, the data rate used from Jupiter was about 115,000 bits per second; that was halved at the distance of Saturn, and it has gone down continually since then. Some measures were taken on the ground to reduce the effects of the inverse-square law: between 1982 and 1985, the diameters of the three main parabolic dish antennas of the Deep Space Network were increased, dramatically enlarging their areas for gathering weak microwave signals.
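A back-of-the-envelope sketch of the inverse-square effect follows, using the approximate Sun-planet distances in AU rather than the true Earth-spacecraft ranges, and ignoring the antenna and coding improvements described below; the naive estimate is therefore more pessimistic than the roughly halved rate actually achieved:

#include <stdio.h>

/* Received signal power falls off as 1/d^2, so, all else being equal,
 * the usable data rate scales roughly the same way. The Jupiter-era
 * rate is from the text; the scaling itself is the illustration, not a
 * model of the actual link budget. */
double scaled_rate(double rate_at_d0, double d0, double d) {
    return rate_at_d0 * (d0 / d) * (d0 / d);
}

int main(void) {
    double jupiter = 5.2, saturn = 9.5;   /* approximate AU from the Sun */
    printf("naive Saturn-distance rate: ~%.0f bit/s\n",
           scaled_rate(115000.0, jupiter, saturn));   /* ~34,000 bit/s */
    return 0;
}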
Whilst the craft were between Saturn and Uranus the onboard software was upgraded to do a degree of image compression and to use a more efficient Reed-Solomon error-correcting encoding.
Then between 1986 and 1989, new techniques were brought into play to combine the signals from multiple antennas on the ground into one, more powerful signal, in a kind of an antenna array. This was done at Goldstone, California, Canberra (Australia), and Madrid (Spain) using the additional dish antennas available there. Also, in Australia, the Parkes Radio Telescope was brought into the array in time for the fly-by of Neptune in 1989. In the United States, the Very Large Array in New Mexico was brought into temporary use along with the antennas of the Deep Space Network at Goldstone. Using this new technology of antenna arrays helped to compensate for the immense radio distance from Neptune to the Earth.
Power
Electrical power is supplied by three MHW-RTG radioisotope thermoelectric generators (RTGs). They are powered by plutonium-238 (distinct from the Pu-239 isotope used in nuclear weapons) and provided approximately 470 W at 30 volts DC when the spacecraft was launched. Plutonium-238 decays with a half-life of 87.74 years, so RTGs using Pu-238 lose 1 − 0.5^(1/87.74) ≈ 0.79% of their power output per year.
In 2011, 34 years after launch, the thermal power generated by such an RTG would be reduced to 0.5^(34/87.74) ≈ 76% of its initial power. The RTG thermocouples, which convert thermal power into electricity, also degrade over time, reducing available electric power below this calculated level.
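As a check on this arithmetic, a few lines of C reproduce the decay fraction; the half-life is taken from the text, and thermocouple losses are deliberately excluded:

#include <math.h>
#include <stdio.h>

/* Fraction of the original radioisotope thermal power remaining after
 * t years of Pu-238 decay (half-life 87.74 years). Thermocouple
 * degradation, noted in the text, reduces electrical output further.
 * Compile with -lm for the math library. */
double thermal_fraction(double years) {
    return pow(0.5, years / 87.74);
}

int main(void) {
    printf("after 34 years: %.1f%%\n",
           100.0 * thermal_fraction(34.0));   /* ~76.4% */
    return 0;
}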
By 7 October 2011 the power generated by Voyager 1 and Voyager 2 had dropped to 267.9 W and 269.2 W respectively, about 57% of the power at launch. The level of power output was better than pre-launch predictions based on a conservative thermocouple degradation model. As the electrical power decreases, spacecraft loads must be turned off, eliminating some capabilities. There may be insufficient power for communications by 2032.
Voyager Interstellar Mission
The Voyager primary mission was completed in 1989, with the close flyby of Neptune by Voyager 2. The Voyager Interstellar Mission (VIM) is a mission extension, which began when the two spacecraft had already been in flight for over 12 years. The Heliophysics Division of the NASA Science Mission Directorate conducted a Heliophysics Senior Review in 2008. The panel found that the VIM "is a mission that is absolutely imperative to continue" and that VIM "funding near the optimal level and increased DSN (Deep Space Network) support is warranted."
The main objective of the VIM was to extend the exploration of the Solar System beyond the outer planets to the heliopause (the farthest extent at which the Sun's radiation predominates over interstellar winds) and if possible even beyond. Voyager 1 crossed the heliopause boundary in 2012, followed by Voyager 2 in 2018. Passing through the heliopause boundary has allowed both spacecraft to make measurements of the interstellar fields, particles and waves unaffected by the solar wind. Two significant findings so far have been the discovery of a region of magnetic bubbles and no indication of an expected shift in the Solar magnetic field.
The entire Voyager 2 scan platform, including all of the platform instruments, was switched off in 1998. All platform instruments on Voyager 1, except for the ultraviolet spectrometer (UVS) have also been switched off.
The Voyager 1 scan platform was scheduled to go off-line in late 2000 but has been left on to investigate UV emission from the upwind direction.
UVS data are still captured but scans are no longer possible.
Gyro operations ended in 2016 for Voyager 2 and in 2017 for Voyager 1. Gyro operations are used to rotate the probe 360 degrees six times per year to measure the magnetic field of the spacecraft, which is then subtracted from the magnetometer science data.
The two spacecraft continue to operate, with some loss in subsystem redundancy but retain the capability to return scientific data from a full complement of Voyager Interstellar Mission (VIM) science instruments.
Both spacecraft also have adequate electrical power and attitude control propellant to continue operating until around 2025, after which there may not be electrical power to support science instrument operation; science data return and spacecraft operations will cease.
Mission details
By the start of VIM, Voyager 1 was at a distance of 40 AU from the Earth, while Voyager 2 was at 31 AU. VIM is in three phases: termination shock, heliosheath exploration, and interstellar exploration phase. The spacecraft began VIM in an environment controlled by the Sun's magnetic field, with the plasma particles being dominated by those contained in the expanding supersonic solar wind. This is the characteristic environment of the termination shock phase. At some distance from the Sun, the supersonic solar wind will be held back from further expansion by the interstellar wind. The first feature encountered by a spacecraft as a result of this interaction – between interstellar wind and solar wind – was the termination shock, where the solar wind slows to subsonic speed, and large changes in plasma flow direction and magnetic field orientation occur. Voyager 1 completed the phase of termination shock in December 2004 at a distance of 94 AU, while Voyager 2 completed it in August 2007 at a distance of 84 AU. After entering into the heliosheath, the spacecraft were in an area that is dominated by the Sun's magnetic field and solar wind particles. After passing through the heliosheath, the two Voyagers began the phase of interstellar exploration. The outer boundary of the heliosheath is called the heliopause. This is the region where the Sun's influence begins to decrease and interstellar space can be detected.
Voyager 1 is escaping the Solar System at the speed of 3.6 AU per year, 35° north of the ecliptic in the general direction of the solar apex in Hercules, while Voyager 2's speed is about 3.3 AU per year, heading 48° south of the ecliptic. The Voyager spacecraft will eventually go on to the stars. In about 40,000 years, Voyager 1 will be within 1.6 light-years (ly) of AC+79 3888, also known as Gliese 445, which is approaching the Sun. In 40,000 years Voyager 2 will be within 1.7 ly of Ross 248 (another star which is approaching the Sun), and in 296,000 years it will pass within 4.6 ly of Sirius, the brightest star in the night sky. The spacecraft are not expected to collide with a star for about 10^20 years.
In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System, as detected by the Voyager space probes. According to the researchers, this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose".
Voyager Golden Record
Both spacecraft carry a golden phonograph record that contains pictures and sounds of Earth, symbolic directions on the cover for playing the record, and data detailing the location of Earth. The record is intended as a combination time capsule and an interstellar message to any civilization, alien or far-future human, that may recover either of the Voyagers. The contents of this record were selected by a committee that included Timothy Ferris and was chaired by Carl Sagan.
Pale Blue Dot
Pale Blue Dot is a photograph of Earth taken on February 14, 1990, by the Voyager 1 space probe from a distance of approximately 6 billion kilometers (3.7 billion miles; 40.5 AU), as part of that day's Family Portrait series of images of the Solar System.
The Voyager program's discoveries during the primary phase of its mission, including new close-up color photos of the major planets, were regularly documented by print and electronic media outlets. Among the best-known of these is an image of the Earth as a Pale Blue Dot, taken in 1990 by Voyager 1 and popularized by Carl Sagan.
| Technology | Programs and launch sites | null |
47798 | https://en.wikipedia.org/wiki/Trogon | Trogon | The trogons and quetzals are birds in the order Trogoniformes which contains only one family, the Trogonidae. The family Trogonidae contains 46 species in seven genera. The fossil record of the trogons dates back 49 million years to the Early Eocene. They might constitute a member of the basal radiation of the order Coraciiformes and order Passeriformes or be closely related to mousebirds and owls. The word trogon is Greek for "nibbling" and refers to the fact that these birds gnaw holes in trees to make their nests.
Trogons are residents of tropical forests worldwide. The greatest diversity is in the Neotropics, where four genera, containing 24 species, occur. The genus Apaloderma contains the three African species. The genera Harpactes and Apalharpactes, containing twelve species, are found in southeast Asia.
They feed on insects and fruit, and their broad bills and weak legs reflect their diet and arboreal habits. Although their flight is fast, they are reluctant to fly any distance. Trogons are generally not migratory, although some species undertake partial local movements. Trogons have soft, often colourful, feathers with distinctive male and female plumage. They are the only type of animal with a heterodactyl toe arrangement. They nest in holes dug into trees or termite nests, laying 2–4 white or pastel-coloured eggs.
Evolution and taxonomy
The position of the trogons within the class Aves has been a long-standing mystery. A variety of relations have been suggested, including the parrots, cuckoos, toucans, jacamars and puffbirds, rollers, owls and nightjars. More recent morphological and molecular evidence has suggested a relationship with the Coliiformes. The unique arrangement of the toes on the foot (see morphology and flight) has led many to consider the trogons to have no close relatives and to place them in their own order, possibly with the similarly atypical mousebirds as their closest relatives.
The earliest formally described fossil specimen is a cranium from the Lower Eocene Fur Formation in Denmark (54 mya). Other trogoniform fossils have been found in the Messel pit deposits from the mid-Eocene in Germany (49 mya), and in Oligocene and Miocene deposits from Switzerland and France, respectively. The oldest New World fossil of a trogon is from the comparatively recent Pleistocene (less than 2.588 mya).
The family had been thought to have an Old World origin notwithstanding the current richness of the family, which is more diverse in the Neotropical New World. DNA evidence seemed to support an African origin for the trogons, with the African genus Apaloderma seemingly basal in the family, and the other two lineages, the Asian and American, breaking off 20–36 million years ago. More recent studies show that the DNA evidence gives contradictory results concerning the basal phylogenetic relationships; so it is currently unknown if all extant trogons are descended from an African ancestor, an American ancestor or neither.
The trogons are split into three subfamilies, each reflecting one of these splits. Aplodermatinae is the African subfamily and contains a single genus, Apaloderma. Harpactinae is the Asian subfamily and contains two genera, Harpactes and Apalharpactes. Apalharpactes, consisting of two species in Java and Sumatra, has only recently been accepted as a separate genus from Harpactes. The remaining subfamily, the Neotropical Trogoninae, contains the remaining four genera, Trogon, Priotelus, Pharomachrus and Euptilotis.
The two Caribbean species of Priotelus were formerly placed in separate genera (the Hispaniolan species in Temnotrogon), and are extremely ancient. The two quetzal genera, Pharomachrus and Euptilotis, are possibly derived from the final and most numerous genus of trogons in the Neotropics, Trogon. A 2008 study of the genetics of Trogon suggested the genus originated in Central America and radiated into South America after the formation of the Isthmus of Panama (as part of the Great American Interchange), thus making trogons relatively recent arrivals in South America.
Distribution and habitat
The majority of trogons are birds of tropical and subtropical forests. They have a cosmopolitan distribution in the world's wet tropics, being found in the Americas, Africa and Asia. A few species are distributed into the temperate zone, with one species, the elegant trogon, reaching the south of the United States, specifically southern Arizona and the surrounding area. The Narina trogon of Africa is slightly exceptional in that it utilises a wider range of habitats than any other trogon, ranging from dense forest to fairly open savannah, and from the Equator to southern South Africa. It is the most widespread and successful of all the trogons. The eared quetzal of Mexico is also able to use more xeric habitats, but preferentially inhabits forests. Most other species are more restricted in their habitat, with several species being restricted to undisturbed primary forest. Within forests they tend to be found in the mid-story, occasionally in the canopy.
Some species, particularly the quetzals, are adapted to cooler montane forest. There are a number of insular species; these include a number of species found in the Greater Sundas, one species in the Philippines as well as two species endemic to Cuba and Hispaniola respectively. Outside of South East Asia and the Caribbean, however, trogons are generally absent from islands, especially oceanic ones.
Trogons are generally sedentary, with no species known to undertake long migrations. A small number of species are known to make smaller migratory movements, particularly montane species which move to lower altitudes during different seasons. This has been demonstrated using radio tracking in the resplendent quetzal in Costa Rica and evidence has been accumulated for a number of other species. The Narina trogon of Africa is thought to undertake some localised short-distance migrations over parts of its range, for example birds of Zimbabwe's plateau savannah depart after the breeding season. A complete picture of these movements is however lacking. Trogons are difficult to study as their thick tarsi (feet bones) make ringing studies difficult.
Morphology and flight
The trogons as a family are fairly uniform in appearance, having compact bodies and long tails (very long in the case of the quetzals), and short necks. Trogons range in size from the small scarlet-rumped trogon to the large resplendent quetzal (not including the male quetzal's tail streamers). Their legs and feet are weak and short, and trogons are essentially unable to walk beyond a very occasional shuffle along a branch. They are even incapable of turning around on a branch without using their wings. The ratio of leg muscle to body weight in trogons is only 3%, the lowest known ratio of any bird. The arrangement of toes on the feet of trogons is also unique among birds: although it essentially resembles the zygodactyl two-forward-two-backward arrangement of parrots and other near-passerines, the actual toes are arranged with the usually inner hallux being the outer hind toe, an arrangement referred to as heterodactylous. The strong bill is short and the gape wide, particularly in the fruit-eating quetzals, with a slight hook at the end. There is also a notch at the end of the bill, and many species have slight serrations in the mandibles. The skin is exceptionally tender, making preparation of study skins difficult for museum curators. The skeletons of trogons are surprisingly slender, particularly the skulls, which are very thin. The plumage of many species is iridescent, although not in most of the Asian species. The African trogons are generally green on the back with red bellies. The New World trogons similarly have green or deep blue upperparts but are more varied in their underparts. The Asian species tend towards red underparts and brown backs.
The wings are short but strong, with the wing muscle ratio being around 22% of the body weight. In spite of the strength of their flight, trogons do not fly often or for great distances, generally flying no more than a few hundred metres at a time. Only the montane species tend to make long-distance flights. Shorter flights tend to be direct and swift, but longer flights are slightly undulating. Their flight can be surprisingly silent (for observers), although that of a few species is reportedly quite noisy.
Calls
The calls of trogons are generally loud and uncomplex, consisting of monosyllabic hoots and whistles delivered in varying patterns and sequences. The calls of the quetzals and the two Caribbean genera are the most complex. Among the Asian genera, the Sumatran trogon (Apalharpactes) has the most atypical call of any trogon; research has not yet established whether the closely related Javan trogon has a similar call. The calls of the other Asian genus, Harpactes, are remarkably uniform. In addition to the territorial and breeding calls given by males and females during the breeding seasons, trogons have been recorded as having aggression calls given by competing males, as well as alarm calls.
Behaviour
Trogons are generally inactive outside of infrequent feeding flights. Among birdwatchers and biologists it has been noted that "[a]part from their great beauty [they] are notorious ... for their lack of other immediately engaging qualities". Their lack of activity is possibly a defence against predation; trogons on all continents have been reported to shift about on branches to always keep their less brightly coloured backs turned towards observers, while their heads, which like owls can turn through 180 degrees, keep a watch on the watcher. Trogons have reportedly been preyed upon by hawks and predatory mammals; one report was of a resplendent quetzal taken while brooding young by a margay.
Diet and feeding
Trogons feed principally on insects, other arthropods, and fruit; to a lesser extent some small vertebrates such as lizards are taken. Among the insect prey, one of the more important types is caterpillars; along with cuckoos, trogons are one of the few bird groups to regularly prey upon them. Some caterpillars, such as Arsenura armida, are nevertheless known to be poisonous to trogons. The extent to which each food type is taken varies depending on geography and species. The three African trogons are exclusively insectivorous, whereas the Asian and American genera consume varying amounts of fruit. Diet is somewhat correlated with size, with larger species feeding more on fruit and smaller species focusing on insects.
Prey is almost always obtained on the wing. The most commonly employed foraging technique is a sally-glean flight, where a trogon flies from an observation perch to a target on another branch or in foliage. Once there, the bird hovers or stalls and snatches the item before returning to its perch to consume it. This type of foraging is commonly used by some types of bird to obtain insect prey; in trogons and quetzals it is also used to pluck fruit from trees. Insect prey may also be taken on the wing, with the trogon pursuing flying insects in a similar manner to drongos and Old World flycatchers. Frogs, lizards and large insects on the ground may also be pounced on from the air. More rarely some trogons may shuffle along a branch to obtain insects, insect eggs and very occasionally nestling birds. Violaceous trogons will consume wasps and wasp larvae encountered while digging nests.
Breeding
Trogons are territorial and monogamous. Males will respond quickly to playbacks of their calls and will repel other members of the same species and even other hole-nesting species from around their nesting sites. Males attract females by singing, and, in the case of the resplendent quetzal, undertaking display flights. Some species have been observed in small flocks of 3–12 individuals prior to and sometimes during the breeding season, calling and chasing each other, but the function of these flocks is unclear.
Trogons are cavity nesters. Nests are dug into rotting wood or termite nests, with one species, the violaceous trogon, nesting in wasp nests. Nest cavities can either be deep upward slanting tubes that lead to fully enclosed chambers, or much shallower open niches (from which the bird is visible). Nests are dug with the beak, incidentally giving the family its name. Nest digging may be undertaken by the male alone or by both sexes. In the case of nests dug into tree trunks, the wood must be strong enough not to collapse but soft enough to dig out. Trogons have been observed landing on dead tree trunks and slapping the wood with their tails, presumably to test the firmness.
The nests of trogons are thought to usually be unlined. Between two and four eggs are laid in a nesting attempt. These are round and generally glossy white or lightly coloured (buff, grey, blue or green), although they get increasingly dirty during incubation. Both parents incubate the eggs (except in the case of the bare-cheeked trogon, where apparently the male takes no part), with the male taking one long incubation stint a day and the female incubating the rest of the time. Incubation seems to begin after the last egg is laid. The incubation period varies by species, usually lasting between 16–19 days. On hatching the chicks are altricial, blind and naked. The chicks acquire feathers rapidly in some of the montane species, in the case of the mountain trogon in a week, but more slowly in lowland species like the black-headed trogon, which may take twice as long. The nestling period varies by species and size, with smaller species generally taking 16 to 17 days to fledge, whereas larger species may take as long as 30 days, although 23–25 days is more typical.
Relationship with humans
Trogons and quetzals are considered to be "among the most beautiful of birds", yet they are also often reclusive and seldom seen. Little is known about much of their biology, and much of what is known about them comes from the research of neotropical species by the ornithologist Alexander Skutch. Trogons are nevertheless popular birds with birdwatchers, and there is a modest ecotourism industry in particular to view quetzals in Central America.
Species list
Order Trogoniformes
Family Trogonidae
| Biology and health sciences | Trogoniformes | null |
47877 | https://en.wikipedia.org/wiki/Pickup%20truck | Pickup truck | A pickup truck or pickup is a light or medium duty truck that has an enclosed cabin, and a back end made up of a cargo bed that is enclosed by three low walls with no roof (this cargo bed back end sometimes consists of a tailgate and removable covering). In Australia and New Zealand, both pickups and coupé utilities are called utes, short for utility vehicle. In South Africa, people of all language groups use the term bakkie, a diminutive of the Afrikaans bak, meaning bowl or container.
Once a work or farming tool with few creature comforts, in the 1950s, US consumers began purchasing pickups for lifestyle reasons, and by the 1990s, less than 15 percent of owners reported use in work as the pickup truck's primary purpose. In North America, the pickup is mostly used as a passenger car and accounts for about 18% of total vehicles sold in the United States. Full-sized pickups and SUVs are an important source of revenue for major car manufacturers such as Ford, General Motors, and Stellantis, accounting for more than two-thirds of their global pre-tax earnings, though they make up just 16% of North American vehicle production. These vehicles have a high profit margin and a high price tag; in 2018, Kelley Blue Book cited an average cost (including optional features) of US$47,174 for a new Ford F-150.
The term pickup is of unknown origin. It was used by Studebaker in 1913. By the 1930s, it had become the standard term in certain markets for a light-duty truck.
History
In the early days of automobile manufacturing, vehicles were sold only as a chassis, and third parties added bodies on top. In 1902, the Rapid Motor Vehicle Company was founded by Max Grabowsky and Morris Grabowsky, who built one-ton carrying capacity trucks in Pontiac, Michigan. In 1913, the Galion Allsteel Body Company, an early developer of the pickup and dump truck, built and installed hauling boxes on slightly modified Ford Model T chassis, and from 1917, on the Model TT. Seeking part of this market share, Dodge introduced a 3/4-ton pickup with a cab and body constructed entirely of wood in 1924. In 1925, Ford followed up with a steel-bodied half-ton based on the Model T with an adjustable tailgate and heavy-duty rear springs. Billed as the "Ford Model T Runabout with Pickup Body," it sold 34,000 units. In 1928, it was replaced by the Model A, which had a closed cab, safety-glass windshield, roll-up side windows, and three-speed transmission.
In 1931, General Motors introduced light-duty pickups for both GMC and Chevrolet targeted at private ownership. These pickup trucks were based on the Chevrolet Master. In 1940, GM introduced a dedicated light-truck platform, separate from its passenger cars, named the AK series. Ford North America continued to offer a pickup body style on the Ford Model 51, and the Ford Australian division produced the first Australian "ute" in 1932. After World War II, Ford introduced its own dedicated light-duty truck platform, the F-Series, in 1948; the F-100 designation followed in 1953.
Dodge at first took over heavier-truck production from Graham-Paige while building its light (pickup) trucks on its sufficiently sturdy passenger-car frames. After switching to distinct, dedicated truck frames in 1936, Dodge/Fargo launched an extensive truck range of its own for 1939, marketed as the "Job-Rated" trucks. These Art Deco–styled trucks were continued again after World War II.
International Harvester offered the International K and KB series, which were marketed towards construction and farming and did not have a strong retail consumer presence, and Studebaker also manufactured the M-series truck. At the beginning of World War II, the United States government halted the production of privately owned pickup trucks, and all American manufacturers built heavy duty trucks for the war effort.
In the 1950s, consumers began purchasing pickups for lifestyle rather than utilitarian reasons. Car-like, smooth-sided, fenderless trucks were introduced, such as the Chevrolet Fleetside, the Chevrolet El Camino, the Dodge Sweptline, and in 1957, Ford's purpose-built Styleside. Pickups began to feature comfort items such as power options and air conditioning. During this time, pickups with four doors, known as crew cabs, started to become popular. Crew cabs were released in Japan with the 1954 Toyota Stout and the 1957 Datsun 220, and in America with the 1957 International Travelette. Other manufacturers soon followed, including Hino with the Briska in 1962, Dodge in 1963, Ford in 1965, and General Motors in 1973.
In 1961 in the UK the British Motor Corporation launched an Austin Mini Pickup version of the original 1959 Mini. It was in production until 1983.
In 1963, the US chicken tax directly curtailed the import of the Volkswagen Type 2, distorting the market in favor of US manufacturers. The tariff directly affected any country seeking to bring light trucks into the United States and effectively "squeezed smaller Asian truck companies out of the American pickup market." Over the intervening years, Detroit lobbied to protect the light-truck tariff, thereby reducing pressure on Detroit to introduce vehicles that polluted less and that offered increased fuel economy.
The US government's 1973 Corporate Average Fuel Economy (CAFE) policy set higher fuel-economy requirements for cars than for pickups. CAFE led to the replacement of the station wagon by the minivan, which belonged to the truck category and was therefore allowed to comply with less strict emissions standards. Eventually, CAFE led to the promotion of sport utility vehicles (SUVs). Pickups, unhindered by the emissions-control regulations on cars, began to replace muscle cars as the performance vehicle of choice. The Dodge Warlock appeared in Dodge's "adult toys" line, along with the Macho Power Wagon and Street Van. The 1978 gas guzzler tax, which taxed fuel-inefficient cars while exempting pickup trucks, further distorted the market in favor of pickups. Furthermore, until 1999, light trucks were not required to meet the same safety standards as cars, and 20 years later, most still lagged behind cars in the adoption of safety features.
In the 1980s, the compact Mazda B-series, Isuzu Faster, and Mitsubishi Forte debuted. Subsequently, US manufacturers built their own compact pickups for the domestic market, including the Ford Ranger and the Chevrolet S-10. Minivans made inroads into the pickups' market share, which was further eroded in the 1990s by the popularity of SUVs.
Mid-sized electric trucks had been tried early in the 20th century but soon lost out to gasoline and diesel vehicles. In 1997, the Chevrolet S-10 EV was released, but few were sold, and those were mostly to fleet operators.
By 2023, pickup trucks had become more lifestyle than utilitarian vehicles. Annual surveys of Ford F-150 owners from 2012 to 2021 revealed that 87% of owners used their trucks frequently for shopping and running errands and 70% for pleasure driving, whereas only 28% often used them for personal hauling (41% occasionally and 32% rarely or never) and just 7% used them often for towing (29% occasionally and 63% rarely or never). The 1960s–1970s Ford F-100 was typically a regular cab, roughly 64% bed and 36% cab by length; by the mid-2000s, crew cabs had largely become the norm and the bed shrank to accommodate the larger cab, so that a 2023 F-150 consisted of 63% cab and 37% bed.
International markets
While the Ford F-150 has been the best-selling vehicle in the United States since 1982, the F-150, or indeed any full-sized pickup truck, is a rare sight in Europe, where higher fuel prices and narrower city roads make such trucks difficult to use daily. In the United States, pickups are favored by a cultural attachment to the style, by lower fuel prices, and by taxes and regulations that distort the market in favor of domestically built trucks. As of 2016, the IRS offers tax breaks for business use of "any vehicle equipped with a cargo area ... of at least six feet in interior length that is not readily accessible from the passenger compartment".
In Europe, pickups represent less than 1% of light vehicles sold, the most popular being the Ford Ranger with 27,300 units sold in 2015. Other models include the Renault Alaskan (a rebadged Nissan Navara), and the Toyota Hilux.
The NOx law and other differing regulations prevent pickups from being imported to Japan, although the Mitsubishi Triton was available on the Japanese domestic market for a limited time. The most recent pickup truck for sale in Japan is the Toyota Hilux.
In China (where it is known by the English loanword as 皮卡车 pí kǎ chē), the Great Wall Wingle is manufactured domestically and exported to Australia. In Thailand, pickups manufactured for local sale and export include the Isuzu D-Max and the Mitsubishi Triton. In Latin and South America, the Toyota Hilux, Ford Ranger, VW Amarok, Dodge Ram, Chevrolet S-10, Chevrolet D-20, and Chevrolet Montana are sold.
In South Africa, pickups account for about 17% of the passenger and light commercial vehicle sales, mostly the Toyota Hilux, Ford Ranger, and Isuzu KB (Isuzu D-Max). The Volkswagen Amarok and Nissan Navara are also sold.
Design and features
In the United States and Canada, nearly all new pickups are sold with automatic transmissions. Only the Jeep Gladiator and the Toyota Tacoma are available with manual transmissions.
A regular cab, single cab or standard cab, has a single row of seats and a single set of doors, one on each side.
Extended cab or extra cab pickups add an extra space behind the main seat, sometimes including smaller jump seats which can fold out of the way to create more storage space. The first extended-cab truck in the United States was called the Club Cab and was introduced by Chrysler in 1973 on its Dodge D-series pickup trucks. Extended-cab trucks either have just a single set of doors with no direct access to the extended portion of the cab, very small (half-sized) rear doors that are rear-hinged which can only be opened after the front doors are open, or small (three-quarter-sized) front-hinged doors.
A crew cab, or double cab, seats five or six and has four full-sized, front-hinged doors. The first crew-cab truck in the United States was made by International Harvester in 1957 and was later followed by Dodge in 1963, Ford in 1965, and Chevrolet in 1973. However, they were originally available only with three-quarter-ton or one-ton models (such as Ford F-250/F-350), while half-ton trucks like Ford F-150 would not become available in four-door configuration until 2001, by which time crew cabs also started overtaking regular/extended cabs in popularity.
Cab-over or cab forward designs have the cab sitting above the front axle. This arrangement allows a longer cargo area for the same overall length. An early cab-forward, drop-sided pickup was the Volkswagen Transporter, introduced in 1952. This configuration is more common among European and Japanese manufacturers than in North America. The design was more popular in North America in the 1950s and 1960s, with examples including the Chevrolet Corvair Rampside and Loadside, Dodge A-100 and A-108, Ford Econoline, and Jeep FC-150 and FC-170.
A "dually" is a North American colloquial term for a pickup with four rear wheels instead of two, able to carry more weight over the rear axle. Vehicles similar to the pickup include the coupé utility, a car-based pickup, and the larger sport utility truck (SUT), based on a sport utility vehicle (SUV).
The terms half-ton, three-quarter-ton, and one-ton are remnants from a time when the number referred to the maximum cargo capacity by weight.
In North America, some pickup trucks may be marketed as heavy duty (e.g., Ram Heavy Duty), super duty (e.g., Ford Super Duty) or simply "HD". This is typically a pickup truck with a higher payload and/or towing capability than is standard for its size. While the label is often treated as synonymous with "dually" or full-size pickup trucks in North America, neither is a requirement: a dually configuration is not available on the Ram 2500 or Ford F-250 and is optional on the Ram 3500 or Ford F-350, yet all of these pickup trucks are heavy duty. The Mahindra Bolero MaXX Pik-Up HD is a heavy-duty mid-size pickup truck with a two-tonne payload.
Some pickup trucks have an opening at the rear of the cab to increase cargo capacity lengthwise without increasing overall vehicle length or wheelbase (which would reduce the breakover, approach, and departure angles and increase the turning radius). This feature is referred to as a mid-gate because it is located in the middle of the truck, as opposed to the tailgate at the rear.
Bed styles
The cargo bed can vary in size according to whether the vehicle is optimized for cargo utility or passenger comfort. Most have fixed side walls and a hinged tailgate. Cargo beds are normally found in two styles: stepside and fleetside. A stepside bed has fenders that extend on the outside of the cargo area; originally, these were just fenders attached to a cargo box. This style used to be the standard design, as it was cheaper to manufacture. A fleetside bed has the wheel wells inside a double-walled bed and is usually styled to match the cab. The two types of bed have been given a variety of names by different manufacturers; "Stepside" and "Fleetside" originate with Chevrolet but are also frequently used by Dodge as well as GMC. GMC has also used "Wideside" instead of Fleetside, while Dodge has also used "Utiline" and "Sweptline" for the two types. Ford uses "Flareside" and "Styleside", respectively. Jeep has used "Sportside" and "Thriftside" for the separate-fender style, and "Townside" for flush designs. International Harvester called the two types "Standard" and "Bonus-Load".
The first fleetside pickup truck was the Crosley in the 1940s, followed by the 1955 Chevrolet Cameo Carrier. Early pickups had wood-plank beds, which were largely replaced by steel by the 1960s. In many parts of the world, pickups frequently use a dropside bed – a flat tray with hinged panels that can be lowered separately on the sides and the rear. The fleetside has gradually replaced the earlier, separate-fender look entirely: the last time Chevrolet and GMC used the Stepside style was on the 2005 Silverado and Sierra 1500 models, and Ford last used the Flareside style on the 2009 F-150.
Safety
Consumer pickup trucks sold in the US have increased in weight by 32% since 1990. Cabins have also grown and risen further from the ground, and grille and hood sizes have increased over time. These changes give a modern standard pickup truck a longer blind spot in front of its grille than most other vehicles, as well as larger blind spots behind and to the side. The Ford F-250 has a hood that sits high off the ground, and it may be impossible for the driver to see a small object, such as a child, for a considerable distance in front of the vehicle. A total of 575 children in the US died in front-over incidents between 2009 and 2019, most struck by their own parents; this is an 89% increase in mortality over the previous ten years. Additionally, US car-related fatalities went up by 8% and pedestrian casualties increased by 46% between 2011 and 2021. While the reasons for this increase are complex, Consumer Reports partially attributes it to increased truck size and prevalence. Chuck Farmer of the US Insurance Institute for Highway Safety has found large pickup trucks to be as deadly as, or deadlier than, muscle cars, saying that they "... are work trucks, and people should not be using them primarily for commuting, because they kill so many other drivers."
Uses
In the United States and Canada, pickups are used primarily for passenger transport. Pickup trucks are often marketed and used for their hauling (utilizing cargo bed) and towing (utilizing body-on-frame design and long wheelbase) capabilities.
Pickup trucks are also used by many journeymen, tradesmen, and outdoor enthusiasts. They are also used to move or transport large goods. For example, in the US, a homeowner can rent a pickup truck to transport a large appliance from a home supply store.
Equipping pickup trucks with a camper shell provides a small space for camping. Slide-in truck campers can offer a pickup truck the amenities of a small motorhome, but still allow the operator the option of removal and independent use of the vehicle.
Pickups are popular with overlanders, as they are often the most affordable vehicles capable of carrying the large quantities of fuel needed for long-distance remote travel and generator use without expensive modifications.
Modified pickups can be used as improvised, unarmored combat vehicles known as technicals.
Pickup trucks are used to carry passengers in parts of Africa and Southeast Asia. In Thailand, most songthaews are converted pickup trucks and flatbed trucks. In Haiti, tap taps are also converted pickup trucks.
Towing with pickup trucks is separated into two categories: conventional towing (bumper pull) and in-bed (heavy-duty) towing. Conventional towing mounts the hitch at the rear of the pickup truck; in-bed towing mounts the hitch directly above or in front of the rear axle. Weight-distribution hitches fall under conventional towing, while fifth-wheel and gooseneck hitches fall under in-bed towing.
Sizes
Kei/Mini truck
Kei trucks are a Japanese vehicle class with a maximum length of 3.4 m, a maximum width of 1.48 m, a maximum height of 2.0 m, and a maximum engine displacement of 660 cc.
In some countries, mini trucks are similar to, or slightly bigger than, kei trucks. In other countries, e.g., the United States, "mini truck" is another name for any pickup smaller than full-size pickups.
UTVs are of similar size and serve similar roles in developed countries but are typically restricted to off-road and rural areas.
Compact pickup truck
Typically, a unibody pickup truck built on a compact SUV platform or a compact passenger-car platform. Examples include the Hyundai Santa Cruz and Ford Maverick. Subaru also produced the unibody Subaru Baja, based heavily on the Subaru Outback (Legacy) wagon, and the Subaru BRAT, based on the Subaru Leone wagon. Other variations include the Holden Crewman and Holden One Tonner, which are based on a sedan platform but use a part-monocoque, part chassis-frame construction.
Mid-size pickup truck
Typically, a body-on-frame pickup truck of a similar size to a mid-size SUV. Examples include the Ford Ranger, Toyota Hilux, and Isuzu D-Max. This is usually the largest size pickup sold or manufactured in countries outside North America.
Full-size pickup truck
A body-on-frame pickup truck with an exterior width of more than two meters (excluding mirrors and/or widebody/flares for dually wheels).
| Technology | Motorized road transport | null |
47878 | https://en.wikipedia.org/wiki/Huntington%27s%20disease | Huntington's disease | Huntington's disease (HD), also known as Huntington's chorea, is an incurable neurodegenerative disease that is mostly inherited. The earliest symptoms are often subtle problems with mood or mental/psychiatric abilities. A general lack of coordination and an unsteady gait often follow. It is also a basal ganglia disease causing a hyperkinetic movement disorder known as chorea. As the disease advances, the uncoordinated, involuntary body movements of chorea become more apparent. Physical abilities gradually worsen until coordinated movement becomes difficult and the person is unable to talk. Mental abilities generally decline into dementia, depression, apathy, and at times impulsivity. The specific symptoms vary somewhat between people. Symptoms can begin at any age but usually start between 30 and 50 years of age, typically around 40. The disease may develop earlier in each successive generation. About eight percent of cases start before the age of 20 and are known as juvenile HD; these typically present with the slow-movement symptoms of Parkinson's disease rather than with chorea.
HD is typically inherited from an affected parent, who carries a mutation in the huntingtin gene (HTT). However, up to 10% of cases are due to a new mutation. The huntingtin gene provides the genetic information for the huntingtin protein (Htt). Expansion of a repeated stretch of CAG (cytosine-adenine-guanine) triplets in the gene, known as a trinucleotide repeat expansion, results in an abnormal mutant protein (mHtt), which gradually damages brain cells through a number of possible mechanisms. The mutation is dominant, so a single expanded copy inherited from either parent is sufficient to cause the disease. Diagnosis is by genetic testing, which can be carried out at any time, regardless of whether or not symptoms are present. This fact raises several ethical debates: the age at which an individual is considered mature enough to choose testing; whether parents have the right to have their children tested; and managing confidentiality and disclosure of test results.
No cure for HD is known, and full-time care is required in the later stages. Treatments can relieve some symptoms and, in some people, improve quality of life. The best evidence for treatment of the movement problems is with tetrabenazine. HD affects about 4 to 15 in 100,000 people of European descent. It is rare among the Finnish and Japanese, while the occurrence rate in Africa is unknown. The disease affects males and females equally. Complications such as pneumonia, heart disease, and physical injury from falls reduce life expectancy, with fatal aspiration pneumonia commonly cited as the ultimate cause of death for those with the condition. Suicide is the cause of death in about 9% of cases. Death typically occurs 15–20 years from when the disease was first detected.
The earliest known description of the disease was in 1841 by American physician Charles Oscar Waters. The condition was described in further detail in 1872 by American physician George Huntington. The genetic basis was discovered in 1993 by an international collaborative effort led by the Hereditary Disease Foundation. Research and support organizations began forming in the late 1960s to increase public awareness, provide support for individuals and their families and promote research. Research directions include determining the exact mechanism of the disease, improving animal models to aid with research, testing of medications and their delivery to treat symptoms or slow the progression of the disease, and studying procedures such as stem-cell therapy with the goal of replacing damaged or lost neurons.
Signs and symptoms
Signs and symptoms of Huntington's disease most commonly become noticeable between the ages of 30 and 50 years, but they can begin at any age and present as a triad of motor, cognitive, and psychiatric symptoms. When the disease develops at an early age, it is known as juvenile Huntington's disease. In 50% of cases, the psychiatric symptoms appear first. Their progression is often described in early, middle, and late stages, with an earlier prodromal phase. In the early stages, subtle personality changes, problems in cognition and physical skills, irritability, and mood swings occur, all of which may go unnoticed, and these usually precede the motor symptoms. Almost everyone with HD eventually exhibits similar physical symptoms, but the onset, progression, and extent of cognitive and behavioral symptoms vary significantly between individuals.
The most characteristic initial physical symptoms are jerky, random, and uncontrollable movements called chorea. Many people are neither aware of their involuntary movements nor impeded by them. Chorea may initially be exhibited as general restlessness, small unintentionally initiated or uncompleted motions, lack of coordination, or slowed saccadic eye movements. These minor motor abnormalities usually precede more obvious signs of motor dysfunction by at least three years. Symptoms such as rigidity, writhing motions, or abnormal posturing appear clearly as the disorder progresses. These are signs that the system in the brain responsible for movement has been affected. Psychomotor functions become increasingly impaired, such that any action requiring muscle control is affected. Impaired muscle control, such as rigidity or muscle contracture, is known as dystonia, a neurological hyperkinetic movement disorder that results in twisting or repetitive movements, which may resemble a tremor. Common consequences are physical instability, abnormal facial expression, and difficulties chewing, swallowing, and speaking. Sleep disturbances and weight loss are also associated symptoms. Eating difficulties commonly cause weight loss and may lead to malnutrition; weight loss is common in people with Huntington's disease and progresses with the disease. Juvenile HD generally progresses at a faster rate with greater cognitive decline, and chorea is exhibited briefly, if at all; the Westphal variant of slowness of movement, rigidity, and tremors is more typical in juvenile HD, as are seizures.
Cognitive abilities are progressively impaired and tend to decline into dementia. Especially affected are executive functions, which include planning, cognitive flexibility, abstract thinking, rule acquisition, initiation of appropriate actions, and inhibition of inappropriate actions. Cognitive impairments include difficulty focusing on tasks, lack of flexibility, a lack of impulse control, a lack of awareness of one's own behaviors and abilities, and difficulty learning or processing new information. As the disease progresses, memory deficits tend to appear. Reported impairments range from short-term memory deficits to long-term memory difficulties, including deficits in episodic memory (memory of one's life), procedural memory (the body's memory of how to perform an activity), and working memory.
Reported neuropsychiatric signs are anxiety, depression, a reduced display of emotions, egocentrism, aggression, compulsive behavior, hallucinations, and delusions. Other common psychiatric disorders include obsessive–compulsive disorder, mania, insomnia, and bipolar disorder. Difficulties in recognizing other people's negative expressions have also been observed. The prevalence of these symptoms is highly variable between studies, with estimated lifetime prevalence rates of psychiatric disorders between 33 and 76%. For many with the disease and their families, these symptoms are among the most distressing aspects of the disease, often affecting daily functioning and constituting reason for institutionalization. Early behavioral changes in HD result in an increased risk of suicide. Often, individuals have reduced awareness of chorea and of cognitive and emotional impairments.
Mutant huntingtin is expressed throughout the body and associated with abnormalities in peripheral tissues that are directly caused by such expression outside the brain. These abnormalities include muscle atrophy, cardiac failure, impaired glucose tolerance, weight loss, osteoporosis, and testicular atrophy.
Genetics
Everyone has two copies of the huntingtin gene (HTT), which codes for the huntingtin protein (Htt). HTT is also called the HD gene or the IT15 ("interesting transcript 15") gene. Part of this gene is a repeated section called a trinucleotide repeat expansion – a short repeat, which varies in length between individuals and may change length between generations. If the repeat is present in a healthy gene, a dynamic mutation may increase the repeat count and result in a defective gene. When the length of this repeated section reaches a certain threshold, it produces an altered form of the protein, called mutant huntingtin protein (mHtt). The differing functions of these proteins are the cause of pathological changes, which in turn cause the disease symptoms. The Huntington's disease mutation is genetically dominant and almost fully penetrant; mutation of either of a person's HTT alleles causes the disease. It is not inherited according to sex, but the length of the repeated section of the gene, and hence its severity, can be influenced by the sex of the affected parent.
Genetic mutation
HD is one of several trinucleotide repeat disorders, which are caused by the length of a repeated section of a gene exceeding a normal range. The HTT gene is located on the short arm of chromosome 4 at 4p16.3. HTT contains a sequence of three DNA bases—cytosine-adenine-guanine (CAG)—repeated multiple times (i.e. ... CAGCAGCAG ...), known as a trinucleotide repeat. CAG is the three-letter genetic code (codon) for the amino acid glutamine, so a series of them results in the production of a chain of glutamines known as a polyglutamine tract (or polyQ tract); the repeated part of the gene is correspondingly called the polyQ region.
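Counting the number of consecutive CAG codons in a sequenced allele is conceptually a simple string operation. The following Python fragment is a minimal illustrative sketch (the sequence and function name are hypothetical and not part of any clinical pipeline):

```python
import re

def longest_cag_run(sequence: str) -> int:
    """Return the length, in codons, of the longest uninterrupted CAG run."""
    # Find every maximal stretch of consecutive "CAG" triplets.
    runs = re.findall(r"(?:CAG)+", sequence.upper())
    # A run's codon count is its length in bases divided by three.
    return max((len(run) // 3 for run in runs), default=0)

# Toy fragment containing a run of five CAG codons.
fragment = "ATGCAGCAGCAGCAGCAGTTC"
print(longest_cag_run(fragment))  # prints 5
```

In practice, repeat sizing is done with PCR-based fragment analysis rather than naive string matching, but the sketch captures what the reported repeat number means.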
Generally, people have fewer than 36 repeated glutamines in the polyQ region, which results in the production of the cytoplasmic protein huntingtin. However, a sequence of 36 or more glutamines results in the production of a protein with different characteristics. This altered form, called mutant huntingtin (mHtt), increases the decay rate of certain types of neurons. Regions of the brain have differing amounts and reliance on these types of neurons and are affected accordingly. Generally, the number of CAG repeats is related to how much this process is affected, and accounts for about 60% of the variation of the age of the onset of symptoms. The remaining variation is attributed to the environment and other genes that modify the mechanism of HD. About 36 to 39 repeats result in a reduced-penetrance form of the disease, with a much later onset and slower progression of symptoms. In some cases, the onset may be so late that symptoms are never noticed. With very large repeat counts (more than 60), HD onset can occur below the age of 20, known as juvenile HD. Juvenile HD is typically of the Westphal variant that is characterized by slowness of movement, rigidity, and tremors. This accounts for about 7% of HD carriers.
Inheritance
Huntington's disease has autosomal dominant inheritance, meaning that an affected individual typically inherits one copy of the gene with an expanded trinucleotide repeat (the mutant allele) from an affected parent. Since the penetrance of the mutation is very high, those who have a mutated copy of the gene will have the disease. In this type of inheritance pattern, each offspring of an affected individual has a 50% risk of inheriting the mutant allele and so of being affected by the disorder. This probability is sex-independent; sex-dependent or sex-linked traits, by contrast, are carried on the X or Y chromosomes.
Trinucleotide CAG repeats numbering over 28 are unstable during replication, and this instability increases with the number of repeats present. This usually leads to new expansions as generations pass (dynamic mutations) instead of reproducing an exact copy of the trinucleotide repeat. This causes the number of repeats to change in successive generations, such that an unaffected parent with an "intermediate" number of repeats (28–35), or "reduced penetrance" (36–40), may pass on a copy of the gene with an increase in the number of repeats that produces fully penetrant HD. The earlier age of onset and greater severity of disease in successive generations due to increases in the number of repeats is known as genetic anticipation. Instability is greater in spermatogenesis than oogenesis; maternally inherited alleles are usually of a similar repeat length, whereas paternally inherited ones have a higher chance of increasing in length. Rarely is Huntington's disease caused by a new mutation, where neither parent has over 36 CAG repeats.
In the rare situations where both parents have an expanded HD gene, the risk increases to 75%, and when either parent has two expanded copies, the risk is 100% (all children will be affected). Individuals with both genes affected are rare. For some time, HD was thought to be the only disease for which possession of a second mutated gene did not affect symptoms and progression, but it has since been found that it can affect the phenotype and the rate of progression.
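These inheritance probabilities follow from simple enumeration of the four equally likely allele pairings a child can receive. The short Python sketch below is an illustrative toy model only (the function name is hypothetical, and it ignores new mutations and repeat-length instability), with each parent's genotype given as the number of expanded HTT copies:

```python
from itertools import product

def risk_of_expanded_allele(parent1_expanded: int, parent2_expanded: int) -> float:
    """Probability that a child inherits at least one expanded HTT allele."""
    def alleles(n_expanded):
        # Each parent carries two alleles; True marks an expanded copy.
        return [True] * n_expanded + [False] * (2 - n_expanded)

    # One allele is drawn from each parent, with every combination equally likely.
    pairings = list(product(alleles(parent1_expanded), alleles(parent2_expanded)))
    affected = sum(1 for a, b in pairings if a or b)
    return affected / len(pairings)

print(risk_of_expanded_allele(1, 0))  # 0.5  - one heterozygous affected parent
print(risk_of_expanded_allele(1, 1))  # 0.75 - both parents heterozygous
print(risk_of_expanded_allele(2, 0))  # 1.0  - one parent with two expanded copies
```

The three printed values match the 50%, 75%, and 100% risks described above.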
Mechanisms
Huntingtin protein interacts with over 100 other proteins, and appears to have multiple functions. The behavior of the mutated protein (mHtt) is not completely understood, but it is toxic to certain cell types, particularly brain cells. Early damage is most evident in the subcortical basal ganglia, initially in the striatum, but as the disease progresses, other areas of the brain are also affected, including regions of the cerebral cortex. Early symptoms are attributable to functions of the striatum and its cortical connections—namely control over movement, mood, and higher cognitive function. DNA methylation also appears to be changed in HD.
In 2025, scientists affiliated with Harvard and MIT reported findings on why symptoms of the disease start earlier in some people who inherit it and later in others: the repeats of the gene sequence become toxic at around 150 repeats.
Huntingtin function
Htt is expressed in all cells, with the highest concentrations found in the brain and testes, and moderate amounts in the liver, heart, and lungs. Its functions are unclear, but it does interact with proteins involved in transcription, cell signaling, and intracellular transport. In animals genetically modified to exhibit HD, several functions of Htt have been identified. In these animals, Htt is important for embryonic development, as its absence is related to embryonic death. Caspase, an enzyme that plays a role in catalyzing apoptosis, is thought to be activated by the mutated gene through damage to the ubiquitin-proteasome system. Htt also acts as an antiapoptotic agent preventing programmed cell death, and controls the production of brain-derived neurotrophic factor, a protein that protects neurons and regulates their creation during neurogenesis. Htt further facilitates synaptic vesicular transport and synaptic transmission, and controls neuronal gene transcription. If the expression of Htt is increased, brain-cell survival improves and the effects of mHtt are reduced, whereas when the expression of Htt is reduced, the resulting characteristics are more like those seen in the presence of mHtt. Accordingly, the disease is thought not to be caused by inadequate production of Htt, but by a toxic gain-of-function of mHtt in the body.
Cellular changes
The toxic action of mHtt may manifest and produce the HD pathology through multiple cellular changes. In its mutant (polyglutamine expanded) form, the protein is more prone to cleavage that creates shorter fragments containing the polyglutamine expansion. These protein fragments have a propensity to undergo misfolding and aggregation, yielding fibrillar aggregates in which non-native polyglutamine β-strands from multiple proteins are bonded together by hydrogen bonds. These aggregates share the same fundamental cross-beta amyloid architecture seen in other protein deposition diseases. Over time, the aggregates accumulate to form inclusion bodies within cells, ultimately interfering with neuronal function. Inclusion bodies have been found in both the cell nucleus and cytoplasm. Inclusion bodies in cells of the brain are one of the earliest pathological changes, and some experiments have found that they can be toxic for the cell, but other experiments have shown that they may form as part of the body's defense mechanism and help protect cells.
Several pathways by which mHtt may cause cell death have been identified. These include effects on chaperone proteins, which help fold proteins and remove misfolded ones; interactions with caspases, which play a role in the process of removing cells; the toxic effects of glutamine on nerve cells; impairment of energy production within cells; and effects on the expression of genes.
Mutant huntingtin protein has been found to play a key role in mitochondrial dysfunction. The impairment of mitochondrial electron transport can result in higher levels of oxidative stress and release of reactive oxygen species.
Glutamine is known to be excitotoxic when present in large amounts, and can cause damage to numerous cellular structures. Excessive glutamine is not found in HD, but the interactions of the altered huntingtin protein with numerous proteins in neurons lead to an increased vulnerability to glutamine. The increased vulnerability is thought to result in excitotoxic effects from normal glutamine levels.
Macroscopic changes
Initially, damage to the brain is regionally specific with the dorsal striatum in the subcortical basal ganglia being primarily affected, followed later by cortical involvement in all areas. Other areas of the basal ganglia affected include the substantia nigra; cortical involvement includes cortical layers 3, 5, and 6; also evident is involvement of the hippocampus, Purkinje cells in the cerebellum, lateral tuberal nuclei of the hypothalamus and parts of the thalamus. These areas are affected according to their structure and the types of neurons they contain, reducing in size as they lose cells. Striatal medium spiny neurons are the most vulnerable, particularly ones with projections towards the external globus pallidus, with interneurons and spiny cells projecting to the internal globus pallidus being less affected. HD also causes an abnormal increase in astrocytes and activation of the brain's immune cells, microglia.
The basal ganglia play a key role in movement and behavior control. Their functions are not fully understood, but theories propose that they are part of the cognitive executive system and the motor circuit. The basal ganglia ordinarily inhibit a large number of circuits that generate specific movements. To initiate a particular movement, the cerebral cortex sends a signal to the basal ganglia that causes the inhibition to be released. Damage to the basal ganglia can cause the release or reinstatement of the inhibitions to be erratic and uncontrolled, resulting in an awkward start to a motion, in motions being unintentionally initiated, or in a motion being halted before or beyond its intended completion. The accumulating damage to this area causes the characteristic erratic movements associated with HD, known as chorea, a dyskinesia. Because of the basal ganglia's inability to inhibit movements, individuals affected by HD inevitably experience a reduced ability to produce speech and to swallow foods and liquids (dysphagia).
Transcriptional dysregulation
CREB-binding protein (CBP), a transcriptional coregulator, is essential for cell function because, as a coactivator at a significant number of promoters, it activates the transcription of genes for survival pathways. CBP contains an acetyltransferase domain to which HTT binds through its polyglutamine-containing domain. Autopsied brains of those who had Huntington's disease have also been found to have markedly reduced amounts of CBP. In addition, when CBP is overexpressed, polyglutamine-induced death is diminished, further demonstrating that CBP plays an important role in Huntington's disease and in neurons generally.
Diagnosis
Diagnosis of the onset of HD can be made following the appearance of physical symptoms specific to the disease. Genetic testing can be used to confirm a physical diagnosis if no family history of HD exists. Even before the onset of symptoms, genetic testing can confirm whether an individual or embryo carries an expanded copy of the trinucleotide repeat (CAG) in the HTT gene that causes the disease. Genetic counseling is available to provide advice and guidance throughout the testing procedure and on the implications of a confirmed diagnosis. These implications include the impact on an individual's psychology, career, family-planning decisions, relatives, and relationships. Despite the availability of pre-symptomatic testing, only 5% of those at risk of inheriting HD choose to be tested.
Clinical
A physical examination, sometimes combined with a psychological examination, can determine whether the onset of the disease has begun. Excessive unintentional movements of any part of the body are often the reason for seeking medical consultation. If these are abrupt and have random timing and distribution, they suggest a diagnosis of HD. Cognitive or behavioral symptoms are rarely the first symptoms diagnosed; they are usually only recognized in hindsight or when they develop further. How far the disease has progressed can be measured using the unified Huntington's disease rating scale, which provides an overall rating system based on motor, behavioral, cognitive, and functional assessments. Medical imaging, such as a CT or MRI scan, can show atrophy of the caudate nuclei early in the disease, but these changes are not, by themselves, diagnostic of HD. Cerebral atrophy can be seen in the advanced stages of the disease. Functional neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), can show changes in brain activity before the onset of physical symptoms, but they are experimental tools and are not used clinically.
Predictive genetic testing
Because HD follows an autosomal dominant pattern of inheritance, a strong motivation exists for individuals who are at risk of inheriting it to seek a diagnosis. The genetic test for HD consists of a blood test that counts the number of CAG repeats in each of the HTT alleles. Cutoffs are given as follows (a short sketch after the list illustrates the same thresholds):
At 40 or more CAG repeats, a full penetrance allele (FPA) exists. A "positive test" or "positive result" generally refers to this case. A positive result is not considered a diagnosis, since it may be obtained decades before the symptoms begin. However, a negative test means that the individual does not carry the expanded copy of the gene and will not develop HD. The test will tell a person who originally had a 50% chance of inheriting the disease whether their risk rises to 100% or is eliminated. Persons who test positive for the disease will develop HD sometime within their lifetimes, provided they live long enough for the disease to appear.
At 36 to 39 repeats, an incomplete or reduced penetrance allele (RPA) exists, which may cause symptoms, usually later in adult life. The maximum risk is 60% that a person with an RPA will be symptomatic at age 65, and 70% at age 75.
At 27 to 35 repeats, an intermediate allele (IA), or large normal allele, is present; it is not associated with symptomatic disease in the tested individual, but may expand upon further inheritance to give symptoms in offspring.
With 26 or fewer repeats, the result is not associated with HD.
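Expressed as code, the reporting categories above amount to a simple threshold classification. This Python sketch is illustrative only (the function name is hypothetical; the category strings are taken from the list above):

```python
def classify_cag_repeats(cag_count: int) -> str:
    """Map a CAG repeat count onto the reporting categories described above."""
    if cag_count >= 40:
        return "full penetrance allele (FPA)"
    if cag_count >= 36:
        return "reduced penetrance allele (RPA)"
    if cag_count >= 27:
        return "intermediate allele (IA)"
    return "normal allele (not associated with HD)"

for count in (20, 30, 38, 45):
    print(count, "->", classify_cag_repeats(count))
```

The boundaries encode the clinical cutoffs exactly: 26 or fewer repeats is normal, 27–35 intermediate, 36–39 reduced penetrance, and 40 or more full penetrance.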
Testing before the onset of symptoms is a life-changing event and a very personal decision. The main reason given for choosing to test for HD is to aid in career and family decisions. Predictive testing for Huntington's disease has been available via linkage analysis (which requires testing multiple family members) since 1986 and via direct mutation analysis since 1993. At that time, surveys indicated that 50–70% of at-risk individuals would have been interested in receiving testing, but since predictive testing became available, far fewer have chosen to be tested. Over 95% of individuals at risk of inheriting HD do not proceed with testing, mostly because the disease has no treatment. A key issue is the anxiety an individual experiences about not knowing whether they will eventually develop HD, compared to the impact of a positive result. Irrespective of the result, stress levels are lower two years after being tested, but the risk of suicide is increased after a positive test result. Individuals found not to have inherited the disorder may experience survivor guilt about family members who are affected. Other factors taken into account when considering testing include the possibility of discrimination and the implications of a positive result, which usually means a parent has an affected gene and that the individual's siblings will be at risk of inheriting it. In one study, genetic discrimination was found in 46% of individuals at risk for Huntington's disease. It occurred at higher rates within personal relationships than in health insurance or employment relations. Genetic counseling in HD can provide information, advice, and support for initial decision-making, and then, if testing is chosen, throughout all stages of the testing process. Because of the implications of this test, patients who wish to undergo testing must complete three counseling sessions that provide information about Huntington's disease.
Counseling and guidelines on the use of genetic testing for HD have become models for other genetic disorders, such as autosomal dominant cerebellar ataxia. Presymptomatic testing for HD has also influenced testing for other illnesses with genetic variants, such as polycystic kidney disease, familial Alzheimer's disease, and breast cancer. The European Molecular Genetics Quality Network has published a yearly external quality assessment scheme for molecular genetic testing for this disease and has developed best-practice guidelines for genetic testing for HD to assist in testing and reporting of results.
Preimplantation genetic diagnosis
Embryos produced using in vitro fertilization may be genetically tested for HD using preimplantation genetic diagnosis. This technique, where one or two cells are extracted from a typically 4- to 8-cell embryo and then tested for the genetic abnormality, can then be used to ensure embryos affected with HD genes are not implanted, so any offspring will not inherit the disease. Some forms of preimplantation genetic diagnosis—non-disclosure or exclusion testing—allow at-risk people to have HD-free offspring without revealing their own parental genotype, giving no information about whether they themselves are destined to develop HD. In exclusion testing, the embryo's DNA is compared with that of the parents and grandparents to avoid inheritance of the chromosomal region containing the HD gene from the affected grandparent. In nondisclosure testing, only disease-free embryos are replaced in the uterus while the parental genotype and hence parental risk for HD are never disclosed.
Prenatal testing
Obtaining a prenatal diagnosis for an embryo or fetus in the womb is also possible, using fetal genetic material acquired through chorionic villus sampling. An amniocentesis can be performed if the pregnancy is further along, at 14–18 weeks. This procedure looks at the amniotic fluid surrounding the baby for indicators of the HD mutation. This, too, can be paired with exclusion testing to avoid disclosure of parental genotype. Prenatal testing can be done when a parent has been diagnosed with HD, when they have had genetic testing showing the expansion of the HTT gene, or when they have a 50% chance of inheriting the disease. The parents can be counseled on their options, which include termination of pregnancy, and on the difficulties faced by a child with the identified gene.
In addition, in at-risk pregnancies due to an affected male partner, noninvasive prenatal diagnosis can be performed by analyzing cell-free fetal DNA in a blood sample taken from the mother (via venipuncture) between six and 12 weeks of pregnancy. It has no procedure-related risk of miscarriage.
Differential diagnosis
About 99% of HD diagnoses based on the typical symptoms and a family history of the disease are confirmed by genetic testing to have the expanded trinucleotide repeat that causes HD. Most of the remaining cases are called HD-like (HDL) syndromes. The cause of most HDL diseases is unknown, but those with known causes are due to mutations in the prion protein gene (HDL1), the junctophilin 3 gene (HDL2), a recessively inherited unknown gene (HDL3—only found in two families and poorly understood), and the gene encoding the TATA box-binding protein (SCA17, sometimes called HDL4). Other autosomal dominant diseases that can be misdiagnosed as HD are dentatorubral-pallidoluysian atrophy and neuroferritinopathy. Also, some autosomal recessive disorders resemble sporadic cases of HD, including chorea acanthocytosis and pantothenate kinase-associated neurodegeneration. One X-linked disorder of this type is McLeod syndrome.
Management
Treatments are available to reduce the severity of some HD symptoms. For many of these treatments, the evidence confirming their effectiveness in treating symptoms of HD specifically is incomplete. As the disease progresses, the ability to care for oneself declines, and carefully managed multidisciplinary caregiving becomes increasingly necessary. Although relatively few studies of exercises and therapies have been shown to help rehabilitate cognitive symptoms of HD, some evidence shows the usefulness of physical therapy, occupational therapy, and speech therapy.
Therapy
Weight loss and problems in eating due to dysphagia and other muscle discoordination are common, making nutrition management increasingly important as the disease advances. Thickening agents can be added to liquids, as thicker fluids are easier and safer to swallow. Reminding the affected person to eat slowly and to take smaller pieces of food into the mouth may also be of use to prevent choking. If eating becomes too hazardous or uncomfortable, the option of using a percutaneous endoscopic gastrostomy is available. This feeding tube, permanently attached through the abdomen into the stomach, reduces the risk of aspirating food and provides better nutritional management. Assessment and management by speech-language pathologists with experience in Huntington's disease is recommended.
People with Huntington's disease may see a physical therapist for noninvasive and nonmedication-based ways of managing the physical symptoms. Physical therapists may implement fall-risk assessment and prevention, as well as strengthening, stretching, and cardiovascular exercises. Walking aids may be prescribed as appropriate. Physical therapists also prescribe breathing exercises and airway-clearance techniques if respiratory problems develop. Consensus guidelines on physiotherapy in Huntington's disease have been produced by the European HD Network. The goal of early rehabilitation interventions is the prevention of loss of function. Participation in rehabilitation programs during the early to middle stage of the disease may be beneficial, as it translates into long-term maintenance of motor and functional performance. Rehabilitation during the late stage aims to compensate for motor and functional losses. For long-term independent management, the therapist may develop home exercise programs for appropriate people.
Additionally, an increasing number of people with HD are turning to palliative care, which aims to improve quality of life through the treatment of the symptoms and stress of serious illness, in addition to their other treatments.
Medications
Tetrabenazine was approved for the treatment of chorea in Huntington's disease in 2000 in the EU and in 2008 in the US. Although other drugs had been used "off label", tetrabenazine was the first approved treatment for Huntington's disease in the US. The compound has been known since the 1950s. An alternative to tetrabenazine is amantadine, but the evidence for its safety and efficacy is limited.
Other drugs that help to reduce chorea include antipsychotics and benzodiazepines. Hypokinesia and rigidity, especially in juvenile cases, can be treated with antiparkinsonian drugs, and myoclonic hyperkinesia can be treated with valproic acid. Tentative evidence has found ethyl eicosapentaenoic acid to improve motor symptoms at one year. In 2017, deutetrabenazine, a deuterated (isotopically heavier) form of tetrabenazine for the treatment of chorea in HD, was approved by the FDA. It is marketed as Austedo.
Psychiatric symptoms can be treated with medications similar to those used in the general population. Selective serotonin reuptake inhibitors and mirtazapine have been recommended for depression, while atypical antipsychotics are recommended for psychosis and behavioral problems. Specialist neuropsychiatric input is recommended since people may require long-term treatment with multiple medications in combination.
Plant-based medications
A number of alternative therapies using plant-based products have been tried in ayurvedic medicine, although none has shown good evidence of efficacy. A recent study showed that stromal processing peptidase (SPP), an enzyme found in plant chloroplasts, prevented the aggregation of proteins associated with Huntington's disease; however, repeat studies and clinical validation are needed to confirm any true therapeutic potential.
Education
The families of individuals who have inherited or are at risk of inheriting HD, and society at large, have generations of experience of HD but may be unaware of recent breakthroughs in understanding the disease and of the availability of genetic testing. Genetic counseling benefits these individuals by updating their knowledge, seeking to dispel any unfounded beliefs that they may have, and helping them consider their future options and plans. The Patient Education Program for Huntington's Disease has been created to help educate family members, caretakers, and those diagnosed with Huntington's disease. Also covered is information concerning family-planning choices, care management, and other considerations.
Prognosis
The length of the trinucleotide repeat accounts for 60% of the variation of the age of symptoms onset and their rate of progress. A longer repeat results in an earlier age of onset and a faster progression of symptoms. Individuals with more than sixty repeats often develop the disease before age 20, while those with fewer than 40 repeats may remain asymptomatic. The remaining variation is due to environmental factors and other genes that influence the mechanism of the disease.
Life expectancy in HD is generally around 10 to 30 years following the onset of visible symptoms; in juvenile Huntington's disease, it is about 10 years after onset. Most life-threatening complications result from impaired muscle coordination and, to a lesser extent, from behavioral changes induced by declining cognitive function. The largest risk is pneumonia, which causes death in one third of those with HD. As the ability to synchronize movements deteriorates, difficulty clearing the lungs and an increased risk of aspirating food or drink both increase the risk of contracting pneumonia. The second-greatest risk is heart disease, which causes almost a quarter of fatalities of those with HD. Suicide is the third-greatest cause of fatalities, with 7.3% of those with HD taking their own lives and up to 27% attempting to do so. To what extent suicidal thoughts are influenced by behavioral symptoms is unclear, as they may signify a desire to avoid the later stages of the disease; suicide risk is greatest before diagnosis is made and in the middle stages of the disease. Other associated risks include choking due to the inability to swallow, physical injury from falls, and malnutrition.
Epidemiology
The late onset of Huntington's disease means it does not usually affect reproduction. The worldwide prevalence of HD is 5–10 cases per 100,000 persons, but varies greatly geographically as a result of ethnicity, local migration, and past immigration patterns. Prevalence is similar for men and women. The rate of occurrence is highest in peoples of Western European descent, averaging around seven per 100,000 people, and is lower in the rest of the world; e.g., one per million people of Asian and African descent. A 2013 epidemiological study of the prevalence of Huntington's disease in the UK between 1990 and 2010 found that the average prevalence for the UK was 12.3 per 100,000. Additionally, some localized areas have a much higher prevalence than their regional average. One of the highest incidences is in the isolated populations of the Lake Maracaibo region of Venezuela, where HD affects up to 700 per 100,000 persons. Other areas of high localization have been found in Tasmania and specific regions of Scotland, Wales, and Sweden. Increased prevalence in some cases occurs due to a local founder effect, a historical migration of carriers into an area of geographic isolation. Some of these carriers have been traced back hundreds of years using genealogical studies. Genetic haplotypes can also give clues for the geographic variations of prevalence. Iceland, by contrast, has a rather low prevalence of 1 per 100,000, despite the fact that Icelanders are descended from the early Germanic tribes of Scandinavia that also gave rise to the Swedes; all cases but one, going back nearly two centuries, derive from the offspring of a couple living early in the 19th century. Finland, as well, has a low incidence of only 2.2 per 100,000 people.
Until the discovery of a genetic test, statistics could only include clinical diagnosis based on physical symptoms and a family history of HD, excluding those who died of other causes before diagnosis. These cases can now be included in statistics; and, as the test becomes more widely available, estimates of the prevalence and incidence of the disorder are likely to increase.
History
In centuries past, various kinds of chorea were at times called by names such as Saint Vitus' dance, with little or no understanding of their cause or type in each case.
The first definite mention of HD was in a letter by Charles Oscar Waters (1816–1892), published in the first edition of Robley Dunglison's Practice of Medicine in 1842. Waters described "a form of chorea, vulgarly called magrums", including accurate descriptions of the chorea, its progression, and the strong heredity of the disease. In 1846, Charles Rollin Gorman (1817–1879) observed how higher prevalence seemed to occur in localized regions. Independently of Gorman and Waters, both students of Dunglison at Jefferson Medical College in Philadelphia, the Norwegian physician Johan Christian Lund (1830–1906) also produced an early description, in 1860. He specifically noted that in Setesdalen, a secluded mountain valley in Norway, the high prevalence of dementia was associated with a pattern of jerking movement disorders that ran in families.
The first thorough description of the disease was by George Huntington in 1872. Examining the combined medical history of several generations of a family exhibiting similar symptoms, he realized their conditions must be linked; he presented his detailed and accurate definition of the disease as his first paper. Huntington described the exact pattern of inheritance of autosomal dominant disease years before the rediscovery by scientists of Mendelian inheritance.
Sir William Osler was interested in the disorder and chorea in general, and was impressed with Huntington's paper, stating, "In the history of medicine, there are few instances in which a disease has been more accurately, more graphically or more briefly described." Osler's continued interest in HD, combined with his influence in the field of medicine, helped to rapidly spread awareness and knowledge of the disorder throughout the medical community. Great interest was shown by scientists in Europe, including Louis Théophile Joseph Landouzy, Désiré-Magloire Bourneville, Camillo Golgi, and Joseph Jules Dejerine, and until the end of the century, much of the research into HD was European in origin. By the end of the 19th century, research and reports on HD had been published in many countries and the disease was recognized as a worldwide condition.
During the rediscovery of Mendelian inheritance at the turn of the 20th century, HD was used tentatively as an example of autosomal dominant inheritance. English biologist William Bateson used the pedigrees of affected families to establish that HD had an autosomal dominant inheritance pattern. The strong inheritance pattern prompted several researchers, including Smith Ely Jelliffe, to attempt to trace and connect family members of previous studies. Jelliffe collected information from across New York and published several articles regarding the genealogy of HD in New England. Jelliffe's research roused the interest of his college friend, Charles Davenport, who commissioned Elizabeth Muncey to produce the first field study on the East Coast of the United States of families with HD and to construct their pedigrees. Davenport used this information to document the variable age of onset and range of symptoms of HD; he claimed that most cases of HD in the US could be traced back to a handful of individuals. This research was further embellished in 1932 by P. R. Vessie, who popularized the idea that three brothers who left England in 1630 bound for Boston were the progenitors of HD in the US. The claim that the earliest progenitors had been identified, together with the eugenic bias of Muncey's, Davenport's, and Vessie's work, contributed to misunderstandings and prejudice about HD. Muncey and Davenport also popularized the idea that in the past, some with HD may have been thought to be possessed by spirits or victims of witchcraft, and were sometimes shunned or exiled by society. This idea has not been proven. Researchers have found contrary evidence; for instance, the community of the family studied by George Huntington openly accommodated those who exhibited symptoms of HD.
The search for the cause of this condition was enhanced considerably in 1968, when the Hereditary Disease Foundation (HDF) was created by Milton Wexler, a psychoanalyst based in Los Angeles, California, whose wife Leonore Sabin had been diagnosed earlier that year with Huntington's disease. The three brothers of Wexler's wife also had this disease.
The foundation was involved in the recruitment of more than 100 scientists in the US-Venezuela Huntington's Disease Collaborative Project, which, over a 10-year period from 1979, worked to locate the genetic cause. This was achieved in 1983 when a causal gene was approximately located, and in 1993, the gene was precisely located at chromosome 4 (4p16.3). The study had focused on the populations of two isolated Venezuelan villages, Barranquitas and Lagunetas, where there was an unusually high prevalence of HD, and involved over 18,000 people, mostly from a single extended family; it made HD the first autosomal disease locus found using genetic linkage analysis. Among other innovations, the project developed DNA-marking methods which were an important step in making the Human Genome Project possible.
At the same time, key discoveries concerning the mechanisms of the disorder were being made, including the findings by Anita Harding's research group on the effects of the gene's length.
Modelling the disease in various types of animals, such as the transgenic mouse developed in 1996, enabled larger-scale experiments. Because these animals have faster metabolisms and much shorter lifespans than humans, results from experiments are available sooner, speeding research. The 1997 discovery that mHtt fragments misfold led to the discovery of the nuclear inclusions they cause. These advances have led to increasingly extensive research into the proteins involved with the disease, potential drug treatments, care methods, and the gene itself.
The networks of care and support that had developed in Venezuela and Colombia during the research projects there in the 1970s through 2000s were eventually eroded by various forces, such as the ongoing crisis in Venezuela and the death of a lead researcher in Colombia (Jorge Daza Barriga). Doctors are working toward rekindling these networks because the people who have contributed to the science of Huntington's disease by participating in these studies deserve adequate follow-up care; societies elsewhere in the world that benefit from the scientific advances thus achieved owe at least that much to those who participated in the research.
The condition was formerly called Huntington's chorea, but this term has been replaced by Huntington's disease because not all patients develop chorea and due to the importance of cognitive and behavioral problems.
Society and culture
Ethics
Genetic testing for Huntington's disease has raised several ethical issues. The issues for genetic testing include defining how mature an individual should be before being considered eligible for testing, ensuring the confidentiality of results, and whether companies should be allowed to use test results for decisions on employment, life insurance or other financial matters. There was controversy when Charles Davenport proposed in 1910 that compulsory sterilization and immigration control be used for people with certain diseases, including HD, as part of the eugenics movement. In vitro fertilization has some issues regarding its use of embryos. Some HD research has ethical issues due to its use of animal testing and embryonic stem cells.
The development of an accurate diagnostic test for Huntington's disease has caused social, legal, and ethical concerns over access to and use of a person's results.
Many guidelines and testing procedures have strict procedures for disclosure and confidentiality to allow individuals to decide when and how to receive their results and also to whom the results are made available. Insurance companies and businesses are faced with the question of whether to use genetic test results when assessing an individual, such as for life insurance or employment. The United Kingdom's insurance companies agreed with the Department of Health and Social Care that until 2017 customers would not need to disclose predictive genetic tests to them, but this agreement explicitly excluded the government-approved test for Huntington's when writing policies with a value over £500,000. As with other untreatable genetic conditions with a later onset, it is ethically questionable to perform presymptomatic testing on a child or adolescent since there would be no medical benefit for that individual. There is consensus for testing only individuals who are considered cognitively mature, although there is a counter-argument that parents have a right to make the decision on their child's behalf. With the lack of effective treatment, testing a person under legal age who is not judged to be competent is considered unethical in most cases.
There are ethical concerns related to prenatal genetic testing or preimplantation genetic diagnosis to ensure a child is not born with a given disease. For example, prenatal testing raises the issue of selective abortion, a choice considered unacceptable by some. As it is a dominant disease, there are difficulties in situations in which a parent does not want to know his or her own diagnosis. This would require parts of the process to be kept secret from the parent.
Support organizations
In 1968, after experiencing HD in his wife's family, Dr. Milton Wexler was inspired to start the Hereditary Disease Foundation (HDF), with the aim of curing genetic illnesses by coordinating and supporting research. The foundation and Wexler's daughter, Nancy Wexler, were key parts of the research team in Venezuela which discovered the HD gene.
At roughly the same time as the HDF formed, Marjorie Guthrie helped to found the Committee to Combat Huntington's Disease (now the Huntington's Disease Society of America), after her husband, folk singer-songwriter Woody Guthrie, died from complications of HD.
Since then, support and research organizations have formed in many countries around the world and have helped to increase public awareness of HD. A number of these collaborate in umbrella organizations, like the International Huntington Association and the European HD network. Many support organizations hold an annual HD awareness event, some of which have been endorsed by their respective governments. For example, 6 June is designated "National Huntington's Disease Awareness Day" by the US Senate. Many organizations exist to support and inform those affected by HD, including the Huntington's Disease Association in the UK. The largest funder of HD research is the Cure Huntington's Disease Initiative Foundation (CHDI).
Research directions
Research into the mechanism of HD is focused on identifying the functioning of Htt, how mHtt differs or interferes with it, and the brain pathology that the disease produces. Research is conducted using in vitro methods, genetically modified animals (also called transgenic animal models), and human volunteers. Animal models are critical for understanding the fundamental mechanisms causing the disease, and for supporting the early stages of drug development. The identification of the causative gene has enabled the development of many genetically modified organisms including nematodes (roundworms), Drosophila fruit flies, and genetically modified mammals including mice, rats, sheep, pigs and monkeys that express mutant huntingtin and develop progressive neurodegeneration and HD-like symptoms.
Research is being conducted using many approaches to either prevent Huntington's disease or slow its progression. Disease-modifying strategies can be broadly grouped into three categories: reducing the level of the mutant huntingtin protein (including gene splicing and gene silencing); approaches aimed at improving neuronal survival by reducing the harm caused by the protein to specific cellular pathways and mechanisms (including protein homeostasis and histone deacetylase inhibition); and strategies to replace lost neurons. In addition, novel therapies to improve brain functioning are under development; these seek to produce symptomatic rather than disease-modifying therapies, and include phosphodiesterase inhibitors.
The CHDI Foundation funds many research initiatives, which have produced many publications. The CHDI Foundation is the largest funder of Huntington's disease research globally and aims to find and develop drugs that will slow the progression of HD. CHDI was formerly known as the High Q Foundation. In 2006, it spent $50 million on Huntington's disease research. CHDI collaborates with many academic and commercial laboratories globally and engages in oversight and management of research projects as well as funding.
Reducing huntingtin production
Gene silencing aims to reduce the production of the mutant protein, since HD is caused by a single dominant gene encoding a toxic protein. Gene silencing experiments in mouse models have shown that when the expression of mHtt is reduced, symptoms improve. The safety of RNA interference and antisense oligonucleotide (ASO) methods of gene silencing has been demonstrated in mice and the larger primate macaque brain. Allele-specific silencing attempts to silence mutant Htt while leaving wild-type Htt untouched. One way of accomplishing this is to identify polymorphisms present on only one allele and produce gene silencing drugs that target polymorphisms in only the mutant allele. The first gene silencing trial involving humans with HD began in 2015, testing the safety of IONIS-HTTRx, produced by Ionis Pharmaceuticals and led by UCL Institute of Neurology. Mutant huntingtin was detected and quantified for the first time in cerebrospinal fluid from Huntington's disease mutation-carriers in 2015 using a novel "single-molecule counting" immunoassay, providing a direct way to assess whether huntingtin-lowering treatments are achieving the desired effect. A phase 3 trial of this compound, renamed tominersen and sponsored by Roche Pharmaceuticals, began in 2019 but was halted in 2021 after the safety monitoring board concluded that the risk-benefit balance was unfavourable. A huntingtin-lowering gene therapy trial run by Uniqure began in 2019, and several trials of orally administered huntingtin-lowering splicing modulator compounds have been announced. Gene splicing techniques are being investigated to try to repair a genome with the erroneous gene that causes HD, using tools such as CRISPR/Cas9. PTC Therapeutics is evaluating small molecules that induce poison exon inclusion in the HTT transcript as a therapeutic strategy to lower HTT expression.
Increasing huntingtin clearance
Another strategy to reduce the level of mutant huntingtin is to increase the rate at which cells are able to clear it. As mHtt (and many other protein aggregates) are degraded by autophagy, increasing the rate of autophagy has the potential to reduce levels of mHtt and thereby ameliorate disease. Pharmacological and genetic inducers of autophagy have been tested in a variety of Huntington's disease models; many have been shown to reduce mHtt levels and decrease toxicity.
Improving cell survival
Among the approaches aimed at improving cell survival in the presence of mutant huntingtin are correction of transcriptional regulation using histone deacetylase inhibitors, modulating aggregation of huntingtin, improving metabolism and mitochondrial function and restoring function of synapses.
Neuronal replacement
Stem-cell therapy aims to replace damaged neurons by transplanting stem cells into affected regions of the brain. Experiments in animal models (rats and mice only) have yielded positive results.
Whatever their future therapeutic potential, stem cells are already a valuable tool for studying Huntington's disease in the laboratory.
Ferroptosis
Ferroptosis is a form of regulated cell death characterized by the iron-dependent accumulation of lipid hydroperoxides to lethal levels. ALOX5-mediated ferroptosis acts as a cell death pathway upon oxidative stress in Huntington's disease. Inhibitors of ferroptosis are protective in models of degenerative brain disorders, including Parkinson's, Huntington's, and Alzheimer's diseases.
Clinical trials
In 2020, there were 197 clinical trials related to varied therapies and biomarkers for Huntington's disease listed as either underway, recruiting or newly completed. Compounds trialled that have failed to prevent or slow the progression of Huntington's disease include remacemide, coenzyme Q10, riluzole, creatine, minocycline, ethyl-EPA, phenylbutyrate and dimebon.
| Biology and health sciences | Specific diseases | Health |
47895 | https://en.wikipedia.org/wiki/Stochastic%20process | Stochastic process | In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables in a probability space, where the index of the family often has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.
Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process, used by Louis Bachelier to study price changes on the Paris Bourse, and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time. These two stochastic processes are considered the most important and central in the theory of stochastic processes, and were invented repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries.
The term random function is also used to refer to a stochastic or random process, because a stochastic process can also be interpreted as a random element in a function space. The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables. But often these two terms are used when the random variables are indexed by the integers or an interval of the real line. If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead. The values of a stochastic process are not always numbers and can be vectors or other mathematical objects.
Based on their mathematical properties, stochastic processes can be grouped into various categories, which include random walks, martingales, Markov processes, Lévy processes, Gaussian processes, random fields, renewal processes, and branching processes. The study of stochastic processes uses mathematical knowledge and techniques from probability, calculus, linear algebra, set theory, and topology as well as branches of mathematical analysis such as real analysis, measure theory, Fourier analysis, and functional analysis. The theory of stochastic processes is considered to be an important contribution to mathematics and it continues to be an active topic of research for both theoretical reasons and applications.
Introduction
A stochastic or random process can be defined as a collection of random variables that is indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set. The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time. Each random variable in the collection takes values from the same mathematical space known as the state space. This state space can be, for example, the integers, the real line or $n$-dimensional Euclidean space. An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time. A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization.
Classifications
A stochastic process can be classified in different ways, for example, by its state space, its index set, or the dependence among the random variables. One common way of classification is by the cardinality of the index set and the state space.
When interpreted as time, if the index set of a stochastic process has a finite or countable number of elements, such as a finite set of numbers, the set of integers, or the natural numbers, then the stochastic process is said to be in discrete time. If the index set is some interval of the real line, then time is said to be continuous. The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes. Discrete-time stochastic processes are considered easier to study because continuous-time processes require more advanced mathematical techniques and knowledge, particularly due to the index set being uncountable. If the index set is the integers, or some subset of them, then the stochastic process can also be called a random sequence.
If the state space is the integers or natural numbers, then the stochastic process is called a discrete or integer-valued stochastic process. If the state space is the real line, then the stochastic process is referred to as a real-valued stochastic process or a process with continuous state space. If the state space is $n$-dimensional Euclidean space, then the stochastic process is called an $n$-dimensional vector process or $n$-vector process.
Etymology
The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence. In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics". This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz who in 1917 wrote in German the word stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob. For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin, though the German term had been used earlier, for example, by Andrei Kolmogorov in 1931.
According to the Oxford English Dictionary, early occurrences of the word random in English with its current meaning, which relates to chance or luck, date back to the 16th century, while earlier recorded usages started in the 14th century as a noun meaning "impetuosity, great speed, force, or violence (in riding, running, striking, etc.)". The word itself comes from a Middle French word meaning "speed, haste", and it is probably derived from a French verb meaning "to run" or "to gallop". The first written appearance of the term random process pre-dates stochastic process, which the Oxford English Dictionary also gives as a synonym, and was used in an article by Francis Edgeworth published in 1888.
Terminology
The definition of a stochastic process varies, but a stochastic process is traditionally defined as a collection of random variables indexed by some set. The terms random process and stochastic process are considered synonyms and are used interchangeably, without the index set being precisely specified. Both "collection" and "family" are used, while instead of "index set", sometimes the terms "parameter set" or "parameter space" are used.
The term random function is also used to refer to a stochastic or random process, though sometimes it is only used when the stochastic process takes real values. This term is also used when the index sets are mathematical spaces other than the real line, while the terms stochastic process and random process are usually used when the index set is interpreted as time, and other terms are used such as random field when the index set is $n$-dimensional Euclidean space or a manifold.
Notation
A stochastic process can be denoted, among other ways, by $\{X(t)\}_{t \in T}$, $\{X_t\}_{t \in T}$, $\{X_t\}$, or simply as $X$. Some authors mistakenly write $X(t)$ even though it is an abuse of function notation. For example, $X(t)$ or $X_t$ are used to refer to the random variable with the index $t$, and not the entire stochastic process. If the index set is $T = [0, \infty)$, then one can write, for example, $(X_t, t \geq 0)$ to denote the stochastic process.
Examples
Bernoulli process
One of the simplest stochastic processes is the Bernoulli process, which is a sequence of independent and identically distributed (iid) random variables, where each random variable takes either the value one or zero, say one with probability $p$ and zero with probability $1-p$. This process can be linked to an idealisation of repeatedly flipping a coin, where the probability of obtaining a head is taken to be $p$ and its value is one, while the value of a tail is zero. In other words, a Bernoulli process is a sequence of iid Bernoulli random variables, where each idealised coin flip is an example of a Bernoulli trial.
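As a concrete illustration, here is a minimal Python sketch (the function name and parameters are ours, purely illustrative) that draws one sample path of a Bernoulli process using only the standard library:

```python
import random

def bernoulli_process(p, n, seed=None):
    """Return n iid Bernoulli(p) draws: 1 with probability p, else 0."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# One realization of ten idealised coin flips with a fair coin (p = 0.5).
print(bernoulli_process(p=0.5, n=10, seed=42))
```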
Random walk
Random walks are stochastic processes that are usually defined as sums of iid random variables or random vectors in Euclidean space, so they are processes that change in discrete time. But some also use the term to refer to processes that change in continuous time, particularly the Wiener process used in financial models, which has led to some confusion, resulting in its criticism. There are various other types of random walks, defined so their state spaces can be other mathematical objects, such as lattices and groups, and in general they are highly studied and have many applications in different disciplines.
A classic example of a random walk is known as the simple random walk, which is a stochastic process in discrete time with the integers as the state space, and is based on a Bernoulli process, where each Bernoulli variable takes either the value positive one or negative one. In other words, the simple random walk takes place on the integers, and its value increases by one with probability, say, $p$, or decreases by one with probability $1-p$, so the index set of this random walk is the natural numbers, while its state space is the integers. If $p = 0.5$, this random walk is called a symmetric random walk.
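Building on the Bernoulli sketch above, a simple random walk can be simulated by accumulating plus-or-minus-one steps; this short Python sketch (illustrative names, not a library API) does so:

```python
import random

def simple_random_walk(p, n, seed=None):
    """Return the path of an n-step simple random walk on the integers:
    each step is +1 with probability p and -1 with probability 1 - p."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(n):
        position += 1 if rng.random() < p else -1
        path.append(position)
    return path

# A symmetric random walk corresponds to p = 0.5.
print(simple_random_walk(p=0.5, n=10, seed=1))
```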
Wiener process
The Wiener process is a stochastic process with stationary and independent increments that are normally distributed based on the size of the increments. The Wiener process is named after Norbert Wiener, who proved its mathematical existence, but the process is also called the Brownian motion process or just Brownian motion due to its historical connection as a model for Brownian movement in liquids.
Playing a central role in the theory of probability, the Wiener process is often considered the most important and studied stochastic process, with connections to other stochastic processes. Its index set and state space are the non-negative numbers and real numbers, respectively, so it has both continuous index set and state space. But the process can be defined more generally so its state space can be $n$-dimensional Euclidean space. If the mean of any increment is zero, then the resulting Wiener or Brownian motion process is said to have zero drift. If the mean of the increment for any two points in time is equal to the time difference multiplied by some constant $\mu$, which is a real number, then the resulting stochastic process is said to have drift $\mu$.
Almost surely, a sample path of a Wiener process is continuous everywhere but nowhere differentiable. It can be considered as a continuous version of the simple random walk. The process arises as the mathematical limit of other stochastic processes such as certain random walks rescaled, which is the subject of Donsker's theorem or invariance principle, also known as the functional central limit theorem.
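To make the connection to rescaled random walks concrete, the following Python sketch (an illustrative discretisation, not an exact construction) samples an approximate Wiener path by summing independent Gaussian increments, which reflects the process's stationary, independent, normally distributed increments:

```python
import math
import random

def wiener_path(total_time, n_steps, seed=None):
    """Approximate a standard Wiener process on [0, total_time] by
    summing independent N(0, dt) increments, dt = total_time / n_steps."""
    rng = random.Random(seed)
    dt = total_time / n_steps
    w, path = 0.0, [0.0]
    for _ in range(n_steps):
        w += rng.gauss(0.0, math.sqrt(dt))  # stationary, independent increment
        path.append(w)
    return path

# The endpoint W(1) is approximately N(0, 1); increase n_steps for a finer path.
print(wiener_path(total_time=1.0, n_steps=1000, seed=0)[-1])
```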
The Wiener process is a member of some important families of stochastic processes, including Markov processes, Lévy processes and Gaussian processes. The process also has many applications and is the main stochastic process used in stochastic calculus. It plays a central role in quantitative finance, where it is used, for example, in the Black–Scholes–Merton model. The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena.
Poisson process
The Poisson process is a stochastic process that has different forms and definitions. It can be defined as a counting process, which is a stochastic process that represents the random number of points or events up to some time. The number of points of the process that are located in the interval from zero to some given time is a Poisson random variable that depends on that time and some parameter. This process has the natural numbers as its state space and the non-negative numbers as its index set. This process is also called the Poisson counting process, since it can be interpreted as an example of a counting process.
If a Poisson process is defined with a single positive constant, then the process is called a homogeneous Poisson process. The homogeneous Poisson process is a member of important classes of stochastic processes such as Markov processes and Lévy processes.
The homogeneous Poisson process can be defined and generalized in different ways. It can be defined such that its index set is the real line, and this stochastic process is also called the stationary Poisson process. If the parameter constant of the Poisson process is replaced with some non-negative integrable function of $t$, the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant. Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randomly occurring in certain time windows.
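As a sketch of how the homogeneous case is commonly simulated (using the standard fact that gaps between points are iid exponential; the names here are illustrative, not a library API):

```python
import random

def poisson_arrival_times(rate, horizon, seed=None):
    """Simulate a homogeneous Poisson process with intensity `rate` on
    [0, horizon]; interarrival gaps are iid Exponential(rate)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate)  # next exponential interarrival gap
        if t > horizon:
            return arrivals
        arrivals.append(t)

# The number of points on [0, 5] is Poisson with mean rate * horizon = 10.
print(len(poisson_arrival_times(rate=2.0, horizon=5.0, seed=7)))
```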
Defined on the real line, the Poisson process can be interpreted as a stochastic process, among other random objects. But then it can be defined on the $n$-dimensional Euclidean space or other mathematical spaces, where it is often interpreted as a random set or a random counting measure, instead of a stochastic process. In this setting, the Poisson process, also called the Poisson point process, is one of the most important objects in probability theory, both for applications and theoretical reasons. But it has been remarked that the Poisson process does not receive as much attention as it should, partly due to it often being considered just on the real line, and not on other mathematical spaces.
Definitions
Stochastic process
A stochastic process is defined as a collection of random variables defined on a common probability space $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-algebra, and $P$ is a probability measure; and the random variables, indexed by some set $T$, all take values in the same mathematical space $S$, which must be measurable with respect to some $\sigma$-algebra $\Sigma$.
In other words, for a given probability space $(\Omega, \mathcal{F}, P)$ and a measurable space $(S, \Sigma)$, a stochastic process is a collection of $S$-valued random variables, which can be written as:
$$\{X(t) : t \in T\}.$$
Historically, in many problems from the natural sciences a point $t \in T$ had the meaning of time, so $X(t)$ is a random variable representing a value observed at time $t$. A stochastic process can also be written as $\{X(t, \omega) : t \in T\}$ to reflect that it is actually a function of two variables, $t \in T$ and $\omega \in \Omega$.
There are other ways to consider a stochastic process, with the above definition being considered the traditional one. For example, a stochastic process can be interpreted or defined as an $S^T$-valued random variable, where $S^T$ is the space of all the possible functions from the set $T$ into the space $S$. However this alternative definition as a "function-valued random variable" in general requires additional regularity assumptions to be well-defined.
Index set
The set $T$ is called the index set or parameter set of the stochastic process. Often this set is some subset of the real line, such as the natural numbers or an interval, giving the set $T$ the interpretation of time. In addition to these sets, the index set $T$ can be another set with a total order or a more general set, such as the Cartesian plane or $n$-dimensional Euclidean space, where an element $t \in T$ can represent a point in space. That said, many results and theorems are only possible for stochastic processes with a totally ordered index set.
State space
The mathematical space $S$ of a stochastic process is called its state space. This mathematical space can be defined using integers, real lines, $n$-dimensional Euclidean spaces, complex planes, or more abstract mathematical spaces. The state space is defined using elements that reflect the different values that the stochastic process can take.
Sample function
A sample function is a single outcome of a stochastic process, so it is formed by taking a single possible value of each random variable of the stochastic process. More precisely, if $\{X(t, \omega) : t \in T\}$ is a stochastic process, then for any point $\omega \in \Omega$, the mapping
$$X(\cdot, \omega) \colon T \to S$$
is called a sample function, a realization, or, particularly when $T$ is interpreted as time, a sample path of the stochastic process $\{X(t, \omega) : t \in T\}$. This means that for a fixed $\omega \in \Omega$, there exists a sample function that maps the index set $T$ to the state space $S$. Other names for a sample function of a stochastic process include trajectory, path function or path.
Increment
An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period. For example, if $\{X_t\}$ is a stochastic process with state space $S$ and index set $T = [0, \infty)$, then for any two non-negative numbers $t_1$ and $t_2$ such that $t_1 \leq t_2$, the difference $X_{t_2} - X_{t_1}$ is an $S$-valued random variable known as an increment. When interested in the increments, often the state space $S$ is the real line or the natural numbers, but it can be $n$-dimensional Euclidean space or more abstract spaces such as Banach spaces.
Further definitions
Law
For a stochastic process $X \colon \Omega \to S^T$ defined on the probability space $(\Omega, \mathcal{F}, P)$, the law of stochastic process $X$ is defined as the pushforward measure:
$$\mu = P \circ X^{-1},$$
where $P$ is a probability measure, the symbol $\circ$ denotes function composition and $X^{-1}$ is the pre-image of the measurable function or, equivalently, the $S^T$-valued random variable $X$, where $S^T$ is the space of all the possible $S$-valued functions of $t \in T$, so the law of a stochastic process is a probability measure.
For a measurable subset $B$ of $S^T$, the pre-image of $B$ gives
$$X^{-1}(B) = \{\omega \in \Omega : X(\omega) \in B\},$$
so the law of $X$ can be written as:
$$\mu(B) = P(\{\omega \in \Omega : X(\omega) \in B\}).$$
The law of a stochastic process or a random variable is also called the probability law, probability distribution, or the distribution.
Finite-dimensional probability distributions
For a stochastic process $X$ with law $\mu$, its finite-dimensional distribution for $t_1, \dots, t_n \in T$ is defined as:
$$\mu_{t_1, \dots, t_n} = P \circ (X(t_1), \dots, X(t_n))^{-1}.$$
This measure is the joint distribution of the random vector $(X(t_1), \dots, X(t_n))$; it can be viewed as a "projection" of the law $\mu$ onto a finite subset of $T$.
For any measurable subset $C$ of the $n$-fold Cartesian power $S^n = S \times \dots \times S$, the finite-dimensional distributions of a stochastic process $X$ can be written as:
$$\mu_{t_1, \dots, t_n}(C) = P(\{\omega \in \Omega : (X_{t_1}(\omega), \dots, X_{t_n}(\omega)) \in C\}).$$
The finite-dimensional distributions of a stochastic process satisfy two mathematical conditions known as consistency conditions.
Stationarity
Stationarity is a mathematical property that a stochastic process has when all the random variables of that stochastic process are identically distributed. In other words, if $\{X_t\}$ is a stationary stochastic process, then for any $t \in T$ the random variable $X_t$ has the same distribution, which means that for any set of $n$ index set values $t_1, \dots, t_n$, the corresponding $n$ random variables
$$X_{t_1}, \dots, X_{t_n}$$
all have the same probability distribution. The index set of a stationary stochastic process is usually interpreted as time, so it can be the integers or the real line. But the concept of stationarity also exists for point processes and random fields, where the index set is not interpreted as time.
When the index set can be interpreted as time, a stochastic process is said to be stationary if its finite-dimensional distributions are invariant under translations of time. This type of stochastic process can be used to describe a physical system that is in steady state, but still experiences random fluctuations. The intuition behind stationarity is that as time passes the distribution of the stationary stochastic process remains the same. A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed.
A stochastic process with the above definition of stationarity is sometimes said to be strictly stationary, but there are other forms of stationarity. One example is when a discrete-time or continuous-time stochastic process $X$ is said to be stationary in the wide sense, then the process $X$ has a finite second moment for all $t \in T$ and the covariance of the two random variables $X_t$ and $X_{t+h}$ depends only on the number $h$ for all $t \in T$. Khinchin introduced the related concept of stationarity in the wide sense, which has other names including covariance stationarity or stationarity in the broad sense.
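In symbols, a sketch of the wide-sense conditions (here $m$ and $K$ are our notation for the constant mean and the autocovariance function; a constant mean is usually required alongside the two conditions just stated):

```latex
\mathbb{E}\!\left[X_t^2\right] < \infty \quad \text{for all } t \in T, \qquad
\mathbb{E}\!\left[X_t\right] = m \quad \text{for all } t \in T, \qquad
\operatorname{Cov}\!\left(X_t,\, X_{t+h}\right) = K(h) \quad \text{for all } t \text{ and } h.
```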
Filtration
A filtration is an increasing sequence of sigma-algebras defined in relation to some probability space and an index set that has some total order relation, such as in the case of the index set being some subset of the real numbers. More formally, if a stochastic process has an index set with a total order, then a filtration $\{\mathcal{F}_t\}_{t \in T}$, on a probability space $(\Omega, \mathcal{F}, P)$ is a family of sigma-algebras such that $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ for all $s \leq t$, where $s, t \in T$ and $\leq$ denotes the total order of the index set $T$. With the concept of a filtration, it is possible to study the amount of information contained in a stochastic process $X_t$ at $t$, which can be interpreted as time $t$. The intuition behind a filtration $\mathcal{F}_t$ is that as time $t$ passes, more and more information on $X_t$ is known or available, which is captured in $\mathcal{F}_t$, resulting in finer and finer partitions of $\Omega$.
Modification
A modification of a stochastic process is another stochastic process, which is closely related to the original stochastic process. More precisely, a stochastic process $X$ that has the same index set $T$, state space $S$, and probability space $(\Omega, \mathcal{F}, P)$ as another stochastic process $Y$ is said to be a modification of $Y$ if for all $t \in T$ the following
$$P(X_t = Y_t) = 1$$
holds. Two stochastic processes that are modifications of each other have the same finite-dimensional law and they are said to be stochastically equivalent or equivalent.
Instead of modification, the term version is also used, however some authors use the term version when two stochastic processes have the same finite-dimensional distributions, but they may be defined on different probability spaces, so two processes that are modifications of each other, are also versions of each other, in the latter sense, but not the converse.
If a continuous-time real-valued stochastic process meets certain moment conditions on its increments, then the Kolmogorov continuity theorem says that there exists a modification of this process that has continuous sample paths with probability one, so the stochastic process has a continuous modification or version. The theorem can also be generalized to random fields so the index set is $n$-dimensional Euclidean space as well as to stochastic processes with metric spaces as their state spaces.
Indistinguishable
Two stochastic processes $X$ and $Y$ defined on the same probability space $(\Omega, \mathcal{F}, P)$ with the same index set $T$ and state space $S$ are said to be indistinguishable if the following
$$P(X_t = Y_t \text{ for all } t \in T) = 1$$
holds. If two $X$ and $Y$ are modifications of each other and are almost surely continuous, then $X$ and $Y$ are indistinguishable.
Separability
Separability is a property of a stochastic process based on its index set in relation to the probability measure. The property is assumed so that functionals of stochastic processes or random fields with uncountable index sets can form random variables. For a stochastic process to be separable, in addition to other conditions, its index set must be a separable space, which means that the index set has a dense countable subset.
More precisely, a real-valued continuous-time stochastic process $X$ with a probability space $(\Omega, \mathcal{F}, P)$ is separable if its index set $T$ has a dense countable subset $U \subset T$ and there is a set $\Omega_0 \subset \Omega$ of probability zero, so $P(\Omega_0) = 0$, such that for every open set $G \subset T$ and every closed set $F \subset \mathbb{R}$, the two events $\{X_t \in F \text{ for all } t \in G \cap U\}$ and $\{X_t \in F \text{ for all } t \in G\}$ differ from each other at most on a subset of $\Omega_0$.
The definition of separability can also be stated for other index sets and state spaces, such as in the case of random fields, where the index set as well as the state space can be $n$-dimensional Euclidean space.
The concept of separability of a stochastic process was introduced by Joseph Doob. The underlying idea of separability is to make a countable set of points of the index set determine the properties of the stochastic process. Any stochastic process with a countable index set already meets the separability conditions, so discrete-time stochastic processes are always separable. A theorem by Doob, sometimes known as Doob's separability theorem, says that any real-valued continuous-time stochastic process has a separable modification. Versions of this theorem also exist for more general stochastic processes with index sets and state spaces other than the real line.
Independence
Two stochastic processes $X$ and $Y$ defined on the same probability space $(\Omega, \mathcal{F}, P)$ with the same index set $T$ are said to be independent if for all $n \in \mathbb{N}$ and for every choice of epochs $t_1, \dots, t_n \in T$, the random vectors $(X(t_1), \dots, X(t_n))$ and $(Y(t_1), \dots, Y(t_n))$ are independent.
Uncorrelatedness
Two stochastic processes $\{X_t\}$ and $\{Y_t\}$ are called uncorrelated if their cross-covariance is zero for all times. Formally:
$$K_{XY}(t_1, t_2) = \mathbb{E}\big[(X(t_1) - \mu_X(t_1))(Y(t_2) - \mu_Y(t_2))\big] = 0 \quad \text{for all } t_1, t_2.$$
Independence implies uncorrelatedness
If two stochastic processes and are independent, then they are also uncorrelated.
Orthogonality
Two stochastic processes $\{X_t\}$ and $\{Y_t\}$ are called orthogonal if their cross-correlation is zero for all times. Formally:
$$R_{XY}(t_1, t_2) = \mathbb{E}\big[X(t_1)\,\overline{Y(t_2)}\big] = 0 \quad \text{for all } t_1, t_2.$$
Skorokhod space
A Skorokhod space, also written as Skorohod space, is a mathematical space of all the functions that are right-continuous with left limits, defined on some interval of the real line such as $[0, 1]$ or $[0, \infty)$, and take values on the real line or on some metric space. Such functions are known as càdlàg or cadlag functions, based on the acronym of the French phrase continue à droite, limite à gauche. A Skorokhod function space, introduced by Anatoliy Skorokhod, is often denoted with the letter $D$, so the function space is also referred to as space $D$. The notation of this function space can also include the interval on which all the càdlàg functions are defined, so, for example, $D[0, 1]$ denotes the space of càdlàg functions defined on the unit interval $[0, 1]$.
Skorokhod function spaces are frequently used in the theory of stochastic processes because it is often assumed that the sample functions of continuous-time stochastic processes belong to a Skorokhod space. Such spaces contain continuous functions, which correspond to sample functions of the Wiener process. But the space also has functions with discontinuities, which means that the sample functions of stochastic processes with jumps, such as the Poisson process (on the real line), are also members of this space.
Regularity
In the context of mathematical construction of stochastic processes, the term regularity is used when discussing and assuming certain conditions for a stochastic process to resolve possible construction issues. For example, to study stochastic processes with uncountable index sets, it is assumed that the stochastic process adheres to some type of regularity condition such as the sample functions being continuous.
Further examples
Markov processes and chains
Markov processes are stochastic processes, traditionally in discrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process.
The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time.
A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it has been also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space). It has been argued that the first definition of a Markov chain, where it has discrete time, now tends to be used, despite the second definition having been used by researchers like Joseph Doob and Kai Lai Chung.
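For intuition, here is a toy Python sketch of a discrete-time Markov chain with a countable state space, where each step depends only on the current state; the two-state weather chain and all names are our illustrative invention, not taken from the literature:

```python
import random

def simulate_markov_chain(transition, state, n_steps, seed=None):
    """Simulate a discrete-time Markov chain; transition[s] maps state s
    to a dict of next-state probabilities summing to 1."""
    rng = random.Random(seed)
    path = [state]
    for _ in range(n_steps):
        r, cumulative = rng.random(), 0.0
        for next_state, prob in transition[state].items():
            cumulative += prob
            if r < cumulative:
                state = next_state
                break
        path.append(state)
    return path

# Two-state chain: tomorrow's weather depends only on today's (Markov property).
P = {"sunny": {"sunny": 0.9, "rainy": 0.1},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}
print(simulate_markov_chain(P, "sunny", 10, seed=3))
```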
Markov processes form an important class of stochastic processes and have applications in many areas. For example, they are the basis for a general stochastic simulation method known as Markov chain Monte Carlo, which is used for simulating random objects with specific probability distributions, and has found application in Bayesian statistics.
The concept of the Markov property was originally for stochastic processes in continuous and discrete time, but the property has been adapted for other index sets such as $n$-dimensional Euclidean space, which results in collections of random variables known as Markov random fields.
Martingale
A martingale is a discrete-time or continuous-time stochastic process with the property that, at every instant, given the current value and all the past values of the process, the conditional expectation of every future value is equal to the current value. In discrete time, if this property holds for the next value, then it holds for all future values. The exact mathematical definition of a martingale requires two other conditions coupled with the mathematical concept of a filtration, which is related to the intuition of increasing available information as time passes. Martingales are usually defined to be real-valued, but they can also be complex-valued or even more general.
A symmetric random walk and a Wiener process (with zero drift) are both examples of martingales, respectively, in discrete and continuous time. For a sequence of independent and identically distributed random variables with zero mean, the stochastic process formed from the successive partial sums is a discrete-time martingale. In this aspect, discrete-time martingales generalize the idea of partial sums of independent random variables.
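The partial-sum example can be checked by one line of conditioning (a standard derivation; $\mathcal{F}_n$ denotes the filtration generated by $X_1, \dots, X_n$):

```latex
M_n = \sum_{i=1}^{n} X_i, \qquad
\mathbb{E}\!\left[M_{n+1} \mid \mathcal{F}_n\right]
  = M_n + \mathbb{E}\!\left[X_{n+1} \mid \mathcal{F}_n\right]
  = M_n + \mathbb{E}\!\left[X_{n+1}\right]
  = M_n,
```

using the independence of $X_{n+1}$ from $\mathcal{F}_n$ and the zero-mean assumption.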
Martingales can also be created from stochastic processes by applying some suitable transformations, which is the case for the homogeneous Poisson process (on the real line) resulting in a martingale called the compensated Poisson process. Martingales can also be built from other martingales. For example, there are martingales based on the Wiener process, forming continuous-time martingales.
Martingales mathematically formalize the idea of a 'fair game' where it is possible to form reasonable expectations for payoffs, and they were originally developed to show that it is not possible to gain an 'unfair' advantage in such a game. But now they are used in many areas of probability, which is one of the main reasons for studying them. Many problems in probability have been solved by finding a martingale in the problem and studying it. Martingales will converge, given some conditions on their moments, so they are often used to derive convergence results, due largely to martingale convergence theorems.
Martingales have many applications in statistics, but it has been remarked that their use and application are not as widespread as they could be in the field of statistics, particularly statistical inference. They have found applications in areas in probability theory such as queueing theory and Palm calculus and other fields such as economics and finance.
Lévy process
Lévy processes are types of stochastic processes that can be considered as generalizations of random walks in continuous time. These processes have many applications in fields such as finance, fluid mechanics, physics and biology. The main defining characteristics of these processes are their stationarity and independence properties, so they were known as processes with stationary and independent increments. In other words, a stochastic process $X$ is a Lévy process if for $n$ non-negative numbers, $0 \leq t_1 \leq \dots \leq t_n$, the corresponding $n-1$ increments
$$X_{t_2} - X_{t_1}, \dots, X_{t_n} - X_{t_{n-1}}$$
are all independent of each other, and the distribution of each increment only depends on the difference in time.
A Lévy process can be defined such that its state space is some abstract mathematical space, such as a Banach space, but the processes are often defined so that they take values in Euclidean space. The index set is the non-negative numbers, so $T = [0, \infty)$, which gives the interpretation of time. Important stochastic processes such as the Wiener process, the homogeneous Poisson process (in one dimension), and subordinators are all Lévy processes.
Random field
A random field is a collection of random variables indexed by an $n$-dimensional Euclidean space or some manifold. In general, a random field can be considered an example of a stochastic or random process, where the index set is not necessarily a subset of the real line. But there is a convention that an indexed collection of random variables is called a random field when the index has two or more dimensions. If the specific definition of a stochastic process requires the index set to be a subset of the real line, then the random field can be considered as a generalization of stochastic process.
Point process
A point process is a collection of points randomly located on some mathematical space such as the real line, $n$-dimensional Euclidean space, or more abstract spaces. Sometimes the term point process is not preferred, as historically the word process denoted an evolution of some system in time, so a point process is also called a random point field. There are different interpretations of a point process, such as a random counting measure or a random set. Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear.
Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or $n$-dimensional Euclidean space. Other stochastic processes such as renewal and counting processes are studied in the theory of point processes.
History
Early probability theory
Probability theory has its origins in games of chance, which have a long history, with some games being played thousands of years ago, but very little analysis on them was done in terms of probability. The year 1654 is often considered the birth of probability theory when French mathematicians Pierre Fermat and Blaise Pascal had a written correspondence on probability, motivated by a gambling problem. But there was earlier mathematical work done on the probability of gambling games such as Liber de Ludo Aleae by Gerolamo Cardano, written in the 16th century but posthumously published later in 1663.
After Cardano, Jakob Bernoulli wrote Ars Conjectandi, which is considered a significant event in the history of probability theory. Bernoulli's book was published, also posthumously, in 1713 and inspired many mathematicians to study probability. But despite some renowned mathematicians contributing to probability theory, such as Pierre-Simon Laplace, Abraham de Moivre, Carl Gauss, Siméon Poisson and Pafnuty Chebyshev, most of the mathematical community did not consider probability theory to be part of mathematics until the 20th century.
Statistical mechanics
In the physical sciences, scientists developed in the 19th century the discipline of statistical mechanics, where physical systems, such as containers filled with gases, are regarded or treated mathematically as collections of many moving particles. Although there were attempts to incorporate randomness into statistical physics by some scientists, such as Rudolf Clausius, most of the work had little or no randomness.
This changed in 1859 when James Clerk Maxwell contributed significantly to the field, more specifically, to the kinetic theory of gases, by presenting work where he modelled the gas particles as moving in random directions at random velocities. The kinetic theory of gases and statistical physics continued to be developed in the second half of the 19th century, with work done chiefly by Clausius, Ludwig Boltzmann and Josiah Gibbs, which would later have an influence on Albert Einstein's mathematical model for Brownian movement.
Measure theory and probability theory
At the International Congress of Mathematicians in Paris in 1900, David Hilbert presented a list of mathematical problems, where his sixth problem asked for a mathematical treatment of physics and probability involving axioms. Around the start of the 20th century, mathematicians developed measure theory, a branch of mathematics for studying integrals of mathematical functions, where two of the founders were French mathematicians, Henri Lebesgue and Émile Borel. In 1925, another French mathematician Paul Lévy published the first probability book that used ideas from measure theory.
In the 1920s, fundamental contributions to probability theory were made in the Soviet Union by mathematicians such as Sergei Bernstein, Aleksandr Khinchin, and Andrei Kolmogorov. Kolmogorov published in 1929 his first attempt at presenting a mathematical foundation, based on measure theory, for probability theory. In the early 1930s, Khinchin and Kolmogorov set up probability seminars, which were attended by researchers such as Eugene Slutsky and Nikolai Smirnov, and Khinchin gave the first mathematical definition of a stochastic process as a set of random variables indexed by the real line.
Birth of modern probability theory
In 1933, Andrei Kolmogorov published, in German, his book on the foundations of probability theory titled Grundbegriffe der Wahrscheinlichkeitsrechnung, where Kolmogorov used measure theory to develop an axiomatic framework for probability theory. The publication of this book is now widely considered to be the birth of modern probability theory, when the theories of probability and stochastic processes became parts of mathematics.
After the publication of Kolmogorov's book, further fundamental work on probability theory and stochastic processes was done by Khinchin and Kolmogorov as well as other mathematicians such as Joseph Doob, William Feller, Maurice Fréchet, Paul Lévy, Wolfgang Doeblin, and Harald Cramér.
Decades later, Cramér referred to the 1930s as the "heroic period of mathematical probability theory". World War II greatly interrupted the development of probability theory, causing, for example, the migration of Feller from Sweden to the United States of America and the death of Doeblin, considered now a pioneer in stochastic processes.
Stochastic processes after World War II
After World War II, the study of probability theory and stochastic processes gained more attention from mathematicians, with significant contributions made in many areas of probability and mathematics as well as the creation of new areas. Starting in the 1940s, Kiyosi Itô published papers developing the field of stochastic calculus, which involves stochastic integrals and stochastic differential equations based on the Wiener or Brownian motion process.
Also starting in the 1940s, connections were made between stochastic processes, particularly martingales, and the mathematical field of potential theory, with early ideas by Shizuo Kakutani and then later work by Joseph Doob. Further work, considered pioneering, was done by Gilbert Hunt in the 1950s, connecting Markov processes and potential theory, which had a significant effect on the theory of Lévy processes and led to more interest in studying Markov processes with methods developed by Itô.
In 1953, Doob published his book Stochastic processes, which had a strong influence on the theory of stochastic processes and stressed the importance of measure theory in probability.
Doob also chiefly developed the theory of martingales, with later substantial contributions by Paul-André Meyer. Earlier work had been carried out by Sergei Bernstein, Paul Lévy and Jean Ville, the latter adopting the term martingale for the stochastic process. Methods from the theory of martingales became popular for solving various probability problems. Techniques and theory were developed to study Markov processes and then applied to martingales. Conversely, methods from the theory of martingales were established to treat Markov processes.
Other fields of probability were developed and used to study stochastic processes, with one main approach being the theory of large deviations. The theory has many applications in statistical physics, among other fields, and has core ideas going back to at least the 1930s. Later in the 1960s and 1970s, fundamental work was done by Alexander Wentzell in the Soviet Union and Monroe D. Donsker and Srinivasa Varadhan in the United States of America, which would later result in Varadhan winning the 2007 Abel Prize. In the 1990s and 2000s the theories of Schramm–Loewner evolution and rough paths were introduced and developed to study stochastic processes and other mathematical objects in probability theory, which respectively resulted in Fields Medals being awarded to Wendelin Werner in 2008 and to Martin Hairer in 2014.
The theory of stochastic processes still continues to be a focus of research, with yearly international conferences on the topic of stochastic processes.
Discoveries of specific stochastic processes
Although Khinchin gave mathematical definitions of stochastic processes in the 1930s, specific stochastic processes had already been discovered in different settings, such as the Brownian motion process and the Poisson process. Some families of stochastic processes such as point processes or renewal processes have long and complex histories, stretching back centuries.
Bernoulli process
The Bernoulli process, which can serve as a mathematical model for flipping a biased coin, is possibly the first stochastic process to have been studied. The process is a sequence of independent Bernoulli trials, which are named after Jacob Bernoulli, who used them to study games of chance, including probability problems proposed and studied earlier by Christiaan Huygens. Bernoulli's work, including the Bernoulli process, was published in his book Ars Conjectandi in 1713.
Random walks
In 1905, Karl Pearson coined the term random walk while posing a problem describing a random walk on the plane, which was motivated by an application in biology, but such problems involving random walks had already been studied in other fields. Certain gambling problems that were studied centuries earlier can be considered as problems involving random walks. For example, the problem known as the Gambler's ruin is based on a simple random walk, and is an example of a random walk with absorbing barriers. Pascal, Fermat and Huygens all gave numerical solutions to this problem without detailing their methods, and then more detailed solutions were presented by Jacob Bernoulli and Abraham de Moivre.
For random walks in n-dimensional integer lattices, George Pólya published, in 1919 and 1921, work where he studied the probability of a symmetric random walk returning to a previous position in the lattice. Pólya showed that a symmetric random walk, which has an equal probability to advance in any direction in the lattice, will return to a previous position in the lattice an infinite number of times with probability one in one and two dimensions, but with probability zero in three or higher dimensions.
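Pólya's dichotomy can be checked numerically. Below is a minimal Monte Carlo sketch (illustrative only; the walk length and trial count are arbitrary choices, not from the sources) estimating the probability that a symmetric lattice walk in d dimensions returns to the origin within a fixed horizon.

```python
import random

def ever_returns(dim, steps):
    """Simulate a symmetric lattice walk; True if it revisits the origin."""
    pos = [0] * dim
    for _ in range(steps):
        axis = random.randrange(dim)         # pick a coordinate axis
        pos[axis] += random.choice((-1, 1))  # step +1 or -1 along it
        if all(c == 0 for c in pos):
            return True
    return False

for dim in (1, 2, 3):
    trials = 500
    hits = sum(ever_returns(dim, 2000) for _ in range(trials))
    print(f"d={dim}: estimated return probability ~ {hits / trials:.2f}")
```

With a longer horizon the d = 1 and d = 2 estimates creep toward 1 (slowly for d = 2), while the d = 3 estimate stays near the known return probability of roughly 0.34.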
Wiener process
The Wiener process or Brownian motion process has its origins in different fields including statistics, finance and physics. In 1880, Danish astronomer Thorvald Thiele wrote a paper on the method of least squares, where he used the process to study the errors of a model in time-series analysis. The work is now considered as an early discovery of the statistical method known as Kalman filtering, but the work was largely overlooked. It is thought that the ideas in Thiele's paper were too advanced to have been understood by the broader mathematical and statistical community at the time.
The French mathematician Louis Bachelier used a Wiener process in his 1900 thesis in order to model price changes on the Paris Bourse, a stock exchange, without knowing the work of Thiele. It has been speculated that Bachelier drew ideas from the random walk model of Jules Regnault, but Bachelier did not cite him, and Bachelier's thesis is now considered pioneering in the field of financial mathematics.
It is commonly thought that Bachelier's work gained little attention and was forgotten for decades until it was rediscovered in the 1950s by Leonard Savage, and then became more popular after Bachelier's thesis was translated into English in 1964. But the work was never forgotten in the mathematical community, as Bachelier published a book in 1912 detailing his ideas, which was cited by mathematicians including Doob, Feller and Kolmogorov. The book continued to be cited, but then starting in the 1960s, the original thesis by Bachelier began to be cited more than his book when economists started citing Bachelier's work.
In 1905, Albert Einstein published a paper where he studied the physical observation of Brownian motion or movement to explain the seemingly random movements of particles in liquids by using ideas from the kinetic theory of gases. Einstein derived a differential equation, known as a diffusion equation, for describing the probability of finding a particle in a certain region of space. Shortly after Einstein's first paper on Brownian movement, Marian Smoluchowski published work where he cited Einstein, but wrote that he had independently derived the equivalent results by using a different method.
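The diffusion equation Einstein derived takes, in one dimension, the standard form below, where $\rho(x,t)$ is the probability density of the particle's position and $D$ the diffusion coefficient; for a particle starting at the origin, the Gaussian solution and the linear growth of the mean squared displacement follow:

$$\frac{\partial \rho}{\partial t} = D\,\frac{\partial^2 \rho}{\partial x^2},
\qquad
\rho(x,t) = \frac{1}{\sqrt{4\pi D t}}\,\exp\!\left(-\frac{x^2}{4Dt}\right),
\qquad
\langle x^2 \rangle = 2Dt.$$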
Einstein's work, as well as experimental results obtained by Jean Perrin, later inspired Norbert Wiener in the 1920s to use a type of measure theory, developed by Percy Daniell, and Fourier analysis to prove the existence of the Wiener process as a mathematical object.
Poisson process
The Poisson process is named after Siméon Poisson, due to its definition involving the Poisson distribution, but Poisson never studied the process. There are a number of claims for early uses or discoveries of the Poisson process.
At the beginning of the 20th century, the Poisson process would arise independently in different situations.
In Sweden in 1903, Filip Lundberg published a thesis containing work, now considered fundamental and pioneering, in which he proposed to model insurance claims with a homogeneous Poisson process.
Another discovery occurred in Denmark in 1909 when A.K. Erlang derived the Poisson distribution while developing a mathematical model for the number of incoming phone calls in a finite time interval. Erlang was not at the time aware of Poisson's earlier work and assumed that the numbers of phone calls arriving in each interval of time were independent of each other. He then found the limiting case, which effectively recasts the Poisson distribution as a limit of the binomial distribution.
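Erlang's limiting argument is the classical "law of rare events": fixing the mean number of calls per interval at $\lambda = np$ while letting the number $n$ of independent opportunities grow, the binomial probabilities converge to Poisson probabilities:

$$\lim_{n\to\infty} \binom{n}{k}\left(\frac{\lambda}{n}\right)^{k}\left(1-\frac{\lambda}{n}\right)^{n-k} = \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad k = 0, 1, 2, \ldots$$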
In 1910, Ernest Rutherford and Hans Geiger published experimental results on counting alpha particles. Motivated by their work, Harry Bateman studied the counting problem and derived Poisson probabilities as a solution to a family of differential equations, resulting in the independent discovery of the Poisson process. After this time there were many studies and applications of the Poisson process, but its early history is complicated, which has been explained by the various applications of the process in numerous fields by biologists, ecologists, engineers and various physical scientists.
Markov processes
Markov processes and Markov chains are named after Andrey Markov who studied Markov chains in the early 20th century. Markov was interested in studying an extension of independent random sequences. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.
In 1912, Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.
Andrei Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.
Lévy processes
Lévy processes such as the Wiener process and the Poisson process (on the real line) are named after Paul Lévy who started studying them in the 1930s, but they have connections to infinitely divisible distributions going back to the 1920s. In a 1932 paper, Kolmogorov derived a characteristic function for random variables associated with Lévy processes. This result was later derived under more general conditions by Lévy in 1934, and then Khinchin independently gave an alternative form for this characteristic function in 1937. In addition to Lévy, Khinchin and Kolmogorov, early fundamental contributions to the theory of Lévy processes were made by Bruno de Finetti and Kiyosi Itô.
Mathematical construction
In mathematics, constructions of mathematical objects are needed to prove that they exist, and this is also the case for stochastic processes. There are two main approaches for constructing a stochastic process. One approach involves considering a measurable space of functions, defining a suitable measurable mapping from a probability space to this measurable space of functions, and then deriving the corresponding finite-dimensional distributions.
Another approach involves defining a collection of random variables to have specific finite-dimensional distributions, and then using Kolmogorov's existence theorem to prove a corresponding stochastic process exists. This theorem, which is an existence theorem for measures on infinite product spaces, says that if any finite-dimensional distributions satisfy two conditions, known as consistency conditions, then there exists a stochastic process with those finite-dimensional distributions.
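As an informal illustration of the second approach (a sketch, not part of the article's sources): the Wiener process is specified by consistent Gaussian finite-dimensional distributions with independent increments, so its values at finitely many ordered times can be sampled directly from those distributions.

```python
import math
import random

def wiener_sample(times):
    """Sample (W_t1, ..., W_tn) from the Wiener process's finite-dimensional
    distribution: independent Gaussian increments whose variance equals the
    time elapsed. `times` must be strictly increasing and positive."""
    path, w, prev = [], 0.0, 0.0
    for t in times:
        w += random.gauss(0.0, math.sqrt(t - prev))  # increment ~ N(0, t - prev)
        prev = t
        path.append(w)
    return path

print(wiener_sample([0.1, 0.5, 1.0, 2.0]))
```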
Construction issues
When constructing continuous-time stochastic processes certain mathematical difficulties arise, due to the uncountable index sets, which do not occur with discrete-time processes. One problem is that it is possible to have more than one stochastic process with the same finite-dimensional distributions. For example, both the left-continuous modification and the right-continuous modification of a Poisson process have the same finite-dimensional distributions. This means that the distribution of the stochastic process does not, necessarily, specify uniquely the properties of the sample functions of the stochastic process.
Another problem is that functionals of a continuous-time process that rely upon an uncountable number of points of the index set may not be measurable, so the probabilities of certain events may not be well-defined. For example, the supremum of a stochastic process or random field is not necessarily a well-defined random variable. For a continuous-time stochastic process X = {X_t : t ∈ T}, other characteristics that depend on an uncountable number of points of the index set include:
a sample function of a stochastic process X is a continuous function of t ∈ T;
a sample function of a stochastic process X is a bounded function of t ∈ T; and
a sample function of a stochastic process X is an increasing function of t ∈ T,
where the symbol ∈ can be read "a member of the set", as in t a member of the set T.
To overcome the two difficulties described above, i.e., "more than one..." and "functionals of...", different assumptions and approaches are possible.
Resolving construction issues
One approach for avoiding mathematical construction issues of stochastic processes, proposed by Joseph Doob, is to assume that the stochastic process is separable. Separability ensures that infinite-dimensional distributions determine the properties of sample functions by requiring that sample functions are essentially determined by their values on a dense countable set of points in the index set. Furthermore, if a stochastic process is separable, then functionals of an uncountable number of points of the index set are measurable and their probabilities can be studied.
Another approach is possible, originally developed by Anatoliy Skorokhod and Andrei Kolmogorov, for a continuous-time stochastic process with any metric space as its state space. For the construction of such a stochastic process, it is assumed that the sample functions of the stochastic process belong to some suitable function space, which is usually the Skorokhod space consisting of all right-continuous functions with left limits. This approach is now more used than the separability assumption, but such a stochastic process based on this approach will be automatically separable.
Although less used, the separability assumption is considered more general because every stochastic process has a separable version. It is also used when it is not possible to construct a stochastic process in a Skorokhod space. For example, separability is assumed when constructing and studying random fields, where the collection of random variables is now indexed by sets other than the real line such as n-dimensional Euclidean space.
Application
Applications in Finance
Black-Scholes Model
One of the most famous applications of stochastic processes in finance is the Black-Scholes model for option pricing. Developed by Fischer Black, Myron Scholes, and Robert Merton, this model uses geometric Brownian motion, a specific type of stochastic process, to describe the dynamics of asset prices. The model assumes that the price of a stock follows a continuous-time stochastic process and provides a closed-form solution for pricing European-style options. The Black-Scholes formula has had a profound impact on financial markets, forming the basis for much of modern options trading.
The key assumption of the Black-Scholes model is that the price of a financial asset, such as a stock, follows a log-normal distribution, with its continuous returns following a normal distribution. Although the model has limitations, such as the assumption of constant volatility, it remains widely used due to its simplicity and practical relevance.
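The model's closed-form solution for a European call can be written down directly. The sketch below is illustrative only, with arbitrary parameter values, and expresses the standard normal CDF via the error function.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K,
    time to expiry T (years), risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(black_scholes_call(S=100, K=105, T=1.0, r=0.05, sigma=0.2))  # ~8.02
```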
Stochastic Volatility Models
Another significant application of stochastic processes in finance is in stochastic volatility models, which aim to capture the time-varying nature of market volatility. The Heston model is a popular example, allowing for the volatility of asset prices to follow its own stochastic process. Unlike the Black-Scholes model, which assumes constant volatility, stochastic volatility models provide a more flexible framework for modeling market dynamics, particularly during periods of high uncertainty or market stress.
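A minimal Euler–Maruyama discretization of the Heston dynamics is sketched below; the parameter names follow common usage (kappa for mean-reversion speed, theta for long-run variance, xi for volatility of variance, rho for the correlation between the two driving Brownian motions), and all values are purely illustrative.

```python
import math
import random

def heston_path(S0, v0, mu, kappa, theta, xi, rho, T, n):
    """Euler-Maruyama simulation of one Heston price path; the
    'full truncation' max() keeps the variance argument non-negative."""
    dt = T / n
    S, v = S0, v0
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
        vp = max(v, 0.0)
        S *= math.exp((mu - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * z2
    return S

print(heston_path(S0=100, v0=0.04, mu=0.05, kappa=1.5,
                  theta=0.04, xi=0.3, rho=-0.7, T=1.0, n=252))
```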
Applications in Biology
Population Dynamics
One of the primary applications of stochastic processes in biology is in population dynamics. In contrast to deterministic models, which assume that populations change in predictable ways, stochastic models account for the inherent randomness in births, deaths, and migration. The birth-death process, a simple stochastic model, describes how populations fluctuate over time due to random births and deaths. These models are particularly important when dealing with small populations, where random events can have large impacts, such as in the case of endangered species or small microbial populations.
Another example is the branching process, which models the growth of a population where each individual reproduces independently. The branching process is often used to describe population extinction or explosion, particularly in epidemiology, where it can model the spread of infectious diseases within a population.
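A Galton–Watson branching process of the kind described above can be simulated in a few lines; the Poisson offspring distribution and parameter values below are illustrative choices. With mean offspring number at most one, the line dies out with probability one.

```python
import math
import random

def poisson(lam):
    """Knuth's method for sampling a Poisson random variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def galton_watson(mean_offspring, generations, initial=1):
    """Population sizes of a branching process with Poisson offspring."""
    sizes, n = [initial], initial
    for _ in range(generations):
        n = sum(poisson(mean_offspring) for _ in range(n))
        sizes.append(n)
        if n == 0:  # extinction
            break
    return sizes

print(galton_watson(mean_offspring=1.2, generations=20))
```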
Applications in Computer Science
Randomized Algorithms
Stochastic processes play a critical role in computer science, particularly in the analysis and development of randomized algorithms. These algorithms utilize random inputs to simplify problem-solving or enhance performance in complex computational tasks. For instance, Markov chains are widely used in probabilistic algorithms for optimization and sampling tasks, such as those employed in search engines like Google's PageRank. These methods balance computational efficiency with accuracy, making them invaluable for handling large datasets. Randomized algorithms are also extensively applied in areas such as cryptography, large-scale simulations, and artificial intelligence, where uncertainty must be managed effectively.
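PageRank itself is the stationary distribution of a Markov chain over pages. The toy power-iteration sketch below uses a made-up four-page link graph and the conventional damping factor of 0.85 purely for illustration.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration for the PageRank Markov chain.
    links[i] lists the pages that page i links to."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for i, outs in enumerate(links):
            share = damping * rank[i] / len(outs) if outs else 0.0
            for j in outs:
                new[j] += share
        rank = new
    return rank

# Toy graph: page 0 -> 1,2; page 1 -> 2; page 2 -> 0; page 3 -> 2
print(pagerank([[1, 2], [2], [0], [2]]))
```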
Queuing Theory
Another significant application of stochastic processes in computer science is in queuing theory, which models the random arrival and service of tasks in a system. This is particularly relevant in network traffic analysis and server management. For instance, queuing models help predict delays, manage resource allocation, and optimize throughput in web servers and communication networks. The flexibility of stochastic models allows researchers to simulate and improve the performance of high-traffic environments. For example, queuing theory is crucial for designing efficient data centers and cloud computing infrastructures.
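For the simplest single-server case, the M/M/1 queue (Poisson arrivals at rate λ, exponential service at rate μ, with λ < μ), standard steady-state formulas give utilization, mean queue length, and mean waiting times; the sketch below evaluates them for illustrative rates.

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 steady-state formulas (requires lam < mu)."""
    rho = lam / mu                 # server utilization
    L = rho / (1.0 - rho)          # mean number of jobs in the system
    W = 1.0 / (mu - lam)           # mean time a job spends in the system
    Wq = rho / (mu - lam)          # mean time spent waiting in the queue
    return {"utilization": rho, "jobs_in_system": L,
            "time_in_system": W, "wait_in_queue": Wq}

print(mm1_metrics(lam=8.0, mu=10.0))  # e.g., 8 arrivals/s vs 10 services/s
```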
| Mathematics | Statistics and probability | null |
47949 | https://en.wikipedia.org/wiki/Union%20%28set%20theory%29 | Union (set theory) | In set theory, the union (denoted by ∪) of a collection of sets is the set of all elements in the collection. It is one of the fundamental operations through which sets can be combined and related to each other.
A nullary union refers to a union of zero (0) sets and it is by definition equal to the empty set.
For explanation of the symbols used in this article, refer to the table of mathematical symbols.
Union of two sets
The union of two sets A and B is the set of elements which are in A, in B, or in both A and B. In set-builder notation,
A ∪ B = {x : x ∈ A or x ∈ B}.
For example, if A = {1, 3, 5, 7} and B = {1, 2, 4, 6, 7} then A ∪ B = {1, 2, 3, 4, 5, 6, 7}. A more elaborate example (involving two infinite sets) is:
A = {x : x is an even integer larger than 1}
B = {x : x is an odd integer larger than 1}
A ∪ B = {x : x is an integer larger than 1}
As another example, the number 9 is not contained in the union of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of even numbers {2, 4, 6, 8, 10, ...}, because 9 is neither prime nor even.
Sets cannot have duplicate elements, so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}. Multiple occurrences of identical elements have no effect on the cardinality of a set or its contents.
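Programming languages commonly expose this operation directly; as a trivial illustration (not part of the article), Python's built-in set type reproduces the example above:

```python
A = {1, 3, 5, 7}
B = {1, 2, 4, 6, 7}

print(A | B)          # {1, 2, 3, 4, 5, 6, 7} -- duplicates appear once
print(A.union(B))     # equivalent method form
```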
Algebraic properties
Binary union is an associative operation; that is, for any sets A, B, and C,
A ∪ (B ∪ C) = (A ∪ B) ∪ C.
Thus, the parentheses may be omitted without ambiguity: either of the above can be written as A ∪ B ∪ C. Also, union is commutative, so the sets can be written in any order.
The empty set is an identity element for the operation of union. That is, A ∪ ∅ = A, for any set A. Also, the union operation is idempotent: A ∪ A = A. All these properties follow from analogous facts about logical disjunction.
Intersection distributes over union:
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C),
and union distributes over intersection:
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
The power set of a set U, together with the operations given by union, intersection, and complementation, is a Boolean algebra. In this Boolean algebra, union can be expressed in terms of intersection and complementation by the formula
A ∪ B = (A^c ∩ B^c)^c,
where the superscript c denotes the complement in the universal set U. Alternatively, intersection can be expressed in terms of union and complementation in a similar way: A ∩ B = (A^c ∪ B^c)^c. These two expressions together are called De Morgan's laws.
Finite unions
One can take the union of several sets simultaneously. For example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C.
A finite union is the union of a finite number of sets; the phrase does not imply that the union set is a finite set.
Arbitrary unions
The most general notion is the union of an arbitrary collection of sets, sometimes called an infinitary union. If M is a set or class whose elements are sets, then x is an element of the union of M if and only if there is at least one element A of M such that x is an element of A. In symbols: x ∈ ⋃M if and only if there exists A ∈ M with x ∈ A.
This idea subsumes the preceding sections—for example, A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set.
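As a small illustration (again not part of the article), the arbitrary union translates directly to code, including the empty-collection case:

```python
M = [{1, 2}, {2, 3}, {5}]

union = set().union(*M)   # union of every set in the collection M
print(union)              # {1, 2, 3, 5}

# The empty collection gives the empty set, as in the text.
print(set().union(*[]))   # set()
```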
Notations
The notation for the general concept can vary considerably. For a finite union of sets S_1, S_2, ..., S_n one often writes S_1 ∪ S_2 ∪ ... ∪ S_n or ⋃_{i=1}^{n} S_i. Various common notations for arbitrary unions include ⋃M, ⋃_{A∈M} A, and ⋃_{i∈I} A_i. The last of these notations refers to the union of the collection {A_i : i ∈ I}, where I is an index set and A_i is a set for every i ∈ I. In the case that the index set I is the set of natural numbers, one uses the notation ⋃_{i=1}^{∞} A_i, which is analogous to that of the infinite sums in series.
When the symbol "∪" is placed before other symbols (instead of between them), it is usually rendered as a larger size.
Notation encoding
In Unicode, union is represented by the character U+222A ∪ UNION. In TeX, ∪ is rendered from \cup and ⋃ is rendered from \bigcup.
| Mathematics | Discrete mathematics | null |
47967 | https://en.wikipedia.org/wiki/Authentication | Authentication | Authentication (from authentikos, "real, genuine", from αὐθέντης authentes, "author") is the act of proving an assertion, such as the identity of a computer system user. In contrast with identification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity. It might involve validating personal identity documents, verifying the authenticity of a website with a digital certificate, determining the age of an artifact by carbon dating, or ensuring that a product or document is not counterfeit.
Methods
Authentication is relevant to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person or in a certain place or period of history. In computer science, verifying a user's identity is often required to allow access to confidential data or systems.
Authentication can be considered to be of three types:
The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member, or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while they may not have evidence that every step in the supply chain was authenticated. Centralized authority-based trust relationships back most secure internet communication through known public certificate authorities; decentralized peer-based trust, also known as a web of trust, is used for personal services such as email or file sharing, where trust is established by known individuals signing each other's cryptographic keys, for instance.
The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist, on the other hand, might use carbon dating to verify the age of an artifact, do a chemical and spectroscopic analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.
Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.
In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well.
Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.
Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.
The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury and are also vulnerable to being separated from the artifact and lost.
In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access devices to allow system access. In this case, authenticity is implied but not guaranteed.
Consumer goods such as pharmaceuticals, perfume, and clothing can use all forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation. As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect from counterfeiters, including adding holograms, security rings, security threads and color shifting ink.
Authentication factors
The ways in which someone may be authenticated fall into three categories, based on what is known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity before being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.
Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified. The three factors (classes) and some of the elements of each factor are:
Knowledge: Something the user knows (e.g., a password, partial password, passphrase, personal identification number (PIN), challenge–response (the user must answer a question or pattern), security question).
Ownership: Something the user has (e.g., wrist band, ID card, security token, implanted device, cell phone with a built-in hardware token, software token, or cell phone holding a software token).
Inherence: Something the user is or does (e.g., fingerprint, retinal pattern, DNA sequence (there are assorted definitions of what is sufficient), signature, face, voice, unique bio-electric signals, or other biometric identifiers).
Single-factor authentication
As the weakest level of authentication, only a single component from one of the three categories of factors is used to authenticate an individual's identity. The use of only one factor does not offer much protection from misuse or malicious intrusion. This type of authentication is not recommended for financial or personally relevant transactions that warrant a higher level of security.
Multi-factor authentication
Multi-factor authentication involves two or more authentication factors (something you know, something you have, or something you are). Two-factor authentication is a special case of multi-factor authentication involving exactly two factors.
For example, using a bank card (something the user has) along with a PIN (something the user knows) provides two-factor authentication. Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still a two-factor authentication.
Authentication types
Strong authentication
The United States government's National Information Assurance Glossary defines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information.
The European Central Bank (ECB) has defined strong authentication as "a procedure based on two or more of the three authentication factors". The factors that are used must be mutually independent and at least one factor must be "non-reusable and non-replicable", except in the case of an inherence factor, and must also be incapable of being stolen off the Internet. In both the European and the US understanding, strong authentication is very similar to multi-factor authentication or 2FA, but exceeds it with more rigorous requirements.
The FIDO Alliance has been striving to establish technical specifications for strong authentication.
Continuous authentication
Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor and authenticate users based on some biometric trait(s). A study used behavioural biometrics based on writing styles as a continuous authentication method.
Recent research has shown the possibility of using smartphone sensors and accessories to extract some behavioral attributes such as touch dynamics, keystroke dynamics and gait recognition. These attributes are known as behavioral biometrics and could be used to verify or identify users implicitly and continuously on smartphones. The authentication systems that have been built based on these behavioral biometric traits are known as active or continuous authentication systems.
Digital authentication
The term digital authentication, also known as electronic authentication or e-authentication, refers to a group of processes where the confidence for user identities is established and presented via electronic methods to an information system. The digital authentication process creates technical challenges because of the need to authenticate individuals or entities remotely over a network.
The American National Institute of Standards and Technology (NIST) has created a generic model for digital authentication that describes the processes that are used to accomplish secure authentication:
Enrollment – an individual applies to a credential service provider (CSP) to initiate the enrollment process. After successfully proving the applicant's identity, the CSP allows the applicant to become a subscriber.
Authentication – After becoming a subscriber, the user receives an authenticator, e.g., a token and credentials, such as a user name. The subscriber is then permitted to perform online transactions within an authenticated session with a relying party, where they must provide proof of possessing one or more authenticators.
Life-cycle maintenance – the CSP is charged with the task of maintaining the user's credential over the course of its lifetime, while the subscriber is responsible for maintaining his or her authenticator(s).
The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.
Product authentication
Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods, such as electronics, music, apparel, and counterfeit medications, have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting.
In their anti-counterfeiting technology guide, the EUIPO Observatory on Infringements of Intellectual Property Rights categorizes the main anti-counterfeiting technologies on the market currently into five main categories: electronic, marking, chemical and physical, mechanical, and technologies for digital media.
Products or their packaging can include a variable QR Code. A QR Code alone is easy to verify but offers a weak level of authentication as it offers no protection against counterfeits unless scan data is analyzed at the system level to detect anomalies. To increase the security level, the QR Code can be combined with a digital watermark or copy detection pattern that are robust to copy attempts and can be authenticated with a smartphone.
A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally, the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature, as an authentication chip can be mechanically attached and read through a connector to the host, e.g., an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified.
Packaging
Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products. Some package constructions are more difficult to copy and some have pilfer indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:
Taggant fingerprinting – uniquely coded microscopic materials that are verified from a database
Encrypted micro-particles – unpredictably placed markings (numbers, layers and colors) not visible to the human eye
Holograms – graphics printed on seals, patches, foils or labels and used at the point of sale for visual verification
Micro-printing – second-line authentication often used on currencies
Serialized barcodes
UV printing – marks only visible under UV light
Track and trace systems – use codes to link products to the database tracking system
Water indicators – become visible when contacted with water
DNA tracking – genes embedded onto labels that can be traced
Color-shifting ink or film – visible marks that switch colors or texture when tilted
Tamper evident seals and tapes – destructible or graphically verifiable at point of sale
2d barcodes – data codes that can be tracked
RFID chips
NFC chips
Information content
Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging – anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:
A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.
A shared secret, such as a passphrase, in the content of the message (see the sketch after this list).
An electronic signature; public-key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.
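As a concrete illustration of the shared-secret factor above — a minimal sketch, not a production protocol; the key and messages are placeholders — a message authentication code lets a receiver holding the same secret verify that a message is authentic and unmodified:

```python
import hmac
import hashlib

SECRET = b"correct horse battery staple"   # shared out-of-band beforehand

def tag(message: bytes) -> str:
    """Compute an HMAC-SHA256 authentication tag for the message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(tag(message), received_tag)

t = tag(b"transfer 100 EUR to account 42")
print(verify(b"transfer 100 EUR to account 42", t))   # True
print(verify(b"transfer 900 EUR to account 42", t))   # False (tampered)
```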
The opposite problem is the detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.
Literacy and literature authentication
In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is – Does one believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process. It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent that the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period.
History and state-of-the-art
Historically, fingerprints have been used as the most authoritative method of authentication, but court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints are easily spoofable, with British Telecom's top computer security official noting that "few" fingerprint readers have not already been tricked by one spoof or another. Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device.
In a computer data context, cryptographic methods have been developed which are not spoofable if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. However, it is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it may call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.
Authorization
The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". While authorization often happens immediately after authentication (e.g., when logging into a computer system), this does not mean authorization presupposes authentication: an anonymous agent could be authorized to a limited action set.
Access control
One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity.
| Technology | Cryptography | null |
48065 | https://en.wikipedia.org/wiki/Polder | Polder | A polder () is a low-lying tract of land that forms an artificial hydrological entity, enclosed by embankments known as dikes. The three types of polder are:
Land reclaimed from a body of water, such as a lake or the seabed
Flood plains separated from the sea or river by a dike
Marshes separated from the surrounding water by a dike and subsequently drained; these are also known as koogs, especially in Germany
The ground level in drained marshes subsides over time. All polders will eventually be below the surrounding water level some or all of the time. Water enters the low-lying polder through infiltration and water pressure of groundwater, or rainfall, or transport of water by rivers and canals. This usually means that the polder has an excess of water, which is pumped out or drained by opening sluices at low tide. Care must be taken not to set the internal water level too low. Polder land made up of peat (former marshland) will sink in relation to its previous level, because of peat decomposing when exposed to oxygen from the air.
Polders are at risk of flooding at all times, and care must be taken to protect the surrounding dikes. Dikes are typically built with locally available materials, and each material has its own risks: sand is prone to collapse owing to saturation by water; dry peat is lighter than water and potentially unable to retain water in very dry seasons. Some animals dig tunnels in the barrier, allowing water to infiltrate the structure; the muskrat is known for this activity and hunted in certain European countries because of it. Polders are most commonly, though not exclusively, found in river deltas, former fenlands, and coastal areas.
Flooding of polders has also been used as a military tactic in the past. One example is the flooding of the polders along the Yser River during World War I. Opening the sluices at high tide and closing them at low tide turned the polders into an inaccessible swamp, which allowed the Allied armies to stop the German army.
The Netherlands has a large area of polders: as much as 20% of the land area has at some point in the past been reclaimed from the sea, thus contributing to the development of the country. IJsselmeer is the most famous polder project of the Netherlands. Some other countries which have polders are Bangladesh, Belgium, Canada and China. Some examples of Dutch polder projects are Beemster, Schermer, Flevopolder and Noordoostpolder.
Etymology
The Dutch word derives, via Middle Dutch and Old Dutch forms, from pol-, a piece of land elevated above its surroundings, with the augmentative suffix -er and an epenthetic -d-. The word has been adopted in thirty-six languages.
Netherlands
The Netherlands is frequently associated with polders, as its engineers became noted for developing techniques to drain wetlands and make them usable for agriculture and other development. This is illustrated by the saying "God created the world, but the Dutch created the Netherlands".
The Dutch have a long history of reclamation of marshes and fenland, resulting in some 3,000 polders nationwide. By 1961, about half of the country's land had been reclaimed from the sea. About half the total surface area of polders in northwest Europe is in the Netherlands. The first embankments in Europe were constructed in Roman times. The first polders were constructed in the 11th century. The oldest extant polder is the Achtermeer polder, from 1533.
As a result of flooding disasters, water boards called waterschap (when situated more inland) or hoogheemraadschap (near the sea, mainly used in the Holland region) were set up to maintain the integrity of the water defences around polders, maintain the waterways inside a polder, and control the various water levels inside and outside the polder. Water boards hold separate elections, levy taxes, and function independently from other government bodies. Their function is basically unchanged even today. As such, they are the oldest democratic institutions in the country. The necessary cooperation among all ranks to maintain polder integrity gave its name to the Dutch version of third-way politics—the Polder Model.
The 1953 flood disaster prompted a new approach to the design of dikes and other water-retaining structures, based on an acceptable probability of overflowing. Risk is defined as the product of probability and consequences. The potential damage in lives, property, and rebuilding costs is compared with the potential cost of water defences. From these calculations follows an acceptable flood risk from the sea at one in 4,000–10,000 years, while it is one in 100–2,500 years for a river flood. This established policy guides the Dutch government to improve flood defences as new data on threat levels become available.
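The comparison is simple expected-value arithmetic; with illustrative numbers (not actual Dutch policy figures), a failure probability $p$ per year and flood damage $C$ give an expected annual loss of

$$\text{Risk} = p \times C, \qquad \text{e.g.}\quad p = \tfrac{1}{4000},\; C = \text{€}20\text{ billion} \;\Rightarrow\; \text{Risk} = \text{€}5\text{ million per year},$$

which can then be weighed against the annualized cost of strengthening the defences.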
Major Dutch polders and the years they were laid dry include Beemster (1609–1612), Schermer (1633–1635), and Haarlemmermeerpolder (1852). Polders created as part of the Zuiderzee Works include Wieringermeerpolder (1930), Noordoostpolder (1942) and Flevopolder (1956–1968).
Examples of polders
Brazil
Several cities on the Paraíba Valley region (in the state of São Paulo) have polders on land claimed from the floodplains around the Paraíba do Sul river.
Bangladesh
Bangladesh has 139 polders, of which 49 are sea-facing, while the rest are along the numerous distributaries of the Ganges-Brahmaputra-Meghna River delta. These were constructed in the 1960s to protect the coast from tidal flooding and reduce salinity incursion. They reduce long-term flooding and waterlogging following storm surges from tropical cyclones. They are also cultivated for agriculture.
Belgium
De Moeren, near Veurne in West Flanders
Polders along the Yser river between Nieuwpoort and Diksmuide
Polders of Muisbroek and Ettenhoven, in Ekeren and Hoevenen
Polder of Stabroek, in Stabroek
Kabeljauwpolder, in Zandvliet
Scheldepolders on the left bank of the Scheldt
Uitkerkse polders, near Blankenberge in West Flanders
Prosperpolder, near Doel, Antwerp and Kieldrecht.
Canada
Tantramar Marshes
Holland Marsh
Pitt Polder Ecological Reserve
Grand Pré, Nova Scotia
Minas Basin
China
The city of Kunshan has over 100 polders.
History
The Jiangnan region, at the Yangtze River Delta, has a long history of constructing polders. Most of these projects were performed between the 10th and 13th centuries. The Chinese government also assisted local communities in constructing dikes for swampland water drainage. The Lijia (里甲) self-monitoring system of 110 households under a lizhang (里长) headman was used for the purposes of service administration and tax collection in the polder, with a liangzhang (粮长, grain chief) responsible for maintaining the water system and a tangzhang (塘长, dike chief) for polder maintenance.
Denmark
Filsø
Kolindsund
Lammefjorden
Finland
Söderfjärden
Munsmo
Two polders near Vassor in Korsholm
France
Marais Poitevin
Les Moëres, adjacent to the Flemish polder De Moeren in Belgium.
Polders de Couesnon near Mont-Saint Michel in Normandy
Germany
In Germany, land reclaimed by diking is called a koog. The German Deichgraf system was similar to the Dutch and is widely known from Theodor Storm's novella The Rider on the White Horse.
Altes Land near Hamburg
Blockland and Hollerland near Bremen
Nordstrand, Germany
Bormerkoog and Meggerkoog near Friedrichstadt
36 koogs in the district of Nordfriesland
12 koogs in the district of Dithmarschen
In southern Germany, the term polder is used for retention basins recreated by opening dikes during river floodplain restoration, a meaning somewhat opposite to that in coastal context.
Guyana
Black Bush Polder, Corentyne, Berbice.
India
Kuttanad Region, Kerala
Ireland
Lough Swilly, County Donegal. Near Inch Island and Newtowncunningham.
Italy
Delta of the river Po, such as Bonifica Valle del Mezzano
Japan
Around the Ariake Sea in Kyushu, mainly in Saga but also in Fukuoka and Kumamoto Prefectures
Lithuania
Rusnė Island
Netherlands
Achtermeer, the oldest polder, from 1533
Alblasserwaard, containing the windmills of Kinderdijk, a World Heritage Site
Alkmaar
Andijk
Anna Paulownapolder
Beemster, a World Heritage Site
Bijlmermeer
Flevopolder, the largest artificial island in the world, last part drained in 1968
's-Gravesloot
Haarlemmermeer, containing Schiphol airport
Krimpenerwaard
Lauwersmeer
Mastenbroek, one of the oldest medieval polders, drained around 1363-1364.
Noordoostpolder
Prins Alexanderpolder
Purmer
Schermer
Watergraafsmeer
Wieringermeer
Wieringerwaard
Wijdewormer
Zestienhoven, home of the Rotterdam The Hague Airport (Overschie), in the city of Rotterdam.
Zuidplaspolder, along with Lammefjord in Denmark the lowest point of the European Union
Poland
Vistula delta near Elbląg and Nowy Dwór Gdański
Warta delta near Kostrzyn nad Odrą
Romania
Danube Delta
Singapore
Parts of Pulau Tekong
Slovenia
The Ankaran/Ancarano Polder, Semedela Polder, and Škocjan Polder, in reclaimed land around Koper/Capodistria.
South Korea
Parts of the coast of Ganghwa Island, adjacent to the river Han in Incheon
Delta of the river Nakdong in Busan
Saemangeum in North Jeolla Province
Spain
Parts of Málaga were built on reclaimed land
United Kingdom
Traeth Mawr
Sunk Island, on the north shore of the Humber east of Hull
Caldicot and Wentloog Levels along the Severn Estuary in South Wales
Parts of The Fens
Branston Island, by the River Witham outside the conventional area of the fens but connected to them.
Parts of the coast of Essex
Some land along the River Plym in Plymouth
Some land around Meathop east of Grange-over-Sands, reclaimed as a side-effect of building a railway embankment
The Somerset Levels and North Somerset Levels
Romney Marsh
Sealand, Flintshire
Humberhead Levels
United States
New Orleans
Sacramento – San Joaquin River Delta
| Physical sciences | Artificial landforms | null |
48130 | https://en.wikipedia.org/wiki/Hare | Hare | Hares and jackrabbits are mammals belonging to the genus Lepus. They are herbivores, and live solitarily or in pairs. They nest in slight depressions called forms, and their young are able to fend for themselves shortly after birth. The genus includes the largest lagomorphs. Most are fast runners with long, powerful hind legs, and large ears that dissipate body heat. Hare species are native to Africa, Eurasia and North America. A hare less than one year old is called a "leveret". A group of hares is called a "husk", a "down", or a "drove".
Members of the Lepus genus are considered true hares, distinguishing them from rabbits which make up the rest of the Leporidae family. However, there are five leporid species with "hare" in their common names which are not considered true hares: the hispid hare (Caprolagus hispidus), and four species known as red rock hares (Pronolagus). Conversely, several Lepus species are called "jackrabbits", but classed as hares rather than rabbits. The pet known as the Belgian hare is a domesticated European rabbit which has been selectively bred to resemble a hare.
Biology
Hares are swift animals and can run up to 80 km/h (50 mph) over short distances. Over longer distances, the European hare (Lepus europaeus) can run up to 56 km/h (35 mph). The five species of jackrabbits found in central and western North America are able to run at 64 km/h (40 mph) over longer distances, and can leap up to 3 m (10 ft) at a time.
Normally a shy animal, the European brown hare changes its behavior in spring, when it can be seen in daytime chasing other hares. This appears to be competition between males (called bucks) to attain dominance for breeding. During this spring frenzy, animals of both sexes can be seen "boxing", one hare striking another with its paws. This behavior gives rise to the idiom "mad as a March hare". Boxing occurs not only in intermale competition, but also when females (called does) fend off males to prevent copulation.
Differences from rabbits
Hares are generally larger than rabbits, with longer ears, and have black markings on their fur. Hares, like all leporids, have jointed, or kinetic, skulls, unique among mammals. They have 48 chromosomes, while rabbits have 44. Hares have not been domesticated, while some rabbits are raised for food and kept as pets.
Some rabbits live and give birth underground in burrows, with many burrows in an area forming a warren. Other rabbits and hares live and give birth in simple forms (shallow depression or flattened nest of grass) above the ground. Hares usually do not live in groups. Young hares are adapted to the lack of physical protection, relative to that afforded by a burrow, by being born fully furred and with eyes open. They are hence precocial, able to fend for themselves soon after birth. By contrast, rabbits are altricial, being born blind and hairless.
Diet
Easily digestible food is processed in the gastrointestinal tract and expelled as regular feces. For nutrients that are harder to extract, hares, like all lagomorphs, ferment fiber in the cecum and expel the mass as cecotropes, which they ingest again, a practice called cecotrophy or refection. The nutrients in the reingested cecotropes are then absorbed in the small intestine.
Classification
The 34 species listed are:
Genus Lepus
Subgenus Macrotolagus
Antelope jackrabbit, Lepus alleni
Subgenus Poecilolagus
Snowshoe hare, Lepus americanus
Subgenus Lepus
Arctic hare, Lepus arcticus
Alaskan hare, Lepus othus
Mountain hare, Lepus timidus
Subgenus Proeulagus
Black jackrabbit, Lepus insularis
Desert hare, Lepus tibetanus
Tolai hare, Lepus tolai
Subgenus Eulagos
Broom hare, Lepus castroviejoi
Yunnan hare, Lepus comus
Korean hare, Lepus coreanus
European hare, Lepus europaeus
Manchurian hare, Lepus mandshuricus
Ethiopian highland hare, Lepus starcki
Subgenus Sabanalagus
Ethiopian hare, Lepus fagani
African savanna hare, Lepus victoriae
Subgenus Indolagus
Hainan hare, Lepus hainanus
Indian hare, Lepus nigricollis
Burmese hare, Lepus peguensis
Subgenus Sinolagus
Chinese hare, Lepus sinensis
Subgenus Tarimolagus
Yarkand hare, Lepus yarkandensis
Incertae sedis
Tamaulipas jackrabbit, Lepus altamirae
Japanese hare, Lepus brachyurus
Black-tailed jackrabbit, Lepus californicus
White-sided jackrabbit, Lepus callotis
Cape hare, Lepus capensis
Corsican hare, Lepus corsicanus
Tehuantepec jackrabbit, Lepus flavigularis
Granada hare, Lepus granatensis
Abyssinian hare, Lepus habessinicus
Woolly hare, Lepus oiostolus
West Sahara hare, Lepus saharae
Scrub hare, Lepus saxatilis
White-tailed jackrabbit, Lepus townsendii
In human culture
Food
Meat
Hares and rabbits are plentiful in many areas, adapt to a wide variety of conditions, and reproduce quickly, so hunting is often less regulated than for other varieties of game. They are a common source of protein worldwide. Because of their extremely low fat content, they are a poor choice as a survival food.
Hares can be prepared in the same manner as rabbits—commonly roasted or parted for breading and frying.
Hasenpfeffer (also spelled hasenfeffer) is a traditional German stew made from marinated rabbit or hare, seasoned with black pepper (German Pfeffer) and other spices. Wine or vinegar is also a prominent ingredient, lending sourness to the recipe.
Lagos stifado (λαγός στιφάδο)—hare stew with pearl onions, vinegar, red wine, and cinnamon—is a much-prized dish enjoyed in Greece and Cyprus and in diaspora communities.
Stewed hare (and in recent times, rabbit) is a staple of Maltese cuisine. The dish was presented to the island's Grandmasters of the Sovereign Military Order of Malta, as well as Renaissance Inquisitors resident on the island, several of whom went on to become pope.
According to Jewish tradition, the hare is among mammals deemed not kosher, and therefore not eaten by observant Jews. Muslims deem coney meat (rabbit, pika, hyrax) to be halal, and in Egypt, hare and rabbit are popular meats for mulukhiyah (jute leaf soup), especially in Cairo.
Blood
The blood of a freshly killed hare can be collected for consumption in a stew or casserole in a cooking process known as jugging. First the entrails are removed from the hare carcass before it is hung in a larder by its hind legs, which causes blood to accumulate in the chest cavity. One method of preserving the blood after draining it from the hare (since the hare is usually hung for a week or more) is to mix it with red wine vinegar to prevent coagulation, and then to store it in a freezer.
Jugged hare, known as civet de lièvre in France, is a whole hare, cut into pieces, marinated, and cooked with red wine and juniper berries in a tall jug that stands in a pan of water. It is traditionally served with the hare's blood (or the blood is added right at the end of the cooking process) and port wine.
Jugged hare is described in an influential 18th-century English cookbook, The Art of Cookery by Hannah Glasse, with a recipe titled, "A Jugged Hare", that begins, "Cut it into little pieces, lard them here and there ..." The recipe goes on to describe cooking the pieces of hare in water in a jug set within a bath of boiling water to cook for three hours. In the 19th century, a myth arose that Glasse's recipe began with the words "First, catch your hare."
Many other British cookbooks from before the middle of the 20th century have recipes for jugged hare. Merle and Reitch have this to say about jugged hare, for example:
The best part of the hare, when roasted, is the loin and the thick part of the hind leg; the other parts are only fit for stewing, hashing, or jugging. It is usual to roast a hare first, and to stew or jug the portion which is not eaten the first day. ...
To Jug A Hare. This mode of cooking a hare is very desirable when there is any doubt as to its age, as an old hare, which would be otherwise uneatable, may be made into an agreeable dish.
In 2006, a survey of 2021 people for the UKTV Food television channel found only 1.6% of the people under 25 recognized jugged hare by name. Seven of ten stated they would refuse to eat jugged hare if it were served at the house of a friend or a relative.
In England, a now rarely served dish is potted hare. The hare meat is cooked, then covered in at least one inch (preferably more) of butter. The butter is a preservative (excludes air); the dish can be stored for up to several months. It is served cold, often on bread or as an appetizer.
Taming
No domesticated hares exist. However, hare remains have been found in a wide range of human settlement sites, some showing signs of use beyond simple hunting and eating:
A European brown hare was buried alongside an older woman in Hungary in the mid-fifth millennium BC.
Twelve mountain hare metapodials were found in a Swedish grave from the third millennium BC.
The Tolai hare (originally described as a Cape hare, since amended according to range) was tamed by northern Chinese people in the Neolithic period (~third millennium BC) and fed millet.
In mythology and folklore
The hare in African folk tales is a trickster; some of the stories about the hare were retold among enslaved Africans in America and are the basis of the Br'er Rabbit stories. The hare appears in English folklore in the saying "as mad as a March hare" and in the legend of the White Hare that alternatively tells of a witch who takes the form of a white hare and goes out looking for prey at night or of the spirit of a broken-hearted maiden who cannot rest and who haunts her unfaithful lover.
The constellation Lepus is taken to represent a hare.
The hare was once regarded as an animal sacred to Aphrodite and Eros because of its high libido. Live hares were often presented as a gift of love. In European witchcraft, hares were either witches' familiars or witches who had transformed themselves into hares. Modern pop mythology associates the hare with the Anglo-Saxon goddess Ēostre as an explanation for the Easter Bunny, but this connection is wholly modern in origin and has no authentic basis.
In European tradition, the hare symbolises the two qualities of swiftness and timidity. The latter once gave the European hare the Linnaean name Lepus timidus, now restricted to the mountain hare. Several ancient fables depict the hare in flight: in one, The Hares and the Frogs, the hares decide to commit mass suicide to relieve the angst of constantly fleeing threats, but reconsider when they startle frogs on the way to throwing themselves into the river. Conversely, in The Tortoise and the Hare, perhaps the best-known of Aesop's Fables, the hare loses a race through being too confident in its swiftness. In Irish folklore, the hare is often associated with the Aos sí or other pagan elements. In these stories, characters who harm hares often suffer dreadful consequences.
In literature and art
In fiction
In art
Three hares
A study in 2004 followed the history and migration of a symbolic image of three hares with conjoined ears. In this image, three hares are seen chasing each other in a circle with their heads near its centre. While each of the animals appears to have two ears, only three ears are depicted. The ears form a triangle at the centre of the circle and each is shared by two of the hares. The image has been traced from Christian churches in the English county of Devon right back along the Silk Road to China, via western and eastern Europe and the Middle East. It was possibly first depicted in the Middle East before being reimported centuries later. Its use is associated with Christian, Jewish, Islamic and Buddhist sites stretching back to about 600 CE.
Place names
The hare has given rise to local place names, as they can often be observed in favoured localities. An example in Scotland is "Murchland", "murchen" being a Scots word for a hare.
| Biology and health sciences | Lagomorphs | null |
48144 | https://en.wikipedia.org/wiki/Microcomputer | Microcomputer | A microcomputer is a small, relatively inexpensive computer having a central processing unit (CPU) made out of a microprocessor. The computer also includes memory and input/output (I/O) circuitry together mounted on a printed circuit board (PCB). Microcomputers became popular in the 1970s and 1980s with the advent of increasingly powerful microprocessors. The predecessors to these computers, mainframes and minicomputers, were comparatively much larger and more expensive (though indeed present-day mainframes such as the IBM System z machines use one or more custom microprocessors as their CPUs). Many microcomputers (when equipped with a keyboard and screen for input and output) are also personal computers (in the generic sense). An early use of the term "personal computer" in 1962 predates microprocessor-based designs. (See "Personal Computer: Computers at Companies" reference below). A "microcomputer" used as an embedded control system may have no human-readable input and output devices. "Personal computer" may be used generically or may denote an IBM PC compatible machine.
The abbreviation "micro" was common during the 1970s and 1980s, but has since fallen out of common usage.
Origins
The term microcomputer came into popular use after the introduction of the minicomputer, although Isaac Asimov used the term in his short story "The Dying Night" as early as 1956 (published in The Magazine of Fantasy and Science Fiction in July that year). Most notably, the microcomputer replaced the many separate components that made up the minicomputer's CPU with one integrated microprocessor chip.
In 1973, the French Institut National de la Recherche Agronomique (INRA) was looking for a computer able to measure agricultural hygrometry. To answer this request, a team of French engineers of the computer technology company R2E, led by its Head of Development, François Gernelle, created the first commercially available microprocessor-based microcomputer, the Micral N. The same year the company filed their patents with the term "Micro-ordinateur", a literal equivalent of "Microcomputer", to designate a solid state machine designed with a microprocessor.
In the US the earliest models such as the Altair 8800 were often sold as kits to be assembled by the user, and came with as little as 256 bytes of RAM, and no input/output devices other than indicator lights and switches, useful as a proof of concept to demonstrate what such a simple device could do.
As microprocessors and semiconductor memory became less expensive, microcomputers grew cheaper and easier to use.
Increasingly inexpensive logic chips such as the 7400 series allowed cheap dedicated circuitry for improved user interfaces such as keyboard input, instead of simply a row of switches to toggle bits one at a time.
Use of audio cassettes for inexpensive data storage replaced manual re-entry of a program every time the device was powered on.
Large cheap arrays of silicon logic gates in the form of read-only memory and EPROMs allowed utility programs and self-booting kernels to be stored within microcomputers. These stored programs could automatically load further more complex software from external storage devices without user intervention, to form an inexpensive turnkey system that does not require a computer expert to understand or to use the device.
Random-access memory became cheap enough to afford dedicating approximately 1–2 kilobytes of memory to a video display controller frame buffer, for a 40x25 or 80x25 text display or blocky color graphics on a common household television. This replaced the slow, complex, and expensive teletypewriter that was previously common as an interface to minicomputers and mainframes.
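The arithmetic behind those figures is easy to check. The following is a minimal sketch (not from the original text) assuming one byte of frame buffer per character cell, with the glyph shapes themselves held in character-generator ROM rather than RAM; designs that added a colour or attribute byte per cell would roughly double the totals.

```python
# Frame buffer size for a character-cell text display: one byte per cell.
# The dot patterns for each character live in character-generator ROM,
# so they do not count against the RAM budget.
def text_framebuffer_bytes(columns: int, rows: int) -> int:
    return columns * rows

for cols, rows in [(40, 25), (80, 25)]:
    print(f"{cols}x{rows} text mode: {text_framebuffer_bytes(cols, rows)} bytes")

# 40x25 -> 1000 bytes and 80x25 -> 2000 bytes, i.e. the roughly
# 1-2 kilobytes of dedicated display RAM described above.
```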
All these improvements in cost and usability resulted in an explosion in their popularity during the late 1970s and early 1980s.
A large number of computer makers packaged microcomputers for use in small business applications. By 1979, many companies such as Cromemco, Processor Technology, IMSAI, North Star Computers, Southwest Technical Products Corporation, Ohio Scientific, Altos Computer Systems, Morrow Designs and others produced systems designed for resourceful end users or consulting firms to deliver business systems such as accounting, database management and word processing to small businesses. This allowed businesses unable to afford leasing of a minicomputer or time-sharing service the opportunity to automate business functions, without (usually) hiring a full-time staff to operate the computers. A representative system of this era would have used an S100 bus, an 8-bit processor such as an Intel 8080 or Zilog Z80, and either CP/M or MP/M operating system.
The increasing availability and power of desktop computers for personal use attracted the attention of more software developers. As the industry matured, the market for personal computers standardized around IBM PC compatibles running DOS, and later Windows. Modern desktop computers, video game consoles, laptops, tablet PCs, and many types of handheld devices, including mobile phones, pocket calculators, and industrial embedded systems, may all be considered examples of microcomputers according to the definition given above.
Colloquial use of the term
By the early 2000s, everyday use of the expression "microcomputer" (and in particular "micro") declined significantly from its peak in the mid-1980s. The term is most commonly associated with the most popular 8-bit home computers (such as the Apple II, ZX Spectrum, Commodore 64, BBC Micro, and TRS-80) and small-business CP/M-based microcomputers.
In colloquial usage, "microcomputer" has been largely supplanted by the term "personal computer" or "PC", which specifies a computer that has been designed to be used by one individual at a time, a term first coined in 1959. IBM first promoted the term "personal computer" to differentiate the IBM PC from CP/M-based microcomputers likewise targeted at the small-business market, and also IBM's own mainframes and minicomputers. However, following its release, the IBM PC itself was widely imitated, as well as the term. The component parts were commonly available to producers and the BIOS was reverse engineered through cleanroom design techniques. IBM PC compatible "clones" became commonplace, and the terms "personal computer", and especially "PC", stuck with the general public, often specifically for a computer compatible with DOS (or nowadays Windows).
Description
Monitors, keyboards and other devices for input and output may be integrated or separate. Computer memory in the form of RAM and at least one other, less volatile, memory storage device are usually combined with the CPU on a system bus in one unit. Other devices that make up a complete microcomputer system include batteries, a power supply unit, a keyboard and various input/output devices used to convey information to and from a human operator (printers, monitors, human interface devices). Microcomputers are designed to serve only one user at a time, although they can often be modified with software or hardware to concurrently serve more than one user. Microcomputers fit well on or under desks or tables, so that they are within easy access of users. Bigger computers like minicomputers, mainframes, and supercomputers take up large cabinets or even dedicated rooms.
A microcomputer comes equipped with at least one type of data storage, usually RAM. Although some microcomputers (particularly early 8-bit home micros) perform tasks using RAM alone, some form of secondary storage is normally desirable. In the early days of home micros, this was often a data cassette deck (in many cases as an external unit). Later, secondary storage (particularly in the form of floppy disk and hard disk drives) were built into the microcomputer case.
History
TTL precursors
Although they did not contain any microprocessors, being built instead around transistor-transistor logic (TTL), Hewlett-Packard calculators as far back as 1968 had various levels of programmability comparable to microcomputers. The HP 9100B (1968) had rudimentary conditional (if) statements, statement line numbers, jump statements (go to), registers that could be used as variables, and primitive subroutines. The programming language resembled assembly language in many ways. Later models incrementally added more features, including the BASIC programming language (HP 9830A in 1971). Some models had tape storage and small printers. However, displays were limited to one line at a time. The HP 9100A was referred to as a personal computer in an advertisement in a 1968 Science magazine, but that advertisement was quickly dropped. HP was reluctant to sell them as "computers" because the perception at that time was that a computer had to be physically large to be powerful, and thus decided to market them as calculators. Additionally, at that time, people were more likely to buy calculators than computers, and purchasing agents preferred the term "calculator" because purchasing a "computer" required additional layers of purchasing authority approvals.
The Datapoint 2200, made by CTC in 1970, was also comparable to microcomputers. While it contains no microprocessor, the instruction set of its custom TTL processor was the basis of the instruction set for the Intel 8008, and for practical purposes the system behaves approximately as if it contains an 8008. This is because Intel was the contractor in charge of developing the Datapoint's CPU, but ultimately CTC rejected the 8008 design because it needed 20 support chips.
Another early system, the Kenbak-1, was released in 1971. Like the Datapoint 2200, it used small-scale integrated transistor–transistor logic instead of a microprocessor. It was marketed as an educational and hobbyist tool, but it was not a commercial success; production ceased shortly after introduction.
Early microcomputers
In late 1972, a French team headed by François Gernelle within a small company, Réalisations & Etudes Electroniques (R2E), developed and patented a computer based on a microprocessor – the Intel 8008 8-bit microprocessor. This Micral-N was marketed in early 1973 as a "Micro-ordinateur" or microcomputer, mainly for scientific and process-control applications. About a hundred Micral-N were installed in the next two years, followed by a new version based on the Intel 8080. Meanwhile, another French team developed the Alvan, a small computer for office automation which found clients in banks and other sectors. The first version was based on LSI chips with an Intel 8008 as peripheral controller (keyboard, monitor and printer), before adopting the Zilog Z80 as main processor.
In late 1972, a Sacramento State University team led by Bill Pentz built the Sac State 8008 computer, able to handle thousands of patients' medical records. The Sac State 8008 was designed with the Intel 8008. It had a full set of hardware and software components: a disk operating system included in a series of programmable read-only memory chips (PROMs); 8 kilobytes of RAM; IBM's Basic Assembly Language (BAL); a hard drive; a color display; a printer output; a 150 bit/s serial interface for connecting to a mainframe; and even the world's first microcomputer front panel.
In early 1973, Sord Computer Corporation (now Toshiba Personal Computer System Corporation) completed the SMP80/08, which used the Intel 8008 microprocessor. The SMP80/08, however, did not have a commercial release. After the first general-purpose microprocessor, the Intel 8080, was announced in April 1974, Sord announced the SMP80/x, the first microcomputer to use the 8080, in May 1974.
Virtually all early microcomputers were essentially boxes with lights and switches; one had to read and understand binary numbers and machine language to program and use them (the Datapoint 2200 was a striking exception, bearing a modern design based on a monitor, keyboard, and tape and disk drives). Of the early "box of switches"-type microcomputers, the MITS Altair 8800 (1975) was arguably the most famous. Most of these simple, early microcomputers were sold as electronic kits—bags full of loose components which the buyer had to solder together before the system could be used.
The period from about 1971 to 1976 is sometimes called the first generation of microcomputers. Many companies such as DEC, National Semiconductor, and Texas Instruments offered their microcomputers for use in terminal control, peripheral device interface control and industrial machine control. There were also machines for engineering development and hobbyist personal use. In 1975, the Processor Technology SOL-20 was designed, which consisted of one board which included all the parts of the computer system. The SOL-20 had built-in EPROM software which eliminated the need for rows of switches and lights. The MITS Altair, mentioned above, played an instrumental role in sparking significant hobbyist interest, which itself eventually led to the founding and success of many well-known personal computer hardware and software companies, such as Microsoft and Apple Computer. Although the Altair itself was only a mild commercial success, it helped spark a huge industry.
Home computers
By 1977, the introduction of the second generation of microcomputers as consumer goods, known as home computers, made them considerably easier to use than their predecessors, whose operation often demanded thorough familiarity with practical electronics. The ability to connect to a monitor (screen) or TV set allowed visual manipulation of text and numbers. The BASIC language, which was easier to learn and use than raw machine language, became a standard feature. These features were already common in minicomputers, with which many hobbyists and early producers were familiar.
In 1979, the launch of the VisiCalc spreadsheet (initially for the Apple II) first turned the microcomputer from a hobby for computer enthusiasts into a business tool. After the 1981 release by IBM of its IBM PC, the term personal computer became generally used for microcomputers compatible with the IBM PC architecture (IBM PC–compatible).
| Technology | Computer hardware | null |
48146 | https://en.wikipedia.org/wiki/Fossil%20fuel | Fossil fuel | A fossil fuel is a carbon compound- or hydrocarbon-containing material formed naturally in the Earth's crust from the buried remains of prehistoric organisms (animals, plants or plankton), a process that occurs within geological formations. Reservoirs of such compound mixtures, such as coal, petroleum and natural gas, can be extracted and burnt as fuel for human consumption to provide energy for direct use (such as for cooking, heating or lighting), to power heat engines (such as steam or internal combustion engines) that can propel vehicles, or to generate electricity via steam turbine generators. Some fossil fuels are further refined into derivatives such as kerosene, gasoline and diesel, or converted into petrochemicals such as polyolefins (plastics), aromatics and synthetic resins.
The origin of fossil fuels is the anaerobic decomposition of buried dead organisms. The conversion from these organic materials to high-carbon fossil fuels typically requires a geological process of millions of years. Due to the length of time it takes nature to form them, fossil fuels are considered non-renewable resources.
In 2022, over 80% of primary energy consumption in the world and over 60% of its electricity supply were from fossil fuels. The large-scale burning of fossil fuels causes serious environmental damage. Over 70% of the greenhouse gas emissions due to human activity in 2022 was carbon dioxide (CO2) released from burning fossil fuels. Natural carbon cycle processes on Earth, mostly absorption by the ocean, can remove only a small part of this, and terrestrial vegetation loss due to deforestation, land degradation and desertification further compounds this deficiency. Therefore, there is a net increase of many billion tonnes of atmospheric CO2 per year. Although methane leaks are significant, the burning of fossil fuels is the main source of greenhouse gas emissions causing global warming and ocean acidification. Additionally, most air pollution deaths are due to fossil fuel particulates and noxious gases, and it is estimated that this costs over 3% of the global gross domestic product and that fossil fuel phase-out will save millions of lives each year.
Recognition of the climate crisis, pollution and other negative impacts caused by fossil fuels has led to a widespread policy transition and activist movement focused on ending their use in favor of renewable and sustainable energy. Because the fossil-fuel industry is so heavily integrated in the global economy and heavily subsidized, this transition is expected to have significant economic impacts. Many stakeholders argue that this change needs to be a just transition and create policy that addresses the societal burdens created by the stranded assets of the fossil fuel industry. International policy, in the form of United Nations' sustainable development goals for affordable and clean energy and climate action, as well as the Paris Climate Agreement, is designed to facilitate this transition at a global level. In 2021, the International Energy Agency concluded that no new fossil fuel extraction projects could be opened if the global economy and society want to avoid the worst impacts of climate change and meet international goals for climate change mitigation.
Origin
The theory that fossil fuels formed from the fossilized remains of dead plants by exposure to heat and pressure in Earth's crust over millions of years was first introduced by Andreas Libavius "in his 1597 Alchemia [Alchymia]" and later by Mikhail Lomonosov "as early as 1757 and certainly by 1763". The first recorded use of the term "fossil fuel" occurs in the work of the German chemist Caspar Neumann, in English translation in 1759. The Oxford English Dictionary notes that in the phrase "fossil fuel" the adjective "fossil" means "[o]btained by digging; found buried in the earth", which dates to at least 1652, before the English noun "fossil" came to refer primarily to long-dead organisms in the early 18th century.
Aquatic phytoplankton and zooplankton that died and sedimented in large quantities under anoxic conditions millions of years ago began forming petroleum and natural gas as a result of anaerobic decomposition. Over geological time this organic matter, mixed with mud, became buried under further heavy layers of inorganic sediment. The resulting high temperature and pressure caused the organic matter to chemically alter, first into a waxy material known as kerogen, which is found in oil shales, and then with more heat into liquid and gaseous hydrocarbons in a process known as catagenesis. Despite these heat-driven transformations, the energy released in combustion is still photosynthetic in origin.
Terrestrial plants tended to form coal and methane. Many of the coal fields date to the Carboniferous period of Earth's history. Terrestrial plants also form type III kerogen, a source of natural gas. Although fossil fuels are continually formed by natural processes, they are classified as non-renewable resources because they take millions of years to form and known viable reserves are being depleted much faster than new ones are generated.
Importance
Fossil fuels have been important to human development because they can be readily burned in the open atmosphere to produce heat. The use of peat as a domestic fuel predates recorded history. Coal was burned in some early furnaces for the smelting of metal ore. Semi-solid hydrocarbons from oil seeps were also burned in ancient times, but mostly for waterproofing and embalming.
Commercial exploitation of petroleum began in the 19th century.
Natural gas, once flared-off as an unneeded byproduct of petroleum production, is now considered a very valuable resource. Natural gas deposits are also the main source of helium.
Heavy crude oil, which is much more viscous than conventional crude oil, and oil sands, where bitumen is found mixed with sand and clay, began to become more important as sources of fossil fuel in the early 2000s. Oil shale and similar materials are sedimentary rocks containing kerogen, a complex mixture of high-molecular weight organic compounds, which yield synthetic crude oil when heated (pyrolyzed). With additional processing, they can be employed instead of other established fossil fuels. During the 2010s and 2020s there was disinvestment from exploitation of such resources due to their high carbon cost relative to more easily-processed reserves.
Prior to the latter half of the 18th century, windmills and watermills provided the energy needed for work such as milling flour, sawing wood or pumping water, while burning wood or peat provided domestic heat. The wide-scale use of fossil fuels, coal at first and petroleum later, in steam engines enabled the Industrial Revolution. At the same time, gas lights using natural gas or coal gas were coming into wide use. The invention of the internal combustion engine and its use in automobiles and trucks greatly increased the demand for gasoline and diesel oil, both made from fossil fuels. Other forms of transportation, railways and aircraft, also require fossil fuels. The other major uses for fossil fuels are in generating electricity and as feedstock for the petrochemical industry. Tar, a leftover of petroleum extraction, is used in the construction of roads.
The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. The development of synthetic nitrogen fertilizer has significantly supported global population growth; it has been estimated that almost half of the Earth's population are currently fed as a result of synthetic nitrogen fertilizer use. According to the head of a fertilizer commodity price agency, "50% of the world's food relies on fertilisers."
Environmental effects
The burning of fossil fuels has a number of negative externalities: harmful environmental impacts where the effects extend beyond the people using the fuel. These effects vary between different fuels. All fossil fuels release CO2 when they burn, thus accelerating climate change. Burning coal, and to a lesser extent oil and its derivatives, contributes to atmospheric particulate matter, smog and acid rain.
Air pollution from fossil fuels in 2018 has been estimated to cost US$2.9 trillion, or 3.3% of the global gross domestic product (GDP).
Climate change is largely driven by the release of greenhouse gases like CO2, and the burning of fossil fuels is the main source of these emissions. In most parts of the world climate change is negatively impacting ecosystems. This includes contributing to the extinction of species and reducing people's ability to produce food, thus adding to the problem of world hunger. Continued rises in global temperatures will lead to further adverse effects on both ecosystems and people; the World Health Organization has said that climate change is the greatest threat to human health in the 21st century.
Combustion of fossil fuels generates sulfuric and nitric acids, which fall to Earth as acid rain, impacting both natural areas and the built environment. Monuments and sculptures made from marble and limestone are particularly vulnerable, as the acids dissolve calcium carbonate.
Fossil fuels also contain radioactive materials, mainly uranium and thorium, which are released into the atmosphere. In 2000, about 12,000 tonnes of thorium and 5,000 tonnes of uranium were released worldwide from burning coal. It is estimated that during 1982, US coal burning released 155 times as much radioactivity into the atmosphere as the Three Mile Island accident.
Burning coal also generates large amounts of bottom ash and fly ash. These materials are used in a wide variety of applications (see Fly ash reuse), utilizing, for example, about 40% of the United States production.
In addition to the effects that result from burning, the harvesting, processing, and distribution of fossil fuels also have environmental effects. Coal mining methods, particularly mountaintop removal and strip mining, have negative environmental impacts, and offshore oil drilling poses a hazard to aquatic organisms. Fossil fuel wells can contribute to methane release via fugitive gas emissions. Oil refineries also have negative environmental impacts, including air and water pollution. Coal is sometimes transported by diesel-powered locomotives, while crude oil is typically transported by tanker ships, requiring the combustion of additional fossil fuels.
A variety of mitigating efforts have arisen to counter the negative effects of fossil fuels. This includes a movement to use alternative energy sources, such as renewable energy. Environmental regulation uses a variety of approaches to limit these emissions; for example, rules against releasing waste products like fly ash into the atmosphere.
In December 2020, the United Nations released a report saying that despite the need to reduce greenhouse emissions, various governments are "doubling down" on fossil fuels, in some cases diverting over 50% of their COVID-19 recovery stimulus funding to fossil fuel production rather than to alternative energy. The UN secretary general António Guterres declared that "Humanity is waging war on nature. This is suicidal. Nature always strikes back, and it is already doing so with growing force and fury." He also claimed there is still cause for hope, anticipating the US plan to join other large emitters like China and the EU in adopting targets to reach net zero emissions by 2050.
Inflation effects
Fossilflation is a term that describes the impact of fossil fuels on inflation.
According to Vox in August 2022, "Economists have pointed to energy prices as the main reason for high inflation," noting that "energy prices indirectly affect virtually every part of the economy". Sectors that raise prices significantly as a result of higher fossil fuel prices include transportation, food, and shipping.
History
Mark Zandi of Moody's says that fossil fuel prices have driven every big episode of inflation since WWII.
The economic impact of the Russian invasion of Ukraine in 2022 was a major recent example of fossil fuels causing inflation. Some economists, including Isabel Schnabel, believe that dependence on fossil fuels is the main driver of the 2021–2022 inflation spike.
Efforts to combat fossilflation
Gernot Wagner argues that commodities are undesirable energy sources because they are susceptible to volatile price swings that technologies like renewable energy are not. He also argues that technologies improve and get relatively cheaper over time. Coming out of the COVID-19 pandemic, some argued for the possibility of a base effect phenomenon due to cheaper-than-normal prices, such as for oil, at the onset of the pandemic, followed by above-average prices which exacerbated the perceived inflation.
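A minimal numerical sketch of the base effect (the prices below are illustrative placeholders, not figures from any cited source): a year-over-year comparison against a depressed pandemic-era base can report high inflation even when prices have barely moved relative to their pre-pandemic level.

```python
# Hypothetical price levels chosen only to illustrate the base effect.
pre_pandemic = 100.0   # price index before the pandemic
pandemic_low = 70.0    # depressed price at the onset of the pandemic
recovery = 105.0       # price one year after the low

yoy = (recovery / pandemic_low - 1) * 100       # measured against the depressed base
two_year = (recovery / pre_pandemic - 1) * 100  # measured against the normal base

print(f"year-over-year inflation from the pandemic low: {yoy:.0f}%")   # 50%
print(f"total change versus the pre-pandemic level: {two_year:.0f}%")  # 5%
```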
Inflation Reduction Act
While not expected to provide much short-term relief, the Inflation Reduction Act seeks to make the United States less dependent on fossil fuels and their ability to cause inflation in the economy. Moody's estimates that by 2030, the bill could reduce the typical American household's spending on energy by more than $300 each year, in 2022 dollars.
Illness and deaths
Environmental pollution from fossil fuels impacts humans because particulates and other air pollution from fossil fuel combustion may cause illness and death when inhaled. These health effects include premature death, acute respiratory illness, aggravated asthma, chronic bronchitis and decreased lung function. The poor, undernourished, very young and very old, and people with preexisting respiratory disease and other ill health are more at risk. Global air pollution deaths due to fossil fuels have been estimated at over 8 million people in 2018 (nearly 1 in 5 deaths worldwide), at 10.2 million in 2019, and at 5.13 million excess deaths from ambient air pollution from fossil fuel use in 2023.
While all energy sources inherently have adverse effects, the data show that fossil fuels cause the highest levels of greenhouse gas emissions and are the most dangerous for human health. In contrast, modern renewable energy sources appear to be safer for human health and cleaner. The death rates from accidents and air pollution in the EU are as follows per terawatt-hour (TWh):
As the data shows, coal, oil, natural gas, and biomass cause higher death rates and higher levels of greenhouse gas emissions than hydropower, nuclear energy, wind, and solar power. Scientists propose that 1.8 million lives have been saved by replacing fossil fuel sources with nuclear power.
Phase-out
Just transition
Divestment
Industrial sector
In 2019, Saudi Aramco was listed and it reached a US$2 trillion valuation on its second day of trading, after the world's largest initial public offering.
Subsidies
Lobbying activities
| Technology | Energy | null |
48239 | https://en.wikipedia.org/wiki/Celestial%20sphere | Celestial sphere | In astronomy and navigation, the celestial sphere is an abstract sphere that has an arbitrarily large radius and is concentric to Earth. All objects in the sky can be conceived as being projected upon the inner surface of the celestial sphere, which may be centered on Earth or the observer. If centered on the observer, half of the sphere would resemble a hemispherical screen over the observing location.
The celestial sphere is a conceptual tool used in spherical astronomy to specify the position of an object in the sky without consideration of its linear distance from the observer. The celestial equator divides the celestial sphere into northern and southern hemispheres.
Description
Because astronomical objects are at such remote distances, casual observation of the sky offers no information on their actual distances. All celestial objects seem equally far away, as if fixed onto the inside of a sphere with a large but unknown radius, which appears to rotate westward overhead; meanwhile, Earth underfoot seems to remain still. For purposes of spherical astronomy, which is concerned only with the directions to celestial objects, it makes no difference if this is actually the case or if it is Earth that is rotating while the celestial sphere is stationary.
The celestial sphere can be considered to be infinite in radius. This means any point within it, including that occupied by the observer, can be considered the center. It also means that all parallel lines, be they millimetres apart or across the Solar System from each other, will seem to intersect the sphere at a single point, analogous to the vanishing point of graphical perspective. All parallel planes will seem to intersect the sphere in a coincident great circle (a "vanishing circle").
Conversely, observers looking toward the same point on an infinite-radius celestial sphere will be looking along parallel lines, and observers looking toward the same great circle, along parallel planes. On an infinite-radius celestial sphere, all observers see the same things in the same direction.
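A small numerical sketch (not from the source; the gaze direction and observer separation are arbitrary choices) illustrates this vanishing-point behaviour: project two parallel lines of sight onto spheres of increasing radius, and the hit points converge toward a single point.

```python
import numpy as np

d = np.array([1.0, 2.0, 3.0])
d = d / np.linalg.norm(d)               # shared gaze direction (unit vector)
p_a = np.zeros(3)                       # observer A at the origin
p_b = np.array([12_742.0, 0.0, 0.0])    # observer B about an Earth diameter away, in km

def sphere_hit(p, d, R):
    # Intersect the ray p + t*d (t >= 0) with a sphere of radius R about the
    # origin, and return the unit vector toward the intersection point.
    b = np.dot(p, d)
    t = -b + np.sqrt(b * b - np.dot(p, p) + R * R)
    x = p + t * d
    return x / np.linalg.norm(x)

for R in [1e5, 1e7, 1e9, 1e11]:  # sphere radius in km
    cosang = np.clip(np.dot(sphere_hit(p_a, d, R), sphere_hit(p_b, d, R)), -1.0, 1.0)
    print(f"R = {R:.0e} km: separation = {np.degrees(np.arccos(cosang)):.6f} deg")
# The separation shrinks roughly in proportion to 1/R: in the limit of an
# infinite radius, both observers see their parallel lines meet at one point.
```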
For some objects, this is over-simplified. Objects which are relatively near to the observer (for instance, the Moon) will seem to change position against the distant celestial sphere if the observer moves far enough, say, from one side of planet Earth to the other. This effect, known as parallax, can be represented as a small offset from a mean position. The celestial sphere can be considered to be centered at the Earth's center, the Sun's center, or any other convenient location, and offsets from positions referred to these centers can be calculated.
In this way, astronomers can predict geocentric or heliocentric positions of objects on the celestial sphere, without the need to calculate the individual geometry of any particular observer, and the utility of the celestial sphere is maintained. Individual observers can work out their own small offsets from the mean positions, if necessary. In many cases in astronomy, the offsets are insignificant.
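For the Moon, the rough size of this offset follows from Earth's radius and the Earth-Moon distance. The sketch below uses approximate values and is only an order-of-magnitude check, not an almanac-grade computation.

```python
import math

earth_radius_km = 6378.0       # equatorial radius, approximate
moon_distance_km = 384_400.0   # mean Earth-Moon distance, approximate

# Maximum shift of the Moon's apparent position between an observer at
# Earth's centre and one on the surface (the "horizontal parallax").
parallax = math.asin(earth_radius_km / moon_distance_km)
print(f"lunar horizontal parallax ~ {math.degrees(parallax) * 60:.0f} arcminutes")
# About 57 arcminutes for the nearby Moon; for stars the same geometry gives
# offsets far too small to matter, which is why mean positions usually suffice.
```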
Determining location of objects
The celestial sphere can thus be thought of as a kind of astronomical shorthand, and is applied very frequently by astronomers. For instance, the Astronomical Almanac for 2010 lists the apparent geocentric position of the Moon on January 1 at 00:00:00.00 Terrestrial Time, in equatorial coordinates, as right ascension 6h 57m 48.86s, declination +23° 30' 05.5". Implied in this position is that it is as projected onto the celestial sphere; any observer at any location looking in that direction would see the "geocentric Moon" in the same place against the stars. For many rough uses (e.g. calculating an approximate phase of the Moon), this position, as seen from the Earth's center, is adequate.
For applications requiring precision (e.g. calculating the shadow path of an eclipse), the Almanac gives formulae and methods for calculating the topocentric coordinates, that is, as seen from a particular place on the Earth's surface, based on the geocentric position. This greatly abbreviates the amount of detail necessary in such almanacs, as each observer can handle their own specific circumstances.
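As a sketch of the kind of reduction an individual observer might perform, the quoted geocentric right ascension and declination can be converted to a local altitude and azimuth with the standard hour-angle formulas. The observer latitude and local sidereal time below are assumed purely for illustration, and corrections the Almanac describes (such as the lunar parallax discussed above, or atmospheric refraction) are omitted.

```python
import math

def altaz(ra_hours, dec_deg, lat_deg, lst_hours):
    # Hour angle: how far west of the observer's meridian the object lies.
    H = math.radians((lst_hours - ra_hours) * 15.0)  # 15 degrees per hour
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    alt = math.asin(math.sin(dec) * math.sin(lat)
                    + math.cos(dec) * math.cos(lat) * math.cos(H))
    az = math.atan2(-math.cos(dec) * math.sin(H),                   # east component
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(H))  # north component
    return math.degrees(alt), math.degrees(az) % 360.0

# The Moon's geocentric position quoted above:
ra = 6 + 57 / 60 + 48.86 / 3600   # 6h 57m 48.86s, in hours
dec = 23 + 30 / 60 + 5.5 / 3600   # +23 deg 30' 05.5", in degrees
print(altaz(ra, dec, lat_deg=40.0, lst_hours=7.0))  # assumed: 40 deg N, LST 7h
```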
Greek history on celestial spheres
Celestial spheres (or celestial orbs) were initially envisioned as perfect and divine entities by Greek astronomers such as Aristotle. He composed a set of principles called Aristotelian physics that outlined the natural order and structure of the world. Like other Greek astronomers, Aristotle saw the "...celestial sphere as the frame of reference for their geometric theories of the motions of the heavenly bodies". With his adoption of Eudoxus of Cnidus' theory, Aristotle described celestial bodies within the celestial sphere as perfect, pure, and composed of quintessence (the fifth element, held by Aristotle to be divine and pure). Aristotle deemed the Sun, Moon, planets and the fixed stars to be perfectly concentric spheres in a superlunary region above the sublunary sphere. Aristotle asserted that these bodies (in the superlunary region) are perfect and cannot be corrupted by any of the classical elements: fire, water, air, and earth. Corruptible elements were only contained in the sublunary region, and incorruptible elements were in the superlunary region of Aristotle's geocentric model. Aristotle held that celestial orbs must exhibit celestial motion (a perfect circular motion) that goes on for eternity. He also argued that their behavior and properties follow strictly the principle of natural place, whereby the quintessential element moves freely of divine will, while the other elements (fire, air, water and earth) are corruptible, subject to change and imperfection. Aristotle's key concepts rely on the nature of the five elements distinguishing the Earth from the Heavens in the astronomical reality, taking up Eudoxus's model of separate spheres.
The models of Aristotle and Eudoxus (approximately 395 BC to 337 BC) differed in several respects while sharing similar properties. They claimed different counts of spheres in the heavens: according to Eudoxus, there were only 27 spheres, while there are 55 in Aristotle's model. Eudoxus attempted to construct his model mathematically in a treatise known as On Speeds () and asserted that the shape of the hippopede, or lemniscate, was associated with planetary retrogression. Aristotle emphasized that the speed of the celestial orbs is unchanging, like the heavens, while Eudoxus emphasized that the orbs are of a perfect geometrical shape. Eudoxus's spheres would produce undesirable motions in the lower regions of the planets, so Aristotle introduced unrollers between each set of active spheres to counteract the motions of the outer set; otherwise, the outer motions would be transferred to the outer planets. Aristotle would later observe "...the motions of the planets by using the combinations of nested spheres and circular motions in creative ways, but further observations kept undoing their work".
Aside from Aristotle and Eudoxus, Empedocles gave an explanation that the motion of the heavens, moving about the Earth at divine (relatively high) speed, holds the Earth in a stationary position, the circular motion preventing the downward movement from natural causes. Aristotle criticized Empedocles's model, arguing that all heavy objects go towards the Earth and not the whirl itself coming to Earth, and ridiculed the statement as extremely absurd. Anything that defied the motion of natural place and the unchanging heavens (including the celestial spheres) was immediately criticized by Aristotle.
Celestial coordinate systems
These concepts are important for understanding celestial coordinate systems, frameworks for measuring the positions of objects in the sky. Certain reference lines and planes on Earth, when projected onto the celestial sphere, form the bases of the reference systems. These include the Earth's equator, axis, and orbit. At their intersections with the celestial sphere, these form the celestial equator, the north and south celestial poles, and the ecliptic, respectively. As the celestial sphere is considered to be arbitrarily large or infinite in radius, all observers see the celestial equator, celestial poles, and ecliptic at the same place against the background stars.
From these bases, directions toward objects in the sky can be quantified by constructing celestial coordinate systems. Similar to geographic longitude and latitude, the equatorial coordinate system specifies positions relative to the celestial equator and celestial poles, using right ascension and declination. The ecliptic coordinate system specifies positions relative to the ecliptic (Earth's orbit), using ecliptic longitude and latitude. Besides the equatorial and ecliptic systems, some other celestial coordinate systems, like the galactic coordinate system, are more appropriate for particular purposes.
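As a concrete illustration (a minimal sketch, not from the source), the equatorial and ecliptic systems share the direction of the vernal equinox as a common x-axis and differ by a rotation about it through the obliquity of the ecliptic, taken here as approximately 23.44 degrees.

```python
import numpy as np

EPS = np.radians(23.44)  # obliquity of the ecliptic, approximate

def equatorial_to_ecliptic(ra_deg, dec_deg):
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    v = np.array([np.cos(dec) * np.cos(ra),   # unit vector toward the object,
                  np.cos(dec) * np.sin(ra),   # x toward the vernal equinox,
                  np.sin(dec)])               # z toward the north celestial pole
    rot = np.array([[1.0, 0.0, 0.0],          # rotate about the shared x-axis
                    [0.0, np.cos(EPS), np.sin(EPS)],
                    [0.0, -np.sin(EPS), np.cos(EPS)]])
    x, y, z = rot @ v
    return np.degrees(np.arctan2(y, x)) % 360.0, np.degrees(np.arcsin(z))

# Check: the north celestial pole (dec +90) sits at ecliptic longitude 90
# and ecliptic latitude 90 - 23.44 = 66.56 degrees.
print(equatorial_to_ecliptic(0.0, 90.0))
```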
History
The ancient Greeks assumed the literal truth of stars attached to a celestial sphere, revolving about the Earth in one day, and a fixed Earth.
The Eudoxan planetary model, on which the Aristotelian and Ptolemaic models were based, was the first geometric explanation for the "wandering" of the classical planets. The outermost of these "crystal spheres" was thought to carry the fixed stars. Eudoxus used 27 concentric spherical solids to answer Plato's challenge: "By the assumption of what uniform and orderly motions can the apparent motions of the planets be accounted for?"
Anaxagoras in the mid 5th century BC was the first known philosopher to suggest that the stars were "fiery stones" too far away for their heat to be felt. Similar ideas were expressed by Aristarchus of Samos. However, they did not enter mainstream European and Islamic astronomy of the late ancient and medieval period.
Copernican heliocentrism did away with the planetary spheres, but it did not necessarily preclude the existence of a sphere for the fixed stars. The first astronomer of the European Renaissance to suggest that the stars were distant suns was Giordano Bruno in his De l'infinito universo et mondi (1584). This idea was among the charges, albeit not in a prominent position, brought against him by the Inquisition.
The idea became mainstream in the later 17th century, especially following the publication of Conversations on the Plurality of Worlds by Bernard Le Bovier de Fontenelle (1686), and by the early 18th century it was the default working assumption in stellar astronomy.
Star globe
A celestial sphere can also refer to a physical model of the celestial sphere or celestial globe.
Such globes map the constellations on the outside of a sphere, resulting in a mirror image of the constellations as seen from Earth. The oldest surviving example of such an artifact is the globe of the Farnese Atlas sculpture, a 2nd-century copy of an older (Hellenistic period, ca. 120 BCE) work.
Bodies other than Earth
Observers on other worlds would, of course, see objects in that sky under much the same conditions – as if projected onto a dome. Coordinate systems based on the sky of that world could be constructed. These could be based on the equivalent "ecliptic", poles and equator, although the reasons for building a system that way are as much historic as technical.
| Physical sciences | Celestial sphere | null |
48317 | https://en.wikipedia.org/wiki/Lagomorpha | Lagomorpha | The lagomorphs () are the members of the taxonomic order Lagomorpha, of which there are two living families: the Leporidae (rabbits and hares) and the Ochotonidae (pikas). There are 110 recent species of lagomorph, of which only 109 species in twelve genera are extant, including ten genera of rabbits (42 species), one genus of hare (33 species), and one genus of pika (34 species). The name of the order is derived from the Ancient Greek lagos (λαγώς, "hare") + morphē (μορφή, "form").
Taxonomy and evolutionary history
Other names used for this order, now considered synonymous, include: Duplicidentata (Illiger, 1811); Leporida (Averianov, 1999); Neolagomorpha (Averianov, 1999); Ochotonida (Averianov, 1999); and Palarodentia (Haeckel, 1895; Lilian, 2016).
The evolutionary history of the lagomorphs is still not well understood. In the late 20th century, it was generally agreed that Eurymylus, which lived in eastern Asia and dates back to the late Paleocene or early Eocene, was an ancestor of the lagomorphs. Examination of the fossil evidence in the 21st century suggested that the lagomorphs may have instead descended from mimotonids, mammals present in Asia during the Paleogene with similar body size and dental structure to early European rabbits such as Megalagus turgidus, while Eurymylus was more closely related to rodents (although not a direct ancestor). The leporids first appeared in the late Eocene and rapidly spread throughout the Northern Hemisphere; they show a trend towards increasingly long hind limbs as the modern leaping gait developed. The pikas appeared somewhat later in the Oligocene of eastern Asia.
Lagomorphs were certainly more diverse in the past than in the present, with around 75 genera and over 230 species represented in the fossil record, and with many more species coexisting in a single biome. This is evidence that lagomorph lineages are declining.
A 2008 study suggests an Indian origin for the order, having possibly evolved in isolation when India was an island continent in the Paleocene.
Characteristics
Lagomorphs are similar to other mammals in that they all have hair, four limbs (i.e., they are tetrapods), and mammary glands and are endotherms. Lagomorphs possess a moderately fused postorbital process to the cranium, unlike other small mammals. They differ in that they have a mixture of "basal" and "derived" physical traits.
Differences between lagomorphs and other mammals
Lagomorphs and rodents form the clade or grandorder Glires. Despite the evolutionary relationship between lagomorphs and rodents, the two orders have some major differences.
Lagomorphs have four incisors in the upper jaw (smaller peg teeth behind larger incisors), whereas rodents only have two. They are similar to rodents in that their incisors grow continuously, thus necessitating constant chewing on fibrous food to prevent the teeth from growing too long. In addition, all lagomorph teeth grow continuously, while for most rodents, only the incisors grow continuously. Lagomorph and rodent incisors are structured differently. Lagomorphs have more cheek teeth than rodents. Both have a large diastema.
Lagomorphs are almost strictly herbivorous, unlike rodents, many of which will eat both meat and vegetable matter. Lagomorphs have no paw pads; instead, the bottoms of their paws are entirely covered with fur, a trait they share with red pandas. Similar to the rodents, bats, and some mammalian insectivores, they have a smooth-surfaced cerebrum. Lagomorphs are unusual among terrestrial mammals in that the females are larger than males.
Differences between families of lagomorphs
Rabbits and hares move by jumping, pushing off with their strong hind legs and using their forelimbs to soften the impact on landing. Pikas lack certain skeletal modifications present in leporids, such as a highly arched skull, an upright posture of the head, strong hind limbs and pelvic girdle, and long limbs. Also, pikas have a short nasal region and entirely lack a supraorbital foramen, while leporids have prominent supraorbital foramina and nasal regions.
Pikas
Pikas, also known as conies, are entirely represented by the family Ochotonidae and are small mammals native to mountainous regions of western North America and Central Asia. They are mostly about long and have greyish-brown, silky fur, small rounded ears, and almost no tail. Their four legs are nearly equal in length. Some species live in scree, making their homes in the crevices between broken rocks, while others construct burrows in upland areas. The rock-dwelling species are typically long-lived and solitary, having one or two small litters each year contributing to stable populations. The burrowing species, in contrast, are short-lived, gregarious and have multiple large litters during the year. These species tend to have large swings in population size. The gestation period of the pika is around one month long, and the newborns are altricial (eyes and ears closed, no fur). The social behaviour of the two groups also differs: the rock dwellers aggressively maintain scent-marked territories, while the burrowers live in family groups, interact vocally with each other, and defend a mutual territory. Pikas are diurnal and are active early and late in the day during hot weather. They feed on all sorts of plant material. As they do not hibernate, they make "haypiles" of dried vegetation which they collect and carry back to their homes to store for use during winter.
Hares
Hares, members of genus Lepus of family Leporidae, are medium-sized mammals native to Europe, Asia, Africa, and North America. North American jackrabbits are actually hares. Species vary in size from in length and have long powerful back legs, and ears up to in length. Although usually greyish-brown, some species turn white in the winter. They are solitary animals. Newborns are precocial (eyes and ears open, fully furred). Several litters are born during the year in a form (a nest above ground, usually under a bush). They are preyed upon by large mammalian carnivores and birds of prey.
Rabbits
Rabbits, members of the Leporidae family (excluding Lepus (hares)) are generally much smaller than hares and include the rock hares and the hispid hare. They are native to Europe, parts of Africa, Central and Southern Asia, North America and much of South America. They inhabit both grassland and arid regions. They vary in size from and have long, powerful hind legs, shorter forelegs and a tiny tail. The colour is some shade of brown, buff or grey and there is one black species and two striped ones. Domestic rabbits come in a wider variety of colours. Newborn rabbits are altricial (eyes and ears closed, no fur). Although most species live in burrows, the cottontails and hispid hares have forms (nests above ground, usually under a bush). Most of the burrowing species are colonial, and feed together in small groups. Rabbits play an important part in the terrestrial food chain, eating a wide range of forbs, grasses, and herbs, and being part of the staple diet of many carnivorous species. Domestic rabbits can be litter box trained, and—assuming they are given sufficient room to run and a good diet—can live long lives as house pets.
Distribution
Lagomorphs are widespread around the world and inhabit every continent except Antarctica. However, they are not found in most of the southern cone of South America, in the West Indies, Indonesia or Madagascar, nor on many islands. Although they are not native to Australia, humans have introduced them there and they have successfully colonized many parts of the country and caused disruption to native species.
Biology
Digestion
Easily digestible food is processed in the gastrointestinal tract and expelled as regular feces. But in order to get nutrients out of hard-to-digest fiber, lagomorphs ferment fiber in the cecum (in the GI tract) and then expel the contents as cecotropes, which are reingested (cecotrophy). Nutrients from the cecotropes are then absorbed in the small intestine.
Like rodents, they are not able to vomit.
Birth and early life
Many lagomorphs breed several times a year and produce large litters. This is particularly the case in species that live in underground, protective environments, such as burrows. The young of rabbits and pikas (called kits) are born after a short gestation period and the mother can become pregnant again almost immediately after giving birth. The mothers are able to leave these young safely and go off to feed, returning at intervals to feed them with their unusually rich milk. In some species, the mother only visits and feeds the litter once a day but the young grow rapidly and are usually weaned within a month.
Hare young are called leverets. Adults prevent predators from tracking down their litter by scent: they approach and depart from the nesting site in a series of immense bounds, sometimes moving at right angles to their previous direction. Each litter of hares has a small number of young, born after a longer gestation period.
Sociality and safety
Many species of lagomorphs, particularly the rabbits and the pikas, are gregarious and live in colonies, whereas hares are generally solitary species, although many hares travel and forage in groups of two, three, or four. Many rabbits and pikas rely on their burrows as places of safety when danger threatens, but hares rely on their long legs, great speed and jinking gait to escape from predators.
Classification
Recent genera
Order Lagomorpha Brandt 1855
Family Leporidae Fischer de Waldheim 1817 (rabbits and hares)
Subfamily Leporinae Trouessart 1880
Genus Brachylagus
Genus Bunolagus
Genus Caprolagus Blyth 1845
Genus Lepus Linnaeus 1758 (hare)
Genus Nesolagus Forsyth Major 1899
Genus Oryctolagus Lilljeborg 1874
Genus Pentalagus Lyon 1904
Genus Poelagus
Genus Pronolagus Lyon 1904
Genus Romerolagus Merriam 1896
Genus Sylvilagus Gray 1867
Family Ochotonidae Thomas 1897 (pikas)
Genus Ochotona Link 1795
Genus †Prolagus Pomel 1853 (considered by some sources to be the sole member of the family Prolagidae)
Fossil genera
Order Lagomorpha Brandt 1855
Family Leporidae Fischer de Waldheim 1817 (rabbits and hares)
Subfamily †Archaeolaginae
Genus †Archaeolagus Dice 1917
Genus †Hypolagus Dice 1917
Genus †Notolagus Wilson 1938
Genus †Panolax Cope 1874
Subfamily Leporinae Trouessart 1880
Genus †Alilepus Dice 1931
Genus †Nuralagus Quintana, Köhler & Moyà-Solà 2011
Genus †Pliolagus Kormos 1934
Genus †Pliosiwalagus Patnaik 2001
Genus †Pratilepus Hibbard 1939
Genus †Serengetilagus Dietrich 1941
Subfamily †Palaeolaginae Dice 1929
Tribe †Dasyporcina Gray 1825
Genus †Coelogenys Illiger 1811
Genus †Agispelagus Argyropulo 1939
Genus †Aluralagus Downey 1968
Genus †Austrolagomys Stromer 1926
Genus †Aztlanolagus Russell & Harris 1986
Genus †Chadrolagus Gawne 1978
Genus †Gobiolagus Burke 1941
Genus †Lagotherium Pictet 1853
Genus †Lepoides White 1988
Genus †Nekrolagus Hibbard 1939
Genus †Ordolagus de Muizon 1977
Genus †Paranotolagus Miller & Carranza-Castaneda 1982
Genus †Pewelagus White 1984
Genus †Pliopentalagus Gureev & Konkova 1964
Genus †Pronotolagus White 1991
Genus †Tachylagus Storer 1992
Genus †Trischizolagus Radulesco & Samson 1967
Genus †Veterilepus Radulesco & Samson 1967
Tribe incertae sedis
Genus †Litolagus Dawson 1958
Genus †Megalagus Walker 1931
Genus †Mytonolagus Burke 1934
Genus †Palaeolagus Leidy 1856
Family Ochotonidae Thomas 1897 (pikas)
Genus †Alloptox Dawson 1961
Genus †Amphilagus Tobien 1974
Genus †Bellatona Dawson 1961
Genus †Cuyamalagus Hutchison & Lindsay 1974
Genus †Desmatolagus Matthew & Granger 1923
Genus †Gripholagomys Green 1972
Genus †Hesperolagomys Clark et al. 1964
Genus †Kenyalagomys MacInnes 1953
Genus †Lagopsis Schlosser 1894
Genus †Ochotonoides Teilhard de Chardin & Young 1931
Genus †Ochotonoma Sen 1998
Genus †Oklahomalagus Dalquest et al. 1996
Genus †Oreolagus Dice 1917
Genus †Piezodus Viret 1929
Genus †Russellagus Storer 1970
Genus †Sinolagomys Bohlin 1937
Genus †Titanomys von Meyer 1843
Family incertae sedis
Genus †Eurolagus Lopez Martinez 1977
Genus †Hsiuannania Xu 1976
Genus †Hypsimylus Zhai 1977
Genus †Lushilagus Li 1965
Genus †Shamolagus Burke 1941
| Biology and health sciences | Lagomorphs | null |
48334 | https://en.wikipedia.org/wiki/Retina | Retina | The retina (plural: retinae or retinas) is the innermost, light-sensitive layer of tissue of the eye of most vertebrates and some molluscs. The optics of the eye create a focused two-dimensional image of the visual world on the retina, which then processes that image within the retina and sends nerve impulses along the optic nerve to the visual cortex to create visual perception. The retina serves a function which is in many ways analogous to that of the film or image sensor in a camera.
The neural retina consists of several layers of neurons interconnected by synapses and is supported by an outer layer of pigmented epithelial cells. The primary light-sensing cells in the retina are the photoreceptor cells, which are of two types: rods and cones. Rods function mainly in dim light and provide monochromatic vision. Cones function in well-lit conditions and are responsible for the perception of colour through the use of a range of opsins, as well as high-acuity vision used for tasks such as reading. A third type of light-sensing cell, the photosensitive ganglion cell, is important for entrainment of circadian rhythms and reflexive responses such as the pupillary light reflex.
Light striking the retina initiates a cascade of chemical and electrical events that ultimately trigger nerve impulses that are sent to various visual centres of the brain through the fibres of the optic nerve. Neural signals from the rods and cones undergo processing by other neurons, whose output takes the form of action potentials in retinal ganglion cells whose axons form the optic nerve.
In vertebrate embryonic development, the retina and the optic nerve originate as outgrowths of the developing brain, specifically the embryonic diencephalon; thus, the retina is considered part of the central nervous system (CNS) and is actually brain tissue. It is the only part of the CNS that can be visualized noninvasively. Like most of the brain, the retina is isolated from the vascular system by the blood–brain barrier. The retina is the part of the body with the greatest continuous energy demand.
Structure
Inverted versus non-inverted retina
The vertebrate retina is inverted in the sense that the light-sensing cells are in the back of the retina, so that light has to pass through layers of neurons and capillaries before it reaches the photosensitive sections of the rods and cones. The ganglion cells, whose axons form the optic nerve, are at the front of the retina; therefore, the optic nerve must cross through the retina en route to the brain. No photoreceptors are in this region, giving rise to the blind spot. In contrast, in the cephalopod retina, the photoreceptors are in front, with processing neurons and capillaries behind them. Because of this, cephalopods do not have a blind spot.
Although the overlying neural tissue is partly transparent, and the accompanying glial cells have been shown to act as fibre-optic channels to transport photons directly to the photoreceptors, light scattering does occur. Some vertebrates, including humans, have an area of the central retina adapted for high-acuity vision. This area, termed the fovea centralis, is avascular (does not have blood vessels), and has minimal neural tissue in front of the photoreceptors, thereby minimizing light scattering.
The cephalopods have a non-inverted retina, which is comparable in resolving power to the eyes of many vertebrates. Squid eyes do not have an analog of the vertebrate retinal pigment epithelium (RPE). Although their photoreceptors contain a protein, retinochrome, that recycles retinal and replicates one of the functions of the vertebrate RPE, cephalopod photoreceptors are likely not maintained as well as those of vertebrates, and as a result the useful lifetime of photoreceptors in invertebrates is much shorter than in vertebrates. Easily replaced stalk eyes (in some lobsters) or retinae (in some spiders, such as Deinopis) are rare.
The cephalopod retina does not originate as an outgrowth of the brain, as the vertebrate one does. This difference suggests that vertebrate and cephalopod eyes are not homologous, but evolved separately. From an evolutionary perspective, a more complex structure such as the inverted retina can generally come about in one of two ways: as an advantageous compromise between competing functional limitations, or as a historical maladaptive relic of the convoluted path of organ evolution and transformation. Vision is an important adaptation in higher vertebrates.
A third view of the "inverted" vertebrate eye is that it combines two benefits - the maintenance of the photoreceptors mentioned above, and the reduction in light intensity necessary to avoid blinding the photoreceptors, which are based on the extremely sensitive eyes of the ancestors of modern hagfish (fish that live in very deep, dark water).
A recent study on the evolutionary purpose of the inverted retina structure, from the American Physical Society (APS), says that "The directionality of glial cells helps increase the clarity of human vision. But we also noticed something rather curious: the colours that best passed through the glial cells were green to red, which the eye needs most for daytime vision. The eye usually receives too much blue—and thus has fewer blue-sensitive cones.
Further computer simulations showed that green and red are concentrated five to ten times more by the glial cells, and into their respective cones, than blue light. Instead, excess blue light gets scattered to the surrounding rods. This optimization is such that color vision during the day is enhanced, while night-time vision suffers very little".
Retinal layers
The vertebrate retina has 10 distinct layers. From closest to farthest from the vitreous body:
Inner limiting membrane – basement membrane elaborated by Müller cells
Nerve fibre layer – axons of the ganglion cell bodies (a thin layer of Müller cell footplates exists between this layer and the inner limiting membrane)
Ganglion cell layer – contains nuclei of ganglion cells, the axons of which become the optic nerve fibres, and some displaced amacrine cells
Inner plexiform layer – contains the synapse between the bipolar cell axons and the dendrites of the ganglion and amacrine cells
Inner nuclear layer – contains the nuclei and surrounding cell bodies (perikarya) of the amacrine cells, bipolar cells, and horizontal cells
Outer plexiform layer – projections of rods and cones ending in the rod spherule and cone pedicle, respectively; these make synapses with dendrites of bipolar cells and horizontal cells. In the macular region, this is known as the Fiber layer of Henle.
Outer nuclear layer – cell bodies of rods and cones
External limiting membrane – layer that separates the inner segment portions of the photoreceptors from their cell nuclei
Inner segment / outer segment layer – inner segments and outer segments of rods and cones, the outer segments contain a highly specialized light-sensing apparatus.
Retinal pigment epithelium – single layer of cuboidal epithelial cells with extrusions. This layer is closest to the choroid and provides nourishment and supportive functions to the neural retina. The black pigment melanin in the pigment layer prevents light reflection throughout the globe of the eyeball; this is extremely important for clear vision.
These layers can be grouped into four main processing stages—photoreception; transmission to bipolar cells; transmission to ganglion cells, which also contain photoreceptors, the photosensitive ganglion cells; and transmission along the optic nerve. At each synaptic stage, horizontal and amacrine cells also are laterally connected.
The optic nerve is a central tract of many axons of ganglion cells connecting primarily to the lateral geniculate body, a visual relay station in the diencephalon (the rear of the forebrain). It also projects to the superior colliculus, the suprachiasmatic nucleus, and the nucleus of the optic tract. It passes through the other layers, creating the optic disc in primates.
Additional structures, not directly associated with vision, are found as outgrowths of the retina in some vertebrate groups. In birds, the pecten is a vascular structure of complex shape that projects from the retina into the vitreous humour; it supplies oxygen and nutrients to the eye, and may also aid in vision. Reptiles have a similar, but much simpler, structure.
In adult humans, the entire retina is about 72% of a sphere about 22 mm in diameter. The entire retina contains about 7 million cones and 75 to 150 million rods. The optic disc, a part of the retina sometimes called "the blind spot" because it lacks photoreceptors, is located at the optic papilla, where the optic-nerve fibres leave the eye. It appears as an oval white area of 3 mm². Temporal (in the direction of the temples) to this disc is the macula, at whose centre is the fovea, a pit that is responsible for sharp central vision, but is actually less sensitive to light because of its lack of rods. Human and non-human primates possess one fovea, as opposed to certain bird species, such as hawks, that are bifoveate, and dogs and cats, which possess no fovea but a central band known as the visual streak. Around the fovea extends the central retina for about 6 mm and then the peripheral retina. The farthest edge of the retina is defined by the ora serrata. The distance from one ora to the other (or macula), the most sensitive area along the horizontal meridian, is about 32 mm.
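As a quick arithmetic check, the quoted figures (72% of a sphere about 22 mm in diameter) imply a retinal surface area of roughly 1,100 mm², as the short computation below shows:

```python
# Retinal surface area implied by the figures above:
# ~72% of a sphere about 22 mm in diameter.
import math

radius_mm = 22 / 2
sphere_area = 4 * math.pi * radius_mm**2   # total sphere area, ~1521 mm^2
retina_area = 0.72 * sphere_area
print(f"~{retina_area:.0f} mm^2")          # ~1095 mm^2
```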
In section, the retina is no more than 0.5 mm thick. It has three layers of nerve cells and two of synapses, including the unique ribbon synapse. The optic nerve carries the ganglion-cell axons to the brain, and the blood vessels that supply the retina. The ganglion cells lie innermost in the eye while the photoreceptive cells lie beyond. Because of this counter-intuitive arrangement, light must first pass through and around the ganglion cells and through the thickness of the retina, (including its capillary vessels, not shown) before reaching the rods and cones. Light is absorbed by the retinal pigment epithelium or the choroid (both of which are opaque).
The white blood cells in the capillaries in front of the photoreceptors can be perceived as tiny bright moving dots when looking into blue light. This is known as the blue field entoptic phenomenon (or Scheerer's phenomenon).
Between the ganglion-cell layer and the rods and cones are two layers of neuropils, where synaptic contacts are made. The neuropil layers are the outer plexiform layer and the inner plexiform layer. In the outer neuropil layer, the rods and cones connect to the vertically running bipolar cells, and the horizontally oriented horizontal cells laterally interconnect the photoreceptors and bipolar cells.
The central retina predominantly contains cones, while the peripheral retina predominantly contains rods. In total, the retina has about seven million cones and a hundred million rods. At the centre of the macula is the foveal pit where the cones are narrow and long, and arranged in a hexagonal mosaic, the most dense, in contradistinction to the much fatter cones located more peripherally in the retina. At the foveal pit, the other retinal layers are displaced, before building up along the foveal slope until the rim of the fovea, or parafovea, is reached, which is the thickest portion of the retina. The macula has a yellow pigmentation, from screening pigments, and is known as the macula lutea. The area directly surrounding the fovea has the highest density of rods converging on single bipolar cells. Since its cones have a much lesser convergence of signals, the fovea allows for the sharpest vision the eye can attain.
Though the rods and cones form a mosaic of sorts, transmission from receptors to bipolars to ganglion cells is not direct. Since about 150 million receptors and only 1 million optic nerve fibres exist, convergence and thus mixing of signals must occur. Moreover, the horizontal action of the horizontal and amacrine cells can allow one area of the retina to control another (e.g. one stimulus inhibiting another). This inhibition is key to lessening the sum of messages sent to the higher regions of the brain. In some lower vertebrates (e.g. the pigeon), control of messages is "centrifugal" – that is, one layer can control another, or higher regions of the brain can drive the retinal nerve cells, but in primates, this does not occur.
Layers imageable with optical coherence tomography
Using optical coherence tomography (OCT), 18 layers, each with an anatomical correlate, can be identified in the retina, ordered from innermost to outermost.
Development
Retinal development begins with the establishment of the eye fields mediated by the SHH and SIX3 proteins, with subsequent development of the optic vesicles regulated by the PAX6 and LHX2 proteins. The role of Pax6 in eye development was elegantly demonstrated by Walter Gehring and colleagues, who showed that ectopic expression of Pax6 can lead to eye formation on Drosophila antennae, wings, and legs. The optic vesicle gives rise to three structures: the neural retina, the retinal pigmented epithelium, and the optic stalk. The neural retina contains the retinal progenitor cells (RPCs) that give rise to the seven cell types of the retina. Differentiation begins with the retinal ganglion cells and concludes with production of the Müller glia. Although each cell type differentiates from the RPCs in a sequential order, there is considerable overlap in the timing of when individual cell types differentiate. The cues that determine an RPC daughter cell's fate are coded by multiple transcription factor families, including the bHLH and homeodomain factors.
In addition to guiding cell fate determination, cues exist in the retina to determine the dorsal-ventral (D-V) and nasal-temporal (N-T) axes. The D-V axis is established by a ventral to dorsal gradient of VAX2, whereas the N-T axis is coordinated by expression of the forkhead transcription factors FOXD1 and FOXG1. Additional gradients are formed within the retina. This spatial distribution may aid in proper targeting of RGC axons that function to establish the retinotopic map.
Blood supply
The retina is stratified into distinct layers, each containing specific cell types or cellular compartments that have metabolisms with different nutritional requirements. To satisfy these requirements, the ophthalmic artery bifurcates and supplies the retina via two distinct vascular networks: the choroidal network, which supplies the choroid and the outer retina, and the retinal network, which supplies the retina's inner layer.
Although the inverted retina of vertebrates appears counter-intuitive, it is necessary for the proper functioning of the retina. The photoreceptor layer must be embedded in the retinal pigment epithelium (RPE), which performs at least seven vital functions, one of the most obvious being to supply oxygen and other necessary nutrients needed for the photoreceptors to function.
Energy requirements
The energy requirements of the retina are even greater than those of the brain. This is due to the additional energy needed to continuously renew the photoreceptor outer segments, of which 10% are shed daily. Energy demands are greatest during dark adaptation, when retinal sensitivity is most enhanced. The choroid supplies about 75% of these nutrients to the retina, and the retinal vasculature only 25%.
When light strikes 11-cis-retinal (in the disks in the rods and cones), 11-cis-retinal changes to all-trans-retinal, which then triggers changes in the opsins. The outer segments do not regenerate the retinal back into the cis form once it has been changed by light. Instead the retinal is pumped out to the surrounding RPE, where it is regenerated and transported back into the outer segments of the photoreceptors. This recycling function of the RPE protects the photoreceptors against photo-oxidative damage and allows the photoreceptor cells to have decades-long useful lives.
In birds
The bird retina is devoid of blood vessels, perhaps to give unobscured passage of light for forming images, thus giving better resolution. The bird retina is therefore thought to depend for nutrition and oxygen supply on a specialized organ, called the "pecten" or pecten oculi, located on the blind spot or optic disk. This organ is extremely rich in blood vessels and is thought to supply nutrition and oxygen to the bird retina by diffusion through the vitreous body. The pecten is rich in alkaline phosphatase activity and in polarized cells in its bridge portion, both befitting its secretory role. Pecten cells are packed with dark melanin granules, which have been theorized to keep this organ warm through the absorption of stray light falling on the pecten. This is considered to enhance the metabolic rate of the pecten, thereby exporting more nutritive molecules to meet the stringent energy requirements of the retina during long periods of exposure to light.
Biometric identification and diagnosis of disease
The bifurcations and other physical characteristics of the inner retinal vascular network are known to vary among individuals, and these individual variances have been used for biometric identification and for early detection of the onset of disease. The mapping of vascular bifurcations is one of the basic steps in biometric identification. Results of such analyses of retinal blood vessel structure can be evaluated against the ground truth data of vascular bifurcations of retinal fundus images that are obtained from the DRIVE dataset. In addition, the classes of vessels of the DRIVE dataset have also been identified, and an automated method for accurate extraction of these bifurcations is also available. Changes in retinal blood circulation are seen with aging and exposure to air pollution, and may indicate cardiovascular diseases such as hypertension and atherosclerosis. Determining the equivalent width of arterioles and venules near the optic disc is also a widely used technique to identify cardiovascular risks.
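The published DRIVE-based method is not detailed above, but as a minimal sketch of the idea, assuming a pre-computed binary vessel mask as input (the filename, the skeletonization step, and the neighbour-count rule are illustrative assumptions rather than the reference algorithm), candidate bifurcations can be located where the vessel skeleton branches:

```python
# Locate candidate vessel bifurcations in a binary retinal-vessel mask:
# thin the vessels to one-pixel centrelines, then flag skeleton pixels
# with three or more skeleton neighbours (i.e. branch points).
import numpy as np
from scipy.ndimage import convolve
from skimage.io import imread
from skimage.morphology import skeletonize

vessels = imread("vessel_mask.png") > 0        # assumed binary segmentation
skeleton = skeletonize(vessels)

kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])                 # counts 8-connected neighbours
neighbours = convolve(skeleton.astype(int), kernel, mode="constant")

bifurcations = skeleton & (neighbours >= 3)    # branch points of the skeleton
ys, xs = np.nonzero(bifurcations)
print(f"{len(xs)} candidate bifurcation points")
```

The detected points could then be compared against ground-truth bifurcations, such as those derived from the DRIVE fundus images mentioned above.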
Function
The retina translates an optical image into neural impulses starting with the patterned excitation of the colour-sensitive pigments of its rods and cones, the retina's photoreceptor cells. The excitation is processed by the neural system and various parts of the brain working in parallel to form a representation of the external environment in the brain.
The cones respond to bright light and mediate high-resolution colour vision during daylight illumination (also called photopic vision). The rod responses are saturated at daylight levels and do not contribute to pattern vision. However, rods do respond to dim light and mediate lower-resolution, monochromatic vision under very low levels of illumination (called scotopic vision). The illumination in most office settings falls between these two levels and is called mesopic vision. At mesopic light levels, both the rods and cones are actively contributing pattern information. What contribution the rod information makes to pattern vision under these circumstances is unclear.
The response of cones to various wavelengths of light is called their spectral sensitivity. In normal human vision, the spectral sensitivity of a cone falls into one of three subtypes, often called blue, green, and red, but more accurately known as short, medium, and long wavelength-sensitive cone subtypes. It is a lack of one or more of the cone subtypes that causes individuals to have deficiencies in colour vision or various kinds of colour blindness. These individuals are not blind to objects of a particular colour, but are unable to distinguish between colours that can be distinguished by people with normal vision. Humans have this trichromatic vision, while most other mammals lack cones with red sensitive pigment and therefore have poorer dichromatic colour vision. However, some animals have four spectral subtypes, e.g. the trout adds an ultraviolet subgroup to short, medium, and long subtypes that are similar to humans. Some fish are sensitive to the polarization of light as well.
In the photoreceptors, exposure to light hyperpolarizes the membrane in a series of graded shifts. The outer cell segment contains a photopigment. Inside the cell, the normal levels of cyclic guanosine monophosphate (cGMP) keep the Na+ channel open, and thus in the resting state the cell is depolarised. The photon causes the retinal bound to the receptor protein to isomerise to trans-retinal. This causes the receptor to activate multiple G-proteins. This in turn causes the Gα subunit of the protein to activate a phosphodiesterase (PDE6), which degrades cGMP, resulting in the closing of Na+ cyclic nucleotide-gated ion channels (CNGs). Thus the cell is hyperpolarised. The amount of neurotransmitter released is reduced in bright light and increases as light levels fall. The actual photopigment is bleached away in bright light and only replaced as a chemical process, so in a transition from bright light to darkness the eye can take up to thirty minutes to reach full sensitivity.
When thus excited by light, the photoreceptor sends a proportional response synaptically to bipolar cells, which in turn signal the retinal ganglion cells. The photoreceptors are also cross-linked by horizontal cells and amacrine cells, which modify the synaptic signal before it reaches the ganglion cells, the neural signals being intermixed and combined. Of the retina's nerve cells, only the retinal ganglion cells and a few amacrine cells create action potentials.
In the retinal ganglion cells there are two types of response, depending on the receptive field of the cell. The receptive fields of retinal ganglion cells comprise a central, approximately circular area, where light has one effect on the firing of the cell, and an annular surround, where light has the opposite effect. In ON cells, an increment in light intensity in the centre of the receptive field causes the firing rate to increase. In OFF cells, it makes it decrease. In a linear model, this response profile is well described by a difference of Gaussians and is the basis for edge detection algorithms. Beyond this simple difference, ganglion cells are also differentiated by chromatic sensitivity and the type of spatial summation. Cells showing linear spatial summation are termed X cells (also called parvocellular, P, or midget ganglion cells), and those showing non-linear summation are Y cells (also called magnocellular, M, or parasol retinal ganglion cells), although the correspondence between X and Y cells (in the cat retina) and P and M cells (in the primate retina) is not as simple as it once seemed.
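A minimal sketch of the difference-of-Gaussians receptive-field model mentioned above; the grid size and the two sigma values are illustrative assumptions, not measured retinal parameters:

```python
# A minimal difference-of-Gaussians (DoG) model of an ON-centre
# receptive field: a narrow excitatory centre minus a broader
# inhibitory surround. Sigma values and grid size are illustrative.
import numpy as np

def dog_kernel(size=21, sigma_centre=1.0, sigma_surround=3.0):
    """Return an ON-centre DoG receptive field on a size x size grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    centre = np.exp(-r2 / (2 * sigma_centre**2)) / (2 * np.pi * sigma_centre**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return centre - surround

kernel = dog_kernel()
# Under uniform illumination the centre and surround nearly cancel,
# so the modelled cell responds mainly to local contrast (edges).
print(round(kernel.sum(), 4))   # close to 0
```

An OFF-centre field is simply the negation of this kernel, matching the opposing centre and surround responses described above.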
In the transfer of visual signals to the brain, the visual pathway, the retina is vertically divided in two, a temporal (nearer to the temple) half and a nasal (nearer to the nose) half. The axons from the nasal half cross the brain at the optic chiasma to join with axons from the temporal half of the other eye before passing into the lateral geniculate body.
Although there are more than 130 million retinal receptors, there are only approximately 1.2 million fibres (axons) in the optic nerve. So, a large amount of pre-processing is performed within the retina. The fovea produces the most accurate information. Despite occupying about 0.01% of the visual field (less than 2° of visual angle), about 10% of axons in the optic nerve are devoted to the fovea. The resolution limit of the fovea has been determined to be around 10,000 points. The information capacity is estimated at 500,000 bits per second (for more information on bits, see information theory) without colour or around 600,000 bits per second including colour.
Spatial encoding
When the retina sends neural impulses representing an image to the brain, it spatially encodes (compresses) those impulses to fit the limited capacity of the optic nerve. Compression is necessary because there are 100 times more photoreceptor cells than ganglion cells. It is achieved by "decorrelation", carried out by the "centre–surround structures" implemented by the bipolar and ganglion cells.
There are two types of centre–surround structures in the retina – on-centres and off-centres. On-centres have a positively weighted centre and a negatively weighted surround. Off-centres are just the opposite. Positive weighting is more commonly known as excitatory, and negative weighting as inhibitory.
These centre–surround structures are not physically apparent, in the sense that one cannot see them by staining samples of tissue and examining the retina's anatomy. The centre–surround structures are logical (i.e., mathematically abstract) in the sense that they depend on the connection strengths between bipolar and ganglion cells. It is believed that the connection strength between cells is determined by the number and types of ion channels embedded in the synapses between the bipolar and ganglion cells.
The centre–surround structures are mathematically equivalent to the edge detection algorithms used by computer programmers to extract or enhance the edges in a digital photograph. Thus, the retina performs operations on the image-representing impulses to enhance the edges of objects within its visual field. For example, in a picture of a dog, a cat and a car, it is the edges of these objects that contain the most information. In order for higher functions in the brain (or in a computer for that matter) to extract and classify objects such as a dog and a cat, the retina is the first step to separating out the various objects within the scene.
As an example, the following matrix is at the heart of a computer algorithm that implements edge detection. This matrix is the computer equivalent to the centre–surround structure. In this example, each box (element) within this matrix would be connected to one photoreceptor. The photoreceptor in the centre is the current receptor being processed. The centre photoreceptor is multiplied by the +1 weight factor. The surrounding photoreceptors are the "nearest neighbors" to the centre and are multiplied by the −1/8 value. The sum of all nine of these elements is finally calculated. This summation is repeated for every photoreceptor in the image by shifting to the end of a row and then down to the next line.

−1/8 −1/8 −1/8
−1/8  +1  −1/8
−1/8 −1/8 −1/8
The total sum of this matrix is zero, if all the inputs from the nine photoreceptors are of the same value. The zero result indicates the image was uniform (non-changing) within this small patch. Negative or positive sums mean the image was varying (changing) within this small patch of nine photoreceptors.
The above matrix is only an approximation to what really happens inside the retina. The differences are:
The above example is called "balanced". The term balanced means that the sum of the negative weights is equal to the sum of the positive weights so that they cancel out perfectly. Retinal ganglion cells are almost never perfectly balanced.
The table is square while the centre–surround structures in the retina are circular.
Neurons operate on spike trains traveling down nerve cell axons. Computers operate on a single floating-point number that is essentially constant for each input pixel. (The computer pixel is basically the equivalent of a biological photoreceptor.)
The retina performs all these calculations in parallel while the computer operates on each pixel one at a time. The retina performs no repeated summations and shifting as would a computer.
Finally, the horizontal and amacrine cells play a significant role in this process, but that is not represented here.
Here is an example of an input image and how edge detection would modify it.
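As a minimal sketch, assuming a toy 6×6 input image that is dark on the left half and bright on the right half (both the image and the border handling are illustrative choices):

```python
# The balanced centre-surround kernel from the text (+1 centre,
# -1/8 surround) applied to a toy image with a vertical brightness edge.
import numpy as np
from scipy.ndimage import convolve

kernel = np.full((3, 3), -1/8)
kernel[1, 1] = 1.0                 # +1 centre, -1/8 surround

image = np.zeros((6, 6))
image[:, 3:] = 1.0                 # dark left half, bright right half

response = convolve(image, kernel, mode="nearest")
print(np.round(response, 2))
# The response is 0 in both uniform regions and nonzero only along
# the brightness edge, mirroring how centre-surround processing
# enhances edges while suppressing unchanging areas.
```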
Once the image is spatially encoded by the centre–surround structures, the signal is sent out along the optic nerve (via the axons of the ganglion cells) through the optic chiasm to the LGN (lateral geniculate nucleus). The exact function of the LGN is unknown at this time. The output of the LGN is then sent to the back of the brain. Specifically, the output of the LGN "radiates" out to the V1 primary visual cortex.
Simplified signal flow: Photoreceptors → Bipolar → Ganglion → Chiasm → LGN → V1 cortex
Clinical significance
There are many inherited and acquired diseases or disorders that may affect the retina. Some of them include:
Retinitis pigmentosa is a group of genetic diseases that affect the retina and cause the loss of night vision and peripheral vision.
Macular degeneration describes a group of diseases characterized by loss of central vision because of death or impairment of the cells in the macula.
Cone-rod dystrophy (CORD) describes a number of diseases where vision loss is caused by deterioration of the cones and/or rods in the retina.
In retinal separation, the retina detaches from the back of the eyeball. Ignipuncture is an outdated treatment method. The term retinal detachment is used to describe a separation of the neurosensory retina from the retinal pigment epithelium. There are several modern treatment methods for fixing a retinal detachment: pneumatic retinopexy, scleral buckle, cryotherapy, laser photocoagulation and pars plana vitrectomy.
Both hypertension and diabetes mellitus can cause damage to the tiny blood vessels that supply the retina, leading to hypertensive retinopathy and diabetic retinopathy.
Retinoblastoma is a cancer of the retina.
Retinal diseases in dogs include retinal dysplasia, progressive retinal atrophy, and sudden acquired retinal degeneration.
Lipaemia retinalis is a white appearance of the retina, and can occur by lipid deposition in lipoprotein lipase deficiency.
Retinal Detachment. The neural retina occasionally detaches from the pigment epithelium. In some instances, the cause of such detachment is injury to the eyeball that allows fluid or blood to collect between the neural retina and the pigment epithelium. Detachment is occasionally caused by contracture of fine collagenous fibrils in the vitreous humor, which pull areas of the retina toward the interior of the globe.
Night blindness: Night blindness occurs in any person with severe vitamin A deficiency. The reason for this is that without vitamin A, the amounts of retinal and rhodopsin that can be formed are severely depressed. This condition is called night blindness because the amount of light available at night is too little to permit adequate vision in vitamin A–deficient persons.
In addition, the retina has been described as a "window" into the brain and body, given that abnormalities detected through an examination of the retina can reveal both neurological and systemic diseases.
Diagnosis
A number of different instruments are available for the diagnosis of diseases and disorders affecting the retina. Ophthalmoscopy and fundus photography have long been used to examine the retina. Recently, adaptive optics has been used to image individual rods and cones in the living human retina, and a company based in Scotland has engineered technology that allows physicians to observe the complete retina without any discomfort to patients.
The electroretinogram is used to non-invasively measure the retina's electrical activity, which is affected by certain diseases. A relatively new technology, now becoming widely available, is optical coherence tomography (OCT). This non-invasive technique allows one to obtain a 3D volumetric or high-resolution cross-sectional tomogram of the fine structures of the retina, with histologic quality. Retinal vessel analysis is a non-invasive method to examine the small arteries and veins in the retina, which allows conclusions to be drawn about the morphology and function of small vessels elsewhere in the human body. It has been established as a predictor of cardiovascular disease and, according to a study published in 2019, seems to have potential in the early detection of Alzheimer's disease.
Treatment
Treatment depends upon the nature of the disease or disorder.
Common treatment modalities
The following are commonly used modalities of management for retinal disease:
Intravitreal medication, such as anti-VEGF or corticosteroid agents
Vitreoretinal surgery
Use of nutritional supplements
Modification of systemic risk factors for retinal disease
Uncommon treatment modalities
Retinal gene therapy
Gene therapy holds promise as a potential avenue to cure a wide range of retinal diseases. This involves using a non-infectious virus to shuttle a gene into a part of the retina. Recombinant adeno-associated virus (rAAV) vectors possess a number of features that render them ideally suited for retinal gene therapy, including a lack of pathogenicity, minimal immunogenicity, and the ability to transduce postmitotic cells in a stable and efficient manner. rAAV vectors are increasingly utilized for their ability to mediate efficient transduction of retinal pigment epithelium (RPE), photoreceptor cells and retinal ganglion cells. Each cell type can be specifically targeted by choosing the appropriate combination of AAV serotype, promoter, and intraocular injection site.
Several clinical trials have already reported positive results using rAAV to treat Leber's congenital amaurosis, showing that the therapy was both safe and effective. There were no serious adverse events, and patients in all three studies showed improvement in their visual function as measured by a number of methods. The methods used varied among the three trials, but included both functional methods such as visual acuity and functional mobility as well as objective measures that are less susceptible to bias, such as the pupil's ability to respond to light and improvements on functional MRI. Improvements were sustained over the long-term, with patients continuing to do well after more than 1.5 years.
The unique architecture of the retina and its relatively immune-privileged environment help this process. Tight junctions that form the blood retinal barrier separate the subretinal space from the blood supply, thus protecting it from microbes and most immune-mediated damage, and enhancing its potential to respond to vector-mediated therapies. The highly compartmentalized anatomy of the eye facilitates accurate delivery of therapeutic vector suspensions to specific tissues under direct visualization using microsurgical techniques. In the sheltered environment of the retina, AAV vectors are able to maintain high levels of transgene expression in the retinal pigmented epithelium (RPE), photoreceptors, or ganglion cells for long periods of time after a single treatment. In addition, the eye and the visual system can be routinely and easily monitored for visual function and retinal structural changes after injections with noninvasive advanced technology, such as visual acuities, contrast sensitivity, fundus auto-fluorescence (FAF), dark-adapted visual thresholds, vascular diameters, pupillometry, electroretinography (ERG), multifocal ERG and optical coherence tomography (OCT).
This strategy is effective against a number of retinal diseases that have been studied, including neovascular diseases that are features of age-related macular degeneration, diabetic retinopathy and retinopathy of prematurity. Since the regulation of vascularization in the mature retina involves a balance between endogenous positive growth factors, such as vascular endothelial growth factor (VEGF), and inhibitors of angiogenesis, such as pigment epithelium-derived factor (PEDF), rAAV-mediated expression of PEDF, angiostatin, and the soluble VEGF receptor sFlt-1, which are all antiangiogenic proteins, has been shown to reduce aberrant vessel formation in animal models. Since specific gene therapies cannot readily be used to treat a significant fraction of patients with retinal dystrophy, there is a major interest in developing a more generally applicable survival-factor therapy. Neurotrophic factors have the ability to modulate neuronal growth during development, to maintain existing cells, and to allow recovery of injured neuronal populations in the eye. AAV encoding neurotrophic factors such as fibroblast growth factor (FGF) family members and GDNF either protected photoreceptors from apoptosis or slowed down cell death.
Organ transplantation
Transplantation of retinas has been attempted, but without much success. At MIT, The University of Southern California, RWTH Aachen University, and the University of New South Wales, an "artificial retina" is under development: an implant which will bypass the photoreceptors of the retina and stimulate the attached nerve cells directly, with signals from a digital camera.
History
Around 300 BCE, Herophilos identified the retina from dissections of cadaver eyes. He called it the arachnoid layer, from its resemblance to a spider web, and retiform, from its resemblance to a casting net. The term arachnoid came to refer to a layer around the brain; the term retiform came to refer to the retina.
Between 1011 and 1021 CE, Ibn Al-Haytham published numerous experiments demonstrating that sight occurs from light reflecting from objects into the eye. This is consistent with intromission theory and against emission theory, the theory that sight occurs from rays emitted by the eyes. However, Ibn Al-Haytham decided that the retina could not be responsible for the beginnings of vision because the image formed on it was inverted. Instead he decided it must begin at the surface of the lens.
In 1604, Johannes Kepler worked out the optics of the eye and decided that the retina must be where sight begins. He left it up to other scientists to reconcile the inverted retinal image with our perception of the world as upright.
In 1894, Santiago Ramón y Cajal published the first major characterization of retinal neurons in Retina der Wirbelthiere (The Retina of Vertebrates).
George Wald, Haldan Keffer Hartline, and Ragnar Granit won the 1967 Nobel Prize in Physiology or Medicine for their scientific research on the retina.
A recent University of Pennsylvania study calculated that the approximate bandwidth of human retinas is 8.75 megabits per second, whereas a guinea pig's retinal transfer rate is 875 kilobits per second.
MacLaren & Pearson and colleagues at University College London and Moorfields Eye Hospital in London, in 2006, showed that photoreceptor cells could be transplanted successfully in the mouse retina if donor cells were at a critical developmental stage. Recently Ader and colleagues in Dublin showed, using the electron microscope, that transplanted photoreceptors formed synaptic connections.
In 2012, Sebastian Seung and his laboratory at MIT launched EyeWire, an online citizen science game in which players trace neurons in the retina. The goals of the EyeWire project are to identify specific cell types within the known broad classes of retinal cells, and to map the connections between neurons in the retina, which will help to determine how vision works.
Additional images
| Biology and health sciences | Visual system | Biology |
48336 | https://en.wikipedia.org/wiki/Electrolyte | Electrolyte | An electrolyte is a substance that conducts electricity through the movement of ions, but not through the movement of electrons. This includes most soluble salts, acids, and bases, dissolved in a polar solvent like water. Upon dissolving, the substance separates into cations and anions, which disperse uniformly throughout the solvent. Solid-state electrolytes also exist. In medicine and sometimes in chemistry, the term electrolyte refers to the substance that is dissolved.
Electrically, such a solution is neutral. If an electric potential is applied to such a solution, the cations of the solution are drawn to the electrode that has an abundance of electrons, while the anions are drawn to the electrode that has a deficit of electrons. The movement of anions and cations in opposite directions within the solution amounts to a current. Some gases, such as hydrogen chloride (HCl), under conditions of high temperature or low pressure can also function as electrolytes. Electrolyte solutions can also result from the dissolution of some biological (e.g., DNA, polypeptides) or synthetic polymers (e.g., polystyrene sulfonate), termed "polyelectrolytes", which contain charged functional groups. A substance that dissociates into ions in solution or in the melt acquires the capacity to conduct electricity. Sodium, potassium, chloride, calcium, magnesium, and phosphate in a liquid phase are examples of electrolytes.
In medicine, electrolyte replacement is needed when a person has prolonged vomiting or diarrhea, and as a response to sweating due to strenuous athletic activity. Commercial electrolyte solutions are available, particularly for sick children (such as oral rehydration solution, Suero Oral, or Pedialyte) and athletes (sports drinks). Electrolyte monitoring is important in the treatment of anorexia and bulimia.
In science, electrolytes are one of the main components of electrochemical cells.
In clinical medicine, mentions of electrolytes usually refer metonymically to the ions, and (especially) to their concentrations (in blood, serum, urine, or other fluids). Thus, mentions of electrolyte levels usually refer to the various ion concentrations, not to the fluid volumes.
Etymology
The word electrolyte derives from Ancient Greek ήλεκτρο- (ēlectro-), a prefix originally meaning "amber" but in modern contexts related to electricity, and λυτός (lytos), meaning "able to be untied or loosened".
History
In his 1884 dissertation, Svante Arrhenius put forth his explanation of solid crystalline salts dissociating into paired charged particles when dissolved, for which he won the 1903 Nobel Prize in Chemistry. Arrhenius's explanation was that in forming a solution, the salt dissociates into charged particles, to which Michael Faraday (1791-1867) had given the name "ions" many years earlier. Faraday's belief had been that ions were produced in the process of electrolysis. Arrhenius proposed that, even in the absence of an electric current, solutions of salts contained ions. He thus proposed that chemical reactions in solution were reactions between ions.
Shortly after Arrhenius's hypothesis of ions, Franz Hofmeister and Siegmund Lewith found that different ion types displayed different effects on such things as the solubility of proteins. A consistent ordering of these different ions on the magnitude of their effect arises consistently in many other systems as well. This has since become known as the Hofmeister series.
While the origins of these effects are not abundantly clear and have been debated throughout the past century, it has been suggested that the charge density of these ions is important and might actually have explanations originating from the work of Charles-Augustin de Coulomb over 200 years ago.
Formation
Electrolyte solutions are normally formed when salt is placed into a solvent such as water and the individual components dissociate due to the thermodynamic interactions between solvent and solute molecules, in a process called "solvation". For example, when table salt (sodium chloride), NaCl, is placed in water, the salt (a solid) dissolves into its component ions, according to the dissociation reaction:
NaCl(s) → Na+(aq) + Cl−(aq)
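As a minimal worked example of the arithmetic behind this dissociation (the 5.844 g amount is an arbitrary illustrative choice, equal to 0.1 mol of NaCl):

```python
# Ion concentrations from dissolving NaCl in water: each formula unit
# dissociates into one Na+ and one Cl- ion.
MOLAR_MASS_NACL = 58.44            # g/mol
grams, litres = 5.844, 1.0         # illustrative amounts

moles = grams / MOLAR_MASS_NACL    # mol of NaCl dissolved
na_molar = moles / litres          # mol/L of Na+
cl_molar = moles / litres          # mol/L of Cl-
print(f"[Na+] = {na_molar:.3f} M, [Cl-] = {cl_molar:.3f} M")   # 0.100 M each
```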
It is also possible for substances to react with water, producing ions. For example, carbon dioxide gas dissolves in water to produce a solution that contains hydronium, carbonate, and hydrogen carbonate ions.
Molten salts can also be electrolytes as, for example, when sodium chloride is molten, the liquid conducts electricity. In particular, ionic liquids, which are molten salts with melting points below 100 °C, are a type of highly conductive non-aqueous electrolytes and thus have found more and more applications in fuel cells and batteries.
An electrolyte in a solution may be described as "concentrated" if it has a high concentration of ions, or "dilute" if it has a low concentration. If a high proportion of the solute dissociates to form free ions, the electrolyte is strong; if most of the solute does not dissociate, the electrolyte is weak. The properties of electrolytes may be exploited using electrolysis to extract constituent elements and compounds contained within the solution.
Alkaline earth metals form hydroxides that are strong electrolytes but have limited solubility in water, owing to the strong attraction between their constituent ions. This limited solubility restricts their use in applications that require high solubility.
In 2021, researchers found that electrolytes can "substantially facilitate electrochemical corrosion studies in less conductive media".
Physiological importance
In physiology, the primary ions of electrolytes are sodium (Na+), potassium (K+), calcium (Ca2+), magnesium (Mg2+), chloride (Cl−), hydrogen phosphate (HPO42−), and hydrogen carbonate (HCO3−). The electric charge symbols of plus (+) and minus (−) indicate that the substance is ionic in nature and has an imbalanced distribution of electrons, the result of chemical dissociation. Sodium is the main electrolyte found in extracellular fluid and potassium is the main intracellular electrolyte; both are involved in fluid balance and blood pressure control.
All known multicellular lifeforms require a subtle and complex electrolyte balance between the intracellular and extracellular environments. In particular, the maintenance of precise osmotic gradients of electrolytes is important. Such gradients affect and regulate the hydration of the body as well as blood pH, and are critical for nerve and muscle function. Various mechanisms exist in living species that keep the concentrations of different electrolytes under tight control.
Both muscle tissue and neurons are considered electric tissues of the body. Muscles and neurons are activated by electrolyte activity between the extracellular fluid or interstitial fluid, and intracellular fluid. Electrolytes may enter or leave the cell membrane through specialized protein structures embedded in the plasma membrane called "ion channels". For example, muscle contraction is dependent upon the presence of calcium (Ca2+), sodium (Na+), and potassium (K+). Without sufficient levels of these key electrolytes, muscle weakness or severe muscle contractions may occur.
Electrolyte balance is maintained by oral, or in emergencies, intravenous (IV) intake of electrolyte-containing substances, and is regulated by hormones, in general with the kidneys flushing out excess levels. In humans, electrolyte homeostasis is regulated by hormones such as antidiuretic hormones, aldosterone and parathyroid hormones. Serious electrolyte disturbances, such as dehydration and overhydration, may lead to cardiac and neurological complications and, unless they are rapidly resolved, will result in a medical emergency.
Measurement
Measurement of electrolytes is a commonly performed diagnostic procedure, performed via blood testing with ion-selective electrodes or urinalysis by medical technologists. The interpretation of these values is of limited use without analysis of the clinical history, and is often impossible without parallel measurements of renal function. The electrolytes measured most often are sodium and potassium. Chloride levels are rarely measured except for arterial blood gas interpretations, since they are inherently linked to sodium levels. One important test conducted on urine is the specific gravity test to determine the occurrence of an electrolyte imbalance.
Rehydration
According to a study paid for by the Gatorade Sports Science Institute, electrolyte drinks containing sodium and potassium salts replenish the body's water and electrolyte concentrations after dehydration caused by exercise, excessive alcohol consumption, diaphoresis (heavy sweating), diarrhea, vomiting, intoxication or starvation; the study says that athletes exercising in extreme conditions (for three or more hours continuously, e.g. a marathon or triathlon) who do not consume electrolytes risk dehydration (or hyponatremia).
A home-made electrolyte drink can be made by using water, sugar and salt in precise proportions. It is important to include glucose (sugar) to utilise the co-transport mechanism of sodium and glucose. Commercial preparations are also available for both human and veterinary use.
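As an illustration of such precise proportions, the sketch below converts the gram amounts of the WHO reduced-osmolarity oral rehydration salts formulation into millimolar concentrations; the recipe values and molar masses are assumptions quoted for illustration, not a prescription from this article:

```python
# Solute concentrations (mmol/L) implied by the assumed WHO
# reduced-osmolarity ORS recipe, per litre of water.
recipe_g_per_L = {
    "glucose": 13.5,
    "sodium chloride": 2.6,
    "potassium chloride": 1.5,
    "trisodium citrate dihydrate": 2.9,
}
molar_mass = {                         # g/mol
    "glucose": 180.16,
    "sodium chloride": 58.44,
    "potassium chloride": 74.55,
    "trisodium citrate dihydrate": 294.10,
}
for solute, grams in recipe_g_per_L.items():
    mmol = grams / molar_mass[solute] * 1000
    print(f"{solute}: {mmol:.0f} mmol/L")
# Sodium comes from both NaCl (~44 mmol/L) and citrate (3 Na+ per
# formula unit, ~30 mmol/L), giving roughly 75 mmol/L of Na+ in total.
```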
Electrolytes are commonly found in fruit juices, sports drinks, milk, nuts, and many fruits and vegetables (whole or in juice form) (e.g., potatoes, avocados).
Electrochemistry
When electrodes are placed in an electrolyte and a voltage is applied, the electrolyte will conduct electricity. Lone electrons normally cannot pass through the electrolyte; instead, a chemical reaction occurs at the cathode, providing electrons to the electrolyte. Another reaction occurs at the anode, consuming electrons from the electrolyte. As a result, a negative charge cloud develops in the electrolyte around the cathode, and a positive charge develops around the anode. The ions in the electrolyte neutralize these charges, enabling the electrons to keep flowing and the reactions to continue.
For example, in a solution of ordinary table salt (sodium chloride, NaCl) in water, the cathode reaction will be
2 H2O + 2e− → 2 OH− + H2
and hydrogen gas will bubble up; the anode reaction is
2 Cl− → Cl2 + 2e−
and chlorine gas will be liberated into solution, where it reacts with the sodium and hydroxide ions to produce sodium hypochlorite - household bleach. The positively charged sodium ions Na+ will migrate toward the cathode, neutralizing the negative charge of OH− there, and the negatively charged hydroxide ions OH− will migrate toward the anode, neutralizing the positive charge of Na+ there. Without the ions from the electrolyte, the charges around the electrode would slow down continued electron flow; diffusion of H+ and OH− through water to the other electrode takes longer than movement of the much more prevalent salt ions.
Electrolytes dissociate in water because water molecules are dipoles and the dipoles orient in an energetically favorable manner to solvate the ions.
In other systems, the electrode reactions can involve the metals of the electrodes as well as the ions of the electrolyte.
Electrolytic conductors are used in electronic devices where the chemical reaction at a metal-electrolyte interface yields useful effects.
In batteries, two materials with different electron affinities are used as electrodes; electrons flow from one electrode to the other outside of the battery, while inside the battery the circuit is closed by the electrolyte's ions. Here, the electrode reactions convert chemical energy to electrical energy.
In some fuel cells, a solid electrolyte or proton conductor connects the plates electrically while keeping the hydrogen and oxygen fuel gases separated.
In electroplating tanks, the electrolyte simultaneously deposits metal onto the object to be plated, and electrically connects that object in the circuit.
In operation-hours gauges, two thin columns of mercury are separated by a small electrolyte-filled gap, and, as charge is passed through the device, the metal dissolves on one side and plates out on the other, causing the visible gap to slowly move along.
In electrolytic capacitors the chemical effect is used to produce an extremely thin dielectric or insulating coating, while the electrolyte layer behaves as one capacitor plate.
In some hygrometers the humidity of air is sensed by measuring the conductivity of a nearly dry electrolyte.
Hot, softened glass is an electrolytic conductor, and some glass manufacturers keep the glass molten by passing a large current through it.
Solid electrolytes
Solid electrolytes can be mostly divided into four groups described below.
Gel electrolytes
Gel electrolytes – closely resemble liquid electrolytes. In essence, they are liquids in a flexible lattice framework. Various additives are often applied to increase the conductivity of such systems.
Ceramic electrolytes
Solid ceramic electrolytes – ions migrate through the ceramic phase by means of vacancies or interstitials within the lattice. There are also glassy-ceramic electrolytes.
Polymer electrolytes
Dry polymer electrolytes differ from liquid and gel electrolytes in that salt is dissolved directly into the solid medium. Usually it is a relatively high-dielectric constant polymer (PEO, PMMA, PAN, polyphosphazenes, siloxanes, etc.) and a salt with low lattice energy. In order to increase the mechanical strength and conductivity of such electrolytes, very often composites are made, and inert ceramic phase is introduced. There are two major classes of such electrolytes: polymer-in-ceramic, and ceramic-in-polymer.
Organic plastic electrolytes
Organic ionic plastic crystals – a type of organic salt exhibiting mesophases (i.e. a state of matter intermediate between liquid and solid), in which mobile ions are orientationally or rotationally disordered while their centers are located at ordered sites in the crystal structure. They have various forms of disorder due to one or more solid–solid phase transitions below the melting point, and therefore have plastic properties and good mechanical flexibility, as well as improved electrode-electrolyte interfacial contact. In particular, protic organic ionic plastic crystals (POIPCs), which are solid protic organic salts formed by proton transfer from a Brønsted acid to a Brønsted base and are in essence protic ionic liquids in the molten state, have been found to be promising solid-state proton conductors for fuel cells. Examples include 1,2,4-triazolium perfluorobutanesulfonate and imidazolium methanesulfonate.
| Physical sciences | Electrochemistry | Chemistry |
48337 | https://en.wikipedia.org/wiki/Caterpillar | Caterpillar | Caterpillars are the larval stage of members of the order Lepidoptera (the insect order comprising butterflies and moths).
As with most common names, the application of the word is arbitrary, since the larvae of sawflies (suborder Symphyta) are commonly called caterpillars as well. Both lepidopteran and symphytan larvae have eruciform body shapes.
Caterpillars of most species eat plant material (often leaves), but not all; some (about 1%) eat insects, and some are even cannibalistic. Some feed on other animal products. For example, clothes moths feed on wool, and horn moths feed on the hooves and horns of dead ungulates.
Caterpillars are typically voracious feeders and many of them are among the most serious of agricultural pests. In fact, many moth species are best known in their caterpillar stages because of the damage they cause to fruits and other agricultural produce, whereas the moths are obscure and do no direct harm. Conversely, various species of caterpillar are valued as sources of silk, as human or animal food, or for biological control of pest plants.
Etymology
The origins of the word "caterpillar" date from the early 16th century. They derive from Middle English catirpel, catirpeller, probably an alteration of Old North French catepelose: cate, cat (from Latin cattus) + pelose, hairy (from Latin pilōsus).
The inchworms, or looper caterpillars, of the family Geometridae are so named because of the way they move, appearing to measure the earth (the word geometrid means earth-measurer in Greek); the primary reason for this unusual locomotion is the elimination of nearly all the prolegs except the clasper on the terminal segment.
Description
Caterpillars have soft bodies that can grow rapidly between moults, and their size varies widely between species and instars (moults). Some larvae of the order Hymenoptera (ants, bees, and wasps) can appear like the caterpillars of the Lepidoptera; such larvae are mainly seen in the sawfly suborder. However, while these larvae superficially resemble caterpillars, they can be distinguished by the presence of prolegs on every abdominal segment, an absence of crochets or hooks on the prolegs (these are present on lepidopteran caterpillars), one pair of prominent ocelli on the head capsule, and an absence of the upside-down Y-shaped suture on the front of the head.
Lepidopteran caterpillars can be differentiated from sawfly larvae by:
the number of pairs of prolegs: sawfly larvae have 6 or more pairs while caterpillars have a maximum of 5 pairs.
the number of stemmata (simple eyes); the sawfly larvae have only two, while caterpillars usually have twelve (six each side of the head).
the presence of crochets on the prolegs; these are absent in the sawflies.
sawfly larvae have an invariably smooth head capsule with no cleavage lines, while lepidopterous caterpillars bear an inverted "Y" or "V" (frontal suture).
Fossils
In 2019, a geometrid moth caterpillar dating back to the Eocene epoch, approximately 44 million years ago, was found preserved in Baltic amber and described as Eogeometer vadens. Previously, another fossil dating back approximately 125 million years had been found in Lebanese amber.
Defenses
Many animals feed on caterpillars as they are rich in protein. As a result, caterpillars have evolved various means of defense.
Caterpillars have also evolved defenses against physical stresses such as cold, heat, and dryness. Some Arctic species, like Gynaephora groenlandica, have special basking and aggregation behaviours, in addition to physiological adaptations, that allow them to remain in a dormant state.
Appearance
The appearance of a caterpillar can often repel a predator: its markings and certain body parts can make it seem poisonous, or bigger in size and thus threatening, or non-edible. Some types of caterpillars are indeed poisonous or distasteful and their bright coloring warns predators of this. Others may mimic dangerous caterpillars or other animals while not being dangerous themselves. Many caterpillars are cryptically colored and resemble the plants on which they feed. An example of caterpillars that use camouflage for defense is the species Nemoria arizonaria. If the caterpillars hatch in the spring and feed on oak catkins they appear green. If they hatch in the summer they appear dark colored, like oak twigs. The differential development is linked to the tannin content in the diet. Caterpillars may even have spines or growths that resemble plant parts such as thorns. Some look like objects in the environment such as bird droppings. Some Geometridae cover themselves in plant parts, while bagworms construct and live in a bag covered in sand, pebbles or plant material.
Chemical defenses
More aggressive self-defense measures have evolved in some caterpillars. These measures include having spiny bristles or long fine hair-like setae with detachable tips that irritate by lodging in the skin or mucous membranes. However, some birds (such as cuckoos) will swallow even the hairiest of caterpillars. Other caterpillars acquire toxins from their host plants that render them unpalatable to most of their predators. For instance, ornate moth caterpillars utilize pyrrolizidine alkaloids that they obtain from their food plants to deter predators. The most aggressive caterpillar defenses are bristles associated with venom glands, called urticating hairs. A venom which is among the most potent defensive chemicals in any animal is produced by the South American silk moth genus Lonomia. Its venom is an anticoagulant powerful enough to cause a human to hemorrhage to death (see Lonomiasis), and it is being investigated for potential medical applications. Most urticating hairs, however, range in effect from mild irritation to dermatitis, as in the brown-tail moth.
Plants contain toxins which protect them from herbivores, but some caterpillars have evolved countermeasures which enable them to eat the leaves of such toxic plants. In addition to being unaffected by the poison, the caterpillars sequester it in their body, making them highly toxic to predators. The chemicals are also carried on into the adult stages. These toxic species, such as the cinnabar moth (Tyria jacobaeae) and monarch (Danaus plexippus) caterpillars, usually advertise themselves with the danger colors of red, yellow and black, often in bright stripes (see aposematism). Any predator that attempts to eat a caterpillar with an aggressive defense mechanism will learn and avoid future attempts.
Some caterpillars regurgitate acidic digestive juices at attacking enemies. Many papilionid larvae produce bad smells from extrudable glands called osmeteria.
Defensive behaviors
Many caterpillars display feeding behaviors which allow the caterpillar to remain hidden from potential predators. Many feed in protected environments, such as enclosed inside silk galleries, rolled leaves or by mining between the leaf surfaces.
Some caterpillars, like early instars of the tomato hornworm and tobacco hornworm, have long "whip-like" organs attached to the ends of their body. The caterpillar wiggles these organs to frighten away flies and predatory wasps. Some caterpillars can evade predators by using a silk line and dropping off from branches when disturbed. Many species thrash about violently when disturbed to scare away potential predators. One species (Amorpha juglandis) even makes high pitched whistles that can scare away birds.
Social behaviors and relationships with other insects
Some caterpillars obtain protection by associating themselves with ants. The Lycaenid butterflies are particularly well known for this. They communicate with their ant protectors by vibrations as well as chemical means and typically provide food rewards.
Some caterpillars are gregarious; large aggregations are believed to help in reducing the levels of parasitization and predation. Clusters amplify the signal of aposematic coloration, and individuals may participate in group regurgitation or displays. Pine processionary (Thaumetopoea pityocampa) caterpillars often link into a long train to move through trees and over the ground; the head of the lead caterpillar is visible, while the heads of the others appear hidden. Forest tent caterpillars cluster during periods of cold weather.
Predators
Caterpillars are eaten by many animals. The European pied flycatcher is one species that preys upon caterpillars, typically finding them among oak foliage. Paper wasps, including those in the genera Polistes and Polybia, catch caterpillars to feed their young and themselves.
Behavior
Caterpillars have been called "eating machines", and eat leaves voraciously. Most species shed their skin four or five times as their bodies grow, and they eventually enter a pupal stage before becoming adults. Caterpillars grow very quickly; for instance, a tobacco hornworm will increase its weight ten-thousandfold in less than twenty days. An adaptation that enables them to eat so much is a mechanism in a specialized midgut that quickly transports ions to the lumen (midgut cavity), to keep the potassium level higher in the midgut cavity than in the hemolymph.
Most caterpillars are solely herbivorous. Many are restricted to feeding on one species of plant, while others are polyphagous. Some, including the clothes moth, feed on detritus. Some are predatory, and may prey on other species of caterpillars (e.g. Hawaiian Eupithecia). Others feed on eggs of other insects, aphids, scale insects, or ant larvae. A few are parasitic on cicadas or leaf hoppers (Epipyropidae). Some Hawaiian caterpillars (Hyposmocoma molluscivora) use silk traps to capture snails.
Many caterpillars are nocturnal. For example, the "cutworms" (of the family Noctuidae) hide at the base of plants during the day and only feed at night. Others, such as spongy moth (Lymantria dispar) larvae, change their activity patterns depending on density and larval stage, with more diurnal feeding in early instars and high densities.
Economic effects
Caterpillars cause much damage, mainly by eating leaves. The propensity for damage is enhanced by monocultural farming practices, especially where the caterpillar is specifically adapted to the host plant under cultivation. The cotton bollworm causes enormous losses, and other species eat food crops. Caterpillars have been the target of pest control through the use of pesticides, biological control and agronomic practices; many species have become resistant to pesticides. Bacterial toxins such as those from Bacillus thuringiensis, which evolved to affect the gut of Lepidoptera, have been used in sprays of bacterial spores and toxin extracts, and also by incorporating the genes that produce them into the host plants. These approaches are defeated over time by the evolution of resistance mechanisms in the insects.
Plants evolve mechanisms of resistance to being eaten by caterpillars, including the evolution of chemical toxins and physical barriers such as hairs. Incorporating host plant resistance (HPR) through plant breeding is another approach used in reducing the impact of caterpillars on crop plants.
Some caterpillars are used in industry. The silk industry is based on the silkworm caterpillar.
Human health
Caterpillar hair can be a cause of human health problems. Caterpillar hairs sometimes contain venoms, and species from approximately 12 families of moths or butterflies worldwide can inflict serious human injuries ranging from urticarial dermatitis and atopic asthma to osteochondritis, consumption coagulopathy, kidney failure, and brain bleeding. Skin rashes are the most common, but there have been fatalities. Lonomia is a frequent cause of envenomation in Brazil, with 354 cases reported between 1989 and 2005 and a lethality of up to 20%, with death most often caused by intracranial hemorrhage.
Caterpillar hair has also been known to cause kerato-conjunctivitis. The sharp barbs on the end of caterpillar hairs can get lodged in soft tissues and mucous membranes such as the eyes. Once they enter such tissues, they can be difficult to extract, often exacerbating the problem as they migrate across the membrane.
This becomes a particular problem in indoor settings. The hairs easily enter buildings through ventilation systems and, because their small size makes them difficult to vent out, accumulate in indoor environments, increasing the risk of human contact.
Caterpillars are a food source in some cultures. For example, in South Africa mopane worms are eaten by the bushmen, and in China silkworms are considered a delicacy.
In popular culture
In the Old Testament of the Bible, caterpillars are feared as pests that devour crops. Because of their association with the locust, they are part of the "pestilence, blasting, mildew, locust" and are thus counted among the plagues of Egypt. Jeremiah names them as one of the inhabitants of Babylon. The English word caterpillar derives from the Old North French catepelose (hairy cat), later merged with piller (pillager). Caterpillars became a symbol for social dependents. Shakespeare's Bolingbroke described King Richard's friends as "The caterpillars of the commonwealth, Which I have sworn to weed and pluck away". In 1790 William Blake referenced this popular image in The Marriage of Heaven and Hell when he attacked priests: "as the caterpillar chooses the fairest leaves to lay her eggs on, so the priest lays his curse on the fairest joys".
The role of caterpillars in the life stages of butterflies was long poorly understood. In 1679 Maria Sibylla Merian published the first volume of The Caterpillars' Marvelous Transformation and Strange Floral Food, which contained 50 illustrations and descriptions of insects, moths, butterflies and their larvae. An earlier popular publication on moths and butterflies and their caterpillars, by Jan Goedart, had not included eggs in the life stages of European moths and butterflies, because he believed that caterpillars were generated from water. When Merian published her study of caterpillars, it was still widely believed that insects were spontaneously generated; her illustrations supported the findings of Francesco Redi, Marcello Malpighi and Jan Swammerdam.
Butterflies have been regarded as a symbol of the human soul since ancient times, including in the Christian tradition. Goedart thus located his empirical observations on the transformation of caterpillars into butterflies within the Christian tradition, arguing that the metamorphosis from caterpillar into butterfly was a symbol, and even proof, of Christ's resurrection. He argued "that from dead caterpillars emerge living animals; so it is equally true and miraculous, that our dead and rotten corpses will rise from the grave." Swammerdam, who in 1669 had demonstrated that the rudiments of the future butterfly's limbs and wings could be discerned inside a caterpillar, attacked the mystical and religious notion that the caterpillar died and the butterfly was subsequently resurrected. A militant Cartesian, Swammerdam dismissed Goedart's view as ridiculous, and when publishing his findings he proclaimed "here we witness the digression of those who have tried to prove Resurrection of the Dead from these obviously natural and comprehensible changes within the creature itself."
Since then, the metamorphosis of the caterpillar into a butterfly has in Western societies been associated with countless human transformations in folktales and literature. Because no process in the physical life of human beings resembles this metamorphosis, the symbol of the caterpillar tends to depict a psychic transformation; in the Christian tradition the caterpillar has thus become a metaphor for being "born again".
Famously, in Lewis Carroll's Alice's Adventures in Wonderland a caterpillar asks Alice "Who are you?". When Alice comments on the caterpillar's inevitable transformation into a butterfly, the caterpillar champions the position that, in spite of changes, it is still possible to know something, and that Alice is the same Alice at the beginning and end of a considerable interval. When the Caterpillar asks Alice to clarify a point, the child replies "I'm afraid I can't put it more clearly... for I can't understand it myself, to begin with, and being so many different sizes in a day is very confusing". Here Carroll satirizes René Descartes, the founder of Cartesian philosophy, and his theory of innate ideas. Descartes argued that we are distracted by urgent bodily stimuli that swamp the human mind in childhood; he also theorised that inherited preconceived opinions obstruct the human perception of the truth.
More recent symbolic references to caterpillars in popular media include the Mad Men season 3 episode "The Fog", in which Betty Draper has a drug-induced dream, while in labor, that she captures a caterpillar and holds it firmly in her hand. In The Sopranos season 5 episode "The Test Dream", Tony Soprano dreams that Ralph Cifaretto has a caterpillar on his bald head that changes into a butterfly.
| Biology and health sciences | Lepidoptera | Animals |
48338 | https://en.wikipedia.org/wiki/Butterfly | Butterfly | Butterflies are winged insects from the lepidopteran suborder Rhopalocera, characterized by large, often brightly coloured wings that often fold together when at rest, and a conspicuous, fluttering flight. The group comprises the superfamilies Hedyloidea (moth-butterflies in the Americas) and Papilionoidea (all others). The oldest butterfly fossils have been dated to the Paleocene, about 56 million years ago, though molecular evidence suggests that they likely originated in the Cretaceous.
Butterflies have a four-stage life cycle, and like other holometabolous insects they undergo complete metamorphosis. Winged adults lay eggs on the food plant on which their larvae, known as caterpillars, will feed. The caterpillars grow, sometimes very rapidly, and when fully developed, pupate in a chrysalis. When metamorphosis is complete, the pupal skin splits, the adult insect climbs out, expands its wings to dry, and flies off.
Some butterflies, especially in the tropics, have several generations in a year, while others have a single generation, and a few in cold locations may take several years to pass through their entire life cycle.
Butterflies are often polymorphic, and many species make use of camouflage, mimicry, and aposematism to evade their predators. Some, like the monarch and the painted lady, migrate over long distances. Many butterflies are attacked by parasites or parasitoids, including wasps, protozoans, flies, and other invertebrates, or are preyed upon by other organisms. Some species are pests because in their larval stages they can damage domestic crops or trees; other species are agents of pollination of some plants. Larvae of a few butterflies (e.g., harvesters) eat harmful insects, and a few are predators of ants, while others live as mutualists in association with ants. Culturally, butterflies are a popular motif in the visual and literary arts. The Smithsonian Institution says "butterflies are certainly one of the most appealing creatures in nature".
Etymology
The Oxford English Dictionary derives the word straightforwardly from Old English butorflēoge, butter-fly; similar names in Old Dutch and Old High German show that the name is ancient, but modern Dutch and German use different words (vlinder and Schmetterling) and the common name often varies substantially between otherwise closely related languages. A possible source of the name is the bright yellow male of the brimstone (Gonepteryx rhamni); another is that butterflies were on the wing in meadows during the spring and summer butter season while the grass was growing.
Paleontology
The earliest Lepidoptera fossils date to the Triassic–Jurassic boundary, around 200 million years ago. Butterflies evolved from moths, so while the butterflies are monophyletic (forming a single clade), the moths are not. The oldest known butterfly is Protocoeliades kristenseni from the Palaeocene-aged Fur Formation of Denmark, approximately 55 million years old, which belongs to the family Hesperiidae (skippers). Molecular clock estimates suggest that butterflies originated sometime in the Late Cretaceous, but only significantly diversified during the Cenozoic, with one study suggesting a North American origin for the group. The oldest American butterfly is the Late Eocene Prodryas persephone from the Florissant Fossil Beds, approximately 34 million years old.
Taxonomy and phylogeny
Butterflies are divided into seven families that contain a total of about 20,000 species.
Traditionally, butterflies have been divided into the superfamilies Papilionoidea and the moth-like Hedyloidea. Recent work has discovered that Hedylidae, the only family within Hedyloidea, is nested within the Papilionoidea, meaning that Papilionoidea is synonymous with Rhopalocera. The relationships among the remaining six families are now well resolved.
Biology
General description
Butterfly adults are characterized by their four scale-covered wings, which give the Lepidoptera their name (Ancient Greek λεπίς lepís, scale + πτερόν pterón, wing). These scales give butterfly wings their colour: they are pigmented with melanins that give them blacks and browns, as well as uric acid derivatives and flavones that give them yellows, but many of the blues, greens, reds and iridescent colours are created by structural coloration produced by the micro-structures of the scales and hairs.
As in all insects, the body is divided into three sections: the head, thorax, and abdomen. The thorax is composed of three segments, each with a pair of legs. In most families of butterfly the antennae are clubbed, unlike those of moths which may be threadlike or feathery. The long proboscis can be coiled when not in use for sipping nectar from flowers.
Nearly all butterflies are diurnal, have relatively bright colours, and hold their wings vertically above their bodies when at rest, unlike the majority of moths which fly by night, are often cryptically coloured (well camouflaged), and either hold their wings flat (touching the surface on which the moth is standing) or fold them closely over their bodies. Some day-flying moths, such as the hummingbird hawk-moth, are exceptions to these rules.
Butterfly larvae, caterpillars, have a hard (sclerotised) head with strong mandibles used for cutting their food, most often leaves. They have cylindrical bodies, with ten segments to the abdomen, generally with short prolegs on segments 3–6 and 10; the three pairs of true legs on the thorax have five segments each. Many are well camouflaged; others are aposematic with bright colours and bristly projections containing toxic chemicals obtained from their food plants. The pupa or chrysalis, unlike that of moths, is not wrapped in a cocoon.
Many butterflies are sexually dimorphic. Most butterflies have the ZW sex-determination system where females are the heterogametic sex (ZW) and males homogametic (ZZ).
Distribution and migration
Butterflies are distributed worldwide except Antarctica, totalling some 18,500 species. Of these, 775 are Nearctic; 7,700 Neotropical; 1,575 Palearctic; 3,650 Afrotropical; and 4,800 are distributed across the combined Oriental and Australian/Oceania regions. The monarch butterfly is native to the Americas, but in the nineteenth century or before, spread across the world, and is now found in Australia, New Zealand, other parts of Oceania, and the Iberian Peninsula. It is not clear how it dispersed; adults may have been blown by the wind or larvae or pupae may have been accidentally transported by humans, but the presence of suitable host plants in their new environment was a necessity for their successful establishment.
Many butterflies, such as the painted lady, monarch, and several danaines, migrate for long distances. These migrations take place over a number of generations and no single individual completes the whole trip. The eastern North American population of monarchs can travel thousands of miles south-west to overwintering sites in Mexico, with a reverse migration in the spring. It has recently been shown that the British painted lady undertakes a 9,000-mile round trip in a series of steps by up to six successive generations, from tropical Africa to the Arctic Circle — almost double the length of the famous migrations undertaken by the monarch. Spectacular large-scale migrations associated with the monsoon are seen in peninsular India. Migrations have been studied in more recent times using wing tags and also using stable hydrogen isotopes.
Butterflies navigate using a time-compensated sun compass. They can see polarized light and therefore orient even in cloudy conditions. The polarized light near the ultraviolet spectrum appears to be particularly important. Many migratory butterflies live in semi-arid areas where breeding seasons are short. The life histories of their host plants also influence butterfly behaviour.
Life cycle
Butterflies in their adult stage can live from a week to nearly a year depending on the species. Many species have long larval life stages while others can remain dormant in their pupal or egg stages and thereby survive winters. The Melissa Arctic (Oeneis melissa) overwinters twice as a caterpillar. Butterflies may have one or more broods per year. The number of generations per year varies from temperate to tropical regions with tropical regions showing a trend towards multivoltinism.
Courtship is often aerial and often involves pheromones. Butterflies then land on the ground or on a perch to mate. Copulation takes place tail-to-tail and may last from minutes to hours. Simple photoreceptor cells located at the genitals are important for this and other adult behaviours. The male passes a spermatophore to the female; to reduce sperm competition, he may cover her with his scent, or, in some species such as the Apollos (Parnassius), plug her genital opening to prevent her from mating again.
The vast majority of butterflies have a four-stage life cycle: egg, larva (caterpillar), pupa (chrysalis) and imago (adult). In the genera Colias, Erebia, Euchloe, and Parnassius, a small number of species are known that reproduce semi-parthenogenetically; when the female dies, a partially developed larva emerges from her abdomen.
Eggs
Butterfly eggs are protected by a hard-ridged outer layer of shell, called the chorion. This is lined with a thin coating of wax which prevents the egg from drying out before the larva has had time to fully develop. Each egg contains a number of tiny funnel-shaped openings at one end, called micropyles; the purpose of these holes is to allow sperm to enter and fertilize the egg. Butterfly eggs vary greatly in size and shape between species, but are usually upright and finely sculptured. Some species lay eggs singly, others in batches. Many females produce between one hundred and two hundred eggs.
Butterfly eggs are fixed to a leaf with a special glue which hardens rapidly. As it hardens it contracts, deforming the shape of the egg. This glue is easily seen surrounding the base of every egg forming a meniscus. The nature of the glue has been little researched but in the case of Pieris brassicae, it begins as a pale yellow granular secretion containing acidophilic proteins. This is viscous and darkens when exposed to air, becoming a water-insoluble, rubbery material which soon sets solid. Butterflies in the genus Agathymus do not fix their eggs to a leaf; instead, the newly laid eggs fall to the base of the plant.
Eggs are almost invariably laid on plants. Each species of butterfly has its own host plant range and while some species of butterfly are restricted to just one species of plant, others use a range of plant species, often including members of a common family. In some species, such as the great spangled fritillary, the eggs are deposited close to but not on the food plant. This most likely happens when the egg overwinters before hatching and where the host plant loses its leaves in winter, as do violets in this example.
The egg stage lasts a few weeks in most butterflies, but eggs laid close to winter, especially in temperate regions, go through a diapause (resting) stage, and the hatching may take place only in spring. Some temperate region butterflies, such as the Camberwell beauty, lay their eggs in the spring and have them hatch in the summer.
Caterpillar larva
Butterfly larvae, or caterpillars, consume plant leaves and spend practically all of their time searching for and eating food. Although most caterpillars are herbivorous, a few species are predators: Spalgis epius eats scale insects, while lycaenids such as Liphyra brassolis are myrmecophilous, eating ant larvae.
Some larvae, especially those of the Lycaenidae, form mutual associations with ants. They communicate with the ants using vibrations that are transmitted through the substrate as well as using chemical signals. The ants provide some degree of protection to these larvae and they in turn gather honeydew secretions. Large blue (Phengaris arion) caterpillars trick Myrmica ants into taking them back to the ant colony where they feed on the ant eggs and larvae in a parasitic relationship.
Caterpillars mature through a series of developmental stages known as instars. Near the end of each stage, the larva undergoes a process called apolysis, mediated by the release of a series of neurohormones. During this phase, the cuticle, a tough outer layer made of a mixture of chitin and specialized proteins, is released from the softer epidermis beneath, and the epidermis begins to form a new cuticle. At the end of each instar, the larva moults, the old cuticle splits and the new cuticle expands, rapidly hardening and developing pigment.
Caterpillars have short antennae and several simple eyes. The mouthparts are adapted for chewing, with powerful mandibles and a pair of maxillae, each with a segmented palp. Adjoining these is the labium-hypopharynx, which houses a tubular spinneret able to extrude silk. Caterpillars such as those in the genus Calpodes (family Hesperiidae) have a specialized tracheal system on the 8th segment that functions as a primitive lung. Butterfly caterpillars have three pairs of true legs on the thoracic segments and up to six pairs of prolegs arising from the abdominal segments. These prolegs have rings of tiny hooks called crochets that are engaged hydrostatically and help the caterpillar grip the substrate. The epidermis bears tufts of setae, the position and number of which help in identifying the species. There is also decoration in the form of hairs, wart-like protuberances, horn-like protuberances and spines. Internally, most of the body cavity is taken up by the gut, but there may also be large silk glands, and special glands which secrete distasteful or toxic substances. The developing wings are present in later-stage instars, and the gonads start development in the egg stage.
Pupa
When the larva is fully grown, hormones such as prothoracicotropic hormone (PTTH) are produced. At this point the larva stops feeding, and begins "wandering" in the quest for a suitable pupation site, often the underside of a leaf or other concealed location. There it spins a button of silk which it uses to fasten its body to the surface and moults for a final time. While some caterpillars spin a cocoon to protect the pupa, most species do not. The naked pupa, often known as a chrysalis, usually hangs head down from the cremaster, a spiny pad at the posterior end, but in some species a silken girdle may be spun to keep the pupa in a head-up position. Most of the tissues and cells of the larva are broken down inside the pupa, as the constituent material is rebuilt into the imago. The structure of the transforming insect is visible from the exterior, with the wings folded flat on the ventral surface and the two halves of the proboscis, with the antennae and the legs between them.
The pupal transformation into a butterfly through metamorphosis has held great appeal to mankind. To transform from the miniature wings visible on the outside of the pupa into large structures usable for flight, the pupal wings undergo rapid mitosis and absorb a great deal of nutrients. If one wing is surgically removed early on, the other three will grow to a larger size. In the pupa, the wing forms a structure that becomes compressed from top to bottom and pleated from proximal to distal ends as it grows, so that it can rapidly be unfolded to its full adult size. Several boundaries seen in the adult colour pattern are marked by changes in the expression of particular transcription factors in the early pupa.
Adult
The reproductive stage of the insect is the winged adult or imago. The surface of both butterflies and moths is covered by scales, each of which is an outgrowth from a single epidermal cell. The head is small and dominated by the two large compound eyes. These are capable of distinguishing flower shapes or motion but cannot view distant objects clearly. Colour perception is good, especially in some species in the blue/violet range. The antennae are composed of many segments and have clubbed tips (unlike moths that have tapering or feathery antennae). The sensory receptors are concentrated in the tips and can detect odours. Taste receptors are located on the palps and on the feet. The mouthparts are adapted to sucking and the mandibles are usually reduced in size or absent. The first maxillae are elongated into a tubular proboscis which is curled up at rest and expanded when needed to feed. The first and second maxillae bear palps which function as sensory organs. Some species have a reduced proboscis or maxillary palps and do not feed as adults.
Many Heliconius butterflies also use their proboscis to feed on pollen; in these species only 20% of the amino acids used in reproduction come from larval feeding, which allows them to develop more quickly as caterpillars and gives them a longer lifespan of several months as adults.
The thorax of the butterfly is devoted to locomotion. Each of the three thoracic segments has two legs (among nymphalids, the first pair is reduced and the insects walk on four legs). The second and third segments of the thorax bear the wings. The leading edges of the forewings have thick veins to strengthen them, and the hindwings are smaller and more rounded and have fewer stiffening veins. The forewings and hindwings are not hooked together (as they are in moths) but are coordinated by the friction of their overlapping parts. The front two segments have a pair of spiracles which are used in respiration.
The abdomen consists of ten segments and contains the gut and genital organs. The front eight segments have spiracles and the terminal segment is modified for reproduction. The male has a pair of clasping organs attached to a ring structure, and during copulation, a tubular structure is extruded and inserted into the female's vagina. A spermatophore is deposited in the female, following which the sperm make their way to a seminal receptacle where they are stored for later use. In both sexes, the genitalia are adorned with various spines, teeth, scales and bristles, which act to prevent the butterfly from mating with an insect of another species. After it emerges from its pupal stage, a butterfly cannot fly until the wings are unfolded. A newly emerged butterfly needs to spend some time inflating its wings with hemolymph and letting them dry, during which time it is extremely vulnerable to predators.
Pattern formation
The colourful patterns on many butterfly wings tell potential predators that they are toxic. Hence, the genetic basis of wing pattern formation can illuminate both the evolution of butterflies and their developmental biology. The colour of butterfly wings is derived from tiny structures called scales, each of which has its own pigments. In Heliconius butterflies, there are three types of scales: yellow/white, black, and red/orange/brown. Some mechanisms of wing pattern formation are now being worked out using genetic techniques. For instance, a gene called cortex determines the colour of scales: deleting cortex turns black and red scales yellow. Mutations, e.g. transposon insertions in the non-coding DNA around the cortex gene, can turn a black-winged butterfly into a butterfly with a yellow wing band.
Mating
When the butterfly Bicyclus anynana is subjected to repeated inbreeding in the laboratory, there is a dramatic decrease in egg hatching. This severe inbreeding depression is likely due to a relatively high mutation rate to recessive alleles with substantial damaging effects, combined with episodes of inbreeding in nature being too infrequent to purge such mutations. Although B. anynana experiences inbreeding depression when forcibly inbred in the laboratory, it recovers within a few generations when allowed to breed freely. During mate selection, adult females do not innately avoid or learn to avoid siblings, implying that such detection may not be critical to reproductive fitness. Inbreeding may persist in B. anynana because encounters with close relatives are rare in nature; that is, movement ecology may mask the deleterious effects of inbreeding, resulting in relaxed selection for active inbreeding-avoidance behaviors.
Behaviour
Butterflies feed primarily on nectar from flowers. Some also derive nourishment from pollen, tree sap, rotting fruit, dung, decaying flesh, and dissolved minerals in wet sand or dirt. Butterflies are important as pollinators for some species of plants. In general, they do not carry as much pollen load as bees, but they are capable of moving pollen over greater distances. Flower constancy has been observed for at least one species of butterfly.
Adult butterflies consume only liquids, ingested through the proboscis. They sip water from damp patches for hydration and feed on nectar from flowers, from which they obtain sugars for energy, and sodium and other minerals vital for reproduction. Several species of butterflies need more sodium than that provided by nectar and are attracted by sodium in salt; they sometimes land on people, attracted by the salt in human sweat. Some butterflies also visit dung and scavenge rotting fruit or carcasses to obtain minerals and nutrients. In many species, this mud-puddling behaviour is restricted to the males, and studies have suggested that the nutrients collected may be provided as a nuptial gift, along with the spermatophore, during mating.
In hilltopping, males of some species seek hilltops and ridge tops, which they patrol in search for females. Since it usually occurs in species with low population density, it is assumed these landscape points are used as meeting places to find mates.
Butterflies use their antennae to sense the air for wind and scents. The antennae come in various shapes and colours; the hesperiids have a pointed angle or hook to the antennae, while most other families show knobbed antennae. The antennae are richly covered with sensory organs known as sensillae. A butterfly's sense of taste is coordinated by chemoreceptors on the tarsi, or feet, which work only on contact, and are used to determine whether an egg-laying insect's offspring will be able to feed on a leaf before eggs are laid on it. Many butterflies use chemical signals, pheromones; some have specialized scent scales (androconia) or other structures (coremata or "hair pencils" in the Danaidae). Vision is well developed in butterflies and most species are sensitive to the ultraviolet spectrum. Many species show sexual dimorphism in the patterns of UV reflective patches. Colour vision may be widespread but has been demonstrated in only a few species. Some butterflies have organs of hearing and some species make stridulatory and clicking sounds.
Many species of butterfly maintain territories and actively chase other species or individuals that may stray into them. Some species will bask or perch on chosen perches. The flight styles of butterflies are often characteristic and some species have courtship flight displays. Butterflies can only fly when their temperature is above ; when it is cool, they can position themselves to expose the underside of the wings to the sunlight to heat themselves up. If their body temperature reaches , they can orientate themselves with the folded wings edgewise to the sun. Basking is an activity which is more common in the cooler hours of the morning. Some species have evolved dark wingbases to help in gathering more heat and this is especially evident in alpine forms.
As in many other insects, the lift generated by butterflies is more than can be accounted for by steady-state, non-transitory aerodynamics. Studies using Vanessa atalanta in a wind tunnel show that they use a wide variety of aerodynamic mechanisms to generate force. These include wake capture, vortices at the wing edge, rotational mechanisms and the Weis-Fogh 'clap-and-fling' mechanism. Butterflies are able to change from one mode to another rapidly.
Ecology
Parasitoids, predators, and pathogens
Butterflies are threatened in their early stages by parasitoids and in all stages by predators, diseases and environmental factors. Braconid and other parasitic wasps lay their eggs in lepidopteran eggs or larvae and the wasps' parasitoid larvae devour their hosts, usually pupating inside or outside the desiccated husk. Most wasps are very specific about their host species and some have been used as biological controls of pest butterflies like the large white butterfly. When the small cabbage white was accidentally introduced to New Zealand, it had no natural enemies. In order to control it, some pupae that had been parasitised by a chalcid wasp were imported, and natural control was thus regained. Some flies lay their eggs on the outside of caterpillars and the newly hatched fly larvae bore their way through the skin and feed in a similar way to the parasitoid wasp larvae. Predators of butterflies include ants, spiders, wasps, and birds.
Caterpillars are also affected by a range of bacterial, viral and fungal diseases, and only a small percentage of the butterfly eggs laid ever reach adulthood. The bacterium Bacillus thuringiensis has been used in sprays to reduce damage to crops by the caterpillars of the large white butterfly, and the entomopathogenic fungus Beauveria bassiana has proved effective for the same purpose.
Endangered species
Queen Alexandra's birdwing, found in Papua New Guinea, is the largest butterfly in the world. The species is endangered, and is one of only three insects (the other two being butterflies as well) to be listed on Appendix I of CITES, making international trade illegal.
Defences
Butterflies protect themselves from predators by a variety of means.
Chemical defences are widespread and are mostly based on chemicals of plant origin. In many cases the plants themselves evolved these toxic substances as protection against herbivores. Butterflies have evolved mechanisms to sequester these plant toxins and use them instead in their own defence. These defence mechanisms are effective only if they are well advertised; this has led to the evolution of bright colours in unpalatable butterflies (aposematism). This signal is commonly mimicked by other butterflies, usually only females. A Batesian mimic imitates another species to enjoy the protection of that species' aposematism. The common Mormon of India has female morphs which imitate the unpalatable red-bodied swallowtails, the common rose and the crimson rose. Müllerian mimicry occurs when aposematic species evolve to resemble each other, presumably to reduce predator sampling rates; Heliconius butterflies from the Americas are a good example.
Camouflage is found in many butterflies. Some like the oakleaf butterfly and autumn leaf are remarkable imitations of leaves. As caterpillars, many defend themselves by freezing and appearing like sticks or branches. Others have deimatic behaviours, such as rearing up and waving their front ends which are marked with eyespots as if they were snakes. Some papilionid caterpillars such as the giant swallowtail (Papilio cresphontes) resemble bird droppings so as to be passed over by predators. Some caterpillars have hairs and bristly structures that provide protection while others are gregarious and form dense aggregations. Some species are myrmecophiles, forming mutualistic associations with ants and gaining their protection. Behavioural defences include perching and angling the wings to reduce shadow and avoid being conspicuous. Some female Nymphalid butterflies guard their eggs from parasitoidal wasps.
The Lycaenidae have a false head consisting of eyespots and small tails (false antennae) to deflect attack from the more vital head region. These may also cause ambush predators such as spiders to approach from the wrong end, enabling the butterflies to detect attacks promptly. Many butterflies have eyespots on the wings; these too may deflect attacks, or may serve to attract mates.
Auditory defences can also be used, which in the case of the grizzled skipper refers to vibrations generated by the butterfly upon expanding its wings in an attempt to communicate with ant predators.
Many tropical butterflies have seasonal forms for dry and wet seasons. These are switched by the hormone ecdysone. The dry-season forms are usually more cryptic, perhaps offering better camouflage when vegetation is scarce. Dark colours in wet-season forms may help to absorb solar radiation.
Butterflies without defences such as toxins or mimicry protect themselves through a flight that is more bumpy and unpredictable than in other species. It is assumed this behavior makes it more difficult for predators to catch them, and is caused by the turbulence created by the small whirlpools formed by the wings during flight.
Declining numbers
Declining butterfly populations have been noticed in many areas of the world, and this phenomenon is consistent with the rapidly decreasing insect populations around the world. At least in the Western United States, this collapse in the number of most species of butterflies has been determined to be driven by global climate change, specifically, by warmer autumns.
In culture
In art and literature
Butterflies have appeared in art from 3500 years ago in ancient Egypt. In hunting scenes, butterflies were sometimes included in a way that suggested life, freedom, and the strength to escape capture, creating a balance to scenes concerned with death and upholding ma'at. They also were suggestive of regeneration or rebirth and protection. Certain butterflies, such as the tiger butterfly, may have been associated with solar deities, particularly Ra. The tiger butterfly also would have a particular resemblance to the ankh, due to its black body and wingtips, that was likely noted by the Ancient Egyptians. Butterflies may also have been understood as one of the deceased's guides in the afterlife.
In the ancient Mesoamerican city of Teotihuacan, the brilliantly coloured image of the butterfly was carved into many temples, buildings, jewellery, and emblazoned on incense burners. The butterfly was sometimes depicted with the maw of a jaguar, and some species were considered to be the reincarnations of the souls of dead warriors. The close association of butterflies with fire and warfare persisted into the Aztec civilisation; evidence of similar jaguar-butterfly images has been found among the Zapotec and Maya civilisations.
Butterflies are widely used in objects of art and jewellery: mounted in frames, embedded in resin, displayed in bottles, laminated in paper, and used in some mixed media artworks and furnishings. The Norwegian naturalist Kjell Sandved compiled a photographic Butterfly Alphabet containing all 26 letters and the numerals 0 to 9 from the wings of butterflies.
Sir John Tenniel drew a famous illustration of Alice meeting a caterpillar for Lewis Carroll's Alice in Wonderland, c. 1865. The caterpillar is seated on a toadstool and is smoking a hookah; the image can be read as showing either the forelegs of the larva, or as suggesting a face with protruding nose and chin. Eric Carle's children's book The Very Hungry Caterpillar portrays the larva as an extraordinarily hungry animal, while also teaching children how to count (to five) and the days of the week.
A butterfly appeared in one of Rudyard Kipling's Just So Stories, "The Butterfly that Stamped".
One of the most popular, and most often recorded, songs by Sweden's eighteenth-century bard, Carl Michael Bellman, is "Fjäriln vingad syns på Haga" (The butterfly wingèd is seen in Haga), one of his Fredman's Songs.
Madam Butterfly is a 1904 opera by Giacomo Puccini about a romantic young Japanese bride who is deserted by her American officer husband soon after they are married. It was based on John Luther Long's short story written in 1898.
In mythology and folklore
According to Lafcadio Hearn, a butterfly was seen in Japan as the personification of a person's soul, whether living, dying, or already dead. One Japanese superstition says that if a butterfly enters your guest room and perches behind the bamboo screen, the person whom you most love is coming to see you. Large numbers of butterflies are viewed as bad omens: when Taira no Masakado was secretly preparing his famous revolt, a vast swarm of butterflies appeared in Kyoto, and the people were frightened, thinking the apparition a portent of coming evil.
Diderot's Encyclopédie cites butterflies as a symbol for the soul. A Roman sculpture depicts a butterfly exiting the mouth of a dead man, representing the Roman belief that the soul leaves through the mouth. In line with this, the ancient Greek word for "butterfly" is ψυχή (psȳchē), which primarily means "soul" or "mind". According to Mircea Eliade, some of the Nagas of Manipur claim ancestry from a butterfly. In some cultures, butterflies symbolise rebirth. The butterfly is a symbol of being transgender, because of the transformation from caterpillar to winged adult. In the English county of Devon, people once hurried to kill the first butterfly of the year, to avoid a year of bad luck. In the Philippines, a lingering black or dark butterfly or moth in the house is taken to mean an impending or recent death in the family. Several American states have chosen an official state butterfly.
Collecting, recording, and rearing
"Collecting" means preserving dead specimens, not keeping butterflies as pets. Collecting butterflies was once a popular hobby; it has now largely been replaced by photography, recording, and rearing butterflies for release into the wild. The zoological illustrator Frederick William Frohawk succeeded in rearing all the butterfly species found in Britain, at a rate of four per year, to enable him to draw every stage of each species. He published the results in the folio sized handbook The Natural History of British Butterflies in 1924.
Butterflies and moths can be reared for recreation or for release.
In technology
Study of the structural coloration of the wing scales of swallowtail butterflies has led to the development of more efficient light-emitting diodes, and is inspiring nanotechnology research to produce paints that do not use toxic pigments and the development of new display technologies.
| Biology and health sciences | Lepidoptera | null |
48339 | https://en.wikipedia.org/wiki/Parachute | Parachute | A parachute is a device used to slow the motion of an object through an atmosphere by creating drag or aerodynamic lift. A major application is to support people, for recreation or as a safety device for aviators, who can exit from an aircraft at height and descend safely to earth.
A parachute is usually made of a light, strong fabric. Early parachutes were made of silk. The most common fabric today is nylon. A parachute's canopy is typically dome-shaped, but some are rectangles, inverted domes, and other shapes.
A variety of loads are attached to parachutes, including people, food, equipment, space capsules, and bombs.
History
Middle Ages
In 852, in Córdoba, Spain, the Moorish man Armen Firman attempted unsuccessfully to fly by jumping from a tower while wearing a large cloak. It was recorded that "there was enough air in the folds of his cloak to prevent great injury when he reached the ground."
Early Renaissance
The earliest evidence for the true parachute dates back to the Renaissance period. The oldest parachute design appears in a manuscript from the 1470s attributed to Francesco di Giorgio Martini (British Library, Add MS 34113, fol. 200v), showing a free-hanging man clutching a crossbar frame attached to a conical canopy. As a safety measure, four straps ran from the ends of the rods to a waist belt. Although the surface area of the parachute design appears to be too small to offer effective air resistance and the wooden base-frame is superfluous and potentially harmful, the basic concept of a working parachute is apparent.
The design is a marked improvement over another folio (189v), which depicts a man trying to break the force of his fall using two long cloth streamers fastened to two bars, which he grips with his hands.
Shortly after, a more sophisticated parachute was sketched by the polymath Leonardo da Vinci in his Codex Atlanticus (fol. 381v) dated to . Here, the scale of the parachute is in a more favorable proportion to the weight of the jumper. A square wooden frame, which alters the shape of the parachute from conical to pyramidal, held open Leonardo's canopy. It is not known whether the Italian inventor was influenced by the earlier design, but he may have learned about the idea through the intensive oral communication among artist-engineers of the time. The feasibility of Leonardo's pyramidal design was successfully tested in 2000 by Briton Adrian Nicholas and again in 2008 by the Swiss skydiver Olivier Vietti-Teppa. According to historian of technology Lynn White, these conical and pyramidal designs, much more elaborate than early artistic jumps with rigid parasols in Asia, mark the origin of "the parachute as we know it."
The Croatian polymath and inventor Fausto Veranzio, or Faust Vrančić (1551–1617), examined da Vinci's parachute sketch and kept the square frame but replaced the canopy with a bulging sail-like piece of cloth that he came to realize decelerates a fall more effectively. A now-famous depiction of a parachute that he dubbed Homo Volans (Flying Man), showing a man parachuting from a tower, presumably St Mark's Campanile in Venice, appeared in his book on mechanics, Machinae Novae ("New Machines", published in 1615 or 1616), alongside a number of other devices and technical concepts.
It was once widely believed that in 1617, Veranzio, then aged 65 and seriously ill, implemented his design and tested the parachute by jumping from St Mark's Campanile, from a bridge nearby, or from St Martin's Cathedral in Bratislava. Various publications incorrectly claimed the event was documented some thirty years later by John Wilkins, a founder and the secretary of the Royal Society in London, in his book Mathematical Magick or, the Wonders that may be Performed by Mechanical Geometry, published in London in 1648. However, Wilkins wrote about flying, not parachutes, and does not mention Veranzio, a parachute jump, or any event in 1617. These doubts, including the lack of written evidence, suggest the test never occurred and that the story arose from a misreading of historical notes.
18th and 19th centuries
The modern parachute was invented in the late 18th century by Louis-Sébastien Lenormand in France, who made the first recorded public jump in 1783. Lenormand also sketched his device beforehand.
Two years later, in 1785, Lenormand coined the word "parachute" by hybridizing an Italian prefix para, an imperative form of parare = to avert, defend, resist, guard, shield or shroud, from paro = to parry, and chute, the French word for fall, to describe the aeronautical device's real function.
Also in 1785, Jean-Pierre Blanchard demonstrated it as a means of safely disembarking from a hot-air balloon. While Blanchard's first parachute demonstrations were conducted with a dog as the passenger, he later claimed to have had the opportunity to try it himself in 1793 when his hot air balloon ruptured, and he used a parachute to descend. (This event was not witnessed by others.)
On 12 October 1799, Jeanne Geneviève Garnerin ascended in a gondola attached to a balloon. At 900 meters she detached the gondola from the balloon and descended in the gondola by parachute. In doing so, she became the first woman to parachute. She went on to complete many ascents and parachute descents in towns across France and Europe.
Subsequent development of the parachute focused on it becoming more compact. While the early parachutes were made of linen stretched over a wooden frame, in the late 1790s, Blanchard began making parachutes from folded silk, taking advantage of silk's strength and light weight. In 1797, André Garnerin made the first descent of a "frameless" parachute covered in silk. In 1804, Jérôme Lalande introduced a vent in the canopy to eliminate violent oscillations. In 1887, Park Van Tassel and Thomas Scott Baldwin invented a parachute in San Francisco, California, with Baldwin making the first successful parachute jump in the western United States.
Eve of World War I
In 1907 Charles Broadwick demonstrated two key advances in the parachute he used to jump from hot air balloons at fairs: he folded his parachute into a backpack, and the parachute was pulled from the pack by a static line attached to the balloon. When Broadwick jumped from the balloon, the static line became taut, pulled the parachute from the pack, and then snapped.
In 1911 a successful test took place with a dummy at the Eiffel Tower in Paris. The dummy's weight was ; the parachute's weight was . The cables between the dummy and the parachute were long. On February 4, 1912, Franz Reichelt jumped to his death from the tower during initial testing of his wearable parachute.
Also in 1911, Grant Morton made the first parachute jump from an airplane, a Wright Model B piloted by Phil Parmalee, at Venice Beach, California. Morton's device was of the "throw-out" type where he held the parachute in his arms as he left the aircraft. In the same year (1911), Russian Gleb Kotelnikov invented the first knapsack parachute, although Hermann Lattemann and his wife Käthe Paulus had been jumping with bagged parachutes in the last decade of the 19th century.
In 1912, on a road near Tsarskoye Selo, years before it became part of St. Petersburg, Kotelnikov successfully demonstrated the braking effects of a parachute by accelerating a Russo-Balt automobile to its top speed and then opening a parachute attached to the back seat, thus also inventing the drogue parachute.
On 1 March 1912, U.S. Army Captain Albert Berry made the first (attached-type) parachute jump in the United States from a fixed-wing aircraft, a Benoist pusher, while flying above Jefferson Barracks, St. Louis, Missouri. The jump utilized a parachute stored or housed in a cone-shaped casing under the airplane and attached to a harness on the jumper's body.
Štefan Banič patented an umbrella-like design in 1914 and sold (or donated) the patent to the United States military, which later modified his design, resulting in the first military parachute. Banič is sometimes credited as the first person to patent the parachute, and his design was among the first to function properly in the 20th century.
On June 21, 1913, Georgia Broadwick became the first woman to parachute-jump from a moving aircraft, doing so over Los Angeles, California. In 1914, while doing demonstrations for the U.S. Army, Broadwick deployed her chute manually, thus becoming the first person to jump free-fall.
World War I
The first military use of the parachute was by artillery observers on tethered observation balloons in World War I. These were tempting targets for enemy fighter aircraft, though difficult to destroy, due to their heavy anti-aircraft defenses. Because it was difficult to escape from them, and dangerous when on fire due to their hydrogen inflation, observers would abandon them and descend by parachute as soon as enemy aircraft were seen. The ground crew would then attempt to retrieve and deflate the balloon as quickly as possible. The main part of the parachute was in a bag suspended from the balloon, with the pilot wearing only a simple waist harness attached to the main parachute. When the balloon crew jumped, the main part of the parachute was pulled from the bag by the crew's waist harness, first the shroud lines, followed by the main canopy. This type of parachute was first adopted on a large scale for observation balloon crews by the Germans, and then later by the British and French. While this type of unit worked well from balloons, it had mixed results when used on fixed-wing aircraft by the Germans, where the bag was stored in a compartment directly behind the pilot. In many instances where it did not work, the shroud lines became entangled with the spinning aircraft. Although this type of parachute saved a number of famous German fighter pilots, including Hermann Göring, no parachutes were issued to the crews of Allied "heavier-than-air" aircraft. It has been claimed that the reason was to avoid pilots jumping from the plane when hit rather than trying to save the aircraft, but Air Vice-Marshal Arthur Gould Lee, himself a pilot during the war, examined the British War Office files after the war and found no evidence to support such a claim.
Airplane cockpits at that time also were not large enough to accommodate a pilot and a parachute, since a seat that would fit a pilot wearing a parachute would be too large for a pilot not wearing one. This is why the German type was stowed in the fuselage rather than being of the "backpack" type. Weight was, at least at the very beginning, also a consideration, since planes had limited load capacity. Carrying a parachute impeded performance and reduced the useful offensive and fuel load.
In the UK, Everard Calthrop, a railway engineer and breeder of Arab horses, invented and marketed through his Aerial Patents Company a "British Parachute" and the "Guardian Angel" parachute. As part of an investigation into Calthrop's design, on 13 January 1917, test pilot Clive Franklyn Collett successfully jumped from a Royal Aircraft Factory BE.2c flying over Orford Ness Experimental Station at . He repeated the experiment several days later.
Following on from Collett, balloon officer Thomas Orde-Lees, known as the "Mad Major", successfully jumped from Tower Bridge in London, which led to the balloonists of the Royal Flying Corps using parachutes, though they were still not issued for use in aircraft.
In 1911, Solomon Lee Van Meter, Jr. of Lexington, Kentucky, submitted an application for, and in July 1916 received, a patent for a backpack style parachute – the Aviatory Life Buoy. His self-contained device featured a revolutionary quick-release mechanism – the ripcord – that allowed a falling aviator to expand the canopy only when safely away from the disabled aircraft.
Otto Heinecke, a German airship ground crewman, designed a parachute which the German air service introduced in 1918, becoming the world's first air service to introduce a standard parachute. The Schroeder company of Berlin manufactured Heinecke's design. The first successful use of this parachute was by Leutnant Helmut Steinbrecher of Jagdstaffel 46, who bailed out on 27 June 1918 from his stricken fighter airplane, the first pilot in history to successfully do so. Although many pilots were saved by the Heinecke design, its efficacy was relatively poor: out of the first 70 German airmen to bail out, around a third died. These fatalities were mostly due to the chute or ripcord becoming entangled in the airframe of the spinning aircraft, or to harness failure, a problem fixed in later versions.
The French, British, American and Italian air services later based their first parachute designs on the Heinecke parachute to varying extents.
In the UK, Sir Frank Mears, who was serving as a Major in the Royal Flying Corps in France (Kite Balloon section), registered a patent in July 1918 for a parachute with a quick release buckle, known as the "Mears parachute", which was in common use from then onwards.
Post-World War I
The experience with parachutes during the war highlighted the need to develop a design that could be reliably used to exit a disabled airplane. For instance, tethered parachutes did not work well when the aircraft was spinning. After the war, Major Edward L. Hoffman of the United States Army led an effort to develop an improved parachute by bringing together the best elements of multiple parachute designs. Participants in the effort included Leslie Irvin and James Floyd Smith. The team eventually created the Airplane Parachute Type-A. This incorporated three key elements:
storing the parachute in a soft pack worn on the back, as demonstrated by Charles Broadwick in 1906;
a ripcord for manually deploying the parachute at a safe distance from the airplane, from a design by Albert Leo Stevens; and
a pilot chute that draws the main canopy from the pack.
In 1919, Irvin successfully tested the parachute by jumping from an airplane. The Type-A parachute was put into production and over time saved a number of lives. The effort was recognized by the awarding of the Robert J. Collier Trophy to Major Edward L. Hoffman in 1926.
Irvin became the first person to make a premeditated free-fall parachute jump from an airplane. An early brochure of the Irvin Air Chute Company credits William O'Connor as having become, on 24 August 1920, at McCook Field near Dayton, Ohio, the first person to be saved by an Irvin parachute. Test pilot Lt. Harold R. Harris made another life-saving jump at McCook Field on 20 October 1922. Shortly after Harris' jump, two Dayton newspaper reporters suggested the creation of the Caterpillar Club for successful parachute jumps from disabled aircraft.
Beginning with Italy in 1927, several countries experimented with using parachutes to drop soldiers behind enemy lines. The regular Soviet Airborne Troops were established as early as 1931, after a number of experimental military mass jumps starting from 2 August 1930. Earlier in 1930, the first Soviet mass jumps had led to the development of sport parachuting in the Soviet Union. By the time of World War II, large airborne forces were trained and used in surprise attacks, as in the battles for Fort Eben-Emael and The Hague, the first large-scale, opposed landings of paratroopers in military history, by the Germans. This was followed later in the war by airborne assaults on a larger scale, such as the Battle of Crete and Operation Market Garden, the latter being the largest airborne military operation ever. Aircraft crew were routinely equipped with parachutes for emergencies as well.
In 1937, drag chutes were used in aviation for the first time, by Soviet airplanes in the Arctic that were providing support for the polar expeditions of the era, such as the first drifting ice station, North Pole-1. The drag chute allowed airplanes to land safely on smaller ice floes.
Most parachutes were made of silk until World War II cut off supplies from Japan. After Adeline Gray made the first jump using a nylon parachute in June 1942, the industry switched to nylon.
Types
Today's modern parachutes are classified into two categories: ascending and descending canopies. All ascending canopies are paragliders, built specifically to ascend and stay aloft as long as possible. Other parachutes, including non-elliptical ram-air designs, are classified as descending canopies by manufacturers.
Some modern parachutes are classified as semi-rigid wings, which are maneuverable and can make a controlled descent to collapse on impact with the ground.
Round
Round parachutes are purely a drag device (that is, unlike the ram-air types, they provide no lift) and are used in military, emergency and cargo applications (e.g. airdrops). Most have large dome-shaped canopies made from a single layer of triangular cloth gores. Some skydivers call them "jellyfish 'chutes" because of the resemblance to the marine organisms. Modern sports parachutists rarely use this type.
The first round parachutes were simple, flat circulars. These early parachutes suffered from instability caused by oscillations. A hole in the apex helped to vent some air and reduce the oscillations. Many military applications adopted conical, i.e., cone-shaped, or parabolic (a flat circular canopy with an extended skirt) shapes, such as the United States Army T-10 static-line parachute. A round parachute with no holes in it is more prone to oscillate and is not considered to be steerable. Some parachutes have inverted dome-shaped canopies. These are primarily used for dropping non-human payloads due to their faster rate of descent.
Forward speed (5–13 km/h) and steering can be achieved by cuts in various sections (gores) across the back, or by cutting four lines in the back, thereby modifying the canopy shape to allow air to escape from the back of the canopy, providing limited forward speed. Other modifications sometimes used are cuts in various gores to cause some of the skirt to bow out. Turning is accomplished by forming the edges of the modifications, giving the parachute more speed from one side of the modification than the other. This gives the jumpers the ability to steer the parachute (such as the United States Army MC series parachutes), enabling them to avoid obstacles and to turn into the wind to minimize horizontal speed at landing.
Cruciform
The unique design characteristics of cruciform parachutes decrease oscillation (the user swinging back and forth) and violent turns during descent. This technology will be used by the United States Army as it replaces its older T-10 parachutes with T-11 parachutes under a program called Advanced Tactical Parachute System (ATPS). The ATPS canopy is a highly modified version of a cross/cruciform platform and is square in appearance. The ATPS system will reduce the rate of descent by 30 percent from to . The T-11 is designed to have an average rate of descent 14% slower than the T-10D, resulting in lower landing-injury rates for jumpers. The decline in the rate of descent will reduce the impact energy by almost 25%, lessening the potential for injury.
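The link between a slower descent and "almost 25%" less impact energy follows from kinetic energy scaling with the square of velocity. A minimal sketch of that arithmetic, using the 14% figure quoted above (the baseline descent rate is an assumption for illustration only, since the exact T-10D and T-11 rates are not given here):

```python
# Kinetic energy at landing scales with the square of descent rate:
# E = 0.5 * m * v**2, so a 14% slower descent cuts energy by 1 - 0.86**2.

baseline_rate = 6.7                        # m/s, assumed T-10D-like rate (illustrative)
slower_rate = baseline_rate * (1 - 0.14)   # T-11 descends ~14% slower

energy_ratio = (slower_rate / baseline_rate) ** 2
print(f"Remaining impact energy: {energy_ratio:.2%}")  # ~73.96%
print(f"Energy reduction: {1 - energy_ratio:.2%}")     # ~26%, i.e. 'almost 25%'
```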
Pull-down apex
A variation on the round parachute is the pull-down apex parachute, invented by the Frenchman Pierre-Marcel Lemoigne. The first widely used canopy of this type was called the Para-Commander (made by the Pioneer Parachute Co.), although many other canopies with a pull-down apex were produced in the years thereafter, with minor differences aimed at higher performance, such as different venting configurations. They are all considered 'round' parachutes, but with suspension lines to the canopy apex that apply load there and pull the apex closer to the load, distorting the round shape into a somewhat flattened or lenticular shape when viewed from the side. And while called rounds, they generally have an elliptical shape when viewed from above or below, with the sides bulging out more than the fore-and-aft dimension, the chord.
Due to their lenticular shape and appropriate venting, they have a considerably faster forward speed than, say, a modified military canopy. And due to controllable rear-facing vents in the canopy's sides, they also have much snappier turning capabilities, though they are decidedly low-performance compared to today's ram-air rigs. From about the mid-1960s to the late-1970s, this was the most popular parachute design type for sport parachuting (prior to this period, modified military 'rounds' were generally used and after, ram-air 'squares' became common). Note that the use of the word elliptical for these 'round' parachutes is somewhat dated and may cause slight confusion, since some 'squares' (i.e. ram-airs) are elliptical nowadays, too.
Annular
Some designs with a pull-down apex have the fabric removed from the apex to open a hole through which air can exit (most, if not all, round canopies have at least a small hole to allow easier tie-down for packing; these are not considered annular), giving the canopy an annular geometry. This hole can be very pronounced in some designs, occupying more area than the remaining canopy fabric. Annular canopies also have decreased horizontal drag due to their flatter shape and, when combined with rear-facing vents, can have considerable forward speed. Truly annular designs, with a hole large enough that the canopy can be classified as ring-shaped, are uncommon.
Rogallo wing
Sport parachuting has experimented with the Rogallo wing, among other shapes and forms. These were usually an attempt to increase the forward speed and reduce the landing speed offered by the other options at the time. The ram-air parachute's development and the subsequent introduction of the sail slider to slow deployment reduced the level of experimentation in the sport parachuting community. The parachutes are also hard to build.
Ribbon and ring
Ribbon and ring parachutes have similarities to annular designs. They are frequently designed to deploy at supersonic speeds. A conventional parachute would instantly burst upon opening and be shredded at such speeds. Ribbon parachutes have a ring-shaped canopy, often with a large hole in the centre to release the pressure. Sometimes the ring is broken into ribbons connected by ropes to leak air even more. These large leaks lower the stress on the parachute so it does not burst or shred when it opens. Ribbon parachutes made of Kevlar are used on nuclear bombs, such as the B61 and B83.
Ram-air
The principle of the Ram-Air Multicell Airfoil was conceived in 1963 by Canadian Domina "Dom" C. Jalbert, but serious problems had to be solved before a ram-air canopy could be marketed to the sport parachuting community. Ram-air parafoils are steerable (as are most canopies used for sport parachuting), and have two layers of fabric—top and bottom—connected by airfoil-shaped fabric ribs to form "cells". The cells fill with higher-pressure air from vents that face forward on the leading edge of the airfoil. The fabric is shaped and the parachute lines trimmed under load such that the ballooning fabric inflates into an airfoil shape. This airfoil is sometimes maintained by use of fabric one-way valves called airlocks. "The first jump of this canopy (a Jalbert Parafoil) was made by International Skydiving Hall of Fame member Paul 'Pop' Poppenhager."
Varieties
Personal ram-air parachutes are loosely divided into two varieties – rectangular or tapered – commonly called "squares" or "ellipticals", respectively. Medium-performance canopies (reserve-, BASE-, canopy formation-, and accuracy-type) are usually rectangular. High-performance, ram-air parachutes have a slightly tapered shape to their leading and/or trailing edges when viewed in plan form, and are known as ellipticals. Sometimes all the taper is on the leading edge (front), and sometimes in the trailing edge (tail).
Ellipticals are usually used only by sport parachutists. They often have smaller, more numerous fabric cells and are shallower in profile. Their canopies can be anywhere from slightly elliptical to highly elliptical, indicating the amount of taper in the canopy design, which is often an indicator of the responsiveness of the canopy to control input for a given wing loading, and of the level of experience required to pilot the canopy safely.
The rectangular parachute designs tend to look like square, inflatable air mattresses with open front ends. They are generally safer to operate because they are less prone to dive rapidly with relatively small control inputs, they are usually flown with lower wing loadings per square foot of area, and they glide more slowly. They typically have a lower glide ratio.
Wing loading of parachutes is measured similarly to that of aircraft, comparing exit weight to the area of parachute fabric. Typical wing loading for students, accuracy competitors, and BASE jumpers is less than 5 kg per square meter, and often considerably less. Most sport jumpers fly with wing loading between 5 and 7 kg per square meter, but many interested in performance landings exceed this. Professional canopy pilots compete with wing loading of 10 to over 15 kilograms per square meter. While ram-air parachutes with wing loading higher than 20 kilograms per square meter have been landed, this is strictly the realm of professional test jumpers.
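Since wing loading is simply exit weight divided by canopy area, the categories above can be illustrated with a few lines of code. A minimal sketch; the jumper weights and canopy sizes are hypothetical assumptions, not figures from this article:

```python
def wing_loading(exit_weight_kg: float, canopy_area_m2: float) -> float:
    """Wing loading: exit weight divided by canopy fabric area (kg per square meter)."""
    return exit_weight_kg / canopy_area_m2

# Hypothetical examples against the quoted category ranges:
student = wing_loading(exit_weight_kg=90, canopy_area_m2=26)   # large student canopy
pro     = wing_loading(exit_weight_kg=90, canopy_area_m2=7.5)  # small competition canopy

print(f"Student: {student:.1f} kg/m^2")      # ~3.5, below the ~5 kg/m^2 student limit
print(f"Pro swooper: {pro:.1f} kg/m^2")      # ~12, within the quoted 10-15 kg/m^2 range
```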
Smaller parachutes tend to fly faster for the same load, and ellipticals respond faster to control input. Therefore, small, elliptical designs are often chosen by experienced canopy pilots for the thrilling flying they provide. Flying a fast elliptical requires much more skill and experience. Fast ellipticals are also considerably more dangerous to land. With high-performance elliptical canopies, nuisance malfunctions can be much more serious than with a square design, and may quickly escalate into emergencies. Flying highly loaded, elliptical canopies is a major contributing factor in many skydiving accidents, although advanced training programs are helping to reduce this danger.
High-speed, cross-braced parachutes, such as the Velocity, VX, XAOS, and Sensei, have given birth to a new branch of sport parachuting called "swooping." A race course is set up in the landing area for expert pilots to measure the distance they are able to fly past the tall entry gate. Current world records exceed .
Aspect ratio is another way to characterize ram-air parachutes. Aspect ratios of parachutes are measured the same way as aircraft wings, by comparing span with chord. Low aspect ratio parachutes, i.e., span 1.8 times the chord, are now limited to precision landing competitions. Popular precision landing parachutes include Jalbert (now NAA) Para-Foils and John Eiff's series of Challenger Classics. While low aspect ratio parachutes tend to be extremely stable, with gentle stall characteristics, they suffer from steep glide ratios and a small tolerance, or "sweet spot", for timing the landing flare.
Because of their predictable opening characteristics, parachutes with a medium aspect ratio around 2.1 are widely used for reserves, BASE, and canopy formation competition. Most medium aspect ratio parachutes have seven cells.
High aspect ratio parachutes have the flattest glide and the largest tolerance for timing the landing flare, but the least predictable openings. An aspect ratio of 2.7 is about the upper limit for parachutes. High aspect ratio canopies typically have nine or more cells. All reserve ram-air parachutes are of the square variety, because of the greater reliability, and the less-demanding handling characteristics.
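For a roughly rectangular canopy, the span-to-chord comparison described above can be expressed directly. A minimal sketch using the class boundaries quoted in this section; the specific span and chord values are hypothetical:

```python
def aspect_ratio(span_m: float, chord_m: float) -> float:
    """Aspect ratio of a roughly rectangular canopy: span divided by chord."""
    return span_m / chord_m

# Quoted class boundaries: ~1.8 (precision), ~2.1 (reserve/BASE), ~2.7 (upper limit).
# Spans and chords below are illustrative assumptions chosen to hit those ratios.
for label, span, chord in [("precision landing", 5.4, 3.0),
                           ("reserve / BASE", 6.3, 3.0),
                           ("high aspect ratio", 8.1, 3.0)]:
    print(f"{label}: AR = {aspect_ratio(span, chord):.1f}")
```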
Paragliders
Paragliders, virtually all of which use ram-air canopies, are more akin to today's sport parachutes than to parachutes of the mid-1970s and earlier. Technically they are ascending parachutes, though that term is not used in the paragliding community. They share the basic airfoil design of today's 'square' or 'elliptical' sport parachuting canopy, but generally have more sectioned cells, a higher aspect ratio, and a lower profile. Cell count varies widely, typically from the high 20s to the 70s, while aspect ratio can be 8 or more (though projected aspect ratio for such a canopy might be down at 6 or so), both far higher than a representative skydiver's parachute. The wing span is typically so great that the canopy is far closer to a very elongated rectangle or ellipse than to a square, and the term 'square' is rarely used by paraglider pilots. Similarly, span might be ~15 m with projected span at 12 m. Canopies are still attached to the harness by suspension lines and (four or six) risers, but lockable carabiners form the final connection to the harness. Modern high-performance paragliders often have the cell openings closer to the bottom of the leading edge, and the end cells might appear to be closed, both for aerodynamic streamlining; these apparently closed end cells are vented and inflated from the adjacent cells, which have venting in the cell walls.
The main difference is in paragliders' usage, typically longer flights that can last all day and hundreds of kilometres in some cases. The harness is also quite different from a parachuting harness and can vary dramatically from ones for the beginner (which might be just a bench seat with nylon material and webbing to ensure the pilot is secure, no matter the position), to seatboardless ones for high altitude and cross-country flights (these are usually full-body cocoon- or hammock-like devices to include the outstretched legs - called speedbags, aerocones, etc. - to ensure aerodynamic efficiency and warmth). In many designs, there will be protection for the back and shoulder areas built-in, and support for a reserve canopy, water container, etc. Some even have windshields.
Because paragliders are made for foot- or ski-launch, they aren't suitable for terminal velocity openings and there is no slider to slow down an opening (paraglider pilots typically start with an open but uninflated canopy). To launch a paraglider, one typically spreads out the canopy on the ground to closely approximate an open canopy with the suspension lines having little slack and less tangle - see more in Paragliding. Depending on the wind, the pilot has three basic options: 1) a running forward launch (typically in no wind or slight wind), 2) a standing launch (in ideal winds) and 3) a reverse launch (in higher winds). In ideal winds, the pilot pulls on the top risers to have the wind inflate the cells and simply eases the brakes down, much like an aircraft's flaps, and takes off. Or if there is no wind, the pilot runs or skis to make it inflate, typically at the edge of a cliff or hill. Once the canopy is above one's head, it's a gentle pull down on both toggles in ideal winds, a tow (say, behind a vehicle) on flat ground, a continued run down the hill, etc. Ground handling in a variety of winds is important and there are even canopies made strictly for that practice, to save on wear and tear of more expensive canopies designed for say, XC, competition or just recreational flying.
General characteristics
Main parachutes used by skydivers today are designed to open softly. Overly rapid deployment was an early problem with ram-air designs. The primary innovation that slows the deployment of a ram-air canopy is the slider: a small rectangular piece of fabric with a grommet near each corner. Four collections of lines pass through the grommets to the risers (strips of webbing joining the harness and the rigging lines of a parachute). During deployment, the slider slides down from the canopy to just above the risers. The slider is slowed by air resistance as it descends and reduces the rate at which the lines can spread. This reduces the speed at which the canopy can open and inflate.
At the same time, the overall design of a parachute still has a significant influence on the deployment speed. Modern sport parachutes' deployment speeds vary considerably. Most modern parachutes open comfortably, but individual skydivers may prefer harsher deployment.
The deployment process is inherently chaotic, and rapid deployments can still occur even with well-behaved canopies. On rare occasions, deployment can be so rapid that the jumper suffers bruising, injury, or death. For faster openings, the slider's air resistance can be reduced by making the slider smaller, inserting a mesh panel, or cutting a hole in the slider.
Deployment
Reserve parachutes usually have a ripcord deployment system, which was first designed by Theodore Moscicki, but most modern main parachutes used by sports parachutists use a form of hand-deployed pilot chute. A ripcord system pulls a closing pin (sometimes multiple pins), which releases a spring-loaded pilot chute, and opens the container; the pilot chute is then propelled into the air stream by its spring, then uses the force generated by passing air to extract a deployment bag containing the parachute canopy, to which it is attached via a bridle. A hand-deployed pilot chute, once thrown into the air stream, pulls a closing pin on the pilot chute bridle to open the container, then the same force extracts the deployment bag. There are variations on hand-deployed pilot chutes, but the system described is the more common throw-out system.
Only the hand-deployed pilot chute may be collapsed automatically after deployment—by a kill line reducing the in-flight drag of the pilot chute on the main canopy. Reserves, on the other hand, do not retain their pilot chutes after deployment. The reserve deployment bag and pilot chute are not connected to the canopy in a reserve system. This is known as a free-bag configuration, and the components are sometimes not recovered after a reserve deployment.
Occasionally, a pilot chute does not generate enough force either to pull the pin or to extract the bag. Causes may be that the pilot chute is caught in the turbulent wake of the jumper (the "burble"), the closing loop holding the pin is too tight, or the pilot chute is generating insufficient force. This effect is known as "pilot chute hesitation," and, if it does not clear, it can lead to a total malfunction, requiring reserve deployment.
Paratroopers' main parachutes are usually deployed by static lines that release the parachute, yet retain the deployment bag that contains the parachute—without relying on a pilot chute for deployment. In this configuration, the deployment bag is known as a direct-bag system, in which the deployment is rapid, consistent, and reliable.
Safety
A parachute is carefully folded, or "packed" to ensure that it will open reliably. If a parachute is not packed properly it can result in a malfunction where the main parachute fails to deploy correctly or fully. In the United States and many developed countries, emergency and reserve parachutes are packed by "riggers" who must be trained and certified according to legal standards. Sport skydivers are always trained to pack their own primary "main" parachutes.
Exact numbers are difficult to estimate because parachute design, maintenance, loading, packing technique, and operator experience all have a significant impact on malfunction rates. Approximately one in a thousand sport main parachute openings malfunctions and requires use of the reserve parachute, although some skydivers have made many thousands of jumps without ever needing their reserve.
Reserve parachutes are packed and deployed somewhat differently. They are also designed more conservatively, favouring reliability over responsiveness, and are built and tested to more exacting standards, making them more reliable than main parachutes. Regulated inspection intervals, coupled with significantly less use, also contribute to reliability, since wear on some components can degrade it. The safety advantage of a reserve parachute comes from the small probability of a main malfunction being multiplied by the even smaller probability of a reserve malfunction, yielding a far smaller probability of a double malfunction, although a malfunctioning main parachute that cannot be released can interfere with the reserve parachute. In the United States, the average fatality rate for 2017 was recorded as 1 in 133,571 jumps.
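The safety argument here is a simple product of probabilities. A minimal sketch, assuming independence between main and reserve malfunctions and an illustrative reserve malfunction rate (the text gives only the main rate of roughly one in a thousand):

```python
p_main = 1 / 1000        # main malfunction rate quoted above
p_reserve = 1 / 10_000   # assumed reserve rate (illustrative; reserves are more reliable)

# Assuming independent events, a double malfunction needs both to fail on one jump.
p_double = p_main * p_reserve
print(f"Double malfunction: about 1 in {1 / p_double:,.0f} jumps")  # 1 in 10,000,000

# Caveat from the text: a main that cannot be released can interfere with the
# reserve, so the true risk is somewhat higher than this independence estimate.
```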
Injuries and fatalities in sport skydiving are possible even under a fully functional main parachute. For example, a skydiver who makes an error in judgment while flying the canopy may suffer a high-speed impact with the ground, or with a hazard on the ground that might otherwise have been avoided, or may collide with another skydiver under canopy.
Malfunctions
The following malfunctions are specific to round parachutes:
A "Mae West" or "blown periphery" is a type of round parachute malfunction that contorts the shape of the canopy into the outward appearance of a large brassiere, named after the generous proportions of the late actress Mae West. The column of nylon fabric, buffeted by the wind, rapidly heats from friction and opposite sides of the canopy can fuse together in a narrow region, removing any chance of it opening fully.
A "streamer" is the main chute which becomes entangled in its lines and fails to deploy, taking the shape of a paper streamer. The parachutist cuts it away to provide space and clean air for deploying the reserve.
An "inversion" occurs when one skirt of the canopy blows between the suspension lines on the opposite side of the parachute and then catches air. That portion then forms a secondary lobe with the canopy inverted. The secondary lobe grows until the canopy turns completely inside out.
A "barber's pole" describes having a tangle of lines behind the jumper's head, who cuts away the main and opens his reserve.
The "horseshoe" is an out-of-sequence deployment, when the parachute lines and bag are released before the bag drogue and bridle. This can cause the lines to become tangled or a situation where the parachute drogue is not released from the container.
"Jumper-In-Tow" involves a static line that does not disconnect, resulting in a jumper being towed behind the aircraft.
Records
On August 16, 1960, Joseph Kittinger, in the Excelsior III test jump, set the previous world record for the highest parachute jump. He jumped from a balloon at an altitude of (which was also a piloted balloon altitude record at the time). A small stabilizer chute deployed successfully, and Kittinger fell for 4 minutes and 36 seconds, also setting a still-standing world record for the longest parachute free-fall, if falling with a stabilizer chute is counted as free-fall. At an altitude of , Kittinger opened his main chute and landed safely in the New Mexico desert. The whole descent took 13 minutes and 45 seconds. During the descent, Kittinger experienced temperatures as low as . In the free-fall stage, he reached a top speed of 614 mph (988 km/h or 274 m/s), or Mach 0.8.
According to Guinness World Records, Yevgeni Andreyev, a colonel in the Soviet Air Force, held the official FAI record for the longest free-fall parachute jump (without drogue chute) after falling for 24,500 m (80,380 ft) from an altitude of 25,457 m (83,523 ft) near the city of Saratov, Russia on November 1, 1962, until broken by Felix Baumgartner in 2012.
Felix Baumgartner broke Joseph Kittinger's record on October 14, 2012, with a jump from an altitude of 127,852 feet (38,969.3 m) and reaching speeds up to 833.9 mph (1,342.0 km/h or 372.8 m/s), or nearly Mach 1.1. Kittinger was an advisor for Baumgartner's jump.
Alan Eustace made a jump from the stratosphere on October 24, 2014, from an altitude of 135,889.108 feet (41,419 m). However, because Eustace's jump involved a drogue parachute while Baumgartner's did not, their vertical speed and free fall distance records remain in different record categories.
Uses
In addition to slowing the descent of a person or object, parachutes serve other roles: a drogue parachute can aid horizontal deceleration of a land or air vehicle, including fixed-wing aircraft and drag racers; provide stability, as for certain types of light aircraft in distress or during tandem free-fall; and act as a pilot chute triggering deployment of a larger parachute.
Parachutes are also used as play equipment.
Pesticide
Pesticides are substances that are used to control pests. They include herbicides, insecticides, nematicides, fungicides, and many others. The most common of these are herbicides, which account for approximately 50% of all pesticide use globally. Most pesticides are used as plant protection products (also known as crop protection products), which in general protect plants from weeds, fungi, or insects. In general, a pesticide is a chemical or biological agent (such as a virus, bacterium, or fungus) that deters, incapacitates, kills, or otherwise discourages pests. Target pests can include insects, plant pathogens, weeds, molluscs, birds, mammals, fish, nematodes (roundworms), and microbes that destroy property, cause nuisance, spread disease, or are disease vectors. Along with these benefits, pesticides also have drawbacks, such as potential toxicity to humans and other species.
Definition
The word pesticide derives from the Latin pestis (plague) and caedere (kill).
The Food and Agriculture Organization (FAO) has defined pesticide as:
any substance or mixture of substances intended for preventing, destroying, or controlling any pest, including vectors of human or animal disease, unwanted species of plants or animals, causing harm during or otherwise interfering with the production, processing, storage, transport, or marketing of food, agricultural commodities, wood and wood products or animal feedstuffs, or substances that may be administered to animals for the control of insects, arachnids, or other pests in or on their bodies. The term includes substances intended for use as a plant growth regulator, defoliant, desiccant, or agent for thinning fruit or preventing the premature fall of fruit. Also used as substances applied to crops either before or after harvest to protect the commodity from deterioration during storage and transport.
Classifications
Pesticides can be classified by target organism (e.g., herbicides, insecticides, fungicides, rodenticides, and pediculicides).
According to the EPA, biopesticides include microbial pesticides, biochemical pesticides, and plant-incorporated protectants.
Pesticides can be classified into structural classes, with many structural classes developed for each of the target organisms listed in the table. A structural class is usually associated with a single mode of action, whereas a mode of action may encompass more than one structural class.
The pesticidal chemical (active ingredient) is mixed (formulated) with other components to form the product that is sold, and which is applied in various ways. Pesticides in gas form are fumigants.
Pesticides can be classified based upon their mode of action, which indicates the exact biological mechanism which the pesticide disrupts. The modes of action are important for resistance management, and are categorized and administered by the insecticide, herbicide, and fungicide resistance action committees.
Pesticides may be systemic or non-systemic. A systemic pesticide moves (translocates) inside the plant; translocation may be upward in the xylem, downward in the phloem, or both. Non-systemic (contact) pesticides remain on the surface and act through direct contact with the target organism. Systemic pesticides are generally more effective, and systemicity is a prerequisite for use as a seed treatment.
Pesticides can be classified as persistent (non-biodegradable) or non-persistent (biodegradable). To be approved by the authorities, a pesticide must be persistent enough to kill or control its target but must degrade fast enough not to accumulate in the environment or the food chain. Persistent pesticides, including DDT, were banned many years ago, an exception being spraying inside houses to combat malaria vectors.
History
From biblical times until the 1950s, the pesticides used were inorganic compounds and plant extracts. The inorganic compounds were derivatives of copper, arsenic, mercury, and sulfur, among others, and the plant extracts contained pyrethrum, nicotine, and rotenone, among others. The less toxic of these are still in use in organic farming. In the 1940s the insecticide DDT and the herbicide 2,4-D were introduced. These synthetic organic compounds were widely used and very profitable. They were followed in the 1950s and 1960s by numerous other synthetic pesticides, which led to the growth of the pesticide industry. During this period it became increasingly evident that DDT, which had been sprayed widely in the environment to combat disease vectors, had accumulated in the food chain and become a global pollutant, as summarized in the well-known book Silent Spring. DDT was finally banned in the 1970s in several countries, and subsequently all persistent pesticides were banned worldwide, an exception being spraying on interior walls for vector control.
Resistance to a pesticide was first seen in the 1920s with inorganic pesticides; it was later recognized that the development of resistance is to be expected and that measures to delay it are important. Integrated pest management (IPM) was introduced in the 1950s: by careful analysis, and by spraying only when an economic or biological threshold of crop damage is reached, pesticide application is reduced. By the 2020s this had become the official policy of international organisations, industry, and many governments. With the introduction of high-yielding varieties in the green revolution of the 1960s, more pesticides were used. From the 1980s, genetically modified crops were introduced, which resulted in lower amounts of insecticides used on them. Organic agriculture, which uses only non-synthetic pesticides, has grown, and in 2020 represented about 1.5 per cent of the world's total agricultural land.
Pesticides have become more effective. Application rates fell from 1,000–2,500 grams of active ingredient per hectare (g/ha) in the 1950s to 40–100 g/ha in the 2000s. Despite this, total amounts used have increased. Over the two decades between the 1990s and 2010s, amounts used increased 20% in high-income countries, while in low-income countries they increased 1,623%.
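The improvement in potency can be expressed as a simple ratio of the quoted application rates. A minimal sketch of that arithmetic, using the midpoints of the ranges given above:

```python
# Quoted application-rate ranges (grams of active ingredient per hectare):
rate_1950s = (1000 + 2500) / 2   # midpoint of the 1,000-2,500 g/ha range
rate_2000s = (40 + 100) / 2      # midpoint of the 40-100 g/ha range

print(f"Reduction factor: ~{rate_1950s / rate_2000s:.0f}x")  # roughly 25-fold
```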
Development of new pesticides
The aim is to find new compounds or agents with improved properties such as a new mode of action or lower application rate. Another aim is to replace older pesticides which have been banned for reasons of toxicity or environmental harm or have become less effective due to development of resistance.
The process starts with testing (screening) against target organisms such as insects, fungi or plants. Inputs are typically random compounds, natural products, compounds designed to disrupt a biochemical target, compounds described in patents or literature, or biocontrol organisms.
Compounds that are active in the screening process, known as hits or leads, cannot be used directly as pesticides, except for biocontrol organisms and some potent natural products. These lead compounds need to be optimised through a series of cycles of synthesis and testing of analogs. For approval by regulatory authorities for use as pesticides, the optimized compounds must meet several requirements: in addition to being potent (having a low application rate), they must show low toxicity to non-target organisms, low environmental impact, and viable manufacturing cost. The cost of developing a pesticide in 2022 was estimated at 350 million US dollars. It has become more difficult to find new pesticides: more than 100 new active ingredients were introduced in the 2000s, but fewer than 40 in the 2010s. Biopesticides are cheaper to develop, since the authorities require less toxicological and environmental study. Since 2000 the rate of introduction of new biological products has frequently exceeded that of conventional products.
More than 25% of existing chemical pesticides contain one or more chiral centres (stereogenic centres). Newer pesticides with lower application rates tend to have more complex structures, and thus more often contain chiral centres. In cases when most or all of the pesticidal activity in a new compound is found in one enantiomer (the eutomer), the registration and use of the compound as this single enantiomer is preferred. This reduces the total application rate and avoids the tedious environmental testing required when registering a racemate. However, if a viable enantioselective manufacturing route cannot be found, then the racemate is registered and used.
Uses
In addition to their main use in agriculture, pesticides have a number of other applications. Pesticides are used to control organisms that are considered to be harmful, or pernicious to their surroundings. For example, they are used to kill mosquitoes that can transmit potentially deadly diseases like West Nile virus, yellow fever, and malaria. They can also kill bees, wasps or ants that can cause allergic reactions. Insecticides can protect animals from illnesses that can be caused by parasites such as fleas. Pesticides can prevent sickness in humans that could be caused by moldy food or diseased produce. Herbicides can be used to clear roadside weeds, trees, and brush. They can also kill invasive weeds that may cause environmental damage. Herbicides are commonly applied in ponds and lakes to control algae and plants such as water grasses that can interfere with activities like swimming and fishing and cause the water to look or smell unpleasant. Uncontrolled pests such as termites and mold can damage structures such as houses. Pesticides are used in grocery stores and food storage facilities to manage rodents and insects that infest food such as grain. Pesticides are used on lawns and golf courses, partly for cosmetic reasons.
Integrated pest management, the use of multiple approaches to control pests, is becoming widespread and has been used with success in countries such as Indonesia, China, Bangladesh, the U.S., Australia, and Mexico. IPM attempts to recognize the more widespread impacts of an action on an ecosystem, so that natural balances are not upset.
Each use of a pesticide carries some associated risk. Proper pesticide use decreases these associated risks to a level deemed acceptable by pesticide regulatory agencies such as the United States Environmental Protection Agency (EPA) and the Pest Management Regulatory Agency (PMRA) of Canada.
DDT, sprayed on the walls of houses, is an organochlorine that has been used to fight malaria vectors (mosquitos) since the 1940s. The World Health Organization recommends this approach. DDT and other organochlorine pesticides have been banned in most countries worldwide because of their persistence in the environment and human toxicity. DDT has become less effective: resistance was identified in Africa as early as 1955, and by 1972 nineteen species of mosquito worldwide were resistant to DDT.
Amount used
Total pesticide use in agriculture in 2021 was 3.54 million tonnes of active ingredients (Mt), a 4 percent increase with respect to 2020, an 11 percent increase in a decade, and a doubling since 1990. Pesticide use per area of cropland in 2021 was 2.26 kg per hectare (kg/ha), an increase of 4 percent with respect to 2020; use per value of agricultural production was 0.86 kg per thousand international dollars (kg/1000 I$) (+2%); and use per person was 0.45 kg per capita (kg/cap) (+3%). Between 1990 and 2021, these indicators increased by 85 percent, 3 percent, and 33 percent, respectively. Brazil was the world's largest user of pesticides in 2021, with 720 kt applied for agricultural use, while the USA (457 kt) was the second-largest user.
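Each of these indicators is the same total divided by a different denominator, so the implied denominators can be recovered as a sanity check. A minimal sketch using the 2021 figures quoted above (the back-calculated denominators are derived values, not figures from the text):

```python
total_kg = 3.54e9  # 3.54 Mt of active ingredients used in agriculture, 2021

# Back-calculate each denominator from the quoted per-unit indicators:
cropland_ha = total_kg / 2.26        # ~1.57e9 ha of cropland
production_k_i_dollars = total_kg / 0.86  # ~4.1e9 thousand international dollars
population = total_kg / 0.45         # ~7.9e9 people, consistent with 2021 world population

print(f"Implied cropland: {cropland_ha:.3g} ha")
print(f"Implied production value: {production_k_i_dollars:.3g} thousand I$")
print(f"Implied population: {population:.3g} people")
```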
Applications per cropland area in 2021 varied widely, from 10.9 kg/hectare in Brazil to 0.8 kg/ha in the Russian Federation. The level in Brazil was about twice as high as in Argentina (5.6 kg/ha) and Indonesia (5.3 kg/ha). Insecticide use in the US has declined by more than half since 1980 (0.6%/yr), mostly due to the near phase-out of organophosphates. In corn fields, the decline was even steeper, due to the switchover to transgenic Bt corn.
Benefits
Pesticides increase agricultural yields and lower costs. One study found that not using pesticides reduced crop yields by about 10%. Another study, conducted in 1999, found that a ban on pesticides in the United States might result in a rise in food prices, loss of jobs, and an increase in world hunger.
There are two levels of benefits for pesticide use, primary and secondary. Primary benefits are direct gains from the use of pesticides and secondary benefits are effects that are more long-term.
Biological
- Controlling pests and plant disease vectors:
  - improved crop yields
  - improved crop/livestock quality
  - invasive species controlled
- Controlling human/livestock disease vectors and nuisance organisms:
  - human lives saved and disease reduced; diseases controlled include malaria, with millions of lives saved or enhanced by the use of DDT alone
  - animal lives saved and disease reduced
- Controlling organisms that harm other human activities and structures:
  - drivers' view unobstructed
  - tree/brush/leaf hazards prevented
  - wooden structures protected
Economics
In 2018 world pesticide sales were estimated to be $65 billion, of which 88% was for agriculture. Generic products accounted for 85% of sales in 2018. One study estimated that every dollar spent on pesticides for crops yields up to four dollars in crops that would otherwise be lost to insects, fungi, and weeds. In general, farmers benefit from an increase in crop yield and from being able to grow a variety of crops throughout the year. Consumers of agricultural products also benefit from being able to afford the vast quantities of produce available year-round.
Disadvantages
On the cost side of pesticide use there can be costs to the environment and to human health. Pesticide safety education and pesticide applicator regulation are designed to protect the public from pesticide misuse, but do not eliminate all misuse. Reducing the use of pesticides and choosing less toxic pesticides may reduce the risks placed on society and the environment.
Health effects
Pesticides may affect health negatively, for example by mimicking hormones, causing reproductive problems, and causing cancer. A 2007 systematic review found that "most studies on non-Hodgkin lymphoma and leukemia showed positive associations with pesticide exposure" and thus concluded that cosmetic use of pesticides should be decreased. There is substantial evidence of associations between organophosphate insecticide exposure and neurobehavioral alterations. Limited evidence also exists for other negative outcomes of pesticide exposure, including neurological effects, birth defects, and fetal death.
The American Academy of Pediatrics recommends limiting children's exposure to pesticides and using safer alternatives.
Pesticides are also found in the majority of U.S. households, with 88 million out of 121.1 million households indicating in 2012 that they used some form of pesticide. As of 2007, there were more than 1,055 active ingredients registered as pesticides, yielding over 20,000 pesticide products marketed in the United States.
Owing to inadequate regulation and safety precautions, 99% of pesticide-related deaths occur in developing countries that account for only 25% of pesticide usage.
One study found pesticide self-poisoning the method of choice in one third of suicides worldwide, and recommended, among other things, more restrictions on the types of pesticides that are most harmful to humans.
A 2014 epidemiological review found associations between autism and exposure to certain pesticides, but noted that the available evidence was insufficient to conclude that the relationship was causal.
Occupational exposure among agricultural workers
The World Health Organization and the UN Environment Programme estimate that each year 3 million agricultural workers in the developing world experience severe pesticide poisoning, resulting in 18,000 deaths. According to one study, as many as 25 million workers in developing countries may suffer mild pesticide poisoning yearly. Workers in other occupations besides agriculture, including pet groomers, groundskeepers, and fumigators, may also be at risk of health effects from pesticide exposure.
Pesticide use is widespread in Latin America, with around US$3 billion spent each year in the region. Records indicate an increase in the frequency of pesticide poisonings over the past two decades. The most common incidents of pesticide poisoning are thought to result from exposure to organophosphate and carbamate insecticides. At-home pesticide use, the use of unregulated products, and the role of undocumented workers within the agricultural industry make characterizing true pesticide exposure a challenge. It is estimated that 50–80% of pesticide poisoning cases are unreported.
Underreporting of pesticide poisoning is especially common in areas where agricultural workers are less likely to seek care from a healthcare facility that may be monitoring or tracking the incidence of acute poisoning. The extent of unintentional pesticide poisoning may be much greater than available data suggest, particularly among developing countries. Globally, agriculture and food production remain one of the largest industries. In East Africa, the agricultural industry represents one of the largest sectors of the economy, with nearly 80% of its population relying on agriculture for income. Farmers in these communities rely on pesticide products to maintain high crop yields.
Some East African governments are shifting to corporate farming, and opportunities for foreign conglomerates to operate commercial farms have led to more accessible research on pesticide use and exposure among workers. In other areas, where large proportions of the population rely on subsistence, small-scale farming, estimating pesticide use and exposure is more difficult.
Pesticide poisoning
Pesticides may exhibit toxic effects on humans and other non-target species, the severity of which depends on the frequency and magnitude of exposure. Toxicity also depends on the rate of absorption, distribution within the body, metabolism, and elimination of compounds from the body. Commonly used pesticides like organophosphates and carbamates act by inhibiting acetylcholinesterase activity, which prevents the breakdown of acetylcholine at the neural synapse. Excess acetylcholine can lead to symptoms like muscle cramps or tremors, confusion, dizziness, and nausea. Studies show that farm workers in Ethiopia, Kenya, and Zimbabwe have decreased concentrations of plasma acetylcholinesterase, the enzyme responsible for breaking down acetylcholine acting on synapses throughout the nervous system. Other studies in Ethiopia have observed reduced respiratory function among farm workers who spray crops with pesticides. Numerous exposure pathways increase farm workers' risk of pesticide poisoning, including dermal absorption while walking through fields and applying products, as well as inhalation.
Measuring exposure to pesticides
There are multiple approaches to measuring a person's exposure to pesticides, each of which provides an estimate of an individual's internal dose. Two broad approaches are measuring biomarkers and measuring markers of biological effect. The former involves direct measurement of the parent compound or its metabolites in media such as urine, blood, or serum. A biomarker may be a direct measurement of the compound in the body before it has been biotransformed during metabolism, or a measurement of the metabolites produced by that biotransformation. Toxicokinetic data can provide more detailed information on how quickly the compound is metabolized and eliminated from the body, and provide insights into the timing of exposure.
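Toxicokinetic reasoning of this kind often rests on a one-compartment, first-order elimination model. A minimal sketch under that assumption; the compound and half-life are hypothetical, not from the text:

```python
import math

def remaining_concentration(c0: float, half_life_h: float, t_h: float) -> float:
    """First-order elimination: C(t) = C0 * exp(-k*t), where k = ln(2) / half-life."""
    k = math.log(2) / half_life_h
    return c0 * math.exp(-k * t_h)

# Hypothetical example: a metabolite with a 6-hour half-life, sampled 24 hours
# after exposure, retains 1/16 (6.25%) of its initial concentration.
print(remaining_concentration(c0=1.0, half_life_h=6.0, t_h=24.0))  # 0.0625
```

Because concentration falls exponentially, the timing of sample collection relative to exposure strongly affects what a urine or blood biomarker can reveal.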
Markers of biological effect provide an estimation of exposure based on cellular activities related to the mechanism of action. For example, many studies investigating exposure to pesticides often involve the quantification of the acetylcholinesterase enzyme at the neural synapse to determine the magnitude of the inhibitory effect of organophosphate and carbamate pesticides.
Another method of quantifying exposure involves measuring, at the molecular level, the amount of pesticide interacting with the site of action. These methods are more commonly used for occupational exposures where the mechanism of action is better understood, as described by WHO guidelines published in "Biological Monitoring of Chemical Exposure in the Workplace". Better understanding of how pesticides elicit their toxic effects is needed before this method of exposure assessment can be applied to occupational exposure of agricultural workers.
Alternative methods to assess exposure include questionnaires to discern from participants whether they are experiencing symptoms associated with pesticide poisoning. Self-reported symptoms may include headaches, dizziness, nausea, joint pain, or respiratory symptoms.
Challenges in assessing pesticide exposure
Multiple challenges exist in assessing exposure to pesticides in the general population, and many others are specific to occupational exposure of agricultural workers. Beyond farm workers themselves, estimating the exposure of family members and children presents additional challenges; exposure may occur through "take-home" residues collected on clothing or equipment belonging to farm-worker parents and inadvertently brought into the home. Children may also be exposed prenatally through mothers who are exposed to pesticides during pregnancy. Characterizing children's exposure resulting from drift during airborne and spray application of pesticides is similarly challenging, yet well documented in developing countries. Because of the critical developmental periods of the fetus and of newborn children, these non-working populations are more vulnerable to the effects of pesticides and may be at increased risk of neurocognitive effects and impaired development.
While measuring biomarkers or markers of biological effects may provide more accurate estimates of exposure, collecting these data in the field is often impractical, and many methods are not sensitive enough to detect low-level concentrations. Rapid cholinesterase test kits exist for collecting blood samples in the field, but conducting large-scale assessments of agricultural workers in remote regions of developing countries makes implementing these kits a challenge. The cholinesterase assay is a useful clinical tool to assess individual exposure and acute toxicity, but considerable variability in baseline enzyme activity among individuals makes it difficult to compare field measurements of cholinesterase activity against a reference dose to determine the health risk associated with exposure. Another challenge in deriving a reference dose is identifying health endpoints that are relevant to exposure. More epidemiological research is needed to identify critical health endpoints, particularly among occupationally exposed populations.
Prevention
Minimizing harmful exposure to pesticides can be achieved by proper use of personal protective equipment, adequate reentry times into recently sprayed areas, and effective product labeling for hazardous substances as per FIFRA regulations. Training high-risk populations, including agricultural workers, on the proper use and storage of pesticides, can reduce the incidence of acute pesticide poisoning and potential chronic health effects associated with exposure. Continued research into the human toxic health effects of pesticides serves as a basis for relevant policies and enforceable standards that are health protective to all populations.
Environmental effects
Pesticide use raises a number of environmental concerns. Over 98% of sprayed insecticides and 95% of herbicides reach a destination other than their target species, including non-target species, air, water and soil. Pesticide drift occurs when pesticides suspended in the air as particles are carried by wind to other areas, potentially contaminating them. Pesticides are one of the causes of water pollution, and some pesticides are persistent organic pollutants (now banned) that contribute to soil and flower (pollen, nectar) contamination. Furthermore, pesticide use can adversely impact neighboring agricultural activity, as pests themselves drift to and harm nearby crops that have no pesticide used on them.
In addition, pesticide use reduces invertebrate biodiversity in streams, contributes to pollinator decline, destroys habitat (especially for birds), and threatens endangered species. Pests can develop a resistance to the pesticide (pesticide resistance), necessitating a new pesticide. Alternatively a greater dose of the pesticide can be used to counteract the resistance, although this will cause a worsening of the ambient pollution problem.
The Stockholm Convention on Persistent Organic Pollutants banned all persistent pesticides, in particular DDT and other organochlorine pesticides, which were stable and lipophilic, and thus able to bioaccumulate in the body and the food chain, and which spread throughout the planet. Persistent pesticides are no longer used for agriculture and will not be approved by the authorities. Because the half-life in soil is long (2–15 years for DDT), residues can still be detected in humans, at levels 5 to 10 times lower than those found in the 1970s.
Pesticides now have to be degradable in the environment. Such degradation of pesticides is due to both innate chemical properties of the compounds and environmental processes or conditions. For example, the presence of halogens within a chemical structure often slows down degradation in an aerobic environment. Adsorption to soil may retard pesticide movement, but also may reduce bioavailability to microbial degraders.
Pesticide contamination in the environment can be monitored through bioindicators such as bee pollinators.
Economics
In one study, the human health and environmental costs due to pesticides in the United States were estimated to be $9.6 billion, offset by about $40 billion in increased agricultural production.
Additional costs include the registration process and the cost of purchasing pesticides, which are typically borne by agrichemical companies and farmers, respectively. The registration process can take several years to complete (there are 70 types of field tests) and can cost $50–70 million for a single pesticide. At the beginning of the 21st century, the United States spent approximately $10 billion on pesticides annually.
Resistance
The use of pesticides inherently entails the risk of resistance developing. Various techniques and procedures of pesticide application can slow the development of resistance, as can some natural features of the target population and surrounding environment.
Alternatives
Alternatives to pesticides are available and include methods of cultivation, use of biological pest controls (such as pheromones and microbial pesticides), genetic engineering (mostly of crops), and methods of interfering with insect breeding. Application of composted yard waste has also been used as a way of controlling pests.
These methods are becoming increasingly popular and are often safer than traditional chemical pesticides. In addition, the EPA is registering reduced-risk pesticides in increasing numbers.
Cultivation practices
Cultivation practices include polyculture (growing multiple types of plants), crop rotation, planting crops in areas where the pests that damage them do not live, timing planting according to when pests will be least problematic, and use of trap crops that attract pests away from the real crop. Trap crops have successfully controlled pests in some commercial agricultural systems while reducing pesticide usage. In other systems, trap crops can fail to reduce pest densities at a commercial scale, even when the trap crop works in controlled experiments.
Use of other organisms
Release of other organisms that fight the pest is another example of an alternative to pesticide use. These organisms can include natural predators or parasites of the pests. Biological pesticides based on entomopathogenic fungi, bacteria and viruses causing disease in the pest species can also be used.
Biological control engineering
Interfering with insects' reproduction can be accomplished by sterilizing males of the target species and releasing them, so that they mate with females but do not produce offspring. This technique was first used on the screwworm fly in 1958 and has since been used with the medfly, the tsetse fly, and the gypsy moth. This is a costly and slow approach that only works on some types of insects.
Other alternatives
Other alternatives include "laserweeding" – the use of novel agricultural robots for weed control using lasers.
Push pull strategy
The push-pull technique involves intercropping with a "push" crop that repels the pest, while a "pull" crop planted on the boundary attracts and traps it.
Effectiveness
Some evidence shows that alternatives to pesticides can be equally effective as the use of chemicals. A study of maize fields in northern Florida found that applying composted yard waste with a high carbon-to-nitrogen ratio to agricultural fields was highly effective at reducing plant-parasitic nematode populations and increasing crop yield, with yield increases ranging from 10% to 212%; the observed effects were long-term, often not appearing until the third season of the study. Additional silicon nutrition protects some horticultural crops against fungal diseases almost completely, while insufficient silicon sometimes leads to severe infection even when fungicides are used.
Pesticide resistance is increasing, which may make alternatives more attractive.
Types
Biopesticides
Biopesticides are certain types of pesticides derived from such natural materials as animals, plants, bacteria, and certain minerals. For example, canola oil and baking soda have pesticidal applications and are considered biopesticides. Biopesticides fall into three major classes:
Microbial pesticides which consist of bacteria, entomopathogenic fungi or viruses (and sometimes includes the metabolites that bacteria or fungi produce). Entomopathogenic nematodes are also often classed as microbial pesticides, even though they are multi-cellular.
Biochemical pesticides or herbal pesticides are naturally occurring substances that control (or monitor in the case of pheromones) pests and microbial diseases.
Plant-incorporated protectants (PIPs) have genetic material from other species incorporated into their genetic material (i.e. GM crops). Their use is controversial, especially in many European countries.
By pest type
Pesticides classified by the type of pest they target include insecticides (insects), herbicides (weeds and other unwanted vegetation), fungicides (fungi), rodenticides (rodents), bactericides (bacteria), larvicides (insect larvae), and acaricides (mites).
Regulation
International
In many countries, pesticides must be approved for sale and use by a government agency.
Worldwide, 85% of countries have pesticide legislation for the proper storage of pesticides and 51% include provisions to ensure proper disposal of all obsolete pesticides.
Though pesticide regulations differ from country to country, pesticides, and products on which they were used are traded across international borders. To deal with inconsistencies in regulations among countries, delegates to a conference of the United Nations Food and Agriculture Organization adopted an International Code of Conduct on the Distribution and Use of Pesticides in 1985 to create voluntary standards of pesticide regulation for many countries. The Code was updated in 1998 and 2002. The FAO claims that the code has raised awareness about pesticide hazards and decreased the number of countries without restrictions on pesticide use.
Two other efforts to improve regulation of international pesticide trade are the United Nations London Guidelines for the Exchange of Information on Chemicals in International Trade and the United Nations Codex Alimentarius Commission. The former seeks to implement procedures for ensuring that prior informed consent exists between countries buying and selling pesticides, while the latter seeks to create uniform standards for maximum levels of pesticide residues among participating countries.
United States
In the United States, the Environmental Protection Agency (EPA) is responsible for regulating pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Food Quality Protection Act (FQPA).
Studies must be conducted to establish the conditions in which the material is safe to use and the effectiveness against the intended pest(s). The EPA regulates pesticides to ensure that these products do not pose adverse effects to humans or the environment, with an emphasis on the health and safety of children. Pesticides produced before November 1984 continue to be reassessed in order to meet the current scientific and regulatory standards. All registered pesticides are reviewed every 15 years to ensure they meet the proper standards. During the registration process, a label is created. The label contains directions for proper use of the material in addition to safety restrictions. Based on acute toxicity, pesticides are assigned to a Toxicity Class. Pesticides are the most thoroughly tested chemicals after drugs in the United States; those used on food require more than 100 tests to determine a range of potential impacts.
Some pesticides are considered too hazardous for sale to the general public and are designated restricted use pesticides. Only certified applicators, who have passed an exam, may purchase or supervise the application of restricted use pesticides. Records of sales and use are required to be maintained and may be audited by government agencies charged with the enforcement of pesticide regulations. These records must be made available to employees and state or territorial environmental regulatory agencies.
In addition to the EPA, the United States Department of Agriculture (USDA) and the United States Food and Drug Administration (FDA) set standards for the level of pesticide residue that is allowed on or in crops. The EPA looks at what the potential human health and environmental effects might be associated with the use of the pesticide.
In addition, the U.S. EPA uses the National Research Council's four-step process for human health risk assessment: (1) Hazard Identification, (2) Dose-Response Assessment, (3) Exposure Assessment, and (4) Risk Characterization.
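In the risk-characterization step, non-cancer risks are commonly summarized as a hazard quotient (a standard convention in risk assessment generally, not specific to pesticides):

HQ = estimated exposure dose / reference dose (RfD),

with HQ > 1 flagging a potential concern that warrants closer review.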
In 2013 Kaua'i County (Hawai'i) passed Bill No. 2491 to add an article to Chapter 22 of the county's code relating to pesticides and GMOs. The bill strengthens protections of local communities in Kaua'i where many large pesticide companies test their products.
The first legislation providing federal authority for regulating pesticides was enacted in 1910.
Canada
EU
EU legislation has been approved banning the use of highly toxic pesticides, including those that are carcinogenic, mutagenic or toxic to reproduction, those that are endocrine-disrupting, and those that are persistent, bioaccumulative and toxic (PBT) or very persistent and very bioaccumulative (vPvB); measures have also been approved to improve the general safety of pesticides across all EU member states.
In 2023, the Environment Committee of the European Parliament approved a decision aiming to reduce pesticide use by 50% (the most hazardous by 65%) by 2030 and to ensure sustainable use of pesticides (for example, using them only as a last resort). The decision also includes measures for providing farmers with alternatives.
Residue
Pesticide residue refers to the pesticides that may remain on or in food after they are applied to food crops. The maximum residue limits (MRL) of pesticides in food are carefully set by the regulatory authorities to ensure, to their best judgement, no health impacts. Regulations such as pre-harvest intervals also often prevent harvest of crop or livestock products if recently treated in order to allow residue concentrations to decrease over time to safe levels before harvest. Exposure of the general population to these residues most commonly occurs through consumption of treated food sources, or being in close contact to areas treated with pesticides such as farms or lawns.
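As an illustration of why pre-harvest intervals allow residues to fall, a minimal Python sketch assuming simple first-order (single-exponential) dissipation, which real field data only approximate; the numbers are hypothetical:

def residue_remaining(initial_mg_per_kg: float, half_life_days: float, days: float) -> float:
    """First-order residue dissipation: C(t) = C0 * (1/2)**(t / t_half)."""
    return initial_mg_per_kg * 0.5 ** (days / half_life_days)

# Hypothetical pesticide: 2.0 mg/kg at application, 10-day field half-life.
# After a 21-day pre-harvest interval:
print(f"{residue_remaining(2.0, 10.0, 21.0):.2f} mg/kg")  # ~0.47 mg/kg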
Residues are monitored by the authorities. In 2016, over 99% of samples of US produce had no pesticide residue or had residue levels well below the EPA tolerance levels for each pesticide.
| Technology | Horticultural techniques | null |
48366 | https://en.wikipedia.org/wiki/Polyurethane | Polyurethane | Polyurethane (often abbreviated PUR and PU) refers to a class of polymers composed of organic units joined by carbamate (urethane) links. In contrast to other common polymers such as polyethylene and polystyrene, the term polyurethane does not refer to a single type of polymer but to a group of polymers: polyurethanes can be produced from a wide range of starting materials, resulting in various polymers within the same group. This chemical variety produces polyurethanes with different chemical structures, leading to many different applications, including rigid and flexible foams, coatings, adhesives, electrical potting compounds, and fibers such as spandex and polyurethane laminate (PUL). Foams are the largest application, accounting for 67% of all polyurethane produced in 2016.
A polyurethane is typically produced by reacting a polymeric isocyanate with a polyol. Since a polyurethane contains two types of monomers, which polymerize one after the other, they are classed as alternating copolymers. Both the isocyanates and polyols used to make a polyurethane contain two or more functional groups per molecule.
Global production in 2019 was 25 million metric tonnes, accounting for about 6% of all polymers produced in that year.
History
Otto Bayer and his coworkers at IG Farben in Leverkusen, Germany, first made polyurethanes in 1937. The new polymers had some advantages over existing plastics that were made by polymerizing olefins or by polycondensation, and were not covered by patents obtained by Wallace Carothers on polyesters. Early work focused on the production of fibers and flexible foams and PUs were applied on a limited scale as aircraft coating during World War II. Polyisocyanates became commercially available in 1952, and production of flexible polyurethane foam began in 1954 by combining toluene diisocyanate (TDI) and polyester polyols. These materials were also used to produce rigid foams, gum rubber, and elastomers. Linear fibers were produced from hexamethylene diisocyanate (HDI) and 1,4-Butanediol (BDO).
DuPont introduced polyethers, specifically poly(tetramethylene ether) glycol, in 1956. BASF and Dow Chemical introduced polyalkylene glycols in 1957. Polyether polyols were cheaper, easier to handle and more water-resistant than polyester polyols. Union Carbide and Mobay, a U.S. Monsanto/Bayer joint venture, also began making polyurethane chemicals. In 1960 more than 45,000 metric tons of flexible polyurethane foams were produced. The availability of chlorofluoroalkane blowing agents, inexpensive polyether polyols, and methylene diphenyl diisocyanate (MDI) allowed polyurethane rigid foams to be used as high-performance insulation materials. In 1967, urethane-modified polyisocyanurate rigid foams were introduced, offering even better thermal stability and flammability resistance. During the 1960s, automotive interior safety components, such as instrument and door panels, were produced by back-filling thermoplastic skins with semi-rigid foam.
In 1969, Bayer exhibited an all-plastic car in Düsseldorf, Germany. Parts of this car, such as the fascia and body panels, were manufactured using a new process called reaction injection molding (RIM), in which the reactants were mixed and then injected into a mold. The addition of fillers, such as milled glass, mica, and processed mineral fibers, gave rise to reinforced RIM (RRIM), which provided improvements in flexural modulus (stiffness), reduction in coefficient of thermal expansion and better thermal stability. This technology was used to make the first plastic-body automobile in the United States, the Pontiac Fiero, in 1983. Further increases in stiffness were obtained by incorporating pre-placed glass mats into the RIM mold cavity, also known broadly as resin injection molding, or structural RIM.
Starting in the early 1980s, water-blown microcellular flexible foams were used to mold gaskets for automotive panels and air-filter seals, replacing PVC polymers. Polyurethane foams are used in many automotive applications including seating, head and arm rests, and headliners.
Polyurethane foam (including foam rubber) is sometimes made using small amounts of blowing agents to give less dense foam, better cushioning/energy absorption or thermal insulation. In the early 1990s, because of their impact on ozone depletion, the Montreal Protocol restricted the use of many chlorine-containing blowing agents, such as trichlorofluoromethane (CFC-11). By the late 1990s, blowing agents such as carbon dioxide, pentane, 1,1,1,2-tetrafluoroethane (HFC-134a) and 1,1,1,3,3-pentafluoropropane (HFC-245fa) were widely used in North America and the EU, although chlorinated blowing agents remained in use in many developing countries. Later, HFC-134a was also restricted because of its high global warming potential, and HCFC-141b was introduced in the early 2000s as an alternative blowing agent in developing nations.
Chemistry
Polyurethanes are produced by reacting diisocyanates with polyols, often in the presence of a catalyst, or upon exposure to ultraviolet radiation.
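Schematically, the key step is addition of a hydroxyl group across the isocyanate group to form the carbamate (urethane) link; with difunctional monomers the step repeats to build the polymer chain:

R−N=C=O + R′−OH → R−NH−C(=O)−O−R′ (urethane link)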
Common catalysts include tertiary amines, such as DABCO, DMDEE, or metallic soaps, such as dibutyltin dilaurate. The stoichiometry of the starting materials must be carefully controlled as excess isocyanate can trimerise, leading to the formation of rigid polyisocyanurates. The polymer usually has a highly crosslinked molecular structure, resulting in a thermosetting material which does not melt on heating; although some thermoplastic polyurethanes are also produced.
The most common application of polyurethane is as solid foams, which requires the presence of a gas, or blowing agent, during the polymerization step. This is commonly achieved by adding small amounts of water, which reacts with isocyanates to form CO2 gas and an amine, via an unstable carbamic acid group. The amine produced can also react with isocyanates to form urea groups, and as such the polymer will contain both these and urethane linkers. The urea is not very soluble in the reaction mixture and tends to form separate "hard segment" phases consisting mostly of polyurea. The concentration and organization of these polyurea phases can have a significant impact on the properties of the foam.
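Schematically, the water-blowing chemistry described above is:

R−N=C=O + H2O → [R−NH−COOH] → R−NH2 + CO2↑
R−NH2 + R′−N=C=O → R−NH−C(=O)−NH−R′ (urea link)

Each molecule of water thus generates one molecule of CO2 gas and ultimately consumes two isocyanate groups.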
The type of foam produced can be controlled by regulating the amount of blowing agent and also by the addition of various surfactants which change the rheology of the polymerising mixture. Foams can be either "closed-cell", where most of the original bubbles or cells remain intact, or "open-cell", where the bubbles have broken but the edges of the bubbles are stiff enough to retain their shape; in extreme cases, reticulated foams can be formed. Open-cell foams feel soft and allow air to flow through, so they are comfortable when used in seat cushions or mattresses. Closed-cell foams are used as rigid thermal insulation. High-density microcellular foams can be formed without the addition of blowing agents by mechanically frothing the polyol prior to use. These are tough elastomeric materials used in covering car steering wheels or shoe soles.
The properties of a polyurethane are greatly influenced by the types of isocyanates and polyols used to make it. Long, flexible segments, contributed by the polyol, give a soft, elastic polymer. High amounts of crosslinking give tough or rigid polymers. Long chains and low crosslinking give a polymer that is very stretchy; short chains with many crosslinks produce a hard polymer; long chains and intermediate crosslinking give a polymer useful for making foam. The choices available for the isocyanates and polyols, in addition to other additives and processing conditions, allow polyurethanes to have the very wide range of properties that make them such widely used polymers.
Raw materials
The main ingredients used to make a polyurethane are di- and tri-isocyanates and polyols. Other materials are added to aid processing or to modify the properties of the polymer. PU foam formulations sometimes also have water added.
Isocyanates
Isocyanates used to make polyurethane have two or more isocyanate groups on each molecule. The most commonly used isocyanates are the aromatic diisocyanates, toluene diisocyanate (TDI) and methylene diphenyl diisocyanate, (MDI). These aromatic isocyanates are more reactive than aliphatic isocyanates.
TDI and MDI are generally less expensive and more reactive than other isocyanates. Industrial grade TDI and MDI are mixtures of isomers, and MDI often contains polymeric materials. They are used to make flexible foam (for example, slabstock foam for mattresses or molded foams for car seats), rigid foam (for example, insulating foam in refrigerators), elastomers (shoe soles, for example), and so on. The isocyanates may be modified by partially reacting them with polyols or introducing some other materials to reduce volatility (and hence toxicity) of the isocyanates, decrease their freezing points to make handling easier, or to improve the properties of the final polymers.
Aliphatic and cycloaliphatic isocyanates are used in smaller quantities, most often in coatings and other applications where color and transparency are important since polyurethanes made with aromatic isocyanates tend to darken on exposure to light. The most important aliphatic and cycloaliphatic isocyanates are 1,6-hexamethylene diisocyanate (HDI), 1-isocyanato-3-isocyanatomethyl-3,5,5-trimethyl-cyclohexane (isophorone diisocyanate, IPDI), and 4,4′-diisocyanato dicyclohexylmethane (H12MDI or hydrogenated MDI). Other more specialized isocyanates include Tetramethylxylylene diisocyanate (TMXDI).
Polyols
Polyols are polymers in their own right, having on average two or more hydroxyl groups per molecule. Polyether polyols are made by co-polymerizing ethylene oxide and propylene oxide with a suitable polyol precursor. Polyester polyols are made by the polycondensation of multifunctional carboxylic acids and polyhydroxyl compounds. Polyols can be further classified according to their end use: higher molecular weight polyols (molecular weights from 2,000 to 10,000) are used to make more flexible polyurethanes, while lower molecular weight polyols make more rigid products.
Polyols for flexible applications use low functionality initiators such as dipropylene glycol (f = 2), glycerine (f = 3), or a sorbitol/water solution (f = 2.75). Polyols for rigid applications use higher functionality initiators such as sucrose (f = 8), sorbitol (f = 6), toluenediamine (f = 4), and Mannich bases (f = 4). Propylene oxide and/or ethylene oxide is added to the initiators until the desired molecular weight is achieved. The order of addition and the amounts of each oxide affect many polyol properties, such as compatibility, water-solubility, and reactivity. Polyols made with only propylene oxide are terminated with secondary hydroxyl groups and are less reactive than polyols capped with ethylene oxide, which contain primary hydroxyl groups. Incorporating carbon dioxide into the polyol structure is being researched by multiple companies.
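The hydroxyl number ties these quantities together through simple arithmetic: OH number (mg KOH/g) = 56,100 × f / MW, where f is the functionality and MW the molecular weight. For example, a glycerine-initiated triol (f = 3) with an OH number of 56 has MW ≈ 3 × 56,100 / 56 ≈ 3,000, typical of flexible-foam polyols.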
Graft polyols (also called filled polyols or polymer polyols) contain finely dispersed styrene–acrylonitrile, acrylonitrile, or polyurea (PHD) polymer solids chemically grafted to a high molecular weight polyether backbone. They are used to increase the load-bearing properties of low-density high-resiliency (HR) foam, as well as add toughness to microcellular foams and cast elastomers. Initiators such as ethylenediamine and triethanolamine are used to make low molecular weight rigid foam polyols that have built-in catalytic activity due to the presence of nitrogen atoms in the backbone. A special class of polyether polyols, poly(tetramethylene ether) glycols, which are made by polymerizing tetrahydrofuran, are used in high performance coating, wetting and elastomer applications.
Conventional polyester polyols are based on virgin raw materials and are manufactured by the direct polyesterification of high-purity diacids and glycols, such as adipic acid and 1,4-butanediol. Polyester polyols are usually more expensive and more viscous than polyether polyols, but they make polyurethanes with better solvent, abrasion, and cut resistance. Other polyester polyols are based on reclaimed raw materials. They are manufactured by transesterification (glycolysis) of recycled poly(ethyleneterephthalate) (PET) or dimethylterephthalate (DMT) distillation bottoms with glycols such as diethylene glycol. These low molecular weight, aromatic polyester polyols are used in rigid foam, and bring low cost and excellent flammability characteristics to polyisocyanurate (PIR) boardstock and polyurethane spray foam insulation.
Specialty polyols include polycarbonate polyols, polycaprolactone polyols, polybutadiene polyols, and polysulfide polyols. The materials are used in elastomer, sealant, and adhesive applications that require superior weatherability, and resistance to chemical and environmental attack. Natural oil polyols derived from castor oil and other vegetable oils are used to make elastomers, flexible bunstock, and flexible molded foam.
Co-polymerizing chlorotrifluoroethylene or tetrafluoroethylene with vinyl ethers containing hydroxyalkyl vinyl ether produces fluorinated (FEVE) polyols. Two-component fluorinated polyurethanes prepared by reacting FEVE fluorinated polyols with polyisocyanate have been used to make ambient cure paints and coatings. Since fluorinated polyurethanes contain a high percentage of fluorine–carbon bonds, which are the strongest bonds among all chemical bonds, fluorinated polyurethanes exhibit resistance to UV, acids, alkali, salts, chemicals, solvents, weathering, corrosion, fungi and microbial attack. These have been used for high performance coatings and paints.
Phosphorus-containing polyols are available that become chemically bonded to the polyurethane matrix for the use as flame retardants. This covalent linkage prevents migration and leaching of the organophosphorus compound.
Bio-derived materials
Interest in sustainable "green" products has raised interest in polyols derived from vegetable oils. Oils used in the preparation of polyols for polyurethanes include soybean oil, cottonseed oil, neem seed oil, and castor oil. Vegetable oils are functionalized in various ways and modified to polyetheramides, polyethers, alkyds, etc. Renewable sources used to prepare polyols may be fatty acids or dimer fatty acids. Some biobased and isocyanate-free polyurethanes exploit the reaction between polyamines and cyclic carbonates to produce polyhydroxyurethanes.
Chain extenders and cross linkers
Chain extenders (f = 2) and cross linkers (f ≥ 3) are low molecular weight hydroxyl and amine terminated compounds that play an important role in the polymer morphology of polyurethane fibers, elastomers, adhesives, and certain integral skin and microcellular foams. The elastomeric properties of these materials are derived from the phase separation of the hard and soft copolymer segments of the polymer, such that the urethane hard segment domains serve as cross-links between the amorphous polyether (or polyester) soft segment domains. This phase separation occurs because the mainly nonpolar, low melting soft segments are incompatible with the polar, high melting hard segments. The soft segments, which are formed from high molecular weight polyols, are mobile and are normally present in coiled formation, while the hard segments, which are formed from the isocyanate and chain extenders, are stiff and immobile. As the hard segments are covalently coupled to the soft segments, they inhibit plastic flow of the polymer chains, thus creating elastomeric resiliency. Upon mechanical deformation, a portion of the soft segments are stressed by uncoiling, and the hard segments become aligned in the stress direction. This reorientation of the hard segments and consequent powerful hydrogen bonding contributes to high tensile strength, elongation, and tear resistance values.
The choice of chain extender also determines flexural, heat, and chemical resistance properties. The most important chain extenders are ethylene glycol, 1,4-butanediol (1,4-BDO or BDO), 1,6-hexanediol, cyclohexane dimethanol and hydroquinone bis(2-hydroxyethyl) ether (HQEE). All of these glycols form polyurethanes that phase separate well and form well defined hard segment domains, and are melt processable. They are all suitable for thermoplastic polyurethanes with the exception of ethylene glycol, since its derived bis-phenyl urethane undergoes unfavorable degradation at high hard segment levels. Diethanolamine and triethanolamine are used in flex molded foams to build firmness and add catalytic activity. Diethyltoluenediamine is used extensively in RIM, and in polyurethane and polyurea elastomer formulations.
Catalysts
Polyurethane catalysts can be classified into two broad categories: basic amine catalysts and Lewis acidic organometallic catalysts. Tertiary amine catalysts function by enhancing the nucleophilicity of the diol component. Alkyl tin carboxylates, oxides, and mercaptides function as mild Lewis acids in accelerating the formation of polyurethane. Traditional amine catalysts include triethylenediamine (TEDA, also called DABCO, 1,4-diazabicyclo[2.2.2]octane), dimethylcyclohexylamine (DMCHA), dimethylethanolamine (DMEA), dimethylaminoethoxyethanol, and bis-(2-dimethylaminoethyl)ether, a blowing catalyst also called A-99. A typical Lewis acidic catalyst is dibutyltin dilaurate. The process is highly sensitive to the nature of the catalyst and is also known to be autocatalytic.
Factors affecting catalyst selection include balancing three reactions: urethane (polyol + isocyanate, or gel) formation, urea (water + isocyanate, or "blow") formation, and the isocyanate trimerization reaction (e.g., using potassium acetate, to form isocyanurate rings). A variety of specialized catalysts have been developed.
Surfactants
Surfactants are used to modify the characteristics of both foam and non-foam polyurethane polymers. They take the form of polydimethylsiloxane-polyoxyalkylene block copolymers, silicone oils, nonylphenol ethoxylates, and other organic compounds. In foams, they are used to emulsify the liquid components, regulate cell size, and stabilize the cell structure to prevent collapse and sub-surface voids. In non-foam applications they are used as air release and antifoaming agents, as wetting agents, and are used to eliminate surface defects such as pin holes, orange peel, and sink marks.
Production
Polyurethanes are produced by mixing two or more liquid streams. The polyol stream contains catalysts, surfactants, blowing agents (when making polyurethane foam insulation) and so on. The two components are referred to as a polyurethane system, or simply a system. The isocyanate is commonly referred to in North America as the 'A-side' or just the 'iso'. The blend of polyols and other additives is commonly referred to as the 'B-side' or as the 'poly'. This mixture might also be called a 'resin' or 'resin blend'. In Europe the meanings for 'A-side' and 'B-side' are reversed. Resin blend additives may include chain extenders, cross linkers, surfactants, flame retardants, blowing agents, pigments, and fillers. Polyurethane can be made in a variety of densities and hardnesses by varying the isocyanate, polyol or additives.
Health and safety
Fully reacted polyurethane polymer is chemically inert. No exposure limits have been established in the U.S. by OSHA (Occupational Safety and Health Administration) or ACGIH (American Conference of Governmental Industrial Hygienists). It is not regulated by OSHA for carcinogenicity.
Polyurethanes are combustible. Decomposition from fire can produce significant amounts of carbon monoxide and hydrogen cyanide, in addition to nitrogen oxides, isocyanates, and other toxic products. Because the material is flammable, it has to be treated with flame retardants (at least in the case of furniture), almost all of which are considered harmful. California later issued Technical Bulletin 117-2013, which allowed most polyurethane foam to pass flammability tests without the use of flame retardants. The Green Science Policy Institute states: "Although the new standard can be met without flame retardants, it does NOT ban their use. Consumers who wish to reduce household exposure to flame retardants can look for a TB117-2013 tag on furniture, and verify with retailers that products do not contain flame retardants."
Liquid resin blends and isocyanates may contain hazardous or regulated components. Isocyanates are known skin and respiratory sensitizers. Additionally, amines, glycols, and phosphates present in spray polyurethane foams pose risks.
Exposure to chemicals that may be emitted during or after application of polyurethane spray foam (such as isocyanates) is harmful to human health, and special precautions are therefore required during and after this process.
In the United States, additional health and safety information can be found through organizations such as the Polyurethane Manufacturers Association (PMA) and the Center for the Polyurethanes Industry (CPI), as well as from polyurethane system and raw material manufacturers. Regulatory information can be found in the Code of Federal Regulations Title 21 (Food and Drugs) and Title 40 (Protection of the Environment). In Europe, health and safety information is available from ISOPA, the European Diisocyanate and Polyol Producers Association.
Manufacturing
The methods of manufacturing polyurethane finished goods range from small, hand pour piece-part operations to large, high-volume bunstock and boardstock production lines. Regardless of the end-product, the manufacturing principle is the same: to meter the liquid isocyanate and resin blend at a specified stoichiometric ratio, mix them together until a homogeneous blend is obtained, dispense the reacting liquid into a mold or on to a surface, wait until it cures, then demold the finished part.
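The metering step reduces to equivalent-weight arithmetic. A minimal Python sketch, using the standard conventions (polyol equivalent weight = 56,100/OH number; isocyanate equivalent weight = 4,202/%NCO; water equivalent weight ≈ 9.0, since each water ultimately consumes two NCO groups); the formulation numbers in the example are illustrative, not a real recipe:

def iso_parts_per_100_polyol(oh_number: float, pct_nco: float,
                             index: float = 100.0,
                             water_pphp: float = 0.0) -> float:
    """Parts by weight of isocyanate per 100 parts polyol blend.

    oh_number  -- polyol hydroxyl number, mg KOH/g
    pct_nco    -- isocyanate NCO content, weight percent
    index      -- isocyanate index (100 = exact stoichiometry)
    water_pphp -- parts water per hundred parts polyol
    """
    polyol_eq = 100.0 / (56100.0 / oh_number)  # OH equivalents in 100 parts polyol
    water_eq = water_pphp / 9.0                # water equivalents
    iso_eq_wt = 4202.0 / pct_nco
    return (polyol_eq + water_eq) * iso_eq_wt * index / 100.0

# Illustrative flexible-foam numbers: 56 OH polyol, 3 pphp water, TDI at 48.3% NCO
print(f"{iso_parts_per_100_polyol(56, 48.3, index=110, water_pphp=3.0):.1f} php")
# ~41 parts isocyanate per hundred parts polyol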
Dispensing equipment
Although the capital outlay can be high, it is desirable to use a meter-mix or dispense unit for even low-volume production operations that require a steady output of finished parts. Dispense equipment consists of material holding (day) tanks, metering pumps, a mix head, and a control unit. Often, a conditioning or heater–chiller unit is added to control material temperature in order to improve mix efficiency, cure rate, and to reduce process variability. Choice of dispense equipment components depends on shot size, throughput, material characteristics such as viscosity and filler content, and process control. Material day tanks may be single to hundreds of gallons in size and may be supplied directly from drums, IBCs (intermediate bulk containers, such as caged IBC totes), or bulk storage tanks. They may incorporate level sensors, conditioning jackets, and mixers. Pumps can be sized to meter in single grams per second up to hundreds of pounds per minute. They can be rotary, gear, or piston pumps, or can be specially hardened lance pumps to meter liquids containing highly abrasive fillers such as chopped or hammer-milled glass fiber and wollastonite.
The pumps can drive low-pressure (10 to 30 bar, 1 to 3 MPa) or high-pressure (125 to 250 bar, 12.5 to 25.0 MPa) dispense systems. Mix heads can be simple static mix tubes, rotary-element mixers, low-pressure dynamic mixers, or high-pressure hydraulically actuated direct impingement mixers. Control units may have basic on/off and dispense/stop switches, and analogue pressure and temperature gauges, or may be computer-controlled with flow meters to electronically calibrate mix ratio, digital temperature and level sensors, and a full suite of statistical process control software. Add-ons to dispense equipment include nucleation or gas injection units, and third or fourth stream capability for adding pigments or metering in supplemental additive packages.
Tooling
Distinct from pour-in-place, bun and boardstock, and coating applications, the production of piece parts requires tooling to contain and form the reacting liquid.
The choice of mold-making material is dependent on the expected number of uses to end-of-life (EOL), molding pressure, flexibility, and heat transfer characteristics.
RTV silicone is used for tooling that has an EOL in the thousands of parts. It is typically used for molding rigid foam parts, where the ability to stretch and peel the mold around undercuts is needed.
The heat transfer characteristic of RTV silicone tooling is poor. High-performance, flexible polyurethane elastomers are also used in this way.
Epoxy, metal-filled epoxy, and metal-coated epoxy is used for tooling that has an EOL in the tens of thousands of parts. It is typically used for molding flexible foam cushions and seating, integral skin and microcellular foam padding, and shallow-draft RIM bezels and fascia. The heat transfer characteristic of epoxy tooling is fair; the heat transfer characteristic of metal-filled and metal-coated epoxy is good. Copper tubing can be incorporated into the body of the tool, allowing hot water to circulate and heat the mold surface.
Aluminum is used for tooling that has an EOL in the hundreds of thousands of parts. It is typically used for molding microcellular foam gasketing and cast elastomer parts, and is milled or extruded into shape.
Mirror-finish stainless steel is used for tooling that imparts a glossy appearance to the finished part. The heat transfer characteristic of metal tooling is excellent.
Finally, molded or milled polypropylene is used to create low-volume tooling for molded gasket applications. Instead of many expensive metal molds, low-cost plastic tooling can be formed from a single metal master, which also allows greater design flexibility. The heat transfer characteristic of polypropylene tooling is poor, which must be taken into consideration during the formulation process.
Applications
In 2008, the global consumption of polyurethane raw materials was above 12 million metric tons, with an average annual growth rate of about 5%. Revenues generated with PUR on the global market are expected to rise to approximately US$75 billion by 2022. As they are such an important class of materials, polyurethanes are the subject of continual research and publication.
Degradation and environmental fate
Effects of visible light
Polyurethanes, especially those made using aromatic isocyanates, contain chromophores that interact with light. This is of particular interest in the area of polyurethane coatings, where light stability is a critical factor and is the main reason that aliphatic isocyanates are used in making polyurethane coatings. When PU foam, which is made using aromatic isocyanates, is exposed to visible light, it discolors, turning from off-white to yellow to reddish brown. It has been generally accepted that apart from yellowing, visible light has little effect on foam properties. This is especially the case if the yellowing happens on the outer portions of a large foam, as the deterioration of properties in the outer portion has little effect on the overall bulk properties of the foam itself.
It has been reported that exposure to visible light can affect the variability of some physical property test results.
Higher-energy UV radiation promotes chemical reactions in foam, some of which are detrimental to the foam structure.
Hydrolysis and biodegradation
Polyurethanes may degrade due to hydrolysis. This is a common problem with shoes left in a closet, where they react with moisture in the air.
Microbial degradation of polyurethane is believed to be due to the action of esterase, urethanase, hydrolase and protease enzymes. The process is slow as most microbes have difficulty moving beyond the surface of the polymer. Susceptibility to fungi is higher due to their release of extracellular enzymes, which are better able to permeate the polymer matrix. Two species of the Ecuadorian fungus Pestalotiopsis are capable of biodegrading polyurethane in aerobic and anaerobic conditions such as found at the bottom of landfills. Degradation of polyurethane items at museums has been reported. Polyester-type polyurethanes are more easily biodegraded by fungus than polyether-type.
| Physical sciences | Polymers | Chemistry |
48381 | https://en.wikipedia.org/wiki/Astronomical%20coordinate%20systems | Astronomical coordinate systems | In astronomy, coordinate systems are used for specifying positions of celestial objects (satellites, planets, stars, galaxies, etc.) relative to a given reference frame, based on physical reference points available to a situated observer (e.g. the true horizon and north to an observer on Earth's surface). Coordinate systems in astronomy can specify an object's relative position in three-dimensional space or merely its direction on a celestial sphere, if the object's distance is unknown or trivial.
Spherical coordinates, projected on the celestial sphere, are analogous to the geographic coordinate system used on the surface of Earth. These differ in their choice of fundamental plane, which divides the celestial sphere into two equal hemispheres along a great circle. Rectangular coordinates, in appropriate units, have the same fundamental (x, y) plane and primary (x-axis) direction, such as an axis of rotation. Each coordinate system is named after its choice of fundamental plane.
Coordinate systems
The following table lists the common coordinate systems in use by the astronomical community. The fundamental plane divides the celestial sphere into two equal hemispheres and defines the baseline for the latitudinal coordinates, similar to the equator in the geographic coordinate system. The poles are located at ±90° from the fundamental plane. The primary direction is the starting point of the longitudinal coordinates. The origin is the zero distance point, the "center of the celestial sphere", although the definition of celestial sphere is ambiguous about the definition of its center point.
Horizontal system
The horizontal, or altitude-azimuth, system is based on the position of the observer on Earth, which rotates on its own axis once per sidereal day (23 hours, 56 minutes and 4.091 seconds) in relation to the star background. The positioning of a celestial object by the horizontal system varies with time, but it is a useful coordinate system for locating and tracking objects for observers on Earth. It is based on the position of stars relative to an observer's ideal horizon.
Equatorial system
The equatorial coordinate system is centered at Earth's center, but fixed relative to the celestial poles and the March equinox. The coordinates are based on the location of stars relative to Earth's equator if it were projected out to an infinite distance. The equatorial system describes the sky as seen from the Solar System, and modern star maps almost exclusively use equatorial coordinates.
The equatorial system is the normal coordinate system for most professional and many amateur astronomers having an equatorial mount that follows the movement of the sky during the night. Celestial objects are found by adjusting the telescope's or other instrument's scales so that they match the equatorial coordinates of the selected object to observe.
Popular choices of pole and equator are the older B1950 and the modern J2000 systems, but a pole and equator "of date" can also be used, meaning one appropriate to the date under consideration, such as when a measurement of the position of a planet or spacecraft is made. There are also subdivisions into "mean of date" coordinates, which average out or ignore nutation, and "true of date," which include nutation.
Ecliptic system
The fundamental plane is the plane of the Earth's orbit, called the ecliptic plane. There are two principal variants of the ecliptic coordinate system: geocentric ecliptic coordinates centered on the Earth and heliocentric ecliptic coordinates centered on the center of mass of the Solar System.
The geocentric ecliptic system was the principal coordinate system for ancient astronomy and is still useful for computing the apparent motions of the Sun, Moon, and planets. It was used to define the twelve astrological signs of the zodiac, for instance.
The heliocentric ecliptic system describes the planets' orbital movement around the Sun, and centers on the barycenter of the Solar System (i.e. very close to the center of the Sun). The system is primarily used for computing the positions of planets and other Solar System bodies, as well as defining their orbital elements.
Galactic system
The galactic coordinate system uses the approximate plane of the Milky Way Galaxy as its fundamental plane. The Solar System is still the center of the coordinate system, and the zero point is defined as the direction towards the Galactic Center. Galactic latitude resembles the elevation above the galactic plane and galactic longitude determines direction relative to the center of the galaxy.
Supergalactic system
The supergalactic coordinate system corresponds to a fundamental plane that contains a higher than average number of local galaxies in the sky as seen from Earth.
Converting coordinates
Conversions between the various coordinate systems are given below. See the notes before using these equations.
Notation
Horizontal coordinates
A, azimuth
a, altitude
Equatorial coordinates
α, right ascension
δ, declination
h, hour angle
Ecliptic coordinates
λ, ecliptic longitude
β, ecliptic latitude
Galactic coordinates
l, galactic longitude
b, galactic latitude
Miscellaneous
λo, observer's longitude
φo, observer's latitude
ε, obliquity of the ecliptic (about 23.4°)
θL, local sidereal time
θG, Greenwich sidereal time
Hour angle ↔ right ascension
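The two quantities are linked through the local sidereal time θL:

h = θL − α, and equivalently α = θL − h,

with 360° (24h) added or subtracted as needed to bring the result into the conventional range.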
Equatorial ↔ ecliptic
The classical equations, derived from spherical trigonometry, give each longitudinal coordinate as a pair of equations whose quotient yields a convenient tangent equation; a rotation matrix can be used equivalently. Taking this quotient is ambiguous because tan has a period of 180° (π) whereas cos and sin have periods of 360° (2π).
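The standard forms of these equations, with ε the obliquity of the ecliptic, are:

tan λ = (sin α cos ε + tan δ sin ε) / cos α
sin β = sin δ cos ε − cos δ sin ε sin α

and, in the opposite direction,

tan α = (sin λ cos ε − tan β sin ε) / cos λ
sin δ = sin β cos ε + cos β sin ε sin λ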
Equatorial ↔ horizontal
Azimuth (A) is measured from the south point, turning positive to the west.
Zenith distance, the angular distance along the great circle from the zenith to a celestial object, is simply the complementary angle of the altitude: z = 90° − a.
The classical equations for this conversion are

sin a = sin φo sin δ + cos φo cos δ cos h
tan A = sin h / (cos h sin φo − tan δ cos φo).

In solving the tan A equation for A, in order to avoid the ambiguity of the arctangent, use of the two-argument arctangent, denoted atan2(x, y), is recommended. The two-argument arctangent computes the arctangent of x/y, and accounts for the quadrant in which it is being computed. Thus, consistent with the convention of azimuth being measured from the south and opening positive to the west,

A = atan2(x, y),

where

x = sin h and y = cos h sin φo − tan δ cos φo.

If the above formula produces a negative value for A, it can be rendered positive by simply adding 360°.
In the opposite direction,

sin δ = sin φo sin a − cos φo cos a cos A
tan h = sin A / (cos A sin φo + tan a cos φo).

Again, in solving the tan h equation for h, use of the two-argument arctangent that accounts for the quadrant is recommended. Thus, again consistent with the convention of azimuth being measured from the south and opening positive to the west,

h = atan2(x, y),

where

x = sin A and y = cos A sin φo + tan a cos φo.
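A minimal Python sketch of the equatorial-to-horizontal direction, following the text's convention of azimuth measured from the south and positive to the west; the example numbers are arbitrary:

import math

def equatorial_to_horizontal(h_deg: float, dec_deg: float, lat_deg: float):
    """Hour angle/declination -> (azimuth, altitude) in degrees.
    Azimuth is measured from south, positive toward the west."""
    h, dec, lat = map(math.radians, (h_deg, dec_deg, lat_deg))
    alt = math.asin(math.sin(lat) * math.sin(dec)
                    + math.cos(lat) * math.cos(dec) * math.cos(h))
    az = math.atan2(math.sin(h),
                    math.cos(h) * math.sin(lat) - math.tan(dec) * math.cos(lat))
    return math.degrees(az) % 360.0, math.degrees(alt)  # fold azimuth into [0, 360)

# Object at hour angle 3h (45°), declination +20°, observer at 52° N:
print(equatorial_to_horizontal(45.0, 20.0, 52.0))  # roughly (64.8, 42.7)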
Equatorial ↔ galactic
These equations are for converting equatorial coordinates to Galactic coordinates.
sin b = sin δG sin δ + cos δG cos δ cos(α − αG)
cos b sin(lNCP − l) = cos δ sin(α − αG)
cos b cos(lNCP − l) = cos δG sin δ − sin δG cos δ cos(α − αG)
Here (αG, δG) are the equatorial coordinates of the north galactic pole and lNCP is the galactic longitude of the north celestial pole. Referred to J2000.0, the values of these quantities are: αG = 192.85948°, δG = 27.12825°, lNCP = 122.93192°.
If the equatorial coordinates are referred to another equinox, they must be precessed to their place at J2000.0 before applying these formulae.
The inverse equations convert galactic coordinates back to equatorial coordinates referred to J2000.0.
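A minimal Python sketch of the equatorial-to-galactic direction, using the J2000.0 constants quoted above; the sanity check uses the approximate J2000.0 position of the galactic centre:

import math

ALPHA_G = math.radians(192.85948)  # RA of the north galactic pole (J2000.0)
DELTA_G = math.radians(27.12825)   # Dec of the north galactic pole (J2000.0)
L_NCP = math.radians(122.93192)    # galactic longitude of the north celestial pole

def equatorial_to_galactic(ra_deg: float, dec_deg: float):
    """J2000.0 (ra, dec) -> galactic (l, b), all in degrees."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    b = math.asin(math.sin(DELTA_G) * math.sin(dec)
                  + math.cos(DELTA_G) * math.cos(dec) * math.cos(ra - ALPHA_G))
    y = math.cos(dec) * math.sin(ra - ALPHA_G)
    x = (math.cos(DELTA_G) * math.sin(dec)
         - math.sin(DELTA_G) * math.cos(dec) * math.cos(ra - ALPHA_G))
    l = (L_NCP - math.atan2(y, x)) % (2 * math.pi)
    return math.degrees(l), math.degrees(b)

# Sanity check: the galactic centre (ra ~266.417°, dec ~-29.008°)
# should come out near (l, b) = (0, 0)
print(equatorial_to_galactic(266.417, -29.008))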
| Physical sciences | Celestial sphere | null |
48384 | https://en.wikipedia.org/wiki/Equatorial%20coordinate%20system | Equatorial coordinate system | The equatorial coordinate system is a celestial coordinate system widely used to specify the positions of celestial objects. It may be implemented in spherical or rectangular coordinates, both defined by an origin at the centre of Earth, a fundamental plane consisting of the projection of Earth's equator onto the celestial sphere (forming the celestial equator), a primary direction towards the March equinox, and a right-handed convention.
The origin at the centre of Earth means the coordinates are geocentric, that is, as seen from the centre of Earth as if it were transparent. The fundamental plane and the primary direction mean that the coordinate system, while aligned with Earth's equator and pole, does not rotate with the Earth, but remains relatively fixed against the background stars. A right-handed convention means that coordinates increase northward from and eastward around the fundamental plane.
Primary direction
This description of the orientation of the reference frame is somewhat simplified; the orientation is not quite fixed. A slow motion of Earth's axis, precession, causes a slow, continuous turning of the coordinate system westward about the poles of the ecliptic, completing one circuit in about 26,000 years. Superimposed on this is a smaller motion of the ecliptic, and a small oscillation of the Earth's axis, nutation.
In order to fix the exact primary direction, these motions necessitate the specification of the equinox of a particular date, known as an epoch, when giving a position. The three most commonly used are:
Mean equinox of a standard epoch (usually J2000.0, but may include B1950.0, B1900.0, etc.) is a fixed standard direction, allowing positions established at various dates to be compared directly.
Mean equinox of date is the intersection of the ecliptic of "date" (that is, the ecliptic in its position at "date") with the mean equator (that is, the equator rotated by precession to its position at "date", but free from the small periodic oscillations of nutation). Commonly used in planetary orbit calculation.
True equinox of date is the intersection of the ecliptic of "date" with the true equator (that is, the mean equator plus nutation). This is the actual intersection of the two planes at any particular moment, with all motions accounted for.
A position in the equatorial coordinate system is thus typically specified as true equinox and equator of date, mean equinox and equator of J2000.0, or similar. Note that there is no "mean ecliptic", as the ecliptic is not subject to small periodic oscillations.
Spherical coordinates
Use in astronomy
A star's spherical coordinates are often expressed as a pair, right ascension and declination, without a distance coordinate. The direction of sufficiently distant objects is the same for all observers, and it is convenient to specify this direction with the same coordinates for all. In contrast, in the horizontal coordinate system, a star's position differs from observer to observer based on their positions on the Earth's surface, and is continuously changing with the Earth's rotation.
Telescopes equipped with equatorial mounts and setting circles employ the equatorial coordinate system to find objects. Setting circles in conjunction with a star chart or ephemeris allow the telescope to be easily pointed at known objects on the celestial sphere.
Declination
The declination symbol δ (lower case "delta", abbreviated DEC) measures the angular distance of an object perpendicular to the celestial equator, positive to the north, negative to the south. For example, the north celestial pole has a declination of +90°. The origin for declination is the celestial equator, which is the projection of the Earth's equator onto the celestial sphere. Declination is analogous to terrestrial latitude.
Right ascension
The right ascension symbol α (lower case "alpha", abbreviated RA) measures the angular distance of an object eastward along the celestial equator from the March equinox to the hour circle passing through the object. The March equinox point is one of the two points where the ecliptic intersects the celestial equator. Right ascension is usually measured in sidereal hours, minutes and seconds instead of degrees, a result of the method of measuring right ascensions by timing the passage of objects across the meridian as the Earth rotates. There are 360°/24h = 15° in one hour of right ascension, and 24h of right ascension around the entire celestial equator.
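For example, an object at right ascension 5h 30m lies 5.5 × 15° = 82.5° east of the March equinox along the celestial equator.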
When used together, right ascension and declination are usually abbreviated RA/Dec.
Hour angle
Alternatively to right ascension, hour angle (abbreviated HA or LHA, local hour angle), a left-handed system, measures the angular distance of an object westward along the celestial equator from the observer's meridian to the hour circle passing through the object. Unlike right ascension, hour angle is always increasing with the rotation of Earth. Hour angle may be considered a means of measuring the time since upper culmination, the moment when an object contacts the meridian overhead.
A culminating star on the observer's meridian is said to have a zero hour angle (0h). One sidereal hour (approximately 0.9973 solar hours) later, Earth's rotation will carry the star to the west of the meridian, and its hour angle will be 1h. When calculating topocentric phenomena, right ascension may be converted into hour angle as an intermediate step.
Rectangular coordinates: geocentric equatorial coordinates
There are a number of rectangular variants of equatorial coordinates. All have:
The origin at the centre of the Earth.
The fundamental plane in the plane of the Earth's equator.
The primary direction (the x axis) toward the March equinox, that is, the place where the Sun crosses the celestial equator in a northward direction in its annual apparent circuit around the ecliptic.
A right-handed convention, specifying a y axis 90° to the east in the fundamental plane and a z axis along the north polar axis.
The reference frames do not rotate with the Earth (in contrast to Earth-centred, Earth-fixed frames), remaining always directed toward the equinox, and drifting over time with the motions of precession and nutation.
In astronomy:
The position of the Sun is often specified in the geocentric equatorial rectangular coordinates X, Y, Z and a fourth distance coordinate, R (equal to √(X² + Y² + Z²)), in units of the astronomical unit.
The positions of the planets and other Solar System bodies are often specified in the geocentric equatorial rectangular coordinates ξ, η, ζ and a fourth distance coordinate, Δ (equal to √(ξ² + η² + ζ²)), in units of the astronomical unit. These rectangular coordinates are related to the corresponding spherical coordinates by

ξ = Δ cos δ cos α
η = Δ cos δ sin α
ζ = Δ sin δ
In astrodynamics:
The positions of artificial Earth satellites are specified in geocentric equatorial coordinates, also known as geocentric equatorial inertial (GEI), Earth-centred inertial (ECI), and conventional inertial system (CIS), all of which are equivalent in definition to the astronomical geocentric equatorial rectangular frames, above. In the geocentric equatorial frame, the x, y and z axes are often designated I, J and K, respectively, or the frame's basis is specified by the unit vectors Î, Ĵ and K̂.
The Geocentric Celestial Reference Frame (GCRF) is the geocentric equivalent of the International Celestial Reference Frame (ICRF). Its primary direction is the equinox of J2000.0, and does not move with precession and nutation, but it is otherwise equivalent to the above systems.
Generalization: heliocentric equatorial coordinates
In astronomy, there is also a heliocentric rectangular variant of equatorial coordinates, designated x, y, z, which has:
The origin at the centre of the Sun.
The fundamental plane in the plane of the Earth's equator.
The primary direction (the x axis) toward the March equinox.
A right-handed convention, specifying a y axis 90° to the east in the fundamental plane and a z axis along Earth's north polar axis.
This frame is similar to the X, Y, Z frame above, except that the origin is removed to the centre of the Sun. It is commonly used in planetary orbit calculation. The three astronomical rectangular coordinate systems are related by
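With X, Y, Z the geocentric coordinates of the Sun, x, y, z the heliocentric coordinates of a body, and ξ, η, ζ its geocentric coordinates, the usual relation is

\[
\xi = x + X, \qquad \eta = y + Y, \qquad \zeta = z + Z .
\]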
| Physical sciences | Celestial sphere | null |
48389 | https://en.wikipedia.org/wiki/Galactic%20coordinate%20system | Galactic coordinate system | The galactic coordinate system is a celestial coordinate system in spherical coordinates, with the Sun as its center, the primary direction aligned with the approximate center of the Milky Way Galaxy, and the fundamental plane parallel to an approximation of the galactic plane but offset to its north. It uses the right-handed convention, meaning that coordinates are positive toward the north and toward the east in the fundamental plane.
Spherical coordinates
Galactic longitude
Longitude (symbol l) measures the angular distance of an object eastward along the galactic equator from the Galactic Center. Analogous to terrestrial longitude, galactic longitude is usually measured in degrees (°).
Galactic latitude
Latitude (symbol b) measures the angle of an object northward of the galactic equator (or midplane) as viewed from Earth. Analogous to terrestrial latitude, galactic latitude is usually measured in degrees (°).
Definition
The first galactic coordinate system was used by William Herschel in 1785. A number of different coordinate systems, each differing by a few degrees, were used until 1932, when Lund Observatory assembled a set of conversion tables that defined a standard galactic coordinate system based on a galactic north pole at RA 12h 40m, dec +28° (in the B1900.0 epoch convention) and a 0° longitude at the point where the galactic plane and equatorial plane intersected.
In 1958, the International Astronomical Union (IAU) defined the galactic coordinate system in reference to radio observations of galactic neutral hydrogen through the hydrogen line, changing the definition of the Galactic longitude by 32° and the latitude by 1.5°. In the equatorial coordinate system, for equinox and equator of 1950.0, the north galactic pole is defined at right ascension 12h 49m, declination +27.4°, in the constellation Coma Berenices, with a probable error of ±0.1°. Longitude 0° is the great semicircle that originates from this point along the line in position angle 123° with respect to the equatorial pole. The galactic longitude increases in the same direction as right ascension. Galactic latitude is positive towards the north galactic pole, with a plane passing through the Sun and parallel to the galactic equator being 0°, whilst the poles are ±90°. Based on this definition, the galactic poles and equator can be found from spherical trigonometry and can be precessed to other epochs; see the table.
The IAU recommended that during the transition period from the old, pre-1958 system to the new, the old longitude and latitude should be designated l^I and b^I while the new should be designated l^II and b^II. This convention is occasionally seen.
Radio source Sagittarius A*, which is the best physical marker of the true Galactic Center, is located at 17h 45m 40.04s, −29° 00′ 28.1″ (J2000). Rounded to the same number of digits as the table, 17h 45.7m, −29.01° (J2000), there is an offset of about 0.07° from the defined coordinate center, well within the 1958 error estimate of ±0.1°. Due to the Sun's position, which currently lies north of the midplane, and the heliocentric definition adopted by the IAU, the galactic coordinates of Sgr A* are latitude 0.046° south, longitude 359.944°. Since as defined the galactic coordinate system does not rotate with time, Sgr A* is actually decreasing in longitude at the rate of galactic rotation at the Sun, approximately 5.7 milliarcseconds per year (see Oort constants).
Conversion between equatorial and galactic coordinates
An object's location expressed in the equatorial coordinate system can be transformed into the galactic coordinate system. In these equations, α is right ascension, δ is declination. NGP refers to the coordinate values of the north galactic pole and NCP to those of the north celestial pole.
The reverse (galactic to equatorial) can also be accomplished with the following conversion formulas.
Here, for J2000.0, the north galactic pole lies at α_NGP = 192.85948°, δ_NGP = +27.12825°, and the galactic longitude of the north celestial pole is l_NCP = 122.93192°.
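Written out, the standard forms of these transformations are:

\begin{align}
\sin b &= \sin\delta_{\text{NGP}}\sin\delta + \cos\delta_{\text{NGP}}\cos\delta\cos(\alpha-\alpha_{\text{NGP}}),\\
\cos b\,\sin(l_{\text{NCP}}-l) &= \cos\delta\,\sin(\alpha-\alpha_{\text{NGP}}),\\
\cos b\,\cos(l_{\text{NCP}}-l) &= \cos\delta_{\text{NGP}}\sin\delta - \sin\delta_{\text{NGP}}\cos\delta\cos(\alpha-\alpha_{\text{NGP}}),
\end{align}

and, conversely,

\begin{align}
\sin\delta &= \sin\delta_{\text{NGP}}\sin b + \cos\delta_{\text{NGP}}\cos b\cos(l_{\text{NCP}}-l),\\
\cos\delta\,\sin(\alpha-\alpha_{\text{NGP}}) &= \cos b\,\sin(l_{\text{NCP}}-l),\\
\cos\delta\,\cos(\alpha-\alpha_{\text{NGP}}) &= \cos\delta_{\text{NGP}}\sin b - \sin\delta_{\text{NGP}}\cos b\cos(l_{\text{NCP}}-l).
\end{align}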
Rectangular coordinates
In some applications use is made of rectangular coordinates based on galactic longitude and latitude and distance. In some work regarding the distant past or future the galactic coordinate system is taken as rotating so that the x-axis always points to the centre of the galaxy.
There are two major rectangular variations of galactic coordinates, commonly used for computing space velocities of galactic objects. In these systems the xyz-axes are designated UVW, but the definitions vary by author. In one system, the U axis is directed toward the Galactic Center (l = 0°), and it is a right-handed system (positive V towards the east and positive W towards the north galactic pole); in the other, the U axis is directed toward the galactic anticenter (l = 180°), and it is a left-handed system (positive V towards the east and positive W towards the north galactic pole).
In the constellations
The galactic equator runs through the following constellations:
Sagittarius
Serpens
Scutum
Aquila
Sagitta
Vulpecula
Cygnus
Cepheus
Cassiopeia
Camelopardalis
Perseus
Auriga
Taurus
Gemini
Orion
Monoceros
Canis Major
Puppis
Vela
Carina
Crux
Centaurus
Circinus
Norma
Ara
Scorpius
Ophiuchus
| Physical sciences | Celestial sphere: General | Astronomy |
48395 | https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes%20equations | Navier–Stokes equations | The Navier–Stokes equations are partial differential equations which describe the motion of viscous fluid substances. They were named after French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
The Navier–Stokes equations mathematically express momentum balance for Newtonian fluids and make use of conservation of mass. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term—hence describing viscous flow. The difference between them and the closely related Euler equations is that Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes equations are parabolic and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable).
The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics.
The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in the domain. This is called the Navier–Stokes existence and smoothness problem. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$1 million prize for a solution or a counterexample.
Flow velocity
The solution of the equations is a flow velocity. It is a vector field—to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such as pressure or temperature may be found using dynamical equations and relations. This is different from what one normally sees in classical mechanics, where solutions are typically trajectories of position of a particle or deflection of a continuum. Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute various trajectories. In particular, the streamlines of a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel. These paths are the integral curves whose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time.
General continuum equations
The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation, whose general convective form is:
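A standard statement of that convective form, in the notation defined below, is

\[
\rho\frac{D\mathbf{u}}{Dt} = \nabla\cdot\boldsymbol{\sigma} + \rho\,\mathbf{g},
\]

where σ is the Cauchy stress tensor.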
By setting the Cauchy stress tensor to be the sum of a viscosity term (the deviatoric stress) and a pressure term (volumetric stress), we arrive at:
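That is, in the same notation,

\[
\rho\frac{D\mathbf{u}}{Dt} = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \rho\,\mathbf{g},
\]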
where
D/Dt is the material derivative, defined as ∂/∂t + u · ∇,
ρ is the (mass) density,
u is the flow velocity,
∇ · is the divergence,
p is the pressure,
t is time,
τ is the deviatoric stress tensor, which has order 2,
g represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electrostatic accelerations, and so on.
In this form, it is apparent that under the assumption of an inviscid fluid – no deviatoric stress – the Cauchy equations reduce to the Euler equations.
Assuming conservation of mass, and using the known properties of divergence and gradient, we can use the mass continuity equation, which represents the mass per unit volume of a homogenous fluid with respect to space and time (i.e., the material derivative Dρ/Dt) of any finite volume V, to represent the change of velocity in fluid media:
where
Dρ/Dt is the material derivative of mass per unit volume (density, ρ),
∭_V dV is the mathematical operation for the integration throughout the volume V,
∂/∂t is the partial derivative mathematical operator,
∇ · u is the divergence of the flow velocity u, which is a scalar field,Note 1
∇ρ is the gradient of the density ρ, which is the vector derivative of a scalar field,Note 1
Note 1 – Refer to the mathematical operator del, represented by the nabla (∇) symbol.
to arrive at the conservation form of the equations of motion. This is often written:
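A standard statement of this conservation form, consistent with the symbols defined above, is

\[
\frac{\partial}{\partial t}(\rho\,\mathbf{u}) + \nabla\cdot(\rho\,\mathbf{u}\otimes\mathbf{u}) = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \rho\,\mathbf{g}.
\]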
where u ⊗ u is the outer product of the flow velocity u with itself: u ⊗ u = u uᵀ.
The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity).
All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below.
Convective acceleration
A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
Compressible flow
Remark: here, the deviatoric stress tensor is denoted τ, as it was in the general continuum equations and in the incompressible flow section.
The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient ∇u, or more simply the rate-of-strain tensor: ε ≡ ½ (∇u + (∇u)ᵀ);
the deviatoric stress is linear in this variable: τ(ε) = C : ε, where C is independent of the strain-rate tensor and is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and : is the double-dot product.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently C is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity λ and the dynamic viscosity μ, as is usual in linear elasticity:
where I is the identity tensor, and tr(ε) is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as: τ = λ tr(ε) I + 2με.
Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow: tr(ε) = ∇ · u.
Given this relation, and since the trace of the identity tensor in three dimensions is three: tr(I) = 3,
the trace of the stress tensor in three dimensions becomes: tr(τ) = (3λ + 2μ) ∇ · u.
So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics:
Introducing the bulk viscosity ζ ≡ λ + ⅔μ,
we arrive at the linear constitutive equation in the form usually employed in thermal hydraulics:
which can also be arranged in the other usual form:
Note that in the compressible case the pressure is no longer proportional to the isotropic stress term, since there is the additional bulk viscosity term:
and the deviatoric stress tensor is still coincident with the shear stress tensor (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity:
Both bulk viscosity ζ and dynamic viscosity μ need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, for example pressure and temperature. Any equation that makes explicit one of these transport coefficients in terms of the conservation variables is called an equation of state.
The most general of the Navier–Stokes equations become
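In convective form, using the symbols defined above, this reads

\[
\rho\frac{D\mathbf{u}}{Dt} = -\nabla p + \nabla\cdot\left\{\mu\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T} - \tfrac{2}{3}(\nabla\cdot\mathbf{u})\,\mathbf{I}\right] + \zeta(\nabla\cdot\mathbf{u})\,\mathbf{I}\right\} + \rho\,\mathbf{g}.
\]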
in index notation, the equation can be written as
The corresponding equation in conservation form can be obtained by considering that, given the mass continuity equation, the left side is equivalent to:
To give finally:
Navier–Stokes momentum equation (conservative form):
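A standard statement of this conservative form, consistent with the constitutive relation above, is

\[
\frac{\partial}{\partial t}(\rho\,\mathbf{u}) + \nabla\cdot(\rho\,\mathbf{u}\otimes\mathbf{u}) = -\nabla p + \nabla\cdot\left\{\mu\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T} - \tfrac{2}{3}(\nabla\cdot\mathbf{u})\,\mathbf{I}\right] + \zeta(\nabla\cdot\mathbf{u})\,\mathbf{I}\right\} + \rho\,\mathbf{g}.
\]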
Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. For example, in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called dispersion. In some cases, the second viscosity can be assumed to be constant, in which case the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure, as demonstrated below.
However, this difference is usually neglected most of the time (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming ζ = 0. The assumption ζ = 0 is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations become
If the dynamic and bulk viscosities are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of the tensor ∇u is the vector Laplacian ∇²u and the divergence of the tensor (∇u)ᵀ is ∇(∇ · u), one finally arrives at the compressible Navier–Stokes momentum equation:
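Under those constant-viscosity assumptions the equation takes the standard form

\[
\rho\frac{D\mathbf{u}}{Dt} = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \left(\zeta + \tfrac{\mu}{3}\right)\nabla(\nabla\cdot\mathbf{u}) + \rho\,\mathbf{g},
\]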
where D/Dt is the material derivative, μ/ρ is the shear kinematic viscosity and ζ/ρ is the bulk kinematic viscosity. The left-hand side changes in the conservation form of the Navier–Stokes momentum equation.
By bringing the operator on the flow velocity on the left side, one also has:
The convective acceleration term can also be written as
where the vector ω × u is known as the Lamb vector.
For the special case of an incompressible flow, the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow, resulting in a solenoidal velocity field with ∇ · u = 0.
Incompressible flow
The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient ∇u.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently the viscosity tensor is isotropic; the deviatoric stress tensor can then be expressed in terms of the dynamic viscosity μ:
where
ε is the rate-of-strain tensor, ε = ½ (∇u + (∇u)ᵀ). So this decomposition can be made explicit as: τ = 2με.
This constitutive equation is also called the Newtonian law of viscosity.
Dynamic viscosity μ need not be constant – in incompressible flows it can depend on density and on pressure. Any equation that makes explicit one of these transport coefficients in terms of the conservative variables is called an equation of state.
The divergence of the deviatoric stress in the case of uniform viscosity is given by: ∇ · τ = 2μ ∇ · ε = μ ∇²u,
because ∇ · u = 0 for an incompressible fluid.
Incompressibility rules out density and pressure waves like sound or shock waves, so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well for all fluids at low Mach numbers (say, up to about Mach 0.3), such as for modelling air winds at normal temperatures. The incompressible Navier–Stokes equations are best visualized by dividing by the density:
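A standard statement of the resulting system is

\[
\frac{\partial\mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{\nabla p}{\rho} + \nu\,\nabla^{2}\mathbf{u} + \mathbf{g}, \qquad \nabla\cdot\mathbf{u} = 0,
\]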
where ν = μ/ρ is called the kinematic viscosity.
By isolating the fluid velocity, one can also state:
If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density ρ₀, then we have
where p/ρ₀ is called the unit pressure head.
In incompressible flows, the pressure field satisfies the Poisson equation,
which is obtained by taking the divergence of the momentum equations.
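Neglecting body forces (or absorbing a conservative one into the pressure), this Poisson equation takes the form

\[
\nabla^{2} p = -\rho\,\nabla\cdot\left[(\mathbf{u}\cdot\nabla)\,\mathbf{u}\right].
\]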
It is well worth observing the meaning of each term (compare to the Cauchy momentum equation):
The higher-order term, namely the shear-stress divergence ∇ · τ, has simply reduced to the vector Laplacian term μ∇²u. This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as a diffusion of momentum, in much the same way as heat conduction. In fact, neglecting the convection term, the incompressible Navier–Stokes equations lead to a vector diffusion equation (namely the Stokes equations), but in general the convection term is present, so the incompressible Navier–Stokes equations belong to the class of convection–diffusion equations.
In the usual case of an external field being a conservative field:
by defining the hydraulic head:
one can finally condense the whole source in one term, arriving to the incompressible Navier–Stokes equation with conservative external field:
The incompressible Navier–Stokes equations with uniform density and viscosity and a conservative external field constitute the fundamental equation of hydraulics. The domain for these equations is commonly a Euclidean space of dimension 3 or less, for which an orthogonal coordinate reference frame is usually set in order to make explicit the system of scalar partial differential equations to be solved. The 3-dimensional orthogonal coordinate systems are three: Cartesian, cylindrical, and spherical. Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the Euclidean space employed; this is the case also for the first-order terms (like the variation and convection ones) in non-Cartesian orthogonal coordinate systems. But for the higher-order terms (the two coming from the divergence of the deviatoric stress that distinguish the Navier–Stokes equations from the Euler equations), some tensor calculus is required to deduce an expression in non-Cartesian orthogonal coordinate systems.
A special case of the fundamental equation of hydraulics is Bernoulli's equation.
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations,
where Π^S and Π^I are solenoidal and irrotational projection operators satisfying Π^S + Π^I = 1, and f^S and f^I are the non-conservative and conservative parts of the body force. This result follows from the Helmholtz theorem (also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation.
The explicit functional form of the projection operator in 3D is found from the Helmholtz Theorem:
with a similar structure in 2D. Thus the governing equation is an integro-differential equation similar to Coulomb and Biot–Savart law, not convenient for numerical computation.
An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation, is given by,
for divergence-free test functions satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There one will be able to address the question "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?".
The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This all would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition.
Weak form of the incompressible Navier–Stokes equations
Strong form
Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density in a domain
with boundary
with Γ_D and Γ_N being the portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied (Γ = Γ_D ∪ Γ_N):
u is the fluid velocity, p the fluid pressure, f a given forcing term, n̂ the outward-directed unit normal vector to Γ_N, and σ(u, p) the viscous stress tensor, defined as: σ(u, p) = −pI + 2με(u).
Let μ be the dynamic viscosity of the fluid, I the second-order identity tensor and ε(u) the strain-rate tensor, defined as: ε(u) = ½ (∇u + (∇u)ᵀ).
The functions g and h are given Dirichlet and Neumann boundary data, while u₀ is the initial condition. The first equation is the momentum balance equation, while the second represents mass conservation, namely the continuity equation.
Assuming constant dynamic viscosity, using the vectorial identity
and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as:
Moreover, note that the Neumann boundary conditions can be rearranged as:
Weak form
In order to find the weak form of the Navier–Stokes equations, firstly, consider the momentum equation
multiply it by a test function v, defined in a suitable space V, and integrate both members with respect to the domain Ω:
Integrating by parts the diffusive and the pressure terms and using Gauss' theorem:
Using these relations, one gets:
In the same fashion, the continuity equation is multiplied by a test function q belonging to a space Q and integrated in the domain Ω:
The space functions are chosen as follows:
Considering that the test function vanishes on the Dirichlet boundary and considering the Neumann condition, the integral on the boundary can be rearranged as:
Having this in mind, the weak formulation of the Navier–Stokes equations is expressed as:
Discrete velocity
With partitioning of the problem domain and defining basis functions on the partitioned domain, the discrete form of the governing equation is
It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the existence of the stream function or vector potential is necessary by the Helmholtz theorem. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given by Stokes' theorem. Discussion will be restricted to 2D in the following.
We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from the plate-bending literature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions,
Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements.
Taking the curl of the scalar stream function elements gives divergence-free velocity elements. The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces.
Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, with no-slip velocity conditions on surfaces.
Stream function differences across open channels determine the flow. No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions.
The algebraic equations to be solved are simple to set up, but of course are non-linear, requiring iteration of the linearized equations.
Similar considerations apply to three dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and there exists no simple relation between the gradient and the curl as was the case in 2D.
Pressure recovery
Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is,
where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functions one would choose the irrotational vector elements obtained from the gradient of the pressure element.
Non-inertial frame of reference
The rotating frame of reference introduces some interesting pseudo-forces into the equations through the material derivative term. Consider a stationary inertial frame of reference K, and a non-inertial frame of reference K′, which is translating with velocity U(t) and rotating with angular velocity Ω(t) with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes
Here x and u are measured in the non-inertial frame. The first term in the parenthesis represents Coriolis acceleration, the second term is due to centrifugal acceleration, the third is due to the linear acceleration of K′ with respect to K and the fourth term is due to the angular acceleration of K′ with respect to K.
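Written out, with the four fictitious accelerations grouped in parentheses in the order just described, the equation takes the standard form

\[
\rho\frac{D\mathbf{u}}{Dt} = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{g} - \rho\left(2\,\boldsymbol{\Omega}\times\mathbf{u} + \boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{x}) + \frac{d\mathbf{U}}{dt} + \frac{d\boldsymbol{\Omega}}{dt}\times\mathbf{x}\right).
\]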
Other equations
The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed, how much depending on the assumptions made. This additional information may include boundary data (no-slip, capillary surface, etc.), conservation of mass, balance of energy, and/or an equation of state.
Continuity equation for incompressible fluid
Regardless of the flow assumptions, a statement of the conservation of mass is generally necessary. This is achieved through the mass continuity equation, as discussed above in the "General continuum equations" within this article, as follows:
A fluid medium for which the density ρ is constant is called incompressible. Therefore, the rate of change of density with respect to time, ∂ρ/∂t, and the gradient of density, ∇ρ, are equal to zero. In this case the general equation of continuity, ∂ρ/∂t + ∇ · (ρu) = 0, reduces to ρ ∇ · u = 0. Furthermore, since the density ρ is a non-zero constant, both sides can be divided by it, so the continuity equation for an incompressible fluid reduces further to ∇ · u = 0. This relationship identifies that the divergence of the flow velocity vector u is equal to zero, which means that for an incompressible fluid the flow velocity field is a solenoidal vector field or a divergence-free vector field. Note that this relationship can be combined with identities for the vector Laplace operator and the vorticity ω = ∇ × u, which for an incompressible fluid give ∇²u = −∇ × (∇ × u) = −∇ × ω.
Stream function for incompressible 2D fluid
Taking the curl of the incompressible Navier–Stokes equation results in the elimination of pressure. This is especially easy to see if 2D Cartesian flow is assumed (like in the degenerate 3D case with u_z = 0 and no dependence of anything on z), where the equations reduce to:
Differentiating the first with respect to y, the second with respect to x and subtracting the resulting equations will eliminate pressure and any conservative force.
For incompressible flow, defining the stream function ψ through u = ∂ψ/∂y, v = −∂ψ/∂x
results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation:
where ∇⁴ is the 2D biharmonic operator and ν is the kinematic viscosity, ν = μ/ρ. We can also express this compactly using the Jacobian determinant:
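With this sign convention the two displays above take the standard form

\[
\frac{\partial}{\partial t}\left(\nabla^{2}\psi\right) + \frac{\partial\psi}{\partial y}\,\frac{\partial}{\partial x}\left(\nabla^{2}\psi\right) - \frac{\partial\psi}{\partial x}\,\frac{\partial}{\partial y}\left(\nabla^{2}\psi\right) = \nu\,\nabla^{4}\psi,
\qquad\text{i.e.}\qquad
\frac{\partial}{\partial t}\left(\nabla^{2}\psi\right) + \frac{\partial\left(\psi,\,\nabla^{2}\psi\right)}{\partial\left(y,\,x\right)} = \nu\,\nabla^{4}\psi .
\]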
This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation for creeping flow results when the left side is assumed zero.
In axisymmetric flow another stream function formulation, called the Stokes stream function, can be used to describe the velocity components of an incompressible flow with one scalar function.
The incompressible Navier–Stokes equation is a differential algebraic equation, having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest.
Properties
Nonlinearity
The Navier–Stokes equations are nonlinear partial differential equations in the general case and so remain in almost every real situation. In some cases, such as one-dimensional flow and Stokes flow (or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to the turbulence that the equations model.
The nonlinearity is due to convective acceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective but laminar (nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small converging nozzle. Such flows, whether exactly solvable or not, can often be thoroughly studied and understood.
Turbulence
Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed that it is due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (the Reynolds number quantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly.
The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult, and due to the significantly different mixing-length scales that are involved in turbulent flow, a stable solution requires such a fine mesh resolution that the computational time becomes infeasible for direct numerical simulation. Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as the Reynolds-averaged Navier–Stokes equations (RANS), supplemented with turbulence models, are used in practical computational fluid dynamics (CFD) applications when modeling turbulent flows. Some models include the Spalart–Allmaras, k–ω, k–ε, and SST models, which add a variety of additional equations to bring closure to the RANS equations. Large eddy simulation (LES) can also be used to solve these equations numerically. This approach is computationally more expensive—in time and in computer memory—than RANS, but produces better results because it explicitly resolves the larger turbulent scales.
Applicability
Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations.
The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities. At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example, capillarity of internal layers in fluids appears for flow with high gradients. For problems with a large Knudsen number, the Boltzmann equation may be a suitable replacement.
Failing that, one may have to resort to molecular dynamics or various hybrid methods.
Another limitation is simply the complicated nature of the equations. Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear; truly general models for the flow of other kinds of fluids (such as blood) do not exist.
Application to specific problems
The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated as multiphase flow driven by surface tension.
Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation; this may be followed by scale analysis to further simplify the problem.
Parallel flow
Assume steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates; the resulting scaled (dimensionless) boundary value problem is:
The boundary condition is the no-slip condition. This problem is easily solved for the flow field:
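One common scaling gives the boundary value problem and its solution as

\[
\frac{d^{2}u}{dy^{2}} = -1, \qquad u(0) = u(1) = 0, \qquad\Rightarrow\qquad u(y) = \frac{y - y^{2}}{2}.
\]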
From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate.
Radial flow
Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be the radial flow between parallel plates; this involves convection and thus non-linearity. The velocity field may be represented by a function that must satisfy:
This ordinary differential equation is what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). The nonlinear term makes this a very difficult problem to solve analytically (a lengthy implicit solution may be found which involves elliptic integrals and roots of cubic polynomials). Issues with the actual existence of solutions arise for R > 1.41 (approximately; this is not √2), the parameter R being the Reynolds number with appropriately chosen scales. This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows.
Convection
A type of natural convection that can be described by the Navier–Stokes equation is the Rayleigh–Bénard convection. It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility.
Exact solutions of the Navier–Stokes equations
Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases—with the non-linear terms in the Navier–Stokes equations equal to zero—are Poiseuille flow, Couette flow and the oscillatory Stokes boundary layer. But also, more interesting examples, solutions to the full non-linear equations, exist, such as Jeffery–Hamel flow, Von Kármán swirling flow, stagnation point flow, Landau–Squire jet, and the Taylor–Green vortex (Landau & Lifshitz 1987, pp. 75–88). Time-dependent self-similar solutions of the three-dimensional incompressible Navier–Stokes equations in Cartesian coordinates can be given with the help of Kummer functions with quadratic arguments. For the compressible Navier–Stokes equations the time-dependent self-similar solutions are instead Whittaker functions, again with quadratic arguments, when the polytropic equation of state is used as a closing condition. Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers.
Under additional assumptions, the component parts can be separated.
A three-dimensional steady-state vortex solution
A steady-state example with no singularities comes from considering the flow along the lines of a Hopf fibration. Let r be a constant radius of the inner coil. One set of solutions is given by:
for arbitrary constants A and B. This is a solution in a non-viscous gas (compressible fluid) whose density, velocities and pressure go to zero far from the origin. (Note this is not a solution to the Clay Millennium problem because that refers to incompressible fluids where ρ is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to any turbulence properties.) It is also worth pointing out that the components of the velocity vector are exactly those from the Pythagorean quadruple parametrization. Other choices of density and pressure are possible with the same velocity field:
Viscous three-dimensional periodic solutions
Two examples of periodic, fully three-dimensional viscous solutions have been described in the literature. These solutions are defined on a three-dimensional torus and are characterized by positive and negative helicity respectively.
The solution with positive helicity is given by:
where k is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass has a prescribed value at the initial time.
The pressure field is obtained from the velocity field as p = p₀ − ρ₀‖u‖²/2 (where p₀ and ρ₀ are reference values for the pressure and density fields respectively).
Since both solutions belong to the class of Beltrami flows, the vorticity field is parallel to the velocity and, for the case with positive helicity, is given by ω = k u.
These solutions can be regarded as a generalization in three dimensions of the classic two-dimensional Taylor–Green vortex.
Wyld diagrams
Wyld diagrams are bookkeeping graphs that correspond to the Navier–Stokes equations via a perturbation expansion of the fundamental continuum mechanics. Similar to the Feynman diagrams in quantum field theory, these diagrams are an extension of Keldysh's technique for nonequilibrium processes in fluid dynamics. In other words, these diagrams assign graphs to the (often) turbulent phenomena in turbulent fluids by allowing correlated and interacting fluid particles to obey stochastic processes associated to pseudo-random functions in probability distributions.
Representations in 3D
Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g., ∂ₓu means the partial derivative of u with respect to x, and ∂²ᵧu means the second-order partial derivative of u with respect to y.
A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier-Stokes equation for 3D turbulent fluid flows. On suitably short time scales, the dynamics of turbulence is deterministic.
Cartesian coordinates
From the general form of the Navier–Stokes equations, with the velocity vector expanded as u = (u_x, u_y, u_z), sometimes respectively named u, v, w, we may write the vector equation explicitly:
Note that gravity has been accounted for as a body force, and the values of g_x, g_y, g_z will depend on the orientation of gravity with respect to the chosen set of coordinates.
The continuity equation reads:
When the flow is incompressible, ρ does not change for any fluid particle, and its material derivative vanishes: Dρ/Dt = 0. The continuity equation is reduced to: ∇ · u = 0.
Thus, for the incompressible version of the Navier–Stokes equation the second part of the viscous terms fall away (see Incompressible flow).
This system of four equations comprises the most commonly used and studied form. Though comparatively more compact than other representations, this is still a nonlinear system of partial differential equations for which solutions are difficult to obtain.
Cylindrical coordinates
A change of variables on the Cartesian equations will yield the following momentum equations for r, φ, and z:
The gravity components will generally not be constants, however for most applications either the coordinates are chosen so that the gravity components are constant or else it is assumed that gravity is counteracted by a pressure field (for example, flow in horizontal pipe is treated normally without gravity and without a vertical pressure gradient). The continuity equation is:
This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. A very common case is axisymmetric flow with the assumption of no tangential velocity (u_φ = 0), where the remaining quantities are independent of φ:
Spherical coordinates
In spherical coordinates, the r, θ, and φ momentum equations are (note the convention used: θ is the polar angle, or colatitude, 0 ≤ θ ≤ π):
Mass continuity will read:
These equations could be (slightly) compacted by, for example, factoring 1/r² from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities.
Navier–Stokes equations use in games
The Navier–Stokes equations are used extensively in video games in order to model a wide variety of natural phenomena. Simulations of small-scale gaseous fluids, such as fire and smoke, are often based on the seminal paper "Real-Time Fluid Dynamics for Games" by Jos Stam, which elaborates one of the methods proposed in Stam's earlier, more famous paper "Stable Fluids" from 1999. Stam proposes stable fluid simulation using a Navier–Stokes solution method from 1968, coupled with an unconditionally stable semi-Lagrangian advection scheme, as first proposed in 1992.
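The semi-Lagrangian advection step at the heart of Stam's method traces each grid point backwards along the velocity field and samples the field there, which is what makes the scheme unconditionally stable. A minimal NumPy sketch of that step (grid layout, names, and boundary handling are simplified assumptions, not Stam's exact implementation):

```python
import numpy as np

def advect(field, u, v, dt):
    """Semi-Lagrangian advection of a scalar field through the velocity
    components (u, v), all given on the same unit-spaced 2D grid."""
    ny, nx = field.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # Trace each grid point backwards along the flow for one time step...
    x_back = np.clip(x - dt * u, 0, nx - 1)
    y_back = np.clip(y - dt * v, 0, ny - 1)
    # ...and bilinearly interpolate the field at the departure points.
    x0 = np.floor(x_back).astype(int); x1 = np.minimum(x0 + 1, nx - 1)
    y0 = np.floor(y_back).astype(int); y1 = np.minimum(y0 + 1, ny - 1)
    sx = x_back - x0
    sy = y_back - y0
    top = field[y0, x0] * (1 - sx) + field[y0, x1] * sx
    bot = field[y1, x0] * (1 - sx) + field[y1, x1] * sx
    return top * (1 - sy) + bot * sy
```

A full solver would combine this step with diffusion and a pressure projection that restores the divergence-free condition.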
More recent implementations based upon this work run on the game system's graphics processing unit (GPU) as opposed to the central processing unit (CPU) and achieve a much higher degree of performance.
Many improvements have been proposed to Stam's original work, which suffers inherently from high numerical dissipation in both velocity and mass.
An introduction to interactive fluid simulation can be found in the 2007 ACM SIGGRAPH course, Fluid Simulation for Computer Animation.
| Physical sciences | Fluid mechanics | null |
48396 | https://en.wikipedia.org/wiki/Mathematical%20analysis | Mathematical analysis | Analysis is the branch of mathematics dealing with continuous functions, limits, and related theories, such as differentiation, integration, measure, infinite sequences, series, and analytic functions.
These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis.
Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).
History
Ancient
Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but many of its ideas can be traced back to earlier mathematicians. Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. (Strictly speaking, the point of the paradox is to deny that the infinite sum exists.) Later, Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems, a work rediscovered in the 20th century. In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century CE to find the area of a circle. From Jain literature, it appears that Hindus were in possession of the formulae for the sum of the arithmetic and geometric series as early as the 4th century BCE.
Ācārya Bhadrabāhu uses the sum of a geometric series in his Kalpasūtra in .
Medieval
Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century. In the 12th century, the Indian mathematician Bhāskara II used infinitesimals and employed what is now known as Rolle's theorem.
In the 14th century, Madhava of Sangamagrama developed infinite series expansions, now called Taylor series, of functions such as sine, cosine, tangent and arctangent. Alongside his development of Taylor series of trigonometric functions, he also estimated the magnitude of the error terms resulting from truncating these series, and gave a rational approximation of some infinite series. His followers at the Kerala School of Astronomy and Mathematics further expanded his works, up to the 16th century.
Modern
Foundations
The modern foundations of mathematical analysis were established in 17th century Europe. This began when Fermat and Descartes developed analytic geometry, which is the precursor to modern calculus. Fermat's method of adequality allowed him to determine the maxima and minima of functions and the tangents of curves. Descartes's publication of La Géométrie in 1637, which introduced the Cartesian coordinate system, is considered to be the establishment of mathematical analysis. It would be a few decades later that Newton and Leibniz independently developed infinitesimal calculus, which grew, with the stimulus of applied work that continued through the 18th century, into analysis topics such as the calculus of variations, ordinary and partial differential equations, Fourier analysis, and generating functions. During this period, calculus techniques were applied to approximate discrete problems by continuous ones.
Modernization
In the 18th century, Euler introduced the notion of a mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816, but Bolzano's work did not become widely known until the 1870s. In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work, particularly by Euler. Instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations and harmonic analysis. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis. Around the same time, Riemann introduced his theory of integration, and made significant advances in complex analysis.
Towards the end of the 19th century, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Dedekind then constructed the real numbers by Dedekind cuts, in which irrational numbers are formally defined, which serve to fill the "gaps" between rational numbers, thereby creating a complete set: the continuum of real numbers, which had already been developed by Simon Stevin in terms of decimal expansions. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the "size" of the set of discontinuities of real functions.
Also, various pathological objects, (such as nowhere continuous functions, continuous but nowhere differentiable functions, and space-filling curves), commonly known as "monsters", began to be investigated. In this context, Jordan developed his theory of measure, Cantor developed what is now called naive set theory, and Baire proved the Baire category theorem. In the early 20th century, calculus was formalized using an axiomatic set theory. Lebesgue greatly improved measure theory, and introduced his own theory of integration, now known as Lebesgue integration, which proved to be a big improvement over Riemann's. Hilbert introduced Hilbert spaces to solve integral equations. The idea of normed vector space was in the air, and in the 1920s Banach created functional analysis.
Important concepts
Metric spaces
In mathematics, a metric space is a set where a notion of distance (called a metric) between elements of the set is defined.
Much of analysis happens in some metric space; the most commonly used are the real line, the complex plane, Euclidean space, other vector spaces, and the integers. Examples of analysis without a metric include measure theory (which describes size rather than distance) and functional analysis (which studies topological vector spaces that need not have any sense of distance).
Formally, a metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function
d : M × M → ℝ
such that for any x, y, z ∈ M, the following holds:
d(x, y) = 0 if and only if x = y (identity of indiscernibles),
d(x, y) = d(y, x) (symmetry), and
d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).
By taking the third property and letting z = x, it can be shown that d(x, y) ≥ 0 (non-negativity).
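These axioms can be checked numerically for a concrete metric. A minimal Python sketch using the Euclidean distance on a few sample points (an illustration on finitely many points, not a proof):

```python
import math

def d(p, q):
    """Euclidean metric on tuples of coordinates (math.dist, Python 3.8+)."""
    return math.dist(p, q)

pts = [(0.0, 0.0), (3.0, 4.0), (-1.0, 2.0)]
for x in pts:
    for y in pts:
        assert d(x, y) == d(y, x)              # symmetry
        assert (d(x, y) == 0) == (x == y)      # identity of indiscernibles
        for z in pts:
            # small tolerance guards against floating-point rounding
            assert d(x, z) <= d(x, y) + d(y, z) + 1e-12  # triangle inequality
```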
Sequences and limits
A sequence is an ordered list. Like a set, it contains members (also called elements, or terms). Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Most precisely, a sequence can be defined as a function whose domain is a countable totally ordered set, such as the natural numbers.
One of the most important properties of a sequence is convergence. Informally, a sequence converges if it has a limit. Continuing informally, a (singly-infinite) sequence has a limit if it approaches some point x, called the limit, as n becomes very large. That is, for an abstract sequence (a_n) (with n running from 1 to infinity understood) the distance between a_n and x approaches 0 as n → ∞, denoted lim_{n→∞} a_n = x.
Main branches
Calculus
Real analysis
Real analysis (traditionally, the "theory of functions of a real variable") is a branch of mathematical analysis dealing with the real numbers and real-valued functions of a real variable. In particular, it deals with the analytic properties of real functions and sequences, including convergence and limits of sequences of real numbers, the calculus of the real numbers, and continuity, smoothness and related properties of real-valued functions.
Complex analysis
Complex analysis (traditionally known as the "theory of functions of a complex variable") is the branch of mathematical analysis that investigates functions of complex numbers. It is useful in many branches of mathematics, including algebraic geometry, number theory, applied mathematics; as well as in physics, including hydrodynamics, thermodynamics, mechanical engineering, electrical engineering, and particularly, quantum field theory.
Complex analysis is particularly concerned with the analytic functions of complex variables (or, more generally, meromorphic functions). Because the separate real and imaginary parts of any analytic function must satisfy Laplace's equation, complex analysis is widely applicable to two-dimensional problems in physics.
Functional analysis
Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (e.g. inner product, norm, topology, etc.) and the linear operators acting upon these spaces and respecting these structures in a suitable sense. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining continuous, unitary etc. operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.
Harmonic analysis
Harmonic analysis is a branch of mathematical analysis concerned with the representation of functions and signals as the superposition of basic waves. This includes the study of the notions of Fourier series and Fourier transforms (Fourier analysis), and of their generalizations. Harmonic analysis has applications in areas as diverse as music theory, number theory, representation theory, signal processing, quantum mechanics, tidal analysis, and neuroscience.
Differential equations
A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. Differential equations play a prominent role in engineering, physics, economics, biology, and other disciplines.
Differential equations arise in many areas of science and technology, specifically whenever a deterministic relation involving some continuously varying quantities (modeled by functions) and their rates of change in space or time (expressed as derivatives) is known or postulated. This is illustrated in classical mechanics, where the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow one (given the position, velocity, acceleration and various forces acting on the body) to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly.
Measure theory
A measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the n-dimensional Euclidean space R^n. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word – specifically, 1.
Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X. It must assign 0 to the empty set and be (countably) additive: the measure of a 'large' subset that can be decomposed into a finite (or countable) number of 'smaller' disjoint subsets is the sum of the measures of the 'smaller' subsets. In general, if one wants to associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets; the so-called measurable subsets, which are required to form a σ-algebra. This means that the empty set, countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated in the sense of being badly mixed up with their complement. Indeed, their existence is a non-trivial consequence of the axiom of choice.
Numerical analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).
Modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.
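A standard example of trading exactness for controlled error (added here as an illustration; the target √2 is chosen for the example) is Newton's method applied to x² − 2 = 0:

```python
# Newton's method for f(x) = x^2 - 2: iterate x <- x - f(x)/f'(x) = (x + 2/x)/2.
x = 1.0
for _ in range(6):
    x = (x + 2.0 / x) / 2.0

print(x)               # approximate root, close to sqrt(2)
print(abs(x * x - 2))  # residual: how far x^2 is from 2, a computable error bound
```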
Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computations. Ordinary differential equations appear in celestial mechanics (planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
Vector analysis
Vector analysis, also called vector calculus, is a branch of mathematical analysis dealing with vector-valued functions.
Scalar analysis
Scalar analysis is a branch of mathematical analysis dealing with quantities that have magnitude but no direction. Values such as temperature are scalar because they describe the magnitude of a quantity without any associated direction of force or displacement.
Tensor analysis
Other topics
Calculus of variations deals with extremizing functionals, as opposed to ordinary calculus which deals with functions.
Harmonic analysis deals with the representation of functions or signals as the superposition of basic waves.
Geometric analysis involves the use of geometrical methods in the study of partial differential equations and the application of the theory of partial differential equations to geometry.
Clifford analysis, the study of Clifford valued functions that are annihilated by Dirac or Dirac-like operators, termed in general as monogenic or Clifford analytic functions.
p-adic analysis, the study of analysis within the context of p-adic numbers, which differs in some interesting and surprising ways from its real and complex counterparts.
Non-standard analysis, which investigates the hyperreal numbers and their functions and gives a rigorous treatment of infinitesimals and infinitely large numbers.
Computable analysis, the study of which parts of analysis can be carried out in a computable manner.
Stochastic calculus – analytical notions developed for stochastic processes.
Set-valued analysis – applies ideas from analysis and topology to set-valued functions.
Convex analysis, the study of convex sets and functions.
Idempotent analysis – analysis in the context of an idempotent semiring, where the lack of an additive inverse is compensated somewhat by the idempotent rule A + A = A.
Tropical analysis – analysis of the idempotent semiring called the tropical semiring (or max-plus algebra/min-plus algebra); a small sketch follows this list.
Constructive analysis, which is built upon a foundation of constructive, rather than classical, logic and set theory.
Intuitionistic analysis, which is developed from constructive logic like constructive analysis but also incorporates choice sequences.
Paraconsistent analysis, which is built upon a foundation of paraconsistent, rather than classical, logic and set theory.
Smooth infinitesimal analysis, which is developed in a smooth topos.
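As referenced above, here is a minimal Python sketch of the max-plus tropical semiring (the function names are illustrative): "addition" is max, which is idempotent, and "multiplication" is ordinary addition.

```python
# Max-plus (tropical) semiring: a (+) b = max(a, b), a (x) b = a + b.
# The additive identity is -infinity; note max(a, a) = a, so addition is idempotent.
NEG_INF = float("-inf")

def t_add(a: float, b: float) -> float:
    return max(a, b)

def t_mul(a: float, b: float) -> float:
    return a + b

print(t_add(3, 3))        # 3: the idempotent rule A + A = A
print(t_mul(2, 5))        # 7: the tropical "product"
print(t_add(4, NEG_INF))  # 4: -inf acts as the additive identity
```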
Applications
Techniques from analysis are also found in other areas such as:
Physical sciences
The vast majority of classical mechanics, relativity, and quantum mechanics is based on applied analysis, and differential equations in particular. Examples of important differential equations include Newton's second law, the Schrödinger equation, and the Einstein field equations.
Functional analysis is also a major factor in quantum mechanics.
Signal processing
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate individual components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
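A minimal sketch of that recipe in Python with NumPy (illustrative only; the 5 Hz tone, noise level, and 10 Hz cutoff are choices made for this example) transforms a noisy signal, zeroes out high-frequency components, and inverts the transform:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)  # 5 Hz tone + noise

spectrum = np.fft.rfft(signal)        # Fourier-transform the signal
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spectrum[freqs > 10.0] = 0.0          # manipulate: drop everything above 10 Hz
filtered = np.fft.irfft(spectrum)     # reverse the transformation

# The residual against the clean tone is much smaller than the original noise level.
print(np.max(np.abs(filtered - np.sin(2 * np.pi * 5 * t))))
```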
Other areas of mathematics
Techniques from analysis are used in many areas of mathematics, including:
Analytic number theory
Analytic combinatorics
Continuous probability
Differential entropy in information theory
Differential games
Differential geometry, the application of calculus to specific mathematical spaces known as manifolds that possess a complicated internal structure but behave in a simple manner locally.
Differentiable manifolds
Differential topology
Partial differential equations
Famous textbooks
Foundations of Analysis: The Arithmetic of Whole, Rational, Irrational and Complex Numbers, by Edmund Landau
Introductory Real Analysis, by Andrey Kolmogorov, Sergei Fomin
Differential and Integral Calculus (3 volumes), by Grigorii Fichtenholz
The Fundamentals of Mathematical Analysis (2 volumes), by Grigorii Fichtenholz
A Course Of Mathematical Analysis (2 volumes), by Sergey Nikolsky
Mathematical Analysis (2 volumes), by Vladimir Zorich
A Course of Higher Mathematics (5 volumes, 6 parts), by Vladimir Smirnov
Differential And Integral Calculus, by Nikolai Piskunov
A Course of Mathematical Analysis, by Aleksandr Khinchin
Mathematical Analysis: A Special Course, by Georgiy Shilov
Theory of Functions of a Real Variable (2 volumes), by Isidor Natanson
Problems in Mathematical Analysis, by Boris Demidovich
Problems and Theorems in Analysis (2 volumes), by George Pólya, Gábor Szegő
Mathematical Analysis: A Modern Approach to Advanced Calculus, by Tom Apostol
Principles of Mathematical Analysis, by Walter Rudin
Real Analysis: Measure Theory, Integration, and Hilbert Spaces, by Elias Stein
Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable, by Lars Ahlfors
Complex Analysis, by Elias Stein
Functional Analysis: Introduction to Further Topics in Analysis, by Elias Stein
Analysis (2 volumes), by Terence Tao
Analysis (3 volumes), by Herbert Amann, Joachim Escher
Real and Functional Analysis, by Vladimir Bogachev, Oleg Smolyanov
Real and Functional Analysis, by Serge Lang
| Mathematics | Analysis | null |
48404 | https://en.wikipedia.org/wiki/Ring%20%28mathematics%29 | Ring (mathematics) | In mathematics, rings are algebraic structures that generalize fields: multiplication need not be commutative and multiplicative inverses need not exist. Informally, a ring is a set equipped with two binary operations satisfying properties analogous to those of addition and multiplication of integers. Ring elements may be numbers such as integers or complex numbers, but they may also be non-numerical objects such as polynomials, square matrices, functions, and power series.
Formally, a ring is a set endowed with two binary operations called addition and multiplication such that the ring is an abelian group with respect to the addition operator, and the multiplication operator is associative, is distributive over the addition operation, and has a multiplicative identity element. (Some authors define rings without requiring a multiplicative identity and instead call the structure defined above a ring with identity. See .)
Whether a ring is commutative has profound implications on its behavior. Commutative algebra, the theory of commutative rings, is a major branch of ring theory. Its development has been greatly influenced by problems and ideas of algebraic number theory and algebraic geometry. The simplest commutative rings are those that admit division by non-zero elements; such rings are called fields.
Examples of commutative rings include the set of integers with their standard addition and multiplication, the set of polynomials with their addition and multiplication, the coordinate ring of an affine algebraic variety, and the ring of integers of a number field. Examples of noncommutative rings include the ring of n-by-n real square matrices with n ≥ 2, group rings in representation theory, operator algebras in functional analysis, rings of differential operators, and cohomology rings in topology.
The conceptualization of rings spanned the 1870s to the 1920s, with key contributions by Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. They later proved useful in other branches of mathematics such as geometry and analysis.
Definition
A ring is a set R equipped with two binary operations + (addition) and ⋅ (multiplication) satisfying the following three sets of axioms, called the ring axioms:
R is an abelian group under addition, meaning that:
(a + b) + c = a + (b + c) for all a, b, c in R (that is, + is associative).
a + b = b + a for all a, b in R (that is, + is commutative).
There is an element 0 in R such that a + 0 = a for all a in R (that is, 0 is the additive identity).
For each a in R there exists −a in R such that a + (−a) = 0 (that is, −a is the additive inverse of a).
R is a monoid under multiplication, meaning that:
(a ⋅ b) ⋅ c = a ⋅ (b ⋅ c) for all a, b, c in R (that is, ⋅ is associative).
There is an element 1 in R such that a ⋅ 1 = a and 1 ⋅ a = a for all a in R (that is, 1 is the multiplicative identity).
Multiplication is distributive with respect to addition, meaning that:
a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c) for all a, b, c in R (left distributivity).
(b + c) ⋅ a = (b ⋅ a) + (c ⋅ a) for all a, b, c in R (right distributivity).
In notation, the multiplication symbol ⋅ is often omitted, in which case a ⋅ b is written as ab.
Variations on the definition
In the terminology of this article, a ring is defined to have a multiplicative identity, while a structure with the same axiomatic definition but without the requirement for a multiplicative identity is instead called a "rng" (IPA: /rʊŋ/) with a missing "i". For example, the set of even integers with the usual + and ⋅ is a rng, but not a ring. As explained in the history section below, many authors apply the term "ring" without requiring a multiplicative identity.
Although ring addition is commutative, ring multiplication is not required to be commutative: ab need not necessarily equal ba. Rings that also satisfy commutativity for multiplication (such as the ring of integers) are called commutative rings. Books on commutative algebra or algebraic geometry often adopt the convention that ring means commutative ring, to simplify terminology.
In a ring, multiplicative inverses are not required to exist. A nonzero commutative ring in which every nonzero element has a multiplicative inverse is called a field.
The additive group of a ring is the underlying set equipped with only the operation of addition. Although the definition requires that the additive group be abelian, this can be inferred from the other ring axioms. The proof makes use of the "1", and does not work in a rng. (For a rng, omitting the axiom of commutativity of addition leaves it inferable from the remaining rng assumptions only for elements that are products: ab + cd = cd + ab.)
There are a few authors who use the term "ring" to refer to structures in which there is no requirement for multiplication to be associative. For these authors, every algebra is a "ring".
Illustration
The most familiar example of a ring is the set of all integers, consisting of the numbers ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ...
The axioms of a ring were elaborated as a generalization of familiar properties of addition and multiplication of integers.
Some properties
Some basic properties of a ring follow immediately from the axioms:
The additive identity is unique.
The additive inverse of each element is unique.
The multiplicative identity is unique.
For any element a in a ring R, one has 0 ⋅ a = a ⋅ 0 = 0 (zero is an absorbing element with respect to multiplication) and (−1) ⋅ a = −a.
If 0 = 1 in a ring R (or more generally, 0 is a unit element), then R has only one element, and is called the zero ring.
If a ring R contains the zero ring as a subring, then R itself is the zero ring.
The binomial formula holds for any x and y satisfying xy = yx.
Example: Integers modulo 4
Equip the set Z/4Z = {0, 1, 2, 3} with the following operations:
The sum x + y in Z/4Z is the remainder when the integer x + y is divided by 4 (as x + y is always smaller than 8, this remainder is either x + y or x + y − 4). For example, 2 + 3 = 1 and 3 + 3 = 2.
The product x ⋅ y in Z/4Z is the remainder when the integer x ⋅ y is divided by 4. For example, 2 ⋅ 3 = 2 and 3 ⋅ 3 = 1.
Then Z/4Z is a ring: each axiom follows from the corresponding axiom for Z. If x is an integer, the remainder of x when divided by 4 may be considered as an element of Z/4Z, and this element is often denoted by "x mod 4", which is consistent with the notation for 0, 1, 2, 3. The additive inverse of any x in Z/4Z is (−x) mod 4. For example, the additive inverse of 3 is 1.
Z/4Z has a subrng {0, 2}, and if p is prime, then Z/pZ has no proper nonzero subrngs.
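The following Python sketch (an illustrative addition) builds the addition and multiplication tables of Z/4Z described above and spot-checks one of the ring axioms:

```python
N = 4
elements = range(N)

add = {(x, y): (x + y) % N for x in elements for y in elements}
mul = {(x, y): (x * y) % N for x in elements for y in elements}

print(add[2, 3], mul[3, 3])  # 1 and 1, matching the examples above

# Spot-check distributivity: x*(y + z) == x*y + x*z for all triples.
assert all(
    mul[x, add[y, z]] == add[mul[x, y], mul[x, z]]
    for x in elements for y in elements for z in elements
)
```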
Example: 2-by-2 matrices
The set M2(F) of 2-by-2 square matrices with entries in a field F consists of all arrays [[a, b], [c, d]] with a, b, c, d in F.
With the operations of matrix addition and matrix multiplication, M2(F) satisfies the above ring axioms. The identity matrix [[1, 0], [0, 1]] is the multiplicative identity of the ring. If A = [[0, 1], [0, 0]] and B = [[0, 0], [1, 0]], then AB = [[1, 0], [0, 0]] while BA = [[0, 0], [0, 1]]; this example shows that the ring is noncommutative.
More generally, for any ring R, commutative or not, and any nonnegative integer n, the square matrices of dimension n with entries in R form a ring; see Matrix ring.
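A quick numerical check of this noncommutativity (an added illustration, using the two matrices from the example above):

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])

print(A @ B)  # [[1 0], [0 0]]
print(B @ A)  # [[0 0], [0 1]] -- AB != BA, so the matrix ring is noncommutative
```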
History
Dedekind
The study of rings originated from the theory of polynomial rings and the theory of algebraic integers. In 1871, Richard Dedekind defined the concept of the ring of integers of a number field. In this context, he introduced the terms "ideal" (inspired by Ernst Kummer's notion of ideal number) and "module" and studied their properties. Dedekind did not use the term "ring" and did not define the concept of a ring in a general setting.
Hilbert
The term "Zahlring" (number ring) was coined by David Hilbert in 1892 and published in 1897. In 19th century German, the word "Ring" could mean "association", which is still used today in English in a limited sense (for example, spy ring), so if that were the etymology then it would be similar to the way "group" entered mathematics by being a non-technical word for "collection of related things". According to Harvey Cohn, Hilbert used the term for a ring that had the property of "circling directly back" to an element of itself (in the sense of an equivalence). Specifically, in a ring of algebraic integers, all high powers of an algebraic integer can be written as an integral combination of a fixed set of lower powers, and thus the powers "cycle back". For instance, if then:
and so on; in general, is going to be an integral linear combination of , , and .
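To see the powers "cycle back" computationally, here is a small Python sketch (an added illustration) that represents each power of a as an integer coefficient vector (c0, c1, c2) standing for c0 + c1·a + c2·a², using the relation a³ = 4a − 1:

```python
# Reduce a^n to c0 + c1*a + c2*a^2 using the relation a^3 = 4a - 1.
def next_power(c):
    c0, c1, c2 = c
    # Multiplying by a gives c0*a + c1*a^2 + c2*a^3; substitute a^3 = 4a - 1.
    return (-c2, c0 + 4 * c2, c1)

coeffs = (0, 1, 0)  # start with a^1 = a
for n in range(2, 6):
    coeffs = next_power(coeffs)
    print(f"a^{n} =", coeffs)  # a^4 -> (0, -1, 4), a^5 -> (-4, 16, -1)
```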
Fraenkel and Noether
The first axiomatic definition of a ring was given by Adolf Fraenkel in 1915, but his axioms were stricter than those in the modern definition. For instance, he required every non-zero-divisor to have a multiplicative inverse. In 1921, Emmy Noether gave a modern axiomatic definition of commutative rings (with and without 1) and developed the foundations of commutative ring theory in her paper Idealtheorie in Ringbereichen.
Multiplicative identity and the term "ring"
Fraenkel's axioms for a "ring" included that of a multiplicative identity, whereas Noether's did not.
Most or all books on algebra up to around 1960 followed Noether's convention of not requiring a 1 for a "ring". Starting in the 1960s, it became increasingly common to see books including the existence of 1 in the definition of "ring", especially in advanced books by notable authors such as Artin, Bourbaki, Eisenbud, and Lang. There are also books published as late as 2022 that use the term without the requirement for a 1. Likewise, the Encyclopedia of Mathematics does not require unit elements in rings. In a research article, the authors often specify which definition of ring they use in the beginning of that article.
Gardner and Wiegandt assert that, when dealing with several objects in the category of rings (as opposed to working with a fixed ring), if one requires all rings to have a , then some consequences include the lack of existence of infinite direct sums of rings, and that proper direct summands of rings are not subrings. They conclude that "in many, maybe most, branches of ring theory the requirement of the existence of a unity element is not sensible, and therefore unacceptable." Poonen makes the counterargument that the natural notion for rings would be the direct product rather than the direct sum. However, his main argument is that rings without a multiplicative identity are not totally associative, in the sense that they do not contain the product of any finite sequence of ring elements, including the empty sequence.
Authors who follow either convention for the use of the term "ring" may use one of the following terms to refer to objects satisfying the other convention:
to include a requirement for a multiplicative identity: "unital ring", "unitary ring", "unit ring", "ring with unity", "ring with identity", "ring with a unit", or "ring with 1".
to omit a requirement for a multiplicative identity: "rng" or "pseudo-ring", although the latter may be confusing because it also has other meanings.
Basic examples
Commutative rings
The prototypical example is the ring of integers with the two operations of addition and multiplication.
The rational, real and complex numbers are commutative rings of a type called fields.
A unital associative algebra over a commutative ring is itself a ring as well as an -module. Some examples:
The algebra of polynomials with coefficients in .
The algebra of formal power series with coefficients in .
The set of all continuous real-valued functions defined on the real line forms a commutative -algebra. The operations are pointwise addition and multiplication of functions.
Let X be a set, and let R be a ring. Then the set of all functions from X to R forms a ring, which is commutative if R is commutative.
The ring of quadratic integers, the integral closure of Z in a quadratic extension of Q. It is a subring of the ring of all algebraic integers.
The ring of profinite integers, the (infinite) product of the rings of p-adic integers Zp over all prime numbers p.
The Hecke ring, the ring generated by Hecke operators.
If S is a set, then the power set of S becomes a ring if we define addition to be the symmetric difference of sets and multiplication to be intersection. This is an example of a Boolean ring.
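A small Python sketch (an illustrative addition, using the set {1, 2, 3} as an example) of this Boolean ring, with symmetric difference as addition and intersection as multiplication:

```python
from itertools import combinations

S = {1, 2, 3}
# The power set of S: all subsets, frozen so they can be compared and hashed.
power_set = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def add(a, b):
    return a ^ b  # symmetric difference plays the role of addition

def mul(a, b):
    return a & b  # intersection plays the role of multiplication

# Every element is multiplicatively idempotent (the defining Boolean-ring property),
# and every element is its own additive inverse (A + A = empty set).
assert all(mul(a, a) == a for a in power_set)
assert all(add(a, a) == frozenset() for a in power_set)
```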
Noncommutative rings
For any ring R and any natural number n, the set of all square n-by-n matrices with entries from R forms a ring with matrix addition and matrix multiplication as operations. For n = 1, this matrix ring is isomorphic to R itself. For n ≥ 2 (and R not the zero ring), this matrix ring is noncommutative.
If A is an abelian group, then the endomorphisms of A form a ring, the endomorphism ring End(A) of A. The operations in this ring are addition and composition of endomorphisms. More generally, if V is a left module over a ring R, then the set of all R-linear maps from V to itself forms a ring, also called the endomorphism ring and denoted by End_R(V).
The endomorphism ring of an elliptic curve. It is a commutative ring if the elliptic curve is defined over a field of characteristic zero.
If G is a group and R is a ring, the group ring of G over R is a free module over R having G as basis. Multiplication is defined by the rules that the elements of G commute with the elements of R and multiply together as they do in the group G.
The ring of differential operators (depending on the context). In fact, many rings that appear in analysis are noncommutative. For example, most Banach algebras are noncommutative.
Non-rings
The set of natural numbers N with the usual operations is not a ring, since (N, +) is not even a group (not all the elements are invertible with respect to addition; for instance, there is no natural number which can be added to 3 to get 0 as a result). There is a natural way to enlarge it to a ring, by including negative numbers to produce the ring of integers Z. The natural numbers (including 0) form an algebraic structure known as a semiring (which has all of the axioms of a ring excluding that of an additive inverse).
Let R be the set of all continuous functions on the real line that vanish outside a bounded interval that depends on the function, with addition as usual but with multiplication defined as convolution: (f ∗ g)(x) = ∫ f(y) g(x − y) dy. Then R is a rng, but not a ring: the Dirac delta function has the property of a multiplicative identity, but it is not a function and hence is not an element of R.
Basic concepts
Products and powers
For each nonnegative integer n, given a sequence (a1, ..., an) of n elements of R, one can define the product Pn = a1 ⋯ an recursively: let P0 = 1 and let Pm = Pm−1 ⋅ am for 1 ≤ m ≤ n.
As a special case, one can define nonnegative integer powers of an element a of a ring: a^0 = 1 and a^n = a^(n−1) ⋅ a for n ≥ 1. Then a^(m+n) = a^m ⋅ a^n for all m, n ≥ 0.
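This recursive definition translates directly into code; a minimal Python sketch (added for illustration, over the integers):

```python
def power(a: int, n: int) -> int:
    """Compute a**n from the recursion a^0 = 1, a^n = a^(n-1) * a."""
    result = 1          # P_0: the empty product is the multiplicative identity
    for _ in range(n):
        result = result * a
    return result

assert power(2, 10) == 1024
assert power(2, 3) * power(2, 4) == power(2, 7)  # a^(m+n) = a^m * a^n
```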
Elements in a ring
A left zero divisor of a ring R is an element a in the ring such that there exists a nonzero element b of R such that ab = 0. A right zero divisor is defined similarly.
A nilpotent element is an element a such that a^n = 0 for some n > 0. One example of a nilpotent element is a nilpotent matrix. A nilpotent element in a nonzero ring is necessarily a zero divisor.
An idempotent is an element e such that e^2 = e. One example of an idempotent element is a projection in linear algebra.
A unit is an element a having a multiplicative inverse; in this case the inverse is unique, and is denoted by a^(−1). The set of units of a ring R is a group under ring multiplication; this group is denoted by R× or R* or U(R). For example, if R is the ring of all square matrices of size n over a field, then R× consists of the set of all invertible matrices of size n, and is called the general linear group.
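For a concrete case (an added illustration; the modulus 12 is arbitrary), the units of Z/12Z are exactly the residues coprime to 12, and each has an inverse found by brute force:

```python
from math import gcd

n = 12
units = [a for a in range(n) if gcd(a, n) == 1]
print(units)  # [1, 5, 7, 11]

# Each unit has a multiplicative inverse modulo n.
for a in units:
    inv = next(b for b in range(n) if (a * b) % n == 1)
    print(a, "has inverse", inv)
```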
Subring
A subset S of R is called a subring if any one of the following equivalent conditions holds:
the addition and multiplication of R restrict to give operations S × S → S making S a ring with the same multiplicative identity as R.
1 ∈ S; and for all x, y in S, the elements xy, x + y, and −x are in S.
S can be equipped with operations making it a ring such that the inclusion map S → R is a ring homomorphism.
For example, the ring Z of integers is a subring of the field of real numbers and also a subring of the ring of polynomials Z[X] (in both cases, Z contains 1, which is the multiplicative identity of the larger rings). On the other hand, the subset of even integers 2Z does not contain the identity element 1 and thus does not qualify as a subring of Z; one could call 2Z a subrng, however.
An intersection of subrings is a subring. Given a subset E of R, the smallest subring of R containing E is the intersection of all subrings of R containing E, and it is called the subring generated by E.
For a ring R, the smallest subring of R is called the characteristic subring of R. It can be generated through addition of copies of 1 and −1. It is possible that n ⋅ 1 = 1 + 1 + ... + 1 (n times) can be zero. If n is the smallest positive integer such that this occurs, then n is called the characteristic of R. In some rings, n ⋅ 1 is never zero for any positive integer n, and those rings are said to have characteristic zero.
Given a ring R, let Z(R) denote the set of all elements x in R such that x commutes with every element in R: xy = yx for any y in R. Then Z(R) is a subring of R, called the center of R. More generally, given a subset X of R, let S be the set of all elements in R that commute with every element in X. Then S is a subring of R, called the centralizer (or commutant) of X. The center is the centralizer of the entire ring R. Elements or subsets of the center are said to be central in R; they (each individually) generate a subring of the center.
Ideal
Let R be a ring. A left ideal of R is a nonempty subset I of R such that for any x, y in I and r in R, the elements x + y and rx are in I. If R I denotes the R-span of I, that is, the set of finite sums
r1 x1 + ⋯ + rn xn with ri in R and xi in I, then I is a left ideal if R I ⊆ I. Similarly, a right ideal is a subset I such that I R ⊆ I. A subset I is said to be a two-sided ideal or simply ideal if it is both a left ideal and right ideal. A one-sided or two-sided ideal is then an additive subgroup of R. If E is a subset of R, then R E is a left ideal, called the left ideal generated by E; it is the smallest left ideal containing E. Similarly, one can consider the right ideal or the two-sided ideal generated by a subset E of R.
If x is in R, then Rx and xR are left ideals and right ideals, respectively; they are called the principal left ideals and right ideals generated by x. The principal ideal RxR is written as (x). For example, the set of all positive and negative multiples of 2 along with 0 forms an ideal of the integers, and this ideal is generated by the integer 2. In fact, every ideal of the ring of integers is principal.
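The fact that every ideal of Z is principal can be probed computationally: the ideal generated by two integers is generated by their greatest common divisor. A Python sketch (an added illustration; the integers 12 and 18 and the search window are arbitrary choices):

```python
from math import gcd

a, b = 12, 18
g = gcd(a, b)  # 6

# Integer combinations a*x + b*y over a small window of coefficients...
combos = {a * x + b * y for x in range(-10, 11) for y in range(-10, 11)}
multiples_of_g = {g * k for k in range(-30, 31)}

# ...coincide, within the window, with the multiples of gcd(a, b).
assert {c for c in combos if -60 <= c <= 60} == {m for m in multiples_of_g if -60 <= m <= 60}
```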
Like a group, a ring is said to be simple if it is nonzero and it has no proper nonzero two-sided ideals. A commutative simple ring is precisely a field.
Rings are often studied with special conditions set upon their ideals. For example, a ring in which there is no strictly increasing infinite chain of left ideals is called a left Noetherian ring. A ring in which there is no strictly decreasing infinite chain of left ideals is called a left Artinian ring. It is a somewhat surprising fact that a left Artinian ring is left Noetherian (the Hopkins–Levitzki theorem). The integers, however, form a Noetherian ring which is not Artinian.
For commutative rings, the ideals generalize the classical notion of divisibility and decomposition of an integer into prime numbers in algebra. A proper ideal P of R is called a prime ideal if for any elements x, y in R we have that xy ∈ P implies either x ∈ P or y ∈ P. Equivalently, P is prime if for any ideals I, J, we have that IJ ⊆ P implies either I ⊆ P or J ⊆ P. This latter formulation illustrates the idea of ideals as generalizations of elements.
Homomorphism
A homomorphism from a ring R to a ring S is a function f from R to S that preserves the ring operations; namely, such that, for all a, b in R the following identities hold: f(a + b) = f(a) + f(b), f(ab) = f(a) f(b), and f(1) = 1.
If one is working with rngs, then the third condition is dropped.
A ring homomorphism is said to be an isomorphism if there exists an inverse homomorphism to (that is, a ring homomorphism that is an inverse function), or equivalently if it is bijective.
Examples:
The function that maps each integer x to its remainder modulo 4 (a number in {0, 1, 2, 3}) is a homomorphism from the ring Z to the quotient ring Z/4Z ("quotient ring" is defined below).
If u is a unit element in a ring R, then x ↦ u x u^(−1) is a ring homomorphism, called an inner automorphism of R.
Let R be a commutative ring of prime characteristic p. Then x ↦ x^p is a ring endomorphism of R called the Frobenius homomorphism.
The Galois group of a field extension L/K is the set of all automorphisms of L whose restrictions to K are the identity.
For any ring R, there are a unique ring homomorphism Z → R and a unique ring homomorphism R → 0 (the zero ring).
An epimorphism (that is, right-cancelable morphism) of rings need not be surjective. For example, the unique map Z → Q is an epimorphism.
An algebra homomorphism from a k-algebra to the endomorphism algebra of a vector space over k is called a representation of the algebra.
Given a ring homomorphism f : R → S, the set of all elements mapped to 0 by f is called the kernel of f. The kernel is a two-sided ideal of R. The image of f, on the other hand, is not always an ideal, but it is always a subring of S.
To give a ring homomorphism from a commutative ring R to a ring A with image contained in the center of A is the same as to give a structure of an algebra over R to A (which in particular gives a structure of an R-module).
Quotient ring
The notion of quotient ring is analogous to the notion of a quotient group. Given a ring R and a two-sided ideal I of R, view I as a subgroup of (R, +); then the quotient ring R/I is the set of cosets of I together with the operations
(a + I) + (b + I) = (a + b) + I and (a + I)(b + I) = ab + I for all a, b in R. The ring R/I is also called a factor ring.
As with a quotient group, there is a canonical homomorphism p : R → R/I, given by x ↦ x + I. It is surjective and satisfies the following universal property:
If f : R → S is a ring homomorphism such that f(I) = 0, then there is a unique homomorphism g : R/I → S such that f = g ∘ p.
For any ring homomorphism f : R → S, invoking the universal property with I = ker f produces a homomorphism g : R/ker f → S that gives an isomorphism from R/ker f to the image of f.
Module
The concept of a module over a ring generalizes the concept of a vector space (over a field) by generalizing from multiplication of vectors with elements of a field (scalar multiplication) to multiplication with elements of a ring. More precisely, given a ring R, an R-module M is an abelian group equipped with an operation R × M → M (associating an element of M to every pair of an element of R and an element of M) that satisfies certain axioms. This operation is commonly denoted by juxtaposition and called multiplication. The axioms of modules are the following: for all a, b in R and all x, y in M,
M is an abelian group under addition, and a(x + y) = ax + ay, (a + b)x = ax + bx, 1x = x, and (ab)x = a(bx).
When the ring is noncommutative these axioms define left modules; right modules are defined similarly by writing xa instead of ax. This is not only a change of notation, as the last axiom of right modules (that is, x(ab) = (xa)b) becomes (ab)x = b(ax), if left multiplication (by ring elements) is used for a right module.
Basic examples of modules are ideals, including the ring itself.
Although similarly defined, the theory of modules is much more complicated than that of vector space, mainly, because, unlike vector spaces, modules are not characterized (up to an isomorphism) by a single invariant (the dimension of a vector space). In particular, not all modules have a basis.
The axioms of modules imply that (−1)x = −x, where the first minus denotes the additive inverse in the ring and the second minus the additive inverse in the module. Using this and denoting repeated addition by a multiplication by a positive integer allows identifying abelian groups with modules over the ring of integers.
Any ring homomorphism induces a structure of a module: if f : R → S is a ring homomorphism, then S is a left module over R by the multiplication rs = f(r)s. If R is commutative or if f(R) is contained in the center of S, the ring S is called an R-algebra. In particular, every ring is an algebra over the integers.
Constructions
Direct product
Let R and S be rings. Then the product R × S can be equipped with the following natural ring structure: (r1, s1) + (r2, s2) = (r1 + r2, s1 + s2) and (r1, s1)(r2, s2) = (r1 r2, s1 s2)
for all r1, r2 in R and s1, s2 in S. The ring R × S with the above operations of addition and multiplication and the multiplicative identity (1, 1) is called the direct product of R with S. The same construction also works for an arbitrary family of rings: if Ri are rings indexed by a set I, then the product of the Ri over I is a ring with componentwise addition and multiplication.
Let be a commutative ring and be ideals such that whenever . Then the Chinese remainder theorem says there is a canonical ring isomorphism:
A "finite" direct product may also be viewed as a direct sum of ideals. Namely, let be rings, the inclusions with the images (in particular are rings though not subrings). Then are ideals of and
as a direct sum of abelian groups (because for abelian groups finite products are the same as direct sums). Clearly the direct sum of such ideals also defines a product of rings that is isomorphic to . Equivalently, the above can be done through central idempotents. Assume that has the above decomposition. Then we can write
By the conditions on one has that are central idempotents and , (orthogonal). Again, one can reverse the construction. Namely, if one is given a partition of 1 in orthogonal central idempotents, then let which are two-sided ideals. If each is not a sum of orthogonal central idempotents, then their direct sum is isomorphic to .
An important application of an infinite direct product is the construction of a projective limit of rings (see below). Another application is a restricted product of a family of rings (cf. adele ring).
Polynomial ring
Given a symbol t (called a variable) and a commutative ring R, the set of polynomials a_n t^n + a_(n−1) t^(n−1) + ⋯ + a_1 t + a_0 with coefficients a_i in R
forms a commutative ring R[t] with the usual addition and multiplication, containing R as a subring. It is called the polynomial ring over R. More generally, the set R[t1, ..., tn] of all polynomials in variables t1, ..., tn forms a commutative ring, containing the R[ti] as subrings.
If R is an integral domain, then R[t] is also an integral domain; its field of fractions is the field of rational functions. If R is a Noetherian ring, then R[t] is a Noetherian ring. If R is a unique factorization domain, then R[t] is a unique factorization domain. Finally, R is a field if and only if R[t] is a principal ideal domain.
Let be commutative rings. Given an element of , one can consider the ring homomorphism
(that is, the substitution). If and , then . Because of this, the polynomial is often also denoted by . The image of the map is denoted by ; it is the same thing as the subring of generated by and .
Example: denotes the image of the homomorphism
In other words, it is the subalgebra of generated by and .
Example: let be a polynomial in one variable, that is, an element in a polynomial ring . Then is an element in and is divisible by in that ring. The result of substituting zero to in is , the derivative of at .
The substitution is a special case of the universal property of a polynomial ring. The property states: given a ring homomorphism and an element in there exists a unique ring homomorphism such that and restricts to . For example, choosing a basis, a symmetric algebra satisfies the universal property and so is a polynomial ring.
To give an example, let be the ring of all functions from to itself; the addition and the multiplication are those of functions. Let be the identity function. Each in defines a constant function, giving rise to the homomorphism . The universal property says that this map extends uniquely to
( maps to ) where is the polynomial function defined by . The resulting map is injective if and only if is infinite.
Given a non-constant monic polynomial f in R[t], there exists a ring S containing R such that f is a product of linear factors in S[t].
Let be an algebraically closed field. The Hilbert's Nullstellensatz (theorem of zeros) states that there is a natural one-to-one correspondence between the set of all prime ideals in and the set of closed subvarieties of . In particular, many local problems in algebraic geometry may be attacked through the study of the generators of an ideal in a polynomial ring. (cf. Gröbner basis.)
There are some other related constructions. A formal power series ring consists of formal power series
together with multiplication and addition that mimic those for convergent series. It contains as a subring. A formal power series ring does not have the universal property of a polynomial ring; a series may not converge after a substitution. The important advantage of a formal power series ring over a polynomial ring is that it is local (in fact, complete).
Matrix ring and endomorphism ring
Let be a ring (not necessarily commutative). The set of all square matrices of size with entries in forms a ring with the entry-wise addition and the usual matrix multiplication. It is called the matrix ring and is denoted by . Given a right -module , the set of all -linear maps from to itself forms a ring with addition that is of function and multiplication that is of composition of functions; it is called the endomorphism ring of and is denoted by .
As in linear algebra, a matrix ring may be canonically interpreted as an endomorphism ring: This is a special case of the following fact: If is an -linear map, then may be written as a matrix with entries in , resulting in the ring isomorphism:
Any ring homomorphism induces .
Schur's lemma says that if is a simple right -module, then is a division ring. If is a direct sum of -copies of simple -modules then
The Artin–Wedderburn theorem states any semisimple ring (cf. below) is of this form.
A ring and the matrix ring over it are Morita equivalent: the category of right modules of is equivalent to the category of right modules over . In particular, two-sided ideals in correspond in one-to-one to two-sided ideals in .
Limits and colimits of rings
Let be a sequence of rings such that is a subring of for all . Then the union (or filtered colimit) of is the ring defined as follows: it is the disjoint union of all 's modulo the equivalence relation if and only if in for sufficiently large .
Examples of colimits:
A polynomial ring in infinitely many variables:
The algebraic closure of finite fields of the same characteristic
The field of formal Laurent series over a field : (it is the field of fractions of the formal power series ring )
The function field of an algebraic variety over a field is where the limit runs over all the coordinate rings of nonempty open subsets (more succinctly it is the stalk of the structure sheaf at the generic point.)
Any commutative ring is the colimit of finitely generated subrings.
A projective limit (or a filtered limit) of rings is defined as follows. Suppose we are given a family of rings , running over positive integers, say, and ring homomorphisms , such that are all the identities and is whenever . Then is the subring of consisting of such that maps to under , .
For an example of a projective limit, see .
Localization
The localization generalizes the construction of the field of fractions of an integral domain to an arbitrary ring and modules. Given a (not necessarily commutative) ring and a subset of , there exists a ring together with the ring homomorphism that "inverts" ; that is, the homomorphism maps elements in to unit elements in and, moreover, any ring homomorphism from that "inverts" uniquely factors through The ring is called the localization of with respect to . For example, if is a commutative ring and an element in , then the localization consists of elements of the form (to be precise, )
The localization is frequently applied to a commutative ring with respect to the complement of a prime ideal (or a union of prime ideals) in . In that case one often writes for is then a local ring with the maximal ideal This is the reason for the terminology "localization". The field of fractions of an integral domain is the localization of at the prime ideal zero. If is a prime ideal of a commutative ring , then the field of fractions of is the same as the residue field of the local ring and is denoted by
If is a left -module, then the localization of with respect to is given by a change of rings
The most important properties of localization are the following: when is a commutative ring and a multiplicatively closed subset
is a bijection between the set of all prime ideals in disjoint from and the set of all prime ideals in
running over elements in with partial ordering given by divisibility.
The localization is exact: is exact over whenever is exact over .
Conversely, if is exact for any maximal ideal then is exact.
A remark: localization is no help in proving a global existence. One instance of this is that if two modules are isomorphic at all prime ideals, it does not follow that they are isomorphic. (One way to explain this is that the localization allows one to view a module as a sheaf over prime ideals and a sheaf is inherently a local notion.)
In category theory, a localization of a category amounts to making some morphisms isomorphisms. An element in a commutative ring may be thought of as an endomorphism of any -module. Thus, categorically, a localization of with respect to a subset of is a functor from the category of -modules to itself that sends elements of viewed as endomorphisms to automorphisms and is universal with respect to this property. (Of course, then maps to and -modules map to -modules.)
Completion
Let be a commutative ring, and let be an ideal of .
The completion of at is the projective limit it is a commutative ring. The canonical homomorphisms from to the quotients induce a homomorphism The latter homomorphism is injective if is a Noetherian integral domain and is a proper ideal, or if is a Noetherian local ring with maximal ideal , by Krull's intersection theorem. The construction is especially useful when is a maximal ideal.
The basic example is the completion of Z at the principal ideal (p) generated by a prime number p; it is called the ring of p-adic integers and is denoted Zp. The completion can in this case be constructed also from the p-adic absolute value on Q. The p-adic absolute value on Q is a map x ↦ |x|p from Q to R given by |n|p = p^(−vp(n)), where vp(n) denotes the exponent of p in the prime factorization of a nonzero integer n into prime numbers (we also put |0|p = 0 and |m/n|p = |m|p / |n|p). It defines a distance function on Q, and the completion of Q as a metric space is denoted by Qp. It is again a field since the field operations extend to the completion. The subring of Qp consisting of elements x with |x|p ≤ 1 is isomorphic to Zp.
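A small Python sketch (an added illustration) of the p-adic absolute value on rationals, following the exponent-counting definition above:

```python
from fractions import Fraction

def v_p(n: int, p: int) -> int:
    """Exponent of the prime p in the factorization of a nonzero integer n."""
    count = 0
    n = abs(n)
    while n % p == 0:
        n //= p
        count += 1
    return count

def abs_p(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p^(-v_p(x)), with |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    v = v_p(x.numerator, p) - v_p(x.denominator, p)
    return Fraction(1, p**v) if v >= 0 else Fraction(p ** (-v))

print(abs_p(Fraction(12), 2))    # 12 = 2^2 * 3, so |12|_2 = 1/4
print(abs_p(Fraction(1, 8), 2))  # |1/8|_2 = 8: small denominators are p-adically large
```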
Similarly, the formal power series ring is the completion of at (see also Hensel's lemma)
A complete ring has much simpler structure than a commutative ring. This owes to the Cohen structure theorem, which says, roughly, that a complete local ring tends to look like a formal power series ring or a quotient of it. On the other hand, the interaction between the integral closure and completion has been among the most important aspects that distinguish modern commutative ring theory from the classical one developed by the likes of Noether. Pathological examples found by Nagata led to the reexamination of the roles of Noetherian rings and motivated, among other things, the definition of excellent ring.
Rings with generators and relations
The most general way to construct a ring is by specifying generators and relations. Let be a free ring (that is, free algebra over the integers) with the set of symbols, that is, consists of polynomials with integral coefficients in noncommuting variables that are elements of . A free ring satisfies the universal property: any function from the set to a ring factors through so that is the unique ring homomorphism. Just as in the group case, every ring can be represented as a quotient of a free ring.
Now, we can impose relations among symbols in by taking a quotient. Explicitly, if is a subset of , then the quotient ring of by the ideal generated by is called the ring with generators and relations . If we used a ring, say, as a base ring instead of then the resulting ring will be over . For example, if then the resulting ring will be the usual polynomial ring with coefficients in in variables that are elements of (It is also the same thing as the symmetric algebra over with symbols .)
In the category-theoretic terms, the formation is the left adjoint functor of the forgetful functor from the category of rings to Set (and it is often called the free ring functor.)
Let , be algebras over a commutative ring . Then the tensor product of -modules is an -algebra with multiplication characterized by
Special kinds of rings
Domains
A nonzero ring with no nonzero zero-divisors is called a domain. A commutative domain is called an integral domain. The most important integral domains are principal ideal domains, PIDs for short, and fields. A principal ideal domain is an integral domain in which every ideal is principal. An important class of integral domains that contain a PID is a unique factorization domain (UFD), an integral domain in which every nonunit element is a product of prime elements (an element is prime if it generates a prime ideal.) The fundamental question in algebraic number theory is on the extent to which the ring of (generalized) integers in a number field, where an "ideal" admits prime factorization, fails to be a PID.
Among theorems concerning a PID, the most important one is the structure theorem for finitely generated modules over a principal ideal domain. The theorem may be illustrated by the following application to linear algebra. Let be a finite-dimensional vector space over a field and a linear map with minimal polynomial . Then, since is a unique factorization domain, factors into powers of distinct irreducible polynomials (that is, prime elements):
Letting we make a -module. The structure theorem then says is a direct sum of cyclic modules, each of which is isomorphic to the module of the form Now, if then such a cyclic module (for ) has a basis in which the restriction of is represented by a Jordan matrix. Thus, if, say, is algebraically closed, then all 's are of the form and the above decomposition corresponds to the Jordan canonical form of .
In algebraic geometry, UFDs arise because of smoothness. More precisely, a point in a variety (over a perfect field) is smooth if the local ring at the point is a regular local ring. A regular local ring is a UFD.
The following is a chain of class inclusions that describes the relationship between rings, domains and fields:
Division ring
A division ring is a ring such that every non-zero element is a unit. A commutative division ring is a field. A prominent example of a division ring that is not a field is the ring of quaternions. Any centralizer in a division ring is also a division ring. In particular, the center of a division ring is a field. It turned out that every finite domain (in particular finite division ring) is a field; in particular commutative (the Wedderburn's little theorem).
Every module over a division ring is a free module (has a basis); consequently, much of linear algebra can be carried out over a division ring instead of a field.
The study of conjugacy classes figures prominently in the classical theory of division rings; see, for example, the Cartan–Brauer–Hua theorem.
A cyclic algebra, introduced by L. E. Dickson, is a generalization of a quaternion algebra.
Semisimple rings
A semisimple module is a direct sum of simple modules. A semisimple ring is a ring that is semisimple as a left module (or right module) over itself.
Examples
A division ring is semisimple (and simple).
For any division ring and positive integer , the matrix ring is semisimple (and simple).
For a field and finite group , the group ring is semisimple if and only if the characteristic of does not divide the order of (Maschke's theorem).
Clifford algebras are semisimple.
The Weyl algebra over a field is a simple ring, but it is not semisimple. The same holds for a ring of differential operators in many variables.
Properties
Any module over a semisimple ring is semisimple. (Proof: A free module over a semisimple ring is semisimple and any module is a quotient of a free module.)
For a ring , the following are equivalent:
is semisimple.
is artinian and semiprimitive.
is a finite direct product where each is a positive integer, and each is a division ring (Artin–Wedderburn theorem).
Semisimplicity is closely related to separability. A unital associative algebra over a field is said to be separable if the base extension is semisimple for every field extension . If happens to be a field, then this is equivalent to the usual definition in field theory (cf. separable extension.)
Central simple algebra and Brauer group
For a field , a -algebra is central if its center is and is simple if it is a simple ring. Since the center of a simple -algebra is a field, any simple -algebra is a central simple algebra over its center. In this section, a central simple algebra is assumed to have finite dimension. Also, we mostly fix the base field; thus, an algebra refers to a -algebra. The matrix ring of size over a ring will be denoted by .
The Skolem–Noether theorem states any automorphism of a central simple algebra is inner.
Two central simple algebras and are said to be similar if there are integers and such that Since the similarity is an equivalence relation. The similarity classes with the multiplication form an abelian group called the Brauer group of and is denoted by . By the Artin–Wedderburn theorem, a central simple algebra is the matrix ring of a division ring; thus, each similarity class is represented by a unique division ring.
For example, Br(k) is trivial if k is a finite field or an algebraically closed field (more generally quasi-algebraically closed field; cf. Tsen's theorem). Br(R) has order 2 (a special case of the theorem of Frobenius). Finally, if K is a nonarchimedean local field (for example, K = Qp), then Br(K) = Q/Z through the invariant map.
Now, if is a field extension of , then the base extension induces . Its kernel is denoted by . It consists of such that is a matrix ring over (that is, is split by .) If the extension is finite and Galois, then is canonically isomorphic to
Azumaya algebras generalize the notion of central simple algebras to a commutative local ring.
Valuation ring
If is a field, a valuation is a group homomorphism from the multiplicative group to a totally ordered abelian group such that, for any , in with nonzero, The valuation ring of is the subring of consisting of zero and all nonzero such that .
Examples:
The field of formal Laurent series over a field comes with the valuation such that is the least degree of a nonzero term in ; the valuation ring of is the formal power series ring
More generally, given a field and a totally ordered abelian group , let be the set of all functions from to whose supports (the sets of points at which the functions are nonzero) are well ordered. It is a field with the multiplication given by convolution: It also comes with the valuation such that is the least element in the support of . The subring consisting of elements with finite support is called the group ring of (which makes sense even if is not commutative). If is the ring of integers, then we recover the previous example (by identifying with the series whose th coefficient is .)
Rings with extra structure
A ring may be viewed as an abelian group (by using the addition operation), with extra structure: namely, ring multiplication. In the same way, there are other mathematical objects which may be considered as rings with extra structure. For example:
An associative algebra is a ring that is also a vector space over a field such that the scalar multiplication is compatible with the ring multiplication. For instance, the set of -by- matrices over the real field has dimension as a real vector space.
A ring is a topological ring if its set of elements is given a topology which makes the addition map () and the multiplication map to be both continuous as maps between topological spaces (where inherits the product topology or any other product in the category). For example, -by- matrices over the real numbers could be given either the Euclidean topology, or the Zariski topology, and in either case one would obtain a topological ring.
A λ-ring is a commutative ring R together with operations λ^n : R → R that are like nth exterior powers: λ^n(x + y) = Σ_(i+j=n) λ^i(x) λ^j(y).
For example, Z is a λ-ring with λ^n(x) = (x choose n), the binomial coefficients. The notion plays a central role in the algebraic approach to the Riemann–Roch theorem.
A totally ordered ring is a ring with a total ordering that is compatible with ring operations.
Some examples of the ubiquity of rings
Many different kinds of mathematical objects can be fruitfully analyzed in terms of some associated ring.
Cohomology ring of a topological space
To any topological space one can associate its integral cohomology ring
a graded ring. There are also homology groups of a space, and indeed these were defined first, as a useful tool for distinguishing between certain pairs of topological spaces, like the spheres and tori, for which the methods of point-set topology are not well-suited. Cohomology groups were later defined in terms of homology groups in a way which is roughly analogous to the dual of a vector space. To know each individual integral homology group is essentially the same as knowing each individual integral cohomology group, because of the universal coefficient theorem. However, the advantage of the cohomology groups is that there is a natural product, which is analogous to the observation that one can multiply pointwise a -multilinear form and an -multilinear form to get a ()-multilinear form.
The ring structure in cohomology provides the foundation for characteristic classes of fiber bundles, intersection theory on manifolds and algebraic varieties, Schubert calculus and much more.
Burnside ring of a group
To any group is associated its Burnside ring which uses a ring to describe the various ways the group can act on a finite set. The Burnside ring's additive group is the free abelian group whose basis is the set of transitive actions of the group and whose addition is the disjoint union of the action. Expressing an action in terms of the basis is decomposing an action into its transitive constituents. The multiplication is easily expressed in terms of the representation ring: the multiplication in the Burnside ring is formed by writing the tensor product of two permutation modules as a permutation module. The ring structure allows a formal way of subtracting one action from another. Since the Burnside ring is contained as a finite index subring of the representation ring, one can pass easily from one to the other by extending the coefficients from integers to the rational numbers.
Representation ring of a group ring
To any group ring or Hopf algebra is associated its representation ring or "Green ring". The representation ring's additive group is the free abelian group whose basis are the indecomposable modules and whose addition corresponds to the direct sum. Expressing a module in terms of the basis is finding an indecomposable decomposition of the module. The multiplication is the tensor product. When the algebra is semisimple, the representation ring is just the character ring from character theory, which is more or less the Grothendieck group given a ring structure.
Function field of an irreducible algebraic variety
To any irreducible algebraic variety is associated its function field. The points of an algebraic variety correspond to valuation rings contained in the function field and containing the coordinate ring. The study of algebraic geometry makes heavy use of commutative algebra to study geometric concepts in terms of ring-theoretic properties. Birational geometry studies maps between the subrings of the function field.
Face ring of a simplicial complex
Every simplicial complex has an associated face ring, also called its Stanley–Reisner ring. This ring reflects many of the combinatorial properties of the simplicial complex, so it is of particular interest in algebraic combinatorics. In particular, the algebraic geometry of the Stanley–Reisner ring was used to characterize the numbers of faces in each dimension of simplicial polytopes.
Category-theoretic description
Every ring can be thought of as a monoid in Ab, the category of abelian groups (thought of as a monoidal category under the tensor product of -modules). The monoid action of a ring on an abelian group is simply an -module. Essentially, an -module is a generalization of the notion of a vector space – where rather than a vector space over a field, one has a "vector space over a ring".
Let be an abelian group and let be its endomorphism ring (see above). Note that, essentially, is the set of all morphisms of , where if is in , and is in , the following rules may be used to compute and :
where as in is addition in , and function composition is denoted from right to left. Therefore, associated to any abelian group, is a ring. Conversely, given any ring, , is an abelian group. Furthermore, for every in , right (or left) multiplication by gives rise to a morphism of , by right (or left) distributivity. Let . Consider those endomorphisms of , that "factor through" right (or left) multiplication of . In other words, let be the set of all morphisms of , having the property that . It was seen that every in gives rise to a morphism of : right multiplication by . It is in fact true that this association of any element of , to a morphism of , as a function from to , is an isomorphism of rings. In this sense, therefore, any ring can be viewed as the endomorphism ring of some abelian -group (by -group, it is meant a group with being its set of operators). In essence, the most general form of a ring, is the endomorphism group of some abelian -group.
Any ring can be seen as a preadditive category with a single object. It is therefore natural to consider arbitrary preadditive categories to be generalizations of rings. And indeed, many definitions and theorems originally given for rings can be translated to this more general context. Additive functors between preadditive categories generalize the concept of ring homomorphism, and ideals in additive categories can be defined as sets of morphisms closed under addition and under composition with arbitrary morphisms.
Generalization
Algebraists have defined structures more general than rings by weakening or dropping some of ring axioms.
Rng
A rng is the same as a ring, except that the existence of a multiplicative identity is not assumed.
Nonassociative ring
A nonassociative ring is an algebraic structure that satisfies all of the ring axioms except the associative property and the existence of a multiplicative identity. A notable example is a Lie algebra. There exists some structure theory for such algebras that generalizes the analogous results for Lie algebras and associative algebras.
Semiring
A semiring (sometimes rig) is obtained by weakening the assumption that (R, +) is an abelian group to the assumption that (R, +) is a commutative monoid, and adding the axiom that 0 ⋅ a = a ⋅ 0 = 0 for all a in R (since it no longer follows from the other axioms).
Examples:
the non-negative integers with ordinary addition and multiplication;
the tropical semiring (a minimal code sketch follows this list).
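Here is a minimal Python sketch of the (min, +) tropical semiring mentioned above; it is my own illustration, and the function names are invented. Note how the absorbing role of the additive identity mirrors the 0 ⋅ a = 0 axiom.

```python
# Minimal sketch (own construction) of the (min, +) tropical semiring:
# "addition" is min with identity +infinity, "multiplication" is ordinary
# + with identity 0. There are no additive inverses, which is exactly
# what the semiring axioms permit.
INF = float("inf")

def t_add(a: float, b: float) -> float:
    """Tropical addition: min."""
    return min(a, b)

def t_mul(a: float, b: float) -> float:
    """Tropical multiplication: ordinary addition."""
    return a + b

assert t_add(3.0, INF) == 3.0   # INF is the additive identity
assert t_mul(3.0, 0.0) == 3.0   # 0 is the multiplicative identity
assert t_mul(INF, 5.0) == INF   # the additive identity is absorbing: 0*a = 0
a, b, c = 2.0, 5.0, 7.0         # multiplication distributes over addition
assert t_mul(a, t_add(b, c)) == t_add(t_mul(a, b), t_mul(a, c))
```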
Other ring-like objects
Ring object in a category
Let C be a category with finite products. Let pt denote a terminal object of C (an empty product). A ring object in C is an object R equipped with morphisms R × R → R (addition), R × R → R (multiplication), pt → R (additive identity), R → R (additive inverse), and pt → R (multiplicative identity) satisfying the usual ring axioms. Equivalently, a ring object is an object R equipped with a factorization of its functor of points Hom(−, R) : C^op → Ring → Set through the category of rings.
Ring scheme
In algebraic geometry, a ring scheme over a base scheme S is a ring object in the category of S-schemes. One example is the ring scheme W_n over Spec Z, which for any commutative ring A returns the ring W_n(A) of p-isotypic Witt vectors of length n over A.
Ring spectrum
In algebraic topology, a ring spectrum is a spectrum X together with a multiplication X ∧ X → X and a unit map S → X from the sphere spectrum S, such that the ring axiom diagrams commute up to homotopy. In practice, it is common to define a ring spectrum as a monoid object in a good category of spectra such as the category of symmetric spectra.
| Mathematics | Algebra | null |
48405 | https://en.wikipedia.org/wiki/Caesar%20cipher | Caesar cipher | In cryptography, a Caesar cipher, also known as Caesar's cipher, the shift cipher, Caesar's code, or Caesar shift, is one of the simplest and most widely known encryption techniques. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would be replaced by A, E would become B, and so on. The method is named after Julius Caesar, who used it in his private correspondence.
The encryption step performed by a Caesar cipher is often incorporated as part of more complex schemes, such as the Vigenère cipher, and still has modern application in the ROT13 system. As with all single-alphabet substitution ciphers, the Caesar cipher is easily broken and in modern practice offers essentially no communications security.
Example
The transformation can be represented by aligning two alphabets; the cipher alphabet is the plain alphabet rotated left or right by some number of positions. For instance, here is a Caesar cipher using a left rotation of three places, equivalent to a right shift of 23 (the shift parameter is used as the key):
When encrypting, a person looks up each letter of the message in the "plain" line and writes down the corresponding letter in the "cipher" line.
Plaintext: THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
Ciphertext: QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD
Deciphering is done in reverse, with a right shift of 3.
The encryption can also be represented using modular arithmetic by first transforming the letters into numbers, according to the scheme, A → 0, B → 1, ..., Z → 25. Encryption of a letter x by a shift n can be described mathematically as

E_n(x) = (x + n) mod 26.

Decryption is performed similarly,

D_n(x) = (x − n) mod 26.

(Here, "mod" refers to the modulo operation. The value x is in the range 0 to 25, but if x + n or x − n are not in this range then 26 should be added or subtracted.)
The replacement remains the same throughout the message, so the cipher is classed as a type of monoalphabetic substitution, as opposed to polyalphabetic substitution.
History and usage
The Caesar cipher is named after Julius Caesar, who, according to Suetonius, used it with a shift of three (A becoming D when encrypting, and D becoming A when decrypting) to protect messages of military significance. While Caesar's was the first recorded use of this scheme, other substitution ciphers are known to have been used earlier.
His nephew, Augustus, also used the cipher, but with a right shift of one, and it did not wrap around to the beginning of the alphabet:
Evidence exists that Julius Caesar also used more complicated systems, and one writer, Aulus Gellius, refers to a (now lost) treatise on his ciphers:
It is unknown how effective the Caesar cipher was at the time; there is no record at that time of any techniques for the solution of simple substitution ciphers. The earliest surviving records date to the 9th-century works of Al-Kindi in the Arab world with the discovery of frequency analysis.
A piece of text encrypted in a Hebrew version of the Caesar cipher is sometimes found on the back of Jewish mezuzah scrolls. When each letter is replaced with the letter before it in the Hebrew alphabet the text translates as "YHWH, our God, YHWH", a quotation from the main part of the scroll.
In the 19th century, the personal advertisements section in newspapers would sometimes be used to exchange messages encrypted using simple cipher schemes. David Kahn (1967) describes instances of lovers engaging in secret communications enciphered using the Caesar cipher in The Times. Even as late as 1915, the Caesar cipher was in use: the Russian army employed it as a replacement for more complicated ciphers which had proved to be too difficult for their troops to master; German and Austrian cryptanalysts had little difficulty in decrypting their messages.
Caesar ciphers can be found today in children's toys such as secret decoder rings. A Caesar shift of thirteen is also performed in the ROT13 algorithm, a simple method of obfuscating text widely found on Usenet and used to obscure text (such as joke punchlines and story spoilers), but not seriously used as a method of encryption.
The Vigenère cipher uses a Caesar cipher with a different shift at each position in the text; the value of the shift is defined using a repeating keyword. If the keyword is as long as the message, is chosen at random, never becomes known to anyone else, and is never reused, this is the one-time pad cipher, proven unbreakable. However the problems involved in using a random key as long as the message make the one-time pad difficult to use in practice. Keywords shorter than the message (e.g., "Complete Victory" used by the Confederacy during the American Civil War), introduce a cyclic pattern that might be detected with a statistically advanced version of frequency analysis.
In April 2006, fugitive Mafia boss Bernardo Provenzano was captured in Sicily partly because some of his messages, clumsily written in a variation of the Caesar cipher, were broken. Provenzano's cipher used numbers, so that "A" would be written as "4", "B" as "5", and so on.
In 2011, Rajib Karim was convicted in the United Kingdom of "terrorism offences" after using the Caesar cipher to communicate with Bangladeshi Islamic activists discussing plots to blow up British Airways planes or disrupt their IT networks. Although the parties had access to far better encryption techniques (Karim himself used PGP for data storage on computer disks), they chose to use their own scheme (implemented in Microsoft Excel), rejecting a more sophisticated code program called Mujahideen Secrets "because 'kaffirs', or non-believers, know about it, so it must be less secure".
Breaking the cipher
The Caesar cipher can be easily broken even in a ciphertext-only scenario. Since there are only a limited number of possible shifts (25 in English), an attacker can mount a brute force attack by deciphering the message, or part of it, using each possible shift. The correct decryption will be the one which makes sense as English text. As an example, consider the ciphertext ""; the candidate plaintext for shift four "" is the only one which makes sense as English text. Another type of brute force attack is to write out the alphabet beneath each letter of the ciphertext, starting at that letter. Again the correct decryption is the one which makes sense as English text. This technique is sometimes known as "completing the plain component".
Another approach is to match up the frequency distribution of the letters. By graphing the frequencies of letters in the ciphertext, and by knowing the expected distribution of those letters in the original language of the plaintext, a human can easily spot the value of the shift by looking at the displacement of particular features of the graph. This is known as frequency analysis. For example, in the English language the plaintext frequencies of the letters , , (usually most frequent), and , (typically least frequent) are particularly distinctive. Computers can automate this process by assessing the similarity between the observed frequency distribution and the expected distribution. This can be achieved, for instance, through the utilization of the chi-squared statistic or by minimizing the sum of squared errors between the observed and known language distributions.
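As a hedged illustration of the chi-squared approach, here is a self-contained Python sketch; the letter-frequency table holds approximate textbook values for English, and the function names are invented for the example, not taken from this article.

```python
import string

# Approximate relative frequencies (%) of letters in English text.
# These are textbook values, used here only as illustrative assumptions.
ENGLISH_FREQ = {
    'A': 8.2,  'B': 1.5,  'C': 2.8,  'D': 4.3,  'E': 12.7, 'F': 2.2,
    'G': 2.0,  'H': 6.1,  'I': 7.0,  'J': 0.15, 'K': 0.77, 'L': 4.0,
    'M': 2.4,  'N': 6.7,  'O': 7.5,  'P': 1.9,  'Q': 0.095, 'R': 6.0,
    'S': 6.3,  'T': 9.1,  'U': 2.8,  'V': 0.98, 'W': 2.4,  'X': 0.15,
    'Y': 2.0,  'Z': 0.074,
}

def shift(text: str, n: int) -> str:
    """Apply a Caesar shift of n to the letters of text."""
    out = []
    for ch in text.upper():
        if ch in string.ascii_uppercase:
            out.append(chr((ord(ch) - 65 + n) % 26 + 65))
        else:
            out.append(ch)
    return "".join(out)

def chi_squared(text: str) -> float:
    """Deviation of text's letter counts from expected English counts."""
    letters = [c for c in text.upper() if c in ENGLISH_FREQ]
    n = len(letters)
    return sum((letters.count(l) - n * p / 100) ** 2 / (n * p / 100)
               for l, p in ENGLISH_FREQ.items())

def crack(ciphertext: str) -> int:
    """Try all 26 keys; return the one whose decryption looks most English."""
    return min(range(26), key=lambda k: chi_squared(shift(ciphertext, -k)))

ciphertext = shift("DEFEND THE EAST WALL OF THE CASTLE AT DAWN "
                   "AND HOLD UNTIL REINFORCEMENTS ARRIVE", 3)
assert crack(ciphertext) == 3
```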
The unicity distance for the Caesar cipher is about 2, meaning that on average at least two characters of ciphertext are required to determine the key. In rare cases more text may be needed. For example, the words "" and "" can be converted to each other with a Caesar shift, which means they can produce the same ciphertext with different shifts. However, in practice the key can almost certainly be found with at least 6 characters of ciphertext.
With the Caesar cipher, encrypting a text multiple times provides no additional security. This is because two encryptions of, say, shift A and shift B will be equivalent to a single encryption with shift A + B (mod 26). In mathematical terms, the set of encryption operations under each possible key forms a group under composition.
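A quick check of this group property, as an illustrative sketch of my own:

```python
# Composing two Caesar shifts collapses to a single shift: the shifts
# form the cyclic group Z/26 under addition modulo 26.
def shift_letter(x: int, n: int) -> int:
    return (x + n) % 26

a, b = 7, 21
for x in range(26):
    assert shift_letter(shift_letter(x, a), b) == shift_letter(x, (a + b) % 26)
```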
| Technology | Computer security | null |
48510 | https://en.wikipedia.org/wiki/Terrestrial%20planet | Terrestrial planet | A terrestrial planet, tellurian planet, telluric planet, or rocky planet, is a planet that is composed primarily of silicate rocks or metals. Within the Solar System, the terrestrial planets accepted by the IAU are the inner planets closest to the Sun: Mercury, Venus, Earth and Mars. Among astronomers who use the geophysical definition of a planet, two or three planetary-mass satellites – Earth's Moon, Io, and sometimes Europa – may also be considered terrestrial planets. The large rocky asteroids Pallas and Vesta are sometimes included as well, albeit rarely. The terms "terrestrial planet" and "telluric planet" are derived from Latin words for Earth (Terra and Tellus), as these planets are, in terms of structure, Earth-like. Terrestrial planets are generally studied by geologists, astronomers, and geophysicists.
Terrestrial planets have a solid planetary surface, making them substantially different from larger gaseous planets, which are composed mostly of some combination of hydrogen, helium, and water existing in various physical states.
Structure
All terrestrial planets in the Solar System have the same basic structure: a central metallic core (mostly iron) surrounded by a silicate mantle.
The large rocky asteroid 4 Vesta has a similar structure; possibly so does the smaller one 21 Lutetia. Another rocky asteroid 2 Pallas is about the same size as Vesta, but is significantly less dense; it appears to have never differentiated a core and a mantle. The Earth's Moon and Jupiter's moon Io have similar structures to terrestrial planets, but Earth's Moon has a much smaller iron core. Another Jovian moon Europa has a similar density but has a significant ice layer on the surface: for this reason, it is sometimes considered an icy planet instead.
Terrestrial planets can have surface structures such as canyons, craters, mountains, volcanoes, and others, depending on the presence at any time of an erosive liquid or tectonic activity or both.
Terrestrial planets have secondary atmospheres, generated by volcanic out-gassing or from comet impact debris. This contrasts with the outer, giant planets, whose atmospheres are primary; primary atmospheres were captured directly from the original solar nebula.
Terrestrial planets within the Solar System
The Solar System has four terrestrial planets under the dynamical definition: Mercury, Venus, Earth and Mars. The Earth's Moon as well as Jupiter's moons Io and Europa would also count geophysically, as would perhaps the large protoplanet-asteroids Pallas and Vesta (though those are borderline cases). Among these bodies, only the Earth has an active surface hydrosphere. Europa is believed to have an active hydrosphere under its ice layer.
During the formation of the Solar System, there were many terrestrial planetesimals and proto-planets, but most merged with or were ejected by the four terrestrial planets, leaving only Pallas and Vesta to survive more or less intact. These two were likely both dwarf planets in the past, but have been battered out of equilibrium shapes by impacts. Some other protoplanets began to accrete and differentiate but suffered catastrophic collisions that left only a metallic or rocky core, like 16 Psyche or 8 Flora respectively. Many S-type and M-type asteroids may be such fragments.
The other round bodies from the asteroid belt outward are geophysically icy planets. They are similar to terrestrial planets in that they have a solid surface, but are composed of ice and rock rather than of rock and metal. These include the dwarf planets, such as Ceres, Pluto and Eris, which are found today only in the regions beyond the formation snow line where water ice was stable under direct sunlight in the early Solar System. It also includes the other round moons, which are ice-rock (e.g. Ganymede, Callisto, Titan, and Triton) or even almost pure (at least 99%) ice (Tethys and Iapetus). Some of these bodies are known to have subsurface hydrospheres (Ganymede, Callisto, Enceladus, and Titan), like Europa, and it is also possible for some others (e.g. Ceres, Mimas, Dione, Miranda, Ariel, Triton, and Pluto). Titan even has surface bodies of liquid, albeit liquid methane rather than water. Jupiter's Ganymede, though icy, does have a metallic core like the Moon, Io, Europa, and the terrestrial planets.
The name Terran world has been suggested to define all solid worlds (bodies assuming a rounded shape), without regard to their composition. It would thus include both terrestrial and icy planets.
Density trends
The uncompressed density of a terrestrial planet is the average density its materials would have at zero pressure. A greater uncompressed density indicates a greater metal content. Uncompressed density differs from the true average density (also often called "bulk" density) because compression within planet cores increases their density; the average density depends on planet size, temperature distribution, and material stiffness as well as composition.
Calculations to estimate uncompressed density inherently require a model of the planet's structure. Where there have been landers or multiple orbiting spacecraft, these models are constrained by seismological data and also moment of inertia data derived from the spacecraft's orbits. Where such data is not available, uncertainties are inevitably higher.
The uncompressed densities of the rounded terrestrial bodies directly orbiting the Sun trend towards lower values as the distance from the Sun increases, consistent with the temperature gradient that would have existed within the primordial solar nebula. The Galilean satellites show a similar trend going outwards from Jupiter; however, no such trend is observable for the icy satellites of Saturn or Uranus. The icy worlds typically have densities less than 2 g·cm⁻³. Eris is significantly denser (), and may be mostly rocky with some surface ice, like Europa. It is unknown whether extrasolar terrestrial planets in general will follow such a trend.
The data in the tables below are mostly taken from a list of gravitationally rounded objects of the Solar System and planetary-mass moons. All distances from the Sun are averages.
Extrasolar terrestrial planets
Most of the planets discovered outside the Solar System are giant planets, because they are more easily detectable. But since 2005, hundreds of potentially terrestrial extrasolar planets have also been found, with several being confirmed as terrestrial. Most of these are super-Earths, i.e. planets with masses between Earth's and Neptune's; super-Earths may be gas planets or terrestrial, depending on their mass and other parameters.
During the early 1990s, the first extrasolar planets were discovered orbiting the pulsar PSR B1257+12, with masses of 0.02, 4.3, and 3.9 times that of Earth, by pulsar timing.
When 51 Pegasi b, the first planet found around a star still undergoing fusion, was discovered, many astronomers assumed it to be a gigantic terrestrial, because it was assumed no gas giant could exist as close to its star (0.052 AU) as 51 Pegasi b did. It was later found to be a gas giant.
In 2005, the first planets orbiting a main-sequence star and which showed signs of being terrestrial planets were found: Gliese 876 d and OGLE-2005-BLG-390Lb. Gliese 876 d orbits the red dwarf Gliese 876, 15 light years from Earth, and has a mass seven to nine times that of Earth and an orbital period of just two Earth days. OGLE-2005-BLG-390Lb has about 5.5 times the mass of Earth and orbits a star about 21,000 light-years away in the constellation Scorpius.
From 2007 to 2010, three (possibly four) potential terrestrial planets were found orbiting within the Gliese 581 planetary system. The smallest, Gliese 581e, is only about 1.9 Earth masses, but orbits very close to the star. Two others, Gliese 581c and the disputed Gliese 581d, are more-massive super-Earths orbiting in or close to the habitable zone of the star, so they could potentially be habitable, with Earth-like temperatures.
Another possibly terrestrial planet, HD 85512 b, was discovered in 2011; it has at least 3.6 times the mass of Earth.
The radius and composition of all these planets are unknown.
The first confirmed terrestrial exoplanet, Kepler-10b, was found in 2011 by the Kepler space telescope, specifically designed to discover Earth-size planets around other stars using the transit method.
In the same year, the Kepler space telescope mission team released a list of 1235 extrasolar planet candidates, including six that are "Earth-size" or "super-Earth-size" (i.e. they have a radius less than twice that of the Earth) and in the habitable zone of their star.
Since then, Kepler has discovered hundreds of planets ranging from Moon-sized to super-Earths, with many more candidates in this size range (see image).
In 2016, statistical modeling of the relationship between a planet's mass and radius using a broken power law suggested that the transition point between rocky, terrestrial worlds and mini-Neptunes without a defined surface lies very close to the masses of Earth and Venus, implying that rocky worlds much larger than our own are quite rare. This has led some to advocate retiring the term "super-Earth" as scientifically misleading. Since 2016 the catalog of known exoplanets has grown significantly, and there have been several published refinements of the mass-radius model. As of 2024, the expected transition point between rocky and intermediate-mass planets sits at roughly 4.4 Earth masses and roughly 1.6 Earth radii.
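As an illustration of what such a broken power-law model looks like, here is a toy Python sketch. Only the 4.4 Earth-mass and roughly 1.6 Earth-radius transition point comes from the text above; the slopes are invented placeholders, not fitted values from the cited studies.

```python
# Toy broken power-law mass-radius relation, R = M**s in Earth units,
# with a slope change at an assumed transition mass. The slopes below
# are illustrative placeholders chosen so the toy reproduces the quoted
# transition point; they are not fitted values from the literature.
def radius_from_mass(m: float,
                     m_break: float = 4.4,   # transition mass, Earth masses
                     s_rocky: float = 0.32,  # assumed rocky-branch slope
                     s_vol: float = 0.60     # assumed volatile-rich slope
                     ) -> float:
    """Radius in Earth radii for mass m in Earth masses (toy model)."""
    if m <= m_break:
        return m ** s_rocky
    r_break = m_break ** s_rocky  # keep the relation continuous at the break
    return r_break * (m / m_break) ** s_vol

print(radius_from_mass(1.0))   # 1.0 by construction (Earth)
print(radius_from_mass(4.4))   # about 1.6, the quoted transition radius
```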
In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet (named OGLE-2016-BLG-1928) unbounded by any star, and free-floating in the Milky Way galaxy.
List of terrestrial exoplanets
The following exoplanets have a density of at least 5 g/cm³ and a mass below Neptune's and are thus very likely terrestrial:
Kepler-10b, Kepler-20b, Kepler-36b, Kepler-48d, Kepler 68c, Kepler-78b, Kepler-89b, Kepler-93b, Kepler-97b, Kepler-99b, Kepler-100b, Kepler-101c, Kepler-102b, Kepler-102d, Kepler-113b, Kepler-131b, Kepler-131c, Kepler-138c, Kepler-406b, Kepler-406c, Kepler-409b.
Frequency
In 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth- and super-Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way. Eleven billion of these estimated planets may be orbiting Sun-like stars. The nearest such planet may be 12 light-years away, according to the scientists. However, this does not give estimates for the number of extrasolar terrestrial planets, because there are planets as small as Earth that have been shown to be gas planets (see Kepler-138d).
Estimates show that about 80% of potentially habitable worlds are covered by land, and about 20% are ocean planets. Planets with ratios more like that of Earth, which is about 30% land and 70% ocean, make up only 1% of these worlds.
Types
Several possible classifications for solid planets have been proposed.
Silicate planet
A solid planet like Venus, Earth, or Mars, made primarily of a silicon-based rocky mantle with a metallic (iron) core.
Carbon planet (also called "diamond planet")
A theoretical class of planets, composed of a metal core surrounded by primarily carbon-based minerals. They may be considered a type of terrestrial planet if the metal content dominates. The Solar System contains no carbon planets but does have carbonaceous asteroids, such as Ceres and Hygiea. It is unknown if Ceres has a rocky or metallic core.
Iron planet
A theoretical type of solid planet that consists almost entirely of iron and therefore has a greater density and a smaller radius than other solid planets of comparable mass. Mercury in the Solar System has a metallic core equal to 60–70% of its planetary mass, and is sometimes called an iron planet, though its surface is made of silicates and is iron-poor. Iron planets are thought to form in the high-temperature regions close to a star, like Mercury, and if the protoplanetary disk is rich in iron.
Icy planet
A type of solid planet with an icy surface of volatiles. In the Solar System, most planetary-mass moons (such as Titan, Triton, and Enceladus) and many dwarf planets (such as Pluto and Eris) have such a composition. Europa is sometimes considered an icy planet due to its surface ice, but its higher density indicates that its interior is mostly rocky. Such planets can have internal saltwater oceans and cryovolcanoes erupting liquid water (i.e. an internal hydrosphere, like Europa or Enceladus); they can have an atmosphere and hydrosphere made from methane or nitrogen (like Titan). A metallic core is possible, as exists on Ganymede.
Coreless planet
A theoretical type of solid planet that consists of silicate rock but has no metallic core, i.e. the opposite of an iron planet. Although the Solar System contains no coreless planets, chondrite asteroids and meteorites are common in the Solar System. Ceres and Pallas have mineral compositions similar to carbonaceous chondrites, though Pallas is significantly less hydrated. Coreless planets are thought to form farther from the star where volatile oxidizing material is more common.
| Physical sciences | Planetary science | null |
48517 | https://en.wikipedia.org/wiki/Antelope | Antelope | The term antelope refers to numerous extant or recently extinct species of the ruminant artiodactyl family Bovidae that are indigenous to most of Africa, India, the Middle East, Central Asia, and a small area of Eastern Europe. Antelopes do not form a monophyletic group, as some antelopes are more closely related to other bovid groups, like bovines, goats, and sheep, than to other antelopes.
A better definition, also known as the "true antelopes", includes only the genera Gazella, Nanger, Eudorcas, and Antilope. One North American mammal, the pronghorn or "pronghorn antelope", is colloquially referred to as the "American antelope", despite the fact that it belongs to a completely different family (Antilocapridae) than the true Old-World antelopes; pronghorn are the sole extant member of an extinct prehistoric lineage that once included many unique species.
Although antelope are sometimes referred to, and easily misidentified as, "deer" (cervids), true deer are only distant relatives of antelopes. While antelope are found in abundance in Africa, only one deer species is found on the continent—the Barbary red deer of Northern Africa. By comparison, numerous deer species are usually found in regions of the world with fewer or no antelope species present, such as throughout Southeast Asia, Europe and all of the Americas. This is likely due to competition over shared resources, as deer and antelope fill a virtually identical ecological niche in their respective habitats. Countries like India, however, have large populations of endemic deer and antelope, with the different species generally keeping to their own "niches" with minimal overlap.
Unlike deer, in which the males sport elaborate head antlers that are shed and regrown annually, antelope horns are bone and grow steadily, never falling off. If a horn is broken, it will either remain broken or take years to partially regenerate, depending on the species of the antelope.
Etymology
The English word "antelope" first appeared in 1417 and is derived from the Old French antelop, itself derived from Medieval Latin ant(h)alopus, which in turn comes from the Byzantine Greek word ἀνθόλοψ, anthólops, first attested in Eustathius of Antioch (), according to whom it was a fabulous animal "haunting the banks of the Euphrates, very savage, hard to catch and having long, saw-like horns capable of cutting down trees". It perhaps derives from Greek ἀνθος, anthos (flower) and ώψ, ops (eye), perhaps meaning "beautiful eye" or alluding to the animals' long eyelashes. This, however, may be a folk etymology in Greek based on some earlier root. The word talopus and calopus, from Latin, came to be used in heraldry. In 1607, it was first used for living, cervine animals .
Species
There are 91 antelope species, most of which are native to Africa, occurring in about 30 genera. The classification of tribes or subfamilies within Bovidae is still a matter of debate, with several alternative systems proposed.
Antelope are not a cladistic or taxonomically defined group. The term is used to describe all members of the family Bovidae that do not fall under the category of sheep, cattle, or goats. Usually, all species of the Antilopinae, Hippotraginae, Reduncinae, Cephalophinae, many Bovinae, the grey rhebok, and the impala are called antelope.
Distribution and habitat
More species of antelope are native to Africa than to any other continent, almost exclusively in savannahs, with 25-40 species co-occurring over much of East Africa. Because savannah habitat in Africa has expanded and contracted five times over the last three million years, and the fossil record indicates this is when most extant species evolved, it is believed that isolation in refugia during contractions was a major driver of this diversification. Other species occur in Asia: the Arabian Peninsula is home to the Arabian oryx and Dorcas gazelle. South Asia is home to the nilgai, chinkara, blackbuck, Tibetan antelope, and four-horned antelope, while Russia and Central Asia have the Tibetan antelope and saiga.
No antelope species is native to Australasia or Antarctica, nor do any extant species occur in the Americas, though the nominate saiga subspecies occurred in North America during the Pleistocene. North America is currently home to the native pronghorn, which taxonomists do not consider a member of the antelope group, but which is often locally referred to as such (e.g., "American antelope"). In Europe, several extinct species occur in the fossil record, and the saiga was found widely during the Pleistocene but did not persist into the later Holocene, except in Russian Kalmykia and Astrakhan Oblast.
Many species of antelope have been imported to other parts of the world, especially the United States, for exotic game hunting. With some species possessing spectacular leaping and evasive skills, individuals may escape. Texas in particular has many game ranches, as well as habitats and climates that are very hospitable to African and Asian plains antelope species. Accordingly, wild populations of blackbuck antelope, gemsbok, and nilgai may be found in Texas.
Antelope live in a wide range of habitats. Most live in the African savannahs. However, many species are more secluded, such as the forest antelope, as well as the extreme cold-living saiga, the desert-adapted Arabian oryx, the rocky koppie-living klipspringer, and semiaquatic sitatunga.
Species living in forests, woodland, or bush tend to be sedentary, but many of the plains species undertake long migrations. These enable grass-eating species to follow the rains and thereby their food supply. The gnus and gazelles of East Africa perform some of the most impressive mass migratory circuits of all mammals.
Morphology
Body and covering
Antelope vary greatly in size. For example, a male common eland can measure at the shoulder and weigh almost , whereas an adult royal antelope may stand only at the shoulder and weigh a mere .
Not surprisingly for animals with long, slender yet powerful legs, many antelope have long strides and can run fast. Some (e.g. klipspringer) are also adapted to inhabiting rock koppies and crags. Both dibatags and gerenuks habitually stand on their two hind legs to reach acacia and other tree foliage. Different antelope have different body types, which can affect movement. Duikers are short, bush-dwelling antelope that can pick through dense foliage and dive into the shadows rapidly. Gazelle and springbok are known for their speed and leaping abilities. Even larger antelope, such as nilgai, elands, and kudus, are capable of jumping or greater, although their running speed is restricted by their greater mass.
Antelope have a wide variety of coverings, though most have a dense coat of short fur. In most species, the coat (pelage) is some variation of a brown colour (or several shades of brown), often with white or pale underbodies. Exceptions include the zebra-marked zebra duiker, the grey, black, and white Jentink's duiker, and the black lechwe. Most of the "spiral-horned" antelope have pale, vertical stripes on their backs. Many desert and semidesert species are particularly pale, some almost silvery or whitish (e.g. Arabian oryx); the beisa and southern oryxes have gray and black pelages with vivid black-and-white faces. Common features of various gazelles are white rumps, which flash a warning to others when they run from danger, and dark stripes midbody (the latter feature is also shared by the springbok and beira). The springbok also has a pouch of white, brushlike hairs running along its back, which opens up when the animal senses danger, causing the dorsal hairs to stand on end.
Many antelope are sexually dimorphic. In most species, both sexes have horns, but those of males tend to be larger. Males tend to be larger than the females, but exceptions in which the females tend to be heavier than the males include the bush duiker, dwarf antelope, Cape grysbok, and oribi, all rather small species. A number of species have hornless females (e.g., sitatunga, red lechwe, and suni). In some species, the males and females have differently coloured pelages (e.g. blackbuck and nyala).
Sensory and digestive systems
Antelope are ruminants, so they have well-developed molar teeth, which grind cud (food balls stored in the stomach) into a pulp for further digestion. They have no upper incisors, but rather a hard upper gum pad, against which their lower incisors bite to tear grass stems and leaves.
Like many other herbivores, antelope rely on keen senses to avoid predators. Their eyes are placed on the sides of their heads, giving them a broad radius of vision with minimal binocular vision. Their horizontally elongated pupils also help in this respect. Acute senses of smell and hearing give antelope the ability to perceive danger at night out in the open (when predators are often on the prowl). These same senses play an important role in contact between individuals of the same species; markings on their heads, ears, legs, and rumps are used in such communication. Many species "flash" such markings, as well as their tails; vocal communications include loud barks, whistles, "moos", and trumpeting; many species also use scent marking to define their territories or simply to maintain contact with their relatives and neighbors.
Antelope horns
The size and shape of antelope horns varies greatly. Those of the duikers and dwarf antelope tend to be simple "spikes", but differ in the angle to the head from backward curved and backward pointing (e.g. yellow-backed duiker) to straight and upright (e.g. steenbok). Other groups have twisted (e.g. common eland), spiral (e.g. greater kudu), "recurved" (e.g. the reedbucks), lyrate (e.g. impala), or long, curved (e.g. the oryxes) horns. Horns are not shed and their bony cores are covered with a thick, persistent sheath of horny material, both of which distinguish them from antlers.
Antelope horns are efficient weapons, and tend to be better developed in those species where males fight over females (large herd antelope) than in solitary or lekking species. With male-male competition for mates, horns are clashed in combat. Males more commonly use their horns against each other than against another species. The boss of the horns is typically arranged in such a way that two antelope striking at each other's horns cannot crack each other's skulls, making a fight via horn more ritualized than dangerous. Many species have ridges in their horns for at least two-thirds the length of their horns, but these ridges are not a direct indicator of age.
Behavior
Mating strategies
Antelope are often classified by their reproductive behavior.
Small antelope, such as dik-diks, tend to be monogamous. They live in a forest environment with patchy resources, and a male is unable to monopolize more than one female due to this sparse distribution. Larger forest species often form very small herds of two to four females and one male.
Some species, such as lechwes, pursue a lek breeding system, where the males gather on a lekking ground and compete for a small territory, while the females appraise males and choose one with which to mate.
Large grazing antelope, such as impala or wildebeest, form large herds made up of many females and a single breeding male, which excludes all other males, often by combat.
Defense
Antelope pursue a number of defense strategies, often dictated by their morphology.
Large antelope that gather in large herds, such as wildebeest, rely on numbers and running speed for protection. In some species, adults will encircle the offspring, protecting them from predators when threatened. Many forest antelope rely on cryptic coloring and good hearing to avoid predators. Forest antelope often have very large ears and dark or striped colorations. Small antelope, especially duikers, evade predation by jumping into dense bush where the predator cannot pursue. Springboks use a behavior known as stotting to confuse predators.
Open grassland species have nowhere to hide from predators, so they tend to be fast runners. They are agile and have good endurance—these are advantages when pursued by sprint-dependent predators such as cheetahs, which are the fastest of land animals, but tire quickly. Reaction distances vary with predator species and behaviour. For example, gazelles may not flee from a lion until it is closer than 200 m (650 ft)—lions hunt as a pride or by surprise, usually by stalking; one that can be seen clearly is unlikely to attack. However, sprint-dependent cheetahs will cause gazelles to flee at a range of over .
If escape is not an option, antelope are capable of fighting back. Oryxes in particular have been known to stand sideways like many unrelated bovids to appear larger than they are, and may charge at a predator as a last resort.
Status
About 25 species are rated by the IUCN as endangered, such as the dama gazelle and mountain nyala. A number of subspecies are also endangered, including the giant sable antelope and the mhorr gazelle. The main causes for concern for these species are habitat loss, competition with cattle for grazing, and trophy hunting.
The chiru or Tibetan antelope is hunted for its pelt, which is used in making shahtoosh wool, used in shawls. Since the fur can only be removed from dead animals, and each animal yields very little of the downy fur, several antelope must be killed to make a single shawl. This unsustainable demand has led to enormous declines in the chiru population.
The saiga is hunted for its horns, which are considered an aphrodisiac by some cultures. Only the males have horns, and they have been so heavily hunted that some herds contain up to 800 females to one male. The species showed a steep decline and was formerly classified as critically endangered. However, the saigas have experienced a massive regrowth and are now classified as near threatened.
Lifespan
It is difficult to determine how long antelope live in the wild. With the preference of predators towards old and infirm individuals, which can no longer sustain peak speeds, few wild prey-animals live as long as their biological potential. In captivity, wildebeest have lived beyond 20 years old, and impalas have reached their late teens.
Relationship with humans
Culture
The antelope's horn is prized for supposed medicinal and magical powers in many places. The horn of the male saiga, in Eastern practice, is ground as an aphrodisiac, for which it has been hunted nearly to extinction. In the Congo, it is thought to confine spirits. The antelope's ability to run swiftly has also led to their association with the wind, such as in the Rig Veda, as the steeds of the Maruts and the wind god Vayu. There is, however, no scientific evidence that the horns of any antelope have any effect on a human's physiology or characteristics.
In Mali, antelope were believed to have brought the skills of agriculture to mankind.
Humans have also used the term "Antelope" to refer to a tradition usually found in the sport of track and field.
Domestication
Domestication of animals requires certain traits in the animal that antelope do not typically display. Most species are difficult to contain in any density, due to the territoriality of the males, or in the case of oryxes (which have a relatively hierarchical social structure), an aggressive disposition; they can easily kill a human. Because many have extremely good jumping abilities, providing adequate fencing is a challenge. Also, antelope will consistently display a fear response to perceived predators, such as humans, making them very difficult to herd or handle. Although antelope have diets and rapid growth rates highly suitable for domestication, this tendency to panic and their non-hierarchical social structure explains why farm-raised antelope are uncommon. Ancient Egyptians kept herds of gazelles and addax for meat, and occasionally pets. It is unknown whether they were truly domesticated, but it seems unlikely, as no domesticated gazelles exist today.
However, humans have had success taming certain species, such as the elands. These antelope sometimes jump over each other's backs when alarmed, but this incongruous talent seems to be exploited only by wild members of the species; tame elands do not take advantage of it and can be enclosed within a very low fence. Their meat, milk, and hides are all of excellent quality, and experimental eland husbandry has been going on for some years in both Ukraine and Zimbabwe. In both locations, the animal has proved wholly amenable to domestication. Similarly, European visitors to Arabia reported "tame gazelles are very common in the Asiatic countries of which the species is a native; and the poetry of these countries abounds in allusions both to the beauty and the gentleness of the gazelle." Other antelope that have been tamed successfully include the gemsbok, the kudu, and the springbok.
Hybrid antelope
A wide variety of antelope hybrids have been recorded in zoos, game parks, and wildlife ranches, due to either a lack of more appropriate mates in enclosures shared with other species or a misidentification of species. The ease of hybridization shows how closely related some antelope species are. With few exceptions, most hybrid antelope occur only in captivity.
Most hybrids occur between species within the same genus. All reported examples occur within the same subfamily. As with most mammal hybrids, the less closely related the parents, the more likely the offspring will be sterile.
Heraldry
Antelope are a common symbol in heraldry, though they occur in a highly distorted form from nature. The heraldic antelope has the body of a stag and the tail of a lion, with serrated horns, and a small tusk at the end of its snout. This bizarre and inaccurate form was invented by European heralds in the Middle Ages, who knew little of foreign animals and made up the rest. The antelope was mistakenly imagined to be a monstrous beast of prey; the 16th century poet Edmund Spenser referred to it as being "as fierce and fell as a wolf."
Antelope can all also occur in their natural form, in which case they are termed "natural antelope" to distinguish them from the more usual heraldic antelope. The arms previously used by the Republic of South Africa featured a natural antelope, along with an oryx.
| Biology and health sciences | Artiodactyla | null |
48519 | https://en.wikipedia.org/wiki/Highway | Highway | A highway is any public or private road or other public way on land. It includes not just major roads, but also other public roads and rights of way. In the United States, it is also used as an equivalent term to controlled-access highway, or a translation for motorway, Autobahn, autostrada, autoroute, etc.
According to Merriam-Webster, the use of the term predates the 12th century. According to Etymonline, "high" is in the sense of "main".
In North American and Australian English, major roads such as controlled-access highways or arterial roads are often state highways (Canada: provincial highways). Other roads may be designated "county highways" in the US and Ontario. These classifications refer to the level of government (state, provincial, county) that maintains the roadway. In British English, "highway" is primarily a legal term. Everyday use normally implies roads, while the legal use covers any route or path with a public right of access, including footpaths etc.
The term has led to several related derived terms, including highway system, highway code, highway patrol and highwayman.
Overview
Major highways are often named and numbered by the governments that typically develop and maintain them. Australia's Highway 1 is the longest national highway in the world at over and runs almost the entire way around the continent. China has the world's largest network of highways, followed closely by the United States. Some highways, like the Pan-American Highway or the European routes, span multiple countries. Some major highway routes include ferry services, such as US Route 10, which crosses Lake Michigan.
Traditionally highways were used by people on foot or on horses. Later they also accommodated carriages, bicycles and eventually motor cars, facilitated by advancements in road construction. In the 1920s and 1930s, many nations began investing heavily in highway systems in an effort to spur commerce and bolster national defence.
Major highways that connect cities in populous developed and developing countries usually incorporate features intended to enhance the road's capacity, efficiency, and safety to various degrees. Such features include a reduction in the number of locations for user access, the use of dual carriageways with two or more lanes on each carriageway, and grade-separated junctions with other roads and modes of transport. These features are typically present on highways built as motorways (freeways).
Terminology
England and Wales
The general legal definition deals with right of use, not the form of construction; this is distinct from e.g. the popular use of the word in the US. A highway is defined in English common law by a number of similarly worded definitions such as "a way over which all members of the public have the right to pass and repass without hindrance" usually accompanied by "at all times"; ownership of the ground is for most purposes irrelevant, thus the term encompasses all such ways from the widest trunk roads in public ownership to the narrowest footpath providing unlimited pedestrian access over private land.
A highway might be open to all forms of lawful land traffic (e.g. vehicular, horse, pedestrian) or limited to specific modes of traffic; usually a highway available to vehicles is also available to foot or horse traffic, a highway available to horse traffic is available to cyclists and pedestrians; but there are exceptional cases in which a highway is only available to vehicles, or is subdivided into dedicated parallel sections for different users.
A highway can share ground with a private right of way for which full use is not available to the general public: for example farm roads which the owner may use for any purpose but for which the general public only has a right of use on foot or horseback. The status of highway on most older roads has been gained by established public use, while newer roads are typically dedicated as highways from the time they are adopted (taken into the care and control of a council or other public authority). In England and Wales, a public highway is also known as "The King's Highway".
The core definition of a highway is modified in various legislation for a number of purposes but only for the specific matters dealt with in each such piece of legislation. This is typically in the case of bridges, tunnels and other structures whose ownership, mode of use or availability would otherwise exclude them from the general definition of a highway. Recent examples include toll bridges and tunnels which have the definition of highway imposed upon them (in a legal order applying only to the individual structure) to allow application of most traffic laws to those using them but without causing all of the general obligations or rights of use otherwise applicable to a highway.
Limited access highways for vehicles, with their own traffic rules, are called "motorways" in the UK.
Scotland
Scots law is similar to English law with regard to highways but with differing terminology and legislation. What is defined in England as a highway will often in Scotland be what is defined by s.151 Roads (Scotland) Act 1984 (but only "in this act" although other legislation could imitate) simply as a road, that is:
"any way (other than a waterway) over which there is a public right of passage (by whatever means [and whether subject to a toll or not]) and includes the road’s verge, and any bridge (whether permanent or temporary) over which, or tunnel through which, the road passes; and any reference to a road includes a part thereof"
The word highway is itself no longer a statutory expression in Scots law but remains in common law.
United States
In American law, the word "highway" is sometimes used to denote any public way used for travel, whether a "road, street, and parkway"; however, in practical and useful meaning, a "highway" is a major and significant, well-constructed road that is capable of carrying reasonably heavy to extremely heavy traffic. Highways generally have a route number designated by the state and federal departments of transportation.
California Vehicle Code, Sections 360, 590, define a "highway" as only a way open for use by motor vehicles, but the California Supreme Court has held that "the definition of 'highway' in the Vehicle Code is used for special purposes of that act" and that canals of the Los Angeles neighborhood of Venice are "highways" that are entitled to be maintained with state highway funds.
History
Large scale highway systems developed in the 20th century as automobile usage increased. The first United States limited-access road was constructed on Long Island, New York, and known as the Long Island Motor Parkway or the Vanderbilt Motor Parkway. It was completed in 1911. It included many modern features, including banked turns, guard rails and reinforced concrete tarmac. Traffic could turn left between the parkway and connectors, crossing oncoming traffic, so it was not a controlled-access highway (or "freeway" as later defined by the federal government's Manual on Uniform Traffic Control Devices).
Italy was the first country in the world to build controlled-access highways reserved for fast traffic and for motor vehicles only. The Autostrada dei Laghi ("Lakes Highway"), the first built in the world, connecting Milan to Lake Como and Lake Maggiore, and now parts of the A8 and A9 highways, was devised by Piero Puricelli and was inaugurated in 1924. This highway, called autostrada, contained only one lane in each direction and no interchanges.
The Southern State Parkway opened in 1927, while the Long Island Motor Parkway was closed in 1937 and replaced by the Northern State Parkway (opened 1931) and the contiguous Grand Central Parkway (opened 1936). In Germany, construction of the Bonn-Cologne Autobahn began in 1929 and was opened in 1932 by Konrad Adenauer, then the mayor of Cologne. Soon the Autobahn was the first limited-access, high-speed road network in the world, with the first section from Frankfurt am Main to Darmstadt opening in 1935.
In the US, the Federal Aid Highway Act of 1921 (Phipps Act) enacted a fund to create an extensive highway system. In 1922, the first blueprint for a national highway system (the Pershing Map) was published. The Federal Aid Highway Act of 1956 allocated $25 billion for the construction of the Interstate Highway System over a 20-year period.
In Great Britain, the Special Roads Act 1949 provided the legislative basis for roads restricted to specified classes of vehicles, to which non-standard speed limits or no speed limits applied (such roads were later mostly termed motorways, now with speed limits not exceeding 70 mph); in terms of general road law this legislation overturned the usual principle that a road available to vehicular traffic was also available to horse or pedestrian traffic, which is usually the only practical change when non-motorways are reclassified as special roads. The first section of motorway in the UK opened in 1958 (part of the M6 motorway), followed in 1959 by the first section of the M1 motorway.
Social effects
Often reducing travel times relative to city or town streets, highways with limited access and grade separation can create increased opportunities for people to travel for business, trade or pleasure and also provide trade routes for goods. Highways can reduce commute and other travel time but additional road capacity can also release latent traffic demand. If not accurately predicted at the planning stage, this extra traffic may lead to the new road becoming congested sooner than would otherwise be anticipated by considering increases in vehicle ownership. More roads allow drivers to use their cars when otherwise alternatives may have been sought, or the journey may not have been made, which can mean that a new road brings only short-term mitigation of traffic congestion.
Where highways are created through existing communities, there can be reduced community cohesion and more difficult local access. Consequently, property values have decreased in many cutoff neighborhoods, leading to decreased housing quality over time. In the United States, many of these effects stemmed from discriminatory planning practices that predate the civil rights era, so the bulk of the displacement and other social harms fell on communities of color, particularly African Americans.
In recent times, freeway removal, an urban planning policy of demolishing freeways and creating mixed-use urban areas, parks, or residential and commercial land uses in their place, has become popular in many cities as a way to combat the social problems caused by highways.
Economic effects
In transport, demand can be measured in numbers of journeys made or in total distance travelled across all journeys (e.g. passenger-kilometres for public transport or vehicle-kilometres of travel (VKT) for private transport). Supply is considered to be a measure of capacity. The price of the good (travel) is measured using the generalised cost of travel, which includes both money and time expenditure.
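As a hedged illustration, the generalised cost is conventionally written as a money cost plus travel time weighted by a value of time; the symbols below are standard in transport economics, not taken from this article.

```latex
% Conventional decomposition of the generalised cost g of a trip:
% money outlay p (fares, fuel, tolls) plus travel time t weighted by
% the traveller's value of time v_T.
g \;=\; p \;+\; v_T \, t
% Demand is modelled as decreasing in g, so extra capacity that lowers
% t lowers g, which is one mechanism behind induced demand.
```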
The effect of increases in supply (capacity) is of particular interest in transport economics (see induced demand), as the potential environmental consequences are significant (see externalities below).
In addition to providing benefits to their users, transport networks impose both positive and negative externalities on non-users. The consideration of these externalities—particularly the negative ones—is a part of transport economics. Positive externalities of transport networks may include the ability to provide emergency services, increases in land value and agglomeration benefits. Negative externalities are wide-ranging and may include local air pollution, noise pollution, light pollution, safety hazards, community severance and congestion. The contribution of transport systems to potentially hazardous climate change is a significant negative externality which is difficult to evaluate quantitatively, making it difficult (but not impossible) to include in transport economics-based research and analysis. Congestion is considered a negative externality by economists.
A 2016 study found that for the United States, "a 10% increase in a region's stock of highways causes a 1.7% increase in regional patenting over a five-year period." A 2021 study found that areas that obtained access to a new highway experienced a substantial increase in top-income taxpayers and a decline in low-income taxpayers. Highways also contributed to job and residential urban sprawl.
Environmental effects
Highways are extended linear sources of pollution.
Roadway noise increases with operating speed so major highways generate more noise than arterial streets. Therefore, considerable noise health effects are expected from highway systems. Noise mitigation strategies exist to reduce sound levels at nearby sensitive receptors. The idea that highway design could be influenced by acoustical engineering considerations first arose about 1973.
Air quality issues: Highways may contribute fewer emissions than arterials carrying the same vehicle volumes. This is because high, constant-speed operation creates an emissions reduction compared to vehicular flows with stops and starts. However, concentrations of air pollutants near highways may be higher due to increased traffic volumes. Therefore, the risk of exposure to elevated levels of air pollutants from a highway may be considerable, and further magnified when highways have traffic congestion.
New highways can also cause habitat fragmentation, encourage urban sprawl and allow human intrusion into previously untouched areas, as well as (counterintuitively) increasing congestion, by increasing the number of intersections.
They can also reduce the use of public transport, indirectly leading to greater pollution.
High-occupancy vehicle lanes are being added to some newer or reconstructed highways in the United States and other countries around the world to encourage carpooling and mass transit. These lanes help reduce the number of cars on the highway, which in turn reduces pollution and traffic congestion by promoting carpooling. However, they tend to require dedicated lanes on a highway, which makes them difficult to construct in dense urban areas, where they are the most effective.
To address habitat fragmentation, wildlife crossings have become increasingly popular in many countries. Wildlife crossings allow animals to safely cross human-made barriers like highways.
Road traffic safety
Road traffic safety describes the safety performance of roads and streets, and methods used to reduce the harm (deaths, injuries, and property damage) on the highway system from traffic collisions. It includes the design, construction and regulation of the roads, the vehicles used on them and the training of drivers and other road-users.
A report published by the World Health Organization in 2004 estimated that some 1.2 million people were killed and 50 million injured on the roads around the world each year, and that road traffic injury was the leading cause of death among children 10–19 years of age.
The report also noted that the problem was most severe in developing countries and that simple prevention measures could halve the number of deaths. For reasons of clear data collection, only harm involving a road vehicle is included. A person tripping with fatal consequences or dying for some unrelated reason on a public road is not included in the relevant statistics.
Statistics
The United States has the world's largest network of highways, including both the Interstate Highway System and the United States Numbered Highway System. At least one of these networks is present in every state, and together they interconnect most major cities. The Interstate Highway System is also the world's most expensive mega-project, with the entire system estimated in 1955 to cost $27 billion.
China's highway network is the second most extensive in the world. China's expressway network is the longest expressway system in the world and is quickly expanding; in 2008 alone, a significant length of new expressway was added to the network.
Longest international highway The Pan-American Highway, which connects many countries in the Americas, is the longest international highway. It is discontinuous: there is a significant gap in southeastern Panama, where the rainfall is immense and the terrain is entirely unsuitable for highway construction.
Longest national highway (point to point) The Trans-Canada Highway has one main route, a northern route through the western provinces, and several branches in the central and eastern provinces. The TCH runs east–west across southern Canada, the populated portion of the country, connecting many of the major urban centres along its route, crossing all provinces and reaching nearly all of their capital cities. The TCH begins on the east coast in Newfoundland, traverses that island, and crosses to the mainland by ferry. It crosses the Maritime Provinces of eastern Canada, with a branch route serving the province of Prince Edward Island via a ferry and a bridge. After crossing the remainder of the country's mainland, the highway reaches Vancouver, British Columbia, on the Pacific coast, where a ferry continues it to Vancouver Island and the provincial capital of Victoria. Numeric designation is the responsibility of the provinces, and there is no single route number across the country.
Longest national highway (circuit) Australia's Highway 1 runs almost the entire way around the country's coastline. With the exception of the federal capital, Canberra, which is far inland, Highway 1 links all of Australia's capital cities, although Brisbane and Darwin are not directly on the route but are bypassed a short distance away. There is also a ferry connection to the island state of Tasmania, where a stretch of Highway 1 links the major towns and cities, including Launceston and Hobart, the state's capital.
Largest national highway system The United States of America has the world's largest national highway system within its borders.
Busiest highway Highway 401 in Ontario, Canada, has volumes surpassing an average of 500,000 vehicles per day in some sections of Toronto.
Widest highway (maximum number of lanes) The Katy Freeway (part of Interstate 10) in Houston, Texas, has a total of 26 lanes in some sections. However, these are divided among general-purpose lanes, frontage roads, and HOV lanes, so not all of them carry through traffic.
Widest highway (maximum number of through lanes) Interstate 5 along a section between Interstate 805 and California State Route 56 in San Diego, California, which was completed in April 2007, is 22 lanes wide.
Highest international highway The Karakoram Highway, between Pakistan and China, reaches the highest altitude of any international highway.
Highest national highway National Highway 5 in India, connecting Amritsar in Punjab with Manali in Himachal Pradesh and Leh in Ladakh, reaches the highest altitude of any national highway. The highest motorable road, which passes through Umling La, lies on a branch highway connecting to National Highway 5.
Bus lane
South Korea
In South Korea, a bus lane (essentially an HOV-9 lane) was established in February 1995 between the northern terminus and Sintanjin for use on important holidays. On 1 July 2008, bus-lane enforcement between Seoul and Osan (extending to Sintanjin on weekends) became daily, between 6 a.m. and 10 p.m. On 1 October this was adjusted to 7 a.m. to 9 p.m. on weekdays and 9 a.m. to 9 p.m. on weekends.
Where the bus lane is marked by a single dotted line, vehicles other than buses may cross it temporarily to make a right turn or to merge. Outside the lane's operating hours, it is treated as an ordinary white dotted line.
Where the lane is marked by a double dotted line, the bus-only restriction applies even outside commuting hours, although vehicles other than buses may still cross temporarily to turn right or merge. Outside operating hours, it is treated as a white dotted line.
Where the lane is marked by a solid line, vehicles other than buses are prohibited from driving in it, with the restriction applied flexibly according to the time and day of the week. Outside operating hours, it is treated as a white solid line.
Where the lane is marked by a double solid line, the bus-only restriction applies even outside commuting hours. When the restriction is not in operation, it is treated as a white solid line.
Hong Kong
In Hong Kong, some highways have bus lanes to ease traffic congestion.
Philippines
Traffic congestion is a principal problem on major roads and highways in the Philippines, especially in Metro Manila and other major cities. The government has set up bus lanes in Metro Manila, such as on Epifanio de los Santos Avenue.
Highways by country
The following is a list of highways by country in alphabetical order.
Algeria East–West Highway
Autobahns of Austria
Autoput and Autocesta
Rodovia
Avtomagistrala
Highways in Canada
Expressway
Autocesta
Dálnice
Autostrada
Autoroute
Autobahns of Germany
Aftokinitodromos
Autópálya
National Highways and Expressways
Motorway
List of highways in Israel
Autostrade of Italy
Kōsokudōro
Lebuhraya
Autopista de Carretera Federal
Autoroute
Avtopat
Motorvei
Motorways and National Highways of Pakistan
Philippine highway network
Autoestrada
Russian federal highways
Autoput
Avtocesta
Expressways in South Korea
Autopista
Motorväg
Autobahns of Switzerland
Freeways in Taiwan
Thai highway network
State Highways (Ukraine)
Highways in the United Kingdom
Autofamba
| Technology | Road transport | null |
48520 | https://en.wikipedia.org/wiki/Sloop | Sloop | In modern usage, a sloop is a sailboat with a single mast generally having only one headsail in front of the mast and one mainsail abaft (behind) the mast. It is a type of fore-and-aft rig. The mainsail may be of any type, most often Bermuda rig, but also others, such as gaff or gunter.
In naval terminology, "sloop-of-war" refers to the purpose of the craft, rather than to the specific size or sail-plan, and thus a sloop should not be confused with a sloop-of-war. As with many rig names, it took some time before the term sloop came to refer specifically to this type of rig.
Regionally, the definition also takes into account the position of the mast. A forward mast placement and a fixed (as opposed to running) bowsprit, even with two headsails, may give categorisation as a sloop. An example is the Friendship Sloop.
Origins
The name originates from the Dutch sloep, which is related to the Old English slūpan, to glide. The original Dutch term applied to an open rowing boat. A sloop is usually regarded as a single-masted rig with a single headsail and a fore-and-aft mainsail. In this form, the sloop is the commonest of all sailing rigs, with the Bermuda sloop being the default rig for leisure craft, used on types that range from simple cruising dinghies to large racing yachts with high-tech sail fabrics and large powerful winches. If the vessel has two or more headsails, the term cutter is usually applied, though there are regional and historic variations on this. A boat with a forward mast placement and a fixed bowsprit, but more than one headsail, may be called a sloop. The Friendship sloop is an example of this. Particularly with historic craft, categorisation as a cutter may rely on having a running bowsprit.
Variations
Before the Bermuda rig became popular outside of Bermuda in the early 20th century, a (non-Bermudian) sloop might carry one or more square-rigged topsails, which would be hung from a topsail yard and supported from below by a crossjack.
A sloop's headsail may be masthead-rigged or fractional-rigged. On a masthead-rigged sloop, the forestay (on which the headsail is carried) attaches at the top of the mast. On a fractional-rigged sloop, the forestay attaches to the mast at a point below the top. A sloop may use a bowsprit, a spar that projects forward from the bow.
| Technology | Naval transport | null |
48530 | https://en.wikipedia.org/wiki/Blood%20vessel | Blood vessel | Blood vessels are the tubular structures of a circulatory system that transport blood throughout a vertebrate's body. Blood vessels transport blood cells, nutrients, and oxygen to most of the tissues of a body. They also take waste and carbon dioxide away from the tissues. Some tissues such as cartilage, epithelium, and the lens and cornea of the eye are not supplied with blood vessels and are termed avascular.
There are five types of blood vessels: the arteries, which carry the blood away from the heart; the arterioles; the capillaries, where the exchange of water and chemicals between the blood and the tissues occurs; the venules; and the veins, which carry blood from the capillaries back towards the heart.
The word vascular is derived from the Latin vas, meaning vessel, and is mostly used in relation to blood vessels.
Etymology
artery – late Middle English; from Latin arteria, from Greek artēria, probably from airein ("raise").
vein – Middle English; from Old French veine, from Latin vena.
capillary – mid-17th century; from Latin capillaris, from capillus ("hair"), influenced by Old French capillaire.
Structure
The arteries and veins have three layers. The middle layer is thicker in the arteries than it is in the veins:
The inner layer, the tunica intima, is the thinnest layer. It is a single layer of flat cells (simple squamous epithelium) held together by a polysaccharide intercellular matrix, surrounded by a thin layer of subendothelial connective tissue interlaced with a number of circularly arranged elastic bands called the internal elastic lamina. A thin membrane of elastic fibers in the tunica intima runs parallel to the vessel.
The middle layer, the tunica media, is the thickest layer in arteries. It consists of circularly arranged elastic fibers, connective tissue and polysaccharide substances; the second and third layers are separated by another thick elastic band called the external elastic lamina. The tunica media may (especially in arteries) be rich in vascular smooth muscle, which controls the caliber of the vessel. Veins do not have the external elastic lamina, only an internal one. The tunica media is thicker in arteries than in veins.
The outer layer is the tunica adventitia and the thickest layer in veins. It is entirely made of connective tissue. It also contains nerves that supply the vessel as well as nutrient capillaries (vasa vasorum) in the larger blood vessels.
Capillaries consist of a single layer of endothelial cells with a supporting subendothelium consisting of a basement membrane and connective tissue. When blood vessels connect to form a region of diffuse vascular supply, it is called an anastomosis. Anastomoses provide alternative routes for blood to flow through in case of blockages. Veins can have valves that prevent the backflow of the blood that was being pumped against gravity by the surrounding muscles. In humans, arteries do not have valves except for the two 'arteries' that originate from the heart's ventricles.
Early estimates by Danish physiologist August Krogh suggested an enormous total length for the capillaries in human muscle (his calculation assumed a body with very high muscle mass, like that of a bodybuilder). However, later studies, taking into account updated capillary density and the average muscle mass of adults, suggest a considerably more conservative figure. Despite these later studies, many textbooks and other media still cite Krogh's estimate as a fun fact rather than the more recent figures.
Types
There are various kinds of blood vessels:
Arteries
Elastic arteries
Distributing arteries
Arterioles
Capillaries (smallest type of blood vessels)
Venules
Veins
Large collecting vessels, such as the subclavian vein, the jugular vein, the renal vein and the iliac vein.
Venae cavae (the two largest veins, carry blood into the heart).
Sinusoids
Extremely small vessels located within bone marrow, the spleen and the liver.
They are roughly grouped as "arterial" and "venous", determined by whether the blood in it is flowing away from (arterial) or toward (venous) the heart. The term "arterial blood" is nevertheless used to indicate blood high in oxygen, although the pulmonary artery carries "venous blood" and blood flowing in the pulmonary vein is rich in oxygen. This is because they are carrying the blood to and from the lungs, respectively, to be oxygenated.
Function
Blood vessels function to transport blood to an animal's body tissues. In general, arteries and arterioles transport oxygenated blood from the lungs to the body and its organs, and veins and venules transport deoxygenated blood from the body to the lungs. Blood vessels also circulate blood throughout the circulatory system. Oxygen (bound to hemoglobin in red blood cells) is the most critical nutrient carried by the blood. In all arteries apart from the pulmonary artery, hemoglobin is highly saturated (95–100%) with oxygen. In all veins, apart from the pulmonary vein, the saturation of hemoglobin is about 75%. (The values are reversed in the pulmonary circulation.) In addition to carrying oxygen, blood also carries hormones, and nutrients to the cells of a body and removes waste products.
Blood vessels do not actively engage in the transport of blood (they have no appreciable peristalsis). Blood is propelled through arteries and arterioles by pressure generated by the heartbeat. Blood vessels also transport red blood cells. Hematocrit tests can be performed to calculate the proportion of red blood cells in the blood. A high proportion may reflect conditions such as dehydration or heart disease, while a low proportion may indicate anemia or long-term blood loss.
Permeability of the endothelium is pivotal in the release of nutrients to the tissue. It is also increased in inflammation in response to histamine, prostaglandins and interleukins, which leads to most of the symptoms of inflammation (swelling, redness, warmth and pain).
Constriction
Arteries—and veins to a degree—can regulate their inner diameter by contraction of the muscular layer. This changes the blood flow to downstream organs and is determined by the autonomic nervous system. Vasodilation and vasoconstriction are also used antagonistically as methods of thermoregulation.
Blood vessels vary greatly in size, from a diameter of about 25–30 millimetres for the aorta down to only about 5 micrometres (0.005 mm) for the capillaries. Vasoconstriction is the constriction of blood vessels (narrowing, becoming smaller in cross-sectional area) by contraction of the vascular smooth muscle in the vessel walls. It is regulated by vasoconstrictors (agents that cause vasoconstriction). These can include paracrine factors (e.g., prostaglandins), a number of hormones (e.g., vasopressin and angiotensin) and neurotransmitters (e.g., epinephrine) from the nervous system.
Vasodilation is a similar process mediated by antagonistically acting mediators. The most prominent vasodilator is nitric oxide (termed endothelium-derived relaxing factor for this reason).
Flow
The circulatory system uses the channel of blood vessels to deliver blood to all parts of the body. This is a result of the left and right sides of the heart working together to allow blood to flow continuously to the lungs and other parts of the body. Oxygen-poor blood enters the right side of the heart through two large veins. Oxygen-rich blood from the lungs enters the left side of the heart through the pulmonary veins and is pumped into the aorta, from which it reaches the rest of the body. In the lungs, the capillaries allow the blood to take up oxygen from the tiny air sacs; this is also the site where carbon dioxide exits the blood.
The blood pressure in blood vessels is traditionally expressed in millimetres of mercury (1 mmHg = 133 Pa). In the arterial system, this is usually around 120 mmHg systolic (high pressure wave due to contraction of the heart) and 80 mmHg diastolic (low pressure wave). In contrast, pressures in the venous system are constant and rarely exceed 10 mmHg.
Vascular resistance occurs when the vessels away from the heart oppose the flow of blood. Resistance is an accumulation of three different factors: blood viscosity, blood vessel length and vessel radius. Blood viscosity is the thickness of the blood and its resistance to flow as a result of the different components of the blood. Blood is 92% water by weight; the rest is composed of protein, nutrients, electrolytes, wastes, and dissolved gases. Depending on the health of an individual, blood viscosity can vary (e.g., anemia causes relatively lower concentrations of protein, while high blood pressure involves an increase in dissolved salts or lipids).
Vessel length is the total length of the vessel measured as the distance away from the heart. As the total length of the vessel increases, the total resistance as a result of friction will increase. Vessel radius also affects the total resistance as a result of contact with the vessel wall. As the radius of the wall gets smaller, the proportion of the blood making contact with the wall will increase. The greater amount of contact with the wall will increase the total resistance against the blood flow.
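For idealised steady laminar flow, the three factors described above combine in the standard Hagen–Poiseuille relation of physiology; the formula is added here as an illustrative formalisation and is not stated in the source text:

$$R = \frac{8 \eta L}{\pi r^{4}}, \qquad Q = \frac{\Delta P}{R}$$

where $R$ is the vascular resistance, $\eta$ the blood viscosity, $L$ the vessel length, $r$ the vessel radius, $Q$ the volumetric flow, and $\Delta P$ the pressure difference across the vessel (expressed in pascals; 1 mmHg = 133 Pa). The fourth-power dependence on radius is why radius dominates the other factors: halving the radius of a vessel increases its resistance roughly sixteen-fold.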
Disease
Blood vessels play a part in virtually every medical condition. Cancer, for example, cannot progress unless the tumor causes angiogenesis (formation of new blood vessels) to supply the malignant cells' metabolic demand. Atherosclerosis, the buildup of plaque in vessel walls, accounts for around 85% of all deaths from cardiovascular diseases. Coronary artery disease, which often follows atherosclerosis, can cause heart attacks or cardiac arrest, and resulted in 370,000 deaths worldwide in 2022. In 2019, around 17.9 million people died from cardiovascular diseases; of these deaths, around 85% were due to heart attack and stroke.
Blood vessel permeability is increased in inflammation. Damage, whether due to trauma or arising spontaneously, may lead to hemorrhage through mechanical damage to the vessel endothelium. In contrast, occlusion of the blood vessel by atherosclerotic plaque, an embolised blood clot or a foreign body leads to downstream ischemia (insufficient blood supply) and possibly infarction (necrosis due to lack of blood supply). Vessel occlusion tends to be a positive feedback process: an occluded vessel creates eddies in the normally laminar or plug flow of blood. These eddies create abnormal fluid velocity gradients which push blood elements, such as cholesterol or chylomicron bodies, toward the endothelium. These deposit onto the arterial walls, which are already partially occluded, and build upon the blockage.
The most common disease of the blood vessels is hypertension or high blood pressure. This is caused by an increase in the pressure of the blood flowing through the vessels. Hypertension can lead to heart failure and stroke. Aspirin helps prevent blood clots and can also help limit inflammation. Vasculitis is inflammation of the vessel wall due to autoimmune disease or infection.
| Biology and health sciences | Circulatory system | Biology |
48548 | https://en.wikipedia.org/wiki/Dopamine | Dopamine | Dopamine (DA, a contraction of 3,4-dihydroxyphenethylamine) is a neuromodulatory molecule that plays several important roles in cells. It is an organic chemical of the catecholamine and phenethylamine families. Dopamine constitutes about 80% of the catecholamine content in the brain. It is an amine synthesized by removing a carboxyl group from a molecule of its precursor chemical, L-DOPA, which is synthesized in the brain and kidneys. Dopamine is also synthesized in plants and most animals. In the brain, dopamine functions as a neurotransmitter—a chemical released by neurons (nerve cells) to send signals to other nerve cells. Neurotransmitters are synthesized in specific regions of the brain but affect many regions systemically. The brain includes several distinct dopamine pathways, one of which plays a major role in the motivational component of reward-motivated behavior. The anticipation of most types of rewards increases the level of dopamine in the brain, and many addictive drugs increase dopamine release or block its reuptake into neurons following release. Other brain dopamine pathways are involved in motor control and in controlling the release of various hormones. These pathways and cell groups form a dopamine system which is neuromodulatory.
In popular culture and media, dopamine is often portrayed as the main chemical of pleasure, but the current opinion in pharmacology is that dopamine instead confers motivational salience; in other words, dopamine signals the perceived motivational prominence (i.e., the desirability or aversiveness) of an outcome, which in turn propels the organism's behavior toward or away from achieving that outcome.
Outside the central nervous system, dopamine functions primarily as a local paracrine messenger. In blood vessels, it inhibits norepinephrine release and acts as a vasodilator; in the kidneys, it increases sodium excretion and urine output; in the pancreas, it reduces insulin production; in the digestive system, it reduces gastrointestinal motility and protects intestinal mucosa; and in the immune system, it reduces the activity of lymphocytes. With the exception of the blood vessels, dopamine in each of these peripheral systems is synthesized locally and exerts its effects near the cells that release it.
Several important diseases of the nervous system are associated with dysfunctions of the dopamine system, and some of the key medications used to treat them work by altering the effects of dopamine. Parkinson's disease, a degenerative condition causing tremor and motor impairment, is caused by a loss of dopamine-secreting neurons in an area of the midbrain called the substantia nigra. Its metabolic precursor L-DOPA can be manufactured; Levodopa, a pure form of L-DOPA, is the most widely used treatment for Parkinson's. There is evidence that schizophrenia involves altered levels of dopamine activity, and most antipsychotic drugs used to treat this are dopamine antagonists which reduce dopamine activity. Similar dopamine antagonist drugs are also some of the most effective anti-nausea agents. Restless legs syndrome and attention deficit hyperactivity disorder (ADHD) are associated with decreased dopamine activity. Dopaminergic stimulants can be addictive in high doses, but some are used at lower doses to treat ADHD. Dopamine itself is available as a manufactured medication for intravenous injection. It is useful in the treatment of severe heart failure or cardiogenic shock. In newborn babies it may be used for hypotension and septic shock.
Structure
A dopamine molecule consists of a catechol structure (a benzene ring with two hydroxyl side groups) with one amine group attached via an ethyl chain. As such, dopamine is the simplest possible catecholamine, a family that also includes the neurotransmitters norepinephrine and epinephrine. The presence of a benzene ring with this amine attachment makes it a substituted phenethylamine, a family that includes numerous psychoactive drugs.
Like most amines, dopamine is an organic base. As a base, it is generally protonated in acidic environments (in an acid-base reaction). The protonated form is highly water-soluble and relatively stable, but can become oxidized if exposed to oxygen or other oxidants. In basic environments, dopamine is not protonated. In this free base form, it is less water-soluble and also more highly reactive. Because of the increased stability and water-solubility of the protonated form, dopamine is supplied for chemical or pharmaceutical use as dopamine hydrochloride—that is, the hydrochloride salt that is created when dopamine is combined with hydrochloric acid. In dry form, dopamine hydrochloride is a fine powder which is white to yellow in color.
Biochemistry
Synthesis
Dopamine is synthesized in a restricted set of cell types, mainly neurons and cells in the medulla of the adrenal glands. The primary and minor metabolic pathways respectively are:
Primary: L-Phenylalanine → L-Tyrosine → L-DOPA → Dopamine
Minor: L-Phenylalanine → L-Tyrosine → p-Tyramine → Dopamine
Minor: L-Phenylalanine → m-Tyrosine → m-Tyramine → Dopamine
The direct precursor of dopamine, L-DOPA, can be synthesized indirectly from the essential amino acid phenylalanine or directly from the non-essential amino acid tyrosine. These amino acids are found in nearly every protein and so are readily available in food, with tyrosine being the most common. Although dopamine is also found in many types of food, it is incapable of crossing the blood–brain barrier that surrounds and protects the brain. It must therefore be synthesized inside the brain to perform its neuronal activity.
L-Phenylalanine is converted into L-tyrosine by the enzyme phenylalanine hydroxylase, with molecular oxygen (O2) and tetrahydrobiopterin as cofactors. L-Tyrosine is converted into L-DOPA by the enzyme tyrosine hydroxylase, with tetrahydrobiopterin, O2, and iron (Fe2+) as cofactors. L-DOPA is converted into dopamine by the enzyme aromatic L-amino acid decarboxylase (also known as DOPA decarboxylase), with pyridoxal phosphate as the cofactor.
Dopamine itself is used as precursor in the synthesis of the neurotransmitters norepinephrine and epinephrine. Dopamine is converted into norepinephrine by the enzyme dopamine β-hydroxylase, with O2 and L-ascorbic acid as cofactors. Norepinephrine is converted into epinephrine by the enzyme phenylethanolamine N-methyltransferase with S-adenosyl-L-methionine as the cofactor.
Some of the cofactors also require their own synthesis. Deficiency in any required amino acid or cofactor can impair the synthesis of dopamine, norepinephrine, and epinephrine.
Degradation
Dopamine is broken down into inactive metabolites by a set of enzymes—monoamine oxidase (MAO), catechol-O-methyl transferase (COMT), and aldehyde dehydrogenase (ALDH), acting in sequence. Both isoforms of monoamine oxidase, MAO-A and MAO-B, effectively metabolize dopamine. Different breakdown pathways exist but the main end-product is homovanillic acid (HVA), which has no known biological activity. From the bloodstream, homovanillic acid is filtered out by the kidneys and then excreted in the urine. The two primary metabolic routes that convert dopamine into HVA are:
Dopamine → DOPAL → DOPAC → HVA – catalyzed by MAO, ALDH, and COMT respectively
Dopamine → 3-Methoxytyramine → HVA – catalyzed by COMT and MAO+ALDH respectively
In clinical research on schizophrenia, measurements of homovanillic acid in plasma have been used to estimate levels of dopamine activity in the brain. A difficulty in this approach, however, is separating out the high level of plasma homovanillic acid contributed by the metabolism of norepinephrine.
Although dopamine is normally broken down by an oxidoreductase enzyme, it is also susceptible to oxidation by direct reaction with oxygen, yielding quinones plus various free radicals as products. The rate of oxidation can be increased by the presence of ferric iron or other factors. Quinones and free radicals produced by autoxidation of dopamine can poison cells, and there is evidence that this mechanism may contribute to the cell loss that occurs in Parkinson's disease and other conditions.
Functions
Cellular effects
Dopamine exerts its effects by binding to and activating cell surface receptors. In humans, dopamine has a high binding affinity at dopamine receptors and human trace amine-associated receptor 1 (hTAAR1). In mammals, five subtypes of dopamine receptors have been identified, labeled from D1 to D5. All of them function as metabotropic, G protein-coupled receptors, meaning that they exert their effects via a complex second messenger system. These receptors can be divided into two families, known as D1-like and D2-like. For receptors located on neurons in the nervous system, the ultimate effect of D1-like activation (D1 and D5) can be excitation (via opening of sodium channels) or inhibition (via opening of potassium channels); the ultimate effect of D2-like activation (D2, D3, and D4) is usually inhibition of the target neuron. Consequently, it is incorrect to describe dopamine itself as either excitatory or inhibitory: its effect on a target neuron depends on which types of receptors are present on the membrane of that neuron and on the internal responses of that neuron to the second messenger cAMP. D1 receptors are the most numerous dopamine receptors in the human nervous system; D2 receptors are next; D3, D4, and D5 receptors are present at significantly lower levels.
Storage, release, and reuptake
Inside the brain, dopamine functions as a neurotransmitter and neuromodulator, and is controlled by a set of mechanisms common to all monoamine neurotransmitters. After synthesis, dopamine is transported from the cytosol into secretory vesicles, including synaptic vesicles, small and large dense core vesicles by a solute carrier—a vesicular monoamine transporter, VMAT2. Dopamine is stored in these vesicles until it is ejected into the synaptic cleft. In most cases, the release of dopamine occurs through a process called exocytosis which is caused by action potentials, but it can also be caused by the activity of an intracellular trace amine-associated receptor, TAAR1. TAAR1 is a high-affinity receptor for dopamine, trace amines, and certain substituted amphetamines that is located along membranes in the intracellular milieu of the presynaptic cell; activation of the receptor can regulate dopamine signaling by inducing dopamine reuptake inhibition and efflux as well as by inhibiting neuronal firing through a diverse set of mechanisms.
Once in the synapse, dopamine binds to and activates dopamine receptors. These can be postsynaptic dopamine receptors, which are located on dendrites (the postsynaptic neuron), or presynaptic autoreceptors (e.g., the D2sh and presynaptic D3 receptors), which are located on the membrane of an axon terminal (the presynaptic neuron). After the postsynaptic neuron elicits an action potential, dopamine molecules quickly become unbound from their receptors. They are then absorbed back into the presynaptic cell, via reuptake mediated either by the dopamine transporter or by the plasma membrane monoamine transporter. Once back in the cytosol, dopamine can either be broken down by a monoamine oxidase or repackaged into vesicles by VMAT2, making it available for future release.
In the brain the level of extracellular dopamine is modulated by two mechanisms: phasic and tonic transmission. Phasic dopamine release, like most neurotransmitter release in the nervous system, is driven directly by action potentials in the dopamine-containing cells. Tonic dopamine transmission occurs when small amounts of dopamine are released without being preceded by presynaptic action potentials. Tonic transmission is regulated by a variety of factors, including the activity of other neurons and neurotransmitter reuptake.
Central nervous system
Inside the brain, dopamine plays important roles in executive functions, motor control, motivation, arousal, reinforcement, and reward, as well as lower-level functions including lactation, sexual gratification, and nausea. The dopaminergic cell groups and pathways make up the dopamine system which is neuromodulatory.
Dopaminergic neurons (dopamine-producing nerve cells) are comparatively few in number—a total of around 400,000 in the human brain—and their cell bodies are confined in groups to a few relatively small brain areas. However their axons project to many other brain areas, and they exert powerful effects on their targets. These dopaminergic cell groups were first mapped in 1964 by Annica Dahlström and Kjell Fuxe, who assigned them labels starting with the letter "A" (for "aminergic"). In their scheme, areas A1 through A7 contain the neurotransmitter norepinephrine, whereas A8 through A14 contain dopamine. The dopaminergic areas they identified are the substantia nigra (groups 8 and 9); the ventral tegmental area (group 10); the posterior hypothalamus (group 11); the arcuate nucleus (group 12); the zona incerta (group 13) and the periventricular nucleus (group 14).
The substantia nigra is a small midbrain area that forms a component of the basal ganglia. This has two parts—an input area called the pars compacta and an output area called the pars reticulata. The dopaminergic neurons are found mainly in the pars compacta (cell group A8) and nearby (group A9). In humans, the projection of dopaminergic neurons from the substantia nigra pars compacta to the dorsal striatum, termed the nigrostriatal pathway, plays a significant role in the control of motor function and in learning new motor skills. These neurons are especially vulnerable to damage, and when a large number of them die, the result is a parkinsonian syndrome.
The ventral tegmental area (VTA) is another midbrain area. The most prominent group of VTA dopaminergic neurons projects to the prefrontal cortex via the mesocortical pathway and another smaller group projects to the nucleus accumbens via the mesolimbic pathway. Together, these two pathways are collectively termed the mesocorticolimbic projection. The VTA also sends dopaminergic projections to the amygdala, cingulate gyrus, hippocampus, and olfactory bulb. Mesocorticolimbic neurons play a central role in reward and other aspects of motivation. Accumulating literature shows that dopamine also plays a crucial role in aversive learning through its effects on a number of brain regions.
The posterior hypothalamus has dopamine neurons that project to the spinal cord, but their function is not well established. There is some evidence that pathology in this area plays a role in restless legs syndrome, a condition in which people have difficulty sleeping due to an overwhelming compulsion to constantly move parts of the body, especially the legs.
The arcuate nucleus and the periventricular nucleus of the hypothalamus have dopamine neurons that form an important projection—the tuberoinfundibular pathway which goes to the pituitary gland, where it influences the secretion of the hormone prolactin. Dopamine is the primary neuroendocrine inhibitor of the secretion of prolactin from the anterior pituitary gland. Dopamine produced by neurons in the arcuate nucleus is secreted into the hypophyseal portal system of the median eminence, which supplies the pituitary gland. The prolactin cells that produce prolactin, in the absence of dopamine, secrete prolactin continuously; dopamine inhibits this secretion.
The zona incerta, grouped between the arcuate and periventricular nuclei, projects to several areas of the hypothalamus, and participates in the control of gonadotropin-releasing hormone, which is necessary to activate the development of the male and female reproductive systems, following puberty.
An additional group of dopamine-secreting neurons is found in the retina of the eye. These neurons are amacrine cells, meaning that they have no axons. They release dopamine into the extracellular medium, and are specifically active during daylight hours, becoming silent at night. This retinal dopamine acts to enhance the activity of cone cells in the retina while suppressing rod cells—the result is to increase sensitivity to color and contrast during bright light conditions, at the cost of reduced sensitivity when the light is dim.
Basal ganglia
The largest and most important sources of dopamine in the vertebrate brain are the substantia nigra and ventral tegmental area. Both structures are components of the midbrain, closely related to each other and functionally similar in many respects. The largest component of the basal ganglia is the striatum. The substantia nigra sends a dopaminergic projection to the dorsal striatum, while the ventral tegmental area sends a similar type of dopaminergic projection to the ventral striatum.
Progress in understanding the functions of the basal ganglia has been slow. The most popular hypotheses, broadly stated, propose that the basal ganglia play a central role in action selection. The action selection theory in its simplest form proposes that when a person or animal is in a situation where several behaviors are possible, activity in the basal ganglia determines which of them is executed, by releasing that response from inhibition while continuing to inhibit other motor systems that if activated would generate competing behaviors. Thus the basal ganglia, in this concept, are responsible for initiating behaviors, but not for determining the details of how they are carried out. In other words, they essentially form a decision-making system.
The basal ganglia can be divided into several sectors, and each is involved in controlling particular types of actions. The ventral sector of the basal ganglia (containing the ventral striatum and ventral tegmental area) operates at the highest level of the hierarchy, selecting actions at the whole-organism level. The dorsal sectors (containing the dorsal striatum and substantia nigra) operate at lower levels, selecting the specific muscles and movements that are used to implement a given behavior pattern.
Dopamine contributes to the action selection process in at least two important ways. First, it sets the "threshold" for initiating actions. The higher the level of dopamine activity, the lower the impetus required to evoke a given behavior. As a consequence, high levels of dopamine lead to high levels of motor activity and impulsive behavior; low levels of dopamine lead to torpor and slowed reactions. Parkinson's disease, in which dopamine levels in the substantia nigra circuit are greatly reduced, is characterized by stiffness and difficulty initiating movement—however, when people with the disease are confronted with strong stimuli such as a serious threat, their reactions can be as vigorous as those of a healthy person. In the opposite direction, drugs that increase dopamine release, such as cocaine or amphetamine, can produce heightened levels of activity, including, at the extreme, psychomotor agitation and stereotyped movements.
The second important effect of dopamine is as a "teaching" signal. When an action is followed by an increase in dopamine activity, the basal ganglia circuit is altered in a way that makes the same response easier to evoke when similar situations arise in the future. This is a form of operant conditioning, in which dopamine plays the role of a reward signal.
Reward
In the language used to discuss the reward system, reward is the attractive and motivational property of a stimulus that induces appetitive behavior (also known as approach behavior) and consummatory behavior. A rewarding stimulus is one that can induce the organism to approach it and choose to consume it. Pleasure, learning (e.g., classical and operant conditioning), and approach behavior are the three main functions of reward. As an aspect of reward, pleasure provides a definition of reward; however, while all pleasurable stimuli are rewarding, not all rewarding stimuli are pleasurable (e.g., extrinsic rewards like money). The motivational or desirable aspect of rewarding stimuli is reflected by the approach behavior that they induce, whereas the pleasure from intrinsic rewards results from consuming them after acquiring them. A neuropsychological model which distinguishes these two components of an intrinsically rewarding stimulus is the incentive salience model, where "wanting" or desire (less commonly, "seeking") corresponds to appetitive or approach behavior while "liking" or pleasure corresponds to consummatory behavior. In human drug addicts, "wanting" becomes dissociated from "liking" as the desire to use an addictive drug increases, while the pleasure obtained from consuming it decreases due to drug tolerance.
Within the brain, dopamine functions partly as a global reward signal. An initial dopamine response to a rewarding stimulus encodes information about the salience, value, and context of a reward. In the context of reward-related learning, dopamine also functions as a reward prediction error signal, that is, the degree to which the value of a reward is unexpected. According to this hypothesis proposed by Montague, Dayan, and Sejnowski, rewards that are expected do not produce a second phasic dopamine response in certain dopaminergic cells, but rewards that are unexpected, or greater than expected, produce a short-lasting increase in synaptic dopamine, whereas the omission of an expected reward actually causes dopamine release to drop below its background level. The "prediction error" hypothesis has drawn particular interest from computational neuroscientists, because an influential computational-learning method known as temporal difference learning makes heavy use of a signal that encodes prediction error. This confluence of theory and data has led to a fertile interaction between neuroscientists and computer scientists interested in machine learning.
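Since the text notes that temporal difference learning is built around exactly this kind of prediction-error signal, a minimal sketch may help. The state names, learning rate, and discount factor below are hypothetical illustrations, not part of the source:

```python
def td_update(V, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) update of the value table V from a single observed transition.

    alpha: learning rate (hypothetical value).
    gamma: discount factor for future reward (hypothetical value).
    delta plays the role of the reward prediction error that phasic
    dopamine is hypothesized to encode.
    """
    delta = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * delta
    return delta

# "outcome" is terminal here, so its value stays 0; the cue's value is learned.
V = {"cue": 0.0, "outcome": 0.0}
delta = 0.0
for episode in range(200):
    delta = td_update(V, "cue", "outcome", reward=1.0)
print(f"learned value of cue: {V['cue']:.3f}, final prediction error: {delta:.4f}")
```

After training, the reward is fully predicted by the cue and delta approaches zero, mirroring the muted phasic dopamine response to an expected reward; an unexpected reward gives a positive delta, and omitting an expected reward gives a negative one.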
Evidence from microelectrode recordings from the brains of animals shows that dopamine neurons in the ventral tegmental area (VTA) and substantia nigra are strongly activated by a wide variety of rewarding events. These reward-responsive dopamine neurons in the VTA and substantia nigra are crucial for reward-related cognition and serve as the central component of the reward system. The function of dopamine varies in each axonal projection from the VTA and substantia nigra; for example, the VTA–nucleus accumbens shell projection assigns incentive salience ("want") to rewarding stimuli and its associated cues, the VTA–prefrontal cortex projection updates the value of different goals in accordance with their incentive salience, the VTA–amygdala and VTA–hippocampus projections mediate the consolidation of reward-related memories, and both the VTA–nucleus accumbens core and substantia nigra–dorsal striatum pathways are involved in learning motor responses that facilitate the acquisition of rewarding stimuli. Some activity within the VTA dopaminergic projections appears to be associated with reward prediction as well.
Pleasure
While dopamine has a central role in causing "wanting," associated with the appetitive or approach behavioral responses to rewarding stimuli, detailed studies have shown that dopamine cannot simply be equated with hedonic "liking" or pleasure, as reflected in the consummatory behavioral response. Dopamine neurotransmission is involved in some but not all aspects of pleasure-related cognition, since pleasure centers have been identified both within the dopamine system (i.e., nucleus accumbens shell) and outside the dopamine system (i.e., ventral pallidum and parabrachial nucleus). For example, direct electrical stimulation of dopamine pathways, using electrodes implanted in the brain, is experienced as pleasurable, and many types of animals are willing to work to obtain it. Antipsychotic drugs reduce dopamine levels and tend to cause anhedonia, a diminished ability to experience pleasure. Many types of pleasurable experiences—such as sexual intercourse, eating, and playing video games—increase dopamine release. All addictive drugs directly or indirectly affect dopamine neurotransmission in the nucleus accumbens; these drugs increase drug "wanting", leading to compulsive drug use, when repeatedly taken in high doses, presumably through the sensitization of incentive-salience. Drugs that increase synaptic dopamine concentrations include psychostimulants such as methamphetamine and cocaine. These produce increases in "wanting" behaviors, but do not greatly alter expressions of pleasure or change levels of satiation. However, opiate drugs such as heroin and morphine produce increases in expressions of "liking" and "wanting" behaviors. Moreover, animals in which the ventral tegmental dopamine system has been rendered inactive do not seek food, and will starve to death if left to themselves, but if food is placed in their mouths they will consume it and show expressions indicative of pleasure.
A clinical study from January 2019 that assessed the effect of a dopamine precursor (levodopa), dopamine antagonist (risperidone), and a placebo on reward responses to music – including the degree of pleasure experienced during musical chills, as measured by changes in electrodermal activity as well as subjective ratings – found that the manipulation of dopamine neurotransmission bidirectionally regulates pleasure cognition (specifically, the hedonic impact of music) in human subjects. This research demonstrated that increased dopamine neurotransmission acts as a sine qua non condition for pleasurable hedonic reactions to music in humans.
A study published in Nature in 1998 found evidence that playing video games releases dopamine in the human striatum. This dopamine is associated with learning, behavior reinforcement, attention, and sensorimotor integration. Researchers used positron emission tomography scans and 11C-labelled raclopride to track dopamine levels in the brain during goal-directed motor tasks and found that dopamine release was positively correlated with task performance and was greatest in the ventral striatum. This was the first study to demonstrate the behavioral conditions under which dopamine is released in humans. It highlights the ability of positron emission tomography to detect neurotransmitter fluxes during changes in behavior. According to research, potentially problematic video game use is related to personality traits such as low self-esteem and low self-efficacy, anxiety, aggression, and clinical symptoms of depression and anxiety disorders. Additionally, the reasons individuals play video games vary and may include coping, socialization, and personal satisfaction. The DSM-5 defines Internet Gaming Disorder as a mental disorder closely related to Gambling Disorder. This has been supported by some researchers but has also caused controversy.
Outside the central nervous system
Dopamine does not cross the blood–brain barrier, so its synthesis and functions in peripheral areas are to a large degree independent of its synthesis and functions in the brain. A substantial amount of dopamine circulates in the bloodstream, but its functions there are not entirely clear. Dopamine is found in blood plasma at levels comparable to those of epinephrine, but in humans, over 95% of the dopamine in the plasma is in the form of dopamine sulfate, a conjugate produced by the enzyme sulfotransferase 1A3/1A4 acting on free dopamine. The bulk of this dopamine sulfate is produced in the mesenteric organs. The production of dopamine sulfate is thought to be a mechanism for detoxifying dopamine that is ingested as food or produced by the digestive process—levels in the plasma typically rise more than fifty-fold after a meal. Dopamine sulfate has no known biological functions and is excreted in urine.
The relatively small quantity of unconjugated dopamine in the bloodstream may be produced by the sympathetic nervous system, the digestive system, or possibly other organs. It may act on dopamine receptors in peripheral tissues, or be metabolized, or be converted to norepinephrine by the enzyme dopamine beta hydroxylase, which is released into the bloodstream by the adrenal medulla. Some dopamine receptors are located in the walls of arteries, where they act as a vasodilator and an inhibitor of norepinephrine release from postganglionic sympathetic nerves terminals (dopamine can inhibit norepinephrine release by acting on presynaptic dopamine receptors, and also on presynaptic α-1 receptors, like norepinephrine itself). These responses might be activated by dopamine released from the carotid body under conditions of low oxygen, but whether arterial dopamine receptors perform other biologically useful functions is not known.
Beyond its role in modulating blood flow, there are several peripheral systems in which dopamine circulates within a limited area and performs an exocrine or paracrine function. The peripheral systems in which dopamine plays an important role include the immune system, the kidneys and the pancreas.
Immune system
In the immune system dopamine acts upon receptors present on immune cells, especially lymphocytes. Dopamine can also affect immune cells in the spleen, bone marrow, and circulatory system. In addition, dopamine can be synthesized and released by immune cells themselves. The main effect of dopamine on lymphocytes is to reduce their activation level. The functional significance of this system is unclear, but it affords a possible route for interactions between the nervous system and immune system, and may be relevant to some autoimmune disorders.
Kidneys
The renal dopaminergic system is located in the cells of the nephron in the kidney, where all subtypes of dopamine receptors are present. Dopamine is also synthesized there, by tubule cells, and discharged into the tubular fluid. Its actions include increasing the blood supply to the kidneys, increasing the glomerular filtration rate, and increasing the excretion of sodium in the urine. Hence, defects in renal dopamine function can lead to reduced sodium excretion and consequently result in the development of high blood pressure. There is strong evidence that faults in the production of dopamine or in the receptors can result in a number of pathologies including oxidative stress, edema, and either genetic or essential hypertension. Oxidative stress can itself cause hypertension. Defects in the system can also be caused by genetic factors or high blood pressure.
Pancreas
In the pancreas the role of dopamine is somewhat complex. The pancreas consists of two parts, an exocrine and an endocrine component. The exocrine part synthesizes and secretes digestive enzymes and other substances, including dopamine, into the small intestine. The function of this secreted dopamine after it enters the small intestine is not clearly established—the possibilities include protecting the intestinal mucosa from damage and reducing gastrointestinal motility (the rate at which content moves through the digestive system).
The pancreatic islets make up the endocrine part of the pancreas, and synthesize and secrete hormones including insulin into the bloodstream. There is evidence that the beta cells in the islets that synthesize insulin contain dopamine receptors, and that dopamine acts to reduce the amount of insulin they release. The source of their dopamine input is not clearly established—it may come from dopamine that circulates in the bloodstream and derives from the sympathetic nervous system, or it may be synthesized locally by other types of pancreatic cells.
Medical uses
Dopamine as a manufactured medication is sold under the trade names Intropin, Dopastat, and Revimine, among others. It is on the World Health Organization's List of Essential Medicines. It is most commonly used as a stimulant drug in the treatment of severe low blood pressure, slow heart rate, and cardiac arrest. It is especially important in treating these in newborn infants. It is given intravenously. Since the half-life of dopamine in plasma is very short—approximately one minute in adults, two minutes in newborn infants and up to five minutes in preterm infants—it is usually given in a continuous intravenous drip rather than a single injection.
Its effects, depending on dosage, include an increase in sodium excretion by the kidneys, an increase in urine output, an increase in heart rate, and an increase in blood pressure. At low doses it acts through the sympathetic nervous system to increase heart muscle contraction force and heart rate, thereby increasing cardiac output and blood pressure. Higher doses also cause vasoconstriction that further increases blood pressure. Older literature also describes very low doses thought to improve kidney function without other consequences, but recent reviews have concluded that doses at such low levels are not effective and may sometimes be harmful. While some effects result from stimulation of dopamine receptors, the prominent cardiovascular effects result from dopamine acting at α1, β1, and β2 adrenergic receptors.
Side effects of dopamine include negative effects on kidney function and irregular heartbeats. The LD50, or lethal dose which is expected to prove fatal in 50% of the population, has been found to be: 59 mg/kg (mouse; administered intravenously); 95 mg/kg (mouse; administered intraperitoneally); 163 mg/kg (rat; administered intraperitoneally); 79 mg/kg (dog; administered intravenously).
Disease, disorders, and pharmacology
The dopamine system plays a central role in several significant medical conditions, including Parkinson's disease, attention deficit hyperactivity disorder, Tourette syndrome, schizophrenia, bipolar disorder, and addiction. Aside from dopamine itself, there are many other important drugs that act on dopamine systems in various parts of the brain or body. Some are used for medical or recreational purposes, but neurochemists have also developed a variety of research drugs, some of which bind with high affinity to specific types of dopamine receptors and either agonize or antagonize their effects, and many that affect other aspects of dopamine physiology, including dopamine transporter inhibitors, VMAT inhibitors, and enzyme inhibitors.
Aging brain
A number of studies have reported an age-related decline in dopamine synthesis and dopamine receptor density (i.e., the number of receptors) in the brain. This decline has been shown to occur in the striatum and extrastriatal regions. Decreases in the D1, D2, and D3 receptors are well documented. The reduction of dopamine with aging is thought to be responsible for many neurological symptoms that increase in frequency with age, such as decreased arm swing and increased rigidity. Changes in dopamine levels may also cause age-related changes in cognitive flexibility.
Multiple sclerosis
Studies have reported that dopamine imbalance influences fatigue in multiple sclerosis. In patients with multiple sclerosis, dopamine inhibits production of IL-17 and IFN-γ by peripheral blood mononuclear cells.
Parkinson's disease
Parkinson's disease is an age-related disorder characterized by movement disorders such as stiffness of the body, slowing of movement, and trembling of limbs when they are not in use. In advanced stages it progresses to dementia and eventually death. The main symptoms are caused by the loss of dopamine-secreting cells in the substantia nigra. These dopamine cells are especially vulnerable to damage, and a variety of insults, including encephalitis (as depicted in the book and movie Awakenings), repeated sports-related concussions, and some forms of chemical poisoning such as MPTP, can lead to substantial cell loss, producing a parkinsonian syndrome that is similar in its main features to Parkinson's disease. Most cases of Parkinson's disease, however, are idiopathic, meaning that the cause of cell death cannot be identified.
The most widely used treatment for parkinsonism is administration of L-DOPA, the metabolic precursor for dopamine. L-DOPA is converted to dopamine in the brain and various parts of the body by the enzyme DOPA decarboxylase. L-DOPA is used rather than dopamine itself because, unlike dopamine, it is capable of crossing the blood–brain barrier. It is often co-administered with an enzyme inhibitor of peripheral decarboxylation such as carbidopa or benserazide, to reduce the amount converted to dopamine in the periphery and thereby increase the amount of L-DOPA that enters the brain. When L-DOPA is administered regularly over a long time period, a variety of unpleasant side effects such as dyskinesia often begin to appear; even so, it is considered the best available long-term treatment option for most cases of Parkinson's disease.
L-DOPA treatment cannot restore the dopamine cells that have been lost, but it causes the remaining cells to produce more dopamine, thereby compensating for the loss to at least some degree. In advanced stages the treatment begins to fail because the cell loss is so severe that the remaining ones cannot produce enough dopamine regardless of L-DOPA levels. Other drugs that enhance dopamine function, such as bromocriptine and pergolide, are also sometimes used to treat Parkinsonism, but in most cases L-DOPA appears to give the best trade-off between positive effects and negative side-effects.
Dopaminergic medications that are used to treat Parkinson's disease are sometimes associated with the development of a dopamine dysregulation syndrome, which involves the overuse of dopaminergic medication and medication-induced compulsive engagement in natural rewards like gambling and sexual activity. The latter behaviors are similar to those observed in individuals with a behavioral addiction.
Drug addiction and psychostimulants
Cocaine, substituted amphetamines (including methamphetamine), Adderall, methylphenidate (marketed as Ritalin or Concerta), and other psychostimulants exert their effects primarily or partly by increasing dopamine levels in the brain by a variety of mechanisms. Cocaine and methylphenidate are dopamine transporter blockers or reuptake inhibitors; they non-competitively inhibit dopamine reuptake, resulting in increased dopamine concentrations in the synaptic cleft. Like cocaine, substituted amphetamines and amphetamine also increase the concentration of dopamine in the synaptic cleft, but by different mechanisms.
The effects of psychostimulants include increases in heart rate, body temperature, and sweating; improvements in alertness, attention, and endurance; increases in pleasure produced by rewarding events; but at higher doses agitation, anxiety, or even loss of contact with reality. Drugs in this group can have a high addiction potential, due to their activating effects on the dopamine-mediated reward system in the brain. However, some can also be useful, at lower doses, for treating attention deficit hyperactivity disorder (ADHD) and narcolepsy. An important differentiating factor is the onset and duration of action. Cocaine can take effect in seconds if it is injected or inhaled in free base form; the effects last from 5 to 90 minutes. This rapid and brief action makes its effects easily perceived and consequently gives it high addiction potential. Methylphenidate taken in pill form, in contrast, can take two hours to reach peak levels in the bloodstream, and depending on formulation the effects can last for up to 12 hours. These longer-acting formulations have the benefit of reducing the potential for abuse and of improving treatment adherence through more convenient dosage regimens.
A variety of addictive drugs produce an increase in reward-related dopamine activity. Stimulants such as nicotine, cocaine and methamphetamine promote increased levels of dopamine which appear to be the primary factor in causing addiction. For other addictive drugs such as the opioid heroin, the increased levels of dopamine in the reward system may play only a minor role in addiction. When people addicted to stimulants go through withdrawal, they do not experience the physical suffering associated with alcohol withdrawal or withdrawal from opiates; instead they experience craving, an intense desire for the drug characterized by irritability, restlessness, and other arousal symptoms, brought about by psychological dependence.
The dopamine system plays a crucial role in several aspects of addiction. At the earliest stage, genetic differences that alter the expression of dopamine receptors in the brain can predict whether a person will find stimulants appealing or aversive. Consumption of stimulants produces increases in brain dopamine levels that last from minutes to hours. Finally, the chronic elevation in dopamine that comes with repetitive high-dose stimulant consumption triggers a wide-ranging set of structural changes in the brain that are responsible for the behavioral abnormalities which characterize an addiction. Treatment of stimulant addiction is very difficult, because even if consumption ceases, the craving that comes with psychological withdrawal does not. Even when the craving seems to be extinct, it may re-emerge when faced with stimuli that are associated with the drug, such as friends, locations and situations. Association networks in the brain are greatly interlinked.
Psychosis and antipsychotic drugs
Psychiatrists in the early 1950s discovered that a class of drugs known as typical antipsychotics (also known as major tranquilizers), were often effective at reducing the psychotic symptoms of schizophrenia. The introduction of the first widely used antipsychotic, chlorpromazine (Thorazine), in the 1950s, led to the release of many patients with schizophrenia from institutions in the years that followed. By the 1970s researchers understood that these typical antipsychotics worked as antagonists on the D2 receptors. This realization led to the so-called dopamine hypothesis of schizophrenia, which postulates that schizophrenia is largely caused by hyperactivity of brain dopamine systems. The dopamine hypothesis drew additional support from the observation that psychotic symptoms were often intensified by dopamine-enhancing stimulants such as methamphetamine, and that these drugs could also produce psychosis in healthy people if taken in large enough doses. In the following decades other atypical antipsychotics that had fewer serious side effects were developed. Many of these newer drugs do not act directly on dopamine receptors, but instead produce alterations in dopamine activity indirectly. These drugs were also used to treat other psychoses. Antipsychotic drugs have a broadly suppressive effect on most types of active behavior, and particularly reduce the delusional and agitated behavior characteristic of overt psychosis.
Later observations, however, have caused the dopamine hypothesis to lose popularity, at least in its simple original form. For one thing, patients with schizophrenia do not typically show measurably increased levels of brain dopamine activity. Even so, many psychiatrists and neuroscientists continue to believe that schizophrenia involves some sort of dopamine system dysfunction. As the "dopamine hypothesis" has evolved over time, however, the sorts of dysfunctions it postulates have tended to become increasingly subtle and complex.
Psychopharmacologist Stephen M. Stahl suggested in a 2018 review that in many cases of psychosis, including schizophrenia, three interconnected networks based on dopamine, serotonin, and glutamate – each on its own or in various combinations – contributed to an overexcitation of dopamine D2 receptors in the ventral striatum.
Attention deficit hyperactivity disorder
Altered dopamine neurotransmission is implicated in attention deficit hyperactivity disorder (ADHD), a condition associated with impaired cognitive control, in turn leading to problems with regulating attention (attentional control), inhibiting behaviors (inhibitory control), and forgetting things or missing details (working memory), among other problems. There are genetic links between dopamine receptors, the dopamine transporter, and ADHD, in addition to links to other neurotransmitter receptors and transporters. The most important relationship between dopamine and ADHD involves the drugs that are used to treat ADHD. Some of the most effective therapeutic agents for ADHD are psychostimulants such as methylphenidate (Ritalin, Concerta) and amphetamine (Evekeo, Adderall, Dexedrine), drugs that increase both dopamine and norepinephrine levels in the brain. The clinical effects of these psychostimulants in treating ADHD are mediated through the indirect activation of dopamine and norepinephrine receptors, specifically dopamine receptor D1 and adrenoceptor α2, in the prefrontal cortex.
Pain
Dopamine plays a role in pain processing in multiple levels of the central nervous system including the spinal cord, periaqueductal gray, thalamus, basal ganglia, and cingulate cortex. Decreased levels of dopamine have been associated with painful symptoms that frequently occur in Parkinson's disease. Abnormalities in dopaminergic neurotransmission also occur in several painful clinical conditions, including burning mouth syndrome, fibromyalgia, and restless legs syndrome.
Nausea
Nausea and vomiting are largely determined by activity in the area postrema in the medulla of the brainstem, in a region known as the chemoreceptor trigger zone. This area contains a large population of type D2 dopamine receptors. Consequently, drugs that activate D2 receptors have a high potential to cause nausea. This group includes some medications that are administered for Parkinson's disease, as well as other dopamine agonists such as apomorphine. In some cases, D2-receptor antagonists such as metoclopramide are useful as anti-nausea drugs.
Fear and anxiety
Simultaneous positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have shown that the amount of dopamine release is dependent on the strength of the conditioned fear response and is linearly coupled to learning-induced activity in the amygdala. Dopamine is generally linked to reward learning, but it also plays a key role in fear learning and extinction by helping to form, store, and update fear memories through its interaction with other brain regions such as the amygdala, ventromedial prefrontal cortex, and striatum.
Comparative biology and evolution
Microorganisms
There are no reports of dopamine in archaea, but it has been detected in some types of bacteria and in the protozoan called Tetrahymena. Perhaps more importantly, there are types of bacteria that contain homologs of all the enzymes that animals use to synthesize dopamine. It has been proposed that animals derived their dopamine-synthesizing machinery from bacteria, via horizontal gene transfer that may have occurred relatively late in evolutionary time, perhaps as a result of the symbiotic incorporation of bacteria into eukaryotic cells that gave rise to mitochondria.
Animals
Dopamine is used as a neurotransmitter in most multicellular animals. In sponges there is only a single report of the presence of dopamine, with no indication of its function; however, dopamine has been reported in the nervous systems of many other radially symmetric species, including the cnidarian jellyfish, hydra and some corals. This dates the emergence of dopamine as a neurotransmitter back to the earliest appearance of the nervous system, over 500 million years ago in the Cambrian Period. Dopamine functions as a neurotransmitter in vertebrates, echinoderms, arthropods, molluscs, and several types of worm.
In every type of animal that has been examined, dopamine has been seen to modify motor behavior. In the model organism, nematode Caenorhabditis elegans, it reduces locomotion and increases food-exploratory movements; in flatworms it produces "screw-like" movements; in leeches it inhibits swimming and promotes crawling. Across a wide range of vertebrates, dopamine has an "activating" effect on behavior-switching and response selection, comparable to its effect in mammals.
Dopamine has also consistently been shown to play a role in reward learning across all animal groups. As in vertebrates, invertebrates such as roundworms, flatworms, molluscs, and common fruit flies can all be trained to repeat an action if it is consistently followed by an increase in dopamine levels. In fruit flies, distinct elements for reward learning suggest a modular structure to the insect reward processing system that broadly parallels the mammalian one. For example, dopamine regulates short- and long-term learning in monkeys; in fruit flies, different groups of dopamine neurons mediate reward signals for short- and long-term memories.
It had long been believed that arthropods were an exception to this, with dopamine seen as having an adverse effect. Reward was thought to be mediated instead by octopamine, a neurotransmitter closely related to norepinephrine. More recent studies, however, have shown that dopamine does play a part in reward learning in fruit flies. It has also been found that the rewarding effect of octopamine is due to its activating a set of dopaminergic neurons not previously accessed in the research.
Plants
Many plants, including a variety of food plants, synthesize dopamine to varying degrees. The highest concentrations have been observed in bananas—the fruit pulp of red and yellow bananas contains dopamine at levels of 40 to 50 parts per million by weight. Potatoes, avocados, broccoli, and Brussels sprouts may also contain dopamine at levels of 1 part per million or more; oranges, tomatoes, spinach, beans, and other plants contain measurable concentrations less than 1 part per million. The dopamine in plants is synthesized from the amino acid tyrosine, by biochemical mechanisms similar to those that animals use. It can be metabolized in a variety of ways, producing melanin and a variety of alkaloids as byproducts. The functions of plant catecholamines have not been clearly established, but there is evidence that they play a role in the response to stressors such as bacterial infection, act as growth-promoting factors in some situations, and modify the way that sugars are metabolized. The receptors that mediate these actions have not yet been identified, nor have the intracellular mechanisms that they activate.
Dopamine consumed in food cannot act on the brain, because it cannot cross the blood–brain barrier. However, there are also a variety of plants that contain L-DOPA, the metabolic precursor of dopamine. The highest concentrations are found in the leaves and bean pods of plants of the genus Mucuna, especially in Mucuna pruriens (velvet beans), which have been used as a source for L-DOPA as a drug. Another plant containing substantial amounts of L-DOPA is Vicia faba, the plant that produces fava beans (also known as "broad beans"). The level of L-DOPA in the beans, however, is much lower than in the pod shells and other parts of the plant. The seeds of Cassia and Bauhinia trees also contain substantial amounts of L-DOPA.
In a species of marine green algae Ulvaria obscura, a major component of some algal blooms, dopamine is present in very high concentrations, estimated at 4.4% of dry weight. There is evidence that this dopamine functions as an anti-herbivore defense, reducing consumption by snails and isopods.
As a precursor for melanin
Melanins are a family of dark-pigmented substances found in a wide range of organisms. Chemically they are closely related to dopamine, and there is a type of melanin, known as dopamine-melanin, that can be synthesized by oxidation of dopamine via the enzyme tyrosinase. The melanin that darkens human skin is not of this type: it is synthesized by a pathway that uses L-DOPA as a precursor but not dopamine. However, there is substantial evidence that the neuromelanin that gives a dark color to the brain's substantia nigra is at least in part dopamine-melanin.
Dopamine-derived melanin probably appears in at least some other biological systems as well. Some of the dopamine in plants is likely to be used as a precursor for dopamine-melanin. The complex patterns that appear on butterfly wings, as well as black-and-white stripes on the bodies of insect larvae, are also thought to be caused by spatially structured accumulations of dopamine-melanin.
History and development
Dopamine was first synthesized in 1910 by George Barger and James Ewens at Wellcome Laboratories in London, England and first identified in the human brain by Katharine Montagu in 1957. It was named dopamine because it is a monoamine whose precursor in the Barger-Ewens synthesis is 3,4-dihydroxyphenylalanine (levodopa or L-DOPA). Dopamine's function as a neurotransmitter was first recognized in 1958 by Arvid Carlsson and Nils-Åke Hillarp at the Laboratory for Chemical Pharmacology of the National Heart Institute of Sweden. Carlsson was awarded the 2000 Nobel Prize in Physiology or Medicine for showing that dopamine is not only a precursor of norepinephrine (noradrenaline) and epinephrine (adrenaline), but is also itself a neurotransmitter.
Polydopamine
Research motivated by adhesive polyphenolic proteins in mussels led to the discovery in 2007 that a wide variety of materials, if placed in a solution of dopamine at slightly basic pH, will become coated with a layer of polymerized dopamine, often referred to as polydopamine. This polymerized dopamine forms by a spontaneous oxidation reaction, and is formally a type of melanin. Furthermore, dopamine self-polymerization can be used to modulate the mechanical properties of peptide-based gels. Synthesis of polydopamine usually involves reaction of dopamine hydrochloride with Tris as a base in water. The structure of polydopamine is unknown.
Polydopamine coatings can form on objects ranging in size from nanoparticles to large surfaces. Polydopamine layers have chemical properties that have the potential to be extremely useful, and numerous studies have examined their possible applications. At the simplest level, they can be used for protection against damage by light, or to form capsules for drug delivery. At a more sophisticated level, their adhesive properties may make them useful as substrates for biosensors or other biologically active macromolecules.
| Biology and health sciences | Biochemistry and molecular biology | null |
48560 | https://en.wikipedia.org/wiki/Tree%20%28graph%20theory%29 | Tree (graph theory) | In graph theory, a tree is an undirected graph in which any two vertices are connected by exactly one path, or equivalently a connected acyclic undirected graph. A forest is an undirected graph in which any two vertices are connected by at most one path, or equivalently an acyclic undirected graph, or equivalently a disjoint union of trees.
A directed tree, oriented tree, polytree, or singly connected network is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest.
The various kinds of data structures referred to as trees in computer science have underlying graphs that are trees in graph theory, although such data structures are generally rooted trees. A rooted tree may be directed, called a directed rooted tree, either making all its edges point away from the root—in which case it is called an arborescence or out-tree—or making all its edges point towards the root—in which case it is called an anti-arborescence or in-tree. A rooted tree itself has been defined by some authors as a directed graph. A rooted forest is a disjoint union of rooted trees. A rooted forest may be directed, called a directed rooted forest, either making all its edges point away from the root in each rooted tree—in which case it is called a branching or out-forest—or making all its edges point towards the root in each rooted tree—in which case it is called an anti-branching or in-forest.
The term was coined in 1857 by the British mathematician Arthur Cayley.
Definitions
Tree
A tree is an undirected graph G that satisfies any of the following equivalent conditions:
G is connected and acyclic (contains no cycles).
G is acyclic, and a simple cycle is formed if any edge is added to G.
G is connected, but would become disconnected if any single edge is removed from G.
G is connected and the 3-vertex complete graph K3 is not a minor of G.
Any two vertices in G can be connected by a unique simple path.
If G has finitely many vertices, say n of them, then the above statements are also equivalent to any of the following conditions:
G is connected and has n − 1 edges.
G is connected, and every subgraph of G includes at least one vertex with zero or one incident edges. (That is, G is connected and 1-degenerate.)
G has no simple cycles and has n − 1 edges.
As elsewhere in graph theory, the order-zero graph (graph with no vertices) is generally not considered to be a tree: while it is vacuously connected as a graph (any two vertices can be connected by a path), it is not 0-connected (or even (−1)-connected) in algebraic topology, unlike non-empty trees, and violates the "one more vertex than edges" relation. It may, however, be considered as a forest consisting of zero trees.
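The finite characterization above ("connected with n − 1 edges") yields an immediate test. The following is a minimal sketch, not drawn from the article itself; it assumes the graph is given as an adjacency list mapping each vertex to its set of neighbours, and the function name is hypothetical:

    from collections import deque

    def is_tree(adj):
        """Check whether an undirected graph, given as {vertex: set_of_neighbours},
        is a tree: connected with exactly n - 1 edges."""
        n = len(adj)
        if n == 0:
            return False  # the order-zero graph is conventionally not a tree
        edges = sum(len(nbrs) for nbrs in adj.values()) // 2  # each edge stored twice
        if edges != n - 1:
            return False
        # Breadth-first search from an arbitrary vertex to test connectivity.
        start = next(iter(adj))
        seen = {start}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) == n

    # A path on three vertices is a tree; adding the edge {0, 2} creates a cycle.
    print(is_tree({0: {1}, 1: {0, 2}, 2: {1}}))        # True
    print(is_tree({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}))  # False

Any of the other equivalent conditions could be checked instead; this pair is simply the cheapest to verify.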
An internal vertex (or inner vertex) is a vertex of degree at least 2. Similarly, an external vertex (or outer vertex, terminal vertex or leaf) is a vertex of degree 1. A branch vertex in a tree is a vertex of degree at least 3.
An irreducible tree (or series-reduced tree) is a tree in which there is no vertex of degree 2 (enumerated at sequence A000014 in the OEIS).
Forest
A forest is an undirected acyclic graph, or equivalently a disjoint union of trees. Trivially, each connected component of a forest is a tree. As special cases, the order-zero graph (a forest consisting of zero trees), a single tree, and an edgeless graph are examples of forests.
Since V − E = 1 for every tree, we can easily count the number of trees that are within a forest by subtracting the total number of edges from the total number of vertices: V − E = number of trees in a forest.
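As a minimal illustration of this identity (the helper name is hypothetical), counting the trees of a forest needs no traversal at all:

    def trees_in_forest(num_vertices, edges):
        """Each component tree satisfies V - E = 1, so summing over all
        components gives: number of trees = total vertices - total edges."""
        return num_vertices - len(edges)

    # Two disjoint paths: 5 vertices and 3 edges form 2 trees.
    print(trees_in_forest(5, [(0, 1), (1, 2), (3, 4)]))  # 2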
Polytree
A polytree (or directed tree or oriented tree or singly connected network) is a directed acyclic graph (DAG) whose underlying undirected graph is a tree. In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is both connected and acyclic.
Some authors restrict the phrase "directed tree" to the case where the edges are all directed towards a particular vertex, or all directed away from a particular vertex (see arborescence).
Polyforest
A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest. In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is acyclic.
As with directed trees, some authors restrict the phrase "directed forest" to the case where the edges of each connected component are all directed towards a particular vertex, or all directed away from a particular vertex (see branching).
Rooted tree
A rooted tree is a tree in which one vertex has been designated the root. The edges of a rooted tree can be assigned a natural orientation, either away from or towards the root, in which case the structure becomes a directed rooted tree. When a directed rooted tree has an orientation away from the root, it is called an arborescence or out-tree; when it has an orientation towards the root, it is called an anti-arborescence or in-tree. The tree-order is the partial ordering on the vertices of a tree with u ≤ v if and only if the unique path from the root to v passes through u. A rooted tree T that is a subgraph of some graph G is a normal tree if the ends of every T-path in G are comparable in this tree-order. Rooted trees, often with an additional structure such as an ordering of the neighbors at each vertex, are a key data structure in computer science; see tree data structure.
In a context where trees typically have a root, a tree without any designated root is called a free tree.
A labeled tree is a tree in which each vertex is given a unique label. The vertices of a labeled tree on n vertices (for nonnegative integers n) are typically given the labels 1, 2, …, n. A recursive tree is a labeled rooted tree where the vertex labels respect the tree order (i.e., if u < v for two vertices u and v, then the label of u is smaller than the label of v).
In a rooted tree, the parent of a vertex v is the vertex connected to v on the path to the root; every vertex has a unique parent, except the root, which has no parent. A child of a vertex v is a vertex of which v is the parent. An ascendant of a vertex v is any vertex that is either the parent of v or is (recursively) an ascendant of a parent of v. A descendant of a vertex v is any vertex that is either a child of v or is (recursively) a descendant of a child of v. A sibling to a vertex v is any other vertex of the tree that shares a parent with v. A leaf is a vertex with no children. An internal vertex is a vertex that is not a leaf.
The height of a vertex in a rooted tree is the length of the longest downward path to a leaf from that vertex. The height of the tree is the height of the root. The depth of a vertex is the length of the path to its root (root path). The depth of a tree is the maximum depth of any vertex. Depth is commonly needed in the manipulation of the various self-balancing trees, AVL trees in particular. The root has depth zero, leaves have height zero, and a tree with only a single vertex (hence both a root and leaf) has depth and height zero. Conventionally, an empty tree (a tree with no vertices, if such are allowed) has depth and height −1.
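To make the height and depth conventions concrete, here is a small sketch (illustrative only; the dict-of-children representation and the function names are assumptions, not from the article):

    def height(tree, v):
        """Length of the longest downward path from v to a leaf."""
        children = tree.get(v, [])
        if not children:
            return 0  # leaves have height zero
        return 1 + max(height(tree, c) for c in children)

    def depth(tree, root, v):
        """Length of the path from the root to v (the root has depth zero);
        returns -1 if v does not occur in the subtree below root."""
        if root == v:
            return 0
        for c in tree.get(root, []):
            d = depth(tree, c, v)
            if d >= 0:
                return 1 + d
        return -1

    # Rooted tree: a -> b, c; b -> d.
    t = {"a": ["b", "c"], "b": ["d"]}
    print(height(t, "a"))      # 2
    print(depth(t, "a", "d"))  # 2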
A k-ary tree (for nonnegative integers k) is a rooted tree in which each vertex has at most k children. 2-ary trees are often called binary trees, while 3-ary trees are sometimes called ternary trees.
Ordered tree
An ordered tree (alternatively, plane tree or positional tree) is a rooted tree in which an ordering is specified for the children of each vertex. This is called a "plane tree" because an ordering of the children is equivalent to an embedding of the tree in the plane, with the root at the top and the children of each vertex lower than that vertex. Given an embedding of a rooted tree in the plane, if one fixes a direction of children, say left to right, then an embedding gives an ordering of the children. Conversely, given an ordered tree, and conventionally drawing the root at the top, then the child vertices in an ordered tree can be drawn left-to-right, yielding an essentially unique planar embedding.
Properties
Every tree is a bipartite graph. A graph is bipartite if and only if it contains no cycles of odd length. Since a tree contains no cycles at all, it is bipartite.
Every tree with only countably many vertices is a planar graph.
Every connected graph G admits a spanning tree, which is a tree that contains every vertex of G and whose edges are edges of G. More specific types of spanning trees, existing in every connected finite graph, include depth-first search trees and breadth-first search trees. Generalizing the existence of depth-first-search trees, every connected graph with only countably many vertices has a Trémaux tree. However, some uncountable-order graphs do not have such a tree.
Every finite tree with n vertices, with n ≥ 2, has at least two terminal vertices (leaves). This minimal number of leaves is characteristic of path graphs; the maximal number, n − 1, is attained only by star graphs. The number of leaves is at least the maximum vertex degree.
For any three vertices in a tree, the three paths between them have exactly one vertex in common. More generally, a vertex in a graph that belongs to three shortest paths among three vertices is called a median of these vertices. Because every three vertices in a tree have a unique median, every tree is a median graph.
Every tree has a center consisting of one vertex or two adjacent vertices. The center is the middle vertex or middle two vertices in every longest path. Similarly, every n-vertex tree has a centroid consisting of one vertex or two adjacent vertices. In the first case removal of the vertex splits the tree into subtrees of fewer than n/2 vertices. In the second case, removal of the edge between the two centroidal vertices splits the tree into two subtrees of exactly n/2 vertices.
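One standard way to find the center, consistent with the description above, is to peel away leaves layer by layer until at most two vertices remain. A sketch, assuming the adjacency-list convention used earlier (the function name is hypothetical):

    def tree_center(adj):
        """Repeatedly strip all current leaves; the last one or two
        vertices remaining form the center of the tree."""
        adj = {v: set(nbrs) for v, nbrs in adj.items()}  # local, mutable copy
        remaining = set(adj)
        leaves = [v for v in remaining if len(adj[v]) <= 1]
        while len(remaining) > 2:
            new_leaves = []
            for leaf in leaves:
                remaining.discard(leaf)
                for nbr in adj[leaf]:
                    adj[nbr].discard(leaf)
                    if len(adj[nbr]) == 1:
                        new_leaves.append(nbr)
                adj[leaf].clear()
            leaves = new_leaves
        return remaining

    # The path 0-1-2-3-4 has the single center vertex 2.
    print(tree_center({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))  # {2}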
The maximal cliques of a tree are precisely its edges, implying that the class of trees has few cliques.
Enumeration
Labeled trees
Cayley's formula states that there are n^(n−2) trees on n labeled vertices. A classic proof uses Prüfer sequences, which naturally show a stronger result: the number of trees with vertices 1, 2, …, n of degrees d_1, d_2, …, d_n respectively is the multinomial coefficient $\binom{n-2}{d_1 - 1,\ d_2 - 1,\ \ldots,\ d_n - 1}$.
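Cayley's formula is easy to confirm by brute force for small n. The sketch below (a naive enumeration for checking, not an efficient algorithm) counts the (n − 1)-edge acyclic subsets of the complete graph's edge set; acyclicity together with n − 1 edges already forces connectivity:

    from itertools import combinations

    def count_labeled_trees(n):
        """Count trees on n labeled vertices by brute force: every tree is
        an acyclic (n-1)-edge subset of the edges of K_n."""
        vertices = range(n)
        all_edges = list(combinations(vertices, 2))
        count = 0
        for subset in combinations(all_edges, n - 1):
            parent = list(vertices)  # union-find to detect cycles
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            acyclic = True
            for u, v in subset:
                ru, rv = find(u), find(v)
                if ru == rv:
                    acyclic = False
                    break
                parent[ru] = rv
            if acyclic:
                count += 1
        return count

    # Cayley's formula predicts n^(n-2) labeled trees.
    for n in range(2, 6):
        print(n, count_labeled_trees(n), n ** (n - 2))  # the two counts agree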
A more general problem is to count spanning trees in an undirected graph, which is addressed by the matrix tree theorem. (Cayley's formula is the special case of spanning trees in a complete graph.) The similar problem of counting all the subtrees regardless of size is #P-complete in the general case.
Unlabeled trees
Counting the number of unlabeled free trees is a harder problem. No closed formula for the number t(n) of trees with n vertices up to graph isomorphism is known. The first few values of t(n) are
1, 1, 1, 1, 2, 3, 6, 11, 23, 47, 106, 235, 551, 1301, 3159, … .
Otter (1948) proved the asymptotic estimate
$t(n) \sim C \alpha^n n^{-5/2} \quad \text{as } n \to \infty,$
with $C \approx 0.534949606$ and $\alpha \approx 2.95576528565$. Here, the symbol $\sim$ means that
$\lim_{n \to \infty} \frac{t(n)}{C \alpha^n n^{-5/2}} = 1.$
This is a consequence of his asymptotic estimate for the number $r(n)$ of unlabeled rooted trees with $n$ vertices:
$r(n) \sim D \alpha^n n^{-3/2} \quad \text{as } n \to \infty,$
with $D \approx 0.43992401257$ and the same $\alpha$ as above (cf. Knuth (1997), chap. 2.3.4.4 and Flajolet & Sedgewick (2009), chap. VII.5, p. 475).
The first few values of r(n) are
1, 1, 2, 4, 9, 20, 48, 115, 286, 719, 1842, 4766, 12486, 32973, … .
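Both sequences can be reproduced with the classical recurrence for rooted trees combined with Otter's identity T(x) = R(x) − (R(x)² − R(x²))/2. The following sketch is a standard textbook computation, not code taken from the cited sources:

    def rooted_trees(nmax):
        """r[n] = number of unlabeled rooted trees with n vertices,
        via the standard Euler-transform recurrence for this sequence."""
        r = [0] * (nmax + 1)
        r[1] = 1
        for n in range(1, nmax):
            total = 0
            for k in range(1, n + 1):
                s = sum(d * r[d] for d in range(1, k + 1) if k % d == 0)
                total += s * r[n - k + 1]
            r[n + 1] = total // n  # the sum is always divisible by n
        return r

    def free_trees(nmax):
        """t[n] = number of unlabeled free trees with n vertices, from r[n]
        via Otter's identity T(x) = R(x) - (R(x)^2 - R(x^2)) / 2."""
        r = rooted_trees(nmax)
        t = [0] * (nmax + 1)
        for n in range(1, nmax + 1):
            conv = sum(r[i] * r[n - i] for i in range(1, n))  # [x^n] R(x)^2
            sq = r[n // 2] if n % 2 == 0 else 0               # [x^n] R(x^2)
            t[n] = r[n] - (conv - sq) // 2
        return t

    print(free_trees(8)[1:])    # [1, 1, 1, 2, 3, 6, 11, 23]
    print(rooted_trees(8)[1:])  # [1, 1, 2, 4, 9, 20, 48, 115]

The printed values match the initial terms of the two sequences listed above.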
Types of trees
A path graph (or linear graph) consists of n vertices arranged in a line, so that vertices i and i + 1 are connected by an edge for i = 1, …, n − 1.
A starlike tree consists of a central vertex called root and several path graphs attached to it. More formally, a tree is starlike if it has exactly one vertex of degree greater than 2.
A star tree is a tree which consists of a single internal vertex (and n − 1 leaves). In other words, a star tree of order n is a tree of order n with as many leaves as possible.
A caterpillar tree is a tree in which all vertices are within distance 1 of a central path subgraph.
A lobster tree is a tree in which all vertices are within distance 2 of a central path subgraph.
A regular tree of degree d is the infinite tree with d edges at each vertex. These arise as the Cayley graphs of free groups, and in the theory of Tits buildings. In statistical mechanics they are known as Bethe lattices.
| Mathematics | Graph theory | null |
48563 | https://en.wikipedia.org/wiki/Air%20traffic%20control | Air traffic control | Air traffic control (ATC) is a service provided by ground-based air traffic controllers who direct aircraft on the ground and through a given section of controlled airspace, and can provide advisory services to aircraft in non-controlled airspace. The primary purpose of ATC is to prevent collisions, organize and expedite the flow of traffic in the air, and provide information and other support for pilots.
Personnel of air traffic control monitor aircraft location in their assigned airspace by radar, and communicate with the pilots by radio. To prevent collisions, ATC enforces traffic separation rules, which ensure each aircraft maintains a minimum amount of 'empty space' around it at all times. It is also common for ATC to provide services to all private, military, and commercial aircraft operating within its airspace; not just civilian aircraft. Depending on the type of flight and the class of airspace, ATC may issue instructions that pilots are required to obey, or advisories (known as flight information in some countries) that pilots may, at their discretion, disregard. The pilot in command of an aircraft always retains final authority for its safe operation, and may, in an emergency, deviate from ATC instructions to the extent required to maintain safe operation of the aircraft.
Language
Pursuant to requirements of the International Civil Aviation Organization (ICAO), ATC operations are conducted either in the English language, or the local language used by the station on the ground. In practice, the native language for a region is used; however, English must be used upon request.
History
In 1920, Croydon Airport near London, England, was the first airport in the world to introduce air traffic control. The 'aerodrome control tower' was a wooden hut with windows on all four sides. It was commissioned on 25 February 1920, and provided basic traffic, weather, and location information to pilots.
In the United States, air traffic control developed three divisions. The first of several air mail radio stations (AMRS) was created in 1922, after World War I, when the U.S. Post Office began using techniques developed by the U.S. Army to direct and track the movements of reconnaissance aircraft. Over time, the AMRS morphed into flight service stations. Today's flight service stations do not issue control instructions, but provide pilots with many other flight related informational services. They do relay control instructions from ATC in areas where flight service is the only facility with radio or phone coverage. The first airport traffic control tower, regulating arrivals, departures, and surface movement of aircraft in the US at a specific airport, opened in Cleveland in 1930. Approach / departure control facilities were created after adoption of radar in the 1950s to monitor and control the busy airspace around larger airports. The first air route traffic control center (ARTCC), which directs the movement of aircraft between departure and destination, was opened in Newark in 1935, followed in 1936 by Chicago and Cleveland. Currently in the US, the Federal Aviation Administration (FAA) operates 22 Air Route Traffic Control Centers.
After the 1956 Grand Canyon mid-air collision, which killed all 128 people on board, the FAA was given air-traffic responsibility in the United States in 1958, and other countries followed. In 1960, Britain, France, Germany, and the Benelux countries set up Eurocontrol, intending to merge their airspaces. The first and only attempt to pool controllers between countries is the Maastricht Upper Area Control Centre (MUAC), founded in 1972 by Eurocontrol, and covering Belgium, Luxembourg, the Netherlands, and north-western Germany. In 2001, the European Union (EU) aimed to create a 'Single European Sky', hoping to boost efficiency and gain economies of scale.
Airport traffic control tower
The primary method of controlling the immediate airport environment is visual observation from the airport control tower. The tower is typically a tall, windowed structure located within the airport grounds. The air traffic controllers, usually abbreviated 'controller', are responsible for the separation and efficient movement of aircraft and vehicles operating on the taxiways and runways of the airport itself, and of aircraft in the air near the airport, generally out to a radius that depends on the airport procedures. A controller must carry out the job using the precise and effective application of rules and procedures, while making flexible adjustments according to differing circumstances, often under time pressure. A study comparing controllers with the general population found markedly higher stress levels among controllers. This variation can be explained, at least in part, by the characteristics of the job.
Surveillance displays are also available to controllers at larger airports to assist with controlling air traffic. Controllers may use a radar system called secondary surveillance radar for airborne traffic approaching and departing. These displays include a map of the area, the position of various aircraft, and data tags that include aircraft identification, speed, altitude, and other information described in local procedures. In adverse weather conditions, the tower controllers may also use surface movement radar (SMR), surface movement guidance and control system (SMGCS), or advanced surface movement guidance and control system (ASMGCS) to control traffic on the manoeuvring area (taxiways and runways).
The areas of responsibility for tower controllers fall into three general operational disciplines: local control or air control, ground control, and flight data / clearance delivery. Other categories, such as airport apron control or ground movement planner, may also exist at extremely busy airports. While each tower may have unique airport-specific procedures, such as multiple teams of controllers at major or complex airports with multiple runways, the following provides a general concept of the delegation of responsibilities within the air traffic control tower environment.
Remote and virtual tower (RVT) is a system based on air traffic controllers being located somewhere other than at the local airport tower, and still able to provide air traffic control services. Displays for the air traffic controllers may be live video, synthetic images based on surveillance sensor data, or both.
Ground control
Ground control (sometimes known as ground movement control, GMC) is responsible for the airport 'movement' areas, as well as areas not released to the airlines or other users. This generally includes all taxiways, inactive runways, holding areas, and some transitional aprons or intersections where aircraft arrive, having vacated the runway or departure gate. Exact areas and control responsibilities are clearly defined in local documents and agreements at each airport. Any aircraft, vehicle, or person walking or working in these areas is required to have clearance from ground control. This is normally done via VHF / UHF radio, but there may be special cases where other procedures are used. Aircraft or vehicles without radios must respond to ATC instructions via aviation light signals, or else be led by official airport vehicles with radios. People working on the airport surface normally have a communications link through which they can communicate with ground control, commonly either by handheld radio or even cell phone. Ground control is vital to the smooth operation of the airport because this position impacts the sequencing of departure aircraft, affecting the safety and efficiency of the airport's operation.
Some busier airports have surface movement radar (SMR), such as ASDE-3, AMASS, or ASDE-X, designed to display aircraft and vehicles on the ground. These are used by ground control as an additional tool to control ground traffic, particularly at night or in poor visibility. There is a wide range of capabilities on these systems as they are being modernised. Older systems will display a map of the airport and the target. Newer systems include the capability to display higher-quality mapping, radar targets, data blocks, and safety alerts, and to interface with other systems, such as digital flight strips.
Air control or local control
Air control (known to pilots as 'tower' or 'tower control') is responsible for the active runway surfaces. Air control gives clearance for aircraft takeoff or landing, whilst ensuring that prescribed runway separation will exist at all times. If the air controller detects any unsafe conditions, a landing aircraft may be instructed to 'go-around', and be re-sequenced into the landing pattern. This re-sequencing will depend on the type of flight, and may be handled by the air controller, approach, or terminal area controller.
Within the tower, a highly disciplined communications process between air control and ground control is an absolute necessity. Air control must ensure that ground control is aware of any operations that will impact the taxiways, and work with the approach radar controllers to create gaps in the arrival traffic to allow taxiing traffic to cross runways and to allow departing aircraft to take off. Ground control needs to keep the air controllers aware of the traffic flow towards their runways to maximise runway utilisation through effective approach spacing. Crew resource management (CRM) procedures are often used to ensure this communication process is efficient and clear. Within ATC, it is usually known as 'team resource management' (TRM), and the level of focus on TRM varies within different ATC organisations.
Flight data and clearance delivery
Clearance delivery is the position that issues route clearances to aircraft, typically before they commence taxiing. These clearances contain details of the route that the aircraft is expected to fly after departure. Clearance delivery, or, at busy airports, the ground movement planner (GMP) or traffic management coordinator (TMC), will, if necessary, coordinate with the relevant radar centre or flow control unit to obtain releases for aircraft. At busy airports, these releases are often automatic, and are controlled by local agreements allowing 'free-flow' departures. When weather or extremely high demand for a certain airport or airspace becomes a factor, there may be ground 'stops' (or 'slot delays'), or re-routes may be necessary to ensure the system does not get overloaded. The primary responsibility of clearance delivery is to ensure that the aircraft has the correct aerodrome information, such as weather and airport conditions, the correct route after departure, and time restrictions relating to that flight. This information is also coordinated with the relevant radar centre or flow control unit and ground control, to ensure that the aircraft reaches the runway in time to meet the time restriction provided by the relevant unit. At some airports, clearance delivery also plans aircraft push-backs and engine starts, in which case it is known as the ground movement planner (GMP): this position is particularly important at heavily congested airports to prevent taxiway and aircraft parking area gridlock.
Flight data (which is routinely combined with clearance delivery) is the position that is responsible for ensuring that both controllers and pilots have the most current information: pertinent weather changes, outages, airport ground delays / ground stops, runway closures, etc. Flight data may inform the pilots using a recorded continuous loop on a specific frequency known as the automatic terminal information service (ATIS).
Approach and terminal control
Many airports have a radar control facility that is associated with that specific airport. In most countries, this is referred to as terminal control and abbreviated to TMC; in the U.S., it is referred to as a 'terminal radar approach control' or TRACON. While every airport varies, terminal controllers usually handle traffic within a set radius of the airport. Where there are many busy airports close together, one consolidated terminal control centre may service all the airports. The airspace boundaries and altitudes assigned to a terminal control centre, which vary widely from airport to airport, are based on factors such as traffic flows, neighbouring airports, and terrain. A large and complex example was the London Terminal Control Centre (LTCC), which controlled traffic for five main London airports.
Terminal controllers are responsible for providing all ATC services within their airspace. Traffic flow is broadly divided into departures, arrivals, and overflights. As aircraft move in and out of the terminal airspace, they are 'handed off' to the next appropriate control facility (a control tower, an en-route control facility, or a bordering terminal or approach control). Terminal control is responsible for ensuring that aircraft are at an appropriate altitude when they are handed off, and that aircraft arrive at a suitable rate for landing.
Not all airports have a radar approach or terminal control available. In this case, the en-route centre or a neighbouring terminal or approach control may co-ordinate directly with the tower on the airport and vector inbound aircraft to a position from where they can land visually. At some of these airports, the tower may provide a non-radar procedural approach service to arriving aircraft handed over from a radar unit before they are visual to land. Some units also have a dedicated approach unit, which can provide the procedural approach service either all the time, or for any periods of radar outage for any reason.
In the U.S., TRACONs are additionally designated by a three-digit alphanumeric code. For example, the Chicago TRACON is designated C90.
Area control centre / en-route centre
Air traffic control also provides services to aircraft in flight between airports. Pilots fly under one of two sets of rules for separation: visual flight rules (VFR), or instrument flight rules (IFR). Air traffic controllers have different responsibilities to aircraft operating under the different sets of rules. While IFR flights are under positive control, in the US and Canada VFR pilots can request 'flight following' (radar advisories), which provides traffic advisory services on a time-permitting basis, and may also provide assistance in avoiding areas of weather and flight restrictions, as well as bringing pilots into the air traffic control system before they need a clearance into certain airspace. Throughout Europe, pilots may request a 'Flight Information Service', which is similar to flight following. In the United Kingdom, it is known as a 'basic service'.
En-route air traffic controllers issue clearances and instructions for airborne aircraft, and pilots are required to comply with these instructions. En-route controllers also provide air traffic control services to many smaller airports around the country, including clearance off the ground and clearance for approach to an airport. Controllers adhere to a set of separation standards that define the minimum distance allowed between aircraft. These distances vary depending on the equipment and procedures used in providing ATC services.
General characteristics
En-route air traffic controllers work in facilities called air traffic control centres, each of which is commonly referred to as a 'centre'. The United States uses the equivalent term air route traffic control center. Each centre is responsible for a given flight information region (FIR). Each flight information region typically covers many thousands of square miles of airspace, and the airports within that airspace. Centres control IFR aircraft from the time they depart from an airport or terminal area's airspace, to the time they arrive at another airport or terminal area's airspace. Centres may also 'pick up' VFR aircraft that are already airborne, and integrate them into their system. These aircraft must continue under VFR flight rules until the centre provides a clearance.
Centre controllers are responsible for issuing instructions to pilots to climb their aircraft to their assigned altitude, while, at the same time, ensuring that the aircraft is properly separated from all other aircraft in its immediate area. Additionally, the aircraft must be placed in a flow consistent with the aircraft's route of flight. This effort is complicated by crossing traffic, severe weather, special missions that require large airspace allocations, and traffic density. When the aircraft approaches its destination, the centre is responsible for issuing instructions to pilots so that they will meet altitude restrictions by specific points, as well as providing many destination airports with a traffic flow, which prohibits all of the arrivals being 'bunched together'. These 'flow restrictions' often begin in the middle of the route, as controllers will position aircraft landing in the same destination so that when the aircraft are close to their destination they are sequenced.
As an aircraft reaches the boundary of a centre's control area, it is 'handed off' or 'handed over' to the next area control centre. In some cases, this 'hand-off' process involves a transfer of identification and details between controllers so that air traffic control services can be provided in a seamless manner; in other cases, local agreements may allow 'silent handovers', such that the receiving centre does not require any co-ordination if traffic is presented in an agreed manner. After the hand-off, the aircraft is given a frequency change, and its pilot begins talking to the next controller. This process continues until the aircraft is handed off to a terminal controller ('approach').
Radar coverage
Since centres control a large airspace area, they will typically use long-range radar that has the capability, at higher altitudes, to see aircraft within a large radius of the radar antenna. They may also use data from other radars to control traffic when those provide a better 'picture' of the traffic, or when they can fill in a portion of the area not covered by the long-range radar.
In the U.S. system, at higher altitudes, over 90% of the U.S. airspace is covered by radar, and often by multiple radar systems; however, coverage may be inconsistent at lower altitudes used by aircraft, due to high terrain or distance from radar facilities. A centre may require numerous radar systems to cover the airspace assigned to them, and may also rely on pilot position reports from aircraft flying below the floor of radar coverage. This results in a large amount of data being available to the controller. To address this, automation systems have been designed that consolidate the radar data for the controller. This consolidation includes eliminating duplicate radar returns, ensuring the best radar for each geographical area is providing the data, and displaying the data in an effective format.
Centres also exercise control over traffic travelling over the world's ocean areas. These areas are also flight information regions (FIRs). Because there are no radar systems available for oceanic control, oceanic controllers provide ATC services using procedural control. These procedures use aircraft position reports, time, altitude, distance, and speed, to ensure separation. Controllers record information on flight progress strips, and in specially developed oceanic computer systems, as aircraft report positions. This process requires that aircraft be separated by greater distances, which reduces the overall capacity for any given route. The North Atlantic Track system is a notable example of this method.
Some air navigation service providers (e.g., Airservices Australia, the U.S. Federal Aviation Administration, Nav Canada, etc.) have implemented automatic dependent surveillance – broadcast (ADS-B) as part of their surveillance capability. This newer technology reverses the radar concept. Instead of radar 'finding' a target by interrogating the transponder, the ADS-B equipped aircraft 'broadcasts' a position report as determined by the navigation equipment on board the aircraft. ADS-C is another mode of automatic dependent surveillance; however, ADS-C operates in the 'contract' mode, where the aircraft reports a position, automatically or initiated by the pilot, based on a predetermined time interval. It is also possible for controllers to request more frequent reports to more quickly establish aircraft position for specific reasons. However, since the cost for each report is charged by the ADS service providers to the company operating the aircraft, more frequent reports are not commonly requested, except in emergency situations. ADS-C is significant because it can be used where it is not possible to locate the infrastructure for a radar system (e.g., over water). Computerised radar displays are now being designed to accept ADS-C inputs as part of their display. This technology is currently used in portions of the North Atlantic and the Pacific by a variety of states that share responsibility for the control of this airspace.
'Precision approach radars' (PAR) are commonly used by military controllers of air forces of several countries, to assist the pilot in final phases of landing in places where instrument landing system and other sophisticated airborne equipment are unavailable to assist the pilots in marginal or near zero visibility conditions. This procedure is also called a 'talk-down'.
A radar archive system (RAS) keeps an electronic record of all radar information, preserving it for a few weeks. This information can be useful for search and rescue. When an aircraft has 'disappeared' from radar screens, a controller can review the last radar returns from the aircraft to determine its likely position. For an example, see the crash report in the following citation. RAS is also useful to technicians who are maintaining radar systems.
Flight traffic mapping
The mapping of flights in real-time is based on the air traffic control system, and volunteer ADS-B receivers. In 1991, data on the location of aircraft was made available by the Federal Aviation Administration to the airline industry. The National Business Aviation Association (NBAA), the General Aviation Manufacturers Association, the Aircraft Owners and Pilots Association, the Helicopter Association International, and the National Air Transportation Association, petitioned the FAA to make ASDI information available on a 'need-to-know' basis. Subsequently, NBAA advocated the broad-scale dissemination of air traffic data. The Aircraft Situational Display to Industry (ASDI) system now conveys up-to-date flight information to the airline industry and the public. Some companies that distribute ASDI information are Flightradar24, FlightExplorer, FlightView, and FlyteComm. Each company maintains a website that provides free updated information to the public on flight status. Stand-alone programmes are also available for displaying the geographic location of airborne instrument flight rules (IFR) air traffic anywhere in the FAA air traffic system. Positions are reported for both commercial and general aviation traffic. The programmes can overlay air traffic with a wide selection of maps such as, geo-political boundaries, air traffic control centre boundaries, high altitude jet routes, satellite cloud and radar imagery.
Problems
Traffic
The day-to-day problems faced by the air traffic control system are primarily related to the volume of air traffic demand placed on the system, and weather. Several factors dictate the amount of traffic that can land at an airport in a given amount of time. Each landing aircraft must touch down, slow, and exit the runway, before the next aircraft crosses the approach end of the runway. This process requires at least one, and up to four minutes for each aircraft. Allowing for departures between arrivals, each runway can thus handle about 30 aircraft arrivals per hour. A large airport with two arrival runways can handle about 60 arrivals per hour in good weather. Problems arise when airlines schedule more arrivals into an airport than can be physically handled, or when delays elsewhere cause groups of aircraft – that would otherwise be separated in time – to arrive simultaneously. Aircraft must then be delayed in the air by holding over specified locations until they may be safely sequenced to the runway. Up until the 1990s, holding, which has significant environmental and cost implications, was a routine occurrence at many airports. Advances in computers now allow the sequencing of aircraft hours in advance. Thus, aircraft may be delayed before they even take off (by being given a 'slot'), or may reduce speed in flight and proceed more slowly thus significantly reducing the amount of holding.
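The capacity arithmetic in this paragraph is simple enough to state directly. The sketch below uses purely illustrative numbers (a two-minute average runway occupancy, as implied by the figures above; the function name is hypothetical):

    def arrivals_per_hour(runway_occupancy_minutes, num_runways=1):
        """Rough arrival capacity: one landing per runway-occupancy interval,
        already allowing for departures released between arrivals."""
        return int(60 / runway_occupancy_minutes) * num_runways

    print(arrivals_per_hour(2))                 # ~30 arrivals/hour, one runway
    print(arrivals_per_hour(2, num_runways=2))  # ~60 arrivals/hour, two runways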
Air traffic control errors occur when the separation (either vertical or horizontal) between airborne aircraft falls below the minimum prescribed separation set (for the domestic United States) by the US Federal Aviation Administration. Separation minimums for terminal control areas (TCAs) around airports are lower than en-route standards. Errors generally occur during periods following times of intense activity, when controllers tend to relax and overlook the presence of traffic and conditions that lead to loss of minimum separation.
Weather
Beyond runway capacity issues, the weather is a major factor in traffic capacity. Rain, ice, snow, or hail on the runway cause landing aircraft to take longer to slow and exit, thus reducing the safe arrival rate, and requiring more space between landing aircraft. Fog also requires a decrease in the landing rate. These, in turn, increase airborne delay for holding aircraft. If more aircraft are scheduled than can be safely and efficiently held in the air, a ground delay programme may be established, delaying aircraft on the ground before departure due to conditions at the arrival airport.
In Area Control Centres, a major weather problem is thunderstorms, which present a variety of hazards to aircraft. Airborne aircraft will deviate around storms, reducing the capacity of the en-route system, by requiring more space per aircraft, or causing congestion, as many aircraft try to move through a single hole in a line of thunderstorms. Occasionally, weather considerations cause delays to aircraft prior to their departure as routes are closed by thunderstorms.
Substantial sums have been spent on software to streamline this process. However, at some ACCs, air traffic controllers still record data for each flight on strips of paper and personally coordinate their paths. In newer sites, these flight progress strips have been replaced by electronic data presented on computer screens, and more and more sites are upgrading away from paper flight strips as new equipment is brought in.
Congestion
Constrained control capacity and growing traffic lead to flight cancellation and delays:
In America, delays caused by ATC grew by 69% between 2012 and 2017. ATC staffing issues were a major factor in congestion.
In China, the average delay per domestic flight spiked by 50% in 2017 to 15 minutes per flight.
In Europe, en route delays grew by 105% in 2018, due to a lack of capacity or staff (60%), weather (25%) or strikes (14%), costing the European economy €17.6bn ($20.8bn), up by 28% on 2017.
By then the market for air-traffic services was worth $14bn. More efficient ATC could save 5-10% of aviation fuel by avoiding holding patterns and indirect airways.
The military takes 80% of Chinese airspace, congesting the thin corridors open to airliners. The United Kingdom closes its military airspace only during military exercises.
Call signs
A prerequisite to safe air traffic separation is the assignment and use of distinctive call signs. These are permanently allocated by ICAO on request, usually to scheduled flights and to some air forces and other military services for military flights. Written call signs consist of a two- or three-letter combination followed by the flight number, such as AAL872 or VLG1011; as such, they appear on flight plans and ATC radar labels. There are also the audio or radio-telephony call signs used in radio contact between pilots and air traffic control, which are not always identical to their written counterparts. An example of an audio call sign would be 'Speedbird 832' instead of the written 'BAW832'; this is used to reduce the chance of confusion between ATC and the aircraft. By default, the call sign for any other flight is the registration number (or tail number in US parlance) of the aircraft, such as 'N12345', 'C-GABC', or 'EC-IZD'. The short radio-telephony call sign for these tail numbers is the last three letters spoken using the NATO phonetic alphabet (e.g. ABC, spoken alpha-bravo-charlie, for C-GABC) or the last three numbers (e.g. three-four-five for N12345). In the United States, the prefix may be an aircraft type, model, or manufacturer in place of the first registration character; for example, 'N11842' could become 'Cessna 842'. This abbreviation is only allowed after communications have been established in each sector.
Before around 1980, the International Air Transport Association (IATA) and ICAO were using the same two-letter call signs. Due to the larger number of new airlines after deregulation, ICAO established the three-letter call signs as mentioned above. The IATA call signs are currently used in aerodromes on the announcement tables, but are no longer used in air traffic control. For example, AA is the IATA call sign for American Airlines; the ATC equivalent is AAL. Flight numbers in regular commercial flights are designated by the aircraft operator, and the same call sign may be used for the same scheduled journey each day it is operated, even if the departure time varies a little across different days of the week. The call sign of the return flight often differs only by the final digit from the outbound flight. Generally, airline flight numbers are even if east-bound and odd if west-bound. In order to reduce the possibility of two call signs on one frequency at any time sounding too similar, a number of airlines, particularly in Europe, have started using alphanumeric call signs that are not based on flight numbers (e.g. DLH23LG, spoken as Lufthansa-two-three-lima-golf, to prevent confusion between incoming DLH23 and outgoing DLH24 on the same frequency). Additionally, it is the right of the air traffic controller to change the 'audio' call sign for the period the flight is in their sector if there is a risk of confusion, usually choosing the aircraft registration identifier instead.
Technology
Many technologies are used in air traffic control systems. Primary and secondary radars are used to enhance a controller's situational awareness within their assigned airspace; all types of aircraft send back primary echoes of varying sizes to controllers' screens as radar energy is bounced off their skins, and transponder-equipped aircraft reply to secondary radar interrogations by giving an ID (Mode A), an altitude (Mode C), and / or a unique callsign (Mode S). Certain types of weather may also register on the radar screen. These inputs, added to data from other radars, are correlated to build the air situation. Some basic processing occurs on the radar tracks, such as calculating ground speed and magnetic headings.
Usually, a flight data processing system manages all the flight plan related data, incorporating, in a low or high degree, the information of the track once the correlation between them (flight plan and track) is established. All this information is distributed to modern operational display systems, making it available to controllers.
The Federal Aviation Administration (FAA) has spent over US$3 billion on software, but a fully automated system is still yet to be achieved. In 2002, the United Kingdom brought a new area control centre into service at Swanwick in Hampshire, the London Area Control Centre (LACC), relieving a busy suburban centre at West Drayton in Middlesex, north of London Heathrow Airport. Software from Lockheed Martin predominates at the London Area Control Centre; however, the centre was initially troubled by software and communications problems causing delays and occasional shutdowns.
Some tools are available in different domains to help the controller further:
Flight data processing systems: this is the system (usually one per centre) that processes all the information related to the flight (the flight plan), typically in the time horizon from gate to gate (airport departure / arrival gates). It uses such processed information to invoke other flight-plan-related tools (such as Medium Term Conflict Detection (MTCD)), and distributes such processed information to all the stakeholders (air traffic controllers, collateral centres, airports, etc.).
Short-term conflict alert (STCA): checks possible conflicting trajectories in a time horizon of about two or three minutes (or even less in the approach context; 35 seconds in the French Roissy and Orly approach centres) and alerts the controller prior to the loss of separation. In some systems the algorithms may also provide a possible vectoring solution, that is, the manner in which to turn, descend, or climb the aircraft, or to increase or decrease its speed, in order to avoid infringing the minimum safety distance or altitude clearance. (A toy illustration of such a conflict probe is sketched after this list.)
Minimum safe altitude warning (MSAW): a tool that alerts the controller if an aircraft appears to be flying too low to the ground or will impact terrain based on its current altitude and heading.
System coordination (SYSCO), enabling controllers to negotiate the release of flights from one sector to another.
Area penetration warning (APW) to inform a controller that a flight will penetrate a restricted area.
Arrival and departure manager to help sequence the takeoff and landing of aircraft.
The departure manager (DMAN): a system aid for ATC at airports that calculates a planned departure flow, with the goals of maintaining an optimal throughput at the runway, reducing queueing at the holding point, and distributing the information to the various stakeholders at the airport (i.e. the airline, ground handling, and air traffic control (ATC)). (A minimal sequencing sketch is given after this list.)
The arrival manager (AMAN): a system aid for ATC at airports that calculates a planned arrival flow, with the goals of maintaining an optimal throughput at the runway, reducing arrival queueing, and distributing the information to the various stakeholders.
Passive final approach spacing tool (pFAST): a CTAS tool, provides runway assignment and sequence number advisories to terminal controllers to improve the arrival rate at congested airports. pFAST was deployed and operational at five US TRACONs before being cancelled. NASA research included an active FAST capability that also provided vector and speed advisories to implement the runway and sequence advisories.
Converging runway display aid (CRDA): enables approach controllers to run two final approaches that intersect while ensuring that go-arounds are minimised.
Center TRACON automation system (CTAS): a suite of human centred decision support tools developed by NASA Ames Research Center. Several of the CTAS tools have been field tested and transitioned to the FAA for operational evaluation and use. Some of the CTAS tools are: traffic management advisor (TMA), passive final approach spacing tool (pFAST), collaborative arrival planning (CAP), direct-to (D2), en route descent advisor (EDA), and multi-center TMA. The software is running on Linux.
Traffic management advisor (TMA): a CTAS tool, is an en-route decision support tool that automates time-based metering solutions to provide an upper limit of aircraft to a TRACON from the centre over a set period of time. Schedules are determined that will not exceed the specified arrival rate, and controllers use the scheduled times to provide the appropriate delay to arrivals while in the en-route domain. This results in an overall reduction in en-route delays and also moves the delays to more efficient airspace (higher altitudes) than those that occur when aircraft hold near the TRACON boundary, which is otherwise required in order to prevent overloading the TRACON controllers. TMA is operational at most en-route air route traffic control centres (ARTCCs) and continues to be enhanced to address more complex traffic situations (e.g. adjacent centre metering (ACM) and en route departure capability (EDC)).
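To make the idea of a short-term conflict probe concrete, here is a minimal Python sketch. It linearly extrapolates two tracks and flags the first time step at which both lateral and vertical minima would be infringed. The separation minima, look-ahead window, and track data are illustrative assumptions, not values from any operational STCA:

```python
import math

# Toy short-term conflict probe: extrapolate straight-line tracks and flag
# the first time step where both separation minima are predicted to be
# infringed. Assumed minima, not operational values.
LATERAL_MIN_NM = 5.0
VERTICAL_MIN_FT = 1000.0

def predict_conflict(pos1, vel1, pos2, vel2, lookahead_s=120, step_s=5):
    """pos = (x_nm, y_nm, alt_ft); vel = (vx_nm_s, vy_nm_s, vz_ft_s).
    Returns seconds until first predicted conflict, or None."""
    for t in range(0, lookahead_s + 1, step_s):
        dx = (pos1[0] + vel1[0] * t) - (pos2[0] + vel2[0] * t)
        dy = (pos1[1] + vel1[1] * t) - (pos2[1] + vel2[1] * t)
        dz = (pos1[2] + vel1[2] * t) - (pos2[2] + vel2[2] * t)
        if math.hypot(dx, dy) < LATERAL_MIN_NM and abs(dz) < VERTICAL_MIN_FT:
            return t
    return None

# Two aircraft converging head-on at the same level, 480 kt each.
kt = 1 / 3600  # knots -> nautical miles per second
t = predict_conflict((0, 0, 35000), (480 * kt, 0, 0),
                     (20, 0, 35000), (-480 * kt, 0, 0))
print(f"predicted loss of separation in {t} s" if t is not None
      else "no conflict predicted")
```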
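The core of a departure manager's sequencing logic can likewise be sketched in a few lines. The following Python toy assumes a single runway and one uniform separation minimum between consecutive departures; real DMAN/AMAN systems additionally model wake turbulence categories, route splits, and stakeholder constraints:

```python
# Greedy single-runway sequencer: each flight gets the earliest slot at or
# after its ready time that is at least MIN_SEP_S after the previous slot.
MIN_SEP_S = 90  # assumed uniform minimum departure separation, in seconds

def assign_slots(ready_times_s):
    """ready_times_s: seconds at which each flight is ready to depart.
    Returns (flight_index, slot_time_s) pairs in departure order."""
    order = sorted(range(len(ready_times_s)), key=lambda i: ready_times_s[i])
    slots, last_slot = [], None
    for i in order:
        slot = ready_times_s[i] if last_slot is None else max(
            ready_times_s[i], last_slot + MIN_SEP_S)
        slots.append((i, slot))
        last_slot = slot
    return slots

# Three flights ready almost simultaneously: the second and third are
# pushed back to respect the separation minimum.
for flight, slot in assign_slots([0, 10, 20]):
    print(f"flight {flight}: departs at t={slot} s")
```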
MTCD and URET
In the US, user request evaluation tool (URET) takes paper strips out of the equation for en-route controllers at ARTCCs by providing a display that shows all aircraft that are either in, or currently routed into the sector.
In Europe, several MTCD tools are in service, developed by providers including National Air Traffic Services, Deutsche Flugsicherung (VAFORIT), and Maastricht Upper Area Control (the new FDPS). The Single European Sky ATM Research (SESAR) programme should soon launch new MTCD concepts.
URET and MTCD provide conflict advisories up to 30 minutes in advance, and have a suite of assistance tools that assist in evaluating resolution options and pilot requests.
Mode S: provides a data downlink of flight parameters via secondary surveillance radars, allowing radar processing systems, and therefore controllers, to see various data on a flight, including the unique 24-bit airframe identifier, indicated airspeed, and flight-director-selected level, amongst others.
Controller–pilot data link communications (CPDLC): allows digital messages to be sent between controllers and pilots, avoiding the need to use radiotelephony. It is especially useful in areas where difficult-to-use HF radiotelephony was previously used for communication with aircraft, e.g. oceans. This is currently in use in various parts of the world including the Atlantic and Pacific oceans.
ADS-B: automatic dependent surveillance broadcast; provides a data downlink of various flight parameters to air traffic control systems via the transponder (1090 MHz), and reception of those data by other aircraft in the vicinity. The most important are the aircraft's latitude, longitude, and level: such data can be utilised to create a radar-like display of aircraft for controllers, and thus allow a form of pseudo-radar control in areas where the installation of radar is either prohibitive on the grounds of low traffic levels or technically not feasible (e.g. oceans). This is currently in use in Australia, Canada, parts of the Pacific Ocean, and Alaska.
The electronic flight strip system (e-strip): a system of electronic flight strips replacing the existing paper strips, in use by several service providers, such as Nav Canada, MASUAC, DFS, and DECEA. E-strips allow controllers to manage electronic flight data online without paper strips, reducing the need for manual functions, enabling new tools, and reducing the ATCO's workload. The first electronic flight strip systems were independently and simultaneously invented and implemented by Nav Canada and Saipher ATC in 1999: the Nav Canada system, known as EXCDS and rebranded in 2011 as NAVCANstrips, and Saipher's first-generation system, known as SGTC, now being updated by its second-generation system, the TATIC TWR. DECEA in Brazil is the world's largest user of tower e-strip systems, ranging from very small airports up to the busiest ones, taking advantage of real-time information and data collection from each of more than 150 sites for use in air traffic flow management (ATFM), billing, and statistics.
Screen content recording: a hardware- or software-based recording function, part of most modern automation systems, that captures the screen content shown to the ATCO. Such recordings are used for later replay, together with audio recording, for investigations and post-event analysis.
Communication navigation surveillance / air traffic management (CNS / ATM) systems are communications, navigation, and surveillance systems, employing digital technologies, including satellite systems, together with various levels of automation, applied in support of a seamless global air traffic management system.
Air navigation service providers (ANSPs) and air traffic service providers (ATSPs)
Azerbaijan – AzərAeroNaviqasiya
Albania – Albcontrol
Algeria – Etablissement National de la Navigation Aérienne (ENNA)
Argentina – Empresa Argentina de Navegación Aérea (EANA)
Armenia – Armenian Air Traffic Services (ARMATS)
Australia – Airservices Australia (government owned corporation) and Royal Australian Air Force
Austria – Austro Control
Bangladesh – Civil Aviation Authority, Bangladesh
Belarus – Republican Unitary Enterprise Белаэронавигация (Belarusian Air Navigation)
Belgium – Skeyes - Authority of Airways
Bosnia and Herzegovina – Agencija za pružanje usluga u zračnoj plovidbi (Bosnia and Herzegovina Air Navigation Services Agency)
Brazil – Departamento de Controle do Espaço Aéreo (ATC/ATM Authority) and ANAC – Agência Nacional de Aviação Civil (Civil Aviation Authority)
Bulgaria – Air Traffic Services Authority
Cambodia – Cambodia Air Traffic Services (CATS)
Canada – Nav Canada, formerly provided by Transport Canada and Canadian Forces
Cayman Islands – CIAA Cayman Islands Airports Authority
Central America – Corporación Centroamericana de Servicios de Navegación Aérea
Guatemala – Dirección General de Aeronáutica Civil (DGAC)
El Salvador
Honduras
Nicaragua – Empresa Administradora Aeropuertos Internacionales (EAAI)
Costa Rica – Dirección General de Aviación Civil
Belize
Chile – Dirección General de Aeronáutica Civil (DGAC)
Colombia – Aeronáutica Civil Colombiana (UAEAC)
Croatia – Hrvatska kontrola zračne plovidbe (Croatia Control Ltd.)
Cuba – Instituto de Aeronáutica Civil de Cuba (IACC)
Czech Republic – Řízení letového provozu ČR
Cyprus – Department of Civil Aviation
Denmark – Naviair (Danish ATC)
Dominican Republic – Instituto Dominicano de Aviación Civil (IDAC) 'Dominican Institute of Civil Aviation'
Eastern Caribbean – Eastern Caribbean Civil Aviation Authority (ECCAA)
Anguilla
Antigua and Barbuda
British Virgin Islands
Dominica
Grenada
Saint Kitts and Nevis
Saint Lucia
Saint Vincent and the Grenadines
Ecuador – Dirección General de Aviación Civil (DGAC) 'General Direction of Civil Aviation' government body
Estonia – Estonian Air Navigation Services
Europe – Eurocontrol (European organisation for the safety of air navigation)
Fiji – Fiji Airports (fully owned government commercial company)
Finland – Finavia
France – Direction Générale de l'Aviation Civile (DGAC): Direction des Services de la Navigation Aérienne (DSNA) (government body)
Georgia – SAKAERONAVIGATSIA, Ltd. (Georgian Air Navigation)
Germany – Deutsche Flugsicherung (German ATC – state-owned company)
Greece – Hellenic Civil Aviation Authority (HCAA)
Hong Kong – Civil Aviation Department (CAD)
Hungary – HungaroControl Magyar Légiforgalmi Szolgálat Zrt. (HungaroControl Hungarian Air Navigation Services Pte. Ltd. Co.)
Iceland – ISAVIA
India – Airports Authority of India (AAI) (under Ministry of Civil Aviation, Government of India and Indian Air Force)
Indonesia – AirNav Indonesia
Iran – Iran Civil Aviation Organization (ICAO)
Ireland – Irish Aviation Authority (IAA)
Iraq – Iraqi Air Navigation – ICAA
Israel – Israeli Airports Authority (IIA)
Italy – ENAV SpA and Italian Air Force
Jamaica – Jamaica Civil Aviation Authority (JCAA)
Japan – Japan Civil Aviation Bureau (JCAB)
Kenya – Kenya Civil Aviation Authority (KCAA)
Latvia – LGS (Latvian ATC)
Lithuania – ANS (Lithuanian ATC)
Luxembourg – Administration de la navigation aérienne (ANA – government administration)
Macedonia – DGCA (Macedonian ATC)
Malaysia – Civil Aviation Authority of Malaysia (CAAM)
Malta – Malta Air Traffic Services Ltd
Mexico – Servicios a la Navegación en el Espacio Aéreo Mexicano
Morocco – Office National Des Aeroports (ONDA)
Nepal – Civil Aviation Authority of Nepal
Netherlands – Luchtverkeersleiding Nederland (LVNL) (Dutch ATC) Eurocontrol (Maastricht Upper Area Control Centre)
New Zealand – Airways New Zealand (state owned enterprise)
Nigeria – Nigeria Civil Aviation Authority (NCAA)
Norway – Avinor (state-owned private company)
Oman – Directorate General of Meteorology & Air Navigation (Government of Oman)
Pakistan – Civil Aviation Authority (under Government of Pakistan)
Peru – Centro de Instrucción de Aviación Civil (CIAC)
Philippines – Civil Aviation Authority of the Philippines (CAAP) (under the Philippine Government)
Poland – Polish Air Navigation Services Agency (PANSA)
Portugal – NAV (Portuguese ATC)
Puerto Rico – Administracion Federal de Aviacion
Romania – Romanian Air Traffic Services Administration (ROMATSA)
Russia – Federal State Unitary Enterprise (State ATM Corporation)
Saudi Arabia – Saudi Air Navigation Services (SANS)
Seychelles – Seychelles Civil Aviation Authority (SCAA)
Singapore – Civil Aviation Authority of Singapore (CAAS)
Serbia – Serbia and Montenegro Air Traffic Services Agency Ltd. (SMATSA)
Slovakia – Letové prevádzkové služby Slovenskej republiky
Slovenia – Slovenia Control
South Africa – Air Traffic and Navigation Services (ATNS)
South Korea – Korea Office of Civil Aviation
Spain – AENA now AENA S.A. (Spanish Airports) and ENAIRE (ATC & ATSP)
Sri Lanka – Airport & Aviation Services (Sri Lanka) Limited (government owned company)
Sweden – LFV (government body)
Switzerland – Skyguide
Taiwan – ANWS (Civil Aeronautical Administration)
Thailand – AEROTHAI (Aeronautical Radio of Thailand)
Trinidad and Tobago – Trinidad and Tobago Civil Aviation Authority (TTCAA)
Turkey – General Directorate of State Airports Authority (DHMI)
United Arab Emirates – General Civil Aviation Authority (GCAA)
United Kingdom – National Air Traffic Services (NATS) (49% state-owned public-private partnership, civilian and military)
United States – Federal Aviation Administration (FAA) (government body)
Ukraine – Ukrainian State Air Traffic Service Enterprise (UkSATSE)
Venezuela – Instituto Nacional de Aeronautica Civil (INAC)
Vietnam – Vietnam Air Traffic Management Corporation (VATM)
Zambia – Zambia Civil Aviation Authority (ZCAA)
Zimbabwe – Zimbabwe Civil Aviation Authority
Proposed changes
In the United States, some alterations to traffic control procedures are being examined:
The Next Generation Air Transportation System examines how to overhaul the United States national airspace system.
Free flight is a developing air traffic control method that uses no centralised control (e.g. air traffic controllers). Instead, parts of airspace are reserved dynamically and automatically in a distributed way using computer communication to ensure the required separation between aircraft.
In Europe, the Single European Sky ATM Research (SESAR) programme plans to develop new methods, technologies, procedures, and systems to accommodate future (2020 and beyond) air traffic needs. In October 2018, European controller unions dismissed setting targets to improve ATC as "a waste of time and effort", as new technology could cut costs for users but threaten their jobs. In April 2019, the EU called for a 'Digital European Sky', focusing on cutting costs by including a common digitisation standard and allowing controllers to move to where they are needed, rather than merging national ATCs, as merging would not solve all problems: single air-traffic control services in continent-sized America and China do not alleviate congestion. Eurocontrol tries to reduce delays by diverting flights to less busy routes: flight paths across Europe were redesigned to accommodate the new airport in Istanbul, which opened in April 2019, but the extra capacity will be absorbed by rising demand for air travel.
Well-paid jobs in western Europe could move east with cheaper labour. The average Spanish controller earns over €200,000 a year, over seven times the country's average salary and more than pilots, and at least ten controllers were paid over €810,000 ($1.1m) a year in 2010. French controllers spent a cumulative nine months on strike between 2004 and 2016.
Privatisation
Many countries have also privatised or corporatised their air navigation service providers. There are several models that can be used for ATC service providers. The first is to have the ATC services be part of a government agency, as is currently the case in the United States. The problem with this model is that funding can be inconsistent and can disrupt the development and operation of services; sometimes funding can disappear when lawmakers cannot approve budgets in time. Both proponents and opponents of privatisation recognise that stable funding is one of the major factors for successful upgrades of ATC infrastructure; funding issues include sequestration and the politicisation of projects. Proponents argue that moving ATC services to a private corporation could stabilise funding over the long term, resulting in more predictable planning and rollout of new technology, as well as training of personnel. As of November 2024, the United States had 265 contract towers that are staffed by private companies but administered by the FAA through its FAA Contract Tower Program, which was established in 1982. These contract control towers account for 51% of all federal air traffic control towers in the U.S.
Another model is to have ATC services provided by a government corporation; this model is used in Germany, where funding is obtained through user fees. Yet another model is to have a for-profit corporation operate ATC services. This is the model used in the United Kingdom, but there have been several issues with the system there, including a large-scale failure in December 2014 which caused delays and cancellations and has been attributed to cost-cutting measures put in place by this corporation. In fact, earlier that year, the corporation owned by the German government won the bid to provide ATC services for Gatwick Airport in the United Kingdom. The last model, often suggested as the model for the United States to transition to, is to have a non-profit organisation handle ATC services, as is done in Canada.
The Canadian system is the one most often used as a model by proponents of privatisation. Air traffic control privatisation has been successful in Canada with the creation of Nav Canada, a private non-profit organisation which has reduced costs, and has allowed new technologies to be deployed faster due to the elimination of much of the bureaucratic red tape. This has resulted in shorter flights and less fuel usage. It has also resulted in flights being safer due to new technology. Nav Canada is funded from fees that are collected from the airlines based on the weight of the aircraft and the distance flown.
Air traffic control is operated by national governments with few exceptions: in the European Union, only Italy has private shareholders. Privatisation does not guarantee lower prices: the profit margin of MUAC was 70% in 2017, as there is no competition, but governments could offer fixed-term concessions. Australia, Fiji, and New Zealand run the upper airspace for the Pacific islands' governments. HungaroControl offers remote airport tower services from Budapest and, since 2014, provides upper-airspace management for Kosovo.
ATC regulations in the United States
The United States airspace is divided into 21 zones (centres), and each zone is divided into sectors. Also within each zone are portions of airspace, about 50 miles (80 km) in diameter, called TRACON (Terminal Radar Approach Control) airspaces. Within each TRACON airspace are a number of airports, each of which has its own airspace with a 5-mile (8 km) radius. FAA control tower operators (CTO) / air traffic controllers use FAA Order 7110.65 as the authority for all procedures regarding air traffic.
| Technology | Aviation | null |
48628 | https://en.wikipedia.org/wiki/Gaussian%20integer | Gaussian integer | In number theory, a Gaussian integer is a complex number whose real and imaginary parts are both integers. The Gaussian integers, with ordinary addition and multiplication of complex numbers, form an integral domain, usually written as $\mathbf{Z}[i]$ or $\mathbb{Z}[i]$.
Gaussian integers share many properties with integers: they form a Euclidean domain, and thus have a Euclidean division and a Euclidean algorithm; this implies unique factorization and many related properties. However, Gaussian integers do not have a total ordering that respects arithmetic.
Gaussian integers are algebraic integers and form the simplest ring of quadratic integers.
Gaussian integers are named after the German mathematician Carl Friedrich Gauss.
Basic definitions
The Gaussian integers are the set $\mathbf{Z}[i] = \{a + bi \mid a, b \in \mathbf{Z}\}$, where $i^2 = -1$.
In other words, a Gaussian integer is a complex number such that its real and imaginary parts are both integers.
Since the Gaussian integers are closed under addition and multiplication, they form a commutative ring, which is a subring of the field of complex numbers. It is thus an integral domain.
When considered within the complex plane, the Gaussian integers constitute the 2-dimensional integer lattice.
The conjugate of a Gaussian integer $a + bi$ is the Gaussian integer $a - bi$.
The norm of a Gaussian integer is its product with its conjugate: $N(a + bi) = (a + bi)(a - bi) = a^2 + b^2$.
The norm of a Gaussian integer is thus the square of its absolute value as a complex number. The norm of a Gaussian integer is a nonnegative integer, which is a sum of two squares. Thus a norm cannot be of the form $4k + 3$, with $k$ integer.
The norm is multiplicative, that is, one has $N(zw) = N(z)\,N(w)$
for every pair of Gaussian integers $z, w$. This can be shown directly, or by using the multiplicative property of the modulus of complex numbers.
The units of the ring of Gaussian integers (that is the Gaussian integers whose multiplicative inverse is also a Gaussian integer) are precisely the Gaussian integers with norm 1, that is, $1$, $-1$, $i$ and $-i$.
Euclidean division
Gaussian integers have a Euclidean division (division with remainder) similar to that of integers and polynomials. This makes the Gaussian integers a Euclidean domain, and implies that Gaussian integers share with integers and polynomials many important properties such as the existence of a Euclidean algorithm for computing greatest common divisors, Bézout's identity, the principal ideal property, Euclid's lemma, the unique factorization theorem, and the Chinese remainder theorem, all of which can be proved using only Euclidean division.
A Euclidean division algorithm takes, in the ring of Gaussian integers, a dividend $a$ and divisor $b \ne 0$, and produces a quotient $q$ and remainder $r$ such that $a = bq + r$ and $N(r) < N(b)$.
In fact, one may make the remainder smaller: $a = bq + r$ and $N(r) \le \frac{N(b)}{2}$.
Even with this better inequality, the quotient and the remainder are not necessarily unique, but one may refine the choice to ensure uniqueness.
To prove this, one may consider the complex number quotient $x + iy = \frac{a}{b}$. There are unique integers $m$ and $n$ such that $-\frac12 < x - m \le \frac12$ and $-\frac12 < y - n \le \frac12$, and thus $N(x - m + i(y - n)) \le \frac12$. Taking $q = m + in$, one has
$a = bq + r$,
with $r = b\,(x - m + i(y - n))$
and $N(r) \le \frac{N(b)}{2} < N(b)$.
The choice of $x - m$ and $y - n$ in a semi-open interval is required for uniqueness.
This definition of Euclidean division may be interpreted geometrically in the complex plane (see the figure), by remarking that the distance from a complex number to the closest Gaussian integer is at most $\frac{\sqrt 2}{2}$.
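The division with rounded quotient described above translates directly into code. Here is a minimal Python sketch, representing Gaussian integers as pairs of Python integers (the function names are illustrative, not from any particular library):

```python
# Gaussian integers represented as (real, imag) pairs of Python ints.

def norm(z):
    a, b = z
    return a * a + b * b

def gauss_divmod(a, b):
    """Euclidean division a = b*q + r with N(r) <= N(b)/2.
    Rounds each coordinate of the exact quotient a/b to the nearest integer."""
    (ar, ai), (br, bi) = a, b
    n = norm(b)
    # Exact quotient a/b = x + iy with x = (ar*br + ai*bi)/n, y = (ai*br - ar*bi)/n.
    m = (2 * (ar * br + ai * bi) + n) // (2 * n)  # nearest integer to x
    k = (2 * (ai * br - ar * bi) + n) // (2 * n)  # nearest integer to y
    return (m, k), (ar - (br * m - bi * k), ai - (br * k + bi * m))

q, r = gauss_divmod((27, 23), (8, 1))
print(q, r)                         # (4, 2) (-3, 3)
print(2 * norm(r) <= norm((8, 1)))  # True: N(r) <= N(b)/2
```

Since $|x - m| \le \frac12$ and $|y - k| \le \frac12$, the remainder returned by this rounding automatically satisfies the stronger bound $N(r) \le \frac{N(b)}{2}$.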
Principal ideals
Since the ring of Gaussian integers is a Euclidean domain, $\mathbf{Z}[i]$ is a principal ideal domain, which means that every ideal of $\mathbf{Z}[i]$ is principal. Explicitly, an ideal $I$ is a subset of a ring $R$ such that every sum of elements of $I$ and every product of an element of $I$ by an element of $R$ belong to $I$. An ideal is principal if it consists of all multiples of a single element $g$, that is, it has the form $\{gx \mid x \in R\}$.
In this case, one says that the ideal is generated by $g$ or that $g$ is a generator of the ideal.
Every ideal $I$ in the ring of the Gaussian integers is principal, because, if one chooses in $I$ a nonzero element $g$ of minimal norm, for every element $x$ of $I$, the remainder of Euclidean division of $x$ by $g$ belongs also to $I$ and has a norm that is smaller than that of $g$; because of the choice of $g$, this norm is zero, and thus the remainder is also zero. That is, one has $x = qg$, where $q$ is the quotient.
For any nonzero Gaussian integer $g$, the ideal generated by $g$ is also generated by any associate of $g$, that is, $g, gi, -g, -gi$; no other element generates the same ideal. As all the generators of an ideal have the same norm, the norm of an ideal is the norm of any of its generators.
In some circumstances, it is useful to choose, once for all, a generator for each ideal. There are two classical ways for doing that, both considering first the ideals of odd norm. If $g = a + bi$ has an odd norm $a^2 + b^2$, then one of $a$ and $b$ is odd, and the other is even. Thus $g$ has exactly one associate with a real part that is odd and positive. In his original paper, Gauss made another choice, by choosing the unique associate such that the remainder of its division by $2 + 2i$ is one. In fact, as $N(2 + 2i) = 8$, the norm of the remainder is not greater than 4. As this norm is odd, and 3 is not the norm of a Gaussian integer, the norm of the remainder is one, that is, the remainder is a unit. Multiplying $g$ by the inverse of this unit, one finds an associate that has one as a remainder, when divided by $2 + 2i$.
If the norm of $g$ is even, then either $g = 2^k h$ or $g = 2^k h(1 + i)$, where $k$ is a positive integer, and $N(h)$ is odd. Thus, one chooses the associate of $g$ for getting a $h$ which fits the choice of the associates for elements of odd norm.
Gaussian primes
As the Gaussian integers form a principal ideal domain, they also form a unique factorization domain. This implies that a Gaussian integer is irreducible (that is, it is not the product of two non-units) if and only if it is prime (that is, it generates a prime ideal).
The prime elements of $\mathbf{Z}[i]$ are also known as Gaussian primes. An associate of a Gaussian prime is also a Gaussian prime. The conjugate of a Gaussian prime is also a Gaussian prime (this implies that Gaussian primes are symmetric about the real and imaginary axes).
A positive integer is a Gaussian prime if and only if it is a prime number that is congruent to 3 modulo 4 (that is, it may be written $4n + 3$, with $n$ a nonnegative integer). The other prime numbers are not Gaussian primes, but each is the product of two conjugate Gaussian primes.
A Gaussian integer $a + bi$ is a Gaussian prime if and only if either:
one of $a, b$ is zero and the absolute value of the other is a prime number of the form $4n + 3$ (with $n$ a nonnegative integer), or
both are nonzero and $a^2 + b^2$ is a prime number (which will not be of the form $4n + 3$).
In other words, a Gaussian integer is a Gaussian prime if and only if either its norm is a prime number, or it is the product of a unit ($\pm 1, \pm i$) and a prime number of the form $4n + 3$. (A small program implementing these criteria is sketched after the list of cases below.)
It follows that there are three cases for the factorization of a prime natural number $p$ in the Gaussian integers:
If $p$ is congruent to 3 modulo 4, then it is a Gaussian prime; in the language of algebraic number theory, $p$ is said to be inert in the Gaussian integers.
If $p$ is congruent to 1 modulo 4, then it is the product of a Gaussian prime by its conjugate, both of which are non-associated Gaussian primes (neither is the product of the other by a unit); $p$ is said to be a decomposed prime in the Gaussian integers. For example, $5 = (2 + i)(2 - i)$ and $13 = (3 + 2i)(3 - 2i)$.
If $p = 2$, we have $2 = (1 + i)(1 - i) = -i(1 + i)^2$; that is, 2 is the product of the square of a Gaussian prime by a unit; it is the unique ramified prime in the Gaussian integers.
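These criteria are easy to turn into a small program. A minimal Python sketch (the helper names are illustrative, and trial-division primality is chosen for brevity, not efficiency):

```python
def is_prime(n):
    """Trial-division primality test for ordinary integers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a, b):
    """Test whether a + bi is a Gaussian prime, using the criteria above."""
    if a == 0:
        return is_prime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return is_prime(abs(a)) and abs(a) % 4 == 3
    return is_prime(a * a + b * b)

print(is_gaussian_prime(0, 7))  # True: 7 is a prime of the form 4n + 3
print(is_gaussian_prime(2, 1))  # True: norm 5 is prime
print(is_gaussian_prime(1, 1))  # True: norm 2 is prime (1 + i is ramified)
print(is_gaussian_prime(5, 0))  # False: 5 = (2 + i)(2 - i)
```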
Unique factorization
As for every unique factorization domain, every Gaussian integer may be factored as a product of a unit and Gaussian primes, and this factorization is unique up to the order of the factors, and the replacement of any prime by any of its associates (together with a corresponding change of the unit factor).
If one chooses, once for all, a fixed Gaussian prime for each equivalence class of associated primes, and if one takes only these selected primes in the factorization, then one obtains a prime factorization which is unique up to the order of the factors. With the choices described above, the resulting unique factorization has the form $u(1 + i)^{e_0} p_1^{e_1} \cdots p_k^{e_k},$
where $u$ is a unit (that is, $u \in \{1, -1, i, -i\}$), $e_0$ and $k$ are nonnegative integers, $e_1, \ldots, e_k$ are positive integers, and $p_1, \ldots, p_k$ are distinct Gaussian primes such that, depending on the choice of selected associates,
either $p_j = a_j + i b_j$ with $a_j$ odd and positive, and $b_j$ even,
or the remainder of the Euclidean division of $p_j$ by $2 + 2i$ equals 1 (this is Gauss's original choice).
An advantage of the second choice is that the selected associates behave well under products for Gaussian integers of odd norm. On the other hand, the selected associates for the real Gaussian primes are negative integers. For example, the factorization of 231 in the integers, and with the first choice of associates, is $3 \times 7 \times 11$, while it is $(-1) \times (-3) \times (-7) \times (-11)$ with the second choice.
Gaussian rationals
The field of Gaussian rationals is the field of fractions of the ring of Gaussian integers. It consists of the complex numbers whose real and imaginary parts are both rational.
The ring of Gaussian integers is the integral closure of the integers in the Gaussian rationals.
This implies that Gaussian integers are quadratic integers and that a Gaussian rational is a Gaussian integer, if and only if it is a solution of an equation $x^2 + cx + d = 0,$
with $c$ and $d$ integers. In fact $a + bi$ is a solution of the equation $x^2 - 2ax + a^2 + b^2 = 0,$
and this equation has integer coefficients if and only if $2a$ and $a^2 + b^2$ are both integers.
Greatest common divisor
As for any unique factorization domain, a greatest common divisor (gcd) of two Gaussian integers $a, b$ is a Gaussian integer $d$ that is a common divisor of $a$ and $b$, which has all common divisors of $a$ and $b$ as divisor. That is (where $\mid$ denotes the divisibility relation),
$d \mid a$ and $d \mid b$, and
$c \mid a$ and $c \mid b$ implies $c \mid d$.
Thus, greatest is meant relative to the divisibility relation, and not for an ordering of the ring (for integers, both meanings of greatest coincide).
More technically, a greatest common divisor of and is a generator of the ideal generated by and (this characterization is valid for principal ideal domains, but not, in general, for unique factorization domains).
The greatest common divisor of two Gaussian integers is not unique, but is defined up to the multiplication by a unit. That is, given a greatest common divisor $d$ of $a$ and $b$, the greatest common divisors of $a$ and $b$ are $d$, $-d$, $id$, and $-id$.
There are several ways for computing a greatest common divisor of two Gaussian integers $a$ and $b$. When one knows the prime factorizations of $a$ and $b$, say $a = i^k \prod_m p_m^{\nu_m}$ and $b = i^n \prod_m p_m^{\mu_m},$
where the primes $p_m$ are pairwise non-associated and the exponents $\nu_m, \mu_m$ are nonnegative, a greatest common divisor is $\prod_m p_m^{\lambda_m},$
with $\lambda_m = \min(\nu_m, \mu_m)$.
Unfortunately, except in simple cases, the prime factorization is difficult to compute, and the Euclidean algorithm leads to a much easier (and faster) computation. This algorithm consists of replacing the input $(a, b)$ by $(b, r)$, where $r$ is the remainder of the Euclidean division of $a$ by $b$, and repeating this operation until getting a zero remainder, that is a pair $(d, 0)$. This process terminates, because, at each step, the norm of the second Gaussian integer decreases. The resulting $d$ is a greatest common divisor, because (at each step) $b$ and $r = a - bq$ have the same divisors as $a$ and $b$, and thus the same greatest common divisor. (An implementation is sketched at the end of this section.)
This method of computation works always, but is not as simple as for integers because Euclidean division is more complicated. Therefore, a third method is often preferred for hand-written computations. It consists in remarking that the norm $N(d)$ of the greatest common divisor of $a$ and $b$ is a common divisor of $N(a)$, $N(b)$, and $N(a + b)$. When the greatest common divisor $D$ of these three integers has few factors, then it is easy to test, for common divisor, all Gaussian integers with a norm dividing $D$.
For example, if $a = 5 + 3i$, and $b = 2 - 8i$, one has $N(a) = 34$, $N(b) = 68$, and $N(a + b) = 74$. As the greatest common divisor of the three norms is 2, the greatest common divisor of $a$ and $b$ has 1 or 2 as a norm. As a Gaussian integer of norm 2 is necessarily associated to $1 + i$, and as $1 + i$ divides $a$ and $b$, then the greatest common divisor is $1 + i$.
If $b$ is replaced by its conjugate $\bar b = 2 + 8i$, then the greatest common divisor of the three norms is 34, the norm of $a$, thus one may guess that the greatest common divisor is $a$, that is, that $a \mid b$. In fact, one has $2 + 8i = (5 + 3i)(1 + i)$.
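The Euclidean algorithm described above is only a few lines of code. A minimal Python sketch, reusing the nearest-integer division from the earlier sketch (names illustrative; the result is only defined up to a unit factor):

```python
def norm(z):
    a, b = z
    return a * a + b * b

def gauss_divmod(a, b):
    """Division with remainder of norm at most N(b)/2 (see the earlier sketch)."""
    (ar, ai), (br, bi) = a, b
    n = norm(b)
    m = (2 * (ar * br + ai * bi) + n) // (2 * n)
    k = (2 * (ai * br - ar * bi) + n) // (2 * n)
    return (m, k), (ar - (br * m - bi * k), ai - (br * k + bi * m))

def gauss_gcd(a, b):
    """Euclidean algorithm: replace (a, b) by (b, r) until the remainder is zero."""
    while b != (0, 0):
        a, b = b, gauss_divmod(a, b)[1]
    return a  # a greatest common divisor, unique only up to the units 1, -1, i, -i

# The worked example from the text: gcd(5 + 3i, 2 - 8i) is an associate of 1 + i.
print(gauss_gcd((5, 3), (2, -8)))  # (1, -1), i.e. 1 - i = -i * (1 + i)
```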
Congruences and residue classes
Given a Gaussian integer $z_0$, called a modulus, two Gaussian integers $z_1, z_2$ are congruent modulo $z_0$, if their difference is a multiple of $z_0$, that is if there exists a Gaussian integer $q$ such that $z_1 - z_2 = q z_0$. In other words, two Gaussian integers are congruent modulo $z_0$, if their difference belongs to the ideal generated by $z_0$. This is denoted as $z_1 \equiv z_2 \pmod{z_0}$.
The congruence modulo $z_0$ is an equivalence relation (also called a congruence relation), which defines a partition of the Gaussian integers into equivalence classes, called here congruence classes or residue classes. The set of the residue classes is usually denoted $\mathbf{Z}[i]/z_0\mathbf{Z}[i]$, or $\mathbf{Z}[i]/(z_0)$, or simply $\mathbf{Z}[i]/z_0$.
The residue class of a Gaussian integer $a$ is the set $\bar a = \{z \in \mathbf{Z}[i] \mid z \equiv a \pmod{z_0}\}$
of all Gaussian integers that are congruent to $a$. It follows that $\bar a = \bar b$ if and only if $a \equiv b \pmod{z_0}$.
Addition and multiplication are compatible with congruences. This means that $a_1 \equiv b_1 \pmod{z_0}$ and $a_2 \equiv b_2 \pmod{z_0}$ imply $a_1 + a_2 \equiv b_1 + b_2 \pmod{z_0}$ and $a_1 a_2 \equiv b_1 b_2 \pmod{z_0}$.
This defines well-defined operations (that is, independent of the choice of representatives) on the residue classes: $\bar a + \bar b = \overline{a + b}$ and $\bar a \cdot \bar b = \overline{ab}$.
With these operations, the residue classes form a commutative ring, the quotient ring of the Gaussian integers by the ideal generated by $z_0$, which is also traditionally called the residue class ring modulo $z_0$ (for more details, see Quotient ring).
Examples
There are exactly two residue classes for the modulus $1 + i$, namely $\bar 0$ (all multiples of $1 + i$), and $\bar 1$, which form a checkerboard pattern in the complex plane. These two classes form thus a ring with two elements, which is, in fact, a field, the unique (up to an isomorphism) field with two elements, and may thus be identified with the integers modulo 2. These two classes may be considered as a generalization of the partition of integers into even and odd integers. Thus one may speak of even and odd Gaussian integers (Gauss divided further even Gaussian integers into even, that is divisible by 2, and half-even).
For the modulus 2 there are four residue classes, namely $\bar 0, \bar 1, \bar i, \overline{1 + i}$. These form a ring with four elements, in which $x + x = 0$ for every $x$. Thus this ring is not isomorphic with the ring of integers modulo 4, another ring with four elements. One has $\bar i^2 = \bar 1$, and thus this ring is not the finite field with four elements, nor the direct product of two copies of the ring of integers modulo 2.
For the modulus $2 + 2i$ there are eight residue classes, namely $\bar 0, \pm\bar 1, \pm\bar i, \overline{1 \pm i}, \bar 2$, whereof four contain only even Gaussian integers and four contain only odd Gaussian integers.
Describing residue classes
Given a modulus $z_0$, all elements of a residue class have the same remainder for the Euclidean division by $z_0$, provided one uses the division with unique quotient and remainder, which is described above. Thus enumerating the residue classes is equivalent with enumerating the possible remainders. This can be done geometrically in the following way.
In the complex plane, one may consider a square grid, whose squares are delimited by the two families of lines $\{z_0(m + \tfrac12 + iy) \mid y \in \mathbf{R}\}$ and $\{z_0(x + i(n + \tfrac12)) \mid x \in \mathbf{R}\}$, with $m$ and $n$ integers (blue lines in the figure). These divide the plane in semi-open squares (where $m$ and $n$ are integers) $Q_{mn} = \{z_0(x + iy) \mid m - \tfrac12 \le x < m + \tfrac12,\ n - \tfrac12 \le y < n + \tfrac12\}.$
The semi-open intervals that occur in the definition of $Q_{mn}$ have been chosen in order that every complex number belong to exactly one square; that is, the squares $Q_{mn}$ form a partition of the complex plane. One has $Q_{mn} = (m + in)z_0 + Q_{00}.$
This implies that every Gaussian integer is congruent modulo $z_0$ to a unique Gaussian integer in $Q_{00}$ (the green square in the figure), which is its remainder for the division by $z_0$. In other words, every residue class contains exactly one element in $Q_{00}$.
The Gaussian integers in $Q_{00}$ (or on its boundary) are sometimes called minimal residues because their norms are not greater than the norms of any other Gaussian integers in the same residue class (Gauss called them absolutely smallest residues).
From this one can deduce by geometrical considerations, that the number of residue classes modulo a Gaussian integer $z_0$ equals its norm $N(z_0)$ (see below for a proof; similarly, for integers, the number of residue classes modulo $n$ is its absolute value $|n|$).
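This count is easy to check numerically. The brute-force Python sketch below reduces every Gaussian integer in a box to a canonical remainder using the nearest-integer division defined earlier, and counts the distinct remainders; the modulus and box size are arbitrary choices for illustration:

```python
def norm(z):
    a, b = z
    return a * a + b * b

def remainder(a, b):
    """Canonical remainder of a modulo b via nearest-integer (rounded) division."""
    (ar, ai), (br, bi) = a, b
    n = norm(b)
    m = (2 * (ar * br + ai * bi) + n) // (2 * n)
    k = (2 * (ai * br - ar * bi) + n) // (2 * n)
    return (ar - (br * m - bi * k), ai - (br * k + bi * m))

z0 = (2, 2)  # modulus 2 + 2i, of norm 8
residues = {remainder((a, b), z0) for a in range(-10, 11) for b in range(-10, 11)}
print(len(residues), norm(z0))  # 8 8: the number of classes equals the norm
```

The rounding is translation-equivariant for integer shifts, so congruent inputs always map to the same canonical remainder, which is why counting distinct remainders counts residue classes.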
Residue class fields
The residue class ring modulo a Gaussian integer $z_0$ is a field if and only if $z_0$ is a Gaussian prime.
If $z_0$ is a decomposed prime or the ramified prime $1 + i$ (that is, if its norm $N(z_0)$ is a prime number, which is either 2 or a prime congruent to 1 modulo 4), then the residue class field has a prime number of elements (that is, $N(z_0)$). It is thus isomorphic to the field of the integers modulo $N(z_0)$.
If, on the other hand, $z_0$ is an inert prime (that is, $N(z_0) = p^2$ is the square of a prime number $p$, which is congruent to 3 modulo 4), then the residue class field has $p^2$ elements, and it is an extension of degree 2 (unique, up to an isomorphism) of the prime field with $p$ elements (the integers modulo $p$).
Primitive residue class group and Euler's totient function
Many theorems (and their proofs) for moduli of integers can be directly transferred to moduli of Gaussian integers, if one replaces the absolute value of the modulus by the norm. This holds especially for the primitive residue class group (also called the multiplicative group of integers modulo $n$ in the integer case) and Euler's totient function. The primitive residue class group of a modulus $z$ is defined as the subset of its residue classes which contains all residue classes $\bar a$ that are coprime to $z$, i.e. those with $\gcd(a, z) = 1$. Obviously, this system builds a multiplicative group. The number of its elements shall be denoted by $\phi(z)$ (analogously to Euler's totient function $\varphi(n)$ for integers $n$).
For Gaussian primes $p$ it immediately follows that $\phi(p) = N(p) - 1$, and for arbitrary composite Gaussian integers $z$
Euler's product formula can be derived as $\phi(z) = N(z) \prod_{p \mid z} \left(1 - \frac{1}{N(p)}\right),$
where the product is to be built over all prime divisors $p$ of $z$ (with $\nu_p > 0$). Also the important theorem of Euler can be directly transferred:
For all $a$ with $\gcd(a, z) = 1$, it holds that $a^{\phi(z)} \equiv 1 \pmod z$.
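Both the product formula and Euler's theorem can be checked by brute force for a small modulus. The following Python sketch (reusing the rounded division and Euclidean gcd from the earlier sketches; all names are illustrative) computes $\phi(3 + i)$ by counting coprime residue classes and verifies $a^{\phi(z)} \equiv 1$ for every unit class:

```python
# Brute-force check of phi and Euler's theorem in Z[i] modulo z = 3 + i.

def norm(z):
    a, b = z
    return a * a + b * b

def divmod_g(a, b):
    (ar, ai), (br, bi) = a, b
    n = norm(b)
    m = (2 * (ar * br + ai * bi) + n) // (2 * n)
    k = (2 * (ai * br - ar * bi) + n) // (2 * n)
    return (m, k), (ar - (br * m - bi * k), ai - (br * k + bi * m))

def gcd_g(a, b):
    while b != (0, 0):
        a, b = b, divmod_g(a, b)[1]
    return a

def mul(u, v):
    (a, b), (c, d) = u, v
    return (a * c - b * d, a * d + b * c)

z = (3, 1)  # 3 + i = (1 + i)(2 - i), so N(z) = 2 * 5 = 10
box = range(-6, 7)
classes = {divmod_g((a, b), z)[1] for a in box for b in box}
units = [c for c in classes if norm(gcd_g(c, z)) == 1]
phi = len(units)  # 4 = 10 * (1 - 1/2) * (1 - 1/5), matching the product formula

one = divmod_g((1, 0), z)[1]  # canonical representative of the class of 1
check = True
for a in units:
    acc = (1, 0)
    for _ in range(phi):
        acc = divmod_g(mul(acc, a), z)[1]
    check = check and (acc == one)
print(phi, check)  # 4 True
```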
Historical background
The ring of Gaussian integers was introduced by Carl Friedrich Gauss in his second monograph on quartic reciprocity (1832). The theorem of quadratic reciprocity (which he had first succeeded in proving in 1796) relates the solvability of the congruence $x^2 \equiv q \pmod p$ to that of $x^2 \equiv p \pmod q$. Similarly, cubic reciprocity relates the solvability of $x^3 \equiv q \pmod p$ to that of $x^3 \equiv p \pmod q$, and biquadratic (or quartic) reciprocity is a relation between $x^4 \equiv q \pmod p$ and $x^4 \equiv p \pmod q$. Gauss discovered that the law of biquadratic reciprocity and its supplements were more easily stated and proved as statements about "whole complex numbers" (i.e. the Gaussian integers) than they are as statements about ordinary whole numbers (i.e. the integers).
In a footnote he notes that the Eisenstein integers are the natural domain for stating and proving results on cubic reciprocity and indicates that similar extensions of the integers are the appropriate domains for studying higher reciprocity laws.
This paper not only introduced the Gaussian integers and proved they are a unique factorization domain, it also introduced the terms norm, unit, primary, and associate, which are now standard in algebraic number theory.
Unsolved problems
Most of the unsolved problems are related to the distribution of Gaussian primes in the plane.
Gauss's circle problem does not deal with the Gaussian integers per se, but instead asks for the number of lattice points inside a circle of a given radius centered at the origin. This is equivalent to determining the number of Gaussian integers with norm less than a given value.
There are also conjectures and unsolved problems about the Gaussian primes. Two of them are:
The real and imaginary axes have the infinite set of Gaussian primes 3, 7, 11, 19, ... and their associates. Are there any other lines that have infinitely many Gaussian primes on them? In particular, are there infinitely many Gaussian primes of the form $1 + ki$?
Is it possible to walk to infinity using the Gaussian primes as stepping stones and taking steps of a uniformly bounded length? This is known as the Gaussian moat problem; it was posed in 1962 by Basil Gordon and remains unsolved.
| Mathematics | Basics | null |
48774 | https://en.wikipedia.org/wiki/Lance | Lance | The English term lance is derived, via Middle English launce and Old French lance, from the Latin lancea, a generic term meaning a spear or javelin employed by both infantry and cavalry, with English initially keeping these generic meanings. It developed later into a term for spear-like weapons specially designed and modified to be part of a "weapon system" for use couched under the arm during a charge, being equipped with special features such as grappers to engage with lance rests attached to breastplates, and vamplates, small circular plates designed to prevent the hand sliding up the shaft upon impact. These specific features were in use by the beginning of the late 14th century.
Though best known as a military and sporting weapon carried by European knights and men-at-arms, the use of lances was widespread throughout East Asia, the Middle East, and North Africa wherever suitable mounts were available. Lances were the main weapon of lancers of the medieval period and beyond, and these troops also carried secondary weapons such as swords, battle axes, war hammers, maces, and daggers for use in hand-to-hand combat, since the lance was often a one-use-per-engagement weapon, becoming embedded in their targets or being broken on impact. Assuming the lance survived the initial impact without breaking, it could also prove inappropriate for more static, closer engagements where its length became a hindrance.
Etymology
The name is derived from the word lancea, the Roman auxiliaries' javelin or throwing spear; although according to the OED, the word may be of Iberian origin. Also compare the Greek λόγχη (lonchē), a term for "spear" or "lance".
A lance in the original sense is a light throwing spear or javelin. The English verb to launch, "fling, hurl, throw", is derived from the term lancea (via Old French lancier), as well as the rarer or poetic to lance. From the 17th century, the term came to refer specifically to spears not thrown, used for thrusting by heavy cavalry, and especially in jousting. The longer types of thrusting spear used by infantry are usually referred to simply as spears or later as pikes, though many other terms existed.
History of use
Late Roman
During the late 3rd century, the weapons of the cavalry attached to each Roman legion evolved from javelins and swords to sometimes include long-reaching lances (contus), which required the use of both hands to thrust.
Middle Ages
The Byzantine cavalry used lances (kontos or kontarion) almost exclusively, often in mixed formations of mounted archers and lancers (cursores et defensores). The Byzantines used lances in both overarm and underarm grips, as well as couched under the arm (held horizontally). The standard kontarion is estimated to have been somewhat shorter than the lance of the medieval knight of Western Europe.
Formations of knights were known to use underarm-couched military lances in full-gallop closed-ranks charges against lines of opposing infantry or cavalry. Two variants on the couched lance charge developed, the French method, en haie, with lancers in a double line, and the German method, with lancers drawn up in a deeper formation which was often wedge-shaped. It is commonly believed that this became the dominant European cavalry tactic in the 11th century after the development of the cantled saddle and stirrups (the Great Stirrup Controversy), and of rowel spurs (which enabled better control of the mount). Cavalry thus outfitted and deployed had a tremendous collective force in their charge, and could shatter most contemporary infantry lines.
Because of the extreme stopping power of a thrusting spear, it quickly became a popular weapon of infantry in the Late Middle Ages. These eventually led to the rise of the longest type of spears, the pike. This adaptation of the cavalry lance to infantry use was largely tasked with stopping lance-armed cavalry charges. During the 14th, 15th, and 16th centuries, these weapons, both mounted and unmounted, were so effective that lancers and pikemen not only became a staple of every Western army, but also became highly sought-after mercenaries. (However, the pike had already been used by Philip II of Macedon in antiquity to great effect, in the form of the sarissa.)
In Europe, a jousting lance was a variation of the knight's lance, modified from its original war design. In jousting, the lance tips would usually be blunt, often spread out like a cup or furniture foot, to provide a wider impact surface designed to unseat the opposing rider without spearing him through. The centre of the shaft of such lances could be designed to be hollow, in order for it to break on impact, as a further safeguard against impalement. They were long, with hand guards built into the lance, often tapering for a considerable portion of the weapon's length. These are the versions most often seen at medieval reenactment festivals. In war, lances were much more like stout spears, balanced for one-handed use and with sharpened tips.
Lance (unit organization)
As a small unit that surrounded a knight when he went into battle during the 14th and 15th centuries, a lance might have consisted of one or two squires, the knight himself, one to three men-at-arms, and possibly an archer. Lances were often combined under the banner of a higher-ranking nobleman to form companies of knights that would act as an ad hoc unit.
17th and 18th century decline in Western Europe
The advent of wheellock technology spelled the end of the lance in Western Europe, with newer types of heavy cavalry such as reiters and cuirassiers spurning the old one-use weapon and increasingly supplanting the older gendarme-type medieval cavalry. While many Renaissance captains, such as Sir Roger Williams, continued to espouse the virtues of the lance, others, such as François de la Noue, openly encouraged its abandonment in the face of the pistol's greater armor-piercing power, handiness, and greater general utility. At the same time, the adoption of pike and shot tactics by most infantry forces would neuter much of the power of the lancer's breakneck charge, making lancers a non-cost-effective type of military unit because of their expensive horses, in comparison to cuirassiers and reiters, who, usually charging only at a trot, could make do with lower-quality mounts. After the success of pistol-armed Huguenot heavy horse against their Royalist counterparts during the French Wars of Religion, most Western European powers started rearming their lancers with pistols, initially as an adjunct weapon and eventually as a replacement, with the Spanish retaining the lance the longest.
Only the Polish–Lithuanian Commonwealth, with its far greater emphasis on cavalry warfare, its large population of szlachta nobility, and the generally lower military technology level of its foes, retained the lance to a considerable degree, with the famously winged Polish hussars having their glory period during the 17th and 18th centuries against a wide variety of enemy forces.
Indigenous use in North America
After the Western introduction of the horse to Native Americans, the Plains Indians adopted the bow and the lance, probably independently, while the American cavalry of the time were armed with the pistol and sabre, firing forward at full gallop.
19th century revival in Western Europe
The mounted lancer experienced a renaissance in the 19th century. This followed the demise of the pike and of body armor during the early 18th century, with the reintroduction of lances coming from Hungary and Poland, which had retained large formations of lance-armed cavalry when these had become more or less obsolescent elsewhere in Europe. Lancers became especially prevalent during and after the Napoleonic Wars: a period when almost all the major European powers reintroduced the lance into their respective cavalry arsenals. Formations of uhlans and other types of cavalry used lances as their primary weapons. The lance was usually employed in initial charges in close formation, with sabres being used in the melee that followed.
The Crimean War saw the use of the lance in the Charge of the Light Brigade. One of the four British regiments involved in the charge, plus the Russian Cossacks who counter-attacked, were armed with this weapon.
During the War of the Triple Alliance (1864–1870), the Paraguayan cavalry made effective use of locally manufactured lances, both of conventional design and of an antique pattern used by gauchos for cattle herding.
The 1860s and 1870s saw the increasingly common use of ash, bamboo, beech, or pine wood for lance shafts of varying lengths, each with steel points and butts, adopted by the uhlan regiments of the Saxon, Württemberg, Bavarian, and Prussian armies.
Twilight of use
In the American Civil War, the 6th Pennsylvania Cavalry Regiment was equipped with lances modeled after Napoleon Bonaparte's forces in France. American troops had never previously used the lance in combat. The lances proved ineffective in battle and were replaced with carbine rifles in 1863.
The Franco-Prussian War of 1870 saw the extensive deployment of cavalry armed with lances on both sides. While the opportunities for decisive use of this weapon proved infrequent during the actual conflict, the entire cavalry corps (93 regiments of hussars, dragoons, cuirassiers, and uhlans) of the post-war Imperial German Army subsequently adopted the lance as a primary weapon. After 1893 the standard German cavalry lance was made of drawn tubular steel, covered with clear lacquer and with a hemp hand-grip; it was the longest version then in use.
The Austrian cavalry had included regiments armed with lances since 1784. In 1884, the lance ceased to be carried either as an active service or parade weapon. However the eleven Uhlan regiments continued in existence until 1918, armed with the standard cavalry sabre.
During the Second Boer War, British troops successfully used the lance on one occasion – against retreating Boers at the Battle of Elandslaagte (21 October 1899). However, the Boers made effective use of trench warfare, rapid-fire field artillery, continuous-fire machine guns, and accurate long-range repeating rifles from the beginning of the war. The combined effect was devastating, and much of the British cavalry was deployed as mounted infantry, dismounting to fight on foot. For some years after the Boer War, the six British lancer regiments officially carried the lance only for parades and other ceremonial duties. At the regimental level, training in the use of the lance continued, ostensibly to improve recruit riding skills. In 1909, the bamboo or ash lance with a steel head was reauthorized for general use on active service.
The Russian cavalry (except for the Cossacks) discarded the lance in the late 19th century, but in 1907, it was reissued for use by the front line of each squadron when charging in open formation. In its final form, the Russian lance was a long metal tube with a steel head and leather arm strap. It was intended as a shock weapon in the charge, to be dropped after impact and replaced by the sword for close combat in a melee. While demoralizing to an opponent, the lance was recognized as being an awkward encumbrance in forested regions.
The relative value of the lance and the sword as a principal weapon for mounted troops was an issue of dispute in the years immediately preceding World War I. Opponents of the lance argued that the weapon was clumsy, conspicuous, easily deflected, and inefficient in a melee. Arguments favoring the retention of the lance focused on the impact on morale of having charging cavalry preceded by "a hedge of steel" and on the effectiveness of the weapon against fleeing opponents.
World War I and after
Lances were still in use by the British, Turkish, Italian, Spanish, French, Belgian, Indian, German, and Russian armies at the outbreak of World War I. In initial cavalry skirmishes in France this antique weapon proved ineffective, German uhlans being "hampered by their long lances and a good many threw them away". A major action involving repeated charges by four regiments of German cavalry, all armed with lances, at Halen on 12 August 1914 was unsuccessful. Amongst the Belgian defenders was one regiment of lancers who fought dismounted.
With the advent of trench warfare, lances and the cavalry that carried them ceased to play a significant role. A Russian cavalry officer whose regiment carried lances throughout the war recorded only one instance where an opponent was killed by this weapon.
The Greco-Turkish War (1919–1922) saw an unexpected revival of lances amongst the cavalry of the Turkish National Army. During the successful Turkish offensives of the final stages of the war across the open plains of Asia Minor, Turkish mounted troops armed with bamboo-shafted lances taken from military storage inflicted heavy losses on the retreating Greek Army.
The cavalry branches of most armies which still retained lances as a service weapon at the end of World War I generally discarded them for all but ceremonial occasions during the 1920s and 1930s. There were exceptions during this era, such as the Polish cavalry, which retained the lance for combat use until 1934 or 1937 (sources differ), but contrary to popular legend did not make use of it in World War II. The German cavalry retained the lance (Stahlrohrlanze) as a service weapon until 1927, as did the British cavalry until 1928. The Argentine cavalry were documented as carrying lances into the 1940s, but these appear to have been used for recruit riding instruction rather than as serious preparation for active service.
Use as flagstaff
The United States Cavalry used a lance-like shaft as a flagstaff.
Mounted police use
When the Canadian North-West Mounted Police was established, it was modeled on certain British cavalry units that used lances. It made limited use of the weapon in small detachments during the 1870s, primarily to impress indigenous peoples.
The modern Royal Canadian Mounted Police, the North-West Mounted Police's descendant, employs ceremonial, though functional, lances made of male bamboo. They feature a crimped swallowtail pennant, red above and white below.
The New South Wales Mounted Police, based at Redfern Barracks, Sydney, Australia, carry a lance with a navy blue and white pennant on ceremonial occasions.
Other weapons
"Lance" is also the name given by some anthropologists to the light flexible javelins (technically darts) thrown by atlatls (spear-throwing sticks), but these are usually called "atlatl javelins". Some were not much larger than arrows, and were typically feather-fletched like an arrow and unlike the vast majority of spears and javelins (one exception would be several instances of the many types of ballista bolt, a mechanically thrown spear).
A "tilting-spear" is a heraldic term for a lance.
| Technology | Polearms | null |
48775 | https://en.wikipedia.org/wiki/Halberd | Halberd | A halberd (also called halbard, halbert or Swiss voulge) is a two-handed polearm that came to prominent use from the 13th to 16th centuries. The halberd consists of an axe blade topped with a spike mounted on a long shaft. It can have a hook or thorn on the back side of the axe blade for grappling mounted combatants and protecting allied soldiers, typically musketeers. The halberd was usually long.
The word halberd is cognate with the German word Hellebarde, deriving from Middle High German halm (handle) and barte (battleaxe) joined to form helmbarte. Troops that used the weapon were called halberdiers. The word has also been used to describe a weapon of the early Bronze Age in Western Europe. This consisted of a blade mounted on a pole at a right angle.
History
The halberd is first mentioned in a work by the 13th-century German poet Konrad von Würzburg. John of Winterthur described it as a new weapon used by the Swiss at the Battle of Morgarten in 1315. The halberd was inexpensive to produce and very versatile in battle. As the halberd was eventually refined, its point was more fully developed to allow it to better deal with spears and pikes (and to push back approaching horsemen), as was the hook opposite the axe head, which could be used to pull horsemen to the ground. A Swiss peasant used a halberd to kill Charles the Bold, Duke of Burgundy, at the Battle of Nancy, decisively ending the Burgundian Wars.
The halberd was the primary weapon of the early Swiss armies in the 14th and early 15th centuries. Later, the Swiss added the pike to better repel knightly attacks and roll over enemy infantry formations, with the halberd, hand-and-a-half sword, or the dagger known as the Schweizerdolch used for closer combat. The German Landsknechte, who imitated Swiss warfare methods, also used the pike, supplemented by the halberd—but their side arm of choice was a short sword called the Katzbalger.
As long as pikemen fought other pikemen, the halberd remained a useful supplemental weapon for push of pike, but when their role became more defensive, protecting the slow-loading arquebusiers and matchlock musketeers from sudden attacks by cavalry, the percentage of halberdiers in the pike units steadily decreased. By 1588, official Dutch infantry composition was down to 39% arquebuses, 34% pikes, 13% muskets, 9% halberds, and 2% one-handed swords. By 1600, troops armed exclusively with swords had disappeared from the field, and the halberd survived only as a sergeant's weapon.
Researchers initially suspected that a halberd or a bill had sliced through the back of King Richard III's skull at the Battle of Bosworth Field on 22 August 1485, exposing the brain and killing him during the battle; they were later able to confirm that the weapon was a halberd.
While rarer than it had been from the late 15th to mid-16th centuries, the halberd was still used infrequently as an infantry weapon well into the mid-17th century. The armies of the Catholic League in 1625, for example, had halberdiers comprising 7% of infantry units, with musketeers comprising 58% and armored pikemen 35%. By 1627 this had changed to 65% muskets, 20% pikes, and 15% halberds. A near-contemporary depiction of the 1665 Battle of Montes Claros at the Palace of the Marquises of Fronteira shows a minority of the Portuguese and Spanish soldiers armed with halberds, while Antonio de Pereda's 1635 painting El Socorro a Génova, depicting the Relief of Genoa, shows all the soldiers armed with halberds. The most consistent users of the halberd in the Thirty Years' War were German sergeants, who would carry one as a sign of rank. While they could use them in melee combat, more often they were used for dressing the ranks by grasping the shaft in both hands and pushing it against several men simultaneously. They could also be used to push pikes or muskets up or down, especially to stop overexcited musketeers from firing prematurely.
The halberd has been used as a court bodyguard weapon for centuries, and is still the ceremonial weapon of the Swiss Guard in the Vatican and the Alabarderos (Halberdiers) Company of the Spanish Royal Guard. The halberd was one of the polearms sometimes carried by lower-ranking officers in European infantry units in the 16th through 18th centuries. In the British army, sergeants continued to carry halberds until 1793, when they were replaced by spontoons. The 18th-century halberd had, however, become simply a symbol of rank with no sharpened edge and insufficient strength to use as a weapon. It served as an instrument for ensuring that infantrymen in ranks stood correctly aligned with each other and that their muskets were aimed at the correct level.
The development of the halberd
The word helmbarte, or variations thereof, shows up in German texts from the 13th century onwards. At that point the halberd was not sharply distinct from the other types of broad axes or bardiches used all over Europe. In the late 13th century the weapon started to develop into a distinct form, with the top of the blade acquiring a more acute thrusting point. This form of the halberd is sometimes erroneously called a voulge or a Swiss voulge, but there is no evidence for the historical usage of these terms for this weapon. There were variations of these weapons with spikes on the back, though also plenty without. In the early 15th century the construction changed to incorporate sockets in the blade, instead of the hoops of previous designs. With this development, back spikes became directly integrated into the blade construction and a universal part of the halberd design.
Similar and related polearms
Bardiche, a type of two-handed battle axe known in the 16th and 17th centuries in Eastern Europe
Bill, similar to a halberd but with a hooked blade form
Ge or dagger-axe, a Chinese weapon in use from the Shang dynasty (est. 1500 BC) that had a dagger-shaped blade mounted perpendicular to a spearhead
Fauchard, a curved blade atop a pole that was used in Europe between the 11th and 14th centuries
Guisarme, a medieval bladed weapon on the end of a long pole; later designs implemented a small reverse spike on the back of the blade
Glaive, a large blade mounted on the end of a pole
Guandao, a Chinese polearm from the 3rd century AD that had a heavy curved blade with a spike at the back
Ji (戟), a Chinese polearm combining a spear and dagger-axe
Kamayari, a Japanese spear with blade offshoots
Lochaber axe, a Scottish weapon that had a heavy blade attached to a pole in a similar fashion to a voulge
Naginata, a Japanese weapon that had a blade attached by a sword guard to a wooden shaft
Partisan, a large double-bladed spearhead mounted on a long shaft that had protrusions on either side for parrying sword thrusts
Poleaxe, a type of polearm with an axehead or hammerhead on the sides with either a spike or spearhead at the top and mounted on a long shaft. It was developed in the 14th century and remained in use until the 16th century to breach the plate armour worn by European knights and men-at-arms
Ranseur, a polearm consisting of a spearhead affixed with a cross hilt at its base derived from the earlier spetum
Spontoon, a 17th-century weapon that consisted of a large blade with two side blades mounted on a long pole, considered a more elaborate pike
Voulge, a crude single-edged blade bound to a wooden shaft
Tabarzin, a type of battle axe from the Middle East
War scythe, an improvised weapon that consisted of a blade from a scythe attached vertically to a shaft
Welsh hook, similar to a halberd and thought to originate from a forest-bill
Woldo, a Korean polearm that had a crescent-shaped blade mounted on a long shaft, similar in construction to the Chinese guandao, and primarily served as a symbol of the Royal Guard
Yue, a Chinese axe with a long shaft
Gallery
| Technology | Polearms | null |
48791 | https://en.wikipedia.org/wiki/Pathology | Pathology | Pathology is the study of disease. The word pathology also refers to the study of disease in general, incorporating a wide range of biology research fields and medical practices. However, when used in the context of modern medical treatment, the term is often used in a narrower fashion to refer to processes and tests that fall within the contemporary medical field of "general pathology", an area that includes a number of distinct but inter-related medical specialties that diagnose disease, mostly through analysis of tissue and human cell samples. Idiomatically, "a pathology" may also refer to the predicted or actual progression of particular diseases (as in the statement "the many different forms of cancer have diverse pathologies", in which case a more proper choice of word would be "pathophysiologies"). The suffix pathy is sometimes used to indicate a state of disease in cases of both physical ailment (as in cardiomyopathy) and psychological conditions (such as psychopathy). A physician practicing pathology is called a pathologist.
As a field of general inquiry and research, pathology addresses components of disease: cause, mechanisms of development (pathogenesis), structural alterations of cells (morphologic changes), and the consequences of changes (clinical manifestations). In common medical practice, general pathology is mostly concerned with analyzing known clinical abnormalities that are markers or precursors for both infectious and non-infectious disease, and is conducted by experts in one of two major specialties, anatomical pathology and clinical pathology. Further divisions in specialty exist on the basis of the involved sample types (comparing, for example, cytopathology, hematopathology, and histopathology), organs (as in renal pathology), and physiological systems (oral pathology), as well as on the basis of the focus of the examination (as with forensic pathology).
Pathology is a significant field in modern medical diagnosis and medical research.
Etymology
The term pathology derives, via Latin, from the Ancient Greek roots pathos, meaning "experience" or "suffering", and -logia, meaning "study of". The term is of early 16th-century origin and became increasingly popularized after the 1530s.
History
The study of pathology, including the detailed examination of the body through dissection and inquiry into specific maladies, dates back to antiquity. Rudimentary understanding of many conditions was present in most early societies and is attested to in the records of the earliest historical societies, including those of the Middle East, India, and China. By the Hellenic period of ancient Greece, a concerted causal study of disease was underway (see Medicine in ancient Greece), with many notable early physicians (such as Hippocrates, for whom the modern Hippocratic Oath is named) having developed methods of diagnosis and prognosis for a number of diseases. The medical practices of the Romans and those of the Byzantines continued from these Greek roots, but, as with many areas of scientific inquiry, growth in the understanding of medicine stagnated somewhat after the Classical Era, though it continued to develop slowly throughout numerous cultures. Notably, many advances were made in the medieval era of Islam (see Medicine in medieval Islam), during which numerous texts on complex pathologies were produced, also based on the Greek tradition. Even so, growth in the complex understanding of disease mostly languished until knowledge and experimentation again began to proliferate in the Renaissance, Enlightenment, and Baroque eras, following the resurgence of the empirical method at new centers of scholarship. By the 17th century, the study of rudimentary microscopy was underway and examination of tissues had led British Royal Society member Robert Hooke to coin the word "cell", setting the stage for later germ theory.
Modern pathology began to develop as a distinct field of inquiry during the 19th century, through natural philosophers and physicians who studied disease in what they informally termed "pathological anatomy" or "morbid anatomy". However, pathology as a formal area of specialty was not fully developed until the late 19th and early 20th centuries, with the advent of the detailed study of microbiology. In the 19th century, physicians had begun to understand that disease-causing pathogens, or "germs" (a catch-all for disease-causing, or pathogenic, microbes, such as bacteria, viruses, fungi, amoebae, molds, protists, and prions), existed and were capable of reproduction and multiplication, replacing earlier beliefs in humors or even spiritual agents that had dominated for much of the previous 1,500 years in European medicine. With the new understanding of causative agents, physicians began to compare the characteristics of one germ's symptoms as they developed within an affected individual to another germ's characteristics and symptoms. This approach led to the foundational understanding that diseases are able to replicate themselves, and that they can have many profound and varied effects on the human host. To determine causes of diseases, medical experts used the most common and widely accepted assumptions or symptoms of their times, a general principle of approach that persists in modern medicine.
Modern medicine was particularly advanced by further developments of the microscope for analyzing tissues, to which Rudolf Virchow contributed significantly, leading to a slew of research developments.
By the late 1920s to early 1930s, pathology was deemed a medical specialty. Combined with developments in the understanding of general physiology, by the beginning of the 20th century, the study of pathology had begun to split into a number of distinct fields, resulting in the development of a large number of modern specialties within pathology and related disciplines of diagnostic medicine.
General pathology
The modern practice of pathology is divided into a number of subdisciplines within the distinct but deeply interconnected aims of biological research and medical practice. Biomedical research into disease incorporates the work of a vast variety of life science specialists, whereas, in most parts of the world, to be licensed to practice pathology as a medical specialty, one has to complete medical school and secure a license to practice medicine. Structurally, the study of disease is divided into many different fields that study or diagnose markers for disease using methods and technologies particular to specific scales, organs, and tissue types.
Anatomical pathology
Anatomical pathology (Commonwealth) or anatomic pathology (United States) is a medical specialty that is concerned with the diagnosis of disease based on the gross, microscopic, chemical, immunologic and molecular examination of organs, tissues, and whole bodies (as in a general examination or an autopsy). Anatomical pathology is itself divided into subfields, the main divisions being surgical pathology, cytopathology, and forensic pathology. Anatomical pathology is one of two main divisions of the medical practice of pathology, the other being clinical pathology, the diagnosis of disease through the laboratory analysis of bodily fluids and tissues. Sometimes, pathologists practice both anatomical and clinical pathology, a combination known as general pathology.
Cytopathology
Cytopathology (sometimes referred to as "cytology") is a branch of pathology that studies and diagnoses diseases on the cellular level. It is usually used to aid in the diagnosis of cancer, but also helps in the diagnosis of certain infectious diseases and other inflammatory conditions as well as thyroid lesions, diseases involving sterile body cavities (peritoneal, pleural, and cerebrospinal), and a wide range of other body sites. Cytopathology is generally used on samples of free cells or tissue fragments (in contrast to histopathology, which studies whole tissues) and cytopathologic tests are sometimes called smear tests because the samples may be smeared across a glass microscope slide for subsequent staining and microscopic examination. However, cytology samples may be prepared in other ways, including cytocentrifugation.
Dermatopathology
Dermatopathology is a subspecialty of anatomic pathology that focuses on the skin and the rest of the integumentary system as an organ. It is unique in that there are two paths a physician can take to obtain the specialization. All general pathologists and general dermatologists train in the pathology of the skin, so the term dermatopathologist denotes either of these who has reached a certain level of accreditation and experience; in the US, either a general pathologist or a dermatologist can undergo a one- to two-year fellowship in the field of dermatopathology. The completion of this fellowship allows one to take a subspecialty board examination and become a board-certified dermatopathologist. Dermatologists are able to recognize most skin diseases based on their appearances, anatomic distributions, and behavior. Sometimes, however, those criteria do not lead to a conclusive diagnosis, and a skin biopsy is taken to be examined under the microscope using usual histological tests. In some cases, additional specialized testing needs to be performed on biopsies, including immunofluorescence, immunohistochemistry, electron microscopy, flow cytometry, and molecular-pathologic analysis. One of the greatest challenges of dermatopathology is its scope. More than 1,500 different disorders of the skin exist, including cutaneous eruptions ("rashes") and neoplasms. Therefore, dermatopathologists must maintain a broad base of knowledge in clinical dermatology and be familiar with several other specialty areas in medicine.
Forensic pathology
Forensic pathology focuses on determining the cause of death by post-mortem examination of a corpse or partial remains. An autopsy is typically performed by a coroner or medical examiner, often during criminal investigations; in this role, coroners and medical examiners are also frequently asked to confirm the identity of a corpse. The requirements for becoming a licensed practitioner of forensic pathology vary from country to country (and even within a given nation), but typically the minimal requirement is a medical doctorate with a specialty in general or anatomical pathology with subsequent study in forensic medicine. The methods forensic scientists use to determine death include examination of tissue specimens to identify the presence or absence of natural disease and other microscopic findings, interpretations of toxicology on body tissues and fluids to determine the chemical cause of overdoses, poisonings or other cases involving toxic agents, and examinations of physical trauma. Forensic pathology is a major component in the trans-disciplinary field of forensic science.
Histopathology
Histopathology refers to the microscopic examination of various forms of human tissue. Specifically, in clinical medicine, histopathology refers to the examination of a biopsy or surgical specimen by a pathologist, after the specimen has been processed and histological sections have been placed onto glass slides. This contrasts with the methods of cytopathology, which uses free cells or tissue fragments. Histopathological examination of tissues starts with surgery, biopsy, or autopsy. The tissue is removed from the body of an organism and then placed in a fixative that stabilizes the tissues to prevent decay. The most common fixative is formalin, although frozen sections are also widely used. To see the tissue under a microscope, the sections are stained with one or more pigments. The aim of staining is to reveal cellular components; counterstains are used to provide contrast. Histochemistry refers to the science of using chemical reactions between laboratory chemicals and components within tissue. The histological slides are then interpreted diagnostically and the resulting pathology report describes the histological findings and the opinion of the pathologist. In the case of cancer, this represents the tissue diagnosis required for most treatment protocols.
Neuropathology
Neuropathology is the study of disease of nervous system tissue, usually in the form of either surgical biopsies or sometimes whole brains in the case of autopsy. Neuropathology is a subspecialty of anatomic pathology, neurology, and neurosurgery. In many English-speaking countries, neuropathology is considered a subfield of anatomical pathology. A physician who specializes in neuropathology, usually by completing a fellowship after a residency in anatomical or general pathology, is called a neuropathologist. In day-to-day clinical practice, a neuropathologist generates diagnoses for patients. If a disease of the nervous system is suspected, and the diagnosis cannot be made by less invasive methods, a biopsy of nervous tissue is taken from the brain or spinal cord to aid in diagnosis. Biopsy is usually requested after a mass is detected by medical imaging. With autopsies, the principal work of the neuropathologist is to help in the post-mortem diagnosis of various conditions that affect the central nervous system. Biopsies may also be taken from the skin. Epidermal nerve fiber density testing (ENFD) is a more recently developed neuropathology test in which a punch skin biopsy is taken to identify small fiber neuropathies by analyzing the nerve fibers of the skin. This test is becoming available in select labs as well as many universities; it is replacing the traditional nerve biopsy test because it is less invasive.
Pulmonary pathology
Pulmonary pathology is a subspecialty of anatomic (and especially surgical) pathology that deals with diagnosis and characterization of neoplastic and non-neoplastic diseases of the lungs and thoracic pleura. Diagnostic specimens are often obtained via bronchoscopic transbronchial biopsy, CT-guided percutaneous biopsy, or video-assisted thoracic surgery. These tests can be necessary to distinguish between infection, inflammation, and fibrotic conditions.
Renal pathology
Renal pathology is a subspecialty of anatomic pathology that deals with the diagnosis and characterization of disease of the kidneys. In a medical setting, renal pathologists work closely with nephrologists and transplant surgeons, who typically obtain diagnostic specimens via percutaneous renal biopsy. The renal pathologist must synthesize findings from traditional microscope histology, electron microscopy, and immunofluorescence to obtain a definitive diagnosis. Medical renal diseases may affect the glomerulus, the tubules and interstitium, the vessels, or a combination of these compartments.
Surgical pathology
Surgical pathology is one of the primary areas of practice for most anatomical pathologists. Surgical pathology involves the gross and microscopic examination of surgical specimens, as well as biopsies submitted by surgeons and non-surgeons such as general internists, medical subspecialists, dermatologists, and interventional radiologists. Often an excised tissue sample is the best and most definitive evidence of disease (or lack thereof) in cases where tissue is surgically removed from a patient. These determinations are usually accomplished by a combination of gross (i.e., macroscopic) and histologic (i.e., microscopic) examination of the tissue, and may involve evaluations of molecular properties of the tissue by immunohistochemistry or other laboratory tests.
There are two major types of specimens submitted for surgical pathology analysis: biopsies and surgical resections. A biopsy is a small piece of tissue removed primarily for surgical pathology analysis, most often in order to render a definitive diagnosis. Types of biopsies include core biopsies, which are obtained through the use of large-bore needles, sometimes under the guidance of radiological techniques such as ultrasound, CT scan, or magnetic resonance imaging. Incisional biopsies are obtained through diagnostic surgical procedures that remove part of a suspicious lesion, whereas excisional biopsies remove the entire lesion, and are similar to therapeutic surgical resections. Excisional biopsies of skin lesions and gastrointestinal polyps are very common. The pathologist's interpretation of a biopsy is critical to establishing the diagnosis of a benign or malignant tumor, and can differentiate between different types and grades of cancer, as well as determining the activity of specific molecular pathways in the tumor. Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). These procedures are often intended as definitive surgical treatment of a disease in which the diagnosis is already known or strongly suspected, but pathological analysis of these specimens remains important in confirming the previous diagnosis.
Clinical pathology
Clinical pathology is a medical specialty that is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids such as blood and urine, as well as tissues, using the tools of chemistry, clinical microbiology, hematology and molecular pathology. Clinical pathologists work in close collaboration with medical technologists, hospital administrations, and referring physicians. Clinical pathologists learn to administer a number of visual and microscopic tests and an especially large variety of tests of the biophysical properties of tissue samples involving automated analysers and cultures. Sometimes the general term "laboratory medicine specialist" is used to refer to those working in clinical pathology, including medical doctors, Ph.D.s and doctors of pharmacology. Immunopathology, the study of an organism's immune response to infection, is sometimes considered to fall within the domain of clinical pathology.
Hematopathology
Hematopathology is the study of diseases of blood cells (including constituents such as white blood cells, red blood cells, and platelets) and of the tissues and organs comprising the hematopoietic system. The term hematopoietic system refers to tissues and organs that produce and/or primarily host hematopoietic cells and includes bone marrow, the lymph nodes, thymus, spleen, and other lymphoid tissues. In the United States, hematopathology is a board-certified subspecialty (licensed under the American Board of Pathology) practiced by those physicians who have completed a general pathology residency (anatomic, clinical, or combined) and an additional year of fellowship training in hematopathology. The hematopathologist reviews biopsies of lymph nodes, bone marrow, and other tissues involved by an infiltrate of cells of the hematopoietic system. In addition, the hematopathologist may be in charge of flow cytometric and/or molecular hematopathology studies.
Molecular pathology
Molecular pathology is focused upon the study and diagnosis of disease through the examination of molecules within organs, tissues, or bodily fluids. Molecular pathology is multidisciplinary by nature and shares some aspects of practice with both anatomic pathology and clinical pathology, molecular biology, biochemistry, proteomics, and genetics. It is often applied in a context that is as much scientific as directly medical and encompasses the development of molecular and genetic approaches to the diagnosis and classification of human diseases, the design and validation of predictive biomarkers for treatment response and disease progression, and the susceptibility of individuals of different genetic constitution to particular disorders. The crossover between molecular pathology and epidemiology is represented by the related field of molecular pathological epidemiology. Molecular pathology is commonly used in the diagnosis of cancers, such as melanoma, brainstem glioma, and other brain tumors, and of infectious diseases. Techniques are numerous but include quantitative polymerase chain reaction (qPCR), multiplex PCR, DNA microarray, in situ hybridization, DNA sequencing, antibody-based immunofluorescence tissue assays, molecular profiling of pathogens, and analysis of bacterial genes for antimicrobial resistance. These techniques are based on analyzing samples of DNA and RNA, and also support gene therapy and disease diagnosis more broadly.
Oral and maxillofacial pathology
Oral and maxillofacial pathology is one of nine dental specialties recognized by the American Dental Association and is sometimes considered a specialty of both dentistry and pathology. Oral pathologists must complete three years of postdoctoral training in an accredited program and subsequently obtain diplomate status from the American Board of Oral and Maxillofacial Pathology. The specialty focuses on the diagnosis, clinical management, and investigation of diseases that affect the oral cavity and surrounding maxillofacial structures, including but not limited to odontogenic, infectious, epithelial, salivary gland, bone, and soft tissue pathologies. It also significantly intersects with the field of dental pathology. Although concerned with a broad variety of diseases of the oral cavity, oral pathologists have roles distinct from otorhinolaryngologists ("ear, nose, and throat" specialists) and speech pathologists, the latter of whom help diagnose many neurological or neuromuscular conditions relevant to speech phonology or swallowing. Owing to the accessibility of the oral cavity to non-invasive examination, many conditions in the study of oral disease can be diagnosed, or at least suspected, from gross examination, but biopsies, cell smears, and other tissue analyses remain important diagnostic tools in oral pathology.
Medical training and accreditation
Becoming a pathologist generally requires specialty training after medical school, but individual nations vary in the medical licensing required of pathologists. In the United States, pathologists are physicians (D.O. or M.D.) who have completed a four-year undergraduate program, four years of medical school training, and three to four years of postgraduate training in the form of a pathology residency. Training may be within two primary specialties, as recognized by the American Board of Pathology: anatomical pathology and clinical pathology, each of which requires separate board certification. The American Osteopathic Board of Pathology also recognizes four primary specialties: anatomic pathology, dermatopathology, forensic pathology, and laboratory medicine. Pathologists may pursue specialized fellowship training within one or more subspecialties of either anatomical or clinical pathology. Some of these subspecialties permit additional board certification, while others do not.
In the United Kingdom, pathologists are physicians licensed by the UK General Medical Council. The training to become a pathologist is under the oversight of the Royal College of Pathologists. After four to six years of undergraduate medical study, trainees proceed to a two-year foundation program. Full-time training in histopathology currently lasts between five and five and a half years and includes specialist training in surgical pathology, cytopathology, and autopsy pathology. It is also possible to take a Royal College of Pathologists diploma in forensic pathology, dermatopathology, or cytopathology, recognising additional specialist training and expertise, and to obtain specialist accreditation in forensic pathology, pediatric pathology, and neuropathology. All postgraduate medical training and education in the UK is overseen by the General Medical Council.
In France, pathology is separated into two distinct specialties: anatomical pathology and clinical pathology. Residencies for both last four years. Residency in anatomical pathology is open to physicians only, while clinical pathology is open to both physicians and pharmacists. At the end of the second year of clinical pathology residency, residents can choose between general clinical pathology and a specialization in one of the disciplines, but they cannot practice anatomical pathology, nor can anatomical pathology residents practice clinical pathology.
Overlap with other diagnostic medicine
Though separate fields in terms of medical practice, a number of areas of inquiry in medicine and medical science either overlap greatly with general pathology, work in tandem with it, or contribute significantly to the understanding of the pathology of a given disease or its course in an individual. As a significant portion of all general pathology practice is concerned with cancer, the practice of oncology makes extensive use of both anatomical and clinical pathology in diagnosis and treatment. In particular, biopsy, resection, and blood tests are all examples of pathology work that is essential for the diagnoses of many kinds of cancer and for the staging of cancerous masses. In a similar fashion, the tissue and blood analysis techniques of general pathology are of central significance to the investigation of serious infectious disease and as such inform significantly upon the fields of epidemiology, etiology, immunology, and parasitology. General pathology methods are of great importance to biomedical research into disease, wherein they are sometimes referred to as "experimental" or "investigative" pathology.
Medical imaging is the generating of visual representations of the interior of a body for clinical analysis and medical intervention. Medical imaging reveals details of internal physiology that help medical professionals plan appropriate treatments for tissue infection and trauma. Medical imaging is also central in supplying the biometric data necessary to establish baseline features of anatomy and physiology so as to increase the accuracy with which early or fine-detail abnormalities are detected. These diagnostic techniques are often performed in combination with general pathology procedures and are themselves often essential to developing new understanding of the pathogenesis of a given disease and tracking the progress of disease in specific medical cases. Examples of important subdivisions in medical imaging include radiology (which uses X-ray radiography), magnetic resonance imaging, medical ultrasonography (or ultrasound), endoscopy, elastography, tactile imaging, thermography, medical photography, nuclear medicine, and functional imaging techniques such as positron emission tomography. Though they do not strictly relay images, readings from diagnostic tests involving electroencephalography, magnetoencephalography, and electrocardiography often give hints as to the state and function of certain tissues in the brain and heart respectively.
Pathology informatics
Pathology informatics is a subfield of health informatics concerned with the use of information technology in pathology. It encompasses pathology laboratory operations, data analysis, and the interpretation of pathology-related information.
Key aspects of pathology informatics include:
Laboratory information management systems (LIMS): Implementing and managing computer systems specifically designed for pathology departments. These systems help in tracking and managing patient specimens, results, and other pathology data; a minimal sketch of such a record appears after this list.
Digital pathology: Involves the use of digital technology to create, manage, and analyze pathology images. This includes slide scanning and automated image analysis.
Telepathology: Using technology to enable remote pathology consultation and collaboration.
Quality assurance and reporting: Implementing informatics solutions to ensure the quality and accuracy of pathology processes.
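To make the LIMS idea concrete, the following is a minimal, hypothetical specimen-tracking record in Python. All field names and the status workflow are invented for this sketch; production systems model far more (ordering physicians, audit trails, billing codes, instrument interfaces):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Specimen:
    """Hypothetical LIMS record for a single pathology specimen."""
    accession_id: str               # e.g. "S24-01234" (invented format)
    patient_id: str
    site: str                       # anatomical site the sample came from
    received_at: datetime
    status: str = "received"        # received -> processed -> reported
    findings: list[str] = field(default_factory=list)

    def add_finding(self, finding: str) -> None:
        """Attach a diagnostic finding and advance the workflow state."""
        self.findings.append(finding)
        self.status = "reported"

# Illustrative use only:
case = Specimen("S24-01234", "P-9981", "skin, left forearm", datetime.now())
case.add_finding("benign intradermal nevus; no atypia identified")
print(case.status)  # "reported"
```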
Psychopathology
Psychopathology is the study of mental illness, particularly of severe disorders. Informed heavily by both psychology and neurology, its purpose is to classify mental illness, elucidate its underlying causes, and guide clinical psychiatric treatment accordingly. Although diagnosis and classification of mental norms and disorders is largely the purview of psychiatry (the results of which are guidelines such as the Diagnostic and Statistical Manual of Mental Disorders, which attempt to classify mental disease mostly on behavioural evidence, though not without controversy), the field is also heavily, and increasingly, informed by neuroscience and the other biological cognitive sciences. Mental or social disorders or behaviours seen as generally unhealthy or excessive in a given individual, to the point where they cause harm or severe disruption to the person's lifestyle, are often called "pathological" (e.g., pathological gambling, pathological lying).
Non-humans
Although the vast majority of lab work and research in pathology concerns the development of disease in humans, pathology is of significance throughout the biological sciences. Two main catch-all fields exist to represent most complex organisms capable of serving as host to a pathogen or other form of disease: veterinary pathology (concerned with all non-human species of the kingdom Animalia) and phytopathology, which studies disease in plants.
Veterinary pathology
Veterinary pathology covers a vast array of species, but with a significantly smaller number of practitioners, so understanding of disease in non-human animals, especially as regards veterinary practice, varies considerably by species. Nevertheless, significant amounts of pathology research are conducted on animals, for two primary reasons: 1) the origins of diseases are typically zoonotic in nature, and many infectious pathogens have animal vectors; as such, understanding the mechanisms of action for these pathogens in non-human hosts is essential to the understanding and application of epidemiology; and 2) those animals that share physiological and genetic traits with humans can be used as surrogates for the study of the disease and potential treatments, as well as the effects of various synthetic products. For this reason, as well as their roles as livestock and companion animals, mammals generally have the largest body of research in veterinary pathology. Animal testing remains a controversial practice, even in cases where it is used to research treatment for human disease. As in human medical pathology, the practice of veterinary pathology is customarily divided into the two main fields of anatomical and clinical pathology.
Plant pathology
Although the pathogens and their mechanics differ greatly from those of animals, plants are subject to a wide variety of diseases, including those caused by fungi, oomycetes, bacteria, viruses, viroids, virus-like organisms, phytoplasmas, protozoa, nematodes and parasitic plants. Damage caused by insects, mites, vertebrates, and other small herbivores is not considered a part of the domain of plant pathology. The field is connected to plant disease epidemiology and especially concerned with the horticulture of species that are of high importance to the human diet or other human utility.
| Biology and health sciences | Fields of medicine | Health |
48803 | https://en.wikipedia.org/wiki/Gamma-ray%20burst | Gamma-ray burst | In gamma-ray astronomy, gamma-ray bursts (GRBs) are immensely energetic events occurring in distant galaxies which represent the brightest and "most powerful class of explosion in the universe." These extreme electromagnetic events are second only to the Big Bang as the most energetic and luminous phenomenon ever known. Gamma-ray bursts can last from a few milliseconds to several hours. After the initial flash of gamma rays, a longer-lived afterglow is emitted, usually in the longer wavelengths of X-ray, ultraviolet, optical, infrared, microwave or radio frequencies.
The intense radiation of most observed GRBs is thought to be released during a supernova or superluminous supernova as a high-mass star implodes to form a neutron star or a black hole. From gravitational wave observations, short-duration (sGRB) events describe a subclass of GRB signals that are now known to originate from the cataclysmic merger of binary neutron stars.
The sources of most GRBs are billions of light years away from Earth, implying that the explosions are both extremely energetic (a typical burst releases as much energy in a few seconds as the Sun will in its entire 10-billion-year lifetime) and extremely rare (a few per galaxy per million years). All GRBs in recorded history have originated from outside the Milky Way galaxy, although a related class of phenomena, soft gamma repeaters, are associated with magnetars within our galaxy. This may be self-evident, since a gamma-ray burst in the Milky Way pointed directly at Earth would likely sterilize the planet or cause a mass extinction. The Late Ordovician mass extinction has been hypothesised by some researchers to have occurred as a result of such a gamma-ray burst.
GRB signals were first detected in 1967 by the Vela satellites, which were designed to detect covert nuclear weapons tests; after an "exhaustive" period of analysis, this was published as academic research in 1973. Following their discovery, hundreds of theoretical models were proposed to explain these bursts, such as collisions between comets and neutron stars. Little information was available to verify these models until the 1997 detection of the first X-ray and optical afterglows and direct measurement of their redshifts using optical spectroscopy, and thus their distances and energy outputs. These discoveries—and subsequent studies of the galaxies and supernovae associated with the bursts—clarified the distance and luminosity of GRBs, definitively placing them in distant galaxies.
History
Gamma-ray bursts were first observed in the late 1960s by the U.S. Vela satellites, which were built to detect gamma radiation pulses emitted by nuclear weapons tested in space. The United States suspected that the Soviet Union might attempt to conduct secret nuclear tests after signing the Nuclear Test Ban Treaty in 1963. On July 2, 1967, at 14:19 UTC, the Vela 4 and Vela 3 satellites detected a flash of gamma radiation unlike any known nuclear weapons signature. Uncertain what had happened but not considering the matter particularly urgent, the team at the Los Alamos National Laboratory, led by Ray Klebesadel, filed the data away for investigation. As additional Vela satellites were launched with better instruments, the Los Alamos team continued to find inexplicable gamma-ray bursts in their data. By analyzing the different arrival times of the bursts as detected by different satellites, the team was able to determine rough estimates for the sky positions of 16 bursts and definitively rule out a terrestrial or solar origin. Contrary to popular belief, the data was never classified. After thorough analysis, the findings were published in 1973 as an Astrophysical Journal article entitled "Observations of Gamma-Ray Bursts of Cosmic Origin".
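The position estimates rested on simple time-of-flight geometry: a burst front arriving with delay Δt at two satellites separated by a baseline d must lie on a cone of half-angle θ about the baseline, with cos θ = cΔt/d; intersecting the cones from several baselines leaves only a few candidate sky patches. A minimal sketch of the geometry (the satellite separation and delay below are illustrative, not the historical Vela data):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cone_half_angle(baseline_m: float, delay_s: float) -> float:
    """Half-angle (radians) of the cone about the two-satellite
    baseline on which the source direction must lie, given the
    difference in burst arrival times at the two detectors."""
    cos_theta = C * delay_s / baseline_m
    if abs(cos_theta) > 1.0:
        raise ValueError("delay exceeds light travel time along the baseline")
    return math.acos(cos_theta)

# Illustrative numbers: two satellites 200,000 km apart, with the
# burst arriving 0.4 s earlier at one than at the other.
theta = cone_half_angle(2.0e8, 0.4)
print(f"source lies on a ring {math.degrees(theta):.1f} degrees off the baseline")
```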
Most early hypotheses of gamma-ray bursts posited nearby sources within the Milky Way Galaxy. From 1991, the Compton Gamma Ray Observatory (CGRO) and its Burst and Transient Source Explorer (BATSE) instrument, an extremely sensitive gamma-ray detector, provided data that showed the distribution of GRBs is isotropic, that is, not biased towards any particular direction in space. If the sources were from within our own galaxy, they would be strongly concentrated in or near the galactic plane. The absence of any such pattern in the case of GRBs provided strong evidence that gamma-ray bursts must come from beyond the Milky Way. However, some Milky Way models are still consistent with an isotropic distribution.
Counterpart objects as candidate sources
For decades after the discovery of GRBs, astronomers searched for a counterpart at other wavelengths: i.e., any astronomical object in positional coincidence with a recently observed burst. Astronomers considered many distinct classes of objects, including white dwarfs, pulsars, supernovae, globular clusters, quasars, Seyfert galaxies, and BL Lac objects. All such searches were unsuccessful, and in a few cases particularly well-localized bursts (those whose positions were determined with what was then a high degree of accuracy) could be clearly shown to have no bright objects of any nature consistent with the position derived from the detecting satellites. This suggested an origin of either very faint stars or extremely distant galaxies. Even the most accurate positions contained numerous faint stars and galaxies, and it was widely agreed that final resolution of the origins of cosmic gamma-ray bursts would require both new satellites and faster communication.
Afterglow
Several models for the origin of gamma-ray bursts postulated that the initial burst of gamma rays should be followed by afterglow: slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. Early searches for this afterglow were unsuccessful, largely because it is difficult to observe a burst's position at longer wavelengths immediately after the initial burst. The breakthrough came in February 1997, when the satellite BeppoSAX detected a gamma-ray burst (GRB 970228); when its X-ray camera was pointed towards the direction from which the burst had originated, it detected fading X-ray emission. The William Herschel Telescope identified a fading optical counterpart 20 hours after the burst. Once the GRB faded, deep imaging was able to identify a faint, distant host galaxy at the location of the GRB as pinpointed by the optical afterglow.
Because of the very faint luminosity of this galaxy, its exact distance was not measured for several years. In the meantime, another major breakthrough occurred with the next event registered by BeppoSAX, GRB 970508. This event was localized within four hours of its discovery, allowing research teams to begin making observations much sooner than for any previous burst. The spectrum of the object revealed a redshift of z = 0.835, placing the burst at a distance of roughly 6 billion light years from Earth. This was the first accurate determination of the distance to a GRB, and together with the discovery of the host galaxy of 970228 it proved that GRBs occur in extremely distant galaxies. Within a few months, the controversy about the distance scale ended: GRBs were extragalactic events originating within faint galaxies at enormous distances. The following year, GRB 980425 was followed within a day by a bright supernova (SN 1998bw), coincident in location, indicating a clear connection between GRBs and the deaths of very massive stars. This burst provided the first strong clue about the nature of the systems that produce GRBs.
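Converting a measured redshift to a distance requires assuming a cosmology. A minimal sketch using the astropy library with its built-in Planck 2018 parameters; because these parameters postdate the 1997 measurement, the light-travel figure comes out somewhat larger than the roughly 6 billion light years quoted above:

```python
from astropy.cosmology import Planck18
import astropy.units as u

z = 0.835  # redshift measured for GRB 970508

# Light-travel distance: the lookback time in Gyr equals the number of
# billions of light years that the photons were en route.
t_lb = Planck18.lookback_time(z)
print(f"light-travel distance ~ {t_lb.to(u.Gyr).value:.1f} billion light years")

# Luminosity distance: the quantity used to convert observed flux
# into an intrinsic energy output.
d_L = Planck18.luminosity_distance(z)
print(f"luminosity distance   ~ {d_L.to(u.Gpc):.2f}")
```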
More recent instruments - launched from 2000
BeppoSAX functioned until 2002 and CGRO (with BATSE) was deorbited in 2000. However, the revolution in the study of gamma-ray bursts motivated the development of a number of additional instruments designed specifically to explore the nature of GRBs, especially in the earliest moments following the explosion. The first such mission, HETE-2, was launched in 2000 and functioned until 2006, providing most of the major discoveries during this period. One of the most successful space missions to date, Swift, was launched in 2004 and as of May 2024 is still operational. Swift is equipped with a very sensitive gamma-ray detector as well as on-board X-ray and optical telescopes, which can be rapidly and automatically slewed to observe afterglow emission following a burst. More recently, the Fermi mission was launched carrying the Gamma-Ray Burst Monitor, which detects bursts at a rate of several hundred per year, some of which are bright enough to be observed at extremely high energies with Fermi's Large Area Telescope. Meanwhile, on the ground, numerous optical telescopes have been built or modified to incorporate robotic control software that responds immediately to signals sent through the Gamma-ray Burst Coordinates Network. This allows the telescopes to rapidly repoint towards a GRB, often within seconds of receiving the signal and while the gamma-ray emission itself is still ongoing.
The Space Variable Objects Monitor is a small X-ray telescope satellite for studying the explosions of massive stars by analysing the resulting gamma-ray bursts. Developed by the China National Space Administration (CNSA), the Chinese Academy of Sciences (CAS), and the French space agency CNES, it was launched on 22 June 2024 (07:00:00 UTC).
The Taiwan Space Agency is launching a cubesat called The Gamma-ray Transients Monitor to track GRBs and other bright gamma-ray transients with energies ranging from 50 keV to 2 MeV in Q4 2026.
Short bursts and other observations
New developments since the 2000s include the recognition of short gamma-ray bursts as a separate class (likely from merging neutron stars and not associated with supernovae), the discovery of extended, erratic flaring activity at X-ray wavelengths lasting for many minutes after most GRBs, and the discovery of the most luminous and, formerly, the most distant objects in the universe. Prior to a flurry of discoveries from the James Webb Space Telescope, one such burst was the most distant known object in the universe.
In October 2018, astronomers reported that a gamma-ray burst detected in 2015 and GW170817, a gravitational wave event detected in 2017 (which has been associated with GRB 170817A, a burst detected 1.7 seconds later), may have been produced by the same mechanism: the merger of two neutron stars. The similarities between the two events, in terms of gamma-ray, optical, and X-ray emissions, as well as in the nature of the associated host galaxies, were considered "striking", suggesting the two separate events may both be the result of the merger of neutron stars, and both may be kilonovae, which may be more common in the universe than previously understood, according to the researchers.
The highest energy light observed from a gamma-ray burst was one teraelectronvolt, from a burst detected in 2019. Although enormous for such a distant event, this energy is around three orders of magnitude lower than the highest energy light observed from closer gamma-ray sources within our Milky Way galaxy, such as a 2021 event measured at 1.4 petaelectronvolts.
Classification
The light curves of gamma-ray bursts are extremely diverse and complex. No two gamma-ray burst light curves are identical, with large variation observed in almost every property: the duration of observable emission can vary from milliseconds to tens of minutes, there can be a single peak or several individual subpulses, and individual peaks can be symmetric or show fast brightening followed by very slow fading. Some bursts are preceded by a "precursor" event, a weak burst that is then followed (after seconds to minutes of no emission at all) by the much more intense "true" bursting episode. The light curves of some events have extremely chaotic and complicated profiles with almost no discernible patterns.
Although some light curves can be roughly reproduced using certain simplified models, little progress has been made in understanding the full diversity observed. Many classification schemes have been proposed, but these are often based solely on differences in the appearance of light curves and may not always reflect a true physical difference in the progenitors of the explosions. However, plots of the distribution of the observed duration for a large number of gamma-ray bursts show a clear bimodality, suggesting the existence of two separate populations: a "short" population with an average duration of about 0.3 seconds and a "long" population with an average duration of about 30 seconds. Both distributions are very broad with a significant overlap region in which the identity of a given event is not clear from duration alone. Additional classes beyond this two-tiered system have been proposed on both observational and theoretical grounds.
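In practice, the duration statistic behind this bimodality is T90, the interval containing the central 90% of a burst's gamma-ray counts. A minimal sketch, assuming a background-subtracted, binned light curve (the two-second split below is the conventional boundary; as noted above, events near it cannot be classified reliably by duration alone):

```python
import numpy as np

def t90(times: np.ndarray, counts: np.ndarray) -> float:
    """T90: the interval containing the central 90% (5th to 95th
    percentile) of the cumulative background-subtracted counts."""
    cum = np.cumsum(counts, dtype=float)
    cum /= cum[-1]                          # normalize to 0..1
    t5 = times[np.searchsorted(cum, 0.05)]
    t95 = times[np.searchsorted(cum, 0.95)]
    return float(t95 - t5)

def classify(duration_s: float) -> str:
    """Conventional two-second split between the 'short' population
    (mean ~0.3 s) and the 'long' one (mean ~30 s)."""
    return "short" if duration_s < 2.0 else "long"

# Illustrative light curve: 0.5 s of uniform emission sampled at 10 ms.
times = np.arange(0.0, 0.5, 0.01)
counts = np.ones_like(times)
print(classify(t90(times, counts)))  # -> "short"
```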
Short gamma-ray bursts
Events with a duration of less than about two seconds are classified as short gamma-ray bursts (sGRB). These account for about 30% of gamma-ray bursts, but until 2005, no afterglow had been successfully detected from any short event and little was known about their origins. Following this, several dozen short gamma-ray burst afterglows were detected and localized, several of them associated with regions of little or no star formation, such as large elliptical galaxies. This ruled out a link to massive stars, confirming the short events to be physically distinct from long events. In addition, there had been no association with supernovae.
The true nature of these objects was thus initially unknown, but the leading hypothesis was that they originated from the mergers of binary neutron stars or a neutron star with a black hole. Such mergers were hypothesized to produce kilonovae, and evidence for a kilonova associated with short GRB 130603B was reported in 2013. The mean duration of sGRB events of around 200 milliseconds implied (due to causality) that the sources must be of very small physical diameter in stellar terms: less than 0.2 light-seconds (60,000 km or 37,000 miles)—about four times the Earth's diameter. The observation of minutes to hours of X-ray flashes after an sGRB was seen as consistent with small particles of a precursor object like a neutron star initially being swallowed by a black hole in less than two seconds, followed by some hours of lower-energy events as remaining fragments of tidally disrupted neutron star material (no longer neutronium) would remain in orbit, spiraling into the black hole over a longer period of time. The origin of short gamma-ray bursts in kilonovae was finally conclusively established in 2017, when short GRB 170817A co-occurred with the detection of gravitational wave GW170817, a signal from the merger of two neutron stars.
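The size limit quoted above follows from light-travel-time causality: a source cannot vary coherently on a timescale shorter than the time light needs to cross it, so its radius satisfies R ≲ cΔt. A quick check of the figures:

```python
C_KM_S = 299_792.458       # speed of light, km/s

dt_s = 0.2                 # mean sGRB duration from the text, in seconds
r_max_km = C_KM_S * dt_s   # causal size limit
earth_diam_km = 12_742

print(f"R < {r_max_km:,.0f} km")                       # ~60,000 km
print(f"  = {r_max_km / earth_diam_km:.1f} Earth diameters")
```

The ratio comes out near 4.7 Earth diameters, in line with the rough figure in the text.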
Unrelated to these cataclysmic origins, short-duration gamma-ray signals are also produced by giant flares from soft gamma repeaters in our own—or nearby—galaxies.
Long gamma-ray bursts
Most observed events (70%) have a duration of greater than two seconds and are classified as long gamma-ray bursts. Because these events constitute the majority of the population and because they tend to have the brightest afterglows, they have been observed in much greater detail than their short counterparts. Almost every well-studied long gamma-ray burst has been linked to a galaxy with rapid star formation, and in many cases to a core-collapse supernova as well, unambiguously associating long GRBs with the deaths of massive stars. Long GRB afterglow observations, at high redshift, are also consistent with the GRB having originated in star-forming regions.
In December 2022, astronomers reported the observation of GRB 211211A, which lasted 51 seconds, the first evidence of a long GRB likely produced by the merger of "compact binary objects" such as neutron stars or white dwarfs. Following this, GRB 191019A (2019, 64 s) and GRB 230307A (2023, 35 s) have been argued to signify an emerging class of long GRBs that may originate from these types of progenitor events.
Ultra-long gamma-ray bursts
Ultra-long GRBs (ulGRBs) are defined as GRBs lasting more than 10,000 seconds, occupying the extreme upper end of the GRB duration distribution. They have been proposed to form a separate class, caused by the collapse of a blue supergiant star, a tidal disruption event, or a new-born magnetar. Only a small number have been identified to date, their primary characteristic being their gamma-ray emission duration. The most studied ultra-long events include GRB 101225A and GRB 111209A. The low detection rate may be a result of the low sensitivity of current detectors to long-duration events, rather than a reflection of their true frequency. A 2013 study, on the other hand, shows that the existing evidence for a separate ultra-long GRB population with a new type of progenitor is inconclusive, and that further multi-wavelength observations are needed to draw a firmer conclusion.
Energetics
Gamma-ray bursts are very bright as observed from Earth despite their typically immense distances. An average long GRB has a bolometric flux comparable to a bright star of our galaxy despite a distance of billions of light years (compared to a few tens of light years for most visible stars). Most of this energy is released in gamma rays, although some GRBs have extremely luminous optical counterparts as well. GRB 080319B, for example, was accompanied by an optical counterpart that peaked at a visible magnitude of 5.8, comparable to that of the dimmest naked-eye stars despite the burst's distance of 7.5 billion light years. This combination of brightness and distance implies an extremely energetic source. Assuming the gamma-ray explosion to be spherical, the energy output of GRB 080319B would be within a factor of two of the rest-mass energy of the Sun (the energy which would be released were the Sun to be converted entirely into radiation).
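For reference, the "spherical explosion" step above corresponds to the standard isotropic-equivalent energy, computed from the observed fluence S, the luminosity distance d_L, and the redshift z; the solar rest-mass energy it is compared with follows from E = mc². A worked sketch with rounded constants:

```latex
E_{\mathrm{iso}} = \frac{4\pi d_L^{2}\, S}{1+z},
\qquad
M_\odot c^{2} \approx \left(2\times10^{30}\ \mathrm{kg}\right)\left(3\times10^{8}\ \mathrm{m\,s^{-1}}\right)^{2}
  \approx 1.8\times10^{47}\ \mathrm{J}.
```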
Gamma-ray bursts are thought to be highly focused explosions, with most of the explosion energy collimated into a narrow jet. The jets of gamma-ray bursts are ultrarelativistic, the most relativistic jets known in the universe. The matter in gamma-ray burst jets may also become superluminal, that is, faster than the speed of light in the jet medium, an effect that has been associated with apparent time reversibility in the emission. The approximate angular width of the jet (that is, the degree of spread of the beam) can be estimated directly by observing the achromatic "jet breaks" in afterglow light curves: a time after which the slowly decaying afterglow begins to fade rapidly as the jet slows and can no longer beam its radiation as effectively. Observations suggest significant variation in the jet angle, between about 2 and 20 degrees.
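The inferred jet angles translate into a beaming correction on the energy budget: the true energy is roughly the isotropic-equivalent value multiplied by the fraction of the sky the two opposing jet cones cover. A minimal sketch, assuming the standard two-cone geometry and an invented round value for the isotropic energy (not a measurement of any particular burst):

```python
import numpy as np

def beaming_fraction(theta_jet_deg: float) -> float:
    """Fraction of the sky covered by two opposing cones of
    half-opening angle theta_jet (the standard geometric factor)."""
    theta = np.radians(theta_jet_deg)
    return 1.0 - np.cos(theta)

# Illustrative isotropic-equivalent energy (round number only).
E_iso = 1e47  # joules

for theta_deg in (2, 5, 10, 20):
    f_b = beaming_fraction(theta_deg)
    print(f"theta_j = {theta_deg:2d} deg: f_b = {f_b:.4f}, "
          f"true energy ~ {f_b * E_iso:.2e} J")
```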
Because their energy is strongly focused, the gamma rays emitted by most bursts are expected to miss the Earth and never be detected. When a gamma-ray burst is pointed towards Earth, the focusing of its energy along a relatively narrow beam causes the burst to appear much brighter than it would have been were its energy emitted spherically. The total energy of typical gamma-ray bursts has been estimated at 3 × 10^44 J, which is larger than the total energy (10^44 J) of ordinary supernovae (types Ia, Ibc, II); gamma-ray bursts are thus also more powerful than the typical supernova. Very bright supernovae have been observed to accompany several of the nearest GRBs. Further support for focusing of the output of GRBs comes from observations of strong asymmetries in the spectra of nearby type Ic supernovae and from radio observations taken long after bursts when their jets are no longer relativistic.
However, a competing model, the binary-driven hypernova model developed by Remo Ruffini and others at ICRANet, takes the extreme isotropic energy totals at face value, with no need to correct for beaming. Its proponents also note that the extreme beaming angles in the standard "fireball" model have never been physically corroborated.
The discovery of GRB 190114C suggested that astronomers may have been missing half of the total energy that gamma-ray bursts produce, with Konstancja Satalecka, an astrophysicist at the German Electron Synchrotron, stating that "Our measurements show that the energy released in very-high-energy gamma-rays is comparable to the amount radiated at all lower energies taken together".
Short (time duration) GRBs appear to come from a lower-redshift (i.e. less distant) population and are less luminous than long GRBs. The degree of beaming in short bursts has not been accurately measured, but as a population they are likely less collimated than long GRBs or possibly not collimated at all in some cases.
Progenitors
Because of the immense distances of most gamma-ray burst sources from Earth, identification of the progenitors, the systems that produce these explosions, is challenging. The association of some long GRBs with supernovae and the fact that their host galaxies are rapidly star-forming offer very strong evidence that long gamma-ray bursts are associated with massive stars. The most widely accepted mechanism for the origin of long-duration GRBs is the collapsar model, in which the core of an extremely massive, low-metallicity, rapidly rotating star collapses into a black hole in the final stages of its evolution. Matter near the star's core rains down towards the center and swirls into a high-density accretion disk. The infall of this material into a black hole drives a pair of relativistic jets out along the rotational axis, which pummel through the stellar envelope and eventually break through the stellar surface and radiate as gamma rays. Some alternative models replace the black hole with a newly formed magnetar, although most other aspects of the model (the collapse of the core of a massive star and the formation of relativistic jets) are the same.
However, a newer model that has gained support, developed by the Italian astrophysicist Remo Ruffini and other scientists at ICRANet, is the binary-driven hypernova (BdHN) model. The model succeeds and improves upon both the fireshell model and the earlier induced gravitational collapse (IGC) paradigm, and aims to explain all aspects of gamma-ray bursts. The model posits that long gamma-ray bursts occur in binary systems consisting of a carbon–oxygen core and a companion neutron star or black hole. Furthermore, the energy of GRBs in the model is isotropic rather than collimated. The creators of the model cite numerous drawbacks of the standard "fireball" model as motivation for developing it, such as the markedly different energetics of supernovae and gamma-ray bursts, and the fact that the existence of extremely narrow beaming angles has never been observationally corroborated.
The closest analogs within the Milky Way galaxy of the stars producing long gamma-ray bursts are likely the Wolf–Rayet stars, extremely hot and massive stars, which have shed most or all of their hydrogen envelope. Eta Carinae, Apep, and WR 104 have been cited as possible future gamma-ray burst progenitors. It is unclear if any star in the Milky Way has the appropriate characteristics to produce a gamma-ray burst.
The massive-star model probably does not explain all types of gamma-ray burst. There is strong evidence that some short-duration gamma-ray bursts occur in systems with no star formation and no massive stars, such as elliptical galaxies and galaxy halos. The favored hypothesis for the origin of most short gamma-ray bursts is the merger of a binary system consisting of two neutron stars. According to this model, the two stars slowly spiral towards each other as gravitational radiation carries away orbital energy, until tidal forces suddenly rip the neutron stars apart and they collapse into a single black hole. The infall of matter into the new black hole produces an accretion disk and releases a burst of energy, analogous to the collapsar model. Numerous other models have also been proposed to explain short gamma-ray bursts, including the merger of a neutron star and a black hole, the accretion-induced collapse of a neutron star, and the evaporation of primordial black holes.
An alternative explanation proposed by Friedwardt Winterberg is that in the course of a gravitational collapse and in reaching the event horizon of a black hole, all matter disintegrates into a burst of gamma radiation.
Tidal disruption events
This class of GRB-like events was first discovered through the detection of Swift J1644+57 (originally classified as GRB 110328A) by the Swift Gamma-Ray Burst Mission on 28 March 2011. This event had a gamma-ray duration of about two days, much longer than even ultra-long GRBs, and was detected at many frequencies for months and years afterwards. It occurred at the center of a small elliptical galaxy about 3.8 billion light-years away. This event has been accepted as a tidal disruption event (TDE), in which a star wanders too close to a supermassive black hole and is shredded. In the case of Swift J1644+57, an astrophysical jet traveling at near the speed of light was launched and lasted roughly 1.5 years before turning off.
Since 2011, only 4 jetted TDEs have been discovered, of which 3 were detected in gamma-rays (including Swift J1644+57). It is estimated that just 1% of all TDEs are jetted events.
Emission mechanisms
The means by which gamma-ray bursts convert energy into radiation remains poorly understood, and as of 2010 there was still no generally accepted model for how this process occurs. Any successful model of GRB emission must explain the physical process for generating gamma-ray emission that matches the observed diversity of light curves, spectra, and other characteristics. Particularly challenging is the need to explain the very high efficiencies that are inferred from some explosions: some gamma-ray bursts may convert as much as half (or more) of the explosion energy into gamma-rays. Early observations of the bright optical counterparts to GRB 990123 and GRB 080319B, whose optical light curves were consistent with extrapolations of the gamma-ray spectra, have suggested that inverse Compton scattering may be the dominant process in some events. In this model, pre-existing low-energy photons are scattered by relativistic electrons within the explosion, augmenting their energy by a large factor and transforming them into gamma-rays.
The nature of the longer-wavelength afterglow emission (ranging from X-ray through radio) that follows gamma-ray bursts is better understood. Any energy released by the explosion not radiated away in the burst itself takes the form of matter or energy moving outward at nearly the speed of light. As this matter collides with the surrounding interstellar gas, it creates a relativistic shock wave that then propagates forward into interstellar space. A second shock wave, the reverse shock, may propagate back into the ejected matter. Extremely energetic electrons within the shock wave are accelerated by strong local magnetic fields and radiate as synchrotron emission across most of the electromagnetic spectrum. This model has generally been successful in modeling the behavior of many observed afterglows at late times (generally, hours to days after the explosion), although there are difficulties explaining all features of the afterglow very shortly after the gamma-ray burst has occurred.
Rate of occurrence and potential effects on life
Gamma-ray bursts can have harmful or destructive effects on life. Considering the universe as a whole, the safest environments for life similar to that on Earth are the lowest-density regions in the outskirts of large galaxies. Our knowledge of galaxy types and their distribution suggests that life as we know it can only exist in about 10% of all galaxies. Furthermore, galaxies with a redshift, z, higher than 0.5 are unsuitable for life as we know it, because of their higher rate of GRBs and their stellar compactness.
All GRBs observed to date have occurred well outside the Milky Way galaxy and have been harmless to Earth. However, if a GRB were to occur within the Milky Way within 5,000 to 8,000 light-years and its emission were beamed straight towards Earth, the effects could be harmful and potentially devastating for its ecosystems. Currently, orbiting satellites detect on average approximately one GRB per day. The closest observed GRB as of March 2014 was GRB 980425, located in an SBc-type dwarf galaxy at redshift z = 0.0085. GRB 980425 was far less energetic than the average GRB and was associated with the Type Ib supernova SN 1998bw.
Estimating the exact rate at which GRBs occur is difficult; for a galaxy of approximately the same size as the Milky Way, estimates of the expected rate (for long-duration GRBs) can range from one burst every 10,000 years, to one burst every 1,000,000 years. Only a small percentage of these would be beamed towards Earth. Estimates of rate of occurrence of short-duration GRBs are even more uncertain because of the unknown degree of collimation, but are probably comparable.
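The geometric suppression behind "only a small percentage" can be made concrete. For two opposing jets of half-opening angle θ_j, the fraction of randomly oriented bursts pointed at Earth is (taking θ_j = 5° purely as an illustrative value within the observed 2–20 degree range):

```latex
P = 1 - \cos\theta_j \approx \frac{\theta_j^{2}}{2}
  \approx 3.8\times10^{-3} \quad (\theta_j = 5^\circ),
```

i.e., only about one burst in roughly 260 with that opening angle would be beamed towards us.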
Since GRBs are thought to involve beamed emission along two jets in opposing directions, only planets in the path of these jets would be subjected to the high energy gamma radiation. A GRB could potentially vaporize anything in its beams' paths within a range of around 200 light-years.
Although nearby GRBs hitting Earth with a destructive shower of gamma rays are only hypothetical events, high energy processes across the galaxy have been observed to affect the Earth's atmosphere.
Effects on Earth
Earth's atmosphere is very effective at absorbing high energy electromagnetic radiation such as x-rays and gamma rays, so these types of radiation would not reach any dangerous levels at the surface during the burst event itself. The immediate effect on life on Earth from a GRB within a few kiloparsecs would only be a short increase in ultraviolet radiation at ground level, lasting from less than a second to tens of seconds. This ultraviolet radiation could potentially reach dangerous levels depending on the exact nature and distance of the burst, but it seems unlikely to be able to cause a global catastrophe for life on Earth.
The long-term effects from a nearby burst are more dangerous. Gamma rays cause chemical reactions in the atmosphere involving oxygen and nitrogen molecules, creating first nitric oxide (NO) and then nitrogen dioxide (NO2) gas. The nitrogen oxides cause dangerous effects on three levels. First, they deplete ozone, with models showing a possible global reduction of 25–35%, and as much as 75% in certain locations, an effect that would last for years. This reduction is enough to cause a dangerously elevated UV index at the surface. Secondly, the nitrogen oxides cause photochemical smog, which darkens the sky and blocks out parts of the sunlight spectrum. This would affect photosynthesis, but models show only about a 1% reduction of the total sunlight spectrum, lasting a few years. However, the smog could potentially cause a cooling effect on Earth's climate, producing a "cosmic winter" (similar to an impact winter, but without an impact), though only if it coincides with a global climate instability. Thirdly, the elevated nitrogen dioxide levels would wash out of the atmosphere as acid rain. Nitric acid is toxic to a variety of organisms, including amphibian life, but models predict that it would not reach levels that would cause a serious global effect. The nitrates might in fact benefit some plants.
All in all, a GRB within a few kiloparsecs, with its energy directed towards Earth, would mostly damage life by raising UV levels during the burst itself and for a few years thereafter. Models show that the destructive effects of this increase can cause up to 16 times the normal levels of DNA damage. It has proved difficult to reliably evaluate the consequences of this for the terrestrial ecosystem because of the uncertainty in biological field and laboratory data.
Hypothetical effects on Earth in the past
There is a very good chance (but no certainty) that at least one lethal GRB took place during the past 5 billion years close enough to Earth as to significantly damage life. There is a 50% chance that such a lethal GRB took place within two kiloparsecs of Earth during the last 500 million years, causing one of the major mass extinction events.
The major Ordovician–Silurian extinction event 450 million years ago may have been caused by a GRB. Estimates suggest that approximately 20–60% of the total phytoplankton biomass in the Ordovician oceans would have perished in a GRB, because the oceans were mostly oligotrophic and clear. The late Ordovician species of trilobites that spent portions of their lives in the plankton layer near the ocean surface were much harder hit than deep-water dwellers, which tended to remain within quite restricted areas. This is in contrast to the usual pattern of extinction events, wherein species with more widely spread populations typically fare better. A possible explanation is that trilobites remaining in deep water would be more shielded from the increased UV radiation associated with a GRB. Also supportive of this hypothesis is the fact that during the late Ordovician, burrowing bivalve species were less likely to go extinct than bivalves that lived on the surface.
A case has been made that the 774–775 carbon-14 spike was the result of a short GRB, though a very strong solar flare is another possibility.
GRB candidates in the Milky Way
No gamma-ray bursts from within our own galaxy, the Milky Way, have been observed, and the question of whether one has ever occurred remains unresolved. In light of evolving understanding of gamma-ray bursts and their progenitors, the scientific literature records a growing number of local, past, and future GRB candidates. Long duration GRBs are related to superluminous supernovae, or hypernovae, and most luminous blue variables (LBVs) and rapidly spinning Wolf–Rayet stars are thought to end their life cycles in core-collapse supernovae with an associated long-duration GRB. Knowledge of GRBs, however, is from metal-poor galaxies of former epochs of the universe's evolution, and it is impossible to directly extrapolate to encompass more evolved galaxies and stellar environments with a higher metallicity, such as the Milky Way.
| Physical sciences | Stellar astronomy | null |
48824 | https://en.wikipedia.org/wiki/Gravitational%20lens | Gravitational lens | A gravitational lens is matter, such as a cluster of galaxies or a point particle, that bends light from a distant source as it travels toward an observer. The amount of gravitational lensing is described by Albert Einstein's general theory of relativity. If light is treated as corpuscles travelling at the speed of light, Newtonian physics also predicts the bending of light, but only half of that predicted by general relativity.
Orest Khvolson (1924) and Frantisek Link (1936) are generally credited with being the first to discuss the effect in print, but it is more commonly associated with Einstein, who made unpublished calculations on it in 1912 and published an article on the subject in 1936.
In 1937, Fritz Zwicky posited that galaxy clusters could act as gravitational lenses, a claim confirmed in 1979 by observation of the Twin QSO SBS 0957+561.
Description
Unlike an optical lens, a point-like gravitational lens produces a maximum deflection of light that passes closest to its center, and a minimum deflection of light that travels furthest from its center. Consequently, a gravitational lens has no single focal point, but a focal line. The term "lens" in the context of gravitational light deflection was first used by O. J. Lodge, who remarked that it is "not permissible to say that the solar gravitational field acts like a lens, for it has no focal length". If the (light) source, the massive lensing object, and the observer lie in a straight line, the original light source will appear as a ring around the massive lensing object (provided the lens has circular symmetry). If there is any misalignment, the observer will see an arc segment instead.
This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Khvolson, and quantified by Albert Einstein in 1936. It is usually referred to in the literature as an Einstein ring, since Khvolson did not concern himself with the flux or radius of the ring image. More commonly, where the lensing mass is complex (such as a galaxy group or cluster) and does not cause a spherical distortion of spacetime, the source will resemble partial arcs scattered around the lens. The observer may then see multiple distorted images of the same source; the number and shape of these depending upon the relative positions of the source, lens, and observer, and the shape of the gravitational well of the lensing object.
There are three classes of gravitational lensing:
Strong lensing: Where there are easily visible distortions such as the formation of Einstein rings, arcs, and multiple images. Despite being considered "strong", the effect is in general relatively small, such that even a galaxy with a mass more than 100 billion times that of the Sun will produce multiple images separated by only a few arcseconds. Galaxy clusters can produce separations of several arcminutes. In both cases the galaxies and sources are quite distant, many hundreds of megaparsecs away from our Galaxy.
Weak lensing: Where the distortions of background sources are much smaller and can only be detected by analyzing large numbers of sources in a statistical way to find coherent distortions of only a few percent. The lensing shows up statistically as a preferred stretching of the background objects perpendicular to the direction to the centre of the lens. By measuring the shapes and orientations of large numbers of distant galaxies, their orientations can be averaged to measure the shear of the lensing field in any region. This, in turn, can be used to reconstruct the mass distribution in the area: in particular, the background distribution of dark matter can be reconstructed. Since galaxies are intrinsically elliptical and the weak gravitational lensing signal is small, a very large number of galaxies must be used in these surveys. These weak lensing surveys must carefully avoid a number of important sources of systematic error: the intrinsic shape of galaxies, the tendency of a camera's point spread function to distort the shape of a galaxy and the tendency of atmospheric seeing to distort images must be understood and carefully accounted for. The results of these surveys are important for cosmological parameter estimation, to better understand and improve upon the Lambda-CDM model, and to provide a consistency check on other cosmological observations. They may also provide an important future constraint on dark energy.
Microlensing: Where no distortion in shape can be seen but the amount of light received from a background object changes in time. The lensing object may be stars in the Milky Way in one typical case, with the background source being stars in a remote galaxy, or, in another case, an even more distant quasar. In extreme cases, a star in a distant galaxy can act as a microlens and magnify another star much farther away. The first example of this was the star MACS J1149 Lensed Star 1 (also known as Icarus), detectable only thanks to the boost in flux due to the microlensing effect; a minimal sketch of the standard point-lens light curve follows this list.
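As referenced in the microlensing entry above, here is a hedged sketch of the standard point-source, point-lens ("Paczyński") light curve; the trajectory parameters below are invented for illustration, not drawn from any observed event.

```python
import numpy as np

def magnification(u: np.ndarray) -> np.ndarray:
    """Point-source, point-lens magnification as a function of the
    source-lens separation u in units of the Einstein radius."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Source trajectory: impact parameter u0 (in Einstein radii), time of
# closest approach t0, and Einstein-radius crossing time tE (days).
u0, t0, tE = 0.3, 0.0, 20.0
t = np.linspace(-60, 60, 7)  # days
u = np.sqrt(u0**2 + ((t - t0) / tE)**2)

for ti, ai in zip(t, magnification(u)):
    print(f"t = {ti:+6.1f} d  ->  magnification A = {ai:.3f}")
```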
Gravitational lenses act equally on all kinds of electromagnetic radiation, not just visible light, and also on non-electromagnetic radiation, such as gravitational waves. Weak lensing effects are being studied for the cosmic microwave background as well as galaxy surveys. Strong lenses have been observed in radio and X-ray regimes as well. If a strong lens produces multiple images, there will be a relative time delay between the two light paths: that is, the lensed object will be observed in one image before the other image.
History
Henry Cavendish in 1784 (in an unpublished manuscript) and Johann Georg von Soldner in 1801 (published in 1804) had pointed out that Newtonian gravity predicts that starlight will bend around a massive object, as had already been supposed by Isaac Newton in 1704 in Query No. 1 of his book Opticks. The same value as Soldner's was calculated by Einstein in 1911 based on the equivalence principle alone. However, Einstein noted in 1915, in the process of completing general relativity, that his (and thus Soldner's) 1911 result is only half of the correct value. Einstein became the first to calculate the correct value for light bending.
The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere. The observations were performed in 1919 by Arthur Eddington, Frank Watson Dyson, and their collaborators during the total solar eclipse on May 29. The solar eclipse allowed the stars near the Sun to be observed. Observations were made simultaneously in the cities of Sobral, Ceará, Brazil and in São Tomé and Príncipe on the west coast of Africa. The observations demonstrated that the light from stars passing close to the Sun was slightly bent, so that stars appeared slightly out of position.
The result was considered spectacular news and made the front page of most major newspapers. It made Einstein and his theory of general relativity world-famous. When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein said "Then I would feel sorry for the dear Lord. The theory is correct anyway." In 1912, Einstein had speculated that an observer could see multiple images of a single light source, if the light were deflected around a mass. This effect would make the mass act as a kind of gravitational lens. However, as he only considered the effect of deflection around a single star, he seemed to conclude that the phenomenon was unlikely to be observed for the foreseeable future since the necessary alignments between stars and observer would be highly improbable. Several other physicists speculated about gravitational lensing as well, but all reached the same conclusion that it would be nearly impossible to observe.
Although Einstein made unpublished calculations on the subject, the first discussion of the gravitational lens in print was by Khvolson, in a short article discussing the "halo effect" of gravitation when the source, lens, and observer are in near-perfect alignment, now referred to as the Einstein ring.
In 1936, after some urging by Rudi W. Mandl, Einstein reluctantly published the short article "Lens-Like Action of a Star By the Deviation of Light In the Gravitational Field" in the journal Science.
In 1937, Fritz Zwicky first considered the case where the newly discovered galaxies (which were called 'nebulae' at the time) could act as both source and lens, and that, because of the mass and sizes involved, the effect was much more likely to be observed.
In 1963 Yu. G. Klimov, S. Liebes, and Sjur Refsdal recognized independently that quasars are an ideal light source for the gravitational lens effect.
It was not until 1979 that the first gravitational lens would be discovered. It became known as the "Twin QSO" since it initially looked like two identical quasistellar objects. (It is officially named SBS 0957+561.) This gravitational lens was discovered by Dennis Walsh, Bob Carswell, and Ray Weymann using the Kitt Peak National Observatory 2.1 meter telescope.
In the 1980s, astronomers realized that the combination of CCD imagers and computers would allow the brightness of millions of stars to be measured each night. In a dense field, such as the galactic center or the Magellanic clouds, many microlensing events per year could potentially be found. This led to efforts such as Optical Gravitational Lensing Experiment, or OGLE, that have characterized hundreds of such events, including those of OGLE-2016-BLG-1190Lb and OGLE-2016-BLG-1195Lb.
Approximate Newtonian description
Newton wondered whether light, in the form of corpuscles, would be bent due to gravity. The Newtonian prediction for light deflection refers to the amount of deflection a corpuscle would feel under the effect of gravity; one should therefore read "Newtonian" in this context as referring to the following calculations, not to a belief Newton held in their validity.
For a gravitational point-mass lens of mass $M$, a corpuscle of mass $m$ feels a force
$$F = \frac{GMm}{r^2},$$
where $r$ is the lens–corpuscle separation. Equating this force with Newton's second law, we can solve for the acceleration that the light undergoes:
$$a = \frac{GM}{r^2}.$$
The light interacts with the lens from an initial time $t_i$ to a final time $t_f$, and the velocity boost the corpuscle receives is
$$\Delta v = \int_{t_i}^{t_f} a \, dt.$$
If one assumes that initially the light is far enough from the lens to neglect gravity, the perpendicular distance between the light's initial trajectory and the lens is $b$ (the impact parameter), and the parallel distance is $z$, such that $r^2 = b^2 + z^2$. We additionally assume a constant speed of light along the parallel direction, $dz = c\,dt$, and that the light is only deflected by a small amount. After plugging these assumptions into the above equation and further simplifying, one can solve for the velocity boost in the perpendicular direction, $\Delta v_\perp = 2GM/(bc)$. The angle of deflection between the corpuscle's initial and final trajectories is therefore (see, e.g., M. Meneghetti 2021)
$$\theta \simeq \frac{\Delta v_\perp}{c} = \frac{2GM}{bc^2}.$$
Although this result appears to be half the prediction from general relativity, classical physics predicts that the speed of light is observer-dependent (see, e.g., L. Susskind and A. Friedman 2018) which was superseded by a universal speed of light in special relativity.
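A minimal numerical check of the derivation above, using rounded SI solar values: integrating the perpendicular acceleration along a straight path reproduces the Newtonian grazing deflection of about 0.87 arcseconds, half of general relativity's 1.75 arcseconds.

```python
import numpy as np

# Rounded SI values: solar gravitational parameter, solar radius,
# speed of light.
GM_SUN = 1.327e20  # m^3 s^-2
R_SUN = 6.957e8    # m
C = 2.998e8        # m s^-1

b = R_SUN  # impact parameter: light just grazing the solar limb

# Integrate the perpendicular acceleration GMb/(b^2 + z^2)^(3/2)
# along the straight-line path, with dt = dz / c.
z, dz = np.linspace(-1e12, 1e12, 2_000_001, retstep=True)
a_perp = GM_SUN * b / (b**2 + z**2) ** 1.5
dv_perp = a_perp.sum() * dz / C  # Riemann sum; tails are negligible

to_arcsec = np.degrees(1.0) * 3600  # radians -> arcseconds

print(f"numerical:            {dv_perp / C * to_arcsec:.3f} arcsec")
print(f"2GM/(b c^2):          {2 * GM_SUN / (b * C**2) * to_arcsec:.3f} arcsec")
print(f"GR value, 4GM/(b c^2): {4 * GM_SUN / (b * C**2) * to_arcsec:.3f} arcsec")
```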
Explanation in terms of spacetime curvature
In general relativity, light follows the curvature of spacetime; hence, when light passes around a massive object, it is bent. This means that the light from an object on the other side will be bent towards an observer's eye, just like an ordinary lens. In general relativity the path of light depends on the shape of space (i.e. the metric). The gravitational attraction can be viewed as the motion of undisturbed objects in a background curved geometry or, alternatively, as the response of objects to a force in a flat geometry. The angle of deflection is
$$\theta = \frac{4GM}{rc^2}$$
toward the mass $M$ at a distance $r$ from the affected radiation, where $G$ is the universal constant of gravitation and $c$ is the speed of light in vacuum.
Since the Schwarzschild radius is defined as $r_s = 2GM/c^2$ and the escape velocity is defined as $v_e = \sqrt{2GM/r}$, this can also be expressed in the simple form
$$\theta = 2\frac{r_s}{r} = 2\left(\frac{v_e}{c}\right)^2.$$
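Evaluating the relativistic formula for light grazing the Sun's limb recovers the classic 1.75-arcsecond deflection measured in the 1919 eclipse observations described above (rounded constants):

```latex
\theta = \frac{4GM_\odot}{R_\odot c^{2}}
  \approx \frac{4\times\left(1.33\times10^{20}\ \mathrm{m^{3}\,s^{-2}}\right)}
               {\left(6.96\times10^{8}\ \mathrm{m}\right)\left(3.00\times10^{8}\ \mathrm{m\,s^{-1}}\right)^{2}}
  \approx 8.5\times10^{-6}\ \mathrm{rad} \approx 1.75''.
```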
Search for gravitational lenses
Most of the gravitational lenses in the past have been discovered accidentally. A search for gravitational lenses in the northern hemisphere (Cosmic Lens All Sky Survey, CLASS), done in radio frequencies using the Very Large Array (VLA) in New Mexico, led to the discovery of 22 new lensing systems, a major milestone. This has opened a whole new avenue for research ranging from finding very distant objects to finding values for cosmological parameters so we can understand the universe better.
A similar search in the southern hemisphere would complement the northern hemisphere search and serve further objectives for study. If conducted with well-calibrated and well-parameterized instruments and data, results comparable to the northern survey can be expected. The Australia Telescope 20 GHz (AT20G) Survey, collected using the Australia Telescope Compact Array (ATCA), is such a data set. Because the data were collected with the same instrument under stringent quality control, good results can be expected from the search. The AT20G survey is a blind survey at 20 GHz in the radio domain of the electromagnetic spectrum. Because of the high frequency used, the chance of finding gravitational lenses increases, as the relative number of compact-core objects (e.g. quasars) is higher (Sadler et al. 2006). This is important, as lensing is easier to detect and identify in simple objects than in complex ones. The search involves interferometric methods to identify candidates, which are then followed up at higher resolution for identification. Full details of the project are currently in preparation for publication.
Microlensing techniques have been used to search for planets outside our solar system. A statistical analysis of specific cases of observed microlensing over the time period of 2002 to 2007 found that most stars in the Milky Way galaxy hosted at least one orbiting planet within 0.5 to 10 AU.
In 2009, weak gravitational lensing was used to extend the mass–X-ray-luminosity relation to older and smaller structures than was previously possible, improving measurements of distant galaxies.
The most distant gravitational lens galaxy known at the time of its discovery, J1000+0221, was found using NASA's Hubble Space Telescope. While it remains the most distant quad-image lensing galaxy known, an even more distant two-image lensing galaxy was subsequently discovered by an international team of astronomers using a combination of Hubble Space Telescope and Keck telescope imaging and spectroscopy. The discovery and analysis of the IRC 0218 lens was published in the Astrophysical Journal Letters on June 23, 2014.
Research published on 30 September 2013 in the online edition of Physical Review Letters, led by McGill University in Montreal, Québec, Canada, reported the detection of B-modes formed by the gravitational lensing effect, using the National Science Foundation's South Pole Telescope with help from the Herschel Space Observatory. This discovery could open the possibility of testing theories of how our universe originated.
Solar gravitational lens
Albert Einstein predicted in 1936 that rays of light from the same direction that skirt the edges of the Sun would converge to a focal point approximately 542 AU from the Sun. Thus, a probe positioned at this distance (or greater) from the Sun could use the Sun as a gravitational lens for magnifying distant objects on the opposite side of the Sun. A probe's location could shift around as needed to select different targets relative to the Sun.
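The quoted focal distance follows directly from the deflection formula: light passing at impact parameter b is bent by θ = 4GM/(bc²) and crosses the optical axis at distance d ≈ b/θ. A small sketch with rounded SI solar values (so the result comes out near 548 AU, consistent with the commonly quoted ~542 AU):

```python
GM_SUN = 1.327e20  # m^3 s^-2, solar gravitational parameter
R_SUN = 6.957e8    # m, solar radius
C = 2.998e8        # m s^-1, speed of light
AU = 1.496e11      # m, astronomical unit

def focal_distance(b: float) -> float:
    """Distance at which light with impact parameter b crosses the
    axis: d = b / theta, with theta = 4GM/(b c^2)."""
    theta = 4 * GM_SUN / (b * C**2)
    return b / theta

# Rays grazing the solar limb focus closest to the Sun; rays passing
# farther out focus at larger distances, so the "focal line" extends
# outward from roughly 550 AU.
for multiple in (1.0, 1.5, 2.0):
    b = multiple * R_SUN
    print(f"b = {multiple:.1f} R_sun -> focus at {focal_distance(b) / AU:.0f} AU")
```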
This distance is far beyond the reach and equipment capabilities of space probes such as Voyager 1, and beyond the known planets and dwarf planets, though over thousands of years 90377 Sedna will move farther away on its highly elliptical orbit. The high gain for potentially detecting signals through this lens, such as microwaves at the 21-cm hydrogen line, led to the suggestion by Frank Drake in the early days of SETI that a probe could be sent to this distance. A multipurpose probe, SETISAIL and later FOCAL, was proposed to the ESA in 1993, but is expected to be a difficult task. If a probe does pass 542 AU, the magnification capabilities of the lens will continue to act at farther distances, as the rays that come to a focus at larger distances pass further away from the distortions of the Sun's corona. A critique of the concept was given by Landis, who discussed issues including interference from the solar corona, the high magnification of the target (which will make the design of the mission focal plane difficult), and an analysis of the inherent spherical aberration of the lens.
In 2020, NASA physicist Slava Turyshev presented his idea of Direct Multipixel Imaging and Spectroscopy of an Exoplanet with a Solar Gravitational Lens Mission. The lens could reconstruct the exoplanet image with ~25 km-scale surface resolution, enough to see surface features and signs of habitability.
Measuring weak lensing
Kaiser, Squires and Broadhurst (1995), Luppino & Kaiser (1997) and Hoekstra et al. (1998) prescribed a method to invert the effects of the point spread function (PSF) smearing and shearing, recovering a shear estimator uncontaminated by the systematic distortion of the PSF. This method (KSB+) is the most widely used method in weak lensing shear measurements.
Galaxies have random rotations and inclinations. As a result, the shear effects in weak lensing need to be determined by statistically preferred orientations. The primary source of error in lensing measurement is due to the convolution of the PSF with the lensed image. The KSB method measures the ellipticity of a galaxy image. The shear is proportional to the ellipticity. The objects in lensed images are parameterized according to their weighted quadrupole moments. For a perfect ellipse, the weighted quadrupole moments are related to the weighted ellipticity. KSB calculate how a weighted ellipticity measure is related to the shear and use the same formalism to remove the effects of the PSF.
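As a hedged sketch of the moment-based step just described (a toy example, not the full KSB+ pipeline, which also requires the PSF correction and careful noise weighting):

```python
import numpy as np

def weighted_ellipticity(img: np.ndarray, sigma: float = 3.0):
    """Ellipticity (e1, e2) from Gaussian-weighted quadrupole moments,
    the quantity KSB-style methods relate to the lensing shear."""
    y, x = np.indices(img.shape, dtype=float)
    total = img.sum()
    xc, yc = (img * x).sum() / total, (img * y).sum() / total
    w = np.exp(-((x - xc)**2 + (y - yc)**2) / (2 * sigma**2))
    f = img * w
    norm = f.sum()
    qxx = (f * (x - xc)**2).sum() / norm
    qyy = (f * (y - yc)**2).sum() / norm
    qxy = (f * (x - xc) * (y - yc)).sum() / norm
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2 * qxy / denom

# Toy elliptical Gaussian "galaxy image", elongated along x.
y, x = np.indices((64, 64), dtype=float)
img = np.exp(-(((x - 32) / 6.0)**2 + ((y - 32) / 4.0)**2) / 2)

print(weighted_ellipticity(img))  # e1 > 0: stretched along x; e2 ~ 0
```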
KSB's primary advantages are its mathematical ease and relatively simple implementation. However, KSB is based on a key assumption that the PSF is circular with an anisotropic distortion. This is a reasonable assumption for cosmic shear surveys, but the next generation of surveys (e.g. LSST) may need much better accuracy than KSB can provide.
Gallery
| Physical sciences | Basics_2 | Astronomy |
48836 | https://en.wikipedia.org/wiki/Kava | Kava | Kava or kava kava (Piper methysticum: Latin 'pepper' and Latinized Greek 'intoxicating') is a plant in the pepper family, native to the Pacific Islands. The name kava is from Tongan and Marquesan, meaning 'bitter'. Other names for kava include ʻawa (Hawaiʻi), ʻava (Samoa), yaqona or yagona (Fiji), sakau (Pohnpei), seka (Kosrae), and malok or malogu (parts of Vanuatu). Kava can refer to either the plant or a beverage made from its root. The beverage has sedative, anesthetic, psychoactive and mildly euphoriant properties. It is consumed throughout the Pacific Ocean cultures of Polynesia (including Hawaii), Vanuatu and the rest of Melanesia, some parts of Micronesia such as Pohnpei and Kosrae, and the Philippines.
Kava consists of sterile cultivars clonally propagated from its wild ancestor, Piper wichmanii. It originated in northern Vanuatu, where it was domesticated by farmers around 3,000 years ago through selective cultivation. Historically, the beverage was made from fresh kava; preparation from dry kava emerged in response to the efforts of Christian missionaries in the 18th and 19th centuries to prohibit the drinking of kava.
Its active compounds are known as kavalactones. Systematic reviews and meta-analyses conducted in the last decade have typically indicated a modest positive effect of kava on anxiety and Generalized Anxiety Disorder, though the evidence is mixed and further research is frequently recommended.
Moderate consumption of kava in its traditional form, as a water-based suspension of kava roots, is considered by the World Health Organization to present an “acceptably low level of health risk.” However, consumption of kava extracts produced with organic solvents or excessive amounts of low-quality kava products may be linked to an increased risk of adverse health outcomes, including liver injury.
History and names
Kava is conspecific with Piper wichmannii (syn. Piper subbullatum), indicating that kava was domesticated from P. wichmannii.
It was spread eastward into the rest of Polynesia by the Austronesian Lapita culture. It is endemic to Oceania and is not found among other Austronesian groups. Kava reached Hawaii, but it is absent in New Zealand, where it cannot grow. Consumption of kava is also believed to be the reason why betel nut chewing, ubiquitous elsewhere, was lost among Austronesians in Oceania.
According to Lynch (2002), the reconstructed Proto-Polynesian term for the plant, *kava, was derived from the Proto-Oceanic term *kawaR in the sense of a "bitter root" or "potent root [used as fish poison]". It may have been related to reconstructed *wakaR (in Proto-Oceanic and Proto-Malayo-Polynesian) via metathesis. It originally referred to Zingiber zerumbet, used to make a similar mildly psychoactive bitter drink in Austronesian rituals. Cognates for *kava include Pohnpeian sa-kau; Tongan, Niue, Rapa Nui, Tuamotuan, and Rarotongan kava; Samoan, Tahitian, and Marquesan ava; and Hawaiian awa. In some languages, most notably Māori kawa, the cognates have come to mean "bitter", "sour", or "acrid" to the taste.
In the Cook Islands, the reduplicated forms of kawakawa or kavakava are also applied to the unrelated members of the genus Pittosporum. In other languages, such as Futunan, compound terms like kavakava atua refer to other species belonging to the genus Piper. The reduplication of the base form is indicative of falsehood or likeness, in the sense of "false kava". In New Zealand, it was applied to the kawakawa (Piper excelsum), which is endemic to New Zealand and nearby Norfolk Island and Lord Howe Island. It was exploited by the Māori based on previous knowledge of the kava, as the latter could not survive in the colder climates of New Zealand. The Māori name for the plant, kawakawa, is derived from the same etymon as kava, but reduplicated. It is a sacred tree among the Māori people. It is seen as a symbol of death, corresponding to the rangiora (Brachyglottis repanda), which is the symbol of life. However, kawakawa has no psychoactive properties. Its connection to kava is linked to its similarity in appearance and bitter taste.
Characteristics
Kava was historically grown only in the Pacific islands of Hawaii, Federated States of Micronesia, Vanuatu, Fiji, the Samoas, and Tonga. It appears to have originated in Vanuatu; an inventory of P. methysticum distribution showed it was cultivated on numerous islands of Micronesia, Melanesia, Polynesia, and Hawaii, whereas specimens of P. wichmannii were all from Papua New Guinea, the Solomon Islands, and Vanuatu.
Traditionally, plants are harvested around four years of age, as older plants have higher concentrations of kavalactones. After reaching their full height, plants grow a wider stalk and additional stalks, but not much taller. The roots can reach a considerable depth.
Cultivars
Kava consists of sterile cultivars cloned from its wild ancestor, Piper wichmannii. Today it comprises hundreds of different cultivars grown across the Pacific. Each cultivar not only has different requirements for successful cultivation, but also displays unique characteristics in terms of both its appearance and its psychoactive properties.
Noble and non-noble kava
Scholars make a distinction between so-called noble and non-noble kava. The latter category comprises the so-called tudei (or "two-day") kavas, medicinal kavas, and wild kava (Piper wichmannii, the ancestor of domesticated Piper methysticum). Traditionally, only noble kavas have been used for regular consumption, due to their more favourable composition of kavalactones and other compounds that produce more pleasant effects and have lower potential for causing negative side effects, such as nausea or "kava hangover".
The perceived benefits of noble cultivars explain why only these cultivars were spread around the Pacific by Polynesian and Melanesian migrants, with presence of non-noble cultivars limited to the islands of Vanuatu, from which they originated. More recently, it has been suggested that the widespread use of tudei cultivars in the manufacturing of several kava products might have been the key factor contributing to the rare reports of adverse reactions to kava observed among the consumers of kava-based products in Europe.
Tudei varieties have traditionally not been grown in Hawaii and Fiji, but in recent years there have been reports of farmers attempting to grow "isa" or "palisi" non-noble cultivars in Hawaii, and of imports of dried tudei kava into Fiji for further re-exporting. The tudei cultivars may be easier and cheaper to grow: while it takes up to 5 years for noble kava to mature, non-noble varieties can often be harvested just one year after being planted.
The concerns about the adverse effects of non-noble varieties, produced by their undesirable composition of kavalactones and high concentrations of potentially harmful compounds (flavokavains, which are not present in any significant concentration in the noble varieties), have led to legislation prohibiting exports from countries such as Vanuatu. Likewise, efforts have been made to educate non-traditional customers about the difference between noble and non-noble varieties and that non-noble varieties do not offer the same results as noble cultivars. In recent years, government regulatory bodies and non-profit NGOs have been set up with the declared aim of monitoring kava quality; producing regular reports; certifying vendors selling proper, noble kava; and warning customers against products that may contain tudei varieties.
Growing regions
In Vanuatu, exportation of kava is strictly regulated. Only cultivars classified as noble are allowed to be exported. Only the most desirable cultivars for everyday drinking are classified as noble to maintain quality control. In addition, their laws mandate that exported kava must be at least five years old and farmed organically. Their most popular noble cultivars are "Borogu" or "Borongoru" from Pentecost Island, "Melomelo" from Aoba Island (called Sese in the north Pentecost Island), and "Palarasul" kava from Espiritu Santo. In Vanuatu, Tudei ("two-day") kava is reserved for special ceremonial occasions and exporting it is not allowed. "Palisi" is a popular Tudei variety.
In Hawaii, there are many other cultivars of kava. Some of the most popular cultivars are Mahakea, Moʻi, Hiwa, and Nene. The Aliʻi (kings) of precolonial Hawaii coveted the Moʻi variety, which had a strong cerebral effect due to a predominant amount of the kavalactone kavain. This sacred variety was so important to them that no one but royalty could ever experience it, "lest they suffer an untimely death". The reverence for Hiwa in old Hawaiʻi is evident in this portion of a chant recorded by Nathaniel Bright Emerson and quoted by E. S. Craighill and Elizabeth Green Handy: "This refers to the cup of sacramental ʻawa brewed from the strong, black ʻawa root (ʻawa hiwa), which was drunk sacramentally by the kumu hula":
Winter describes a hula prayer for inspiration that contains the line, He ʻike pū ʻawa hiwa. Pukui and Elbert translated this as "a knowledge from kava offerings". Winter explains that ʻawa, especially of the Hiwa variety, was offered to hula deities in return for knowledge and inspiration.
More recently, specialized kava varieties have been introduced to South Florida which have been acclimated and adapted to grow well in South Florida's unique soil and climate and have significant resistance to pest and disease pressures. As of 2024, cultivation of these varieties is limited to a small number of commercial farms and backyard growers.
Relationship with kawakawa
The kawakawa (Piper excelsum) plant, also known as "Māori kava", may be confused with kava. While the two plants look similar and have similar names, they are different, though related, species. Kawakawa is a small tree endemic to New Zealand, important in traditional medicine and Māori culture. As noted by the Kava Society of New Zealand, "in all likelihood, the kava plant was known to the first settlers of Aotearoa [New Zealand]. It is also possible that (just like the Polynesian migrants that settled in Hawaii) the Maori explorers brought some kava with them. Unfortunately, most of New Zealand is simply too cold for growing kava and hence the Maori settlers lost their connection to the sacred plant." Further, "in New Zealand, where the climate is too cold for kava, the Maori gave the name kawa-kawa to another Piperaceae M. excelsum, in memory of the kava plants they undoubtedly brought with them and unsuccessfully attempted to cultivate. The Maori word kawa also means "ceremonial protocol", recalling the stylized consumption of the drug typical of Polynesian societies". Kawakawa is commonly used in Maori traditional medicine for the treatment of skin infections, wounds, and cuts, and (when prepared as a tea) for stomach upsets and other minor illnesses.
Composition
Fresh kava root contains on average 80% water. Dried root contains approximately 43% starch, 20% dietary fiber, 15% kavalactones, 12% water, 3.2% sugars, 3.6% protein, and 3.2% minerals.
In general, kavalactone content is greatest in the roots and decreases higher up the plant into the stems and leaves. Relative concentrations of 15%, 10%, and 5% have been observed in the root, stump, and basal stems, respectively. The relative content of kavalactones depends not only on plant segment but also on the kava plant variety, plant maturity, geographic location, and time of harvest. The kavalactones present are kavain, demethoxyyangonin, and yangonin, which are higher in the roots than in the stems and leaves, with dihydrokavain, methysticin, and dihydromethysticin also present.
The mature roots of the kava plant are harvested after a minimum of four years (at least five years, ideally) for peak kavalactone content. Most kava plants yield a large mass of root when they are harvested. Kava root is classified into two categories: crown root (or chips) and lateral root. Crown roots are the large-diameter pieces that look like wooden poker chips. Most kava plants consist of approximately 80% crown root upon harvesting. Lateral roots are smaller-diameter roots that look more like a typical root. A mature kava plant is about 20% lateral roots. Kava lateral roots have the highest content of kavalactones in the kava plant. "Waka" grade kava is made of lateral roots only.
Pharmacology
Constituents
A total of 18 different kavalactones (or kavapyrones) have been identified to date, at least 15 of which are active. However, six of them, including kavain, dihydrokavain, methysticin, dihydromethysticin, yangonin, and desmethoxyyangonin, have been determined to be responsible for about 96% of the plant's pharmacological activity. Some minor constituents, including three chalcones — flavokavain A, flavokavain B, and flavokavain C — have also been identified, as well as a toxic alkaloid (not present in the consumable parts of the plant), pipermethystine. Alkaloids are present in the roots and leaves.
Pharmacodynamics
The following pharmacological actions have been reported for kava and/or its major active constituents:
Potentiation of GABAA receptor activity (by kavain, dihydrokavain, methysticin, dihydromethysticin, and yangonin).
Inhibition of the reuptake of norepinephrine (by kavain and methysticin) and possibly also of dopamine (by kavain and desmethoxyyangonin).
Binding to the CB1 receptor (by yangonin).
Inhibition of voltage-gated sodium channels and voltage-gated calcium channels (by kavain and methysticin).
Monoamine oxidase B reversible inhibition (by all six of the major kavalactones).
Receptor binding assays with botanical extracts have revealed direct interactions of leaf extracts of kava (which appear to be more active than root extracts) with the GABA (i.e., main) binding site of the GABAA receptor, the D2 receptor, the μ- and δ-opioid receptors, and the H1 and H2 receptors. Weak interaction with the 5-HT6 and 5-HT7 receptors and the benzodiazepine site of the GABAA receptor was also observed.
Potentiation of GABAA receptor activity may underlie the anxiolytic effects of kava, while elevation of dopamine levels in the nucleus accumbens likely underlie the moderately psychotropic effects the plant can produce. Changes in the activity of 5-HT neurons could explain the sleep-inducing action. However, failure of the GABAA receptor inhibitor flumazenil to reverse the anxiolytic effects of kava in mice suggests that benzodiazepine-like effects are not contributing to the pharmacological profile of kava extracts.
Heavy, long-term use of kava has not been found to be associated with reduced performance on saccade and cognitive tests, but it has been associated with elevated liver enzymes.
Detection
Recent usage of kava has been documented in forensic investigations by quantitation of kavain in blood specimens. The principal urinary metabolite, conjugated 4'-OH-kavain, is generally detectable for up to 48 hours.
Preparations
Traditional preparation
Kava is consumed in various ways throughout the Pacific Ocean cultures of Polynesia, Vanuatu, Melanesia, and some parts of Micronesia and Australia. Traditionally, it is prepared by either chewing, grinding, or pounding the roots of the kava plant. Grinding is done by hand against a cone-shaped block of dead coral; the hand forms a mortar and the coral a pestle. The ground root/bark is combined with only a little water, as the fresh root releases moisture during grinding. Pounding is done in a large stone with a small log. The product is then added to cold water and consumed as quickly as possible.
The extract is an emulsion of kavalactone droplets in starch. The taste is slightly pungent, while the distinctive aroma depends on whether it was prepared from dry or fresh plant, and on the variety. The colour is grey to tan to opaque greenish.
Kava prepared as described above is much more potent than processed kava. Chewing produces the strongest effect because it produces the finest particles. Fresh, undried kava produces a stronger beverage than dry kava. The strength also depends on the species and techniques of cultivation.
In Vanuatu, a strong kava drink is normally followed by a hot meal or tea. The meal traditionally follows some time after the drink so that the psychoactives are absorbed into the bloodstream more quickly. Traditionally, no flavoring is added.
In Papua New Guinea, the locals in Madang province refer to their kava as waild koniak ("wild cognac" in English).
Fijians commonly share a drink called grog, made by pounding sun-dried kava root into a fine powder, straining it, and mixing it with cold water. Traditionally, grog is drunk from the shorn half-shell of a coconut, called a bilo. Grog is very popular in Fiji, especially among young men, and often brings people together for storytelling and socializing. Drinking grog for a few hours brings a numbing and relaxing effect to the drinker; grog also numbs the tongue, and drinking a bilo is typically followed by a "chaser", a sweet or spicy snack.
Supplements and pharmaceutical preparations
Water extraction is the traditional method for preparation of the plant. Pharmaceutical and herbal supplement companies extract kavalactones from the kava plant using solvents such as supercritical carbon dioxide, acetone, and ethanol to produce pills standardized with between 30% and 90% kavalactones.
Concerns
Numerous scholars and regulatory bodies have raised concerns over the safety profile of such products.
One group of scholars say that organic solvents introduce compounds that may affect the liver into the standardized product; these compounds are not extracted by water and are consequently largely absent from kava prepared with water. For instance, when compared with water extraction, organic solvents extract vastly larger amounts of flavokavains, compounds associated with adverse reactions to kava that are present in very low concentrations in noble kava, but significant in non-noble.
Also, "chemical solvents used do not extract the same compounds as the natural water extracts in traditional use. The extraction process may exclude important modifying constituents soluble only in water". In particular, it has been noted that, unlike traditional water-based preparations, products obtained with the use of organic solvents do not contain glutathione, an important liver-protecting compound. Another group of researchers noted: "The extraction process (aqueous vs. acetone in the two types of preparations) is responsible for the difference in toxicity as extraction of glutathione in addition to the kava lactones is important to provide protection against hepatotoxicity".
It has also been argued that kavalactone extracts have often been made from low-quality plant material, including the toxic aerial parts of the plant that contain the hepatotoxic alkaloid pipermethystine, non-noble kava varieties, or plants affected by mold, which, in light of the chemical solvents' ability to extract far greater amounts of the potentially toxic compounds than water, makes such extracts particularly problematic.
In the context of these concerns, the World Health Organization advises against the consumption of ethanolic and acetonic kavalactone extracts, and says that "products should be developed from water-based suspensions of kava". The government of Australia prohibits the sales of such kavalactone extracts, and only permits the sale of kava products in their natural form or produced with cold water.
Kava culture
Kava is used for medicinal, religious, political, cultural, and social purposes throughout the Pacific. These cultures have a great respect for the plant and place a high importance on it. In Fiji, for example, a formal yaqona (kava) ceremony will often accompany important social, political, or religious functions, usually involving a ritual presentation of the bundled roots as a sevusevu (gift) and drinking of the yaqona itself. Due to the importance of kava in religious rituals and the seemingly (from the Western point of view) unhygienic preparation method, its consumption was discouraged or even banned by Christian missionaries.
Kava bars
With kava's increasing popularity, bars serving the plant in its liquid state are beginning to open up outside of the South Pacific.
While some bars have been committed to only serving the traditional forms and types of kava, other establishments have been accused of serving non-traditionally consumed non-noble kava varieties, which are cheaper but far more likely to cause unpleasant effects and adverse reactions, or of serving kava with other substances, including alcohol.
Effects of consumption
The nature of effects will largely depend on the cultivar of the kava plant and the form of its consumption. Traditionally, only noble kava cultivars have been consumed, as they are accepted as safe and produce desired effects. The specific effects of various noble kavas depend on various factors, such as the cultivar used (and the related specific composition of kavalactones), age of the plant, and method of consumption. However, it can be stated that in general, noble kava produces a state of calmness, relaxation, and well-being without diminishing cognitive performance. Kava may produce an initial talkative period, followed by muscle relaxation and eventual sleepiness.
As noted in one of the earliest Western publications on kava (1886): "A well prepared Kava potion drunk in small quantities produces only pleasant changes in behavior. It is therefore a slightly stimulating drink which helps relieve great fatigue. It relaxes the body after strenuous efforts, clarifies the mind and sharpens the mental faculties".
Other effects include euphoria, feelings of happiness and relaxation, reduced appetite, relaxed muscles and sedation.
In very high doses, some people may experience a "dream"- or "trance"-like state, mild dissociation, stronger euphoric effects, and mild hallucinations, but most of these effects result from heavy misuse of kava. Kava is not a psychedelic like psilocybin mushrooms or LSD, nor a dissociative like ketamine, and it should not be used for psychedelic or dissociative purposes; such use is highly dangerous and can result in liver damage or other serious side effects.
Despite its psychoactive effects, kava is not considered to be physically addictive and its use does not lead to dependency.
Toxicity, safety, and potential side effects
General observations
There is limited safety information available on the effects of kava consumption, but in general, moderate consumption appears unlikely to be harmful, while there is evidence of harm from heavy use.
Effects on the liver
There is published evidence of the hepatotoxicity of kava extracts, and concerns about this led to kava being omitted from the US Pharmacopeia.
Other adverse reactions
Adverse reactions may result from the poor quality of kava raw material used in the manufacturing of various kava products. In addition to the potential for hepatotoxicity, adverse reactions from chronic use may include visual impairment, rashes or dermatitis, seizures, weight loss, and malnutrition, but there is only limited high-quality research on these possible effects.
On the basis of research findings and long history of safe use across the South Pacific, experts recommend using water-based extractions of high-quality peeled rhizome and roots of the noble kava cultivars to minimize the potential of adverse reactions to chronic use.
Potential interactions
Several adverse interactions with drugs have been documented, both prescription and nonprescription — including, but not limited to, anticonvulsants, alcohol, anxiolytics (central nervous system depressants such as benzodiazepines), antipsychotics, levodopa, diuretics, and drugs metabolized by CYP450 in the liver.
A few notable potential drug interactions are, but are not limited to:
Alcohol: It has been reported that combined use of alcohol and kava extract can have additive sedative effects. Regarding cognitive function, kava taken with alcohol has been shown to impair cognition more than alcohol taken with placebo.
Anxiolytics (CNS depressants such as benzodiazepines and barbiturates): Kava may have potential additive CNS depressant effects (such as sedation and anxiolytic effects) with benzodiazepines and barbiturates. Kava taken in combination with alprazolam can cause a semicomatose state in humans.
Dopamine agonist — levodopa: One of levodopa's chronic side effects that Parkinson's patients experience is the "on-off phenomenon" of motor fluctuations, with periods oscillating between "on", where the patient experiences symptomatic relief, and "off", where the therapeutic effect wears off early. Taking levodopa and kava together has been shown to increase the frequency of this "on-off phenomenon".
Kava dermopathy
Long-term and heavy kava consumption is associated with a reversible skin condition known as "kava dermopathy", or kanikani (in the Fijian language), characterised by dry and scaly skin covering the palms of the hands, soles of the feet, and back. The first symptom to appear is usually dry, peeling skin; some Pacific Islanders deliberately consume large quantities of kava for several weeks in order to get the peeling effect, resulting in a layer of new skin. These effects appeared at consumption levels between to a week of kava powder. Despite numerous studies, the mechanism that causes kava dermopathy is poorly understood "but may relate to interference with cholesterol metabolism". The condition is easily treatable with abstinence from or lowering of kava intake, as the skin returns to its normal state within a couple of weeks of reduced or no kava use. Kava dermopathy should not be confused with rare instances of allergic reactions to kava that are usually characterised by an itchy rash or puffy face.
Research
Kava is under preliminary research for its potential psychoactive (primarily anxiolytic), sleep-inducing, and sleep-enhancing properties. Preliminary analysis of kava effects in people with short-term anxiety disorders indicated a small level of improvement.
Traditional medicine
Over centuries, kava has been used in the traditional medicine of the South Pacific Islands for central nervous system and peripheral effects. As noted in one literature review: "Peripherally, kava is indicated in traditional Pacific medicine for urogenital conditions (gonorrhea infections, chronic cystitis, difficulty urinating), reproductive and women's health (...), gastrointestinal upsets, respiratory ailments (asthma, coughs, and tuberculosis), skin diseases and topical wounds, and as an analgesic, with significant subtlety and nuance attending the precise strain, plant component (leaf, stem, root) and preparative method to be used".
Regulation
Kava remains legal in most countries. Regulations often treat it as a food or dietary supplement.
Australia
In Australia, the supply of kava is regulated through the National Code of Kava Management. Travellers to Australia are allowed to bring up to 4 kg of kava in their baggage, provided they are at least 18 years old and the kava is in root or dried form. Commercial import of larger quantities is allowed, under licence for medical or scientific purposes. These restrictions were introduced in 2007 after concern about abuse of kava in indigenous communities. Initially, the import limit was 2 kg per person; it was raised to 4 kg in December 2019, and a pilot program allowing for commercial importation was implemented on 1 December 2021.
The Australian Therapeutic Goods Administration has recommended no more than 250 mg of kavalactones be taken in a 24‑hour period.
Kava possession is limited to 2 kg per adult in the Northern Territory. While it was previously banned in Western Australia in the 2000s, the Western Australian Health Department announced the lifting of the ban in February 2017, bringing Western Australia "into line with other States" where it has always remained legal, albeit closely regulated.
Europe
Following discussions on the safety of certain pharmaceutical products derived from kava and sold in Germany, the EU imposed a temporary ban on imports of kava-based pharmaceutical products in 2002. The sale of kava plant became regulated in Switzerland, France, and in prepared form in the Netherlands. Some Pacific island states which had been benefiting from the export of kava to the pharmaceutical companies have attempted to overturn the EU ban on kava-based pharmaceutical products by invoking international trade agreements at the WTO: Fiji, Samoa, Tonga, and Vanuatu argued that the ban was imposed with insufficient evidence. The pressure prompted Germany to reconsider the evidence base for banning kava-based pharmaceutical products. On 10 June 2014, the German Administrative Court overturned the 2002 ban, making selling kava as a medicine legal (personal possession of kava has never been illegal), albeit strictly regulated. In Germany, kava-based pharmaceutical preparations are currently prescription drugs. Furthermore, patient and professional information brochures have been redesigned to warn about potential side effects. These strict measures have been opposed by some of the leading kava scientists. In early 2016, a court case was filed against the Bundesinstitut für Arzneimittel und Medizinprodukte (BfArM/German Federal Institute for Drugs and Medical Devices), arguing that the new regulatory regime is too strict and not justified.
In the United Kingdom, it is a criminal offence to sell, supply, or import any medicinal product containing kava for human consumption. It is legal to possess kava for personal use or to import it for purposes other than human consumption (e.g., for animals).
Until August 2018, Poland was the only EU country with an "outright ban on kava", where mere possession of kava was prohibited and could result in a prison sentence. Under the new legislation, kava is no longer listed among prohibited substances, and it is therefore legal to possess, import, and consume the plant, but it remains illegal to sell it within Poland for the purpose of human consumption.
In the Netherlands, for unknown reasons, the ban was never lifted, and it is still prohibited to prepare, manufacture, or trade kava or goods containing kava.
New Zealand
When used traditionally, kava is regulated as a food under the Food Standards Code. Kava may also be used as an herbal remedy, where it is currently regulated by the Dietary Supplements Regulations. Only traditionally consumed forms and parts of the kava plant (i.e., pure roots of the kava plant, water extractions prepared from these roots) can legally be sold as food or dietary supplements in New Zealand. The aerial parts of the plant (growing up and out of the ground), unlike the roots, contain relatively small amounts of kavalactones; instead, they contain a mildly toxic alkaloid, pipermethystine. Because the aerial parts are not traditionally consumed, the sale of aerial plant sections and of non-water-based extracts (such as supercritical CO2, acetonic, or ethanolic extractions) is prohibited for the purpose of human consumption (but they can be sold as ingredients in cosmetics or other products not intended for human consumption).
North America
In 2002, Health Canada issued an order prohibiting the sale of any product containing kava. While the restrictions on kava were lifted in 2012, Health Canada lists five kava ingredients as of 2017.
In 2002, the U.S. Food and Drug Administration issued a Consumer Advisory: "Kava-Containing Dietary Supplements May be Associated with Severe Liver Injury". No legal action was taken, and this advisory has since been archived.
Vanuatu
The Pacific island-state of Vanuatu has passed legislation to regulate the quality of its kava exports. Vanuatu prohibits the export or consumption of non-noble kava varieties or the parts of the plant that are unsuitable for consumption (such as leaves and stems).
| Biology and health sciences | Piperales | Plants |
48837 | https://en.wikipedia.org/wiki/Sidereal%20time | Sidereal time | Sidereal time is a system of timekeeping used especially by astronomers. Using sidereal time and the celestial coordinate system, it is easy to locate the positions of celestial objects in the night sky. Sidereal time is a "time scale that is based on Earth's rate of rotation measured relative to the fixed stars".
Viewed from the same location, a star seen at one position in the sky will be seen at the same position on another night at the same time of day (or night), if the day is defined as a sidereal day (also known as the sidereal rotation period). This is similar to how the time kept by a sundial (Solar time) can be used to find the location of the Sun. Just as the Sun and Moon appear to rise in the east and set in the west due to the rotation of Earth, so do the stars. Both solar time and sidereal time make use of the regularity of Earth's rotation about its polar axis: solar time is reckoned according to the position of the Sun in the sky while sidereal time is based approximately on the position of the fixed stars on the theoretical celestial sphere.
More exactly, sidereal time is the angle, measured along the celestial equator, from the observer's meridian to the great circle that passes through the March equinox (the northern hemisphere's vernal equinox) and both celestial poles, and is usually expressed in hours, minutes, and seconds. (In the context of sidereal time, "March equinox" or "equinox" or "first point of Aries" is currently a direction, from the center of the Earth along the line formed by the intersection of the Earth's equator and the Earth's orbit around the Sun, toward the constellation Pisces; during ancient times it was toward the constellation Aries.) Common time on a typical clock (using mean Solar time) measures a slightly longer cycle, affected not only by Earth's axial rotation but also by Earth's orbit around the Sun.
The March equinox itself precesses slowly westward relative to the fixed stars, completing one revolution in about 25,800 years, so the misnamed "sidereal" day ("sidereal" is derived from the Latin sidus meaning "star") is 0.0084 seconds shorter than the stellar day, Earth's actual period of rotation relative to the fixed stars.
The slightly longer stellar period is measured as the Earth rotation angle (ERA), formerly the stellar angle. An increase of 360° in the ERA is a full rotation of the Earth.
A sidereal day on Earth is approximately 86164.0905 seconds (23 h 56 min 4.0905 s or 23.9344696 h).
(Seconds are defined as per International System of Units and are not to be confused with ephemeris seconds.)
Each day, sidereal time at any given place gains about four minutes on local civil time (which is based on solar time), so that over a complete year the number of sidereal "days" is one more than the number of solar days.
Comparison to solar time
Solar time is measured by the apparent diurnal motion of the Sun. Local noon in apparent solar time is the moment when the Sun is exactly due south or north (depending on the observer's latitude and the season). A mean solar day (what we normally measure as a "day") is the average time between local solar noons ("average" since this varies slightly over a year).
Earth makes one rotation around its axis each sidereal day; during that time it moves a short distance (about 1°) along its orbit around the Sun. So after a sidereal day has passed, Earth still needs to rotate slightly more before the Sun reaches local noon according to solar time. A mean solar day is, therefore, nearly 4 minutes longer than a sidereal day.
The stars are so far away that Earth's movement along its orbit makes nearly no difference to their apparent direction (except for the nearest stars if measured with extreme accuracy; see parallax), and so they return to their highest point at the same time each sidereal day.
Another way to understand this difference is to notice that, relative to the stars, as viewed from Earth, the position of the Sun at the same time each day appears to move around Earth once per year. A year has about 365.24 solar days but 366.24 sidereal days. Therefore, there is one fewer solar day per year than there are sidereal days, similar to an observation of the coin rotation paradox. This makes a sidereal day approximately 365.24/366.24 times (about 0.99727) the length of the 24-hour solar day.
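As an informal check of these figures, the following Python sketch (illustrative only, using the approximate day lengths quoted above) computes the solar-sidereal difference and the day-length ratio:

    # Approximate day lengths quoted in this article, in SI seconds.
    SOLAR_DAY = 86400.0          # mean solar day
    SIDEREAL_DAY = 86164.0905    # sidereal day

    # A sidereal day is shorter by roughly 3 min 56 s ("about four minutes").
    print(SOLAR_DAY - SIDEREAL_DAY)       # ~235.91 seconds

    # Ratio of sidereal to solar day length, matching 365.24/366.24.
    print(SIDEREAL_DAY / SOLAR_DAY)       # ~0.997270
    print(365.24 / 366.24)                # ~0.997270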
Effects of precession
Earth's rotation is not a simple rotation around an axis that remains always parallel to itself. Earth's rotational axis itself rotates about a second axis, orthogonal to the plane of Earth's orbit, taking about 25,800 years to perform a complete rotation. This phenomenon is termed the precession of the equinoxes. Because of this precession, the stars appear to move around Earth in a manner more complicated than a simple constant rotation.
For this reason, to simplify the description of Earth's orientation in astronomy and geodesy, it was conventional to chart the positions of the stars in the sky according to right ascension and declination, which are based on a frame of reference that follows Earth's precession, and to keep track of Earth's rotation, through sidereal time, relative to this frame as well. (The conventional reference frame, for purposes of star catalogues, was replaced in 1998 with the International Celestial Reference Frame, which is fixed with respect to extra-galactic radio sources. Because of the great distances, these sources have no appreciable proper motion.) In this frame of reference, Earth's rotation is close to constant, but the stars appear to rotate slowly with a period of about 25,800 years. It is also in this frame of reference that the tropical year (or solar year), the year related to Earth's seasons, represents one orbit of Earth around the Sun. The precise definition of a sidereal day is the time taken for one rotation of Earth in this precessing frame of reference.
Modern definitions
In the past, time was measured by observing stars with instruments such as photographic zenith tubes and Danjon astrolabes, and the passage of stars across defined lines would be timed with the observatory clock. Then, using the right ascension of the stars from a star catalog, the time when the star should have passed through the meridian of the observatory was computed, and a correction to the time kept by the observatory clock was computed. Sidereal time was defined such that the March equinox would transit the meridian of the observatory at 0 hours local sidereal time.
Beginning during the 1970s, the radio astronomy methods very-long-baseline interferometry (VLBI) and pulsar timing overtook optical instruments for the most precise astrometry. This resulted in the determination of UT1 (mean solar time at 0° longitude) using VLBI, a new measure of the Earth Rotation Angle, and new definitions of sidereal time. These changes became effective 1 January 2003.
Earth rotation angle
The Earth rotation angle (ERA) measures the rotation of the Earth from an origin on the celestial equator, the Celestial Intermediate Origin, also termed the Celestial Ephemeris Origin, that has no instantaneous motion along the equator; it was originally referred to as the non-rotating origin. This point is very close to the equinox of J2000.
ERA, measured in radians, is related to UT1 by a simple linear relation:

θ = 2π (0.7790572732640 + 1.00273781191135448 tU)

where tU is the Julian UT1 date (JD) minus 2451545.0.
The linear coefficient represents the Earth's rotation speed around its own axis.
ERA replaces Greenwich Apparent Sidereal Time (GAST). The origin on the celestial equator for GAST, termed the true equinox, does move, due to the movement of the equator and the ecliptic. The lack of motion of the origin of ERA is considered a significant advantage.
The ERA may be converted to other units; for example, the Astronomical Almanac for the Year 2017 tabulated it in degrees, minutes, and seconds.
As an example, the Astronomical Almanac for the Year 2017 gave the ERA at 0 h 1 January 2017 UT1 as 100° 37′ 12.4365″. Since Coordinated Universal Time (UTC) is within a second or two of UT1, this can be used as an anchor to give the ERA approximately for a given civil time and date.
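The tabulated value can be reproduced approximately with a short Python sketch; the coefficients below are the standard published ERA-UT1 relation, assumed here rather than taken from this article's tables:

    def era_degrees(jd_ut1):
        """Earth rotation angle, in degrees, for a Julian UT1 date."""
        t_u = jd_ut1 - 2451545.0
        # ERA = 2*pi*(0.7790572732640 + 1.00273781191135448 * t_u) mod 2*pi,
        # computed here in whole turns to limit rounding error.
        turns = (0.7790572732640 + 1.00273781191135448 * t_u) % 1.0
        return turns * 360.0

    # 0 h UT1 on 1 January 2017 is Julian date 2457754.5.
    era = era_degrees(2457754.5)
    d = int(era)
    m = int((era - d) * 60)
    s = ((era - d) * 60 - m) * 60
    print(d, m, round(s, 4))   # close to the tabulated 100° 37′ 12.4365″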
Mean and apparent varieties
Although ERA is intended to replace sidereal time, there is a need to maintain definitions for sidereal time during the transition, and when working with older data and documents.
Similarly to mean solar time, every location on Earth has its own local sidereal time (LST), depending on the longitude of the point. Since it is not feasible to publish tables for every longitude, astronomical tables use Greenwich sidereal time (GST), which is sidereal time on the IERS Reference Meridian, less precisely termed the Greenwich, or Prime meridian. There are two varieties, mean sidereal time if the mean equator and equinox of date are used, and apparent sidereal time if the apparent equator and equinox of date are used. The former ignores the effect of astronomical nutation while the latter includes it. When the choice of location is combined with the choice of including astronomical nutation or not, the acronyms GMST, LMST, GAST, and LAST result.
The following relationships are true:

local mean sidereal time = GMST + east longitude
local apparent sidereal time = GAST + east longitude
The new definitions of Greenwich mean and apparent sidereal time (since 2003, see above) are:

GMST = θ − EPREC
GAST = θ − E0

such that θ is the Earth Rotation Angle, EPREC is the accumulated precession, and E0 is the equation of the origins, which represents accumulated precession and nutation. The calculation of precession and nutation was described in Chapter 6 of Urban & Seidelmann.
As an example, the Astronomical Almanac for the Year 2017 gave the ERA at 0 h 1 January 2017 UT1 as 100° 37′ 12.4365″ (6 h 42 m 28.8291 s). The GAST was 6 h 43 m 20.7109 s. For GMST the hour and minute were the same but the second was 21.1060.
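Converting between the angle and time forms of these quantities uses 360° = 24 h. A minimal Python sketch (variable names are illustrative), checked against the almanac values quoted above:

    def deg_to_hms(angle_deg):
        """Convert an angle in degrees to (hours, minutes, seconds) of time."""
        hours = angle_deg / 15.0                # 15 degrees per hour of time
        h = int(hours)
        m = int((hours - h) * 60)
        s = ((hours - h) * 60 - m) * 60
        return h, m, round(s, 4)

    era_deg = 100 + 37/60 + 12.4365/3600        # ERA at 0 h 1 Jan 2017 UT1
    print(deg_to_hms(era_deg))                  # ~(6, 42, 28.829), as tabulated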
Relationship between solar time and sidereal time intervals
If a certain interval I is measured in both mean solar time (UT1) and sidereal time, the numerical value will be greater in sidereal time than in UT1, because sidereal days are shorter than UT1 days. The ratio is:

r′ = 1.002737909350795 + 5.9006 × 10^−11 t − 5.9 × 10^−15 t^2

such that t represents the number of Julian centuries elapsed since noon 1 January 2000 Terrestrial Time.
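A sketch applying this ratio; the coefficients are the standard almanac values (an assumption here, matching the reconstructed expression above):

    def sidereal_per_ut1(t_centuries=0.0):
        # Ratio of a sidereal-time interval to the same interval in UT1.
        return (1.002737909350795
                + 5.9006e-11 * t_centuries
                - 5.9e-15 * t_centuries ** 2)

    # One UT1 hour expressed in sidereal minutes (about 60 min 9.86 s):
    print(60.0 * sidereal_per_ut1())   # ~60.1643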
Sidereal days compared to solar days on other planets
Six of the eight solar planets have prograde rotation—that is, they rotate more than once per year in the same direction as they orbit the Sun, so the Sun rises in the east. Venus and Uranus, however, have retrograde rotation. For prograde rotation, the formula relating the lengths of the sidereal and solar days is:

number of sidereal days per orbital period = 1 + number of solar days per orbital period

or, equivalently:

length of solar day = length of sidereal day / (1 − length of sidereal day / orbital period)
When calculating the formula for a retrograde rotation, the operator of the denominator will be a plus sign (put another way, in the original formula the length of the sidereal day must be treated as negative). This is due to the solar day being shorter than the sidereal day for retrograde rotation, as the rotation of the planet would be against the direction of orbital motion.
If a planet rotates prograde, and the sidereal day exactly equals the orbital period, then the formula above gives an infinitely long solar day (division by zero). This is the case for a planet in synchronous rotation; in the case of zero eccentricity, one hemisphere experiences eternal day, the other eternal night, with a "twilight belt" separating them.
All the solar planets more distant from the Sun than Earth are similar to Earth in that, since they experience many rotations per revolution around the Sun, there is only a small difference between the length of the sidereal day and that of the solar day – the ratio of the former to the latter never being less than Earth's ratio of 0.997. But the situation is quite different for Mercury and Venus. Mercury's sidereal day is about two-thirds of its orbital period, so by the prograde formula its solar day lasts for two revolutions around the Sun – three times as long as its sidereal day. Venus rotates retrograde with a sidereal day lasting about 243.0 Earth days, or about 1.08 times its orbital period of 224.7 Earth days; hence by the retrograde formula its solar day is about 116.8 Earth days, and it has about 1.9 solar days per orbital period.
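These figures can be checked numerically. A minimal Python sketch of the formula (day lengths and orbital periods in Earth days; Mercury's values are standard figures assumed here, the others are quoted in this section):

    def solar_day(sidereal_day, orbital_period, retrograde=False):
        """Solar day length from the sidereal day and orbital period.

        For retrograde rotation the denominator takes a plus sign."""
        sign = 1.0 if retrograde else -1.0
        return sidereal_day / (1.0 + sign * sidereal_day / orbital_period)

    print(solar_day(0.9972696, 365.25636))            # Earth: ~1.0 day
    print(solar_day(58.646, 87.969))                  # Mercury: ~175.9 days
    print(solar_day(243.0, 224.7, retrograde=True))   # Venus: ~116.7 days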
By convention, rotation periods of planets are given in sidereal terms unless otherwise specified.
| Physical sciences | Celestial mechanics | Astronomy |
48838 | https://en.wikipedia.org/wiki/Hour%20angle | Hour angle | In astronomy and celestial navigation, the hour angle is the dihedral angle between the meridian plane (containing Earth's axis and the zenith) and the hour circle (containing Earth's axis and a given point of interest).
It may be given in degrees, time, or rotations depending on the application.
The angle may be expressed as negative east of the meridian plane and positive west of the meridian plane, or as positive westward from 0° to 360°. The angle may be measured in degrees or in time, with 24h = 360° exactly.
In celestial navigation, the convention is to measure in degrees westward from the prime meridian (Greenwich hour angle, GHA), from the local meridian (local hour angle, LHA) or from the first point of Aries (sidereal hour angle, SHA).
The hour angle is paired with the declination to fully specify the location of a point on the celestial sphere in the equatorial coordinate system.
Relation with right ascension
The local hour angle (LHA) of an object in the observer's sky is

LHAobject = LST − αobject

or

LHAobject = GST + λobserver − αobject

where LHAobject is the local hour angle of the object, LST is the local sidereal time, αobject is the object's right ascension, GST is Greenwich sidereal time and λobserver is the observer's longitude (positive east from the prime meridian). These angles can be measured in time (24 hours to a circle) or in degrees (360 degrees to a circle)—one or the other, not both.
Negative hour angles (−180° < LHAobject < 0°) indicate the object is approaching the meridian, positive hour angles (0° < LHAobject < 180°) indicate the object is moving away from the meridian; an hour angle of zero means the object is on the meridian.
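A minimal Python sketch of these relations (the 24-hour wrap and the sign convention follow the text; names are illustrative):

    def local_hour_angle(lst_hours, ra_hours):
        """LHA = LST - RA, wrapped into the range (-12 h, +12 h]."""
        lha = (lst_hours - ra_hours) % 24.0
        if lha > 12.0:
            lha -= 24.0    # negative: object east of, and approaching, the meridian
        return lha

    # An object with right ascension 6 h when local sidereal time is 4 h 30 m:
    print(local_hour_angle(4.5, 6.0))   # -1.5 h, i.e. 1.5 h before transit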
Right ascension is frequently given in sexagesimal hours-minutes-seconds format (HH:MM:SS) in astronomy, though it may be given in decimal hours, sexagesimal degrees (DDD:MM:SS), or decimal degrees.
Solar hour angle
Observing the Sun from Earth, the solar hour angle is an expression of time, expressed in angular measurement, usually degrees, from solar noon. At solar noon the hour angle is zero degrees, with the time before solar noon expressed as negative degrees, and the local time after solar noon expressed as positive degrees. For example, at 10:30 AM local apparent time the hour angle is −22.5° (15° per hour times 1.5 hours before noon).
The cosine of the hour angle (cos(h)) is used to calculate the solar zenith angle. At solar noon, h = 0, so cos(h) = 1, and before and after solar noon the cos(±h) term returns the same value for morning (negative hour angle) as for afternoon (positive hour angle), so the Sun is at the same altitude in the sky at 11:00 AM and 1:00 PM solar time.
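A sketch of this arithmetic (15° per hour from solar noon; purely illustrative):

    import math

    def solar_hour_angle_deg(local_solar_hours):
        """Hour angle in degrees: negative before solar noon, positive after."""
        return 15.0 * (local_solar_hours - 12.0)

    print(solar_hour_angle_deg(10.5))   # -22.5, the 10:30 AM example above

    h_am = math.radians(solar_hour_angle_deg(11.0))   # 11:00 AM
    h_pm = math.radians(solar_hour_angle_deg(13.0))   # 1:00 PM
    print(math.cos(h_am), math.cos(h_pm))             # equal: cos(-h) == cos(h)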
Sidereal hour angle
The sidereal hour angle (SHA) of a body on the celestial sphere is its angular distance west of the March equinox generally measured in degrees. The SHA of a star varies by less than a minute of arc per year, due to precession, while the SHA of a planet varies significantly from night to night. SHA is often used in celestial navigation and navigational astronomy, and values are published in astronomical almanacs.
| Physical sciences | Celestial sphere: General | Astronomy |
48866 | https://en.wikipedia.org/wiki/Gadiformes | Gadiformes | Gadiformes, also called the Anacanthini, are an order of ray-finned fish that include the cod, hakes, pollock, haddock, burbot, rocklings and moras, many of which are food fish of major commercial value. They are mostly marine fish found throughout the world and the vast majority are found in temperate or colder regions (tropical species are typically deep-water), while a few species may enter brackish estuaries. The Pacific tomcod, one of the two species that make up the genus Microgadus, is able to enter freshwater, but there is no evidence that it breeds there. Some populations of landlocked Atlantic tomcod, on the other hand, complete their entire life cycle in freshwater. Yet only one species, the burbot (Lota lota), is a true freshwater fish.
Common characteristics include the positioning of the pelvic fins (if present) below or in front of the pectoral fins. Gadiformes are physoclists, which means their swim bladders do not have a pneumatic duct. The fins are spineless. Gadiform fish range in size from the codlets, which may be as small as in adult length, to the Atlantic cod, Gadus morhua, which reaches up to .
The earliest gadiforms are Palaeogadus weltoni from the Maastrichtian of the United States and the undescribed, informally named "Protocodus" from the Early Paleocene of Greenland.
Timeline of genera
| Biology and health sciences | Acanthomorpha | Animals |
48868 | https://en.wikipedia.org/wiki/Sparidae | Sparidae | Sparidae is a family of ray-finned fishes belonging to the order Spariformes, the seabreams and porgies, although they were traditionally classified in the order Perciformes. They are found in shallow temperate and tropical waters around the world and are demersal carnivores.
Taxonomy
Sparidae was first proposed as a family in 1818 by the French polymath and naturalist Constantine Samuel Rafinesque. Traditionally, the taxa within the Spariformes were classified within the Perciformes, with some authorities using the term "Sparoid lineage" for the families Centracanthidae, Nemipteridae, Lethrinidae and Sparidae. Since then, the use of molecular phylogenetics in more modern classifications has meant that the Spariformes is recognised as a valid order within the Percomorpha containing six families, with Callanthiidae, Sillaginidae and Lobotidae included. Other workers have found that the Centracanthidae is synonymous with Sparidae and that the Spariformes contains only the remaining three families of the "Sparoid lineage".
In the past, workers recognised six subfamilies within the Sparidae: Boopsinae, Denticinae, Diplodinae, Pagellinae, Pagrinae, and Sparinae. However, these taxa did not resolve as monophyletic in all the analyses undertaken. These analyses support Sparidae as a monophyletic family if Spicara, a genus formerly in the family Centracanthidae, is included. This means that Spicara and Centracanthus are both now classified within Sparidae, and that Centracanthidae is a junior synonym of Sparidae.
Etymology
Sparidae takes its name from its type genus, Sparus, which derives from sparos, the Greek name for its only species, the gilt-head bream (Sparus aurata).
Genera
The family Sparidae contains about 155 species in 38 genera:
Fossil genera include:
†Abromasta Day, 2003
†Burtinia van Beneden, 1873
†Crommyodus Cope, 1875
†Ctenodentex Storms, 1896
†Ellaserrata Day, 2003
†Kreyenhagenius David, 1946
†Paracalamus Arambourg, 1927
†Plectrites Jordan & Gilbert, 1920
†Pseudosparnodus Day, 2003
†Pshekharus Bannikov & Kotylar, 2015
†Rhythmias Jordan & Gilbert, 1920
†Sciaenurus Agassiz, 1845
†Sparnodus Agassiz, 1838
Characteristics
Sparidae breams have oblong, moderately deep and compressed bodies. The head is large, with a characteristic steep dorsal slant. There are no scales on the snout but there are scales on the cheeks. The preoperculum may or may not have scales and has no spines or serrations on its margin. The operculum is scaled and also has no spines. The mouth is slightly oblique and can be protruded a little. The upper jaw never extends back past a vertical line through the centre of the eye. There are teeth in the jaws which vary from conical to flattened, but there are no teeth on the roof of the mouth. There is one dorsal fin which is supported by between 10 and 13 spines and 9 and 17 soft rays, with the ultimate ray being split into 2, and no incision separates the spines from the soft rays. The rearmost spines in the dorsal fin may be elongated or filamentous. The anal fin is supported by 3 robust spines and between 7 and 15 soft rays. The caudal fin varies from moderately deeply emarginate to forked. The pectoral fins are typically long and pointed, and the pelvic fins are under or immediately to the rear of the bases of the pectoral fins, supported by a single spine and 5 soft rays, with a scale in the axilla referred to as the axillary pelvic process. The scales are typically either smooth (cycloid) or slightly rough to the touch (weakly ctenoid). The lateral line is single and continuous and reaches the base of the caudal fin. They are very variable in colour and may be pinkish or reddish to yellowish or greyish, frequently with tints of silver or gold and dark or coloured spots, stripes or bars. The two largest species of sparid are the white steenbras (Lithognathus lithognathus) and the red steenbras (Petrus rupestris), both of which have a maximum published total length of , while the smallest species is the cherry seabream (Polysteganus cerasinus).
Distribution and habitat
Sparidae breams are found in tropical and temperate coastal waters around the world. They are demersal fishes on the continental shelf and slope. A few species are found in brackish water, and a few of these will enter fresh water.
Biology
Sparidae breams are predatory, with most feeding on benthic invertebrates. Smaller species in the family usually gather in schools, as do the juveniles of the larger species. The larger adult fishes are normally solitary or, at least, less sociable, and prefer deeper waters. The juveniles and subadults are often markedly different in shape and colour patterns, and may be much more colourful. Many sparids are hermaphroditic, and some have both male and female sex organs at the same time. Others change sex as they grow, either from male to female (protandrous) or from female to male (protogynous).
Fisheries
Sparids are highly regarded as food fish and are important target species for commercial fisheries wherever they occur. Between 1990 and 1995, the FAO Yearbook of Fishery Statistics reported that the annual weight of landings was between of sparids in the Western Central Pacific.
Cookery
The most celebrated of the breams in cookery are the gilt-head bream and the common dentex.
| Biology and health sciences | Acanthomorpha | Animals |
48869 | https://en.wikipedia.org/wiki/Clupeiformes | Clupeiformes | Clupeiformes is the order of ray-finned fish that includes the herrings (family Clupeidae), the anchovies (family Engraulidae), and the sardines. The group includes many of the most important forage and food fish.
Clupeiformes are physostomes, which means that their gas bladder has a pneumatic duct connecting it to the gut. They typically lack a lateral line, but still have the eyes, fins and scales that are common to most fish, though not all fish have these attributes. They are generally silvery fish with streamlined, spindle-shaped bodies, and they often school. Most species eat plankton which they filter from the water with their gill rakers.
The former order of Isospondyli was subsumed mostly by Clupeiformes, but some isospondylous fishes (isospondyls) were assigned to Osteoglossiformes, Salmoniformes, Cetomimiformes, etc.
Their sister group was the extinct Ellimmichthyiformes, which were dominant throughout much of the Cretaceous and into the Paleogene, and often coexisted with clupeiforms at many known localities. Both groups closely resembled each other morphologically, although the Ellimmichthyiformes evolved some highly divergent body plans later in the Cretaceous.
Several fossil clupeiforms are known from the Early Cretaceous of South America that appear to be more closely allied with the Clupeoidei than with the Denticipitidae. This suggests a very deep divergence within the crown group Clupeiformes that must have occurred during the Early Cretaceous or before.
Families
The order includes about 405 species in ten families:
Order Clupeiformes
Genus †Histiothrissa Woodward, 1901
Genus ?†Jhingrania Misra & Saxena, 1959 (possibly a clupavid)
Genus †Santanaclupea Maisey, 1993
Genus †Spratticeps Patterson, 1970
Suborder Denticipitoidei Grande, 1982
Family Denticipitidae Clausen, 1959 (denticle herring)
Suborder Clupeoidei Bleeker, 1849
Genus †Beksinskiella Granica, Bieńowska-Wasiluki & Paldyna, 2004
Genus †Nolfia De Figueiredo, 2009
Genus †Pseudoellima De Figueiredo, 2009
Family †Cynoclupeidae Malabarba & Di Dario, 2017
Family Spratelloididae D. S. Jordan 1925 (dwarf herrings or small round herrings)
Family Engraulidae Gill, 1861 (anchovies)
Family Clupeidae Cuvier, 1816 (herrings and sprats)
Family Chirocentridae Bleeker, 1849 (wolf herrings)
Family Dussumieriidae Gill, 1861 (round herrings or rainbow sardines)
Family Pristigasteridae Bleeker, 1872 (longfin herrings)
Family Ehiravidae Deraniyagala, 1929 (river sprats)
Family Alosidae Svetovidov, 1952 (shads and sardines)
Family Dorosomatidae Gill, 1861 (thread herrings or gizzard shads and sardinellas)
Timeline of genera
| Biology and health sciences | Clupeiformes | Animals |
48870 | https://en.wikipedia.org/wiki/Clupeidae | Clupeidae | Clupeidae is a family of clupeiform ray-finned fishes, comprising, for instance, the herrings and sprats. Many members of the family have a body protected with shiny cycloid (very smooth and uniform) scales, a single dorsal fin, and a fusiform body for quick, evasive swimming and pursuit of prey composed of small planktonic animals. Due to their small size and position in the lower trophic level of many marine food webs, the levels of methylmercury they bioaccumulate are very low, reducing the risk of mercury poisoning when consumed.
The earliest known fossil members of this group are the stem-clupeids Italoclupea and Lecceclupea from the late Campanian/early Maastrichtian of Italy.
Description and biology
Clupeids are mostly marine forage fish, although a few species are found in fresh water. No species has scales on the head, and some are entirely scaleless. The lateral line is short or absent, and the teeth are unusually small where they are present at all. Clupeids typically feed on plankton, and range from in length.
Clupeids spawn huge numbers of eggs (up to 200,000 in some species) near the surface of the water. After hatching, the larvae live among the plankton until they develop a swim bladder and transform into adults. These eggs and fry are not protected or tended to by parents. The adults typically live in large shoals, seeking protection from piscivorous predators such as birds, sharks and other predatory fish, toothed whales, marine mammals, and jellyfish. They also form bait balls.
Commercially important species of the Clupeidae include the Atlantic and Baltic herrings (Clupea harengus), and the Pacific herring (C. pallasii).
Feeding physiology
The Clupeidae family primarily feed on small planktonic organisms. The teeth of members of this family are either reduced or absent; reduced teeth are miniature teeth, barely visible, that line the interior of the fish's mouth. The structure of these teeth indicates that these organisms do not need to cut or tear their prey items, as they would need fully formed teeth to do so. They do, however, possess long gill rakers designed for sifting plankton and other small particles out of the water as it passes through their gills. Gill rakers are protrusions along the gill arch, opposing the gill filaments, that help aquatic organisms trap food particles.
The diet of many clupeids primarily consists of phytoplankton and plant matter during their larval stages. As the fish mature, this diet shifts towards larger and more substantive organisms, including more zooplankton and copepods. This change in diet is possible due to their increase in body and gill raker size, which allows them to capture and process larger organisms. Small organisms like these do not need to be ground or torn apart for consumption, so pronounced teeth would not serve a purpose in the feeding habits of Clupeidae; instead, filter feeding allows for much more efficient nutrient collection.
The fusiform body shape of Clupeidae is also advantageous to their trophic ecology. The tapering body is highly hydrodynamic, allowing for quick acceleration and a high maximum speed. Moving at high speeds allows the members of this family to regulate their feeding habits and avoid predators. Clupeidae can moderate the speed at which they swim to increase their uptake of nutrients. As with all filter feeders, Clupeidae cannot take in food if nutrient-rich water does not pass over their gills. Accordingly, members of this family have been found to increase their swimming speed when they sense a high concentration of food items, taking advantage of the feeding period. Keeping a high swimming speed during periods of low food availability would not be efficient over long periods, as the fish would not net enough energy to sustain themselves and increase their fitness; raising their swimming speed only during feeding periods allows them to take in more plankton without that cost.
Taxonomy
The following genera are classified within the family:
Clupea Linnaeus, 1758
Ethmidium W. F. Thompson, 1916
Hyperlophus Ogilby, 1892
Potamalosa Ogilby, 1897
Ramnogaster Whitehead, 1965
Sprattus Girgensohn 1846
Strangomera Whitehead, 1965
The family arguably also contains the "Sundasalangidae", a paedomorphic taxon first thought to be a distinct salmoniform family, but then discovered to be deeply nested in the Clupeidae.
Until recently, the concept of Clupeidae was broader, but it has been subdivided into several distinct families (e.g., Alosidae).
Fossil genera
The following fossil genera have been variously suggested to be sensu stricto members of Clupeidae. Many were formerly placed in the subfamily Clupeinae:
?†Audenaerdia Taverne, 1973 (alternatively Clupeidae or Alosidae)
†Italoclupea Taverne, 2007
†Knightia Jordan, 1907
†Lecceclupea Taverne, 2011
†Xyne Jordan, 1921 (likely closely related to Clupea)
Disputed fossil genera
Known fossil genera classified under the sensu lato concept of Clupeidae include:
†Alisea
†Austroclupea
†Bolcaichthys
†Chasmoclupea
†Clupeidarum [otolith]
†Clupeops
†Eoalosa
†Eosardinella
†Etringus
†Ganoessus
†Ganolytes
†Gosiutichthys
†Horaclupea
†?Hypsospondylus
†Karaganops
†Marambionella
†Maicopiella
†Moldavichthys
†Paleopiquitinga
†Primisardinella
†Pseudohilsa
†Quisque
†Rupelia
†Sarmatella (=†Illusionella)
†Trollichthys
†Waihaoclupea
†Wisslerius
†Xenophanis
†Xyrinius
| Biology and health sciences | Clupeiformes | Animals |
48872 | https://en.wikipedia.org/wiki/Scombridae | Scombridae | The mackerel, tuna, and bonito family, Scombridae, includes many of the most important and familiar food fishes. The family consists of 51 species in 15 genera and two subfamilies. All species are in the subfamily Scombrinae, except the butterfly kingfish, which is the sole member of subfamily Gasterochismatinae.
Scombrids have two dorsal fins and a series of finlets behind the rear dorsal fin and anal fin. The caudal fin is strongly divided and rigid, with a slender, ridged base. The first (spiny) dorsal fin and the pelvic fins are normally retracted into body grooves. Species lengths vary from the of the island mackerel to the recorded for the immense Atlantic bluefin tuna.
Scombrids are generally predators of the open ocean, and are found worldwide in tropical and temperate waters. They are capable of considerable speed, due to a highly streamlined body and retractable fins. Some members of the family, in particular the tunas, are notable for being partially endothermic (warm-blooded), a feature that also helps them to maintain high speed and activity. Other adaptations include a large amount of red muscle, allowing them to maintain activity over long periods. Scombrids like the yellowfin tuna can reach speeds of 22 km/h (14 mph).
Classification
Jordan, Evermann, and Clark (1930) divided these fishes into four families: Cybiidae, Katsuwonidae, Scombridae, and Thunnidae, but taxonomists later classified them all into a single family, the Scombridae.
The World Wildlife Fund and the Zoological Society of London jointly issued their "Living Blue Planet Report" on 16 September 2015, which states that a dramatic fall of 74% occurred in worldwide stocks of scombrid fishes between 1970 and 2010, and that the global overall "population sizes of mammals, birds, reptiles, amphibians and fish fell by half on average in just 40 years".
Extant genera
The 51 extant species are in 15 genera and two subfamilies – with the subfamily Scombrinae further grouped into four tribes, as:
Family Scombridae
Subfamily Gasterochismatinae
Genus Gasterochisma
Subfamily Scombrinae
Tribe Scombrini – mackerels
Genus Rastrelliger
Genus Scomber
Tribe Scomberomorini – Spanish mackerels
Genus Acanthocybium
Genus Grammatorcynus
Genus Orcynopsis
Genus Scomberomorus
Tribe Sardini – bonitos
Genus Sarda
Genus Cybiosarda
Genus Gymnosarda
Tribe Thunnini – tunas
Genus Allothunnus
Genus Auxis
Genus Euthynnus
Genus Katsuwonus
Genus Thunnus
Fossil genera
The following fossil genera are known:
Genus †Aramichthys (fossil; middle Eocene of Syria)
Genus †Eoscomber (fossil; early Eocene of Senegal)
Genus †Eoscombrus (fossil; late Eocene of California)
Genus †Godsilia (fossil; early Eocene of Italy)
Genus †Landanichthys (fossil; middle Paleocene of Angola)
Genus †Palaeocybium (fossil; Eocene to Oligocene of the United States and parts of Europe)
Genus †Pseudauxides (fossil; early Eocene of Italy)
Genus †Scombrinus (fossil; early Eocene of England)
Genus †Thunnoscomberoides (fossil; early Eocene of Italy)
Genus †Wetherellus (fossil; early Eocene of England)
Subfamily Scombrinae
Tribe †Eocoelopomini
Genus †Eocoelopoma (early Eocene of England & Turkmenistan)
Genus †Palaeothunnus (early Eocene of Turkmenistan)
Genus †Micrornatus (early Eocene of England)
Tribe Scomberomorini
Genus †Neocybium (Late Eocene of Kazakhstan, Early Oligocene of Germany & Georgia)
Tribe Scombrini
Genus †Auxides (early Eocene of Senegal, Turkmenistan, and much of Europe)
| Biology and health sciences | Acanthomorpha | Animals |
48873 | https://en.wikipedia.org/wiki/Serranidae | Serranidae | Serranidae is a large family of fishes belonging to the order Perciformes. The family contains about 450 species in 65 genera, including the sea basses and the groupers (subfamily Epinephelinae). Although many species are small, in some cases less than , the giant grouper (Epinephelus lanceolatus) is one of the largest bony fishes in the world, growing to in length and in weight. Representatives of this group live in tropical and subtropical seas worldwide.
Characteristics
Many serranid species are brightly colored, and many of the larger species are caught commercially for food. They are usually found over reefs, in tropical to subtropical waters along the coasts. Serranids are generally robust in form, with large mouths and small spines on the gill coverings. They typically have several rows of sharp teeth, usually with a pair of particularly large, canine-like teeth projecting from the lower jaw.
All serranids are carnivorous. Although some species, especially in the Anthiadinae subfamily, only feed on zooplankton, the majority feed on fish and crustaceans. They are typically ambush predators, hiding in cover on the reef and darting out to grab passing prey. Their bright colours are most likely a form of disruptive camouflage, similar to the stripes of a tiger.
Many species are protogynous hermaphrodites, meaning they start out as females and change sex to male later in life. They produce large quantities of eggs and their larvae are planktonic, generally at the mercy of ocean currents until they are ready to settle into adult populations.
Like other fish, serranids harbour parasites, including nematodes, cestodes, digeneans, monogeneans, isopods, and copepods. A study conducted in New Caledonia has shown that coral reef-associated serranids harbour about 10 species of parasites per fish species.
Classification
In recent times, this family has been proposed to be split. The two hypothetical families emerging from the remains of the possibly obsolete taxon are the families Epinephelidae and Anthiadidae. This taxonomic separation is recognized by some authorities, including the IUCN.
Recent molecular classifications challenge the validity of the genera Cromileptes (sometimes spelled Chromileptes) and Anyperodon. Each of these two genera has a single species, which were included in the same clade as species of Epinephelus in a study based on five different genes.
The subfamilies and genera are as follows:
| Biology and health sciences | Acanthomorpha | null |
48878 | https://en.wikipedia.org/wiki/Salmonidae | Salmonidae | Salmonidae is a family of ray-finned fish that constitutes the only currently extant family in the order Salmoniformes (lit. "salmon-shaped"), consisting of 11 extant genera and over 200 species collectively known as "salmonids" or "salmonoids". The family includes salmon (both Atlantic and Pacific species), trout (both ocean-going and landlocked), char, graylings, freshwater whitefishes, taimens and lenoks, all coldwater mid-level predatory fish that inhabit the subarctic and cool temperate waters of the Northern Hemisphere. The Atlantic salmon (Salmo salar), whose Latin name became that of its genus Salmo, is also the eponym of the family and order names.
Salmonids have a relatively primitive appearance among teleost fish, with the pelvic fins being placed far back, and an adipose fin towards the rear of the back. They have slender bodies with rounded scales and forked tail fins, and their mouths contain a single row of sharp teeth. Although the smallest salmonid species is just long for adults, most salmonids are much larger, with the largest reaching .
All salmonids are migratory fish that spawn in the shallow gravel beds of freshwater headstreams, spend the growing juvenile years in rivers, creeks, small lakes and wetlands, but migrate downstream upon maturity and spend most of their adult lives in much larger waterbodies. Many salmonid species are euryhaline and migrate to the sea or brackish estuaries as soon as they approach adulthood, returning to the upper streams only to reproduce. Such a sea-run life cycle is described as anadromous, and other freshwater salmonids that migrate purely between lakes and rivers are considered potamodromous. Salmonids are carnivorous predators of the middle food chain, feeding on smaller fish, crustaceans, aquatic insects and larvae, tadpoles and sometimes fish eggs (even those of their own kind), and in turn being preyed upon by larger predators. Many species of salmonids are thus considered keystone organisms important for both freshwater and terrestrial ecosystems due to the biomass transfer provided by their mass migration from oceanic to inland waterbodies.
Evolution
Current salmonids comprise three main clades taxonomically treated as subfamilies: Coregoninae (freshwater whitefishes), Thymallinae (graylings), and Salmoninae (trout, salmon, char, taimens and lenoks). Generally, all three lineages are accepted to allocate a suite of derived traits indicating a monophyletic group.
The order Salmoniformes first appeared during the Santonian and Campanian stages of the Late Cretaceous, and is most closely related to pike and mudminnows in the order Esociformes, to the extent that some authors have grouped the Esociformes within the Salmoniformes. Although it is assumed that salmon and pike diverged from one another during the Cretaceous, no definitive salmonids appear before the Eocene. The Salmonidae first appear in the fossil record in the Early Eocene with Eosalmo driftwoodensis, a stem-salmonine, which was first described from fossils found at Driftwood Creek, central British Columbia, and has been recovered from most sites in the Eocene Okanagan Highlands. This genus shares traits found in all three subfamily lineages. Hence, E. driftwoodensis is an archaic salmonid, representing an important stage in salmonid evolution. Fossil scales of coregonines are known from the Late Eocene or Early Oligocene of California.
A gap appears in the salmonine fossil record after E. driftwoodensis until about 7 million years ago (mya), in the Late Miocene, when trout-like fossils appear in Idaho, in the Clarkia Lake beds. Several of these species appear to be Oncorhynchus — the current genus for Pacific salmon and Pacific trout. The presence of these species so far inland established that Oncorhynchus was not only present in the Pacific drainages before the beginning of the Pliocene (~5–6 mya), but also that rainbow and cutthroat trout, and Pacific salmon lineages had diverged before the beginning of the Pliocene. Consequently, the split between Oncorhynchus and Salmo (Atlantic salmon and European trout) must have occurred well before the Pliocene. Suggestions have gone back as far as the Early Miocene (about 20 mya).
Genetics
Based on the most current evidence, salmonids diverged from the rest of teleost fish no later than 88 million years ago, during the late Cretaceous. This divergence was marked by a whole-genome duplication event in the ancestral salmonid, where the diploid ancestor became tetraploid. This duplication is the fourth of its kind to happen in the evolutionary lineage of the salmonids, with two having occurred commonly to all bony vertebrates, and another specifically in the teleost fishes.
Extant salmonids all show evidence of partial tetraploidy, as studies show the genome has undergone selection to regain a diploid state. Work done in the rainbow trout (Oncorhynchus mykiss) has shown that its genome is still partially tetraploid. Around half of the duplicated protein-coding genes have been deleted, but all apparent miRNA sequences still show full duplication, with potential to influence regulation of the rainbow trout's genome. This pattern of partial tetraploidy is thought to be reflected in the rest of the extant salmonids.
The first fossil species representing a true salmonid fish (E. driftwoodensis) does not appear until the middle Eocene. This fossil already displays traits associated with extant salmonids, but as the genome of E. driftwoodensis cannot be sequenced, it cannot be confirmed if polyploidy was present in this animal at this point in time. This fossil is also significantly younger than the proposed salmonid divergence from the rest of the teleost fishes, and is the earliest confirmed salmonid currently known. This means that the salmonids have a ghost lineage of approximately 33 million years.
Given a lack of earlier transition fossils, and the inability to extract genomic data from specimens other than extant species, the dating of the whole-genome duplication event in salmonids was historically a very broad categorization of times, ranging from 25 to 100 million years in age. New advances in calibrated relaxed molecular clock analyses have allowed for a closer examination of the salmonid genome, and has allowed for a more precise dating of the whole-genome duplication of the group, that places the latest possible date for the event at 88 million years ago.
This more precise dating and examination of the salmonid whole-genome duplication event has allowed more speculation on the radiation of species within the group. Historically, the whole-genome duplication event was thought to be the reason for the variation within Salmonidae. Current molecular clock analyses, however, reveal that much of the speciation of the group occurred during periods of intense climate change associated with the last ice ages, with especially high speciation rates observed in salmonids that developed an anadromous lifestyle.
Classification
Together with the closely related orders Esociformes (pikes and mudminnows), Osmeriformes (true smelts) and Argentiniformes (marine smelts and barreleyes), Salmoniformes comprise the superorder Protacanthopterygii.
The only extant family within Salmoniformes, Salmonidae, is divided into three subfamilies and around 10 genera containing about 220 species. The number of species recognised varies among researchers and authorities; the numbers presented below represent the higher estimates of diversity:
Order Salmoniformes
Family: Salmonidae
Subfamily: Coregoninae
Coregonus - whitefishes (78 species)
Prosopium - round whitefishes (6 species)
Stenodus - beloribitsa and nelma (2 species)
†Beckius (1 species, Oligocene)
†Parastenodus (1 species, Eocene)
Subfamily: Thymallinae
Thymallus - graylings (14 species)
Subfamily: Salmoninae
†Eosalmo (1 species, Eocene)
Tribe: Salmonini
Salmo - Atlantic salmon and trout (47 species)
Salvelinus - chars and trout (e.g. brook trout, lake trout) (51 species)
Salvethymus - long-finned char (1 species)
Tribe: Oncorhynchini
Brachymystax - lenoks (4 species)
Hucho - taimens (4 species)
Oncorhynchus - Pacific salmon and trout (12 species)
Parahucho - Sakhalin taimen (1 species)
Hybrid crossbreeding
The following table shows the results of hybrid crossbreeding combinations in Salmonidae.
Note: "-" indicates a cross within the identical kind; "O" indicates hybrid survivability; "X" indicates hybrid fatality.
| Biology and health sciences | Salmoniformes | null |
48885 | https://en.wikipedia.org/wiki/Scorpaeniformes | Scorpaeniformes | The Scorpaeniformes are a diverse order of ray-finned fish that includes the lionfishes and sculpins; the order has also been called the Scleroparei. It is one of the five largest orders of bony fishes by number of species, with over 1,320.
They are known as "mail-cheeked" fishes due to their distinguishing characteristic, the suborbital stay: a backwards extension of the third circumorbital bone (part of the lateral head/cheek skeleton, below the eye socket) across the cheek to the preoperculum, to which it is connected in most species.
Scorpaeniform fishes are carnivorous, mostly feeding on crustaceans and on smaller fish. Most species live on the sea bottom in relatively shallow waters, although species are known from deep water, from the midwater, and even from fresh water. They typically have spiny heads, and rounded pectoral and caudal fins. Most species are less than in length, but the full size range of the order varies from the velvetfishes belonging to the family Aploactinidae, which can be just long as adults, to the skilfish (Erilepis zonifer), which can reach in total length.
One of the suborders of the Scorpaeniformes is the Scorpaenoidei. This suborder is usually found in the benthic zone, the lowest region of a body of water such as an ocean or lake.
The Scorpaenoidei comprises two groups. The first is the sea robins, which are further classified into two families: the sea robins proper and the armored sea robins. One significant difference between the two families is the presence of spine-bearing plates on the armored sea robins, which are absent in the sea robins proper.
The second group of the Scorpaenoidei is the scorpionfishes, which, according to Minouri Ishida's work in 1994 and more recent studies, comprise twelve families. The scorpionfishes vary greatly in size, with the smallest species measuring 2–3 cm and the largest reaching approximately 100 cm in length.
Classification
The division of Scorpaeniformes into families is not settled; accounts range from 26 to 35 families. The 5th edition of Fishes of the World classifies the order as follows:
Order Scorpaeniformes
Suborder Scorpaenoidei
Superfamily Congiopodoidea
Family Aploactinidae Jordan & Starks, 1904 (Velvetfishes)
Family Congiopodidae Gill, 1889 (Racehorses, pigfishes or horsefishes)
Superfamily Pataecoidea
Family Pataecidae Gill, 1872 (Australian prowfishes)
Family Gnathanacanthidae Gill, 1892 (Red velvetfish)
Superfamily Scorpaenoidea
Family Eschmeyeridae Mandrytsa, 2001 (the cofish)
Family Scorpaenidae Risso, 1827 (Scorpionfishes)
Suborder Platycephaloidei
Superfamily Platycephaloidea
Family Bembridae Kaup, 1873 (Deepwater flatheads)
Family Platycephalidae Swainson, 1839 (True flatheads)
Family Hoplichthyidae Kaup, 1873 (Ghost flatheads)
Superfamily Trigloidea
Family Triglidae Rafinesque, 1815 (Common searobins)
Family Peristediidae Jordan & Gilbert, 1883 (Armored searobins)
Suborder Normanichthyiodei
Family Normanichthyidae Clark, 1937 (the Barehead scorpionfish or mote sculpin)
Suborder Zoarcoidei
Superfamily Anarhichadoidea
Family Anarhichadidae Bonaparte, 1835 (Wolffishes)
Family Cryptacanthodidae Gill, 1861 (Wrymouths)
Family Stichaeidae Gill, 1864 (Pricklebacks)
Family Pholidae Gill, 1893 (Gunnels)
Superfamily Bathymasteroidea
Family Bathymasteridae Jordan & Gilbert, 1883 (Ronquils)
Family Ptilichthyidae Jordan & Gilbert, 1883 (Quillfish)
Superfamily Zoarcoidea
Family Eulophiidae H. M. Smith, 1902 (Spinous eelpouts)
Family Zoarcidae Swainson, 1839 (True eelpouts)
Superfamily Zaproroidea
Family Scytalinidae Jordan & Starks, 1895 (Graveldivers)
Family Zaproridae Jordan, 1896 (Prowfishes)
Suborder Gasterosteoidei
Family Hypoptychidae Steindachner, 1880 (the Korean Sandlance)
Family Aulorhynchidae Gill, 1861 (Tubesnouts)
Family Gasterosteidae Bonaparte, 1831 (Sticklebacks)
Suborder Cottoidei
Superfamily Anoplopomatoidea Quast, 1965
Family Anoplopomatidae Jordan & Gilbert, 1883 (Blackcod)
Superfamily Zaniolepidoidea Shinohara, 1994
Family Zaniolepididae Jordan & Gilbert, 1883 (Combfishes)
Superfamily Hexagrammoidea Gill, 1889
Family Hexagrammidae Jordan, 1888 (Greenlings)
Superfamily Trichodontoidea Nazarkin & Voskoboinikova, 2000
Family Trichodontidae Bleeker, 1859 (Sandfishes)
Superfamily Cottoidea Gill, 1889
Family Jordaniidae Jordan & Evermann, 1898 (Longfin sculpins)
Family Rhamphocottidae Jordan & Gilbert, 1883 (Grunt sculpins)
Family Scorpaenichthyidae Jordan & Evermann, 1898
Family Agonidae Swainson, 1839 (Poachers and searavens)
Family Cottidae Bonaparte, 1831 (Sculpins)
Family Psychrolutidae Günther, 1861 (Bighead sculpins)
Family Bathylutichthyidae Balushkin & Voskoboinikova, 1990 (Antarctic sculpins)
Superfamily Cyclopteroidea Gill, 1873
Family Cyclopteridae Bonaparte, 1831 (lumpfishes or lumpsuckers)
Family Liparidae Gill, 1861 (Snailfishes)
This classification is not settled, however, and some authorities classify these groupings largely within the order Perciformes as the suborders Scorpaenoidei, Platycephaloidei, Trigloidei and Cottoidei, with Cottoidei including the infraorders Anoplopomatales, Zoarcales, Gasterosteales, Zaniolepidoales, Hexagrammales and Cottales. These infraorders largely correspond with the superfamilies of the Cottoidei set out in the 5th edition of Fishes of the World.
Timeline of genera
| Biology and health sciences | Fishes | null |
48896 | https://en.wikipedia.org/wiki/Lyrebird | Lyrebird | A lyrebird is either of two species of ground-dwelling Australian birds that compose the genus Menura, and the family Menuridae. They are most notable for their impressive ability to mimic natural and artificial sounds from their environment, and the striking beauty of the male bird's huge tail when it is fanned out in courtship display. Lyrebirds have unique plumes of neutral-coloured tailfeathers and are among Australia's best-known native birds.
Taxonomy
The classification of lyrebirds was the subject of much debate after the first specimens reached European scientists after 1798. Based on specimens sent from New South Wales to England, Major-General Thomas Davies illustrated and described this species as the superb lyrebird, which he called Menura superba, in an 1800 presentation to the Linnean Society of London, but this work was not published until 1802; in the intervening time period, however, the species was described and named Menura novaehollandiae by John Latham in 1801, and this is the accepted name by virtue of nomenclatural priority.
The genus name Menura refers to the pattern of repeated transparent crescents (or "lunules") on the superb lyrebird's outer tail-feathers, from the Ancient Greek words mēnē "moon" and ourá "tail".
Lyrebirds are so named because their outer tail feathers are broad and curved in an S shape, and together they resemble the shape of a lyre.
Systematics
Lyrebirds were once thought to be Galliformes, like the broadly similar-looking partridges, junglefowl, and pheasants familiar to Europeans, a view reflected in the early names given to the superb lyrebird, including native pheasant. They were also called peacock-wrens and Australian birds-of-paradise. The idea that they were related to the pheasants was abandoned when the first chicks, which are altricial, were described. They were not classed with the passerines until a paper was published in 1840, twelve years after they were assigned a discrete family, Menuridae. Within that family they compose a single genus, Menura.
It is generally accepted that the lyrebird family is most closely related to the scrub-birds (Atrichornithidae) and some authorities combine both in a single family, but evidence that they are also related to the bowerbirds remains controversial.
Lyrebirds are ancient Australian animals: the Australian Museum has fossils of lyrebirds dating back to about 15 million years ago. The prehistoric Menura tyawanoides has been described from Early Miocene fossils found at the famous Riversleigh site.
Species
Two species of lyrebird are extant:
Description
The lyrebirds are large passerine birds, amongst the largest in the order. They are ground-living birds with strong legs and feet and short, rounded wings. They are poor fliers and rarely fly except for periods of downhill gliding. The superb lyrebird is the larger of the two species. Lyrebirds measure 31 to 39 inches in length, including their tail. Males tend to be slightly larger than females. Females weigh around 2 pounds, and males weigh around 2.4 pounds.
Distribution and habitat
The superb lyrebird is found in areas of rainforest in Victoria, New South Wales, and south-east Queensland. It is also found in Tasmania, where it was introduced in the 19th century. Many superb lyrebirds live in the Dandenong Ranges National Park and Kinglake National Park around Melbourne, the Royal National Park and Illawarra region south of Sydney, in many other parks along the east coast of Australia, and in unprotected bushland. Albert's lyrebird is found only in a small area of southern Queensland rainforest.
Behaviour and ecology
Lyrebirds are shy and difficult to approach, particularly the Albert's lyrebird, with the result that little information about its behaviour has been documented. When lyrebirds detect potential danger, they pause and scan the surroundings, sound an alarm, and either flee the area on foot, or seek cover and freeze. Firefighters sheltering in mine shafts during bushfires have been joined by lyrebirds.
Diet and feeding
Lyrebirds feed on the ground, foraging as individuals. A range of invertebrate prey is taken, including insects such as cockroaches, beetles (both adults and larvae), earwigs, fly larvae, and the adults and larvae of moths. Other prey taken includes centipedes, spiders, and earthworms. Less commonly taken prey includes stick insects, bugs, amphipods, lizards, frogs and, occasionally, seeds. They find food by scratching with their feet through the leaf litter.
Breeding
Lyrebirds are long-lived birds that can live as long as 30 years. They have long breeding cycles and start breeding later in life than other passerine birds. Female superb lyrebirds start breeding at the age of five or six, and males at the age of six to eight. Males defend territories from other males, and those territories may contain the breeding territories of up to eight females. Within the male territories, the males create or use display platforms; for the superb lyrebird, this is a mound of bare soil; for the Albert's lyrebird, it is a pile of twigs on the forest floor.
Male lyrebirds call mostly during winter, when they construct and maintain an open arena-mound in dense bush, on which they sing and dance in an elaborate courtship display performed for potential mates, of which the male lyrebird has several. The strength, volume, and location of the nest built by the female lyrebird depend on the rainfall and predation during the nest-building period. It is important for the nest to be water-resistant and hidden in a secluded area so predators cannot attack. Once the nest is made in the preferred location, the female lyrebird lays a single egg. The egg is incubated for about 50 days solely by the female, and the female also rears the chick alone.
Vocalizations and mimicry
A lyrebird's song is one of the more distinctive aspects of its behavioural biology. Lyrebirds sing throughout the year, but the peak of the breeding season, from June to August, is when they sing with the most intensity. During this peak males may sing for four hours of the day, almost half the hours of daylight. The song of the lyrebird is a mixture of elements of its own song and mimicry of other species. Lyrebirds render with great fidelity the individual songs of other birds and the chatter of flocks of birds, and also mimic other animals such as possums, koalas and dingoes. Lyrebirds have been recorded mimicking human sounds such as a mill whistle, a cross-cut saw, chainsaws, car engines and car alarms, fire alarms, rifle-shots, camera shutters, dogs barking, crying babies, music, mobile phone ring tones, and even the human voice. However, while the mimicry of human noises is widely reported, the extent to which it happens is exaggerated and the phenomenon is unusual. Parts of the lyrebird's own song can resemble human-made sound effects, which has given rise to the urban legend that they frequently imitate video game or film sounds.
The superb lyrebird's mimicked calls are learned from the local environment, including from other superb lyrebirds. An instructive example is the population of superb lyrebirds in Tasmania, which have retained the calls of species not native to Tasmania in their repertoire, with some local Tasmanian endemic bird songs added. The female lyrebirds of both species are also mimics capable of complex vocalisations. Superb lyrebird females are silent during courtship; however, they regularly produce sophisticated vocal displays during foraging and nest defense. A recording of a superb lyrebird mimicking sounds of an electronic shooting game, workmen and chainsaws was added to the National Film and Sound Archive's Sounds of Australia registry in 2013.
Both species of lyrebird produce elaborate, lyrebird-specific vocalisations, including 'whistle songs'. Males also sing songs specifically associated with their song-and-dance displays.
One researcher, Sydney Curtis, has recorded flute-like lyrebird calls in the vicinity of the New England National Park. Similarly, in 1969, a park ranger, Neville Fenton, recorded a lyrebird song which resembled flute sounds in the New England National Park, near Dorrigo in northern coastal New South Wales. After much detective work by Fenton, it was discovered that in the 1930s a flute player living on a farm adjoining the park used to play tunes near his pet lyrebird. The lyrebird adopted the tunes into his repertoire and retained them after release into the park. Neville Fenton forwarded a tape of his recording to Norman Robinson. Because a lyrebird is able to carry two tunes at the same time, Robinson filtered out one of the tunes and put it on a phonograph for the purposes of analysis. One witness suggested that the song represented modified versions of two tunes popular in the 1930s: "The Keel Row" and "Mosquito's Dance". Musicologist David Rothenberg has endorsed this account. However, a "flute lyrebird" research group (including Curtis and Fenton) formed to investigate the veracity of this story found no evidence of "Mosquito's Dance" and only remnants of "Keel Row" in contemporary and historical lyrebird recordings from this area. Neither were they able to prove that a lyrebird chick had been a pet, although they acknowledged compelling evidence on both sides of the argument.
Status and conservation
Until the 2019–2020 Australian bushfire season, superb lyrebirds were not considered threatened in the short to medium term. Concern has since grown as early analyses have shown the extent of destruction of the lyrebird's preferred wet-forest habitats, which in less intense previous bushfire seasons have been spared, in large part due to their moisture content. Albert's lyrebird has a very restricted habitat and had been listed as vulnerable by the IUCN, but because the species and its habitat were carefully managed, the species was re-assessed to near threatened in 2009. The superb lyrebird had already been seriously threatened by habitat destruction in the past. Its population had since recovered, but the 2019–2020 bushfires damaged much of its habitat, which may lead to a reclassification of its status from "common" to "threatened". Beyond this new threat are the long-term vulnerabilities to predation by cats and foxes, as well as human population pressure on its habitat.
In culture
Painting by John Gould
The lyrebird is so called because the male bird has a spectacular tail, consisting of 16 highly modified feathers (two long slender lyrates at the centre of the plume, two broader medians on the outside edges and twelve filamentaries arrayed between them), which was originally thought to resemble a lyre. This happened when a superb lyrebird specimen (which had been taken from Australia to England during the early 19th century) was prepared for display at the British Museum by a taxidermist who had never seen a live lyrebird. The taxidermist mistakenly thought that the tail would resemble a lyre, and that the tail would be held in a similar way to that of a peacock during courtship display, and so he arranged the feathers in this way. Later, John Gould (who had also never seen a live lyrebird), painted the lyrebird from the British Museum specimen.
The male lyrebird's tail is not held as in John Gould's painting. Instead, the male lyrebird's tail is fanned over the lyrebird during courtship display, with the tail completely covering his head and back—as can be seen in the image in the "breeding" section of this page, and also the image of the 10-cent coin, where the superb lyrebird's tail (in courtship display) is portrayed accurately.
Lyrebird emblems and logos
The lyrebird has been featured as a symbol and emblem many times, especially in New South Wales and Victoria (where the superb lyrebird has its natural habitat), and in Queensland (where Albert's lyrebird has its natural habitat).
A male superb lyrebird is featured on the reverse of the Australian 10-cent coin.
A superb lyrebird featured on the Australian one shilling postage stamp first issued in 1932.
A stylised superb lyrebird appears in the transparent window of the Australian 100 dollar note.
A silhouette of a male superb lyrebird is the logo of the Australian Film Commission.
An illustration of a male superb lyrebird, in courtship display, is the emblem of the New South Wales National Parks and Wildlife Service.
The pattern on the curtains of the Victorian State Theatre is the image of a male superb lyrebird, in courtship display, as viewed from the front.
A stylised illustration of a male Albert's lyrebird was the logo of the Queensland Conservatorium of Music, before the Conservatorium became part of Griffith University. In the logo, the top part of the lyrebird's tail became a music stave.
Australian band You Am I's 2008 album Dilettantes and its first single, "Erasmus", feature a drawing of a lyrebird by artist Ken Taylor.
A stylised illustration of part of a male superb lyrebird's tail is the logo for the Lyrebird Arts Council of Victoria.
The lyrebird is also featured atop the crest of Panhellenic Sorority Alpha Chi Omega, whose symbol is the lyre.
There are many other companies with the name of Lyrebird, and these also have lyrebird logos.
"Land of the Lyrebird" is an alternative name for the Strzelecki Ranges in the Gippsland region of Victoria.
A silhouetted male superb lyrebird in courtship display features in the masthead of The Betoota Advocate.
| Biology and health sciences | Corvoidea | null |
48900 | https://en.wikipedia.org/wiki/Atomic%20radius | Atomic radius | The atomic radius of a chemical element is a measure of the size of its atom, usually the mean or typical distance from the center of the nucleus to the outermost isolated electron. Since the boundary is not a well-defined physical entity, there are various non-equivalent definitions of atomic radius. Four widely used definitions of atomic radius are: Van der Waals radius, ionic radius, metallic radius and covalent radius. Typically, because of the difficulty of isolating atoms in order to measure their radii separately, atomic radius is measured in a chemically bonded state; however, theoretical calculations are simpler when considering atoms in isolation. The dependencies on environment, probe, and state lead to a multiplicity of definitions.
Depending on the definition, the term may apply to atoms in condensed matter, covalently bonding in molecules, or in ionized and excited states; and its value may be obtained through experimental measurements, or computed from theoretical models. The value of the radius may depend on the atom's state and context.
Electrons do not have definite orbits nor sharply defined ranges. Rather, their positions must be described as probability distributions that taper off gradually as one moves away from the nucleus, without a sharp cutoff; these are referred to as atomic orbitals or electron clouds. Moreover, in condensed matter and molecules, the electron clouds of the atoms usually overlap to some extent, and some of the electrons may roam over a large region encompassing two or more atoms.
Under most definitions the radii of isolated neutral atoms range between 30 and 300 pm (trillionths of a meter), or between 0.3 and 3 ångströms. Therefore, the radius of an atom is more than 10,000 times the radius of its nucleus (1–10 fm), and less than 1/1000 of the wavelength of visible light (400–700 nm).
For many purposes, atoms can be modeled as spheres. This is only a crude approximation, but it can provide quantitative explanations and predictions for many phenomena, such as the density of liquids and solids, the diffusion of fluids through molecular sieves, the arrangement of atoms and ions in crystals, and the size and shape of molecules.
History
The concept of atomic radius was preceded in the 19th century by the concept of atomic volume, a relative measure of how much space an atom would, on average, occupy in a given solid or liquid material. By the end of the century this term was also used in an absolute sense, as the molar volume divided by the Avogadro constant. Such a volume differs between crystalline forms even of the same compound, but physicists used it for rough, order-of-magnitude estimates of atomic size, obtaining 10−8–10−7 cm for copper.
The earliest estimates of atomic size were made by opticians in the 1830s, particularly Cauchy, who developed models of light dispersion assuming a lattice of connected "molecules". In 1857 Clausius developed a gas-kinetic model which included the equation for the mean free path. In the 1870s this was used to estimate the sizes of gas molecules, alongside comparisons with the wavelength of visible light and an estimate based on the thickness of a soap-bubble film at which its contractile force rapidly diminishes. By 1900, various estimates of the diameter of the mercury atom averaged around 275±20 pm (modern estimates give 300±10 pm, see below).
In 1920, shortly after it had become possible to determine the sizes of atoms using X-ray crystallography, it was suggested that all atoms of the same element have the same radii. However, in 1923, when more crystal data had become available, it was found that the approximation of an atom as a sphere does not necessarily hold when comparing the same atom in different crystal structures.
Definitions
Widely used definitions of atomic radius include:
Van der Waals radius: In the simplest definition, half the minimum distance between the nuclei of two atoms of the element that are not otherwise bound by covalent or metallic interactions. The Van der Waals radius may be defined even for elements (such as metals) in which Van der Waals forces are dominated by other interactions. Because Van der Waals interactions arise through quantum fluctuations of the atomic polarisation, the polarisability (which can usually be measured or calculated more easily) may be used to define the Van der Waals radius indirectly.
Ionic radius: the nominal radius of the ions of an element in a specific ionization state, deduced from the spacing of atomic nuclei in crystalline salts that include that ion. In principle, the spacing between two adjacent oppositely charged ions (the length of the ionic bond between them) should equal the sum of their ionic radii.
Covalent radius: the nominal radius of the atoms of an element when covalently bound to other atoms, as deduced from the separation between the atomic nuclei in molecules. In principle, the distance between two atoms that are bound to each other in a molecule (the length of that covalent bond) should equal the sum of their covalent radii, as illustrated in the sketch after this list.
Metallic radius: the nominal radius of atoms of an element when joined to other atoms by metallic bonds.
Bohr radius: the radius of the lowest-energy electron orbit predicted by the Bohr model of the atom (1913). It is only applicable to atoms and ions with a single electron, such as hydrogen, singly ionized helium, and positronium. Although the model itself is now obsolete, the Bohr radius for the hydrogen atom is still regarded as an important physical constant, because it is equivalent to the quantum-mechanical most probable distance of the electron from the nucleus.
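To make the additivity of covalent radii concrete, here is a minimal Python sketch. The radii in the table are illustrative single-bond values of the kind tabulated in the literature, stated here as assumptions rather than taken from this article:

```python
# Illustrative single-bond covalent radii in picometres (assumed values
# for demonstration; published tables differ slightly).
COVALENT_RADIUS_PM = {"H": 31, "C": 76, "N": 71, "O": 66, "Cl": 102}

def bond_length_pm(a: str, b: str) -> int:
    """Estimate a single-bond length as the sum of the two covalent radii."""
    return COVALENT_RADIUS_PM[a] + COVALENT_RADIUS_PM[b]

print(bond_length_pm("C", "C"))   # 152 pm; the measured C-C bond in ethane is ~154 pm
print(bond_length_pm("C", "Cl"))  # 178 pm; the measured C-Cl bond in CCl4 is ~177 pm
```

The close agreement with measured bond lengths is what makes tabulated covalent radii useful, although multiple bonds and electronegativity differences shift real bond lengths away from the simple sum.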
Empirically measured atomic radius
The following table shows empirically measured covalent radii for the elements, as published by J. C. Slater in 1964. The values are in picometers (pm or 1×10−12 m), with an accuracy of about 5 pm. The shade of the box ranges from red to yellow as the radius increases; gray indicates lack of data.
Explanation of the general trends
Electrons in atoms fill electron shells from the lowest available energy level. As a consequence of the Aufbau principle, each new period begins with the first two elements filling the next unoccupied s-orbital. Because an atom's s-orbital electrons are typically farthest from the nucleus, this results in a significant increase in atomic radius with the first elements of each period.
The atomic radius of each element generally decreases across each period due to an increasing number of protons, since an increase in the number of protons increases the attractive force acting on the atom's electrons. The greater attraction draws the electrons closer to the protons, decreasing the size of the atom. Down each group, the atomic radius of each element typically increases because there are more occupied electron energy levels and therefore a greater distance between protons and electrons.
The increasing nuclear charge is partly counterbalanced by the increasing number of electrons—a phenomenon that is known as shielding—which explains why the size of atoms usually increases down each column despite an increase in attractive force from the nucleus. Electron shielding causes the attraction of an atom's nucleus on its electrons to decrease, so electrons occupying higher energy states farther from the nucleus experience reduced attractive force, increasing the size of the atom. However, elements in the 5d-block (lutetium to mercury) are much smaller than this trend predicts due to the weak shielding of the 4f-subshell. This phenomenon is known as the lanthanide contraction. A similar phenomenon exists for actinides; however, the general instability of transuranic elements makes measurements for the remainder of the 5f-block difficult and for transactinides nearly impossible. Finally, for sufficiently heavy elements, the atomic radius may be decreased by relativistic effects. This is a consequence of electrons near the strongly charged nucleus traveling at a sufficient fraction of the speed of light to gain a nontrivial amount of mass.
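The effect of shielding on these trends can be made semi-quantitative with Slater's empirical rules, which estimate the effective nuclear charge Zeff = Z − S felt by an electron. The following Python sketch is a simplified illustration for an outermost s/p electron only; Slater's rules are a textbook approximation introduced here for illustration, not part of this article:

```python
def zeff_valence_sp(Z, shells):
    """Effective nuclear charge for an outermost s/p electron, using
    Slater's rules simplified to s/p groups.

    `shells` maps principal quantum number n to the number of electrons
    in that shell, e.g. sodium (Z=11) is {1: 2, 2: 8, 3: 1}.
    """
    n = max(shells)
    shielding = 0.35 * (shells[n] - 1)           # other same-group electrons
    shielding += 0.85 * shells.get(n - 1, 0)     # electrons in the (n-1) shell
    shielding += 1.00 * sum(c for m, c in shells.items() if m <= n - 2)
    return round(Z - shielding, 2)

# Z_eff rises across a period, pulling the electron cloud inward...
print(zeff_valence_sp(11, {1: 2, 2: 8, 3: 1}))        # Na: 2.2
print(zeff_valence_sp(17, {1: 2, 2: 8, 3: 7}))        # Cl: 6.1
# ...but stays nearly constant down a group, while n (and the radius) grows.
print(zeff_valence_sp(19, {1: 2, 2: 8, 3: 8, 4: 1}))  # K: 2.2
```

The near-identical Zeff for sodium and potassium, combined with potassium's extra occupied shell, is the shielding argument of the preceding paragraphs in miniature.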
The following table summarizes the main phenomena that influence the atomic radius of an element:
Lanthanide contraction
The electrons in the 4f-subshell, which is progressively filled from lanthanum (Z = 57) to ytterbium (Z = 70), are not particularly effective at shielding the increasing nuclear charge from the sub-shells further out. The elements immediately following the lanthanides have atomic radii which are smaller than would be expected and which are almost identical to the atomic radii of the elements immediately above them. Hence lutetium is in fact slightly smaller than yttrium, hafnium has virtually the same atomic radius (and chemistry) as zirconium, and tantalum has an atomic radius similar to niobium, and so forth. The effect of the lanthanide contraction is noticeable up to platinum (Z = 78), after which it is masked by a relativistic effect known as the inert-pair effect.
Due to the lanthanide contraction, the following five observations can be drawn:
The size of Ln3+ ions regularly decreases with atomic number. According to Fajans' rules, the decrease in size of Ln3+ ions increases the covalent character and decreases the basic character of the bond between Ln3+ and OH− ions in Ln(OH)3, to the point that Yb(OH)3 and Lu(OH)3 dissolve only with difficulty in hot concentrated NaOH. Hence the order of size of Ln3+ ions is: La3+ > Ce3+ > ... > Lu3+.
There is a regular decrease in their ionic radii.
There is a regular decrease in their tendency to act as a reducing agent, with an increase in atomic number.
The second and third rows of d-block transition elements are quite close in properties.
Consequently, these elements occur together in natural minerals and are difficult to separate.
d-block contraction
The d-block contraction is less pronounced than the lanthanide contraction but arises from a similar cause. In this case, it is the poor shielding capacity of the 3d-electrons which affects the atomic radii and chemistries of the elements immediately following the first row of the transition metals, from gallium (Z = 31) to bromine (Z = 35).
Calculated atomic radius
The following table shows atomic radii computed from theoretical models, as published by Enrico Clementi and others in 1967. The values are in picometres (pm).
| Physical sciences | Periodic table | Chemistry |
48902 | https://en.wikipedia.org/wiki/Kilogram%20per%20cubic%20metre | Kilogram per cubic metre | The kilogram per cubic metre (symbol: kg·m−3, or kg/m3) is the unit of density in the International System of Units (SI). It is defined by dividing the SI unit of mass, the kilogram, by the SI unit of volume, the cubic metre.
Conversions
1 kg/m3 = 1 g/L (exactly)
1 kg/m3 = 0.001 g/cm3 (exactly)
1 kg/m3 ≈ 0.06243 lb/ft3 (approximately)
1 kg/m3 ≈ 0.1335 oz/US gal (approximately)
1 kg/m3 ≈ 0.1604 oz/imp gal (approximately)
1 g/cm3 = 1000 kg/m3 (exactly)
1 lb/ft3 ≈ 16.02 kg/m3 (approximately)
1 oz/(US gal) ≈ 7.489 kg/m3 (approximately)
1 oz/(imp gal) ≈ 6.236 kg/m3 (approximately)
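The approximate factors above follow from the exact definitions of the pound, ounce, foot, and gallon. The short Python sketch below reproduces them; the constants are the standard exact definitions of those units, stated here for completeness:

```python
# Exact definitions of the customary units involved (SI values).
LB_KG = 0.45359237          # 1 avoirdupois pound in kilograms (exact)
OZ_KG = LB_KG / 16          # 1 avoirdupois ounce in kilograms
FT_M = 0.3048               # 1 foot in metres (exact)
US_GAL_M3 = 3.785411784e-3  # 1 US gallon in cubic metres (exact)
IMP_GAL_M3 = 4.54609e-3     # 1 imperial gallon in cubic metres (exact)

# One unit of each customary density, expressed in kg/m3:
print(LB_KG / FT_M**3)       # 1 lb/ft3     ~ 16.02 kg/m3
print(OZ_KG / US_GAL_M3)     # 1 oz/US gal  ~ 7.489 kg/m3
print(OZ_KG / IMP_GAL_M3)    # 1 oz/imp gal ~ 6.236 kg/m3

# ...and the inverse factors quoted above:
print(FT_M**3 / LB_KG)       # 1 kg/m3 ~ 0.06243 lb/ft3
print(US_GAL_M3 / OZ_KG)     # 1 kg/m3 ~ 0.1335 oz/US gal
print(IMP_GAL_M3 / OZ_KG)    # 1 kg/m3 ~ 0.1604 oz/imp gal
```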
Relation to other measures
The density of water is about 1000 kg/m3 or 1 g/cm3, because the size of the gram was originally based on the mass of a cubic centimetre of water.
In chemistry, g/cm3 is more commonly used.
| Physical sciences | Density | Basics and measurement |
48903 | https://en.wikipedia.org/wiki/Nucleosynthesis | Nucleosynthesis | Nucleosynthesis is the process that creates new atomic nuclei from pre-existing nucleons (protons and neutrons) and nuclei. According to current theories, the first nuclei were formed a few minutes after the Big Bang, through nuclear reactions in a process called Big Bang nucleosynthesis. After about 20 minutes, the universe had expanded and cooled to a point at which these high-energy collisions among nucleons ended, so only the fastest and simplest reactions occurred, leaving our universe containing hydrogen and helium. The rest is traces of other elements such as lithium and the hydrogen isotope deuterium. Nucleosynthesis in stars and their explosions later produced the variety of elements and isotopes that we have today, in a process called cosmic chemical evolution. The amount of total mass in elements heavier than hydrogen and helium (called 'metals' by astrophysicists) remains small (a few percent), so that the universe still has approximately the same composition.
Stars fuse light elements to heavier ones in their cores, giving off energy in the process known as stellar nucleosynthesis. Nuclear fusion reactions create many of the lighter elements, up to and including iron and nickel in the most massive stars. Products of stellar nucleosynthesis remain trapped in stellar cores and remnants except if ejected through stellar winds and explosions. The neutron capture reactions of the r-process and s-process create heavier elements, from iron upwards.
Supernova nucleosynthesis within exploding stars is largely responsible for the elements between oxygen and rubidium: from the ejection of elements produced during stellar nucleosynthesis; through explosive nucleosynthesis during the supernova explosion; and from the r-process (absorption of multiple neutrons) during the explosion.
Neutron star mergers are a recently discovered major source of elements produced in the r-process. When two neutron stars collide, a significant amount of neutron-rich matter may be ejected which then quickly forms heavy elements.
Cosmic ray spallation is a process wherein cosmic rays impact nuclei and fragment them. It is a significant source of the lighter nuclei, particularly 3He, 9Be and 10,11B, that are not created by stellar nucleosynthesis. Cosmic ray spallation can occur in the interstellar medium, on asteroids and meteoroids, or on Earth in the atmosphere or in the ground.
This contributes to the presence on Earth of cosmogenic nuclides.
On Earth new nuclei are also produced by radiogenesis, the decay of long-lived, primordial radionuclides such as uranium, thorium, and potassium-40.
History
Timeline
It is thought that the primordial nucleons themselves were formed from the quark–gluon plasma around 13.8 billion years ago during the Big Bang as it cooled below two trillion degrees. A few minutes afterwards, starting with only protons and neutrons, nuclei up to lithium and beryllium (both with mass number 7) were formed, but hardly any other elements. Some boron may have been formed at this time, but the process stopped before significant carbon could be formed, as this element requires a far higher product of helium density and time than were present in the short nucleosynthesis period of the Big Bang. That fusion process essentially shut down at about 20 minutes, due to drops in temperature and density as the universe continued to expand. This first process, Big Bang nucleosynthesis, was the first type of nucleogenesis to occur in the universe, creating the so-called primordial elements.
A star formed in the early universe produces heavier elements by combining its lighter nuclei (hydrogen, helium, lithium, beryllium, and boron) which were found in the initial composition of the interstellar medium and hence the star. Interstellar gas therefore contains declining abundances of these light elements, which are present only by virtue of their nucleosynthesis during the Big Bang, and also cosmic ray spallation. These lighter elements in the present universe are therefore thought to have been produced through thousands of millions of years of cosmic ray (mostly high-energy proton) mediated breakup of heavier elements in interstellar gas and dust. The fragments of these cosmic-ray collisions include helium-3 and the stable isotopes of the light elements lithium, beryllium, and boron. Carbon was not made in the Big Bang, but was produced later in larger stars via the triple-alpha process.
The subsequent nucleosynthesis of heavier elements (Z ≥ 6, carbon and heavier elements) requires the extreme temperatures and pressures found within stars and supernovae. These processes began as hydrogen and helium from the Big Bang collapsed into the first stars after about 500 million years. Star formation has been occurring continuously in galaxies since that time. The primordial nuclides were created by Big Bang nucleosynthesis, stellar nucleosynthesis, supernova nucleosynthesis, and by nucleosynthesis in exotic events such as neutron star collisions. Other nuclides, such as 40Ar, formed later through radioactive decay. On Earth, mixing and evaporation has altered the primordial composition to what is called the natural terrestrial composition. The heavier elements produced after the Big Bang range in atomic numbers from Z = 6 (carbon) to Z = 94 (plutonium). Synthesis of these elements occurred through nuclear reactions involving the strong and weak interactions among nuclei, in processes called nuclear fusion (including both rapid and slow multiple neutron capture), as well as nuclear fission and radioactive decays such as beta decay. The stability of atomic nuclei of different sizes and composition (i.e. numbers of neutrons and protons) plays an important role in the possible reactions among nuclei. Cosmic nucleosynthesis, therefore, is studied among researchers of astrophysics and nuclear physics ("nuclear astrophysics").
History of nucleosynthesis theory
The first ideas on nucleosynthesis were simply that the chemical elements were created at the beginning of the universe, but no rational physical scenario for this could be identified. Gradually it became clear that hydrogen and helium are much more abundant than any of the other elements. All the rest constitute less than 2% of the mass of the Solar System, and of other star systems as well. At the same time it was clear that oxygen and carbon were the next two most common elements, and also that there was a general trend toward high abundance of the light elements, especially those with isotopes composed of whole numbers of helium-4 nuclei (alpha nuclides).
Arthur Stanley Eddington first suggested in 1920 that stars obtain their energy by fusing hydrogen into helium and raised the possibility that the heavier elements may also form in stars. This idea was not generally accepted, as the nuclear mechanism was not understood. In the years immediately before World War II, Hans Bethe first elucidated those nuclear mechanisms by which hydrogen is fused into helium.
Fred Hoyle's original work on nucleosynthesis of heavier elements in stars, occurred just after World War II. His work explained the production of all heavier elements, starting from hydrogen. Hoyle proposed that hydrogen is continuously created in the universe from vacuum and energy, without need for universal beginning.
Hoyle's work explained how the abundances of the elements increased with time as the galaxy aged. Subsequently, Hoyle's picture was expanded during the 1960s by contributions from William A. Fowler, Alastair G. W. Cameron, and Donald D. Clayton, followed by many others. The seminal 1957 review paper by E. M. Burbidge, G. R. Burbidge, Fowler and Hoyle is a well-known summary of the state of the field in 1957. That paper defined new processes for the transformation of one heavy nucleus into others within stars, processes that could be documented by astronomers.
The Big Bang itself had been proposed in 1931, long before this period, by Georges Lemaître, a Belgian physicist, who suggested that the evident expansion of the Universe in time required that the Universe, if contracted backwards in time, would continue to do so until it could contract no further. This would bring all the mass of the Universe to a single point, a "primeval atom", to a state before which time and space did not exist. Hoyle is credited with coining the term "Big Bang" during a 1949 BBC radio broadcast, saying that Lemaître's theory was "based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." It is popularly reported that Hoyle intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Lemaître's model was needed to explain the existence of deuterium and nuclides between helium and carbon, as well as the fundamentally high amount of helium present, not only in stars but also in interstellar space. As it happened, both Lemaître and Hoyle's models of nucleosynthesis would be needed to explain the elemental abundances in the universe.
The goal of the theory of nucleosynthesis is to explain the vastly differing abundances of the chemical elements and their several isotopes from the perspective of natural processes. The primary stimulus to the development of this theory was the shape of a plot of the abundances versus the atomic number of the elements. Those abundances, when plotted on a graph as a function of atomic number, have a jagged sawtooth structure that varies by factors up to ten million. A very influential stimulus to nucleosynthesis research was an abundance table created by Hans Suess and Harold Urey that was based on the unfractionated abundances of the non-volatile elements found within unevolved meteorites. Such a graph of the abundances is displayed on a logarithmic scale below, where the dramatically jagged structure is visually suppressed by the many powers of ten spanned in the vertical scale of this graph.
Processes
There are a number of astrophysical processes which are believed to be responsible for nucleosynthesis. The majority of these occur within stars, and the chain of those nuclear fusion processes are known as hydrogen burning (via the proton–proton chain or the CNO cycle), helium burning, carbon burning, neon burning, oxygen burning and silicon burning. These processes are able to create elements up to and including iron and nickel. This is the region of nucleosynthesis within which the isotopes with the highest binding energy per nucleon are created. Heavier elements can be assembled within stars by a neutron capture process known as the s-process or in explosive environments, such as supernovae and neutron star mergers, by a number of other processes. Some of those others include the r-process, which involves rapid neutron captures, the rp-process, and the p-process (sometimes known as the gamma process), which results in the photodisintegration of existing nuclei.
Major types
Big Bang nucleosynthesis
Big Bang nucleosynthesis occurred within the first three minutes of the beginning of the universe and is responsible for much of the abundance of hydrogen-1 (protium), hydrogen-2 (D, deuterium), helium-3, and helium-4. Although helium-4 continues to be produced by stellar fusion and alpha decays, and trace amounts of hydrogen-1 continue to be produced by spallation and certain types of radioactive decay, most of the mass of these isotopes in the universe is thought to have been produced in the Big Bang. The nuclei of these elements, along with some lithium-7 and beryllium-7, are considered to have been formed between 100 and 300 seconds after the Big Bang, when the primordial quark–gluon plasma froze out to form protons and neutrons. Because of the very short period in which nucleosynthesis occurred before it was stopped by expansion and cooling (about 20 minutes), no elements heavier than beryllium (or possibly boron) could be formed. Elements formed during this time were in the plasma state, and did not cool to the state of neutral atoms until much later.
Stellar nucleosynthesis
Stellar nucleosynthesis is the nuclear process by which new nuclei are produced. It occurs in stars during stellar evolution. It is responsible for the galactic abundances of elements from carbon to iron. Stars are thermonuclear furnaces in which H and He are fused into heavier nuclei by increasingly high temperatures as the composition of the core evolves. Of particular importance is carbon because its formation from He is a bottleneck in the entire process. Carbon is produced by the triple-alpha process in all stars. Carbon is also the main element that causes the release of free neutrons within stars, giving rise to the s-process, in which the slow absorption of neutrons converts iron into elements heavier than iron and nickel.
The products of stellar nucleosynthesis are generally dispersed into the interstellar gas through mass loss episodes and the stellar winds of low mass stars. The mass loss events can be witnessed today in the planetary nebulae phase of low-mass star evolution, and the explosive ending of stars, called supernovae, of those with more than eight times the mass of the Sun.
The first direct proof that nucleosynthesis occurs in stars was the astronomical observation that interstellar gas has become enriched with heavy elements as time passed. As a result, stars that were born from it late in the galaxy, formed with much higher initial heavy element abundances than those that had formed earlier. The detection of technetium in the atmosphere of a red giant star in 1952, by spectroscopy, provided the first evidence of nuclear activity within stars. Because technetium is radioactive, with a half-life much less than the age of the star, its abundance must reflect its recent creation within that star. Equally convincing evidence of the stellar origin of heavy elements is the large overabundances of specific stable elements found in stellar atmospheres of asymptotic giant branch stars. Observation of barium abundances some 20–50 times greater than found in unevolved stars is evidence of the operation of the s-process within such stars. Many modern proofs of stellar nucleosynthesis are provided by the isotopic compositions of stardust, solid grains that have condensed from the gases of individual stars and which have been extracted from meteorites. Stardust is one component of cosmic dust and is frequently called presolar grains. The measured isotopic compositions in stardust grains demonstrate many aspects of nucleosynthesis within the stars from which the grains condensed during the star's late-life mass-loss episodes.
Explosive nucleosynthesis
Supernova nucleosynthesis occurs in the energetic environment in supernovae, in which the elements between silicon and nickel are synthesized in quasiequilibrium established during fast fusion that attaches by reciprocating balanced nuclear reactions to 28Si. Quasiequilibrium can be thought of as almost equilibrium except for a high abundance of the 28Si nuclei in the feverishly burning mix. This concept was the most important discovery in nucleosynthesis theory of the intermediate-mass elements since Hoyle's 1954 paper because it provided an overarching understanding of the abundant and chemically important elements between silicon (A = 28) and nickel (A = 60). It replaced the incorrect although much cited alpha process of the B2FH paper, which inadvertently obscured Hoyle's 1954 theory. Further nucleosynthesis processes can occur, in particular the r-process (rapid process) described by the B2FH paper and first calculated by Seeger, Fowler and Clayton, in which the most neutron-rich isotopes of elements heavier than nickel are produced by rapid absorption of free neutrons. The creation of free neutrons by electron capture during the rapid compression of the supernova core along with the assembly of some neutron-rich seed nuclei makes the r-process a primary process, and one that can occur even in a star of pure H and He. This is in contrast to the B2FH designation of the process as a secondary process. This promising scenario, though generally supported by supernova experts, has yet to achieve a satisfactory calculation of r-process abundances. The primary r-process has been confirmed by astronomers who had observed old stars born when galactic metallicity was still small, that nonetheless contain their complement of r-process nuclei; thereby demonstrating that the metallicity is a product of an internal process. The r-process is responsible for our natural cohort of radioactive elements, such as uranium and thorium, as well as the most neutron-rich isotopes of each heavy element.
The rp-process (rapid proton) involves the rapid absorption of free protons as well as neutrons, but its role and its existence are less certain.
Explosive nucleosynthesis occurs too rapidly for radioactive decay to decrease the number of neutrons, so that many abundant isotopes with equal and even numbers of protons and neutrons are synthesized by the silicon quasi-equilibrium process. During this process, the burning of oxygen and silicon fuses nuclei that themselves have equal numbers of protons and neutrons to produce nuclides which consist of whole numbers of helium nuclei, up to 15 (representing 60Zn). Such multiple-alpha-particle nuclides are totally stable up to 40Ca (made of 10 helium nuclei), but heavier nuclei with equal and even numbers of protons and neutrons are tightly bound but unstable. The quasi-equilibrium produces the radioactive isobars 44Ti, 48Cr, 52Fe, and 56Ni, which (except 44Ti) are created in abundance but decay after the explosion and leave the most stable isotope of the corresponding element at the same atomic weight. The most abundant and extant isotopes of elements produced in this way are 48Ti, 52Cr, and 56Fe. These decays are accompanied by the emission of gamma-rays (radiation from the nucleus), whose spectroscopic lines can be used to identify the isotope created by the decay. The detection of these emission lines was an important early product of gamma-ray astronomy.
The most convincing proof of explosive nucleosynthesis in supernovae occurred in 1987 when those gamma-ray lines were detected emerging from supernova 1987A. Gamma-ray lines identifying 56Co and 57Co nuclei, whose half-lives limit their age to about a year, proved that their radioactive cobalt parents created them. This nuclear astronomy observation was predicted in 1969 as a way to confirm explosive nucleosynthesis of the elements, and that prediction played an important role in the planning for NASA's Compton Gamma-Ray Observatory.
Other proofs of explosive nucleosynthesis are found within the stardust grains that condensed within the interiors of supernovae as they expanded and cooled. Stardust grains are one component of cosmic dust. In particular, radioactive 44Ti was measured to be very abundant within supernova stardust grains at the time they condensed during the supernova expansion. This confirmed a 1975 prediction of the identification of supernova stardust (SUNOCONs), which became part of the pantheon of presolar grains. Other unusual isotopic ratios within these grains reveal many specific aspects of explosive nucleosynthesis.
Neutron star mergers
The merger of binary neutron stars (BNSs) is now believed to be the main source of r-process elements. Being neutron-rich by definition, mergers of this type had been suspected of being a source of such elements, but definitive evidence was difficult to obtain. In 2017 strong evidence emerged when LIGO, VIRGO, the Fermi Gamma-ray Space Telescope and INTEGRAL, along with a collaboration of many observatories around the world, detected both gravitational wave and electromagnetic signatures of a likely neutron star merger, GW170817, and subsequently detected signals of numerous heavy elements such as gold as the ejected degenerate matter decayed and cooled. The first detection of a merger between a neutron star and a black hole (an NSBH) came in July 2021, with more detections since, but analyses appear to favor BNSs over NSBHs as the main contributors to heavy-element production.
Black hole accretion disk nucleosynthesis
Nucleosynthesis may happen in accretion disks of black holes.
Cosmic ray spallation
The cosmic ray spallation process reduces the atomic weight of interstellar matter by impact with cosmic rays, to produce some of the lightest elements present in the universe (though not a significant amount of deuterium). Most notably, spallation is believed to be responsible for the generation of almost all of the 3He and the elements lithium, beryllium, and boron, although some lithium-7 and beryllium-7 are thought to have been produced in the Big Bang. The spallation process results from the impact of cosmic rays (mostly fast protons) against the interstellar medium. These impacts fragment the carbon, nitrogen, and oxygen nuclei present. The process results in the light elements beryllium, boron, and lithium being present in the cosmos at much greater abundances than they are found within solar atmospheres. The quantities of the light elements 1H and 4He produced by spallation are negligible relative to their primordial abundance.
Beryllium and boron are not significantly produced by stellar fusion processes, since 8Be is unbound, with an extremely short half-life on the order of 10−16 seconds.
Empirical evidence
Theories of nucleosynthesis are tested by calculating isotope abundances and comparing those results with observed abundances. Isotope abundances are typically calculated from the transition rates between isotopes in a network. Often these calculations can be simplified as a few key reactions control the rate of other reactions.
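As an illustration of such a rate calculation, the following minimal Python sketch evaluates the simplest possible 'network': the 56Ni → 56Co → 56Fe decay chain produced in supernovae (see above), using the analytic Bateman solution. The half-lives used (about 6.1 and 77.2 days) are standard values assumed here for illustration rather than taken from this article:

```python
import numpy as np

# Assumed half-lives (days) of the 56Ni -> 56Co -> 56Fe decay chain.
T_NI, T_CO = 6.1, 77.2
LAM_NI, LAM_CO = np.log(2) / T_NI, np.log(2) / T_CO

def abundances(t, n_ni0=1.0):
    """Analytic (Bateman) solution of the two-step decay network
    dN_Ni/dt = -lam_Ni*N_Ni,  dN_Co/dt = lam_Ni*N_Ni - lam_Co*N_Co."""
    n_ni = n_ni0 * np.exp(-LAM_NI * t)
    n_co = n_ni0 * LAM_NI / (LAM_CO - LAM_NI) * (
        np.exp(-LAM_NI * t) - np.exp(-LAM_CO * t))
    n_fe = n_ni0 - n_ni - n_co   # the remainder has decayed to stable 56Fe
    return n_ni, n_co, n_fe

for t in (0, 30, 100, 365):
    ni, co, fe = abundances(float(t))
    print(f"day {t:4d}: 56Ni={ni:.3f}  56Co={co:.3f}  56Fe={fe:.3f}")
```

Real nucleosynthesis networks couple hundreds of species through temperature-dependent reaction rates, but the comparison step is the same: integrate the network, then compare the resulting abundances with observation.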
Minor mechanisms and processes
Tiny amounts of certain nuclides are produced on Earth by artificial means. Those are our primary source, for example, of technetium. However, some nuclides are also produced by a number of natural means that have continued after primordial elements were in place. These often act to create new elements in ways that can be used to date rocks or to trace the source of geological processes. Although these processes do not produce the nuclides in abundance, they are assumed to be the entire source of the existing natural supply of those nuclides.
These mechanisms include:
Radioactive decay may lead to radiogenic daughter nuclides. The nuclear decay of many long-lived primordial isotopes, especially uranium-235, uranium-238, and thorium-232 produce many intermediate daughter nuclides before they too finally decay to isotopes of lead. The Earth's natural supply of elements like radon and polonium is via this mechanism. The atmosphere's supply of argon-40 is due mostly to the radioactive decay of potassium-40 in the time since the formation of the Earth. Little of the atmospheric argon is primordial. Helium-4 is produced by alpha-decay, and the helium trapped in Earth's crust is also mostly non-primordial. In other types of radioactive decay, such as cluster decay, larger species of nuclei are ejected (for example, neon-20), and these eventually become newly formed stable atoms.
Radioactive decay may lead to spontaneous fission. This is not cluster decay, as the fission products may be split among nearly any type of atom. Thorium-232, uranium-235, and uranium-238 are primordial isotopes that undergo spontaneous fission. Natural technetium and promethium are produced in this manner.
Nuclear reactions. Naturally occurring nuclear reactions powered by radioactive decay give rise to so-called nucleogenic nuclides. This process happens when an energetic particle from radioactive decay, often an alpha particle, reacts with a nucleus of another atom to change the nucleus into another nuclide. This process may also cause the production of further subatomic particles, such as neutrons. Neutrons can also be produced in spontaneous fission and by neutron emission. These neutrons can then go on to produce other nuclides via neutron-induced fission, or by neutron capture. For example, some stable isotopes such as neon-21 and neon-22 are produced by several routes of nucleogenic synthesis, and thus only part of their abundance is primordial.
Nuclear reactions due to cosmic rays. By convention, these reaction-products are not termed "nucleogenic" nuclides, but rather cosmogenic nuclides. Cosmic rays continue to produce new elements on Earth by the same cosmogenic processes discussed above that produce primordial beryllium and boron. One important example is carbon-14, produced from nitrogen-14 in the atmosphere by cosmic rays. Iodine-129 is another example.
| Physical sciences | Nuclear physics | null |
48909 | https://en.wikipedia.org/wiki/Zenith | Zenith | The zenith is the imaginary point on the celestial sphere directly "above" a particular location. "Above" means in the vertical direction (plumb line) opposite to the gravity direction at that location (nadir). The zenith is the "highest" point on the celestial sphere.
Origin
The word zenith derives from an inaccurate reading of the Arabic expression samt ar-ra's, meaning "direction of the head" or "path above the head", by Medieval Latin scribes in the Middle Ages (during the 14th century), possibly through Old Spanish. It was reduced to samt ("direction") and miswritten as senit/cenit, the m being misread as ni. Through the Old French cenith, zenith first appeared in the 17th century.
Relevance and use
The term zenith sometimes means the highest point, way, or level reached by a celestial body on its daily apparent path around a given point of observation. This sense of the word is often used to describe the position of the Sun ("The sun reached its zenith..."), but to an astronomer, the Sun does not have its own zenith and is at the zenith only if it is directly overhead.
In a scientific context, the zenith is the direction of reference for measuring the zenith angle (or zenith angular distance), the angle between a direction of interest (e.g. a star) and the local zenith; that is, the complement of the altitude angle (or elevation angle).
The Sun reaches the observer's zenith when it is 90° above the horizon, and this only happens between the Tropic of Cancer and the Tropic of Capricorn. The point where this occurs is known as the subsolar point. In Islamic astronomy, the passing of the Sun over the zenith of Mecca becomes the basis of the qibla observation by shadows twice a year on 27/28 May and 15/16 July.
At a given location during the course of a day, the Sun reaches not only its zenith but also its nadir, at the antipode of that location 12 hours from solar noon.
In astronomy, the altitude in the horizontal coordinate system and the zenith angle are complementary angles, with the horizon perpendicular to the zenith. The astronomical meridian is also determined by the zenith, and is defined as a circle on the celestial sphere that passes through the zenith, nadir, and the celestial poles.
A zenith telescope is a type of telescope designed to point straight up at or near the zenith, and used for precision measurement of star positions, to simplify telescope construction, or both. The NASA Orbital Debris Observatory and the Large Zenith Telescope are both zenith telescopes, since the use of liquid mirrors meant these telescopes could only point straight up.
On the International Space Station, zenith and nadir are used instead of up and down, referring to directions within and around the station, relative to the earth.
Zenith star
Zenith stars (also "star on top", "overhead star", "latitude star") are stars whose declination equals the latitude of the observer's location, and which hence at some time in the day or night culminate (pass) through the zenith. When a star is at the zenith, its right ascension equals the local sidereal time at the observer's location. In celestial navigation this allows latitude to be determined, since the declination of the star equals the latitude of the observer. If the current time at Greenwich is known at the time of the observation, the observer's longitude can also be determined from the right ascension of the star. Hence zenith stars lie on or near the circle of declination equal to the latitude of the observer (the "zenith circle"). Zenith stars are not to be confused with the "steering stars" of a sidereal compass rose.
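A minimal sketch of the navigation arithmetic just described, assuming sidereal times are already expressed in hours and ignoring refraction and precession; the function name and sample numbers are illustrative, not from the article:

```python
def position_from_zenith_star(dec_deg, ra_hours, greenwich_sidereal_hours):
    """Latitude/longitude from a star observed at the zenith.

    At culmination through the zenith the local sidereal time equals the
    star's right ascension, so the (east-positive) longitude is the offset
    from Greenwich sidereal time; latitude equals the declination.
    """
    latitude = dec_deg
    longitude = ((ra_hours - greenwich_sidereal_hours) % 24.0) * 15.0  # h -> deg
    if longitude > 180.0:
        longitude -= 360.0  # wrap into [-180, 180]
    return latitude, longitude

# Hypothetical observation: a star with dec +38.8 deg, RA 18.6 h at the zenith
# while Greenwich sidereal time reads 20.6 h -> observer near 38.8 N, 30 W.
print(position_from_zenith_star(38.8, 18.6, 20.6))
```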
| Physical sciences | Celestial sphere: General | Astronomy |
48910 | https://en.wikipedia.org/wiki/Horizon | Horizon | The horizon is the apparent curve that separates the surface of a celestial body from its sky when viewed from the perspective of an observer on or near the surface of the relevant body. This curve divides all viewing directions based on whether it intersects the relevant body's surface or not.
The true horizon is a theoretical line, which can only be observed to any degree of accuracy when it lies along a relatively smooth surface such as that of Earth's oceans. At many locations, this line is obscured by terrain, and on Earth it can also be obscured by life forms such as trees and/or human constructs such as buildings. The resulting intersection of such obstructions with the sky is called the visible horizon. On Earth, when looking at a sea from a shore, the part of the sea closest to the horizon is called the offing.
The true horizon surrounds the observer and it is typically assumed to be a circle, drawn on the surface of a perfectly spherical model of the relevant celestial body, i.e., a small circle of the local osculating sphere. With respect to Earth, the center of the true horizon is below the observer and below sea level. Its radius or horizontal distance from the observer varies slightly from day to day due to atmospheric refraction, which is greatly affected by weather conditions. Also, the higher the observer's eyes are from sea level, the farther away the horizon is from the observer. For instance, in standard atmospheric conditions, for an observer with eye level above sea level by , the horizon is at a distance of about .
When observed from very high standpoints, such as a space station, the horizon is much farther away and it encompasses a much larger area of Earth's surface. In this case, the horizon would no longer be a perfect circle, not even a plane curve such as an ellipse, especially when the observer is above the equator, as the Earth's surface can be better modeled as an oblate ellipsoid than as a sphere.
Etymology
The word horizon derives from the Greek ὁρίζων κύκλος (horizōn kyklos) 'separating circle', where ὁρίζων is from the verb ὁρίζω (horizō) 'to divide, to separate', which in turn derives from ὅρος (horos) 'boundary, landmark'.
Appearance and usage
Historically, the distance to the visible horizon has long been vital to survival and successful navigation, especially at sea, because it determined an observer's maximum range of vision and thus of communication, with all the obvious consequences for safety and the transmission of information that this range implied. This importance lessened with the development of the radio and the telegraph, but even today, when flying an aircraft under visual flight rules, a technique called attitude flying is used to control the aircraft, where the pilot uses the visual relationship between the aircraft's nose and the horizon to control the aircraft. Pilots can also retain their spatial orientation by referring to the horizon.
In many contexts, especially perspective drawing, the curvature of the Earth is disregarded and the horizon is considered the theoretical line to which points on any horizontal plane converge (when projected onto the picture plane) as their distance from the observer increases. For observers near sea level, the difference between this geometrical horizon (which assumes a perfectly flat, infinite ground plane) and the true horizon (which assumes a spherical Earth surface) is imperceptible to the unaided eye. However, for someone on a hill looking out across the sea, the true horizon will be about a degree below a horizontal line.
In astronomy, the horizon is the horizontal plane through the eyes of the observer. It is the fundamental plane of the horizontal coordinate system, the locus of points that have an altitude of zero degrees. While similar in ways to the geometrical horizon, in this context a horizon may be considered to be a plane in space, rather than a line on a picture plane.
Distance to the horizon
Ignoring the effect of atmospheric refraction, the distance to the true horizon from an observer close to the Earth's surface is about
$d \approx \sqrt{2Rh},$
where h is the height above sea level and R is the Earth's radius.
The expression can be simplified as:
$d \approx k\sqrt{h},$
where the constant k equals $\sqrt{2R}$.
In this equation, Earth's surface is assumed to be perfectly spherical, with R equal to about 6,371 km.
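As a quick check of the formula above, a short sketch; the 1.70 m eye height is an assumed example value, chosen to match the worked figure used later in the article:

```python
import math

R_EARTH_KM = 6371.0  # spherical-Earth radius used in this article

def horizon_distance_km(h_metres):
    """d = sqrt(2*R*h): geometric horizon distance, no refraction."""
    return math.sqrt(2.0 * R_EARTH_KM * h_metres / 1000.0)

print(horizon_distance_km(1.70))   # ~4.65 km for a standing observer
print(3.57 * math.sqrt(1.70))      # same result via the simplified d = k*sqrt(h)
```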
Examples
Assuming no atmospheric refraction and a spherical Earth with radius R = 6,371 km:
For an observer standing on the ground with h = , the horizon is at a distance of .
For an observer standing on the ground with h = , the horizon is at a distance of .
For an observer standing on a hill or tower above sea level, the horizon is at a distance of .
For an observer standing on a hill or tower above sea level, the horizon is at a distance of .
For an observer standing on the roof of the Burj Khalifa, from ground, and about above sea level, the horizon is at a distance of .
For an observer atop Mount Everest ( in altitude), the horizon is at a distance of .
For an observer aboard a commercial passenger plane flying at a typical altitude of , the horizon is at a distance of .
For a U-2 pilot, whilst flying at its service ceiling , the horizon is at a distance of .
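The numeric values in the list above were lost in extraction. This sketch regenerates horizon distances with the same approximation; the heights are assumptions chosen to match the listed scenarios, not the article's original figures:

```python
import math

for h_m, scenario in [
    (1.70, "observer standing on the ground"),
    (100.0, "hill or tower above sea level"),
    (828.0, "roof of the Burj Khalifa"),
    (8849.0, "atop Mount Everest"),
    (10700.0, "commercial passenger plane"),
]:
    # metric approximation from above: d (km) ~ 3.57 * sqrt(h in metres)
    print(f"{scenario:35s} h = {h_m:8.1f} m -> ~{3.57 * math.sqrt(h_m):6.1f} km")
```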
Other planets
On terrestrial planets and other solid celestial bodies with negligible atmospheric effects, the distance to the horizon for a "standard observer" varies as the square root of the planet's radius. Thus, the horizon on Mercury is 62% as far away from the observer as it is on Earth, on Mars the figure is 73%, on the Moon the figure is 52%, on Mimas the figure is 18%, and so on.
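The percentages above follow directly from the square-root scaling; the mean radii below are approximate published values supplied here for the check, not figures from the article:

```python
import math

R_EARTH = 6371.0  # km
for body, radius_km in [("Mercury", 2440.0), ("Mars", 3390.0),
                        ("Moon", 1737.0), ("Mimas", 198.0)]:
    ratio = math.sqrt(radius_km / R_EARTH)  # horizon scales as sqrt(radius)
    print(f"{body}: {100.0 * ratio:.0f}% of Earth's horizon distance")
```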
Derivation
If the Earth is assumed to be a featureless sphere (rather than an oblate spheroid) with no atmospheric refraction, then the distance to the horizon can easily be calculated.
The tangent-secant theorem states that
$OC^2 = OA \times OB.$
Make the following substitutions:
d = OC = distance to the horizon
D = AB = diameter of the Earth
h = OB = height of the observer above sea level
D+h = OA = diameter of the Earth plus height of the observer above sea level,
with d, D, and h all measured in the same units. The formula now becomes
$d^2 = h(D + h),$
or
$d = \sqrt{h(D + h)} = \sqrt{h(2R + h)},$
where R is the radius of the Earth (D = 2R).
The same equation can also be derived using the Pythagorean theorem.
At the horizon, the line of sight is a tangent to the Earth and is also perpendicular to Earth's radius.
This sets up a right triangle, with the sum of the radius and the height as the hypotenuse.
With
d = distance to the horizon
h = height of the observer above sea level
R = radius of the Earth
referring to the second figure at the right leads to the following:
$(R + h)^2 = R^2 + d^2,$
which gives
$d = \sqrt{h(2R + h)}.$
The exact formula above can be expanded as:
$d = \sqrt{2Rh}\,\sqrt{1 + \frac{h}{2R}},$
where R is the radius of the Earth (R and h must be in the same units). For example,
if a satellite is at a height of 2000 km, the distance to the horizon is about 5,430 km;
neglecting the second term in parentheses would give a distance of about 5,048 km, a 7% error.
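The satellite comparison as a computation, using the exact form against its leading term:

```python
import math

R = 6371.0   # km, Earth radius
h = 2000.0   # km, satellite height from the example

exact = math.sqrt(h * (2.0 * R + h))   # d = sqrt(h(2R + h)) ~ 5430 km
leading = math.sqrt(2.0 * R * h)       # d = sqrt(2Rh)       ~ 5048 km
print(exact, leading, f"{100.0 * (exact - leading) / exact:.1f}% error")
```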
Approximation
If the observer is close to the surface of the Earth, then it is valid to disregard h in the term (2R + h), and the formula becomes
$d = \sqrt{2Rh}.$
Using kilometres for d and R, and metres for h, and taking the radius of the Earth as 6371 km, the distance to the horizon is
$d \approx 3.57\sqrt{h}.$
Using imperial units, with d and R in statute miles (as commonly used on land), and h in feet, the distance to the horizon is
$d \approx 1.22\sqrt{h}.$
If d is in nautical miles, and h in feet, the constant factor is about 1.06, which is close enough to 1 that it is often ignored, giving:
$d \approx \sqrt{h}.$
These formulas may be used when h is much smaller than the radius of the Earth (6371 km or 3959 mi), including all views from any mountaintops, airplanes, or high-altitude balloons. With the constants as given, both the metric and imperial formulas are precise to within 1% (see the next section for how to obtain greater precision).
If h is significant with respect to R, as with most satellites, then the approximation is no longer valid, and the exact formula is required.
Related measures
Arc distance
Another relationship involves the great-circle distance s along the arc over the curved surface of the Earth to the horizon; this is more directly comparable to the geographical distance on a map.
It can be formulated in terms of the central angle γ (between the observer's vertical and the tangent point) in radians,
$s = R\gamma;$
then
$\cos\gamma = \frac{R}{R + h}.$
Solving for s gives
$s = R\cos^{-1}\frac{R}{R + h}.$
The distance s can also be expressed in terms of the line-of-sight distance d; from the second figure at the right,
$\tan\gamma = \frac{d}{R};$
substituting for γ and rearranging gives
$s = R\tan^{-1}\frac{d}{R}.$
The distances d and s are nearly the same when the height of the object is negligible compared to the radius (that is, h ≪ R).
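A sketch contrasting the two distances under the formulas reconstructed above (spherical Earth, no refraction): for everyday eye heights they agree almost exactly, while for orbital heights they diverge.

```python
import math

R = 6371.0  # km

def line_of_sight_and_arc(h_metres):
    h = h_metres / 1000.0                 # km
    d = math.sqrt(h * (2.0 * R + h))      # straight-line distance to horizon
    s = R * math.acos(R / (R + h))        # arc distance along the surface
    return d, s

print(line_of_sight_and_arc(1.7))        # nearly equal: h << R
print(line_of_sight_and_arc(400_000.0))  # ISS-like height: clearly different
```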
Zenith angle
When the observer is elevated, the horizon zenith angle can be greater than 90°. The maximum visible zenith angle occurs when the ray is tangent to Earth's surface; from triangle OCG in the figure at right,
$\cos\gamma = \frac{R}{R + h},$
where h is the observer's height above the surface and γ is the angular dip of the horizon. It is related to the horizon zenith angle z by:
$z = \gamma + 90°.$
For a non-negative height h, the angle z is always ≥ 90°.
Objects above the horizon
To compute the greatest distance DBL at which an observer B can see the top of an object L above the horizon, simply add the distances to the horizon from each of the two points:
DBL = DB + DL
For example, for an observer B with a height of hB = 1.70 m standing on the ground, the horizon is DB = 4.65 km away. For a tower with a height of hL = 100 m, the horizon distance is DL = 35.7 km. Thus an observer on a beach can see the top of the tower as long as it is not more than DBL = 40.35 km away. Conversely, if an observer on a boat (hB = 1.7 m) can just see the tops of trees on a nearby shore (hL = 10 m), the trees are probably about DBL = 16 km away.
Referring to the figure at the right, and using the approximation above, the top of the lighthouse will be visible to a lookout in a crow's nest at the top of a mast of the boat if
$D_{BL} < 3.57\left(\sqrt{h_B} + \sqrt{h_L}\right),$
where DBL is in kilometres and hB and hL are in metres.
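The beach/tower and boat/trees figures above as a computation:

```python
import math

def max_sight_distance_km(h_observer_m, h_object_m):
    """D_BL = D_B + D_L: sum of the two horizon distances (metric approx.)."""
    return 3.57 * (math.sqrt(h_observer_m) + math.sqrt(h_object_m))

print(max_sight_distance_km(1.70, 100.0))  # beach observer + 100 m tower ~ 40.35 km
print(max_sight_distance_km(1.70, 10.0))   # boat observer + 10 m trees  ~ 16 km
```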
As another example, suppose an observer, whose eyes are two metres above the level ground, uses binoculars to look at a distant building which he knows to consist of thirty storeys, each 3.5 metres high. He counts the storeys he can see and finds there are only ten. So twenty storeys, or 70 metres, of the building are hidden from him by the curvature of the Earth. From this, he can calculate his distance from the building:
$d \approx 3.57\left(\sqrt{2} + \sqrt{70}\right),$
which comes to about 35 kilometres.
It is similarly possible to calculate how much of a distant object is visible above the horizon. Suppose an observer's eye is 10 metres above sea level, and he is watching a ship that is 20 km away. His horizon is
$3.57\sqrt{10}$
kilometres from him, which comes to about 11.3 kilometres. The ship is a further 8.7 km away. The height of a point on the ship that is just visible to the observer is given by
$h = \left(\frac{8.7}{3.57}\right)^2,$
which comes to almost exactly six metres. The observer can therefore see that part of the ship that is more than six metres above the level of the water. The part of the ship that is below this height is hidden from him by the curvature of the Earth. In this situation, the ship is said to be hull-down.
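Both worked examples, redone numerically with the same 3.57 factor:

```python
import math

K = 3.57  # km per sqrt(metre), geometric approximation used above

# Building: eyes 2 m up; 20 storeys (70 m) hidden by curvature.
print(K * (math.sqrt(2.0) + math.sqrt(70.0)))   # ~35 km to the building

# Ship: eyes 10 m up, ship 20 km away.
horizon = K * math.sqrt(10.0)                   # ~11.3 km to the horizon
beyond = 20.0 - horizon                         # ~8.7 km beyond it
print((beyond / K) ** 2)                        # ~6 m of hull hidden
```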
Effect of atmospheric refraction
Due to atmospheric refraction the distance to the visible horizon is further than the distance based on a simple geometric calculation. If the ground (or water) surface is colder than the air above it, a cold, dense layer of air forms close to the surface, causing light to be refracted downward as it travels, and therefore, to some extent, to go around the curvature of the Earth. The reverse happens if the ground is hotter than the air above it, as often happens in deserts, producing mirages. As an approximate compensation for refraction, surveyors measuring distances longer than 100 meters subtract 14% from the calculated curvature error and ensure lines of sight are at least 1.5 metres from the ground, to reduce random errors created by refraction.
If the Earth were an airless world like the Moon, the above calculations would be accurate. However, Earth has an atmosphere of air, whose density and refractive index vary considerably depending on the temperature and pressure. This makes the air refract light to varying extents, affecting the appearance of the horizon. Usually, the density of the air just above the surface of the Earth is greater than its density at greater altitudes. This makes its refractive index greater near the surface than at higher altitudes, which causes light that is travelling roughly horizontally to be refracted downward. This makes the actual distance to the horizon greater than the distance calculated with geometrical formulas. With standard atmospheric conditions, the difference is about 8%. This changes the factor of 3.57, in the metric formulas used above, to about 3.86. For instance, if an observer is standing on a seashore, with eyes 1.70 m above sea level, then according to the simple geometrical formulas given above, the horizon should be 4.7 km away. Actually, atmospheric refraction allows the observer to see 300 metres farther, moving the true horizon 5 km away from the observer.
This correction can be, and often is, applied as a fairly good approximation when atmospheric conditions are close to standard. When conditions are unusual, this approximation fails. Refraction is strongly affected by temperature gradients, which can vary considerably from day to day, especially over water. In extreme cases, usually in springtime, when warm air overlies cold water, refraction can allow light to follow the Earth's surface for hundreds of kilometres. Opposite conditions occur, for example, in deserts, where the surface is very hot, so hot, low-density air is below cooler air. This causes light to be refracted upward, causing mirage effects that make the concept of the horizon somewhat meaningless. Calculated values for the effects of refraction under unusual conditions are therefore only approximate. Nevertheless, attempts have been made to calculate them more accurately than the simple approximation described above.
Outside the visual wavelength range, refraction will be different. For radar (e.g. for wavelengths 300 to 3 mm i.e. frequencies between 1 and 100 GHz) the radius of the Earth may be multiplied by 4/3 to obtain an effective radius giving a factor of 4.12 in the metric formula i.e. the radar horizon will be 15% beyond the geometrical horizon or 7% beyond the visual. The 4/3 factor is not exact, as in the visual case the refraction depends on atmospheric conditions.
Integration method—Sweer
If the density profile of the atmosphere is known, the distance d to the horizon is given by
$d = R_E(\psi + \delta),$
where RE is the radius of the Earth, ψ is the dip of the horizon and δ is the refraction of the horizon. The dip is determined fairly simply from
$\cos\psi = \frac{R_E\,\mu_0}{(R_E + h)\,\mu},$
where h is the observer's height above the Earth, μ is the index of refraction of air at the observer's height, and μ0 is the index of refraction of air at Earth's surface.
The refraction must be found by integration of
$\delta = -\int \tan\phi\,\frac{d\mu}{\mu},$
where φ is the angle between the ray and a line through the center of the Earth. The angles ψ and φ are related by
$\phi = 90° + \psi.$
Simple method—Young
A much simpler approach, which produces essentially the same results as the first-order approximation described above, uses the geometrical model but with an effective radius R′ = 7/6 RE. The distance to the horizon is then
$d = \sqrt{2R'h}.$
Taking the radius of the Earth as 6371 km, with d in km and h in m,
$d \approx 3.86\sqrt{h};$
with d in mi and h in ft,
$d \approx 1.32\sqrt{h}.$
In the case of radar one typically has R′ = 4/3 RE, resulting (with d in km and h in m) in
$d \approx 4.12\sqrt{h}.$
Results from Young's method are quite close to those from Sweer's method, and are sufficiently accurate for many purposes.
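A sketch of the effective-radius idea: run the same geometric formula with R scaled by 7/6 for standard optical refraction (a factor inferred here from the 3.86 coefficient quoted above) and by 4/3 for radar:

```python
import math

R = 6371.0  # km

def horizon_km(h_metres, radius_scale=1.0):
    """Horizon distance with an effective Earth radius (Young's approach)."""
    return math.sqrt(2.0 * radius_scale * R * h_metres / 1000.0)

h = 1.70  # metres, the standing-observer example
print(horizon_km(h))             # geometric, factor 3.57: ~4.65 km
print(horizon_km(h, 7.0 / 6.0))  # optical,   factor 3.86: ~5.03 km
print(horizon_km(h, 4.0 / 3.0))  # radar,     factor 4.12: ~5.37 km
```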
Vanishing points
The horizon is a key feature of the picture plane in the science of graphical perspective. Assuming the picture plane stands vertical to ground, and P is the perpendicular projection of the eye point O on the picture plane, the horizon is defined as the horizontal line through P. The point P is the vanishing point of lines perpendicular to the picture. If S is another point on the horizon, then it is the vanishing point for all lines parallel to OS. But Brook Taylor (1719) indicated that the horizon plane determined by O and the horizon was like any other plane:
The term of Horizontal Line, for instance, is apt to confine the Notions of a Learner to the Plane of the Horizon, and to make him imagine, that that Plane enjoys some particular Privileges, which make the Figures in it more easy and more convenient to be described, by the means of that Horizontal Line, than the Figures in any other plane;…But in this Book I make no difference between the Plane of the Horizon, and any other Plane whatsoever...
The peculiar geometry of perspective, where parallel lines converge in the distance, stimulated the development of projective geometry, which posits a point at infinity where parallel lines meet. In her book Geometry of an Art (2007), Kirsti Andersen described the evolution of perspective drawing and science up to 1800, noting that vanishing points need not be on the horizon. In a chapter titled "Horizon", John Stillwell recounted how projective geometry has led to incidence geometry, the modern abstract study of line intersection. Stillwell also ventured into the foundations of mathematics in a section titled "What are the Laws of Algebra?" The "algebra of points", originally given by Karl von Staudt as a derivation of the axioms of a field, was deconstructed in the twentieth century, yielding a wide variety of mathematical possibilities. Stillwell states
This discovery from 100 years ago seems capable of turning mathematics upside down, though it has not yet been fully absorbed by the mathematical community. Not only does it defy the trend of turning geometry into algebra, it suggests that both geometry and algebra have a simpler foundation than previously thought.
| Physical sciences | Celestial sphere | null |
48963 | https://en.wikipedia.org/wiki/Characiformes | Characiformes | Characiformes is an order of ray-finned fish, comprising the characins and their allies. Grouped in 18 recognized families, more than 2000 different species are described, including the well-known piranha and tetras.
Taxonomy
The Characiformes form part of a series called the Otophysi within the superorder Ostariophysi. The Otophysi contain three other orders, Cypriniformes, Siluriformes, and Gymnotiformes. The Characiformes form a group known as the Characiphysi with the Siluriformes and Gymnotiformes. The order Characiformes is the sister group to the orders Siluriformes and Gymnotiformes, though this has been debated in light of recent molecular evidence.
Originally, the characins were all grouped within a single family, the Characidae. Since then, 18 different families have been separated out. However, classification varies somewhat, and the most recent (2011) study confirms the circumscribed Characidae as monophyletic. Currently, 18 families, about 270 genera, and at least 1674 species are known.
The suborder Citharinoidei, which contains the families Distichodontidae and Citharinidae, is considered the sister group to the rest of the characins, suborder Characoidei. This group has a very ancient divergence from the rest of the Characiformes, dating back to the Early Cretaceous or earlier, and it has been suggested that it be better treated as its own order, the Cithariniformes.
Evolution
The Characiformes likely first originated and diversified on the supercontinent of West Gondwana (composed of modern Africa and South America) during the Cretaceous period, though fossils are poorly known. During the Cretaceous, the rift between South America and Africa was forming; this may explain the contrast in diversity between the two continents. Their low diversity in Africa may explain why some primitive fish families and the Cypriniformes coexist with them there, whereas these groups are absent from South America, where they may have been driven extinct. The characiforms had not spread into Africa soon enough to also reach the land connection between Africa and Asia. The earliest they could have spread into Central America was the late Miocene.
Fossils
The earliest characiform fossils date back to the Late Cretaceous, around the Santonian. Other fossil teeth date back to the Cenomanian of Morocco, but it has been suggested that these teeth may be of early ginglymodians. Previously, the oldest characiform was assumed to be Santanichthys of the Early Cretaceous (Albian Age) of Brazil. This presumably marine taxon was used as evidence of characiforms potentially having marine origins. However, more recent studies indicate that Santanichthys is likely a basal otophysan rather than a characiform. Similarly, Salminops from Spain and Sorbinicharax from Italy, previously also considered potential marine characiforms, are now thought to have no characiform affinities and are considered indeterminate teleosts. Given this, there is no paleontological support for characiforms having marine origins.
Uniquely, Late Cretaceous characiform fossils are found significantly north of their modern distribution. Indeterminate characiform teeth are known from the Santonian of Hungary and Maastrichtian of France, which have a large, multi-cusped appearance reminiscent of African alestids. Similarly, two Campanian freshwater characiform genera, Primuluchara and Eotexachara, are known from North America, with Primuluchara having a very wide distribution across Laramidia, ranging from Texas to as far north as southern Canada (Dinosaur Park Formation). It is likely that the warmer conditions of the Late Cretaceous allowed early characins to range farther north than the present day, with African characins colonizing Europe and South American characins colonizing North America. Early characins may have had some level of salt tolerance, allowing for such colonizations to take place.
Within their modern distribution, a number of modern South American characin families have their earliest occurrences in the Maastrichtian of Bolivia, with isolated teeth and skeletal elements identifiable to Acestrorhynchidae, Characidae, and Serrasalmidae.
Phylogeny
Below is a phylogeny of living Characiformes based on Betancur-Rodriguez et al. 2017 and Nelson, Grande & Wilson 2016.
Description
Characins possess a Weberian apparatus, a series of bony parts connecting the swim bladder and inner ear. Superficially, the Characiformes somewhat resemble their relatives of the order Cypriniformes, but have a small, fleshy adipose fin between the dorsal fin and tail. Most species have teeth within the mouth, since they are often carnivorous. The body is almost always covered in well-defined scales. The mouth is also usually not truly protractile.
The largest characins are Hydrocynus goliath, Salminus franciscanus, and Hoplias aimara, all of which grow up to . The smallest species is the Bolivian pygmy blue characin, Xenurobrycon polyancistrus, at about . Many members are under .
Distribution and habitat
Characins are most diverse in the Neotropics, where they are found in lakes and rivers throughout most of South and Central America. The red-bellied piranha, a member of the family Serrasalmidae within the Characiformes, is endemic to the Neotropical realm. At least 209 species of characins are found in Africa, including the distichodontids, citharinids, alestids, and hepsetids. The rest of the characins originate from the Americas.
Relationship to humans
A few characins become quite large, and are important as food or game. Most, however, are small shoaling fish. Many species commonly called tetras are popular in aquaria because of their bright colors, general hardiness, and tolerance towards other fish in community tanks.
| Biology and health sciences | Characiformes | null |
48971 | https://en.wikipedia.org/wiki/Characidae | Characidae | Characidae, the characids or characins, is a family of freshwater subtropical and tropical fish belonging to the order Characiformes. The name "characins" is a historical one, but scientists today tend to prefer "characids" to reflect their status as a, by and large, monophyletic group (at family rank). To arrive there, this family has undergone much systematic and taxonomic change. Among those fishes remaining in the Characidae currently are the tetras, comprising the very similar genera Hemigrammus and Hyphessobrycon, as well as a few related forms, such as the cave and neon tetras. Fish of this family are important as food in several regions, and also constitute a large percentage of captive freshwater aquarium fish species.
These fish vary in length; many are less than . One of the smallest species, Hyphessobrycon roseus, grows to a maximum length of 1.9 cm.
These fish inhabit a wide range and variety of habitats. New World fishes, they originate in the Americas, ranging from southwestern Texas and México through most of Central and South America, including such major waterways as the Amazon and Orinoco Rivers. Many of these fish come from rivers and tributaries, while the blind cave tetra, for example, inhabits flooded caves.
Systematics
This family has undergone a large amount of systematic and taxonomic change. More recent revision has moved many former members of the family into their own related but distinct families – the pencilfishes of the genus Nannostomus are a typical example, having now been moved into the Lebiasinidae, the assorted predatory species belonging to Hoplias and Hoplerythrinus have now been moved into the Erythrinidae, and the sabre-toothed fishes of the genus Hydrolycus have been moved into the Cynodontidae. The former subfamily Alestiinae was promoted to family level (Alestiidae) and the subfamilies Crenuchinae and Characidiinae were moved to the family Crenuchidae.
Other fish families that were formerly classified as members of the Characidae, but which were moved into separate families of their own during recent taxonomic revisions (after 1994) include Acestrorhynchidae, Anostomidae, Chilodontidae, Citharinidae, Ctenoluciidae, Curimatidae, Distichodontidae, Gasteropelecidae, Hemiodontidae, Hepsetidae, Parodontidae, Prochilodontidae, Serrasalmidae, and Triportheidae.
The larger piranhas were originally classified as belonging to the Characidae, but various revisions place them in their own related family, the Serrasalmidae. This reassignment has yet to enjoy universal acceptance, but is gaining in popularity among taxonomists working with these fishes. Given the current state of flux of the Characidae, a number of other changes will doubtless take place, reassigning once-familiar species to other families. Indeed, the entire phylogeny of the Ostariophysi – fishes possessing a Weberian apparatus – has yet to be settled conclusively. Until that phylogeny is settled, the opportunity for yet more upheavals within the taxonomy of the characoid fishes is considerable.
Classification
Phylogeny
Taxonomy
The subfamilies and tribes currently recognized by most if not all authors, and their respective genera, are:
Subfamily Spintherobolus clade
Amazonspinther
Spintherobolus
Subfamily Stethaprioninae
Tribe Rhoadsiini [Astyanax clade]
Astyanacinus
Astyanax
Carlana
Ctenobrycon
Inpaichthys
Nematobrycon
Oligosarcus
Parastremma
Psellogrammus
Rhoadsia
Tribe Stygichthyini [Jupiaba clade]
Coptobrycon
Erythrocharax
Jupiaba
Macropsobrycon
Parecbasis
Stygichthys
Tribe Pristellini [Hemigrammus clade; Aphyoditini]
Aphyodite
Atopomesus
Axelrodia
Brittanichthys
Bryconella
Nematocharax
Phycocharax
Tribe Stethaprionini
Brachychalcinus
Gymnocorymbus
Orthospinus
Poptella
Stethaprion
Stichonodon
Tribe Gymnocharacini
Andromakhe
Dectobrycon
Grundulus
Gymnocharacinus
Hollandichthys
Moenkhausia
Psalidodon
Pseudochalceus
Rachoviscus
Schultzites
Tribe Scissorini
Genycharax
Leptobrycon
Microschemobrycon
Mixobrycon
Oligobrycon
Oxybrycon
Scissor
Serrabrycon
Thrissobrycon
Tucanoichthys
Tyttobrycon
Subfamily Stevardiinae
Tribe Eretmobryconini
Eretmobrycon
Markiana
Tribe Xenurobryconini
Iotabrycon
Ptychocharax
Scopaeocharax
Tyttocharax
Xenurobrycon
Tribe Argopleura clade
Argopleura
Tribe Glandulocaudini
Glandulocauda
Lophiobrycon
Mimagoniates
Tribe Stevardiini
Chrysobrycon
Corynopoma
Gephyrocharax
Hysteronotus
Pseudocorynopoma
Pterobrycon
Tribe Hemibryconini
Acrobrycon
Boehlkea
Hemibrycon
Tribe Creagrutini
Carlastyanax
Creagrutus
Tribe Landonini
Landonia
Tribe Phenacobryconini
Phenacobrycon
Tribe Trochilocharacini
Trochilocharax
Tribe Diapomini
Attonitus
Aulixidens
Bryconacidnus
Bryconadenos
Bryconamericus
Caiapobrycon
Ceratobranchia
Cyanogaster
Diapoma
Hypobrycon
Knodus
Lepidocharax
Microgenys
Monotocheirodon
Othonocheirodus
Phallobrycon
Piabarchus
Piabina
Planaltina
Rhinobrycon
Rhinopetitia
Subfamily Characinae
Tribe Protocheirodontini
Protocheirodon
Tribe Pseudocheirodontini
Nanocheirodon
Pseudocheirodon
Tribe Aphyocharacini
Aphyocharacidium
Aphyocharax
Leptagoniates
Inpaichthys
Paragoniates
Phenagoniates
Prionobrama
Xenagoniates
Tribe Cheirodontini
Cheirodon
Heterocheirodon
Prodontocharax
Saccoderma
Tribe Compsurini
Acinocheirodon
Aphyocheirodon
Cheirodontops
Compsura
Ctenocheirodon
Kolpotocheirodon
Odontostilbe
Serrapinnus
Tribe Exodontini
Bryconexodon
Exodon
Roeboexodon
Tribe Tetragonopterini
Tetragonopterus
Tribe Characini
Acanthocharax
Acestrocephalus
Charax
Cynopotamus
Galeocharax
Phenacogaster
Priocharax
Roeboides
Subfamily Pristellinae
Bario
Deuterodon
Ectrepopterus
Hasemania
Hemigrammus
Hyphessobrycon
Moenkhausia
Myxiops
Paracheirodon
Parapristella
Petitella
Pristella
Probolodus
Thayeria
Former members
The Chalceidae, Iguanodectidae, Bryconidae and Heterocharacinae are the most recent clades to be removed in order to maintain a monophyletic Characidae.
Subfamily Iguanodectinae moved to Iguanodectidae
Bryconops
Iguanodectes
Piabucus
Subfamily Heterocharacinae moved to Acestrorhynchidae
Gnathocharax
Heterocharax
Hoplocharax
Lonchogenys
Subfamily Bryconinae moved to Bryconidae
Brycon
Chilobrycon
Henochilus
Subfamily Salmininae moved to Bryconidae
Salminus
Genera incertae sedis
Chalceus moved to Chalceidae
Genera incertae sedis
A large number of taxa in this family are incertae sedis. The relationships of many fish in this family – in particular species traditionally placed in the Tetragonopterinae, which had become something of a "wastebin taxon" – are poorly known; a comprehensive phylogenetic study for the entire family is needed. The genera Hyphessobrycon, Astyanax, Hemigrammus, Moenkhausia, and Bryconamericus include the largest number of currently recognized species among characid fishes that are in need of revision; Astyanax and Hyphessobrycon in the usual delimitation are among the largest genera in this family. These genera were originally proposed between 1854 and 1908 and are still more or less defined as by Carl H. Eigenmann in 1917, though diverse species have been added to each genus since that time. The anatomical diversity within each genus, the fact that each of these generic groups at the present time cannot be well-defined, and the high number of species involved are the major reasons for the lack of phylogenetic analyses dealing with the relationships of the species within these generic "groups".
| Biology and health sciences | Characiformes | Animals |
48980 | https://en.wikipedia.org/wiki/Basidiomycota | Basidiomycota | Basidiomycota is one of two large divisions that, together with the Ascomycota, constitute the subkingdom Dikarya (often referred to as the "higher fungi") within the kingdom Fungi. Members are known as basidiomycetes. More specifically, Basidiomycota includes these groups: agarics, puffballs, stinkhorns, bracket fungi, other polypores, jelly fungi, boletes, chanterelles, earth stars, smuts, bunts, rusts, mirror yeasts, and Cryptococcus, the human pathogenic yeast.
Basidiomycota are filamentous fungi composed of hyphae (except for basidiomycete yeasts) and reproduce sexually via the formation of specialized club-shaped end cells called basidia that normally bear external meiospores (usually four). These specialized spores are called basidiospores. However, some Basidiomycota are obligate asexual reproducers. Basidiomycota that reproduce asexually (discussed below) can typically be recognized as members of this division by gross similarity to others, by the formation of a distinctive anatomical feature (the clamp connection), by cell wall components, and definitively by phylogenetic molecular analysis of DNA sequence data.
Classification
A 2007 classification, adopted by a coalition of 67 mycologists, recognized three subphyla (Pucciniomycotina, Ustilaginomycotina, Agaricomycotina) and two other class-level taxa (Wallemiomycetes, Entorrhizomycetes) outside of these, among the Basidiomycota. As now classified, the subphyla join and also cut across various obsolete taxonomic groups (see below) previously commonly used to describe Basidiomycota. According to a 2008 estimate, Basidiomycota comprise three subphyla (including six unassigned classes), 16 classes, 52 orders, 177 families, 1,589 genera, and 31,515 species.
Wijayawardene et al. 2020 produced an update that recognized 19 classes (Agaricomycetes, Agaricostilbomycetes, Atractiellomycetes, Bartheletiomycetes, Classiculomycetes, Cryptomycocolacomycetes, Cystobasidiomycetes, Dacrymycetes, Exobasidiomycetes, Malasseziomycetes, Microbotryomycetes, Mixiomycetes, Monilielliomycetes, Pucciniomycetes, Spiculogloeomycetes, Tremellomycetes, Tritirachiomycetes, Ustilaginomycetes and Wallemiomycetes) with multiple orders and genera.
Traditionally, the Basidiomycota were divided into two classes, now obsolete:
Homobasidiomycetes (alternatively called holobasidiomycetes), including true mushrooms
Heterobasidiomycetes, including the jelly, rust and smut fungi
Nonetheless, these former concepts continue to be used as two growth-habit groupings: the "mushrooms" (e.g. Schizophyllum commune) and the non-mushrooms (e.g. Mycosarcoma maydis).
Agaricomycotina
The Agaricomycotina include what had previously been called the Hymenomycetes (an obsolete morphology-based class of Basidiomycota that formed hymenial layers on their fruitbodies), the Gasteromycetes (another obsolete class that included species mostly lacking hymenia and mostly forming spores in enclosed fruitbodies), as well as most of the jelly fungi. This subphylum also includes the "classic" mushrooms, polypores, corals, chanterelles, crusts, puffballs and stinkhorns. The three classes in the Agaricomycotina are the Agaricomycetes, the Dacrymycetes, and the Tremellomycetes.
The class Wallemiomycetes is not yet placed in a subdivision, but recent genomic evidence suggests that it is a sister group of Agaricomycotina.
Pucciniomycotina
The Pucciniomycotina include the rust fungi, the insect parasitic/symbiotic genus Septobasidium, a former group of smut fungi (in the Microbotryomycetes, which includes mirror yeasts), and a mixture of odd, infrequently seen, or seldom recognized fungi, often parasitic on plants. The eight classes in the Pucciniomycotina are Agaricostilbomycetes, Atractiellomycetes, Classiculomycetes, Cryptomycocolacomycetes, Cystobasidiomycetes, Microbotryomycetes, Mixiomycetes, and Pucciniomycetes.
Ustilaginomycotina
The Ustilaginomycotina are most (but not all) of the former smut fungi and the Exobasidiales. The classes of the Ustilaginomycotina are the Exobasidiomycetes, the Entorrhizomycetes, and the Ustilaginomycetes.
Genera included
There are several genera classified in the Basidiomycota that are 1) poorly known, 2) have not been subjected to DNA analysis, or 3) if analysed phylogenetically do not group with as yet named or identified families, and have not been assigned to a specific family (i.e., they are incertae sedis with respect to familial placement). These include:
Anastomyces W.P.Wu, B.Sutton & Gange (1997)
Anguillomyces Marvanová & Bärl. (2000)
Anthoseptobasidium Rick (1943)
Arcispora Marvanová & Bärl. (1998)
Arrasia Bernicchia, Gorjón & Nakasone (2011)
Brevicellopsis Hjortstam & Ryvarden (2008)
Celatogloea P.Roberts (2005)
Cleistocybe Ammirati, A.D.Parker & Matheny (2007)
Cystogloea P. Roberts (2006)
Dacryomycetopsis Rick (1958)
Eriocybe Vellinga (2011)
Hallenbergia Dhingra & Priyanka (2011)
Hymenoporus Tkalčec, Mešić & Chun Y.Deng (2015)
Kryptastrina Oberw. (1990)
Microstella K.Ando & Tubaki (1984)
Neotyphula Wakef. (1934)
Nodulospora Marvanová & Bärl. (2000)
Paraphelaria Corner (1966)
Punctulariopsis Ghob.-Nejh. (2010)
Radulodontia Hjortstam & Ryvarden (2008)
Restilago Vánky (2008)
Sinofavus W.Y.Zhuang (2008)
Zanchia Rick (1958)
Zygodesmus Corda (1837)
Zygogloea P.Roberts (1994)
Typical life cycle
Unlike animals and plants, which have readily recognizable male and female counterparts, Basidiomycota (except for the rusts, Pucciniales) tend to have mutually indistinguishable, compatible haploids, which are usually mycelia composed of filamentous hyphae. Typically, haploid Basidiomycota mycelia fuse via plasmogamy, and then the compatible nuclei migrate into each other's mycelia and pair up with the resident nuclei. Karyogamy is delayed, so that the compatible nuclei remain in pairs, called a dikaryon. The hyphae are then said to be dikaryotic. Conversely, the haploid mycelia are called monokaryons. Often, the dikaryotic mycelium is more vigorous than the individual monokaryotic mycelia, and proceeds to take over the substrate in which they are growing. The dikaryons can be long-lived, lasting years, decades, or centuries. The monokaryons are neither male nor female. They have either a bipolar or a tetrapolar mating system. Following meiosis, the resulting haploid basidiospores (and the monokaryons derived from them) have nuclei that are compatible with 50% (if bipolar) or 25% (if tetrapolar) of their sister basidiospores (and their resultant monokaryons), because the mating genes must differ for them to be compatible. However, there are sometimes more than two possible alleles for a given locus, and in such species, depending on the specifics, over 90% of monokaryons could be compatible with each other.
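A small enumeration of the compatibility arithmetic above, for the simplest tetrapolar case (two mating loci, each with two parental alleles, among the four spore genotypes of one meiosis); the bipolar, single-locus case gives 50% the same way. A minimal sketch, not a model of any particular species:

```python
from itertools import product

# Four basidiospore genotypes from one meiosis in a tetrapolar system:
# one allele at each of two mating loci, inherited from either parent.
spores = list(product(["A1", "A2"], ["B1", "B2"]))

def compatible(x, y):
    # Mating genes must differ at BOTH loci for two monokaryons to pair.
    return x[0] != y[0] and x[1] != y[1]

for s in spores:
    mates = [t for t in spores if compatible(s, t)]
    print(s, "->", mates, f"({len(mates)}/{len(spores)} = 25%)")
```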
The maintenance of the dikaryotic status in dikaryons in many Basidiomycota is facilitated by the formation of clamp connections that physically appear to help coordinate and re-establish pairs of compatible nuclei following synchronous mitotic nuclear divisions. Variations are frequent and multiple. In a typical Basidiomycota lifecycle the long lasting dikaryons periodically (seasonally or occasionally) produce basidia, the specialized usually club-shaped end cells, in which a pair of compatible nuclei fuse (karyogamy) to form a diploid cell. Meiosis follows shortly with the production of 4 haploid nuclei that migrate into 4 external, usually apical basidiospores. Variations occur, however. Typically the basidiospores are ballistic, hence they are sometimes also called ballistospores. In most species, the basidiospores disperse and each can start a new haploid mycelium, continuing the lifecycle. Basidia are microscopic but they are often produced on or in multicelled large fructifications called basidiocarps or basidiomes, or fruitbodies, variously called mushrooms, puffballs, etc. Ballistic basidiospores are formed on sterigmata which are tapered spine-like projections on basidia, and are typically curved, like the horns of a bull. In some Basidiomycota the spores are not ballistic, and the sterigmata may be straight, reduced to stubs, or absent. The basidiospores of these non-ballistosporic basidia may either bud off, or be released via dissolution or disintegration of the basidia.
In summary, meiosis takes place in a diploid basidium. Each one of the four haploid nuclei migrates into its own basidiospore. The basidiospores are ballistically discharged and start new haploid mycelia called monokaryons. There are no males or females, rather there are compatible thalli with multiple compatibility factors. Plasmogamy between compatible individuals leads to delayed karyogamy leading to establishment of a dikaryon. The dikaryon is long lasting but ultimately gives rise to either fruitbodies with basidia or directly to basidia without fruitbodies. The paired dikaryon in the basidium fuse (i.e. karyogamy takes place). The diploid basidium begins the cycle again.
Meiosis
Coprinopsis cinerea is a basidiomycete mushroom. It is particularly suited to the study of meiosis because meiosis progresses synchronously in about 10 million cells within the mushroom cap, and the meiotic prophase stage is prolonged. Burns et al. studied the expression of genes involved in the 15-hour meiotic process, and found that the pattern of gene expression of C. cinerea was similar to that of two other fungal species, the yeasts Saccharomyces cerevisiae and Schizosaccharomyces pombe. These similarities in the patterns of expression led to the conclusion that the core expression program of meiosis has been conserved in these fungi for over half a billion years of evolution since these species diverged.
Cryptococcus neoformans and Mycosarcoma maydis are examples of pathogenic basidiomycota. Such pathogens must be able to overcome the oxidative defenses of their respective hosts in order to produce a successful infection. The ability to undergo meiosis may provide a survival benefit for these fungi by promoting successful infection. A characteristic central feature of meiosis is recombination between homologous chromosomes. This process is associated with repair of DNA damage, particularly double-strand breaks. The ability of C. neoformans and M. maydis to undergo meiosis may contribute to their virulence by repairing the oxidative DNA damage caused by their host's release of reactive oxygen species.
Variations in lifecycles
Many variations occur: some variations are self-compatible and spontaneously form dikaryons without a separate compatible thallus being involved. These fungi are said to be homothallic, versus the normal heterothallic species with mating types. Others are secondarily homothallic, in that two compatible nuclei following meiosis migrate into each basidiospore, which is then dispersed as a pre-existing dikaryon. Often such species form only two spores per basidium, but that too varies. Following meiosis, mitotic divisions can occur in the basidium. Multiple numbers of basidiospores can result, including odd numbers via degeneration of nuclei, or pairing up of nuclei, or lack of migration of nuclei. For example, the chanterelle genus Craterellus often has six-spored basidia, while some corticioid Sistotrema species can have two-, four-, six-, or eight-spored basidia, and the cultivated button mushroom, Agaricus bisporus, can have one-, two-, three- or four-spored basidia under some circumstances. Occasionally, monokaryons of some taxa can form morphologically fully formed basidiomes and anatomically correct basidia and ballistic basidiospores in the absence of dikaryon formation, diploid nuclei, and meiosis. A few rare taxa have extended diploid lifecycles but can be common species. Examples exist in the mushroom genera Armillaria and Xerula, both in the Physalacriaceae. Occasionally, basidiospores are not formed and parts of the "basidia" act as the dispersal agents, e.g. the peculiar mycoparasitic jelly fungus Tetragoniomyces, or the entire "basidium" acts as a "spore", e.g. in some false puffballs (Scleroderma). In the human pathogenic genus Cryptococcus, four nuclei following meiosis remain in the basidium but continually divide mitotically, each nucleus migrating into synchronously forming nonballistic basidiospores that are then pushed upwards by another set forming below them, resulting in four parallel chains of dry "basidiospores".
Other variations occur: some as standard lifecycles (that themselves have variations within variations) within specific orders.
Rusts
Rusts (Pucciniales, previously known as Uredinales) at their greatest complexity produce five different types of spores on two different host plants in two unrelated host families. Such rusts are heteroecious (requiring two hosts) and macrocyclic (producing all five spore types). Wheat stem rust is an example. By convention, the stages and spore states are numbered by Roman numerals. Typically, basidiospores infect host one, also known as the alternate or sexual host, and the mycelium forms pycnidia, which are miniature, flask-shaped, hollow, submicroscopic bodies embedded in the host tissue (such as a leaf). This stage, numbered "0", produces single-celled spores that ooze out in a sweet liquid and that act as nonmotile spermatia, and also protruding receptive hyphae. Insects and probably other vectors such as rain carry the spermatia from spermagonium to spermagonium, cross-inoculating the mating types. Neither thallus is male or female. Once crossed, the dikaryons are established and a second spore stage is formed, numbered "I" and called aecia, which form dikaryotic aeciospores in dry chains in inverted cup-shaped bodies embedded in host tissue. These aeciospores then infect the second host, known as the primary or asexual host (in macrocyclic rusts). On the primary host a repeating spore stage is formed, numbered "II", the urediospores in dry pustules called uredinia. Urediospores are dikaryotic and can infect the same host that produced them. They repeatedly infect this host over the growing season. At the end of the season, a fourth spore type, the teliospore, is formed. It is thicker-walled and serves to overwinter or to survive other harsh conditions. It does not continue the infection process; rather it remains dormant for a period and then germinates to form basidia (stage "IV"), sometimes called a promycelium. In the Pucciniales, the basidia are cylindrical and become 3-septate after meiosis, with each of the 4 cells bearing one basidiospore. The basidiospores disperse and start the infection process on host 1 again. Autoecious rusts complete their life-cycles on one host instead of two, and microcyclic rusts cut out one or more stages.
Smuts
The characteristic part of the life-cycle of smuts is the thick-walled, often darkly pigmented, ornate, teliospore that serves to survive harsh conditions such as overwintering and also serves to help disperse the fungus as dry diaspores. The teliospores are initially dikaryotic but become diploid via karyogamy. Meiosis takes place at the time of germination. A promycelium is formed that consists of a short hypha (equated to a basidium). In some smuts such as Mycosarcoma maydis the nuclei migrate into the promycelium that becomes septate (i.e., divided into cellular compartments separated by cell walls called septa), and haploid yeast-like conidia/basidiospores sometimes called sporidia, bud off laterally from each cell. In various smuts, the yeast phase may proliferate, or they may fuse, or they may infect plant tissue and become hyphal. In other smuts, such as Tilletia caries, the elongated haploid basidiospores form apically, often in compatible pairs that fuse centrally resulting in H-shaped diaspores which are by then dikaryotic. Dikaryotic conidia may then form. Eventually the host is infected by infectious hyphae. Teliospores form in host tissue. Many variations on these general themes occur.
Smuts with both a yeast phase and an infectious hyphal state are examples of dimorphic Basidiomycota. In plant parasitic taxa, the saprotrophic phase is normally the yeast while the infectious stage is hyphal. However, there are examples of animal and human parasites where the species are dimorphic but it is the yeast-like state that is infectious. The genus Filobasidiella forms basidia on hyphae but the main infectious stage is more commonly known by the anamorphic yeast name Cryptococcus, e.g. Cryptococcus neoformans and Cryptococcus gattii.
The dimorphic Basidiomycota with yeast stages and the pleiomorphic rusts are examples of fungi with anamorphs, which are the asexual stages. Some Basidiomycota are only known as anamorphs. Many are called basidiomycetous yeasts, which differentiates them from ascomycetous yeasts in the Ascomycota. Aside from yeast anamorphs and uredinia, aecia, and pycnidia, some Basidiomycota form other distinctive anamorphs as parts of their life cycles. Examples are Collybia tuberosa, with its apple-seed-shaped and -coloured sclerotium; Dendrocollybia racemosa, with its sclerotium and its Tilachlidiopsis racemosa conidia; Armillaria, with their rhizomorphs; Hohenbuehelia, with their nematode-infecting Nematoctonus state; and the coffee leaf parasite Mycena citricolor, with its Decapitatus flavidus propagules called gemmae.
| Biology and health sciences | Fungi | null |
48981 | https://en.wikipedia.org/wiki/Ascomycota | Ascomycota | Ascomycota is a phylum of the kingdom Fungi that, together with the Basidiomycota, forms the subkingdom Dikarya. Its members are commonly known as the sac fungi or ascomycetes. It is the largest phylum of Fungi, with over 64,000 species. The defining feature of this fungal group is the "ascus", a microscopic sexual structure in which nonmotile spores, called ascospores, are formed. However, some species of Ascomycota are asexual and thus do not form asci or ascospores. Familiar examples of sac fungi include morels, truffles, brewers' and bakers' yeast, dead man's fingers, and cup fungi. The fungal symbionts in the majority of lichens (loosely termed "ascolichens") such as Cladonia belong to the Ascomycota.
Ascomycota is a monophyletic group (containing all of the descendants of a common ancestor). Previously placed in the Deuteromycota along with asexual species from other fungal taxa, asexual (or anamorphic) ascomycetes are now identified and classified based on morphological or physiological similarities to ascus-bearing taxa, and by phylogenetic analyses of DNA sequences.
Ascomycetes are of particular use to humans as sources of medicinally important compounds such as antibiotics, as well as for fermenting bread, alcoholic beverages, and cheese. Examples of ascomycetes include Penicillium species on cheeses and those producing antibiotics for treating bacterial infectious diseases.
Many ascomycetes are pathogens, both of animals, including humans, and of plants. Examples of ascomycetes that can cause infections in humans include Candida albicans, Aspergillus niger and several tens of species that cause skin infections. The many plant-pathogenic ascomycetes include apple scab, rice blast, the ergot fungi, black knot, and the powdery mildews. The members of the genus Cordyceps are entomopathogenic fungi, meaning that they parasitise and kill insects. Other entomopathogenic ascomycetes have been used successfully in biological pest control, such as Beauveria.
Several species of ascomycetes are biological model organisms in laboratory research. Most famously, Neurospora crassa, several species of yeasts, and Aspergillus species are used in many genetics and cell biology studies.
Reproduction in ascomycetes
Ascomycetes are 'spore shooters'. They are fungi which produce microscopic spores inside special, elongated cells or sacs, known as 'asci', which give the group its name.
Asexual reproduction is the dominant form of propagation in the Ascomycota, and is responsible for the rapid spread of these fungi into new areas. Asexual reproduction of ascomycetes is very diverse from both structural and functional points of view. The most important and general is production of conidia, but chlamydospores are also frequently produced. Furthermore, Ascomycota also reproduce asexually through budding.
Conidia formation
Asexual reproduction may occur through vegetative reproductive spores, the conidia. The asexual, non-motile haploid spores of a fungus, which are named after the Greek word for dust (conia), are hence also known as conidiospores. The conidiospores commonly contain one nucleus and are products of mitotic cell divisions; they are thus sometimes called mitospores and are genetically identical to the mycelium from which they originate. They are typically formed at the ends of specialized hyphae, the conidiophores. Depending on the species they may be dispersed by wind or water, or by animals. Conidiophores may simply branch off from the mycelia or they may be formed in fruiting bodies.
The hypha that creates the sporing (conidiating) tip can be very similar to the normal hyphal tip, or it can be differentiated. The most common differentiation is the formation of a bottle-shaped cell called a phialide, from which the spores are produced. Not all of these asexual structures are a single hypha. In some groups, the conidiophores (the structures that bear the conidia) are aggregated to form a thick structure.
For example, in the order Moniliales, all are single hyphae, with the exception of aggregations termed coremia or synnemata. These produce structures rather like corn stooks, with many conidia being produced in a mass from the aggregated conidiophores.
The diverse conidia and conidiophores sometimes develop in asexual sporocarps with different characteristics (e.g. acervulus, pycnidium, sporodochium). Some species of ascomycetes form their structures within plant tissue, either as parasites or saprophytes. These fungi have evolved more complex asexual sporing structures, probably influenced by the cultural conditions of plant tissue as a substrate. The sporodochium is a cushion of conidiophores created from a pseudoparenchymatous stroma in plant tissue. The pycnidium is a globose to flask-shaped parenchymatous structure, lined on its inner wall with conidiophores. The acervulus is a flat saucer-shaped bed of conidiophores produced under a plant cuticle, which eventually erupts through the cuticle for dispersal.
Budding
Asexual reproduction in ascomycetes also involves budding, as is readily observed in yeasts. This is termed a "blastic process". It involves the blowing out or blebbing of the hyphal tip wall. The blastic process can involve all wall layers, or a new cell wall can be synthesized and extruded from within the old wall.
The initial events of budding can be seen as the development of a ring of chitin around the point where the bud is about to appear. This reinforces and stabilizes the cell wall. Enzymatic activity and turgor pressure act to weaken and extrude the cell wall, and new cell wall material is incorporated during this phase. Cell contents are forced into the progeny cell, and as the final phase of mitosis ends, a cell plate forms; this is the point from which a new cell wall will grow inwards.
Characteristics of ascomycetes
Ascomycota are morphologically diverse. The group includes organisms from unicellular yeasts to complex cup fungi.
98% of lichens have an ascomycete as their fungal component.
There are approximately 2,000 identified genera and 30,000 species of Ascomycota.
The unifying characteristic among these diverse groups is the presence of a reproductive structure known as the ascus, though in some cases it has a reduced role in the life cycle.
Many ascomycetes are of commercial importance. Some play a beneficial role, such as the yeasts used in baking, brewing, and wine fermentation, plus truffles and morels, which are held as gourmet delicacies.
Many of them cause tree diseases, such as Dutch elm disease and apple blights.
Some of the plant pathogenic ascomycetes are apple scab, rice blast, the ergot fungi, black knot, and the powdery mildews.
The yeasts are used to produce alcoholic beverages and breads. The mold Penicillium is used to produce the antibiotic penicillin.
Almost half of all members of the phylum Ascomycota form associations with algae to form lichens.
Others, such as morels (highly prized edible fungi), form important relationships with plants, thereby providing enhanced water and nutrient uptake and, in some cases, protection from insects.
Most ascomycetes are terrestrial or parasitic. However, some have adapted to marine or freshwater environments. As of 2015, there were 805 marine fungi in the Ascomycota, distributed among 352 genera.
The cell walls of the hyphae are variably composed of chitin and β-glucans, just as in Basidiomycota. However, these fibers are set in a matrix of glycoprotein containing the sugars galactose and mannose.
The mycelium of ascomycetes is usually made up of septate hyphae. However, there is not necessarily any fixed number of nuclei in each of the divisions.
The septal walls have septal pores which provide cytoplasmic continuity throughout the individual hyphae. Under appropriate conditions, nuclei may also migrate between septal compartments through the septal pores.
A unique character of the Ascomycota (but not present in all ascomycetes) is the presence of Woronin bodies on each side of the septa separating the hyphal segments, which control the septal pores. If an adjoining hypha is ruptured, the Woronin bodies block the pores to prevent loss of cytoplasm into the ruptured compartment. The Woronin bodies are spherical, hexagonal, or rectangular membrane-bound structures with a crystalline protein matrix.
Modern classification
There are three subphyla that are described and accepted:
The Pezizomycotina are the largest subphylum and contains all ascomycetes that produce ascocarps (fruiting bodies), except for one genus, Neolecta, in the Taphrinomycotina. It is roughly equivalent to the previous taxon, Euascomycetes. The Pezizomycotina includes most macroscopic "ascos" such as truffles, ergot, ascolichens, cup fungi (discomycetes), pyrenomycetes, lorchels, and caterpillar fungus. It also contains microscopic fungi such as powdery mildews, dermatophytic fungi, and Laboulbeniales.
The Saccharomycotina comprise most of the "true" yeasts, such as baker's yeast and Candida, which are single-celled (unicellular) fungi, which reproduce vegetatively by budding. Most of these species were previously classified in a taxon called Hemiascomycetes.
The Taphrinomycotina include a disparate and basal group within the Ascomycota that was recognized following molecular (DNA) analyses. The taxon was originally named Archiascomycetes (or Archaeascomycetes). It includes hyphal fungi (Neolecta, Taphrina, Archaeorhizomyces), fission yeasts (Schizosaccharomyces), and the mammalian lung parasite Pneumocystis.
Outdated taxon names
Several outdated taxon names—based on morphological features—are still occasionally used for species of the Ascomycota. These include the following sexual (teleomorphic) groups, defined by the structures of their sexual fruiting bodies: the Discomycetes, which included all species forming apothecia; the Pyrenomycetes, which included all sac fungi that formed perithecia or pseudothecia, or any structure resembling these morphological structures; and the Plectomycetes, which included those species that form cleistothecia. Hemiascomycetes included the yeasts and yeast-like fungi that have now been placed into the Saccharomycotina or Taphrinomycotina, while the Euascomycetes included the remaining species of the Ascomycota, which are now in the Pezizomycotina, and the Neolecta, which are in the Taphrinomycotina.
Some ascomycetes do not reproduce sexually or are not known to produce asci and are therefore anamorphic species. Those anamorphs that produce conidia (mitospores) were previously described as mitosporic Ascomycota. Some taxonomists placed this group into a separate artificial phylum, the Deuteromycota (or "Fungi Imperfecti"). Where recent molecular analyses have identified close relationships with ascus-bearing taxa, anamorphic species have been grouped into the Ascomycota, despite the absence of the defining ascus. Sexual and asexual isolates of the same species commonly carry different binomial species names, as, for example, Aspergillus nidulans and Emericella nidulans, for asexual and sexual isolates, respectively, of the same species.
Species of the Deuteromycota were classified as Coelomycetes if they produced their conidia in minute flask- or saucer-shaped conidiomata, known technically as pycnidia and acervuli. The Hyphomycetes were those species where the conidiophores (i.e., the hyphal structures that carry conidia-forming cells at the end) are free or loosely organized. They are mostly isolated but sometimes also appear as bundles of cells aligned in parallel (described as synnematal) or as cushion-shaped masses (described as sporodochial).
Morphology
Most species grow as filamentous, microscopic structures called hyphae or as budding single cells (yeasts). Many interconnected hyphae form a thallus usually referred to as the mycelium, which—when visible to the naked eye (macroscopic)—is commonly called mold. During sexual reproduction, many Ascomycota typically produce large numbers of asci. The ascus is often contained in a multicellular, occasionally readily visible fruiting structure, the ascocarp (also called an ascoma). Ascocarps come in a very large variety of shapes: cup-shaped, club-shaped, potato-like, spongy, seed-like, oozing and pimple-like, coral-like, nit-like, golf-ball-shaped, perforated tennis ball-like, cushion-shaped, plated and feathered in miniature (Laboulbeniales), microscopic classic Greek shield-shaped, stalked or sessile. They can appear solitary or clustered. Their texture can likewise be very variable, including fleshy, like charcoal (carbonaceous), leathery, rubbery, gelatinous, slimy, powdery, or cob-web-like. Ascocarps come in multiple colors such as red, orange, yellow, brown, black, or, more rarely, green or blue. Some ascomyceous fungi, such as Saccharomyces cerevisiae, grow as single-celled yeasts, which—during sexual reproduction—develop into an ascus, and do not form fruiting bodies.
In lichenized species, the thallus of the fungus defines the shape of the symbiotic colony. Some dimorphic species, such as Candida albicans, can switch between growth as single cells and as filamentous, multicellular hyphae. Other species are pleomorphic, exhibiting asexual (anamorphic) as well as sexual (teleomorphic) growth forms.
Except for lichens, the non-reproductive (vegetative) mycelium of most ascomycetes is usually inconspicuous because it is commonly embedded in the substrate, such as soil, or grows on or inside a living host; only the ascoma may be seen when fruiting. Pigmentation, such as melanin in hyphal walls, along with prolific growth on surfaces can result in visible mold colonies; examples include Cladosporium species, which form black spots on bathroom caulking and other moist areas. Many ascomycetes cause food spoilage, and therefore the pellicles or moldy layers that develop on jams, juices, and other foods are usually the mycelia of these species, occasionally Mucoromycotina, and almost never Basidiomycota. Sooty molds that develop on plants, especially in the tropics, are the thalli of many species.
Large masses of yeast cells, asci or ascus-like cells, or conidia can also form macroscopic structures. For example, Pneumocystis species can colonize lung cavities (visible in x-rays), causing a form of pneumonia. Asci of Ascosphaera fill honey bee larvae and pupae, causing mummification with a chalk-like appearance, hence the name "chalkbrood". Yeasts form small colonies in vitro and in vivo, and excessive growth of Candida species in the mouth or vagina causes "thrush", a form of candidiasis.
The cell walls of the ascomycetes almost always contain chitin and β-glucans, and divisions within the hyphae, called "septa", are the internal boundaries of individual cells (or compartments). The cell wall and septa give stability and rigidity to the hyphae and may prevent loss of cytoplasm in case of local damage to cell wall and cell membrane. The septa commonly have a small opening in the center, which functions as a cytoplasmic connection between adjacent cells, also sometimes allowing cell-to-cell movement of nuclei within a hypha. Vegetative hyphae of most ascomycetes contain only one nucleus per cell (uninucleate hyphae), but multinucleate cells—especially in the apical regions of growing hyphae—can also be present.
Metabolism
In common with other fungal phyla, the Ascomycota are heterotrophic organisms that require organic compounds as energy sources. These are obtained by feeding on a variety of organic substrates including dead matter, foodstuffs, or as symbionts in or on other living organisms. To obtain these nutrients from their surroundings, ascomycetous fungi secrete powerful digestive enzymes that break down organic substances into smaller molecules, which are then taken up into the cell. Many species live on dead plant material such as leaves, twigs, or logs. Several species colonize plants, animals, or other fungi as parasites or mutualistic symbionts and derive all their metabolic energy in form of nutrients from the tissues of their hosts.
Owing to their long evolutionary history, the Ascomycota have evolved the capacity to break down almost every organic substance. Unlike most organisms, they are able to use their own enzymes to digest plant biopolymers such as cellulose or lignin. Collagen, an abundant structural protein in animals, and keratin—a protein that forms hair and nails—, can also serve as food sources. Unusual examples include Aureobasidium pullulans, which feeds on wall paint, and the kerosene fungus Amorphotheca resinae, which feeds on aircraft fuel (causing occasional problems for the airline industry), and may sometimes block fuel pipes. Other species can resist high osmotic stress and grow, for example, on salted fish, and a few ascomycetes are aquatic.
The Ascomycota is characterized by a high degree of specialization; for instance, certain species of Laboulbeniales attack only one particular leg of one particular insect species. Many Ascomycota engage in symbiotic relationships such as in lichens, symbiotic associations with green algae or cyanobacteria, in which the fungal symbiont directly obtains products of photosynthesis. In common with many basidiomycetes and Glomeromycota, some ascomycetes form symbioses with plants by colonizing the roots to form mycorrhizal associations. The Ascomycota also includes several carnivorous fungi, which have developed hyphal traps to capture small protists such as amoebae, as well as roundworms (Nematoda), rotifers, tardigrades, and small arthropods such as springtails (Collembola).
Distribution and living environment
The Ascomycota are represented in all land ecosystems worldwide, occurring on all continents including Antarctica. Spores and hyphal fragments are dispersed through the atmosphere and freshwater environments, as well as ocean beaches and tidal zones. The distribution of species is variable; while some are found on all continents, others, such as the white truffle Tuber magnatum, occur only in isolated locations in Italy and Eastern Europe. The distribution of plant-parasitic species is often restricted by host distributions; for example, Cyttaria is only found on Nothofagus (southern beech) in the Southern Hemisphere.
Reproduction
Asexual reproduction
As noted above, asexual reproduction is the dominant form of propagation in the Ascomycota and is responsible for the rapid spread of these fungi into new areas. It occurs through vegetative reproductive spores, the conidia (mitospores), which are genetically identical to the mycelium from which they originate. These are typically formed at the ends of specialized hyphae, the conidiophores, and depending on the species may be dispersed by wind or water, or by animals.
Asexual spores
Different types of asexual spores can be identified by colour, shape, and how they are released as individual spores. Spore types can be used as taxonomic characters in the classification within the Ascomycota. The most frequent types are the single-celled spores, which are designated amerospores. If the spore is divided into two by a cross-wall (septum), it is called a didymospore.
When there are two or more cross-walls, the classification depends on spore shape. If the septa are transverse, like the rungs of a ladder, it is a phragmospore, and if they possess a net-like structure it is a dictyospore. In staurospores, ray-like arms radiate from a central body; in others (helicospores) the entire spore is wound up in a spiral like a spring. Very long, worm-like spores with a length-to-diameter ratio of more than 15:1 are called scolecospores.
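The morphological key above lends itself to a simple decision procedure. The following sketch is purely illustrative: the function and its parameters are hypothetical conveniences for expressing the rules in the text, not an established mycological tool.

```python
# Illustrative sketch: the asexual spore typology above expressed as a
# decision procedure. Names and thresholds follow the text; the function
# and its parameter encoding are hypothetical.

def classify_spore(septa: int, septa_orientation: str = "transverse",
                   shape: str = "straight", length_to_diameter: float = 1.0) -> str:
    """Return the asexual spore type per the morphological rules above."""
    if shape == "stauro":          # ray-like arms radiating from a central body
        return "staurospore"
    if shape == "helico":          # entire spore wound in a spiral like a spring
        return "helicospore"
    if length_to_diameter > 15:    # very long, worm-like spores
        return "scolecospore"
    if septa == 0:
        return "amerospore"        # single-celled
    if septa == 1:
        return "didymospore"       # divided in two by one cross-wall
    # two or more cross-walls: septal arrangement decides
    if septa_orientation == "transverse":
        return "phragmospore"      # septa like the rungs of a ladder
    return "dictyospore"           # net-like (muriform) septation

print(classify_spore(septa=3))                          # phragmospore
print(classify_spore(septa=0, length_to_diameter=20))   # scolecospore
```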
Conidiogenesis and dehiscence
An important characteristic of the anamorphs of the Ascomycota is conidiogenesis, which includes spore formation and dehiscence (separation from the parent structure). Conidiogenesis corresponds to embryology in animals and plants and can be divided into two fundamental forms of development: blastic conidiogenesis, where the spore is already evident before it separates from the conidiogenic hypha, and thallic conidiogenesis, during which a cross-wall forms and the newly created cell develops into a spore. The spores may or may not be generated in a large-scale specialized structure that helps to spread them.
These two basic types can be further classified as follows:
blastic-acropetal (repeated budding at the tip of the conidiogenic hypha, so that a chain of spores is formed with the youngest spores at the tip),
blastic-synchronous (simultaneous spore formation from a central cell, sometimes with secondary acropetal chains forming from the initial spores),
blastic-sympodial (repeated sideways spore formation from behind the leading spore, so that the oldest spore is at the main tip),
blastic-annellidic (each spore separates and leaves a ring-shaped scar inside the scar left by the previous spore),
blastic-phialidic (the spores arise and are ejected from the open ends of special conidiogenic cells called phialides, which remain constant in length),
basauxic (where a chain of conidia, in successively younger stages of development, is emitted from the mother cell),
blastic-retrogressive (spores separate by formation of crosswalls near the tip of the conidiogenic hypha, which thus becomes progressively shorter),
thallic-arthric (double cell walls split the conidiogenic hypha into cells that develop into short, cylindrical spores called arthroconidia; sometimes every second cell dies off, leaving the arthroconidia free),
thallic-solitary (a large bulging cell separates from the conidiogenic hypha, forms internal walls, and develops to a phragmospore).
Sometimes the conidia are produced in structures visible to the naked eye, which help to distribute the spores. These structures are called "conidiomata" (singular: conidioma), and may take the form of pycnidia (which are flask-shaped and arise in the fungal tissue) or acervuli (which are cushion-shaped and arise in host tissue).
Dehiscence happens in two ways. In schizolytic dehiscence, a double-dividing wall with a central lamella (layer) forms between the cells; the central layer then breaks down thereby releasing the spores. In rhexolytic dehiscence, the cell wall that joins the spores on the outside degenerates and releases the conidia.
Heterokaryosis and parasexuality
Several Ascomycota species are not known to have a sexual cycle. Such asexual species may be able to undergo genetic recombination between individuals by processes involving heterokaryosis and parasexual events.
Parasexuality refers to the process of heterokaryosis, caused by merging of two hyphae belonging to different individuals, by a process called anastomosis, followed by a series of events resulting in genetically different cell nuclei in the mycelium.
The merging of nuclei is not followed by meiotic events, such as gamete formation, and results in an increased number of chromosomes per nucleus. Mitotic crossover may enable recombination, i.e., an exchange of genetic material between homologous chromosomes. The chromosome number may then be restored to its haploid state by nuclear division, with each daughter nucleus being genetically different from the original parent nuclei. Alternatively, nuclei may lose some chromosomes, resulting in aneuploid cells. Candida albicans (class Saccharomycetes) is an example of a fungus that has a parasexual cycle (see Candida albicans and Parasexual cycle).
Sexual reproduction
Sexual reproduction in the Ascomycota leads to the formation of the ascus, the structure that defines this fungal group and distinguishes it from other fungal phyla. The ascus is a tube-shaped vessel, a meiosporangium, which contains the sexual spores produced by meiosis and which are called ascospores.
Apart from a few exceptions, such as Candida albicans, most ascomycetes are haploid, i.e., they contain one set of chromosomes per nucleus. During sexual reproduction there is a diploid phase, which commonly is very short, and meiosis restores the haploid state. The sexual cycle of one well-studied representative species of Ascomycota is described in greater detail in Neurospora crassa. Also, the adaptive basis for the maintenance of sexual reproduction in the Ascomycota fungi was reviewed by Wallen and Perlin. They concluded that the most plausible reason for the maintenance of this capability is the benefit of repairing DNA damage by using recombination that occurs during meiosis. DNA damage can be caused by a variety of stresses such as nutrient limitation.
Formation of sexual spores
The sexual part of the life cycle commences when two hyphal structures mate. In the case of homothallic species, mating is enabled between hyphae of the same fungal clone, whereas in heterothallic species, the two hyphae must originate from fungal clones that differ genetically, i.e., those that are of a different mating type. Mating types are typical of the fungi and correspond roughly to the sexes in plants and animals; however, one species may have more than two mating types, resulting in sometimes complex vegetative incompatibility systems. The adaptive function of mating type is discussed in Neurospora crassa.
Gametangia are sexual structures formed from hyphae, and are the generative cells. A very fine hypha, called a trichogyne, emerges from one gametangium, the ascogonium, and merges with a gametangium (the antheridium) of the other fungal isolate. The nuclei in the antheridium then migrate into the ascogonium, and plasmogamy, the mixing of the cytoplasm, occurs. Unlike in animals and plants, plasmogamy is not immediately followed by the merging of the nuclei (called karyogamy). Instead, the nuclei from the two hyphae form pairs, initiating the dikaryophase of the sexual cycle, during which time the pairs of nuclei synchronously divide. Fusion of the paired nuclei leads to mixing of the genetic material and recombination, and is followed by meiosis. A similar sexual cycle is present in the red algae (Rhodophyta). A discarded hypothesis held that a second karyogamy event occurred in the ascogonium prior to ascogeny, resulting in a tetraploid nucleus which divided into four diploid nuclei by meiosis and then into eight haploid nuclei by a supposed process called brachymeiosis, but this hypothesis was disproven in the 1950s.
From the fertilized ascogonium, dinucleate hyphae emerge in which each cell contains two nuclei. These hyphae are called ascogenous or fertile hyphae. They are supported by the vegetative mycelium, which contains uninucleate (or mononucleate) hyphae that are sterile. The mycelium containing both sterile and fertile hyphae may grow into a fruiting body, the ascocarp, which may contain millions of fertile hyphae.
An ascocarp is the fruiting body of the sexual phase in Ascomycota. There are five morphologically different types of ascocarp, namely:
Naked asci: these occur in simple ascomycetes; asci are produced on the organism's surface.
Perithecia: Asci are in flask-shaped ascoma (perithecium) with a pore (ostiole) at the top.
Cleistothecia: The ascocarp (a cleistothecium) is spherical and closed.
Apothecia: The asci are in a bowl shaped ascoma (apothecium). These are sometimes called the "cup fungi".
Pseudothecia: Asci with two layers, produced in pseudothecia that look like perithecia. The ascospores are arranged irregularly.
The sexual structures are formed in the fruiting layer of the ascocarp, the hymenium. At one end of ascogenous hyphae, characteristic U-shaped hooks develop, which curve back opposite to the growth direction of the hyphae. The two nuclei contained in the apical part of each hypha divide in such a way that the threads of their mitotic spindles run parallel, creating two pairs of genetically different nuclei. One daughter nucleus migrates close to the hook, while the other daughter nucleus locates to the basal part of the hypha. The formation of two parallel cross-walls then divides the hypha into three sections: one at the hook with one nucleus, one at the base of the original hypha that contains one nucleus, and one that separates the U-shaped part, which contains the other two nuclei.
Fusion of the nuclei (karyogamy) takes place in the U-shaped cells in the hymenium and results in the formation of a diploid zygote. The zygote grows into the ascus, an elongated tube-shaped or cylinder-shaped capsule. Meiosis then gives rise to four haploid nuclei, usually followed by a further mitotic division that results in eight nuclei in each ascus. The nuclei, along with some cytoplasm, become enclosed within membranes and a cell wall to give rise to ascospores that are aligned inside the ascus like peas in a pod.
Upon opening of the ascus, ascospores may be dispersed by the wind, while in some cases the spores are forcibly ejected from the ascus; certain species have evolved spore cannons that can eject ascospores up to 30 cm away. When the spores reach a suitable substrate, they germinate and form new hyphae, restarting the fungal life cycle.
The form of the ascus is important for classification and is divided into four basic types: unitunicate-operculate, unitunicate-inoperculate, bitunicate, or prototunicate. See the article on asci for further details.
Ecology
The Ascomycota fulfil a central role in most land-based ecosystems. They are important decomposers, breaking down organic materials, such as dead leaves and animals, and helping the detritivores (animals that feed on decomposing material) to obtain their nutrients. Ascomycetes, along with other fungi, can break down large molecules such as cellulose or lignin, and thus have important roles in nutrient cycling such as the carbon cycle.
The fruiting bodies of the Ascomycota provide food for many animals ranging from insects and slugs and snails (Gastropoda) to rodents and larger mammals such as deer and wild boars.
Many ascomycetes also form symbiotic relationships with other organisms, including plants and animals.
Lichens
Probably since early in their evolutionary history, the Ascomycota have formed symbiotic associations with green algae (Chlorophyta), and other types of algae and cyanobacteria. These mutualistic associations are commonly known as lichens, and can grow and persist in terrestrial regions of the earth that are inhospitable to other organisms and characterized by extremes in temperature and humidity, including the Arctic, the Antarctic, deserts, and mountaintops. While the photoautotrophic algal partner generates metabolic energy through photosynthesis, the fungus offers a stable, supportive matrix and protects cells from radiation and dehydration. Around 42% of the Ascomycota (about 18,000 species) form lichens, and almost all the fungal partners of lichens belong to the Ascomycota.
Mycorrhizal fungi and endophytes
Members of the Ascomycota form two important types of relationship with plants: as mycorrhizal fungi and as endophytes. Mycorrhiza are symbiotic associations of fungi with the root systems of the plants, which can be of vital importance for growth and persistence for the plant. The fine mycelial network of the fungus enables the increased uptake of mineral salts that occur at low levels in the soil. In return, the plant provides the fungus with metabolic energy in the form of photosynthetic products.
Endophytic fungi live inside plants; those that form mutualistic or commensal associations do not damage their hosts. The exact nature of the relationship between endophytic fungus and host depends on the species involved, and in some cases fungal colonization of plants can bestow a higher resistance against insects, roundworms (nematodes), and bacteria; in the case of grass endophytes, the fungal symbiont produces poisonous alkaloids, which can affect the health of plant-eating (herbivorous) mammals and deter or kill insect herbivores.
Symbiotic relationships with animals
Several ascomycetes of the genus Xylaria colonize the nests of leafcutter ants and other fungus-growing ants of the tribe Attini, and the fungal gardens of termites (Isoptera). Since they do not generate fruiting bodies until the insects have left the nests, it is suspected that they may be cultivated by the insects, as has been confirmed for several Basidiomycota species.
Bark beetles (family Scolytidae) are important symbiotic partners of ascomycetes. The female beetles transport fungal spores to new hosts in characteristic pockets in their integument, the mycetangia. The beetles tunnel into the wood, excavating large chambers in which they lay their eggs. Spores released from the mycetangia germinate into hyphae, which can break down the wood. The beetle larvae then feed on the fungal mycelium and, on reaching maturity, carry new spores with them to renew the cycle of infection. A well-known example of this is Dutch elm disease, caused by Ophiostoma ulmi, which is carried by the European elm bark beetle, Scolytus multistriatus.
Plant disease interactions
One of their most harmful roles is as the agent of many plant diseases. For instance:
Dutch elm disease, caused by the closely related species Ophiostoma ulmi and Ophiostoma novo-ulmi, has led to the death of many elms in Europe and North America.
The originally Asian Cryphonectria parasitica attacks sweet chestnuts (Castanea sativa) and has virtually eliminated the once-widespread American chestnut (Castanea dentata).
A disease of maize (Zea mays), which is especially prevalent in North America, is brought about by Cochliobolus heterostrophus.
Taphrina deformans causes leaf curl of peach.
Uncinula necator is responsible for the disease powdery mildew, which attacks grapevines.
Species of Monilinia cause brown rot of stone fruit such as peaches (Prunus persica) and sour cherries (Prunus cerasus).
Members of the Ascomycota such as Stachybotrys chartarum are responsible for fading of woolen textiles, which is a common problem especially in the tropics.
Blue-green, red and brown molds attack and spoil foodstuffs – for instance Penicillium italicum rots oranges.
Cereals infected with Fusarium graminearum, the cause of Fusarium ear blight, contain mycotoxins such as deoxynivalenol (DON), which causes skin and mucous-membrane lesions when eaten by pigs.
Human disease interactions
Aspergillus fumigatus is the most common cause of fungal infection in the lungs of immunocompromised patients, often resulting in death. It is also the most frequent cause of allergic bronchopulmonary aspergillosis, which often occurs in patients with cystic fibrosis or asthma.
Candida albicans, a yeast that attacks the mucous membranes, can cause an infection of the mouth or vagina called thrush or candidiasis, and is also blamed for "yeast allergies".
Fungi like Epidermophyton cause skin infections but are not very dangerous for people with healthy immune systems. However, if the immune system is damaged they can be life-threatening; for instance, Pneumocystis jirovecii is responsible for severe lung infections that occur in AIDS patients.
Ergot (Claviceps purpurea) is a direct menace to humans when it attacks wheat or rye and produces highly poisonous alkaloids, causing ergotism if consumed. Symptoms include hallucinations, stomach cramps, and a burning sensation in the limbs ("Saint Anthony's Fire").
Aspergillus flavus, which grows on peanuts and other hosts, generates aflatoxin, which damages the liver and is highly carcinogenic.
Histoplasma capsulatum causes histoplasmosis, which affects immunocompromised patients.
Blastomyces dermatitidis is the causal agent of blastomycosis, an invasive and often serious fungal infection found occasionally in humans and other animals in regions where the fungus is endemic.
Paracoccidioides brasiliensis and Paracoccidioides lutzii are the causal agents of paracoccidioidomycosis.
Coccidioides immitis and Coccidioides posadasii are the causative agents of coccidioidomycosis (valley fever).
Talaromyces marneffei, formerly called Penicillium marneffei, causes talaromycosis.
Beneficial effects for humans
On the other hand, ascus fungi have brought some significant benefits to humanity.
The most famous case may be that of the mold Penicillium chrysogenum (formerly Penicillium notatum), which, probably to attack competing bacteria, produces an antibiotic that, under the name of penicillin, triggered a revolution in the treatment of bacterial infectious diseases in the 20th century.
The medical importance of Tolypocladium niveum as an immunosuppressant can hardly be exaggerated. It excretes ciclosporin, which, as well as being given during organ transplantation to prevent rejection, is also prescribed for autoimmune diseases such as multiple sclerosis, although there is some doubt over the long-term side effects of the treatment.
Some ascomycete fungi can be easily altered through genetic engineering procedures. They can then produce useful proteins such as insulin, human growth hormone, or tPA (tissue plasminogen activator), which is employed to dissolve blood clots.
Several species are common model organisms in biology, including Saccharomyces cerevisiae, Schizosaccharomyces pombe, and Neurospora crassa. The genomes of some ascomycete fungi have been fully sequenced.
Baker's yeast (Saccharomyces cerevisiae) is used to make bread, beer, and wine, during which process sugars such as glucose or sucrose are fermented to ethanol and carbon dioxide. Bakers use the yeast for carbon dioxide production, causing the bread to rise, with the ethanol boiling off during baking. Most vintners use it for ethanol production, releasing carbon dioxide into the atmosphere during fermentation. Brewers and traditional producers of sparkling wine use both, with a primary fermentation for the alcohol and a secondary one to produce the carbon dioxide bubbles that give sparkling wine its texture and beer its desirable foam.
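The underlying chemistry is the same in all three trades: fermentation converts one glucose molecule into two molecules each of ethanol and carbon dioxide (C6H12O6 -> 2 C2H5OH + 2 CO2). As a rough mass balance, assuming complete fermentation of pure glucose and standard molar masses, the sketch below is illustrative arithmetic only, not a brewing model:

```python
# Back-of-the-envelope check of the fermentation stoichiometry described
# above: C6H12O6 -> 2 C2H5OH + 2 CO2. Molar masses are standard values.
# Note the mass balance closes: 2*46.07 + 2*44.01 = 180.16 g/mol.

M_GLUCOSE = 180.16   # g/mol
M_ETHANOL = 46.07    # g/mol
M_CO2     = 44.01    # g/mol

def ferment(glucose_g: float) -> tuple[float, float]:
    """Grams of ethanol and CO2 from complete fermentation of glucose."""
    mol = glucose_g / M_GLUCOSE
    return 2 * mol * M_ETHANOL, 2 * mol * M_CO2

ethanol_g, co2_g = ferment(100.0)
print(f"{ethanol_g:.1f} g ethanol, {co2_g:.1f} g CO2")  # ~51.1 g, ~48.9 g
```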
Enzymes of Penicillium camemberti play a role in the manufacture of the cheeses Camembert and Brie, while those of Penicillium roqueforti do the same for Gorgonzola, Roquefort and Stilton.
In Asia, Aspergillus oryzae is added to a pulp of soaked soya beans to make soy sauce and is used to break down starch in rice and other grains into simple sugars for fermentation into East Asian alcoholic beverages such as huangjiu and sake.
Finally, some members of the Ascomycota are choice edibles; morels (Morchella spp.), truffles (Tuber spp.), and lobster mushroom (Hypomyces lactifluorum) are some of the most sought-after fungal delicacies.
Cordyceps militaris is used as a dietary supplement and in traditional medicine; claimed benefits include support of the immune system, reduced inflammation, antioxidant effects, enhanced metabolic health, improved athletic performance, and better respiratory health. It contains bioactive compounds such as cordycepin, cordycepic acid, adenosine, polysaccharides, beta-glucans, and ergosterol.
| Biology and health sciences | Fungi | null |
49020 | https://en.wikipedia.org/wiki/Road%20transport | Road transport | Road transport or road transportation is a type of transport using roads. Transport on roads can be roughly grouped into the transportation of goods and transportation of people. In many countries licensing requirements and safety regulations ensure a separation of the two industries. Movement along roads may be by bike, automobile, bus, truck, or by animals such as horses or oxen. Standard networks of roads were adopted by the Romans, Persians, Aztecs, and other early empires, and may be regarded as a feature of empires. Cargo may be transported by trucking companies, while passengers may be transported via mass transit. Commonly defined features of modern roads include defined lanes and signage. Various classes of road exist, from two-lane local roads with at-grade intersections to controlled-access highways with all cross traffic grade-separated.
The nature of road transportation of goods depends on, apart from the degree of development of the local infrastructure, the distance the goods are transported by road, the weight and volume of an individual shipment, and the type of goods transported. For short distances and light, small shipments, a van or pickup truck may be used. For large shipments, even if less than a full truckload, a truck is more appropriate (see Trucking and haulage below). In some countries cargo is transported by road in horse-drawn carriages, donkey carts, or other non-motorized modes. Delivery services are sometimes considered a separate category from cargo transport. In many places, fast food is transported on roads by various types of vehicles. For inner-city delivery of small packages and documents, bike couriers are quite common.
People are transported on roads. Special modes of individual transport by road such as cycle rickshaws may also be locally available. There are also specialist modes of road transport for particular situations, such as ambulances.
History
Early roads
The first methods of road transport were horses, oxen, or even humans carrying goods over dirt tracks that often followed game trails. The Persians later built a network of Royal Roads across their empire.
With the advent of the Roman Empire, there was a need for armies to be able to travel quickly from one region to another, and the roads that existed were often muddy, which greatly delayed the movement of large masses of troops. To resolve this issue, the Romans built solid and lasting roads. The Roman roads used deep roadbeds of crushed stone as an underlying layer to ensure that they kept dry, as the water would flow out from the crushed stone, instead of becoming mud in clay soils. The Islamic Caliphate later built tar-paved roads in Baghdad.
New road networks
As states developed and became richer, especially with the Renaissance, new roads and bridges began to be built, often based on Roman designs. Although there were attempts to rediscover Roman methods, there was little useful innovation in road building before the 18th century.
Starting in the early 18th century, the British Parliament began to pass a series of acts that gave local justices powers to erect toll-gates on the roads, in exchange for professional upkeep. The toll-gate erected at Wade's Mill became the first effective toll-gate in England. The first scheme that had trustees who were not justices was established through a turnpike act in 1707, for a section of the London-Chester road between Fornhill and Stony Stratford. The basic principle was that the trustees would manage resources from the several parishes through which the highway passed, augment this with tolls from users from outside the parishes, and apply the whole to the maintenance of the main highway. This became the pattern for the turnpiking of a growing number of highways, sought by those who wished to improve the flow of commerce through their part of a county.
In 18th century West Africa, road transport throughout the Ashanti Empire was maintained via a network of well-kept roads that connected the Ashanti capital with territories within its jurisdiction and influence. After significant road construction undertaken by the kingdom of Dahomey, toll roads were established with the function of collecting yearly taxes based on the goods carried by the people of Dahomey and their occupation. The Royal Road was built in the late 18th century by King Kpengla which stretched from Abomey through Cana up to Ouidah.
The quality of early turnpike roads was varied. Although turnpiking did result in some improvement to each highway, the technologies used to deal with geological features, drainage, and the effects of weather were all in their infancy. Road construction improved slowly, initially through the efforts of individual surveyors such as John Metcalf in Yorkshire in the 1760s. British turnpike builders began to realize the importance of selecting clean stones for surfacing while excluding vegetable material and clay, resulting in more durable roads.
Industrial civil engineering
By the late 18th and early 19th centuries, new methods of highway construction had been pioneered by the work of three British engineers, John Metcalf, Thomas Telford and John Loudon McAdam, and by the French road engineer Pierre-Marie-Jérôme Trésaguet.
The first professional road builder to emerge during the Industrial Revolution was John Metcalf, who constructed about 180 miles (290 km) of turnpike road, mainly in the north of England, from 1765. He believed a good road should have good foundations, be well drained, and have a smooth convex surface to allow rainwater to drain quickly into ditches at the side. He understood the importance of good drainage, knowing it was rain that caused most problems on the roads.
Pierre-Marie-Jérôme Trésaguet established the first scientific approach to road building in France at the same time. He wrote a memorandum on his method in 1775, which became general practice in France. It involved a layer of large rocks, covered by a layer of smaller gravel. The lower layer improved on Roman practice in that it was based on the understanding that the purpose of this layer (the sub-base or base course) is to transfer the weight of the road and its traffic to the ground, while protecting the ground from deformation by spreading the weight evenly. Therefore, the sub-base did not have to be a self-supporting structure. The upper running surface provided a smooth surface for vehicles while protecting the large stones of the sub-base.
The surveyor and engineer Thomas Telford also made substantial advances in the engineering of new roads and the construction of bridges. His method of road building involved the digging of a large trench in which a foundation of heavy rock was set. He also designed his roads so that they sloped downwards from the centre, allowing drainage to take place, a major improvement on the work of Trésaguet. The surface of his roads consisted of broken stone. He also improved on methods for the building of roads by improving the selection of stone based on thickness, taking into account traffic, alignment and slopes. During his later years, Telford was responsible for rebuilding sections of the London to Holyhead road, a task completed by his assistant of ten years, John MacNeill.
It was another Scottish engineer, John Loudon McAdam, who designed the first modern roads. He developed an inexpensive paving material of soil and stone aggregate (known as macadam). His road building method was simpler than Telford's, yet more effective at protecting roadways: he discovered that massive foundations of rock upon rock were unnecessary, and asserted that native soil alone would support the road and traffic upon it, as long as it was covered by a road crust that would protect the soil underneath from water and wear.
Also unlike Telford and other road builders, McAdam laid his roads as level as possible. His road required only a rise of three inches from the edges to the center. Cambering and elevation of the road above the water table enabled rainwater to run off into ditches on either side. Size of stones was central to McAdam's road-building theory. The lower road thickness was restricted to stones no larger than 75 mm (3 in). The upper layer of stones was limited to 20 mm (0.79 in) in size, and stones were checked by supervisors who carried scales. A workman could check the stone size himself by seeing if the stone would fit into his mouth. The importance of the 20 mm stone size was that the stones needed to be much smaller than the 100 mm width of the iron carriage tyres that traveled on the road. Macadam roads were being built widely in the United States and Australia in the 1820s and in Europe in the 1830s and 1840s.
20th century
Macadam roads were adequate for use by horses and carriages or coaches, but they were very dusty and subject to erosion with heavy rain. The Good Roads Movement occurred in the United States between the late 1870s and the 1920s, as advocates for improved roads, led by bicyclists, turned local agitation into a national political movement.
Outside cities, roads were dirt or gravel: mud in the winter and dust in the summer. Early organizers cited Europe, where road construction and maintenance were supported by national and local governments. In its early years, the main goal of the movement was education for road building in rural areas between cities, to help rural populations gain the social and economic benefits enjoyed by cities, where citizens benefited from railroads, trolleys, and paved streets. Even more than traditional vehicles, the newly invented bicycles could benefit from good country roads. Macadam roads, however, did not hold up to higher-speed motor vehicle use. Methods to stabilise macadam roads with tar date back to at least 1834, when John Henry Cassell, operating from Cassell's Patent Lava Stone Works in Millwall, patented "Pitch Macadam".
This method involved spreading tar on the subgrade, placing a typical macadam layer, and finally sealing the macadam with a mixture of tar and sand. Tar-grouted macadam was in use well before 1900 and involved scarifying the surface of an existing macadam pavement, spreading tar, and re-compacting. Although the use of tar in road construction was known in the 19th century, it was little used and was not introduced on a large scale until the motorcar arrived on the scene in the early 20th century.
Modern tarmac was patented by British civil engineer Edgar Purnell Hooley, who noticed that spilled tar on the roadway kept the dust down and created a smooth surface. His 1901 patent for tarmac involved mechanically mixing tar and aggregate prior to lay-down, and then compacting the mixture with a steamroller. The tar was modified by adding small amounts of Portland cement, resin, and pitch.
The first version of modern controlled-access highways evolved during the first half of the 20th century. The Long Island Motor Parkway on Long Island, New York, opened in 1908 as a private venture, was the world's first limited-access roadway. It included many modern features, including banked turns, guard rails and reinforced concrete tarmac. Traffic could turn left between the parkway and connectors, crossing oncoming traffic, so it was not a controlled-access highway (or "freeway" as later defined by the federal government's Manual on Uniform Traffic Control Devices).
Modern controlled-access highways originated in the early 1920s in response to the rapidly increasing use of the automobile, the demand for faster movement between cities and as a consequence of improvements in paving processes, techniques and materials. These original high-speed roads were referred to as "dual highways" and have been modernized and are still in use today.
Italy was the first country in the world to build controlled-access highways reserved for fast traffic and for motor vehicles only. The Autostrada dei Laghi ("Lakes Motorway"), the first built in the world, connecting Milan to Lake Como and Lake Maggiore, and now parts of the A8 and A9 motorways, was devised by Piero Puricelli and was inaugurated in 1924. This motorway, called autostrada, contained only one lane in each direction and no interchanges. The Bronx River Parkway was the first road in North America to utilize a median strip to separate the opposing lanes, to be constructed through a park and where intersecting streets crossed over bridges. The Southern State Parkway opened in 1927, while the Long Island Motor Parkway was closed in 1937 and replaced by the Northern State Parkway (opened 1931) and the contiguous Grand Central Parkway (opened 1936). In Germany, construction of the Bonn-Cologne Autobahn began in 1929 and was opened in 1932 by Konrad Adenauer, then the mayor of Cologne.
In Canada, the first precursor with semi-controlled access was The Middle Road between Hamilton and Toronto, which featured a median divider between opposing traffic flow, as well as the nation's first cloverleaf interchange. This highway developed into the Queen Elizabeth Way, which featured a cloverleaf and trumpet interchange when it opened in 1937 and until the Second World War boasted the longest illuminated stretch of roadway built. A decade later, the first section of Highway 401 was opened, based on earlier designs. It has since become North America's busiest highway.
The word freeway was first used in February 1930 by Edward M. Bassett. Bassett argued that roads should be classified into three basic types: highways, parkways, and freeways. In Bassett's zoning and property-law-based system, abutting property owners have the rights of light, air, and access to highways, but not to parkways and freeways; the latter two are distinguished in that the purpose of a parkway is recreation, while the purpose of a freeway is movement. Thus, as originally conceived, a freeway is a strip of public land devoted to movement to which abutting property owners do not have rights of light, air, or access.
Trucking and haulage
Trucking companies (in American English terminology) or haulage companies / hauliers (in British English) accept cargo for road transport. Truck drivers operate either independently – working directly for the client – or through freight carriers or shipping agents. Some big companies (e.g. grocery store chains) operate their own internal trucking operations. The market size for general freight trucking was nearly $125 billion in 2010.
In the U.S., many truckers own their truck (rig) and are known as owner-operators. Some road transportation is done on regular routes or for only one consignee per run (full truckload), while other runs transport goods from many different loading stations/shippers to various consignees (less-than-truckload). On some long runs, only the cargo for the outbound leg of the route is known when the cargo is loaded; truckers may then have to wait at the destination for a backhaul.
A bill of lading issued by the shipper provides the basic document for road freight. On cross-border transportation the trucker will present the cargo and documentation provided by the shipper to customs for inspection (for EC see also Schengen Agreement). This also applies to shipments that are transported out of a free port.
Hours of service
To avoid accidents caused by fatigue, truckers must adhere to strict rules for drive time and required rest periods. In the United States and Canada these regulations are known as hours of service, and in the European Union as drivers' working hours. One such regulation is the Hours of Work and Rest Periods (Road Transport) Convention, 1979. Tachographs or electronic on-board recorders record the times the vehicle is in motion and stopped. Some companies use two drivers per truck to ensure uninterrupted transportation, with one driver resting or sleeping in a bunk in the back of the cab while the other drives.
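As a rough illustration of how such rules can be checked mechanically, the sketch below encodes an 11-hour driving limit and a 10-hour minimum rest, figures that mirror the U.S. hours-of-service rules for property-carrying drivers. The data structure and function are hypothetical simplifications; real regulations add duty windows, weekly caps, and break requirements.

```python
# Minimal, illustrative drive-time compliance check. The limits below
# mirror U.S. hours-of-service rules for property-carrying drivers, but
# this is a simplified sketch, not the actual regulation.

from dataclasses import dataclass

@dataclass
class Shift:
    driving_hours: float        # hours spent driving in this shift
    rest_before_hours: float    # consecutive off-duty hours before the shift

def is_compliant(shift: Shift, max_drive: float = 11.0,
                 min_rest: float = 10.0) -> bool:
    return (shift.driving_hours <= max_drive
            and shift.rest_before_hours >= min_rest)

print(is_compliant(Shift(driving_hours=10.5, rest_before_hours=10)))  # True
print(is_compliant(Shift(driving_hours=12.0, rest_before_hours=10)))  # False
```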
Licenses
Truck drivers often need special licenses to drive, known in the U.S. as a commercial driver's license. In the U.K. a large goods vehicle licence is required. For transport of hazardous materials (see dangerous goods) truckers need a licence, which usually requires them to pass an exam (e.g. in the EU). They have to make sure they affix proper labels for the respective hazard(s) to their vehicle. Liquid goods are transported by road in tank trucks (in American English) or tanker lorries (in British English) (also road-tankers) or special tank containers for intermodal transport. For transportation of live animals special requirements have to be met in many countries to prevent cruelty to animals (see animal rights). For fresh and frozen goods refrigerator trucks or reefers are used.
Weights
Some loads are weighed at the point of origin, and the driver is responsible for ensuring weights conform to maximum allowed standards. This may involve using on-board weight gauges (load pressure gauges), knowing the empty weight of the transport vehicle and the weight of the load, or using a commercial weight scale. En route, weigh stations check that gross vehicle weights, including individual axle weights, do not exceed the maximum weight for that particular jurisdiction. This varies by country and by states within a country, and may include federal standards. The United States uses FMCSA federal standards that include bridge-law formulas. For roads not on the national road system, many states apply their own road and bridge standards. Enforcement scales may include portable scales, scale houses with low-speed scales, or weigh-in-motion (WIM) scales.
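The bridge-law formulas mentioned above refer to the U.S. Federal Bridge Formula ("Bridge Formula B"): W = 500(LN/(N-1) + 12N + 36), where W is the allowable gross weight in pounds on a group of axles, L the distance in feet between the outer axles of the group, and N the number of axles in the group. A minimal sketch follows, with the statutory axle-group caps and rounding conventions simplified:

```python
# Sketch of the U.S. federal bridge formula (Bridge Formula B):
#   W = 500 * (L*N/(N-1) + 12*N + 36)
# Statutory caps (20,000 lb single axle, 34,000 lb tandem) are omitted;
# only the 80,000 lb gross cap and 500 lb rounding are applied here.

def bridge_formula_limit(l_feet: float, n_axles: int) -> int:
    """Allowable weight (lb) on an axle group of n_axles spanning l_feet."""
    if n_axles < 2:
        raise ValueError("the formula applies to groups of two or more axles")
    w = 500 * (l_feet * n_axles / (n_axles - 1) + 12 * n_axles + 36)
    return min(round(w / 500) * 500, 80_000)

# Typical 5-axle tractor-semitrailer with 51 ft between outer axles:
print(bridge_formula_limit(51, 5))  # 80000
```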
The European Union uses the International Recommendation, OIML R 134-2 (2009). The process may involve a scale house and low-speed scales or higher-speed WIM road or bridge scales with the goal of public safety, as well as road and bridge safety, according to the Bridges Act.
Modern roads
Today, roadways are primarily asphalt or concrete. Both are based on McAdam's concept of stone aggregate in a binder, asphalt cement or Portland cement respectively. Asphalt is known as a flexible pavement, one which slowly "flows" under the pounding of traffic. Concrete is a rigid pavement, which can take heavier loads but is more expensive and requires a more carefully prepared subbase. So, generally, major roads are concrete and local roads are asphalt. Concrete roads are often covered with a thin layer of asphalt to create a wearing surface.
Modern pavements are designed for heavier vehicle loads and faster speeds, requiring thicker slabs and deeper subbase. Subbase is the layer or successive layers of stone, gravel and sand supporting the pavement. It is needed to spread out the slab load bearing on the underlying soil and to conduct away any water getting under the slabs. Water will undermine a pavement over time, so much of pavement and pavement joint design are meant to minimize the amount of water getting and staying under the slabs.
Shoulders are also an integral part of highway design. They are multipurpose: they can provide a margin of side clearance, a refuge for incapacitated vehicles, an emergency lane, and parking space. They also serve a design purpose, preventing water from percolating into the soil near the main pavement's edge. Shoulder pavement is designed to a lower standard than the pavement in the traveled way and will not hold up as well to traffic, so driving on the shoulder is generally prohibited.
Pavement technology is still evolving, albeit in not easily noticed increments. For instance, chemical additives in the pavement mix make the pavement more weather resistant, grooving and other surface treatments improve resistance to skidding and hydroplaning, and joint seals which were once tar are now made of low maintenance neoprene.
Traffic control
Nearly all roadways are built with devices meant to control traffic. Most notable to the motorist are those meant to communicate directly with the driver. Broadly, these fall into three categories: signs, signals, and pavement markings. They help the driver navigate; they assign the right-of-way at intersections; they indicate laws such as speed limits and parking regulations; they advise of potential hazards; they indicate passing and no-passing zones; and they otherwise deliver information to keep traffic orderly and safe.
Two hundred years ago these devices were signs, nearly all informal. In the late 19th century signals began to appear in the biggest cities at a few highly congested intersections. They were manually operated, and consisted of semaphores, flags or paddles, or in some cases colored electric lights, all modeled on railroad signals. In the 20th century signals were automated, at first with electromechanical devices and later with computers. Signals can be quite sophisticated: with vehicle sensors embedded in the pavement, the signal can control and choreograph the turning movements of heavy traffic in the most complex of intersections. In the 1920s traffic engineers learned how to coordinate signals along a thoroughfare to increase its speeds and volumes. In the 1980s, with computers, similar coordination of whole networks became possible.
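The coordination of signals along a thoroughfare mentioned above amounts to simple travel-time arithmetic: each signal's green phase is offset from the previous one by the time traffic needs to cover the intervening distance, producing a "green wave". The sketch below is illustrative only; the spacings and progression speed are made-up example values.

```python
# Illustrative "green wave" arithmetic for coordinating traffic signals
# along a corridor: offset each signal's green phase by the travel time
# from the previous signal at the chosen progression speed.

def green_wave_offsets(spacings_m: list[float], speed_kmh: float) -> list[float]:
    """Cumulative green-phase offsets (seconds) for signals along a corridor."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    offsets, t = [0.0], 0.0
    for distance in spacings_m:
        t += distance / speed_ms        # travel time to the next signal
        offsets.append(round(t, 1))
    return offsets

# Four signals spaced 400 m apart, progression speed 50 km/h:
print(green_wave_offsets([400, 400, 400], 50))  # [0.0, 28.8, 57.6, 86.4]
```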
In the 1920s pavement markings were introduced. Initially they were used to indicate the road's centerline. Soon after they were coded with information to aid motorists in passing safely. Later, with multi-lane roads they were used to define lanes. Other uses, such as indicating permitted turning movements and pedestrian crossings soon followed.
In the 20th century traffic control devices were standardized. Before then, every locality decided what its devices would look like and where they would be applied. This could be confusing, especially to traffic from outside the locality. In the United States standardization was first undertaken at the state level, and late in the century at the federal level. Each country has a Manual of Uniform Traffic Control Devices (MUTCD), and there are efforts to blend them into a worldwide standard.
Besides signals, signs, and markings, other forms of traffic control are designed and built into the roadway. For instance, curbs and rumble strips can be used to keep traffic in a given lane and median barriers can prevent left turns and even U-turns.
Toll roads
Early toll roads were usually built by private companies under a government franchise. They typically paralleled or replaced routes already with some volume of commerce, hoping the improved road would divert enough traffic to make the enterprise profitable. Plank roads were particularly attractive as they greatly reduced rolling resistance and mitigated the problem of getting mired in mud. Another improvement, better grading to lessen the steepness of the worst stretches, allowed draft animals to haul heavier loads.
A toll road in the United States is often called a turnpike. The term turnpike probably originated from the gate, often a simple pike, which blocked passage until the fare was paid at a toll house (or toll booth in current terminology). When the toll was paid the pike, which was mounted on a swivel, was turned to allow the vehicle to pass. Tolls were usually based on the type of cargo being transported, not the type of vehicle. The practice of selecting routes so as to avoid tolls is called shunpiking. This may be simply to avoid the expense, as a form of economic protest (or boycott), or simply to seek a road less traveled as a bucolic interlude.
Companies were formed to build, improve, and maintain a particular section of roadway, and tolls were collected from users to finance the enterprise. The enterprise was usually named to indicate the locale of its roadway, often including the name of one or both of the termini. The word turnpike came into common use in the names of these roadways and companies, and is essentially used interchangeably with toll road in current terminology.
In the United States, toll roads began with the Lancaster Turnpike in the 1790s, in Pennsylvania, connecting Philadelphia and Lancaster. In the state of New York, the Great Western Turnpike was started in Albany in 1799 and eventually extended, by several alternate routes, to near what is now Syracuse.
Toll roads peaked in the mid-19th century, and by the turn of the twentieth century most toll roads had been taken over by state highway departments. The demise of this early toll road era was due to the rise of canals and railroads, which were more efficient (and thus cheaper) at moving freight over long distances. Roads would not again be competitive with rails and barges until the first half of the 20th century, when the internal combustion engine replaced draft animals as the source of motive power.
With the development, mass production, and popular embrace of the automobile, faster and higher-capacity roads were needed. In the 1920s limited-access highways appeared. Their main characteristics were dual roadways with access points limited to interchanges, which were usually (but not always) grade-separated. The dual roadways allowed high volumes of traffic, while few or no traffic lights, together with relatively gentle grades and curves, allowed higher speeds.
The first limited-access highways were parkways, so called because of their often park-like landscaping and because, in the metropolitan New York City area, they connected the region's system of parks. When the German autobahns built in the 1930s introduced higher design standards and speeds, road planners and road-builders in the United States began developing and building toll roads to similarly high standards. The Pennsylvania Turnpike, which largely followed the path of a partially built railroad, was the first, opening in 1940.
Beginning with the Pennsylvania Turnpike in 1940, toll roads saw a resurgence, this time to fund limited-access highways. In the late 1940s and early 1950s, after World War II had interrupted the evolution of the highway, the US resumed building toll roads. They were built to still higher standards, and one road, the New York State Thruway, had standards that became the prototype for the U.S. Interstate Highway System. Several other major toll roads that connected with the Pennsylvania Turnpike were established before the creation of the Interstate Highway System: the Indiana Toll Road, the Ohio Turnpike, and the New Jersey Turnpike.
Interstate Highway System
In the United States, beginning in 1956, the Dwight D. Eisenhower National System of Interstate and Defense Highways, commonly called the Interstate Highway System, was built. It uses 12-foot (3.65 m) lanes, wide medians, a maximum grade of 4%, and full access control, though many sections do not meet these standards due to older construction or local constraints. The system created a continental-sized network meant to connect every population center of 50,000 people or more.
By 1956, most limited-access highways in the eastern United States were toll roads. In that year, the Federal-Aid Highway Act of 1956 was passed, funding non-toll roads with 90% federal dollars and a 10% state match, giving states little incentive to expand their turnpike systems. Funding rules initially restricted the collection of tolls on newly funded roadways, bridges, and tunnels, and in some situations expansion or rebuilding of a toll facility using Interstate Highway Program funding resulted in the removal of existing tolls. This occurred in Virginia on Interstate 64 at the Hampton Roads Bridge-Tunnel when a second roadway, parallel to the original 1958 bridge-tunnel, was completed in 1976.
Since the completion of the initial portion of the Interstate Highway System, regulations were changed, and portions of toll facilities have been added to the system. Some states are again looking at toll financing for new roads and maintenance, to supplement limited federal funding. In some areas, new road projects have been completed with public-private partnerships funded by tolls, such as the Pocahontas Parkway (I-895) near Richmond, Virginia.
A more recent federal highway policy is the Surface and Air Transportation Programs Extension Act of 2011, passed by Congress during the Obama administration.
Pneumatic tyres
As the horse-drawn carriage was replaced by the car, bus, and lorry (truck), and speeds increased, the need for smoother roads with less vertical displacement became more apparent, and pneumatic tyres were developed to decrease the apparent roughness. Wooden wagon and carriage wheels had a tyre in the form of an iron strip that kept the wheel from wearing out quickly. Pneumatic tyres, which had a larger footprint than iron tyres, were also less likely to get bogged down in the mud of unpaved roads.
| Technology | Road transport | null |
49033 | https://en.wikipedia.org/wiki/Epigenetics | Epigenetics | In biology, epigenetics is the study of heritable traits, or a stable change of cell function, that happen without changes to the DNA sequence. The Greek prefix epi- ( "over, outside of, around") in epigenetics implies features that are "on top of" or "in addition to" the traditional (DNA sequence based) genetic mechanism of inheritance. Epigenetics usually involves a change that is not erased by cell division, and affects the regulation of gene expression. Such effects on cellular and physiological phenotypic traits may result from environmental factors, or be part of normal development. Epigenetic factors can also lead to cancer.
The term also refers to the mechanism of changes: functionally relevant alterations to the genome that do not involve mutation of the nucleotide sequence. Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence. Further, non-coding RNA sequences have been shown to play a key role in the regulation of gene expression. Gene expression can be controlled through the action of repressor proteins that attach to silencer regions of the DNA. These epigenetic changes may last through cell divisions for the duration of the cell's life, and may also last for multiple generations, even though they do not involve changes in the underlying DNA sequence of the organism; instead, non-genetic factors cause the organism's genes to behave (or "express themselves") differently.
One example of an epigenetic change in eukaryotic biology is the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. In other words, as a single fertilized egg cell – the zygote – continues to divide, the resulting daughter cells change into all the different cell types in an organism, including neurons, muscle cells, epithelium, endothelium of blood vessels, etc., by activating some genes while inhibiting the expression of others.
Definitions
The term epigenesis has a generic meaning of "extra growth" that has been used in English since the 17th century. In scientific publications, the term epigenetics started to appear in the 1930s. However, its contemporary meaning emerged only in the 1990s.
A definition of the concept of epigenetic trait as a "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence" was formulated at a Cold Spring Harbor meeting in 2008, although alternate definitions that include non-heritable traits are still being used widely.
Waddington's canalisation, 1940s
The hypothesis of epigenetic changes affecting the expression of chromosomes was put forth by the Russian biologist Nikolai Koltsov. From the generic meaning, and the associated adjective epigenetic, British embryologist C. H. Waddington coined the term epigenetics in 1942 as pertaining to epigenesis, in parallel to Valentin Haecker's 'phenogenetics'. Epigenesis in the context of the biology of that period referred to the differentiation of cells from their initial totipotent state during embryonic development.
When Waddington coined the term, the physical nature of genes and their role in heredity were not known. He used it instead as a conceptual model of how genetic components might interact with their surroundings to produce a phenotype; he used the phrase "epigenetic landscape" as a metaphor for biological development. Waddington held that cell fates were established during development in a process he called canalisation, much as a marble rolls down to the point of lowest local elevation. Waddington suggested visualising the increasing irreversibility of cell-type differentiation as ridges rising between the valleys where the marbles (analogous to cells) travel.
In recent times, Waddington's notion of the epigenetic landscape has been rigorously formalized in the context of the systems-dynamics state-space approach to the study of cell fate. Cell-fate determination is predicted to exhibit certain dynamics, such as attractor convergence (the attractor can be an equilibrium point, a limit cycle, or a strange attractor) or oscillation.
Contemporary
In 1990, Robin Holliday defined epigenetics as "the study of the mechanisms of temporal and spatial control of gene activity during the development of complex organisms."
More recent usage of the word in biology follows stricter definitions. As defined by Arthur Riggs and colleagues, it is "the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence."
The term has also been used, however, to describe processes which have not been demonstrated to be heritable, such as some forms of histone modification. Consequently, there are attempts to redefine "epigenetics" in broader terms that would avoid the constraints of requiring heritability. For example, Adrian Bird defined epigenetics as "the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states." This definition would be inclusive of transient modifications associated with DNA repair or cell-cycle phases as well as stable changes maintained across multiple cell generations, but would exclude others, such as templating of membrane architecture and prions, unless they impinge on chromosome function. Such redefinitions, however, are not universally accepted and are still subject to debate. The NIH "Roadmap Epigenomics Project", which ran from 2008 to 2017, used the following definition: "For purposes of this program, epigenetics refers to both heritable changes in gene activity and expression (in the progeny of cells or of individuals) and also stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable."
The similarity of the word to "genetics" has generated many parallel usages. The "epigenome" is a parallel to the word "genome", referring to the overall epigenetic state of a cell, and epigenomics refers to global analyses of epigenetic changes across the entire genome. The phrase "genetic code" has also been adapted – the "epigenetic code" has been used to describe the set of epigenetic features that create different phenotypes in different cells from the same underlying DNA sequence. Taken to its extreme, the "epigenetic code" could represent the total state of the cell, with the position of each molecule accounted for in an epigenomic map, a diagrammatic representation of the gene expression, DNA methylation and histone modification status of a particular genomic region. More typically, the term is used in reference to systematic efforts to measure specific, relevant forms of epigenetic information such as the histone code or DNA methylation patterns.
Mechanisms
Covalent modifications of either DNA (e.g. cytosine methylation and hydroxymethylation) or of histone proteins (e.g. lysine acetylation, lysine and arginine methylation, serine and threonine phosphorylation, and lysine ubiquitination and sumoylation) play central roles in many types of epigenetic inheritance. Therefore, the word "epigenetics" is sometimes used as a synonym for these processes. However, this can be misleading: chromatin remodeling is not always inherited, and not all epigenetic inheritance involves chromatin remodeling. In 2019, a further lysine modification, lactylation, appeared in the scientific literature, linking epigenetic modification to cell metabolism.
Because the phenotype of a cell or individual is affected by which of its genes are transcribed, heritable transcription states can give rise to epigenetic effects. There are several layers of regulation of gene expression. One way that genes are regulated is through the remodeling of chromatin. Chromatin is the complex of DNA and the histone proteins with which it associates. If the way that DNA is wrapped around the histones changes, gene expression can change as well. Chromatin remodeling is accomplished through two main mechanisms:
The first way is post-translational modification of the amino acids that make up histone proteins. Histone proteins are made up of long chains of amino acids. If the amino acids in the chain are changed, the shape of the histone may be modified. DNA is not completely unwound during replication, so it is possible that the modified histones are carried into each new copy of the DNA. Once there, these histones may act as templates, prompting the new histones around them to be shaped in the same manner. By altering the shape of the histones around them, these modified histones would ensure that a lineage-specific transcription program is maintained after cell division.
The second way is the addition of methyl groups to the DNA, mostly at CpG sites, to convert cytosine to 5-methylcytosine. 5-Methylcytosine performs much like a regular cytosine, pairing with a guanine in double-stranded DNA. However, when methylated cytosines are present in CpG sites in the promoter and enhancer regions of genes, the genes are often repressed. When methylated cytosines are present in CpG sites in the gene body (in the coding region excluding the transcription start site), expression of the gene is often enhanced. Transcription of a gene usually depends on a transcription factor binding to a short (10 base or less) recognition sequence at the enhancer that interacts with the promoter region of that gene. About 22% of transcription factors are inhibited from binding when the recognition sequence has a methylated cytosine. In addition, the presence of methylated cytosines at a promoter region can attract methyl-CpG-binding domain (MBD) proteins. All MBDs interact with nucleosome remodeling and histone deacetylase complexes, which leads to gene silencing. In addition, another covalent modification involving methylated cytosine is its demethylation by TET enzymes. Hundreds of such demethylations occur, for instance, during learning and memory-forming events in neurons.
There is frequently a reciprocal relationship between DNA methylation and histone lysine methylation. For instance, the methyl binding domain protein MBD1, attracted to and associating with methylated cytosine in a DNA CpG site, can also associate with H3K9 methyltransferase activity to methylate histone 3 at lysine 9. On the other hand, DNA maintenance methylation by DNMT1 appears to partly rely on recognition of histone methylation on the nucleosome present at the DNA site to carry out cytosine methylation on newly synthesized DNA. There is further crosstalk between DNA methylation carried out by DNMT3A and DNMT3B and histone methylation so that there is a correlation between the genome-wide distribution of DNA methylation and histone methylation.
Mechanisms of heritability of histone state are not well understood; however, much is known about the mechanism of heritability of DNA methylation state during cell division and differentiation. Heritability of methylation state depends on certain enzymes (such as DNMT1) that have a higher affinity for 5-methylcytosine than for cytosine. If such an enzyme reaches a "hemimethylated" portion of DNA (where 5-methylcytosine is present on only one of the two DNA strands), it will methylate the other strand. However, it is now known that DNMT1 physically interacts with the protein UHRF1. UHRF1 has recently been recognized as essential for DNMT1-mediated maintenance of DNA methylation. UHRF1 is the protein that specifically recognizes hemimethylated DNA, thereby bringing DNMT1 to its substrate to maintain DNA methylation.
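To illustrate why such copying is heritable yet imperfect, here is a toy simulation of maintenance methylation across cell divisions. The per-site fidelity is a hypothetical placeholder; real DNMT1/UHRF1 fidelity and targeting are considerably more involved.

```python
import random

def divide(methylated_sites, p_maintain=0.99):
    """One cell division: after replication each CpG is hemimethylated;
    a DNMT1-like activity restores full methylation with probability p."""
    return {site for site in methylated_sites if random.random() < p_maintain}

sites = set(range(100))            # 100 CpG sites, all initially methylated
for _ in range(50):                # 50 successive divisions
    sites = divide(sites)
print(f"methylated CpGs after 50 divisions: {len(sites)} / 100")
# With p = 0.99, roughly 60 of 100 sites are still methylated (0.99**50),
# showing both the persistence and the slow drift of the pattern.
```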
Although histone modifications occur throughout the entire sequence, the unstructured N-termini of histones (called histone tails) are particularly highly modified. These modifications include acetylation, methylation, ubiquitylation, phosphorylation, sumoylation, ribosylation and citrullination. Acetylation is the most highly studied of these modifications. For example, acetylation of the K14 and K9 lysines of the tail of histone H3 by histone acetyltransferase enzymes (HATs) is generally related to transcriptional competence.
One mode of thinking is that this tendency of acetylation to be associated with "active" transcription is biophysical in nature. Because it normally has a positively charged nitrogen at its end, lysine can bind the negatively charged phosphates of the DNA backbone. The acetylation event converts the positively charged amine group on the side chain into a neutral amide linkage. This removes the positive charge, thus loosening the DNA from the histone. When this occurs, complexes like SWI/SNF and other transcriptional factors can bind to the DNA and allow transcription to occur. This is the "cis" model of the epigenetic function. In other words, changes to the histone tails have a direct effect on the DNA itself.
Another model of epigenetic function is the "trans" model. In this model, changes to the histone tails act indirectly on the DNA. For example, lysine acetylation may create a binding site for chromatin-modifying enzymes (or transcription machinery as well). This chromatin remodeler can then cause changes to the state of the chromatin. Indeed, a bromodomain – a protein domain that specifically binds acetyl-lysine – is found in many enzymes that help activate transcription, including the SWI/SNF complex. It may be that acetylation acts in this and the previous way to aid in transcriptional activation.
The idea that modifications act as docking modules for related factors is borne out by histone methylation as well. Methylation of lysine 9 of histone H3 has long been associated with constitutively transcriptionally silent chromatin (constitutive heterochromatin). It has been determined that a chromodomain (a domain that specifically binds methyl-lysine) in the transcriptionally repressive protein HP1 recruits HP1 to K9-methylated regions. One example that seems to refute this biophysical model for methylation is that tri-methylation of histone H3 at lysine 4 is strongly associated with (and required for full) transcriptional activation. Tri-methylation, in this case, would introduce a fixed positive charge on the tail.
It has been shown that histone lysine methyltransferases (KMTs) are responsible for this methylation activity in the pattern of histones H3 and H4. These enzymes utilize a catalytically active site called the SET domain (Suppressor of variegation, Enhancer of zeste, Trithorax). The SET domain is a 130-amino-acid sequence involved in modulating gene activities. This domain has been demonstrated to bind to the histone tail and cause the methylation of the histone.
Differing histone modifications are likely to function in differing ways; acetylation at one position is likely to function differently from acetylation at another position. Also, multiple modifications may occur at the same time, and these modifications may work together to change the behavior of the nucleosome. The idea that multiple dynamic modifications regulate gene transcription in a systematic and reproducible way is called the histone code, although the idea that histone state can be read linearly as a digital information carrier has been largely debunked. One of the best-understood systems that orchestrate chromatin-based silencing is the SIR protein based silencing of the yeast hidden mating-type loci HML and HMR.
DNA methylation
DNA methylation frequently occurs in repeated sequences and helps to suppress the expression and mobility of 'transposable elements': because 5-methylcytosine can be spontaneously deaminated (its amine group replaced by oxygen) to thymine, CpG sites are frequently mutated and become rare in the genome, except at CpG islands, where they remain unmethylated. Epigenetic changes of this type thus have the potential to direct increased frequencies of permanent genetic mutation. DNA methylation patterns are known to be established and modified in response to environmental factors by a complex interplay of at least three independent DNA methyltransferases, DNMT1, DNMT3A, and DNMT3B, the loss of any of which is lethal in mice. DNMT1 is the most abundant methyltransferase in somatic cells, localizes to replication foci, has a 10–40-fold preference for hemimethylated DNA, and interacts with the proliferating cell nuclear antigen (PCNA).
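This CpG depletion outside unmethylated islands is what makes CpG islands computationally detectable. The sketch below applies the classic Gardiner-Garden and Frommer criteria (window of at least 200 bp, GC content above 50%, observed/expected CpG ratio above 0.6); it is a simplification, and production genome-annotation tools use more refined methods.

```python
def is_cpg_island(seq):
    """Test one window against the Gardiner-Garden & Frommer criteria."""
    seq = seq.upper()
    n = len(seq)
    if n < 200:
        return False
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")                      # observed CpG dinucleotides
    gc_content = (c + g) / n
    obs_exp = (cpg * n) / (c * g) if c and g else 0.0  # observed/expected CpG
    return gc_content > 0.5 and obs_exp > 0.6

print(is_cpg_island("CG" * 100))   # True: an artificial CpG-rich 200 bp window
print(is_cpg_island("AT" * 100))   # False: no CpG content at all
```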
By preferentially modifying hemimethylated DNA, DNMT1 transfers patterns of methylation to a newly synthesized strand after DNA replication, and therefore is often referred to as the 'maintenance' methyltransferase. DNMT1 is essential for proper embryonic development, imprinting and X-inactivation. To emphasize the difference of this molecular mechanism of inheritance from the canonical Watson-Crick base-pairing mechanism of transmission of genetic information, the term 'Epigenetic templating' was introduced. Furthermore, in addition to the maintenance and transmission of methylated DNA states, the same principle could work in the maintenance and transmission of histone modifications and even cytoplasmic (structural) heritable states.
RNA methylation
RNA methylation of N6-methyladenosine (m6A) as the most abundant eukaryotic RNA modification has recently been recognized as an important gene regulatory mechanism.
Histone modifications
Histones H3 and H4 can also be manipulated through demethylation using histone lysine demethylase (KDM). This recently identified enzyme has a catalytically active site called the Jumonji domain (JmjC). The demethylation occurs when JmjC utilizes multiple cofactors to hydroxylate the methyl group, thereby removing it. JmjC is capable of demethylating mono-, di-, and tri-methylated substrates.
Chromosomal regions can adopt stable and heritable alternative states resulting in bistable gene expression without changes to the DNA sequence. Epigenetic control is often associated with alternative covalent modifications of histones. The stability and heritability of states of larger chromosomal regions are suggested to involve positive feedback, where modified nucleosomes recruit enzymes that similarly modify nearby nucleosomes. Simplified stochastic models of this kind of epigenetic bistability have been published; a minimal sketch in that spirit follows.
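The sketch below, loosely inspired by published nucleosome-modification models (e.g. Dodd et al. 2007), shows how recruitment-based positive feedback alone can make a region settle into, and hold, one of two states; the states, step rules, and feedback strength are all illustrative.

```python
import random

# Nucleosomes are 'M' (modified), 'U' (unmodified) or 'A' (antagonistically
# modified). With probability `feedback`, a randomly chosen nucleosome is
# pushed one step toward the state of another randomly chosen nucleosome
# (recruitment); otherwise it changes at random (noise).

TOWARD_M = {'A': 'U', 'U': 'M', 'M': 'M'}
TOWARD_A = {'M': 'U', 'U': 'A', 'A': 'A'}

def step(nucs, feedback=0.9):
    i = random.randrange(len(nucs))
    if random.random() < feedback:
        j = random.randrange(len(nucs))
        if nucs[j] == 'M':
            nucs[i] = TOWARD_M[nucs[i]]
        elif nucs[j] == 'A':
            nucs[i] = TOWARD_A[nucs[i]]
    else:
        nucs[i] = random.choice('MUA')

region = ['U'] * 60
for _ in range(100_000):
    step(region)
print({s: region.count(s) for s in 'MUA'})  # usually dominated by 'M' or by 'A'
```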
It has been suggested that chromatin-based transcriptional regulation could be mediated by the effect of small RNAs. Small interfering RNAs can modulate transcriptional gene expression via epigenetic modulation of targeted promoters.
RNA transcripts
Sometimes a gene, after being turned on, transcribes a product that (directly or indirectly) maintains the activity of that gene. For example, Hnf4 and MyoD enhance the transcription of many liver-specific and muscle-specific genes, respectively, including their own, through the transcription factor activity of the proteins they encode. RNA signalling includes differential recruitment of a hierarchy of generic chromatin modifying complexes and DNA methyltransferases to specific loci by RNAs during differentiation and development. Other epigenetic changes are mediated by the production of different splice forms of RNA, or by formation of double-stranded RNA (RNAi). Descendants of the cell in which the gene was turned on will inherit this activity, even if the original stimulus for gene-activation is no longer present. These genes are often turned on or off by signal transduction, although in some systems where syncytia or gap junctions are important, RNA may spread directly to other cells or nuclei by diffusion. A large amount of RNA and protein is contributed to the zygote by the mother during oogenesis or via nurse cells, resulting in maternal effect phenotypes. A smaller quantity of sperm RNA is transmitted from the father, but there is recent evidence that this epigenetic information can lead to visible changes in several generations of offspring.
MicroRNAs
MicroRNAs (miRNAs) are members of the non-coding RNAs, ranging in size from 17 to 25 nucleotides. miRNAs regulate a large variety of biological functions in plants and animals. As of 2013, about 2,000 miRNAs had been discovered in humans, and these can be found online in a miRNA database. Each miRNA expressed in a cell may target about 100 to 200 messenger RNAs (mRNAs), which it downregulates. Most of the downregulation of mRNAs occurs by causing the decay of the targeted mRNA, while some downregulation occurs at the level of translation into protein.
It appears that about 60% of human protein coding genes are regulated by miRNAs. Many miRNAs are epigenetically regulated. About 50% of miRNA genes are associated with CpG islands, that may be repressed by epigenetic methylation. Transcription from methylated CpG islands is strongly and heritably repressed. Other miRNAs are epigenetically regulated by either histone modifications or by combined DNA methylation and histone modification.
mRNA
In 2011, it was demonstrated that the methylation of mRNA plays a critical role in human energy homeostasis. The obesity-associated FTO gene is shown to be able to demethylate N6-methyladenosine in RNA.
sRNAs
sRNAs are small (50–250 nucleotides), highly structured, non-coding RNA fragments found in bacteria. They control gene expression including virulence genes in pathogens and are viewed as new targets in the fight against drug-resistant bacteria. They play an important role in many biological processes, binding to mRNA and protein targets in prokaryotes. Their phylogenetic analyses, for example through sRNA–mRNA target interactions or protein binding properties, are used to build comprehensive databases. sRNA-gene maps based on their targets in microbial genomes are also constructed.
Long non-coding RNAs
Numerous investigations have demonstrated the pivotal involvement of long non-coding RNAs (lncRNAs) in the regulation of gene expression and chromosomal modifications, thereby exerting significant control over cellular differentiation. These long non-coding RNAs also contribute to genomic imprinting and the inactivation of the X chromosome.
In invertebrates such as the honey bee, a social insect, long non-coding RNAs have been detected, through reciprocal crosses, as a possible epigenetic mechanism acting via allele-specific genes underlying aggression.
Prions
Prions are infectious forms of proteins. In general, proteins fold into discrete units that perform distinct cellular functions, but some proteins are also capable of forming an infectious conformational state known as a prion. Although often viewed in the context of infectious disease, prions are more loosely defined by their ability to catalytically convert other native state versions of the same protein to an infectious conformational state. It is in this latter sense that they can be viewed as epigenetic agents capable of inducing a phenotypic change without a modification of the genome.
Fungal prions are considered by some to be epigenetic because the infectious phenotype caused by the prion can be inherited without modification of the genome. PSI+ and URE3, discovered in yeast in 1965 and 1971, are the two best studied of this type of prion. Prions can have a phenotypic effect through the sequestration of protein in aggregates, thereby reducing that protein's activity. In PSI+ cells, the loss of the Sup35 protein (which is involved in termination of translation) causes ribosomes to have a higher rate of read-through of stop codons, an effect that results in suppression of nonsense mutations in other genes. The ability of Sup35 to form prions may be a conserved trait. It could confer an adaptive advantage by giving cells the ability to switch into a PSI+ state and express dormant genetic features normally terminated by stop codon mutations.
Prion-based epigenetics has also been observed in Saccharomyces cerevisiae.
Molecular basis
Epigenetic changes modify the activation of certain genes, but not the genetic code sequence of DNA. The microstructure (not code) of DNA itself or the associated chromatin proteins may be modified, causing activation or silencing. This mechanism enables differentiated cells in a multicellular organism to express only the genes that are necessary for their own activity. Epigenetic changes are preserved when cells divide. Most epigenetic changes occur only within the course of one individual organism's lifetime; however, these epigenetic changes can be transmitted to the organism's offspring through a process called transgenerational epigenetic inheritance. Moreover, if gene inactivation occurs in a sperm or egg cell that takes part in fertilization, this epigenetic modification may also be transferred to the next generation.
Specific epigenetic processes include paramutation, bookmarking, imprinting, gene silencing, X chromosome inactivation, position effect, DNA methylation reprogramming, transvection, maternal effects, the progress of carcinogenesis, many effects of teratogens, regulation of histone modifications and heterochromatin, and technical limitations affecting parthenogenesis and cloning.
DNA damage
DNA damage can also cause epigenetic changes. DNA damage is very frequent, occurring on average about 60,000 times a day per cell of the human body (see DNA damage (naturally occurring)). Most of these damages are repaired; however, epigenetic changes can still remain at the site of DNA repair. In particular, a double-strand break in DNA can initiate unprogrammed epigenetic gene silencing, both by causing DNA methylation and by promoting silencing types of histone modifications (chromatin remodeling; see next section). In addition, the enzyme Parp1 (poly(ADP)-ribose polymerase) and its product poly(ADP)-ribose (PAR) accumulate at sites of DNA damage as part of the repair process. This accumulation, in turn, directs recruitment and activation of the chromatin remodeling protein ALC1, which can cause nucleosome remodeling. Nucleosome remodeling has been found to cause, for instance, epigenetic silencing of the DNA repair gene MLH1. DNA-damaging chemicals, such as benzene, hydroquinone, styrene, carbon tetrachloride and trichloroethylene, cause considerable hypomethylation of DNA, some through the activation of oxidative stress pathways.
Diet is known to alter the epigenetics of rats. Some food components epigenetically increase the levels of DNA repair enzymes such as MGMT, MLH1, and p53. Other food components, such as soy isoflavones, can reduce DNA damage. In one study, markers of oxidative stress, such as modified nucleotides that can result from DNA damage, were decreased by a 3-week diet supplemented with soy. A decrease in oxidative DNA damage was also observed 2 hours after consumption of anthocyanin-rich bilberry (Vaccinium myrtillus L.) pomace extract.
DNA repair
Damage to DNA is very common and is constantly being repaired. Epigenetic alterations can accompany DNA repair of oxidative damage or double-strand breaks. In human cells, oxidative DNA damage occurs about 10,000 times a day and DNA double-strand breaks occur about 10 to 50 times a cell cycle in somatic replicating cells (see DNA damage (naturally occurring)). The selective advantage of DNA repair is to allow the cell to survive in the face of DNA damage. The selective advantage of epigenetic alterations that occur with DNA repair is not clear.
Repair of oxidative DNA damage can alter epigenetic markers
In the steady state (with endogenous damages occurring and being repaired), there are about 2,400 oxidatively damaged guanines that form 8-oxo-2'-deoxyguanosine (8-OHdG) in the average mammalian cell DNA. 8-OHdG constitutes about 5% of the oxidative damages commonly present in DNA. The oxidized guanines do not occur randomly among all guanines in DNA. There is a sequence preference for the guanine at a methylated CpG site (a cytosine followed by guanine along its 5' → 3' direction and where the cytosine is methylated (5-mCpG)). A 5-mCpG site has the lowest ionization potential for guanine oxidation.
Oxidized guanine has mispairing potential and is mutagenic. Oxoguanine glycosylase (OGG1) is the primary enzyme responsible for the excision of the oxidized guanine during DNA repair. OGG1 finds and binds to an 8-OHdG within a few seconds. However, OGG1 does not immediately excise 8-OHdG. In HeLa cells half maximum removal of 8-OHdG occurs in 30 minutes, and in irradiated mice, the 8-OHdGs induced in the mouse liver are removed with a half-life of 11 minutes.
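If removal follows simple first-order kinetics, the quoted half-life converts directly into a remaining fraction of lesions, as in this small arithmetic sketch (the time points are arbitrary examples):

```python
half_life_min = 11.0                         # 8-OHdG removal half-life in mouse liver
for t in (11, 22, 60):
    remaining = 0.5 ** (t / half_life_min)   # first-order exponential decay
    print(f"after {t:>2} min: {remaining:.1%} of lesions remain")
# after 11 min: 50.0%; after 22 min: 25.0%; after 60 min: about 2.3%
```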
When OGG1 is present at an oxidized guanine within a methylated CpG site, it recruits TET1 to the 8-OHdG lesion. This allows TET1 to demethylate an adjacent methylated cytosine. Demethylation of cytosine is an epigenetic alteration.
As an example, when human mammary epithelial cells were treated with H2O2 for six hours, 8-OHdG increased about 3.5-fold in DNA, and this caused about 80% demethylation of the 5-methylcytosines in the genome. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene into messenger RNA. In cells treated with H2O2, one particular gene was examined, BACE1. The methylation level of the BACE1 CpG island was reduced (an epigenetic alteration), and this allowed about a 6.5-fold increase in expression of BACE1 messenger RNA.
While six-hour incubation with H2O2 causes considerable demethylation of 5-mCpG sites, shorter times of H2O2 incubation appear to promote other epigenetic alterations. Treatment of cells with H2O2 for 30 minutes causes the mismatch repair protein heterodimer MSH2-MSH6 to recruit DNA methyltransferase 1 (DNMT1) to sites of some kinds of oxidative DNA damage. This could cause increased methylation of cytosines (epigenetic alterations) at these locations.
Jiang et al. treated HEK 293 cells with agents causing oxidative DNA damage (potassium bromate (KBrO3) or potassium chromate (K2CrO4)). Base excision repair (BER) of oxidative damage occurred with the DNA repair enzyme polymerase beta localizing to oxidized guanines. Polymerase beta is the main human polymerase in short-patch BER of oxidative DNA damage. Jiang et al. also found that polymerase beta recruited the DNA methyltransferase protein DNMT3b to BER repair sites. They then evaluated the methylation pattern at the single-nucleotide level in a small region of DNA including the promoter region and the early transcription region of the BRCA1 gene. Oxidative DNA damage from bromate modulated the DNA methylation pattern (causing epigenetic alterations) at CpG sites within the region of DNA studied. In untreated cells, CpGs located at −189, −134, −29, −19, +16, and +19 of the BRCA1 gene had methylated cytosines (where numbering is from the messenger RNA transcription start site, and negative numbers indicate nucleotides in the upstream promoter region). Bromate treatment-induced oxidation resulted in the loss of cytosine methylation at −189, −134, +16 and +19, while also leading to the formation of new methylation at the CpGs located at −80, −55, −21 and +8 after DNA repair was allowed.
Homologous recombinational repair alters epigenetic markers
At least four articles report the recruitment of DNA methyltransferase 1 (DNMT1) to sites of DNA double-strand breaks. During homologous recombinational repair (HR) of the double-strand break, the involvement of DNMT1 causes the two repaired strands of DNA to have different levels of methylated cytosines. One strand becomes frequently methylated at about 21 CpG sites downstream of the repaired double-strand break. The other DNA strand loses methylation at about six CpG sites that were previously methylated downstream of the double-strand break, as well as losing methylation at about five CpG sites that were previously methylated upstream of the double-strand break. When the chromosome is replicated, this gives rise to one daughter chromosome that is heavily methylated downstream of the previous break site and one that is unmethylated in the region both upstream and downstream of the previous break site. With respect to the gene that was broken by the double-strand break, half of the progeny cells express that gene at a high level and in the other half of the progeny cells expression of that gene is repressed. When clones of these cells were maintained for three years, the new methylation patterns were maintained over that time period.
In mice with a CRISPR-mediated homology-directed recombination insertion in their genome there were a large number of increased methylations of CpG sites within the double-strand break-associated insertion.
Non-homologous end joining can cause some epigenetic marker alterations
Non-homologous end joining (NHEJ) repair of a double-strand break can cause a small number of demethylations of pre-existing cytosine DNA methylations downstream of the repaired double-strand break. Further work by Allen et al. showed that NHEJ of a DNA double-strand break in a cell could give rise to some progeny cells having repressed expression of the gene harboring the initial double-strand break and some progeny having high expression of that gene due to epigenetic alterations associated with NHEJ repair. The frequency of epigenetic alterations causing repression of a gene after an NHEJ repair of a DNA double-strand break in that gene may be about 0.9%.
Techniques used to study epigenetics
Epigenetic research uses a wide range of molecular biological techniques to further understanding of epigenetic phenomena. These techniques include chromatin immunoprecipitation (together with its large-scale variants ChIP-on-chip and ChIP-Seq), fluorescent in situ hybridization, methylation-sensitive restriction enzymes, DNA adenine methyltransferase identification (DamID) and bisulfite sequencing. Furthermore, the use of bioinformatics methods has a role in computational epigenetics.
Chromatin Immunoprecipitation
Chromatin immunoprecipitation (ChIP) has helped bridge the gap between DNA and epigenetic interactions. With ChIP, researchers are able to make findings regarding gene regulation, transcription mechanisms, and chromatin structure.
Fluorescent in situ hybridization
Fluorescent in situ hybridization (FISH) is an important tool for understanding epigenetic mechanisms. FISH can be used to find the location of genes on chromosomes and to detect noncoding RNAs. In humans, FISH is predominantly used to detect chromosomal abnormalities.
Methylation-sensitive restriction enzymes
Methylation-sensitive restriction enzymes paired with PCR provide a way to evaluate methylation in DNA, specifically at CpG sites. If the DNA is methylated, the restriction enzymes will not cleave the strand, and a PCR product spanning the site can be amplified; conversely, if the DNA is not methylated, the enzymes will cleave the strand and no spanning product is amplified.
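A minimal in-silico version of that logic is sketched below, using the well-known methylation-sensitive enzyme HpaII (recognition site CCGG, blocked by methylation of the internal CpG cytosine); the sequence and methylation calls are invented for illustration.

```python
def hpaii_cut_sites(seq, methylated_positions):
    """Return 0-based start positions of CCGG sites that HpaII would cleave,
    i.e. sites whose internal CpG cytosine is NOT methylated."""
    sites = [i for i in range(len(seq) - 3) if seq[i:i + 4] == "CCGG"]
    return [i for i in sites if (i + 1) not in methylated_positions]

seq = "ATCCGGTTACCGGAT"        # two CCGG sites, at positions 2 and 9
methylated = {3}               # internal C of the first site is methylated
print(hpaii_cut_sites(seq, methylated))   # [9]: only the unmethylated site is cut
# A PCR amplicon spanning the uncut site still amplifies (signalling
# methylation); one spanning the cleaved site yields no product.
```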
Bisulfite sequencing
Bisulfite sequencing is another way to evaluate DNA methylation. Treatment with sodium bisulfite converts unmethylated cytosine to uracil, whereas methylated cytosines are unaffected.
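A minimal in-silico conversion makes the read-out concrete, assuming methylation is supplied as a set of 0-based cytosine positions (the sequence is invented for illustration):

```python
def bisulfite_convert(seq, methylated_positions):
    """Unmethylated C reads as T after bisulfite treatment and PCR;
    methylated C is protected and still reads as C."""
    return "".join(
        "T" if base == "C" and i not in methylated_positions else base
        for i, base in enumerate(seq.upper())
    )

print(bisulfite_convert("ACGTCGAC", methylated_positions={1}))
# -> "ACGTTGAT": the methylated C at position 1 survives; the others read as T
```

Comparing the converted reads against the reference sequence then reveals which cytosines were methylated.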
Nanopore sequencing
Certain sequencing methods, such as nanopore sequencing, allow sequencing of native DNA. Native (unamplified) DNA retains the epigenetic modifications that would otherwise be lost during an amplification step. Nanopore basecaller models can distinguish between the signals obtained from epigenetically modified bases and unaltered bases, providing an epigenetic profile in addition to the sequencing result.
Structural inheritance
In ciliates such as Tetrahymena and Paramecium, genetically identical cells show heritable differences in the patterns of ciliary rows on their cell surface. Experimentally altered patterns can be transmitted to daughter cells. It seems existing structures act as templates for new structures. The mechanisms of such inheritance are unclear, but reasons exist to assume that multicellular organisms also use existing cell structures to assemble new ones.
Nucleosome positioning
Eukaryotic genomes have numerous nucleosomes. Nucleosome position is not random, and it determines the accessibility of DNA to regulatory proteins. Promoters active in different tissues have been shown to have different nucleosome-positioning features. This determines differences in gene expression and cell differentiation. It has been shown that at least some nucleosomes are retained in sperm cells (where most, but not all, histones are replaced by protamines). Thus nucleosome positioning is to some degree heritable. Recent studies have uncovered connections between nucleosome positioning and other epigenetic factors, such as DNA methylation and hydroxymethylation.
Histone variants
Different histone variants are incorporated into specific regions of the genome non-randomly. Their differing biochemical characteristics can affect genome functions via their roles in gene regulation and the maintenance of chromosome structures.
Genomic architecture
The three-dimensional configuration of the genome (the 3D genome) is complex, dynamic and crucial for regulating genomic function and nuclear processes such as DNA replication, transcription and DNA-damage repair.
Functions and consequences
In the brain
Memory
Memory formation and maintenance are due to epigenetic alterations that cause the required dynamic changes in gene transcription that create and renew memory in neurons.
An event can set off a chain of reactions that result in altered methylations of a large set of genes in neurons, which give a representation of the event, a memory.
Areas of the brain important in the formation of memories include the hippocampus, the medial prefrontal cortex (mPFC), the anterior cingulate cortex, and the amygdala.
When a strong memory is created, as in a rat subjected to contextual fear conditioning (CFC), one of the earliest events to occur is that more than 100 DNA double-strand breaks are formed by topoisomerase IIB in neurons of the hippocampus and the medial prefrontal cortex (mPFC). These double-strand breaks are at specific locations that allow activation of transcription of immediate early genes (IEGs) that are important in memory formation, allowing their expression in mRNA, with peak mRNA transcription at seven to ten minutes after CFC.
Two important IEGs in memory formation are EGR1 and DNMT3A2, an alternative-promoter variant of DNMT3A. EGR1 protein binds to DNA at its binding motifs, 5′-GCGTGGGCG-3′ or 5′-GCGGGGGCGG-3′, and there are about 12,000 genome locations at which EGR1 protein can bind. EGR1 protein binds to DNA in gene promoter and enhancer regions. EGR1 recruits the demethylating enzyme TET1, bringing it to about 600 locations on the genome where TET1 can then demethylate and activate the associated genes.
The DNA methyltransferases DNMT3A1, DNMT3A2, and DNMT3B can all methylate cytosines at CpG sites in or near the promoters of genes. As shown by Manzo et al., these three DNA methyltransferases differ in their genomic binding locations and DNA methylation activity at different regulatory sites. Manzo et al. located 3,970 genome regions exclusively enriched for DNMT3A1, 3,838 regions for DNMT3A2, and 3,432 regions for DNMT3B. When DNMT3A2 is newly induced as an IEG (when neurons are activated), many new cytosine methylations occur, presumably in the target regions of DNMT3A2. Oliveira et al. found that the neuronal activity-inducible IEG levels of Dnmt3a2 in the hippocampus determined the ability to form long-term memories.
Rats form long-term associative memories after contextual fear conditioning (CFC). Duke et al. found that 24 hours after CFC in rats, in hippocampus neurons, 2,097 genes (9.17% of the genes in the rat genome) had altered methylation. When newly methylated cytosines are present in CpG sites in the promoter regions of genes, the genes are often repressed, and when newly demethylated cytosines are present the genes may be activated. After CFC, there were 1,048 genes with reduced mRNA expression and 564 genes with upregulated mRNA expression. Similarly, when mice undergo CFC, one hour later in the hippocampus region of the mouse brain there are 675 demethylated genes and 613 hypermethylated genes. However, memories do not remain in the hippocampus, but after four or five weeks the memories are stored in the anterior cingulate cortex. In the studies on mice after CFC, Halder et al. showed that four weeks after CFC there were at least 1,000 differentially methylated genes and more than 1,000 differentially expressed genes in the anterior cingulate cortex, while at the same time the altered methylations in the hippocampus were reversed.
The epigenetic alteration of methylation after a new memory is established creates a different pool of nuclear mRNAs. As reviewed by Bernstein, the epigenetically determined new mix of nuclear mRNAs are often packaged into neuronal granules, or messenger RNP, consisting of mRNA, small and large ribosomal subunits, translation initiation factors and RNA-binding proteins that regulate mRNA function. These neuronal granules are transported from the neuron nucleus and are directed, according to 3′ untranslated regions of the mRNA in the granules (their "zip codes"), to neuronal dendrites. Roughly 2,500 mRNAs may be localized to the dendrites of hippocampal pyramidal neurons and perhaps 450 transcripts are in excitatory presynaptic nerve terminals (dendritic spines). The altered assortments of transcripts (dependent on epigenetic alterations in the neuron nucleus) have different sensitivities in response to signals, which is the basis of altered synaptic plasticity. Altered synaptic plasticity is often considered the neurochemical foundation of learning and memory.
Aging
Epigenetics plays a major role in brain aging and age-related cognitive decline, with relevance to life extension.
Other and general
In adulthood, changes in the epigenome are important for various higher cognitive functions. Dysregulation of epigenetic mechanisms is implicated in neurodegenerative disorders and diseases. Epigenetic modifications in neurons are dynamic and reversible. Epigenetic regulation impacts neuronal action, affecting learning, memory, and other cognitive processes.
Early-life events, including those during embryonic development, can influence later development, cognition, and health outcomes through epigenetic mechanisms.
Epigenetic mechanisms have been proposed as "a potential molecular mechanism for effects of endogenous hormones on the organization of developing brain circuits".
Nutrients could interact with the epigenome to "protect or boost cognitive processes across the lifespan".
A review suggests neurobiological effects of physical exercise via epigenetics seem "central to building an 'epigenetic memory' to influence long-term brain function and behavior" and may even be heritable.
At the axo-ciliary synapse, serotonergic axons communicate with the antenna-like primary cilia of CA1 pyramidal neurons, altering the neurons' epigenetic state in the nucleus via signalling distinct from, and longer-lasting than, that at the plasma membrane.
Epigenetics also plays a major role in the evolution of the brain, both within humans and in the lineage leading to humans.
Development
Developmental epigenetics can be divided into predetermined and probabilistic epigenesis. Predetermined epigenesis is a unidirectional movement from structural development in DNA to the functional maturation of the protein. "Predetermined" here means that development is scripted and predictable. Probabilistic epigenesis, on the other hand, is a bidirectional structure-function development, with experiences and external factors molding development.
Somatic epigenetic inheritance, particularly through DNA and histone covalent modifications and nucleosome repositioning, is very important in the development of multicellular eukaryotic organisms. The genome sequence is static (with some notable exceptions), but cells differentiate into many different types, which perform different functions and respond differently to the environment and to intercellular signaling. Thus, as individuals develop, morphogens activate or silence genes in an epigenetically heritable fashion, giving cells a memory. In mammals, most cells terminally differentiate, with only stem cells retaining the ability to differentiate into several cell types ("totipotency" and "multipotency"). In mammals, some stem cells continue producing newly differentiated cells throughout life, such as in neurogenesis, but mammals cannot replace some lost tissues, for example, they are unable to regenerate limbs, as some other animals can. Epigenetic modifications regulate the transition from neural stem cells to glial progenitor cells (for example, differentiation into oligodendrocytes is regulated by the deacetylation and methylation of histones). Unlike animals, plant cells do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. While plants utilize many of the same epigenetic mechanisms as animals, such as chromatin remodeling, it has been hypothesized that some kinds of plant cells do not use or require "cellular memories", instead resetting their gene expression patterns using positional information from the environment and surrounding cells to determine their fate.
Epigenetic changes can occur in response to environmental exposure – for example, mice whose mothers received dietary supplementation with genistein (250 mg/kg) show epigenetic changes affecting expression of the agouti gene, which affects their fur color, weight, and propensity to develop cancer. Ongoing research is focused on exploring the impact of other known teratogens, such as diabetic embryopathy, on methylation signatures.
Controversial results from one study suggested that traumatic experiences might produce an epigenetic signal capable of being passed to future generations. Mice were trained, using foot shocks, to fear a cherry-blossom odor. The investigators reported that the mouse offspring had an increased aversion to this specific odor. They suggested epigenetic changes that increase gene expression, rather than changes in the DNA itself, in a gene, M71, that governs the functioning of an odor receptor in the nose that responds specifically to the cherry-blossom smell. There were physical changes that correlated with olfactory (smell) function in the brains of the trained mice and their descendants. Several criticisms were reported, including the study's low statistical power, taken as evidence of some irregularity such as bias in reporting results. Due to limits of sample size, there is a probability that an effect will not be demonstrated to within statistical significance even if it exists. The criticism suggested that the probability that all the experiments reported would show positive results, if an identical protocol was followed and the claimed effects exist, is merely 0.4%. The authors also did not indicate which mice were siblings, and treated all of the mice as statistically independent. The original researchers pointed out negative results in the paper's appendix that the criticism omitted from its calculations, and undertook to track which mice were siblings in the future.
Transgenerational
Epigenetic mechanisms were a necessary part of the evolutionary origin of cell differentiation. Although epigenetics in multicellular organisms is generally thought to be a mechanism involved in differentiation, with epigenetic patterns "reset" when organisms reproduce, there have been some observations of transgenerational epigenetic inheritance (e.g., the phenomenon of paramutation observed in maize). Although most of these multigenerational epigenetic traits are gradually lost over several generations, the possibility remains that multigenerational epigenetics could be another aspect to evolution and adaptation.
As mentioned above, some define epigenetics as requiring heritability.
A sequestered germ line or Weismann barrier is specific to animals, and epigenetic inheritance is more common in plants and microbes. Eva Jablonka, Marion J. Lamb and Étienne Danchin have argued that these effects may require enhancements to the standard conceptual framework of the modern synthesis and have called for an extended evolutionary synthesis. Other evolutionary biologists, such as John Maynard Smith, have incorporated epigenetic inheritance into population-genetics models or are openly skeptical of the extended evolutionary synthesis (Michael Lynch). Thomas Dickins and Qazi Rahman state that epigenetic mechanisms such as DNA methylation and histone modification are genetically inherited under the control of natural selection and therefore fit under the earlier "modern synthesis".
Two important ways in which epigenetic inheritance can differ from traditional genetic inheritance, with important consequences for evolution, are:
rates of epimutation can be much faster than rates of mutation
the epimutations are more easily reversible
In plants, heritable DNA methylation mutations are 100,000 times more likely to occur than DNA mutations. An epigenetically inherited element such as the PSI+ system can act as a "stop-gap", good enough for short-term adaptation, allowing the lineage to survive long enough for mutation and/or recombination to genetically assimilate the adaptive phenotypic change. The existence of this possibility increases the evolvability of a species.
More than 100 cases of transgenerational epigenetic inheritance have been reported in a wide range of organisms, including prokaryotes, plants, and animals. For instance, mourning-cloak butterflies change color through hormone changes in response to experiments with varying temperatures.
The filamentous fungus Neurospora crassa is a prominent model system for understanding the control and function of cytosine methylation. In this organism, DNA methylation is associated with relics of a genome-defense system called RIP (repeat-induced point mutation) and silences gene expression by inhibiting transcription elongation.
The yeast prion PSI is generated by a conformational change of a translation termination factor, which is then inherited by daughter cells. This can provide a survival advantage under adverse conditions, exemplifying epigenetic regulation which enables unicellular organisms to respond rapidly to environmental stress. Prions can be viewed as epigenetic agents capable of inducing a phenotypic change without modification of the genome.
Direct detection of epigenetic marks in microorganisms is possible with single molecule real time sequencing, in which polymerase sensitivity allows for measuring methylation and other modifications as a DNA molecule is being sequenced. Several projects have demonstrated the ability to collect genome-wide epigenetic data in bacteria.
Epigenetics in bacteria
While epigenetics is of fundamental importance in eukaryotes, especially metazoans, it plays a different role in bacteria. Most importantly, eukaryotes use epigenetic mechanisms primarily to regulate gene expression, which bacteria rarely do. However, bacteria make widespread use of postreplicative DNA methylation for the epigenetic control of DNA-protein interactions. Bacteria also use DNA adenine methylation (rather than DNA cytosine methylation) as an epigenetic signal. DNA adenine methylation is important in bacterial virulence in organisms such as Escherichia coli, Salmonella, Vibrio, Yersinia, Haemophilus, and Brucella. In Alphaproteobacteria, methylation of adenine regulates the cell cycle and couples gene transcription to DNA replication. In Gammaproteobacteria, adenine methylation provides signals for DNA replication, chromosome segregation, mismatch repair, packaging of bacteriophage, transposase activity and regulation of gene expression. There exists a genetic switch controlling Streptococcus pneumoniae (the pneumococcus) that allows the bacterium to randomly change its characteristics into six alternative states, a finding that could pave the way to improved vaccines. Each form is randomly generated by a phase-variable methylation system. The ability of the pneumococcus to cause deadly infections is different in each of these six states. Similar systems exist in other bacterial genera. In Bacillota such as Clostridioides difficile, adenine methylation regulates sporulation, biofilm formation and host-adaptation.
Medicine
Epigenetics has many and varied potential medical applications.
Twins
Direct comparisons of identical twins constitute an optimal model for interrogating environmental epigenetics. In the case of humans with different environmental exposures, monozygotic (identical) twins were epigenetically indistinguishable during their early years, while older twins had remarkable differences in the overall content and genomic distribution of 5-methylcytosine DNA and histone acetylation. The twin pairs who had spent less of their lifetime together and/or had greater differences in their medical histories were those who showed the largest differences in their levels of 5-methylcytosine DNA and acetylation of histones H3 and H4.
Dizygotic (fraternal) and monozygotic (identical) twins show evidence of epigenetic influence in humans. DNA sequence differences that would be abundant in a singleton-based study do not interfere with the analysis. Environmental differences can produce long-term epigenetic effects, and different developmental monozygotic twin subtypes may be different with respect to their susceptibility to be discordant from an epigenetic point of view.
A high-throughput study, which denotes technology that looks at extensive genetic markers, focused on epigenetic differences between monozygotic twins to compare global and locus-specific changes in DNA methylation and histone modifications in a sample of 40 monozygotic twin pairs. In this case, only healthy twin pairs were studied, but a wide range of ages was represented, between 3 and 74 years. One of the major conclusions from this study was that there is an age-dependent accumulation of epigenetic differences between the two siblings of twin pairs. This accumulation suggests the existence of epigenetic "drift". Epigenetic drift is the term given to epigenetic modifications as they occur as a direct function of age. While age is a known risk factor for many diseases, age-related methylation has been found to occur differentially at specific sites along the genome. Over time, this can result in measurable differences between biological and chronological age. Epigenetic changes have been found to be reflective of lifestyle and may act as functional biomarkers of disease before the clinical threshold is reached.
A more recent study, where 114 monozygotic twins and 80 dizygotic twins were analyzed for the DNA methylation status of around 6000 unique genomic regions, concluded that epigenetic similarity at the time of blastocyst splitting may also contribute to phenotypic similarities in monozygotic co-twins. This supports the notion that the microenvironment at early stages of embryonic development can be quite important for the establishment of epigenetic marks. Congenital genetic disease is well understood, and it is clear that epigenetics can play a role, for example, in the case of Angelman syndrome and Prader–Willi syndrome. These are conventional genetic diseases caused by gene deletions or gene inactivation, but they are unusually common because genomic imprinting renders individuals essentially hemizygous at the affected loci, so a single gene knockout is sufficient to cause the disease, whereas most genetic diseases would require both copies to be knocked out.
Genomic imprinting
Some human disorders are associated with genomic imprinting, a phenomenon in mammals where the father and mother contribute different epigenetic patterns for specific genomic loci in their germ cells. The best-known case of imprinting in human disorders is that of Angelman syndrome and Prader–Willi syndrome – both can be produced by the same genetic mutation, chromosome 15q partial deletion, and the particular syndrome that will develop depends on whether the mutation is inherited from the child's mother or from their father.
In the Överkalix study, paternal (but not maternal) grandsons of Swedish men who were exposed during preadolescence to famine in the 19th century were less likely to die of cardiovascular disease. If food was plentiful, then diabetes mortality in the grandchildren increased, suggesting that this was a transgenerational epigenetic inheritance. The opposite effect was observed for females – the paternal (but not maternal) granddaughters of women who experienced famine while in the womb (and therefore while their eggs were being formed) lived shorter lives on average.
Examples of drugs altering gene expression from epigenetic events
Beta-lactam antibiotics can alter glutamate receptor activity, and cyclosporine acts on multiple transcription factors. Additionally, lithium can affect the autophagy of aberrant proteins, and chronic use of opioid drugs can increase the expression of genes associated with addictive phenotypes.
Parental nutrition, in utero exposure to stress or endocrine disrupting chemicals, male-induced maternal effects such as the attraction of differential mate quality, and maternal as well as paternal age, and offspring gender could all possibly influence whether a germline epimutation is ultimately expressed in offspring and the degree to which intergenerational inheritance remains stable throughout posterity. However, whether and to what extent epigenetic effects can be transmitted across generations remains unclear, particularly in humans.
Addiction
Addiction is a disorder of the brain's reward system which arises through transcriptional and neuroepigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., morphine, cocaine, sexual intercourse, gambling). Transgenerational epigenetic inheritance of addictive phenotypes has been noted to occur in preclinical studies. However, robust evidence in support of the persistence of epigenetic effects across multiple generations has yet to be established in humans; an example would be an epigenetic effect of prenatal exposure to smoking observed in great-grandchildren who had not themselves been exposed.
Research
The two forms of heritable information, namely genetic and epigenetic, are collectively called dual inheritance. Members of the APOBEC/AID family of cytosine deaminases may concurrently influence genetic and epigenetic inheritance using similar molecular mechanisms, and may be a point of crosstalk between these conceptually compartmentalized processes.
Fluoroquinolone antibiotics induce epigenetic changes in mammalian cells through iron chelation. This leads to epigenetic effects through inhibition of α-ketoglutarate-dependent dioxygenases that require iron as a co-factor.
Various pharmacological agents are applied to produce induced pluripotent stem cells (iPSC) or to maintain the embryonic stem cell (ESC) phenotype via epigenetic approaches. Adult stem cells such as bone marrow stem cells have also shown a potential to differentiate into cardiac-competent cells when treated with the G9a histone methyltransferase inhibitor BIX01294.
Cell plasticity, which is the adaptation of cells to stimuli without changes in their genetic code, requires epigenetic changes. These have been observed in cell plasticity in cancer cells during the epithelial-to-mesenchymal transition and also in immune cells, such as macrophages. Interestingly, metabolic changes underlie these adaptations, since various metabolites play crucial roles in the chemistry of epigenetic marks. These include, for instance, alpha-ketoglutarate, which is required for histone demethylation, and acetyl-Coenzyme A, which is required for histone acetylation.
Epigenome editing
Mechanisms of epigenetic regulation of gene expression that could be altered or exploited in epigenome editing include mRNA/lncRNA modification, DNA methylation modification and histone modification.
CpG sites, SNPs and biological traits
Methylation is a widely characterized mechanism of genetic regulation that can determine biological traits. However, strong experimental evidence correlates methylation patterns at SNPs with biological traits, an important additional feature beyond the classical activation/inhibition epigenetic dogma. Molecular interaction data, supported by colocalization analyses, identify multiple nuclear regulatory pathways, linking sequence variation to disturbances in DNA methylation and to molecular and phenotypic variation.
UBASH3B locus
UBASH3B encodes a protein with tyrosine phosphatase activity, which has previously been linked to advanced neoplasia. SNP rs7115089 was identified as influencing DNA methylation and expression of this locus, as well as body mass index (BMI). In fact, SNP rs7115089 is strongly associated with BMI and with genetic variants linked to other cardiovascular and metabolic traits in GWASs. New studies suggest UBASH3B as a potential mediator of adiposity and cardiometabolic disease. In addition, animal models demonstrated that UBASH3B expression is an indicator of caloric restriction that may drive programmed susceptibility to obesity, and it is associated with other measures of adiposity in human peripheral blood.
NFKBIE locus
SNP rs730775 is located in the first intron of NFKBIE and is a cis eQTL for NFKBIE in whole blood. Nuclear factor (NF)-κB inhibitor ε (NFKBIE) directly inhibits NF-κB1 activity, is significantly co-expressed with NF-κB1, and is associated with rheumatoid arthritis. Colocalization analysis supports the idea that variants for the majority of the CpG sites at SNP rs730775 cause genetic variation at the NFKBIE locus, which is suggested to be linked to rheumatoid arthritis through trans-acting regulation of DNA methylation by NF-κB.
FADS1 locus
Fatty acid desaturase 1 (FADS1) is a key enzyme in the metabolism of fatty acids. Moreover, rs174548 in the FADS1 gene shows increased correlation with DNA methylation in people with a high abundance of CD8+ T cells. SNP rs174548 is strongly associated with concentrations of arachidonic acid and other metabolites in fatty acid metabolism, with blood eosinophil counts, and with inflammatory diseases such as asthma. Interaction results indicated a correlation between rs174548 and asthma, providing new insights into fatty acid metabolism in CD8+ T cells and its relation to immune phenotypes.
Pseudoscience
As epigenetics is in the early stages of development as a science and is surrounded by sensationalism in the public media, David Gorski and geneticist Adam Rutherford have advised caution against the proliferation of false and pseudoscientific conclusions by new age authors making unfounded suggestions that a person's genes and health can be manipulated by mind control. Misuse of the scientific term by quack authors has produced misinformation among the general public.
| Biology and health sciences | Genetics and taxonomy | null |
49084 | https://en.wikipedia.org/wiki/Glans%20penis | Glans penis | In male human anatomy, the glans penis or penile glans, commonly referred to as the glans (from Latin glans, meaning "acorn"), is the bulbous structure at the distal end of the human penis that is the human male's most sensitive erogenous zone and primary anatomical source of sexual pleasure. The glans penis is present in the male reproductive organs of humans and most other mammals where it may appear smooth, spiny, elongated or divided. It is externally lined with mucosal tissue, which creates a smooth texture and glossy appearance. In humans, the glans is located over the distal ends of the corpora cavernosa and is a continuation of the corpus spongiosum of the penis. The urinary meatus opens at its summit, and the corona glandis forms at its base. An elastic band of tissue, known as the frenulum, runs on its ventral surface. In men who are not circumcised, it is completely or partially covered by a fold of skin called the foreskin. In adults, the foreskin can generally be retracted over and past the glans manually or sometimes automatically during an erection.
The glans penis develops as the terminal end of the genital tubercle during the embryonic development of the male fetus. The tubercle is present in the embryos of both sexes as an outgrowth in the caudal region that later develops into a primordial phallus. Exposure to male hormones (androgens) initiates the tubercle's development into a penis making the glans penis anatomically homologous to the clitoral glans in females.
The glans is more commonly known as the "head" or the "tip" of the penis, and colloquially referred to in British English and Irish English as the "bellend".
Structure
The glans penis is a body of spongy erectile tissue that is moulded on the rounded ends of the two corpora cavernosa penis, extending farther on their upper than on their lower surfaces. It is the expanded cap of the corpus spongiosum, a sponge-like region that surrounds the male urethra within the penis maintaining it as a viable channel for ejaculation. The glans is covered by a stratified squamous epithelium and a dense layer of connective tissue equivalent to the dermis of typical skin. The papillary layer of the dermis blends into the dense connective tissue forming the tunica albuginea of the corpus spongiosum behind the glans. The external lining with mucosal tissue is responsible for its typical smooth texture and appearance.
The increase of arterial flow during erection fills the erectile tissue with blood causing the glans to grow in size and sensitivity. While the penis is rigid when erect, the glans itself remains slightly softer. The soft cushiony texture of the glans absorbs impact during rigorous instances of copulation. The proportional size of the glans penis can vary among males. While the shape of the glans is typically acorn-like, in some men it might be wider in circumference than the shaft, giving the penis a mushroom-like appearance, while in others it might be narrower and more akin to a probe in shape. Some researchers have suggested that the glans has evolved to become acorn-, mushroom- or cone-shaped so that during copulation it acts to remove any semen still there from previous sex partners, but this is not supported when looking at primate relatives who have different mating behaviors.
At the summit of the glans is the slit-like vertical external urethral orifice, called the urinary meatus, through which urine, semen and pre-ejaculatory fluid exit the penis. The circumference of the base of the glans forms a rounded projecting border, the corona glandis, overhanging a deep retroglandular groove known as the coronal sulcus. Behind the corona is the neck of the penis, which separates the glans and the penile shaft. Ventrally, the two glans wings merge on the midline forming the septum glandis and a triangle or a V-shaped area under it. The frenulum is the highly vascularized elastic band of tissue located on the underside of the glans that connects the foreskin to the head of the penis. The frenulum is supple enough to allow the retraction of the foreskin over the glans and pull it back when the erection is gone. In flaccid state, it tightens to narrow the foreskin opening.
Innervation
The glans and the frenulum are innervated by the bilateral dorsal nerve of the penis and the perineal nerve, both divisions of the pudendal nerve. Branches of the dorsal nerve extend through the glans ventrolaterally, displaying a three-dimensional innervation pattern. The main branches form smaller bundles of nerves that expand outwards into the tissue of the glans. The rich innervation of the glans penis reveals its function as a primary anatomical source of male sexual pleasure. Yang & Bradley argue: "the distinct pattern of innervation of the glans emphasizes its role as a sensory structure". While Yang & Bradley's (1998) report "showed no areas in the glans to be more densely innervated than others", Halata & Munger (1986) report that the density of several nerve terminals is greatest in the corona glandis.
Halata & Spathe (1997) reported: "the glans penis contains a predominance of free nerve endings, numerous genital end bulbs and rarely Pacinian and Ruffinian corpuscles. Merkel nerve endings and Meissner's corpuscles (mechanoreceptors typically found in thick glabrous skin) are not present". The genital end bulbs, which are present throughout the glans, are most numerous in the corona and near the frenulum. Simple, Pacinian and Ruffinian corpuscles are identified predominantly in the corona glandis. The most numerous nerve terminals are free nerve endings, present in almost every dermal papilla of the glans as well as scattered throughout the deeper dermis.
Blood supply
The glans penis receives blood from the internal pudendal artery through its branch, the dorsal artery of the penis, which also supplies the foreskin and the penile shaft. Behind the corona, the terminal branches of the dorsal arteries anastomose with the axial arteries through perforating branches before they end in the glans. Branches of the dorsal artery curve around each side of the distal shaft to enter the glans and the frenulum ventrally. Venous drainage of the penis begins at the base of the glans. Small tributaries deriving from the corona form a venous plexus at the neck of the penis, known as the retro-coronal, or retro-balanic, plexus. Smaller paired venules run into the frenulum and the glans from its ventral surface. The deep dorsal vein, one of the two dorsal veins of the penis, serves as a common vessel receiving blood drained from the glans and the two corpora cavernosa through the circumflex veins that surround them.
Foreskin
The glans is completely or partially covered by a double-layered fold of skin, known as the foreskin. In adults, glans exposure can be easily achieved by manual retraction of the foreskin or sometimes automatically during erection. The degree of automatic foreskin retraction varies considerably depending on the foreskin length. The foreskin can be characterized as long when the preputial orifice extends beyond the glans during erection or medium when the orifice is located around the meatus. The primary purpose of the foreskin is considered to be the covering of the glans and the urinary meatus, while also maintaining the mucosa in a moist environment.
Foreskin retractability gradually increases with age. In infancy the foreskin is fused to the glans; it remains non-retractable in early childhood and continues to be tight during preadolescence. The skin begins to loosen significantly during puberty, allowing the glans to be completely exposed when needed. By the age of eighteen, most boys have a fully retractable foreskin.
In some cases, for cultural, medical, or prophylactic reasons, men undergo circumcision or were circumcised as infants, a procedure in which the foreskin is partially or completely removed from the penis. The glans of circumcised men remains fully exposed and dry. Several studies have suggested the glans is generally equally sensitive in both circumcised and uncircumcised penises.
Development
The glans develops as the terminal end of a phallic structure, called the genital tubercle, which forms in the embryo regardless of sex during the early weeks of pregnancy. Initially undifferentiated, the tubercle develops into a penis during the development of the reproductive system depending on the exposure to male hormones, such as androgens. In mammals, sexual differentiation is determined by the sperm that carries either an X or a Y (male) chromosome. The Y chromosome contains a sex-determining gene (SRY) that encodes a transcription factor for the protein TDF (testis determining factor) and triggers the creation of testosterone for the embryo's development into a male.
Although the sex of the infant is determined from the moment of conception, the complete external differentiation of the organs begins about eight or nine weeks after conception. Some sources state that the process will be completed by the twelfth week, while others state that it is clearly evident by the thirteenth week and that the sex organs are fully developed by the sixteenth week.
Both the penis and clitoris develop from the same tissues that become the glans and shaft of the penis and this shared embryonic origin makes these two organs homologous (different versions of the same structure). In the female fetus the absence of testosterone will stop the growth of the phallus causing the tubercle to shrink and form the clitoris. In the male fetus the presence of a Y chromosome leads to the development of the testes, which secrete a large amount of hormones called androgens. These hormones will cause the masculinization of the phenotypically indifferent organs. When exposed to testosterone, the genital tubercle elongates to form the penis. By fusion of the urogenital folds—elongated spindle-shaped structures that contribute to the formation of the urethral groove on the belly aspect of the genital tubercle—the urogenital sinus closes completely to form the spongy urethra and the labioscrotal swellings unite to form the scrotum. The secretion of testosterone during this phase plays a decisive role in the final shaping of the penis. After birth, testosterone levels drop significantly until puberty.
Clinical significance
The epithelium of the glans penis consists of mucosal tissue. Birley et al. report that excessive washing with soap may dry the mucous membrane which covers the glans penis and cause non-specific dermatitis. The condition is described as an inflammation of the skin, often caused by an irritating substance or a contact allergy. Sensitivity to chemicals in certain products can cause an allergic reaction, including irritation, itching and rash.
Inflammation of the glans penis is known as balanitis. It is a treatable condition that occurs in about 3–11% of males (up to 35% of diabetic males). Edwards reported that it is generally more common in males who have poor hygiene habits or have not been circumcised. It has many causes, including irritation or infection with a wide variety of pathogens. Symptoms of balanitis may appear suddenly or develop gradually. They might include pain, irritation, redness or red patches on the glans penis. Careful identification of the cause, with the aid of patient history, physical examination, swabs and cultures, and biopsy, is essential in order to determine the proper treatment.
The meatus (opening) of the urethra located at the tip of the glans might become subject to meatal stenosis, a condition mostly seen as a late complication of circumcision. It occurs in about 2–20% of circumcised boys and is rarely seen in uncircumcised men. It is characterized by a narrowing of the meatus, which might cause sudden or frequent urges to urinate and burning during urination.
Individuals who experience difficulty in achieving full engorgement of the glans penis may be diagnosed with soft glans syndrome (glans insufficiency syndrome). It often goes undiagnosed in the general population due to the lack of a standardized nomenclature.
Other animals
Mammals
Carnivores
Male felids are able to urinate backwards by curving the tip of the glans penis backward. In cats, the glans penis is covered with spines. Penile spines also occur on the glans of male and female spotted hyenas. In male dogs the glans penis is smooth and consists of two parts called the bulbus glandis and pars longa glandis. The glans of a fossa's penis extends about halfway down the shaft and is spiny except at the tip. In comparison, the glans of felids is short and spiny, while that of viverrids is smooth and long.
Rodents
The glans penis of the marsh rice rat is long and robust, averaging long and broad. Winkelmann's mouse can most readily be distinguished from its close relatives by its partially corrugated glans penis. In Thomasomys ucucha, the glans penis is rounded, short, and small and is superficially divided into left and right halves by a trough at the top and a ridge at the bottom. Most of the glans is covered with spines, except for an area near the tip. The glans penis of a male cape ground squirrel is large with a prominent baculum.
Perissodactyls
When erect, the glans of a horse's penis increases by 3 to . The urethra opens within the urethral fossa, a small pouch at the distal end of the glans. Unlike the human glans, the glans of a horse's penis extends backwards on its shaft.
Marsupials, monotremes and bats
The shape of the glans varies among different marsupial species. In most marsupials, the glans is divided, but male macropods have an undivided glans penis.
The glans penis is also divided into two parts in platypuses and echidnas.
Males of Racey's pipistrelle bat have a narrow, egg-shaped glans penis.
| Biology and health sciences | Reproductive system | Biology |
49090 | https://en.wikipedia.org/wiki/Ohm%27s%20law | Ohm's law | Ohm's law states that the electric current through a conductor between two points is directly proportional to the voltage across the two points. Introducing the constant of proportionality, the resistance, one arrives at the three mathematical equations used to describe this relationship:
V = IR,  I = V/R,  R = V/I
where I is the current through the conductor, V is the voltage measured across the conductor and R is the resistance of the conductor. More specifically, Ohm's law states that the R in this relation is constant, independent of the current. If the resistance is not constant, the previous equation cannot be called Ohm's law, but it can still be used as a definition of static/DC resistance. Ohm's law is an empirical relation which accurately describes the conductivity of the vast majority of electrically conductive materials over many orders of magnitude of current. However, some materials do not obey Ohm's law; these are called non-ohmic.
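A minimal sketch of the three algebraic forms in Python; all component values below are hypothetical:

```python
# Minimal sketch of the three algebraic forms of Ohm's law.
# All component values are hypothetical.

def voltage(current_a, resistance_ohm):
    """V = I * R"""
    return current_a * resistance_ohm

def current(voltage_v, resistance_ohm):
    """I = V / R"""
    return voltage_v / resistance_ohm

def resistance(voltage_v, current_a):
    """R = V / I (the static/DC resistance if the device is non-ohmic)"""
    return voltage_v / current_a

print(voltage(2.0, 5.0))      # 10.0 V across a 5-ohm resistor carrying 2 A
print(current(10.0, 5.0))     # 2.0 A through a 5-ohm resistor at 10 V
print(resistance(10.0, 2.0))  # 5.0 ohms
```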
The law was named after the German physicist Georg Ohm, who, in a treatise published in 1827, described measurements of applied voltage and current through simple electrical circuits containing various lengths of wire. Ohm explained his experimental results by a slightly more complex equation than the modern form above (see below).
In physics, the term Ohm's law is also used to refer to various generalizations of the law; for example the vector form of the law used in electromagnetics and material science:
J = σE
where J is the current density at a given location in a resistive material, E is the electric field at that location, and σ (sigma) is a material-dependent parameter called the conductivity, defined as the inverse of resistivity ρ (rho). This reformulation of Ohm's law is due to Gustav Kirchhoff.
History
In January 1781, before Georg Ohm's work, Henry Cavendish experimented with Leyden jars and glass tubes of varying diameter and length filled with salt solution. He measured the current by noting how strong a shock he felt as he completed the circuit with his body. Cavendish wrote that the "velocity" (current) varied directly as the "degree of electrification" (voltage). He did not communicate his results to other scientists at the time, and his results were unknown until James Clerk Maxwell published them in 1879.
Francis Ronalds delineated "intensity" (voltage) and "quantity" (current) for the dry pile—a high voltage source—in 1814 using a gold-leaf electrometer. He found for a dry pile that the relationship between the two parameters was not proportional under certain meteorological conditions.
Ohm did his work on resistance in the years 1825 and 1826, and published his results in 1827 as the book Die galvanische Kette, mathematisch bearbeitet ("The galvanic circuit investigated mathematically"). He drew considerable inspiration from Joseph Fourier's work on heat conduction in the theoretical explanation of his work. For experiments, he initially used voltaic piles, but later used a thermocouple as this provided a more stable voltage source in terms of internal resistance and constant voltage. He used a galvanometer to measure current, and knew that the voltage between the thermocouple terminals was proportional to the junction temperature. He then added test wires of varying length, diameter, and material to complete the circuit. He found that his data could be modeled through the equation
x = a / (b + ℓ)
where x was the reading from the galvanometer, ℓ was the length of the test conductor, a depended on the thermocouple junction temperature, and b was a constant of the entire setup. From this, Ohm determined his law of proportionality and published his results.
In modern notation we would write,
I = ε / (r + R)
where ε is the open-circuit emf of the thermocouple, r is the internal resistance of the thermocouple and R is the resistance of the test wire. In terms of the length of the wire this becomes,
I = ε / (r + ℛℓ)
where ℛ is the resistance of the test wire per unit length. Thus, Ohm's coefficients are
a = ε/ℛ,  b = r/ℛ.
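Taking the reconstruction above at face value, a short sketch (with invented values for the emf and resistances) shows the hyperbolic fall-off of current with test-wire length that Ohm measured:

```python
# Ohm's empirical relation x = a / (b + l): the galvanometer reading
# falls off hyperbolically with test-wire length l.
# epsilon, r and R_per_m are invented illustrative values.

epsilon = 1.0   # open-circuit emf of the thermocouple, volts (assumed)
r = 2.0         # internal resistance of the thermocouple, ohms (assumed)
R_per_m = 0.5   # resistance of the test wire per metre, ohms/m (assumed)

for length_m in [0.0, 1.0, 2.0, 4.0, 8.0]:
    current_a = epsilon / (r + R_per_m * length_m)  # I = eps / (r + R*l)
    print(f"l = {length_m:4.1f} m  ->  I = {current_a:.4f} A")
```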
Ohm's law was probably the most important of the early quantitative descriptions of the physics of electricity. We consider it almost obvious today. When Ohm first published his work, this was not the case; critics reacted to his treatment of the subject with hostility. They called his work a "web of naked fancies" and the Minister of Education proclaimed that "a professor who preached such heresies was unworthy to teach science." The prevailing scientific philosophy in Germany at the time asserted that experiments need not be performed to develop an understanding of nature because nature is so well ordered, and that scientific truths may be deduced through reasoning alone. Also, Ohm's brother Martin, a mathematician, was battling the German educational system. These factors hindered the acceptance of Ohm's work, and his work did not become widely accepted until the 1840s. However, Ohm received recognition for his contributions to science well before he died.
In the 1850s, Ohm's law was widely known and considered proved. Alternatives such as "Barlow's law", were discredited, in terms of real applications to telegraph system design, as discussed by Samuel F. B. Morse in 1855.
The electron was discovered in 1897 by J. J. Thomson, and it was quickly realized that it was the particle (charge carrier) that carried electric currents in electric circuits. In 1900, the first (classical) model of electrical conduction, the Drude model, was proposed by Paul Drude, which finally gave a scientific explanation for Ohm's law. In this model, a solid conductor consists of a stationary lattice of atoms (ions), with conduction electrons moving randomly in it. A voltage across a conductor causes an electric field, which accelerates the electrons in the direction of the electric field, causing a drift of electrons which is the electric current. However the electrons collide with atoms which causes them to scatter and randomizes their motion, thus converting kinetic energy to heat (thermal energy). Using statistical distributions, it can be shown that the average drift velocity of the electrons, and thus the current, is proportional to the electric field, and thus the voltage, over a wide range of voltages.
The development of quantum mechanics in the 1920s modified this picture somewhat, but in modern theories the average drift velocity of electrons can still be shown to be proportional to the electric field, thus deriving Ohm's law. In 1927 Arnold Sommerfeld applied the quantum Fermi-Dirac distribution of electron energies to the Drude model, resulting in the free electron model. A year later, Felix Bloch showed that electrons move in waves (Bloch electrons) through a solid crystal lattice, so scattering off the lattice atoms as postulated in the Drude model is not a major process; the electrons scatter off impurity atoms and defects in the material. The final successor, the modern quantum band theory of solids, showed that the electrons in a solid cannot take on any energy as assumed in the Drude model but are restricted to energy bands, with gaps between them of energies that electrons are forbidden to have. The size of the band gap is a characteristic of a particular substance which has a great deal to do with its electrical resistivity, explaining why some substances are electrical conductors, some semiconductors, and some insulators.
While the old term for electrical conductance, the mho (the inverse of the resistance unit ohm), is still used, a new name, the siemens, was adopted in 1971, honoring Ernst Werner von Siemens. The siemens is preferred in formal papers.
In the 1920s, it was discovered that the current through a practical resistor actually has statistical fluctuations, which depend on temperature, even when voltage and resistance are exactly constant; this fluctuation, now known as Johnson–Nyquist noise, is due to the discrete nature of charge. This thermal effect implies that measurements of current and voltage that are taken over sufficiently short periods of time will yield ratios of V/I that fluctuate from the value of R implied by the time average or ensemble average of the measured current; Ohm's law remains correct for the average current, in the case of ordinary resistive materials.
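The magnitude of these fluctuations follows the standard Johnson–Nyquist expression v_rms = √(4·k_B·T·R·Δf); the resistor value, temperature and bandwidth in the sketch below are arbitrary illustrative choices:

```python
# Johnson-Nyquist thermal noise of a resistor: v_rms = sqrt(4*kB*T*R*df).
# Resistance, temperature and bandwidth are arbitrary illustrative values.
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0              # temperature, K
R = 10_000.0           # resistance, ohms
bandwidth_hz = 20_000  # measurement bandwidth, Hz

v_rms = math.sqrt(4 * kB * T * R * bandwidth_hz)
print(f"v_rms = {v_rms * 1e6:.2f} microvolts")  # about 1.8 uV here
```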
Ohm's work long preceded Maxwell's equations and any understanding of frequency-dependent effects in AC circuits. Modern developments in electromagnetic theory and circuit theory do not contradict Ohm's law when they are evaluated within the appropriate limits.
Scope
Ohm's law is an empirical law, a generalization from many experiments that have shown that current is approximately proportional to electric field for most materials. It is less fundamental than Maxwell's equations and is not always obeyed. Any given material will break down under a strong-enough electric field, and some materials of interest in electrical engineering are "non-ohmic" under weak fields.
Ohm's law has been observed on a wide range of length scales. In the early 20th century, it was thought that Ohm's law would fail at the atomic scale, but experiments have not borne out this expectation. As of 2012, researchers have demonstrated that Ohm's law works for silicon wires as small as four atoms wide and one atom high.
Microscopic origins
The dependence of the current density on the applied electric field is essentially quantum mechanical in nature (see Classical and quantum conductivity). A qualitative description leading to Ohm's law can be based upon classical mechanics using the Drude model developed by Paul Drude in 1900.
The Drude model treats electrons (or other charge carriers) like pinballs bouncing among the ions that make up the structure of the material. Electrons will be accelerated in the opposite direction to the electric field by the average electric field at their location. With each collision, though, the electron is deflected in a random direction with a velocity that is much larger than the velocity gained by the electric field. The net result is that electrons take a zigzag path due to the collisions, but generally drift in a direction opposing the electric field.
The drift velocity then determines the electric current density and its relationship to E and is independent of the collisions. Drude calculated the average drift velocity from p = −eEτ where p is the average momentum, −e is the charge of the electron and τ is the average time between the collisions. Since both the momentum and the current density are proportional to the drift velocity, the current density becomes proportional to the applied electric field; this leads to Ohm's law.
Hydraulic analogy
A hydraulic analogy is sometimes used to describe Ohm's law. Water pressure, measured in pascals (or PSI), is the analog of voltage because establishing a water pressure difference between two points along a (horizontal) pipe causes water to flow. The water volume flow rate, as in liters per second, is the analog of current, as in coulombs per second. Finally, flow restrictors—such as apertures placed in pipes between points where the water pressure is measured—are the analog of resistors. We say that the rate of water flow through an aperture restrictor is proportional to the difference in water pressure across the restrictor. Similarly, the rate of flow of electrical charge, that is, the electric current, through an electrical resistor is proportional to the difference in voltage measured across the resistor. More generally, the hydraulic head may be taken as the analog of voltage, and Ohm's law is then analogous to Darcy's law which relates hydraulic head to the volume flow rate via the hydraulic conductivity.
Flow and pressure variables can be calculated in fluid flow network with the use of the hydraulic ohm analogy. The method can be applied to both steady and transient flow situations. In the linear laminar flow region, Poiseuille's law describes the hydraulic resistance of a pipe, but in the turbulent flow region the pressure–flow relations become nonlinear.
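To make the analogy concrete in the laminar regime, Poiseuille's law can be recast as a hydraulic "resistance" R_hyd = 8μL/(πr⁴), so that Q = Δp/R_hyd mirrors I = V/R. The fluid and pipe parameters below are assumed for illustration:

```python
# Hydraulic analog of Ohm's law in the laminar regime:
# Poiseuille's law gives R_hyd = 8*mu*L / (pi*r**4), so that
# Q = delta_p / R_hyd plays the role of I = V / R.
# Fluid and pipe parameters are assumed illustrative values.
import math

mu = 1.0e-3      # dynamic viscosity of water, Pa*s (about 20 C)
L = 2.0          # pipe length, m (assumed)
r = 0.005        # pipe radius, m (assumed)
delta_p = 500.0  # pressure difference across the pipe, Pa (assumed)

R_hyd = 8 * mu * L / (math.pi * r ** 4)  # hydraulic resistance, Pa*s/m^3
Q = delta_p / R_hyd                      # volume flow rate, m^3/s
print(f"R_hyd = {R_hyd:.3e} Pa*s/m^3, Q = {Q:.3e} m^3/s")
```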
The hydraulic analogy to Ohm's law has been used, for example, to approximate blood flow through the circulatory system.
Circuit analysis
In circuit analysis, three equivalent expressions of Ohm's law are used interchangeably:
V = IR,  I = V/R,  R = V/I.
Each equation is quoted by some sources as the defining relationship of Ohm's law,
or all three are quoted, or derived from a proportional form,
or even just the two that do not correspond to Ohm's original statement may sometimes be given.
The interchangeability of the equation may be represented by a triangle, where V (voltage) is placed on the top section, the I (current) is placed to the left section, and the R (resistance) is placed to the right. The divider between the top and bottom sections indicates division (hence the division bar).
Resistive circuits
Resistors are circuit elements that impede the passage of electric charge in agreement with Ohm's law, and are designed to have a specific resistance value R. In schematic diagrams, a resistor is shown as a long rectangle or zig-zag symbol. An element (resistor or conductor) that behaves according to Ohm's law over some operating range is referred to as an ohmic device (or an ohmic resistor) because Ohm's law and a single value for the resistance suffice to describe the behavior of the device over that range.
Ohm's law holds for circuits containing only resistive elements (no capacitances or inductances) for all forms of driving voltage or current, regardless of whether the driving voltage or current is constant (DC) or time-varying such as AC. At any instant of time Ohm's law is valid for such circuits.
Resistors which are in series or in parallel may be grouped together into a single "equivalent resistance" in order to apply Ohm's law in analyzing the circuit.
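A brief sketch of how such groupings reduce to a single equivalent resistance, with hypothetical component values:

```python
# Equivalent resistance of series and parallel resistor groups.
# Component values are hypothetical.

def series(*resistances):
    """R_eq = R1 + R2 + ... for resistors in series."""
    return sum(resistances)

def parallel(*resistances):
    """1/R_eq = 1/R1 + 1/R2 + ... for resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

# A 100-ohm resistor in series with two 200-ohm resistors in parallel:
r_eq = series(100.0, parallel(200.0, 200.0))
print(r_eq)         # 200.0 ohms
print(12.0 / r_eq)  # Ohm's law: 0.06 A drawn from a 12 V source
```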
Reactive circuits with time-varying signals
When reactive elements such as capacitors, inductors, or transmission lines are involved in a circuit to which AC or time-varying voltage or current is applied, the relationship between voltage and current becomes the solution to a differential equation, so Ohm's law (as defined above) does not directly apply since that form contains only resistances having value R, not complex impedances which may contain capacitance (C) or inductance (L).
Equations for time-invariant AC circuits take the same form as Ohm's law. However, the variables are generalized to complex numbers and the current and voltage waveforms are complex exponentials.
In this approach, a voltage or current waveform takes the form Ae^(st), where t is time, s is a complex parameter, and A is a complex scalar. In any linear time-invariant system, all of the currents and voltages can be expressed with the same s parameter as the input to the system, allowing the time-varying complex exponential term to be canceled out and the system described algebraically in terms of the complex scalars in the current and voltage waveforms.
The complex generalization of resistance is impedance, usually denoted Z; it can be shown that for an inductor,
Z = sL
and for a capacitor,
Z = 1/(sC).
We can now write,
V = ZI
where V and I are the complex scalars in the voltage and current respectively and Z is the complex impedance.
This form of Ohm's law, with Z taking the place of R, generalizes the simpler form. When Z is complex, only the real part is responsible for dissipating heat.
In a general AC circuit, Z varies strongly with the frequency parameter s, and so also will the relationship between voltage and current.
For the common case of a steady sinusoid, the s parameter is taken to be jω, corresponding to a complex sinusoid Ae^(jωt). The real parts of such complex current and voltage waveforms describe the actual sinusoidal currents and voltages in a circuit, which can be in different phases due to the different complex scalars.
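Python's built-in complex arithmetic can sketch the generalized relation V = ZI for a steady sinusoid with s = jω; the series R-L-C component values below are arbitrary:

```python
# Generalized Ohm's law V = Z * I for a steady sinusoid (s = j*omega).
# Component values are arbitrary illustrative choices.
import cmath
import math

f = 1000.0               # frequency, Hz
omega = 2 * math.pi * f  # angular frequency, rad/s
s = 1j * omega           # steady sinusoid: s = j*omega

R = 100.0   # resistor, ohms
L = 10e-3   # inductor, henries
C = 1e-6    # capacitor, farads

Z = R + s * L + 1 / (s * C)  # series R-L-C impedance: Z_R + Z_L + Z_C
I = 1.0                      # assume a 1 A complex current scalar
V = Z * I                    # complex voltage scalar

print(f"|Z| = {abs(Z):.1f} ohms, phase = {cmath.phase(Z):.3f} rad")
```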
Linear approximations
Ohm's law is one of the basic equations used in the analysis of electrical circuits. It applies to both metal conductors and circuit components (resistors) specifically made for this behaviour. Both are ubiquitous in electrical engineering. Materials and components that obey Ohm's law are described as "ohmic" which means they produce the same value for resistance (R = V/I) regardless of the value of V or I which is applied and whether the applied voltage or current is DC (direct current) of either positive or negative polarity or AC (alternating current).
In a true ohmic device, the same value of resistance will be calculated from R = V/I regardless of the value of the applied voltage V. That is, the ratio of V/I is constant, and when current is plotted as a function of voltage the curve is linear (a straight line). If voltage is forced to some value V, then that voltage V divided by measured current I will equal R. Or if the current is forced to some value I, then the measured voltage V divided by that current I is also R. Since the plot of I versus V is a straight line, then it is also true that for any set of two different voltages V1 and V2 applied across a given device of resistance R, producing currents I1 = V1/R and I2 = V2/R, that the ratio (V1 − V2)/(I1 − I2) is also a constant equal to R. The operator "delta" (Δ) is used to represent a difference in a quantity, so we can write ΔV = V1 − V2 and ΔI = I1 − I2. Summarizing, for any truly ohmic device having resistance R, V/I = ΔV/ΔI = R for any applied voltage or current or for the difference between any set of applied voltages or currents.
There are, however, components of electrical circuits which do not obey Ohm's law; that is, their relationship between current and voltage (their I–V curve) is nonlinear (or non-ohmic). An example is the p–n junction diode. The current does not increase linearly with applied voltage for a diode. One can determine a value of current (I) for a given value of applied voltage (V) from the curve, but not from Ohm's law, since the value of "resistance" is not constant as a function of applied voltage. Further, the current only increases significantly if the applied voltage is positive, not negative. The ratio V/I for some point along the nonlinear curve is sometimes called the static, or chordal, or DC, resistance, but the value of ΔV/ΔI varies depending on the particular point along the nonlinear curve which is chosen. This means the "DC resistance" V/I at some point on the curve is not the same as what would be determined by applying an AC signal having peak amplitude ΔV volts or ΔI amps centered at that same point along the curve and measuring ΔV/ΔI. However, in some diode applications, the AC signal applied to the device is small and it is possible to analyze the circuit in terms of the dynamic, small-signal, or incremental resistance, defined as one over the slope of the V–I curve at the average value (DC operating point) of the voltage (that is, one over the derivative of current with respect to voltage). For sufficiently small signals, the dynamic resistance allows the Ohm's law small-signal resistance to be calculated as approximately one over the slope of a line drawn tangentially to the V–I curve at the DC operating point.
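To make the static-versus-dynamic distinction concrete, the sketch below uses the textbook Shockley diode equation (a standard model, not taken from this article) with assumed parameters, and compares V/I with the incremental resistance dV/dI at an operating point:

```python
# Static (V/I) versus dynamic (dV/dI) resistance of a non-ohmic device,
# using the textbook Shockley diode equation with assumed parameters.
import math

I_s = 1e-12    # saturation current, A (assumed)
V_T = 0.02585  # thermal voltage near 300 K, volts
n = 1.0        # ideality factor (assumed)

def diode_current(v):
    """Shockley equation: I = I_s * (exp(V / (n*V_T)) - 1)."""
    return I_s * (math.exp(v / (n * V_T)) - 1.0)

v_op = 0.65  # DC operating point, volts (assumed)
i_op = diode_current(v_op)

r_static = v_op / i_op              # chordal / DC resistance V/I
# dI/dV = (i_op + I_s) / (n*V_T), so the incremental resistance is:
r_dynamic = n * V_T / (i_op + I_s)  # small-signal resistance dV/dI

print(f"I = {i_op:.3e} A, static R = {r_static:.2f} ohm, "
      f"dynamic r = {r_dynamic:.3f} ohm")
```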
Temperature effects
Ohm's law has sometimes been stated as, "for a conductor in a given state, the electromotive force is proportional to the current produced." That is, the resistance, the ratio of the applied electromotive force (or voltage) to the current, "does not vary with the current strength." The qualifier "in a given state" is usually interpreted as meaning "at a constant temperature," since the resistivity of materials is usually temperature dependent. Because the conduction of current is related to Joule heating of the conducting body, according to Joule's first law, the temperature of a conducting body may change when it carries a current. The dependence of resistance on temperature therefore makes resistance depend upon the current in a typical experimental setup, making the law in this form difficult to verify directly. Maxwell and others worked out several methods to test the law experimentally in 1876, controlling for heating effects. Usually, the measurements of a sample resistance are carried out at low currents to prevent Joule heating. However, even a small current causes heating (cooling) at the first (second) sample contact due to the Peltier effect. The temperatures at the sample contacts become different, and their difference is linear in current. The voltage drop across the circuit additionally includes the Seebeck thermoelectromotive force, which is again linear in current. As a result, there exists a thermal correction to the sample resistance even at negligibly small current. The magnitude of the correction can be comparable with the sample resistance.
Relation to heat conductions
Ohm's principle predicts the flow of electrical charge (i.e. current) in electrical conductors when subjected to the influence of voltage differences; Jean-Baptiste-Joseph Fourier's principle predicts the flow of heat in heat conductors when subjected to the influence of temperature differences.
The same equation describes both phenomena, the equation's variables taking on different meanings in the two cases. Specifically, solving a heat conduction (Fourier) problem with temperature (the driving "force") and flux of heat (the rate of flow of the driven "quantity", i.e. heat energy) variables also solves an analogous electrical conduction (Ohm) problem having electric potential (the driving "force") and electric current (the rate of flow of the driven "quantity", i.e. charge) variables.
The basis of Fourier's work was his clear conception and definition of thermal conductivity. He assumed that, all else being the same, the flux of heat is strictly proportional to the gradient of temperature. Although undoubtedly true for small temperature gradients, strictly proportional behavior will be lost when real materials (e.g. ones having a thermal conductivity that is a function of temperature) are subjected to large temperature gradients.
A similar assumption is made in the statement of Ohm's law: other things being alike, the strength of the current at each point is proportional to the gradient of electric potential. The accuracy of the assumption that flow is proportional to the gradient is more readily tested, using modern measurement methods, for the electrical case than for the heat case.
Other versions
Ohm's law, in the form above, is an extremely useful equation in the field of electrical/electronic engineering because it describes how voltage, current and resistance are interrelated on a "macroscopic" level, that is, commonly, as circuit elements in an electrical circuit. Physicists who study the electrical properties of matter at the microscopic level use a closely related and more general vector equation, sometimes also referred to as Ohm's law, having variables that are closely related to the V, I, and R scalar variables of Ohm's law, but which are each functions of position within the conductor. Physicists often use this continuum form of Ohm's Law:
E = ρJ
where E is the electric field vector with units of volts per meter (analogous to V of Ohm's law which has units of volts), J is the current density vector with units of amperes per unit area (analogous to I of Ohm's law which has units of amperes), and ρ "rho" is the resistivity with units of ohm·meters (analogous to R of Ohm's law which has units of ohms). The above equation is also written as J = σE where σ "sigma" is the conductivity which is the reciprocal of ρ.
The voltage between two points is defined as:
ΔV = −∫ E · dℓ
with dℓ the element of path along the integration of the electric field vector E. If the applied E field is uniform and oriented along the length of the conductor, then defining the voltage V in the usual convention of being opposite in direction to the field, and with the understanding that the voltage V is measured differentially across the length of the conductor allowing us to drop the Δ symbol, the above vector equation reduces to the scalar equation:
V = Eℓ  or equivalently  E = V/ℓ.
Since the field is uniform in the direction of the wire's length, for a conductor having uniformly consistent resistivity ρ, the current density will also be uniform in any cross-sectional area and oriented in the direction of the wire's length, so we may write:
J = I/a.
Substituting the above two results (for E and J respectively) into the continuum form shown at the beginning of this section:
V/ℓ = ρ (I/a)  or equivalently  V = I (ρℓ/a).
The electrical resistance of a uniform conductor is given in terms of resistivity by:
R = ρℓ/a
where ℓ is the length of the conductor in SI units of meters, a is the cross-sectional area (a = πr² for a round wire, if r is the radius) in units of meters squared, and ρ is the resistivity in units of ohm·meters.
After substitution of R from the above equation into the equation preceding it, the continuum form of Ohm's law for a uniform field (and uniform current density) oriented along the length of the conductor reduces to the more familiar form:
V = IR.
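A short numerical sketch of R = ρℓ/a for a round wire; the copper resistivity is a standard handbook value, while the dimensions are invented:

```python
# Resistance of a uniform round wire: R = rho * l / a, with a = pi * r**2.
# Copper resistivity is a standard handbook value; dimensions are invented.
import math

rho = 1.68e-8      # resistivity of copper at 20 C, ohm*m
length_m = 10.0    # wire length, m (assumed)
radius_m = 0.5e-3  # wire radius, 0.5 mm (assumed)

area = math.pi * radius_m ** 2          # cross-sectional area, m^2
R = rho * length_m / area               # resistance, ohms
print(f"R = {R * 1000:.1f} milliohms")  # about 214 milliohms here
```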
A perfect crystal lattice, with low enough thermal motion and no deviations from periodic structure, would have no resistivity, but a real metal has crystallographic defects, impurities, multiple isotopes, and thermal motion of the atoms. Electrons scatter from all of these, resulting in resistance to their flow.
The more complex generalized forms of Ohm's law are important to condensed matter physics, which studies the properties of matter and, in particular, its electronic structure. In broad terms, they fall under the topic of constitutive equations and the theory of transport coefficients.
Magnetic effects
If an external B-field is present and the conductor is not at rest but moving at velocity v, then an extra term must be added to account for the current induced by the Lorentz force on the charge carriers:
J = σ(E + v × B).
In the rest frame of the moving conductor this term drops out because v′ = 0. There is no contradiction because the electric field in the rest frame differs from the E-field in the lab frame: E′ = E + v × B.
Electric and magnetic fields are relative, see Lorentz transformation.
If the current is alternating because the applied voltage or E-field varies in time, then reactance must be added to resistance to account for self-inductance, see electrical impedance. The reactance may be strong if the frequency is high or the conductor is coiled.
Conductive fluids
In a conductive fluid, such as a plasma, there is a similar effect. Consider a fluid moving with the velocity v in a magnetic field B. The relative motion induces an electric field which exerts electric force on the charged particles, giving rise to an electric current J. The equation of motion for the electron gas, with a number density n_e, is written as
m_e n_e dv_e/dt = −n_e e (E + v_e × B) − m_e n_e ν (v_e − v_i)
where e, m_e and v_e are the charge, mass and velocity of the electrons, respectively. Also, ν is the frequency of collisions of the electrons with ions, which have a velocity field v_i. Since the electron has a very small mass compared with that of ions, we can ignore the left hand side of the above equation to write
σ(E + v × B) = J
where we have used the definition of the current density J = −n_e e (v_e − v_i), and also put σ = n_e e²/(m_e ν), which is the electrical conductivity. This equation can also be equivalently written as
E + v × B = ρJ
where ρ = σ⁻¹ is the electrical resistivity. It is also common to write η instead of ρ, which can be confusing since it is the same notation used for the magnetic diffusivity, defined as η = 1/(μ₀σ).
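As a rough numerical sketch of the reconstruction above (all plasma parameters invented), the conductivity σ = n_e e²/(m_e ν) and resistivity ρ = 1/σ can be evaluated directly:

```python
# Drude-style plasma conductivity sigma = n_e * e**2 / (m_e * nu)
# and resistivity rho = 1 / sigma. Plasma parameters are invented.

e = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg

n_e = 1.0e18  # electron number density, m^-3 (assumed)
nu = 1.0e7    # electron-ion collision frequency, s^-1 (assumed)

sigma = n_e * e ** 2 / (m_e * nu)  # electrical conductivity, S/m
rho = 1.0 / sigma                  # electrical resistivity, ohm*m
print(f"sigma = {sigma:.3e} S/m, rho = {rho:.3e} ohm*m")
```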
| Physical sciences | Electrical circuits | null |
49105 | https://en.wikipedia.org/wiki/Vacuum%20cleaner | Vacuum cleaner | A vacuum cleaner, also known simply as a vacuum, is a device that uses suction, and often agitation, in order to remove dirt and other debris from carpets and hard floors.
The dirt is collected into a dust bag or a plastic bin. Vacuum cleaners, which are used in homes as well as in commercial settings, exist in a variety of sizes and types, including stick vacuums, handheld vacuums, upright vacuums, and canister vacuums. Specialized shop vacuums can be used to clean both solid debris and liquids.
Name
Although vacuum cleaner and the short form vacuum are neutral names, in some countries (UK, Ireland) hoover is used instead as a genericized trademark, and as a verb. The name comes from the Hoover Company, one of the first and most influential companies in the development of the device. In New Zealand, particularly the Southland region, it is sometimes called a lux, likewise a genericized trademark and used as a verb. The device is also sometimes called a sweeper although the same term also refers to a carpet sweeper, a similar invention.
History
The vacuum cleaner evolved from the carpet sweeper via manual vacuum cleaners. The first manual models, using bellows, were developed in the 1860s, and the first motorized designs appeared at the turn of the 20th century, with the first decade of that century being the boom period.
Manual vacuums
In 1860 a manual vacuum cleaner was invented by Daniel Hess of West Union, Iowa. Called a "carpet sweeper", it gathered dust with a rotating brush and had a bellows for generating suction.
Another early model (1869) was the "Whirlwind", invented in Chicago in 1868 by Ives W. McGaffey. The bulky device worked with a belt-driven fan cranked by hand, which made it awkward to operate, although it was commercially marketed with mixed success.
A similar model was constructed by Melville R. Bissell of Grand Rapids, Michigan in 1876, who also manufactured carpet sweepers. The company later added portable vacuum cleaners to its line of cleaning tools.
Powered vacuum cleaners
The end of the 19th century saw the introduction of powered cleaners, although early types used some variation of blowing air to clean instead of suction. One appeared in 1898 when John S. Thurman of St. Louis, Missouri, submitted a patent (U.S. No. 634,042) for a "pneumatic carpet renovator" which blew dust into a receptacle. Thurman's system, powered by an internal combustion engine, traveled to the customer's residence on a horse-drawn wagon as part of a door-to-door cleaning service. Corrine Dufour of Savannah, Georgia, received two patents in 1899 and 1900 for another blown-air system that seems to have featured the first use of an electric motor.
In 1901 powered vacuum cleaners using suction were invented independently by British engineer Hubert Cecil Booth and American inventor David T. Kenney. Booth may also have coined the word "vacuum cleaner". Booth's horse-drawn, combustion-engine-powered "Puffing Billy", possibly derived from Thurman's blown-air design, relied on suction alone, with air pumped through a cloth filter, and was offered as part of his cleaning services. Kenney's was a stationary steam-engine-powered system with pipes and hoses reaching into all parts of the building.
Domestic vacuum cleaner
The first vacuum-cleaning device to be portable and marketed at the domestic market was built in 1905 by Walter Griffiths, a manufacturer in Birmingham, England. His Griffith's Improved Vacuum Apparatus for Removing Dust from Carpets resembled modern-day cleaners; it was portable, easy to store, and powered by "any one person (such as the ordinary domestic servant)", who would have the task of compressing a bellows-like contraption to suck up dust through a removable, flexible pipe, to which a variety of shaped nozzles could be attached.
In 1906 James B. Kirby developed the first of his many vacuums, called the "Domestic Cyclone". It used water for dirt separation. Later revisions came to be known as the Kirby Vacuum Cleaner. The Cleveland, Ohio factory was built in 1916 and remains open today, and all Kirby vacuum cleaners are manufactured in the United States.
In 1907 department store janitor James Murray Spangler (1848–1915) of Canton, Ohio, invented the first portable electric vacuum cleaner, obtaining a patent for the Electric Suction Sweeper on 2 June 1908. Crucially, in addition to suction from an electric fan that blew the dirt and dust into a soap box and one of his wife's pillow cases, Spangler's design utilized a rotating brush to loosen debris. Unable to produce the design himself due to lack of funding, he sold the patent in 1908 to local leather goods manufacturer William Henry Hoover (1849–1932), who had Spangler's machine redesigned with a steel casing, casters, and attachments, founding the company that in 1922 was renamed the Hoover Company. Their first vacuum was the 1908 Model O, which sold for $60. Subsequent innovations included the beater bar in 1919 ("It beats as it sweeps as it cleans"), disposable filter bags in the 1920s, and an upright vacuum cleaner in 1926.
In Continental Europe, the Fisker and Nielsen company in Denmark was the first to sell vacuum cleaners in 1910. The design was light enough to be operated by a single person. The Swedish company Electrolux launched their Model V in 1921 with the innovation of being able to lie on the floor on two thin metal runners. In the 1930s the German company Vorwerk started marketing vacuum cleaners of their own design, which they sold through direct sales.
Post-Second World War
For many years after their introduction, vacuum cleaners remained a luxury item, but after the Second World War, they became common among the middle classes. Vacuums tend to be more common in Western countries, because in most other parts of the world, wall-to-wall carpeting is uncommon and homes have tile or hardwood floors, which are easily swept, wiped or mopped manually without power assist.
The last decades of the 20th century saw the more widespread use of technologies developed earlier, including filterless cyclonic dirt separation, central vacuum systems and rechargeable hand-held vacuums. In addition, miniaturized computer technology and improved batteries allowed the development of a new type of machine—the autonomous robotic vacuum cleaner. In 1997 Electrolux of Sweden demonstrated the Electrolux Trilobite, the first autonomous cordless robotic vacuum cleaner, on the BBC-TV program Tomorrow's World, introducing it to the consumer market in 2001.
Recent developments
In 2004 a British company released AiRider, a hovering vacuum cleaner that floats on a cushion of air, similar to a hovercraft, making it lightweight and easier to maneuver than a wheeled design.
A British inventor has developed a new cleaning technology known as Air Recycling Technology, which, instead of using a vacuum, uses an air stream to collect dust from the carpet. This technology was tested by the Market Transformation Programme (MTP) and shown to be more energy-efficient than the vacuum method. Although working prototypes exist, Air Recycling Technology is not currently used in any production cleaner.
Modern configurations
A wide variety of technologies, designs, and configurations are available for both domestic and commercial cleaning jobs.
Upright
Upright vacuum cleaners are popular in the US, UK, and numerous Commonwealth countries, but unusual in some Continental European countries. They take the form of a cleaning head, onto which a handle and bag are attached. Upright designs generally employ a rotating brushroll or beater bar, which removes dirt through a combination of sweeping and vibration. There are two types of upright vacuums: dirty-air/direct-fan (found mostly on commercial vacuums) and clean-air/fan-bypass (found on most of today's domestic vacuums).
The older of the two designs, direct-fan cleaners have a large impeller (fan) mounted close to the suction opening, through which the dirt passes directly, before being blown into a bag. The motor is often cooled by a separate cooling fan. Because of their large-bladed fans, and comparatively short airpaths, direct-fan cleaners create a very efficient airflow from a low amount of power, and make effective carpet cleaners. Their "above-floor" cleaning power is less efficient, since the airflow is lost when it passes through a long hose, and the fan has been optimized for airflow volume and not suction.
Fan-bypass uprights have their motor mounted after the filter bag. Dust is removed from the airstream by the bag, and usually a filter, before it passes through the fan. The fans are smaller, and are usually a combination of several moving and stationary turbines working in sequence to boost power. The motor is cooled by the airstream passing through it. Fan-bypass vacuums are good for both carpet and above-floor cleaning, since their suction does not significantly diminish over the distance of a hose, as it does in direct-fan cleaners. However, their air-paths are much less efficient, and can require more than twice as much power as direct-fan cleaners to achieve the same results.
The most common upright vacuum cleaners use a drive-belt powered by the suction motor to rotate the brush-roll. However, dual-motor upright designs are also available. In these cleaners, the suction is provided by a large motor, while the brushroll is powered by a separate, smaller motor, which does not create any suction. The brush-roll motor can sometimes be switched off, so hard floors can be cleaned without the brush-roll scattering the dirt. It may also have an automatic cut-off feature which shuts the motor off if the brush-roll becomes jammed, protecting it from damage.
Canister
Canister models (in the UK also often called cylinder models) dominate the European market. They have the motor and dust collectors (using a bag or bagless) in a separate unit, usually mounted on wheels, which is connected to the vacuum head by a flexible hose. Their main advantage is flexibility, as the user can attach different heads for different tasks, and maneuverability (the head can reach under furniture and makes it very easy to vacuum stairs and vertical surfaces). Many cylinder models have power heads as standard or add-on equipment containing the same sort of mechanical beaters as in upright units, making them as efficient on carpets as upright models. Such beaters are driven by a separate electric motor or a turbine which uses the suction power to spin the brushroll via a drive belt.
Drum
Drum or shop vac models are essentially heavy-duty industrial versions of cylinder vacuum cleaners, where the canister consists of a large vertically positioned drum which can be stationary or on wheels. Smaller versions, for use in garages or small workshops, are usually electrically powered. Larger, higher-capacity models are often hooked up to compressed air, utilizing the Venturi effect to produce a partial vacuum. Built-in dust collection systems are also used in many workshops.
Wet/dry
Wet or wet/dry vacuum cleaners are a specialized form of cylinder/drum models that can be used to clean up wet or liquid spills. They are generally designed to be used both indoors and outdoors and to accommodate both wet and dry debris; some are also equipped with an exhaust port or detachable blower for reversing the airflow, a useful function for everything from clearing a clogged hose to blowing dust into a corner for easy collection.
Shop vacs are able to collect large, bulky or otherwise inconvenient material that would damage or foul household vacuum cleaners, like sawdust, swarf, and liquids.
They use wide hoses, which open directly into the collection chamber (usually a bucket-like cylinder constituting the body of the vacuum). As the airstream enters the larger volume, its flow slows down, allowing the material to drop into the chamber before air is sucked out through the filter and to the vacuum's exhaust.
Shop vacs' performance can be evaluated by a number of metrics. Commonly used ones include the motor's rating (using power measurements like watts or horsepower), the vacuum's ability to develop suction (using pressure measurements like inches of water), and total airflow through the system (using volume rate measurements like cubic feet per minute).
Related to the wet vacuum is the extraction vacuum cleaner used mainly in hot water extraction, a method of cleaning hard-to-move pieces of fabric like carpets. These machines are able to spray hot soapy water and then suck it back out of the fabric, removing dirt in the process.
Wet vacuum cleaners have been modified by end users, adding an internally-mounted sump pump for continuous removal of liquids without having to stop to empty the tank.
Pneumatic
Pneumatic or pneumatic wet/dry vacuum cleaners are a specialized form of wet/dry models that hook up to compressed air. They commonly can accommodate both wet and dry soilage, a useful feature in industrial plants and manufacturing facilities.
Backpack
Backpack vacuum cleaners are commonly used for commercial cleaning: they allow the user to move rapidly about a large area. They are essentially small canister vacuums strapped onto the user's back.
Hand-held
Lightweight hand-held vacuum cleaners, either powered from rechargeable batteries or mains power, are also popular for cleaning up smaller spills. Frequently seen examples include the Black & Decker DustBuster, which was introduced in 1979, and numerous handheld models by Dirt Devil, which were first introduced in 1984. Some battery-powered handheld vacuums are wet/dry rated; the appliance must be partially disassembled and cleaned after picking up wet materials to avoid developing unpleasant odors.
Robotic
In the late 1990s and early 2000s, several companies developed robotic vacuum cleaners, a form of carpet sweeper usually equipped with limited suction power. Some prominent brands are Roomba, Neato, and bObsweep. These machines move autonomously while collecting surface dust and debris into a dustbin. They can usually navigate around furniture and come back to a docking station to charge their batteries, and a few are able to empty their dust containers into the dock as well. Most models are equipped with motorized brushes and a vacuum motor to collect dust and debris. While most robotic vacuum cleaners are designed for home use, some models are appropriate for operation in offices, hotels, hospitals, etc.
In December 2009, Neato Robotics launched the world's first robotic vacuum cleaner to use a rotating laser-based range-finder (a form of lidar) to scan and map its surroundings. It uses this map to clean the floor methodically, even if this requires the robot to return to its base multiple times to recharge. In many cases it will notice when an area of the floor that was previously inaccessible becomes reachable, such as when a dog wakes up from a nap, and return to vacuum that area.
Cyclonic
Portable vacuum cleaners working on the cyclonic separation principle became popular in the 1990s. This dirt separation principle was well known and often used in central vacuum systems. Cleveland's P.A. Geier Company had obtained a patent on a cyclonic vacuum cleaner as early as 1928; the patent was sold to Health-Mor in 1939, which introduced the Filter Queen cyclonic canister vacuum cleaner.
In 1979, James Dyson introduced a portable unit with cyclonic separation, adapting this design from industrial sawmills. He launched his cyclone cleaner first in Japan in the 1980s at a cost of about US$1800 and in 1993 released the Dyson DC01 upright in the UK for £200. Critics expected that people would not buy a vacuum cleaner at twice the price of a conventional unit, but the Dyson design later became the most popular cleaner in the UK.
Cyclonic cleaners do not use filtration bags. Instead, the dust is separated in a detachable cylindrical collection vessel or bin. Air and dust are sucked at high speed into the collection vessel at a direction tangential to the vessel wall, creating a fast-spinning vortex. The dust particles and other debris move to the outside of the vessel by centrifugal force, where they fall due to gravity.
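To make the separation mechanism concrete, the following minimal Python sketch estimates the centripetal acceleration a = v²/r experienced by dust carried in the vortex. The air speed and bin radius are illustrative assumptions, not figures for any particular cleaner.

```python
# Rough estimate of the centripetal acceleration on dust spinning
# inside a cyclonic collection bin. All inputs are illustrative
# assumptions, not measured values for any product.

G = 9.81  # standard gravity, m/s^2

def centripetal_acceleration(air_speed_m_s: float, radius_m: float) -> float:
    """a = v^2 / r for a particle carried by the rotating airstream."""
    return air_speed_m_s ** 2 / radius_m

# Assumed values: air entering tangentially at 20 m/s into a bin of
# 6 cm radius, plausible orders of magnitude for a portable unit.
a = centripetal_acceleration(20.0, 0.06)
print(f"acceleration: {a:.0f} m/s^2 (~{a / G:.0f} g)")
# ~6667 m/s^2, roughly 680 g: dust is flung to the vessel wall far
# more strongly than gravity alone would pull it down.
```

Even with conservative inputs, the acceleration dwarfs gravity, which is why dust reliably migrates to the wall before the air exits through the centre of the vortex.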
In fixed-installation central vacuum cleaners, the cleaned air may be exhausted directly outside without need for further filtration. A well-designed cyclonic filtration system loses suction power due to airflow restriction only when the collection vessel is almost full. This is in marked contrast to filter bag systems, which lose suction when pores in the filter become clogged as dirt and dust are collected.
In portable cyclonic models, the cleaned air from the center of the vortex is expelled from the machine after passing through a number of successively finer filters at the top of the container. The first filter is intended to trap particles which could damage the subsequent filters that remove fine dust particles. The filters must regularly be cleaned or replaced to ensure that the machine continues to perform efficiently.
Since Dyson's success in raising public awareness of cyclonic separation, several other companies have introduced cyclone models. Competing manufacturers include Hoover, Bissell, Bosch, Eureka, Electrolux and Vax. This high level of competition means the cheapest models are generally no more expensive than a conventional cleaner.
Central
Central vacuum cleaners, also known as built-in or ducted, are a type of canister/cylinder model which has the motor and dirt filtration unit located in a central location in a building, and connected by pipes to fixed vacuum inlets installed throughout the building. Only the hose and cleaning head need be carried from room to room, and the hose is commonly 8 m (25 ft) long, allowing a large range of movement without changing vacuum inlets. Plastic or metal piping connects the inlets to the central unit. The vacuum head may be unpowered, or have beaters operated by an electric motor or by an air-driven turbine.
The dirt bag or collection bin in a central vacuum system is usually so large that emptying or changing needs to be done less often, perhaps a few times per year for an ordinary household. The central unit usually stays in stand-by, and is turned on by a switch on the handle of the hose. Alternatively, the unit powers up when the hose is plugged into the wall inlet, when the metal hose connector makes contact with two prongs in the wall inlet and control current is transmitted through low voltage wires to the main unit.
A central vacuum typically produces greater suction than common portable vacuum cleaners because a larger fan and more powerful motor can be used when they are not required to be portable. A cyclonic separation system, if used, does not lose suction as the collection container fills up, until the container is nearly full. This is in marked contrast to filter-bag designs, which start losing suction immediately as pores in the filter become clogged by accumulated dirt and dust.
A benefit to allergy sufferers is that, unlike a standard vacuum cleaner, which must blow some of the dirt collected back into the room being cleaned (no matter how efficient its filtration), a central vacuum removes all the dirt collected to the central unit. Since this central unit is usually located outside the living area, no dust is recirculated back into the room being cleaned. It is also possible on most newer models to vent the exhaust entirely outside, even with the unit inside the living quarters.
Another benefit of the central vacuum is that, because the motor unit is remotely located, there is much less noise in the room being cleaned than with a standard vacuum cleaner.
Constellation
Introduced in 1954, The Hoover Company's Constellation was of the cylinder type and lacked wheels; instead, the vacuum cleaner floated on its exhaust, operating as a hovercraft, although this was not true of the earliest models. Those early units had a rotating hose, the intention being that the user would place the unit in the center of the room and work around the cleaner.
The Constellation was changed and updated over the years until it was discontinued in 1975. Later Constellations routed all of the exhaust under the vacuum using an airfoil. The updated design was quiet even by modern standards, particularly on carpet, which muffled the sound. Those models float on carpet or bare floor, although on hard flooring the exhaust air tends to scatter any fluff or debris around.
Hoover re-released an updated version of the later-model Constellation in the US (model # S3341 in Pearl White and # S3345 in stainless steel). Changes included a HEPA filtration bag, a 12-amp motor, a turbine-powered brush roll, and a redesigned version of the handle. The same model was marketed in the UK under the Maytag brand, called the Satellite because of licensing restrictions. It was sold from 2006 to 2009.
Vehicles
See vacuum truck for very large vacuum cleaners mounted on vehicles.
Other
Some vacuum cleaners include an electric mop in the same machine, allowing a dry clean followed by a wet one.
The iRobot company developed the Scooba, a robotic wet vacuum cleaner that carries its own cleaning solution, applies it and scrubs the floor, and vacuums the dirty water into a collection tank.
Technology
A vacuum's suction is caused by a difference in air pressure. A fan driven by an electric motor (often a universal motor) reduces the pressure inside the machine. Atmospheric pressure then pushes the air through the carpet and into the nozzle, and so the dust is literally pushed into the bag.
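As a back-of-the-envelope illustration of this pressure mechanism, the sketch below multiplies an assumed pressure drop by an assumed nozzle opening; both figures are hypothetical and chosen only for scale.

```python
# Force available to push air (and entrained dust) through a nozzle:
# force = pressure difference x area. Both inputs are assumed,
# illustrative values, not specifications of a real machine.

def net_force_newtons(pressure_drop_pa: float, nozzle_area_m2: float) -> float:
    return pressure_drop_pa * nozzle_area_m2

area = 0.25 * 0.04                       # a 25 cm x 4 cm floor-nozzle opening, m^2
force = net_force_newtons(20_000, area)  # an assumed 20 kPa pressure drop
print(f"net force on the air column: {force:.0f} N")  # ~200 N
# The surrounding atmosphere does the pushing; the motor's fan only
# maintains the low-pressure region inside the machine.
```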
Tests have shown that vacuuming can kill 100% of young fleas and 96% of adult fleas.
Exhaust filtration
Vacuums by their nature cause dust to become airborne, by exhausting air that is not completely filtered. This can cause health problems since the operator ends up inhaling respirable dust, which is also redeposited into the area being cleaned. There are several methods manufacturers use to control this problem, some of which may be combined in a single appliance. Typically a filter is positioned so that the incoming air passes through it before it reaches the fan, and then the filtered air passes through the motor for cooling purposes. Some other designs use a completely separate air intake for cooling.
It is nearly impossible for a practical air filter to completely remove all ultrafine particles from a dirt-laden airstream. An ultra-efficient air filter will immediately clog up and become ineffective during everyday use, and practical filters are a compromise between filtering effectiveness and restriction of airflow. One way to sidestep this problem is to exhaust partially filtered air to the outdoors, which is a design feature of some central vacuum systems. Specially engineered portable vacuums may also utilize this design, but are more awkward to set up and use, requiring temporary installation of a separate exhaust hose to an exterior window.
Bag: The most common method to capture the debris vacuumed up involves a paper or fabric bag that allows air to pass through, but attempts to trap most of the dust and debris. The bag may become clogged with fine dust before it is full. The bag may be disposable, or designed to be cleaned and re-used.
Bagless: In non-cyclonic bagless models, the role of the bag is taken by a removable container and a reusable filter, equivalent to a reusable fabric bag.
Cyclonic separation: A vacuum cleaner employing this method is also bagless. It causes intake air to be cycled or spun so fast that most of the dust is forced out of the air and falls into a collection bin. The operation is similar to that of a centrifuge. Centrifugal separators eliminate the problem of a bag becoming clogged with fine dust.
Water filtration: First seen commercially in the 1920s in the form of the Newcombe Separator (later to become the Rexair Rainbow), a water filtration vacuum cleaner uses a water bath as a filter. It forces the dirt-laden intake air to pass through water before it is exhausted, so that wet dust cannot become airborne. The water trap filtration and low speed may also allow the user to use the machine as a stand-alone air purifier and humidifier unit. The dirty water must be dumped out and the appliance must be cleaned after each use, to avoid the growth of bacteria and mold, which cause unpleasant odors.
Ultra fine air filter: Also called HEPA filtered, this method is used as a secondary filter after the air has passed through the rest of the machine. It is meant to remove any remaining dust that could harm the operator. Some vacuum cleaners also use an activated charcoal filter to remove odors.
Ordinary vacuum cleaners should never be used to clean up asbestos fibers, even if fitted with a HEPA filter. Specially-designed machines are required to safely clean up asbestos.
Attachments
Most vacuum cleaners are supplied with numerous specialized attachments, such as tools, brushes and extension wands, which allow them to reach otherwise inaccessible places or to be used for cleaning a variety of surfaces. The most common of these tools are:
Hard floor brush (for non-upright designs)
Powered floor nozzle (for canister designs)
Dusting brush
Crevice tool
Upholstery nozzle
Specifications
The performance of a vacuum cleaner can be measured by several parameters:
Airflow, in litres per second [l/s] or cubic feet per minute (CFM or ft³/min)
Air speed, in metres per second [m/s] or miles per hour [mph]
Suction, vacuum, or water lift, in pascals [Pa] or inches of water
Other specifications of a vacuum cleaner are:
Weight, in kilograms [kg] or pounds [lb]
Noise, in decibels [dB]
Power cord length and hose length (as applicable)
Suction (Pa)
The suction is the maximum pressure difference that the pump can create. For example, a typical domestic model has a suction of about negative 20 kPa. This means that it can lower the pressure inside the hose from normal atmospheric pressure (about 100 kPa) by 20 kPa. The higher the suction rating, the more powerful the cleaner. One inch of water is equivalent to about 249 Pa; hence, the typical suction of 20 kPa corresponds to roughly 80 inches of water.
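A minimal sketch of the unit conversion used above, taking the stated equivalence of about 249 Pa per inch of water:

```python
# Convert between the two suction units used in this section.
PA_PER_INCH_OF_WATER = 249.0  # approximate equivalence stated in the text

def pascals_to_inches_of_water(pa: float) -> float:
    return pa / PA_PER_INCH_OF_WATER

# The typical 20 kPa domestic suction quoted above:
print(f"{pascals_to_inches_of_water(20_000):.0f} inches of water")  # ~80
```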
Input power (W)
The power consumption of a vacuum cleaner, in watts, is often the only figure stated. Many North American vacuum manufacturers give the current only in amperes (e.g. "6 amps"), and the consumer is left to multiply that by the line voltage of 120 volts to get the approximate power rating in watts (for example, 6 A × 120 V = 720 W). The rated input power does not indicate the effectiveness of the cleaner, only how much electricity it consumes.
After August 2014, due to EU rules, the manufacture of vacuum cleaners with a power consumption greater than 1,600 watts was banned within the EU, and from 2017 no vacuum cleaner with a rated power greater than 900 watts was permitted.
Output power (AW)
The amount of input power that is converted into airflow at the end of the cleaning hose is sometimes stated, and is measured in airwatts: the measurement units are simply watts. The word "air" is used to clarify that this is output power, not input electrical power.
The airwatt is derived from English units. ASTM International defines the airwatt as 0.117354 × F × S, where F is the rate of air flow in ft³/min and S is the pressure in inches of water. This makes one airwatt equal to 0.9983 watts.
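The ASTM formula lends itself to a short worked example. The sample airflow and suction figures below are illustrative assumptions, and the constant check uses standard unit conversions (1 ft³/min = 4.719474 × 10⁻⁴ m³/s; 1 inch of water = 249.0889 Pa):

```python
# Airwatts per the ASTM definition quoted above: P = 0.117354 * F * S,
# where F is airflow in cubic feet per minute and S is suction in
# inches of water.

def airwatts(flow_cfm: float, suction_in_h2o: float) -> float:
    return 0.117354 * flow_cfm * suction_in_h2o

# Cross-check of the stated "one airwatt equals 0.9983 watts":
# the true SI power of 1 CFM at 1 inch of water is flow * pressure.
CFM_TO_M3_S = 4.719474e-4      # cubic feet per minute -> m^3/s
IN_H2O_TO_PA = 249.0889        # inches of water -> pascals
si_watts_per_unit = CFM_TO_M3_S * IN_H2O_TO_PA  # ~0.11756 W
print(0.117354 / si_watts_per_unit)             # ~0.9983, as stated

# Illustrative example (assumed figures): 90 CFM at 20 inches of water.
print(f"{airwatts(90, 20):.0f} airwatts")       # ~211
```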
Peak horsepower
The peak horsepower of a vacuum cleaner is often measured by removing any cooling fans and calculating power based on the motor's output plus the rotational inertial energy stored in the motor armature and centrifugal blower. A peak horsepower rating is therefore an impractical figure, valid only for a very short period; continuous power is typically far lower.
Cultural references
Vacuum cleaners have become closely associated with housecleaning, and artists have sometimes used them to symbolize the banality and routine of everyday life and culture. Visual artist Jeff Koons exhibited his The New series of household vacuums enshrined in museum-quality vitrines, such as New Shelton Wet/Dry Doubledecker (1981) at the Museum of Modern Art and New Hoover Convertibles, Green, Blue; New Hoover Convertibles, Green, Blue; Doubledecker (1981–1987) at the Whitney Museum of American Art. In 2002, fashion designer Tara Subkoff used topless models wielding upright vacuum cleaners to promote her controversial fashion label "Imitation of Christ".
In 2018, Paulius Markevičius organized performances of Dance for the Vacuum-Cleaner and Father choreographed by Greta Grinevičiūtė, and premiered in Vilnius, Lithuania. In 2019, Sandrina Lindgren choreographed dancers in Requiem for Vacuum Cleaning in the Barker Theatre of Turku, Finland, with each performer operating multiple machines simultaneously.
Musician Frank Zappa used vacuum cleaners in many of his different performances and on promotional artwork. Other performers have used a vacuum cleaner hose or wand as a modernized version of the Australian Aboriginal didgeridoo, or used the whine of the motor for techno music.
In 1996, Mister Rogers' Neighborhood episode #1702 featured vacuum cleaners, including dancing, magic, and a segment showing how a small Dirt Devil canister vacuum was manufactured.
| Technology | Household appliances | null |
49123 | https://en.wikipedia.org/wiki/Phoenix%20%28constellation%29 | Phoenix (constellation) | Phoenix is a minor constellation in the southern sky. Named after the mythical phoenix, it was first depicted on a celestial atlas by Johann Bayer in his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted the brighter stars and gave their Bayer designations in 1756. The constellation stretches from roughly −39° to −57° declination, and from 23.5h to 2.5h of right ascension. The constellations Phoenix, Grus, Pavo and Tucana are known as the Southern Birds.
The brightest star, Alpha Phoenicis, is named Ankaa, an Arabic word meaning 'the Phoenix'. It is an orange giant of apparent magnitude 2.4. Next is Beta Phoenicis, actually a binary system composed of two yellow giants with a combined apparent magnitude of 3.3. Nu Phoenicis has a dust disk, while the constellation has ten star systems with known planets and the recently discovered galaxy clusters El Gordo and the Phoenix Cluster—located 7.2 and 5.7 billion light years away respectively, two of the largest objects in the visible universe. Phoenix is the radiant of two annual meteor showers: the Phoenicids in December, and the July Phoenicids.
History
Phoenix was the largest of the 12 constellations established by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in Johann Bayer's Uranometria of 1603. De Houtman included it in his southern star catalog the same year under the Dutch name Den voghel Fenicx, "The Bird Phoenix", symbolising the phoenix of classical mythology. One name of the brightest star Alpha Phoenicis—Ankaa—is derived from the Arabic al-ʿanqāʾ, meaning "the phoenix", and was coined sometime after 1800 in relation to the constellation.
Celestial historian Richard Allen noted that unlike the other constellations introduced by Plancius and Lacaille, Phoenix has actual precedent in ancient astronomy, as the Arabs saw this formation as representing young ostriches, Al Ri'āl, or as a griffin or eagle. In addition, the same group of stars was sometimes imagined by the Arabs as a boat, Al Zaurak, on the nearby river Eridanus. He observed, "the introduction of a Phoenix into modern astronomy was, in a measure, by adoption rather than by invention."
The Chinese incorporated Phoenix's brightest star, Ankaa (Alpha Phoenicis), and stars from the adjacent constellation Sculptor to depict Bakui, a net for catching birds. Phoenix and the neighbouring constellation of Grus together were seen by Julius Schiller as portraying Aaron the High Priest. These two constellations, along with nearby Pavo and Tucana, are called the Southern Birds.
Characteristics
Phoenix is a small constellation bordered by Fornax and Sculptor to the north, Grus to the west, Tucana to the south, touching on the corner of Hydrus to the south, and Eridanus to the east and southeast. The bright star Achernar is nearby. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Phe". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 10 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between approximately 23.5h and 2.5h, while the declination coordinates are between −39.31° and −57.84°. This means it remains below the horizon to anyone living north of the 40th parallel in the Northern Hemisphere, and remains low in the sky for anyone living north of the equator. It is most visible from locations such as Australia and South Africa during late Southern Hemisphere spring. Most of the constellation lies within the triangle formed by the bright stars Achernar, Fomalhaut and Beta Ceti, and can be located from them; Ankaa lies roughly in the centre of this triangle.
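As a sketch of how these declination limits translate into visibility, the standard culmination-altitude relation alt_max = 90° − |latitude − declination| can be applied to the boundary values quoted above; the observer latitudes below are arbitrary examples.

```python
# Maximum altitude (at upper culmination) of an object at declination
# dec for an observer at latitude lat, both in degrees:
#     alt_max = 90 - |lat - dec|
# A negative result means the object never rises. The declinations are
# the constellation boundaries quoted in the text; latitudes are examples.

def max_altitude(lat_deg: float, dec_deg: float) -> float:
    return 90.0 - abs(lat_deg - dec_deg)

NORTH_EDGE, SOUTH_EDGE = -39.31, -57.84

for lat in (40.0, 0.0, -35.0):  # 40 N, the equator, and a southern site
    n, s = max_altitude(lat, NORTH_EDGE), max_altitude(lat, SOUTH_EDGE)
    print(f"latitude {lat:+.0f}: north edge {n:5.1f} deg, south edge {s:5.1f} deg")
# At 40 N the southern edge never rises and the northern edge barely
# clears the horizon, consistent with the visibility notes above.
```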
Features
Stars
A curved line of stars comprising Alpha, Kappa, Mu, Beta, Nu and Gamma Phoenicis was seen as a boat by the ancient Arabs. French explorer and astronomer Nicolas Louis de Lacaille charted and designated 27 stars with the Bayer designations Alpha through to Omega in 1756. Of these, he labelled two stars close together Lambda, and assigned Omicron, Psi and Omega to three stars, which subsequent astronomers such as Benjamin Gould felt were too dim to warrant their letters. A different star was subsequently labelled Psi Phoenicis, while the other two designations fell out of use.
Ankaa is the brightest star in the constellation. It is an orange giant of apparent visual magnitude 2.37 and spectral type K0.5IIIb, 77 light years distant from Earth and orbited by a secondary object about which little is known. Lying close by Ankaa is Kappa Phoenicis, a main sequence star of spectral type A5IVn and apparent magnitude 3.90. Located centrally in the asterism, Beta Phoenicis is the second brightest star in the constellation and another binary star. Together the stars, both yellow giants of spectral type G8, shine with an apparent magnitude of 3.31, though the components have individual apparent magnitudes of 4.0 and 4.1 and orbit each other every 168 years. Zeta Phoenicis or Wurren is an Algol-type eclipsing binary, with an apparent magnitude fluctuating between 3.9 and 4.4 over a period of around 1.7 days (40 hours); its dimming results from its two blue-white B-type component stars, which periodically eclipse each other as seen from Earth. The two stars are 0.05 AU from each other, while a third star is around 600 AU away from the pair and has an orbital period exceeding 5000 years. The system is around 300 light years distant. In 1976, researchers Clausen, Gyldenkerne, and Grønbech calculated that a nearby 8th magnitude star is a fourth member of the system.
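The quoted 0.05 AU separation of the Zeta Phoenicis pair can be sanity-checked with Kepler's third law. The combined stellar mass assumed below (about six solar masses for two B-type stars) is an illustrative assumption, not a figure from the text.

```python
# Kepler's third law in solar units: a^3 = (M1 + M2) * P^2,
# with a in AU, masses in solar masses, and P in years.

period_years = 1.7 / 365.25   # the ~1.7-day orbital period quoted above
combined_mass = 6.0           # assumed total mass for two B-type stars

a_au = (combined_mass * period_years ** 2) ** (1 / 3)
print(f"semi-major axis: {a_au:.3f} AU")  # ~0.051 AU, near the stated 0.05 AU
```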
AI Phoenicis is an eclipsing binary star identified in 1972. Its long mutual eclipses and the combination of spectroscopic and astrometric data allow precise measurement of the stars' masses and radii, which is viewed as a potential cross-check on stellar properties and distances that is independent of Cepheid variables and similar techniques. The long eclipse events require space-based observations to avoid solar interference.
Gamma Phoenicis is a red giant of spectral type M0IIIa and varies between magnitudes 3.39 and 3.49. It lies 235 light years away. Psi Phoenicis is another red giant, this time of spectral type M4III, and has an apparent magnitude that ranges between 4.3 and 4.5 over a period of around 30 days. Lying 340 light years away, it has around 85 times the diameter, but only 85% of the mass, of the Sun. W Phoenicis is a Mira variable, ranging from magnitude 8.1 to 14.4 over 333.95 days. A red giant, its spectrum ranges between M5e and M6e. Located 6.5 degrees west of Ankaa is SX Phoenicis, a variable star which ranges from magnitude 7.1 to 7.5 over a period of a mere 79 minutes. Its spectral type varies between A2 and F4. It gives its name to a group of stars known as SX Phoenicis variables. Rho and BD Phoenicis are Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. Rho is spectral type F2III, and ranges between magnitudes 5.20 and 5.26 over a period of 2.85 hours. BD is of spectral type A1V, and ranges between magnitudes 5.90 and 5.94.
Nu Phoenicis is a yellow-white main sequence star of spectral type F9V and magnitude 4.96. Lying some 49 light years distant, it is around 1.2 times as massive as the Sun, and likely to be surrounded by a disk of dust. It is the closest star in the constellation that is visible with the unaided eye. Gliese 915 is a white dwarf only 26 light years away. It is of magnitude 13.05, too faint to be seen with the naked eye. White dwarfs are extremely dense stars compacted into a volume the size of the Earth. With around 85% of the mass of the Sun, Gliese 915 has a surface gravity of 10^(8.39 ± 0.01) (about 2.45 × 10⁸) cm·s⁻², or approximately 250,000 times that of Earth.
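A quick arithmetic check of the quoted surface gravity:

```python
# Verify that 10**8.39 cm/s^2 is about 250,000 times Earth's gravity.
EARTH_G_CM_S2 = 981.0          # Earth's surface gravity in cm/s^2

g = 10 ** 8.39                 # Gliese 915's surface gravity, cm/s^2
print(f"g = {g:.3g} cm/s^2, about {g / EARTH_G_CM_S2:,.0f} times Earth's")
# Prints ~2.45e+08 cm/s^2 and ~250,000x, matching the figures above.
```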
Ten stars have been found to have planets to date, and four planetary systems have been discovered with the SuperWASP project. HD 142 is a yellow giant that has an apparent magnitude of 5.7, and has a planet (HD 142 b) 1.36 times the mass of Jupiter which orbits every 328 days. HD 2039 is a yellow subgiant with an apparent magnitude of 9.0 around 330 light years away which has a planet (HD 2039 b) six times the mass of Jupiter. WASP-18 is a star of magnitude 9.29 which was discovered to have a hot Jupiter-like planet (WASP-18b) taking less than a day to orbit the star. The planet is suspected to be causing WASP-18 to appear older than it really is. WASP-4 and WASP-5 are solar-type yellow stars around 1000 light years distant and of 13th magnitude, each with a single planet larger than Jupiter. WASP-29 is an orange dwarf of spectral type K4V and visual magnitude 11.3, which has a planetary companion of similar size and mass to Saturn. The planet completes an orbit every 3.9 days.
WISE J003231.09-494651.4 and WISE J001505.87-461517.6 are two brown dwarfs discovered by the Wide-field Infrared Survey Explorer, and are 63 and 49 light years away respectively. Initially hypothesised before they were belatedly discovered, brown dwarfs are objects more massive than planets, but which are of insufficient mass for hydrogen fusion characteristic of stars to occur. Many are being found by sky surveys.
Phoenix contains HE0107-5240, possibly one of the oldest stars yet discovered. It has around 1/200,000 the metallicity that the Sun has and hence must have formed very early in the history of the universe. With a visual magnitude of 15.17, it is around 10,000 times dimmer than the faintest stars visible to the naked eye and is 36,000 light years distant.
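The "around 10,000 times dimmer" comparison follows from the logarithmic magnitude scale, on which a difference of Δm magnitudes corresponds to a brightness ratio of 10^(0.4 Δm). The naked-eye limiting magnitude assumed below is approximate; quoted limits vary from about 5 to 6.5 with sky conditions.

```python
# Brightness ratio implied by a magnitude difference.
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    return 10 ** (0.4 * (m_faint - m_bright))

# HE0107-5240 at magnitude 15.17 versus an assumed naked-eye limit of 5.2:
print(f"{brightness_ratio(15.17, 5.2):,.0f}x dimmer")  # ~9,700, roughly 10,000
```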
Deep-sky objects
The constellation does not lie on the galactic plane of the Milky Way, and there are no prominent star clusters. NGC 625 is a dwarf irregular galaxy of apparent magnitude 11.0 and lying some 12.7 million light years distant. Only 24,000 light years in diameter, it is an outlying member of the Sculptor Group. NGC 625 is thought to have been involved in a collision and is experiencing a burst of active star formation. NGC 37 is a lenticular galaxy of apparent magnitude 14.66. It is approximately 42 kiloparsecs (137,000 light-years) in diameter and about 12.9 billion years old. Robert's Quartet (composed of the irregular galaxy NGC 87, and three spiral galaxies NGC 88, NGC 89 and NGC 92) is a group of four galaxies located around 160 million light-years away which are in the process of colliding and merging. They are within a circle of radius of 1.6 arcmin, corresponding to about 75,000 light-years. Located in the galaxy ESO 243-49 is HLX-1, an intermediate-mass black hole—the first one of its kind identified. It is thought to be a remnant of a dwarf galaxy that was absorbed in a collision with ESO 243-49. Before its discovery, this class of black hole was only hypothesized.
Lying within the bounds of the constellation is the gigantic Phoenix cluster, which is around 7.3 million light years wide and 5.7 billion light years away, making it one of the most massive galaxy clusters. It was first discovered in 2010, and the central galaxy is producing an estimated 740 new stars a year. Larger still is El Gordo, or officially ACT-CL J0102-4915, whose discovery was announced in 2012. Located around 7.2 billion light years away, it is composed of two subclusters in the process of colliding, resulting in the spewing out of hot gas, seen in X-rays and infrared images.
Meteor showers
Phoenix is the radiant of two annual meteor showers. The Phoenicids, also known as the December Phoenicids, were first observed on 3 December 1887. The shower was particularly intense in December 1956, and is thought to be related to the breakup of the short-period comet 289P/Blanpain. It peaks around 4–5 December, though is not seen every year. A very minor meteor shower peaks around July 14 with around one meteor an hour, though meteors can be seen anytime from July 3 to 18; this shower is referred to as the July Phoenicids.
| Physical sciences | Other | Astronomy |
49159 | https://en.wikipedia.org/wiki/Hallucination | Hallucination | A hallucination is a perception in the absence of an external stimulus that has the compelling sense of reality. They are distinguishable from several related phenomena, such as dreaming (REM sleep), which does not involve wakefulness; pseudohallucination, which does not mimic real perception, and is accurately perceived as unreal; illusion, which involves distorted or misinterpreted real perception; and mental imagery, which does not mimic real perception, and is under voluntary control. Hallucinations also differ from "delusional perceptions", in which a correctly sensed and interpreted stimulus (i.e., a real perception) is given some additional significance.
Hallucinations can occur in any sensory modality—visual, auditory, olfactory, gustatory, tactile, proprioceptive, equilibrioceptive, nociceptive, thermoceptive and chronoceptive. Hallucinations are referred to as multimodal if multiple sensory modalities occur.
A mild form of hallucination is known as a disturbance, and can occur in most of the senses above. These may be things like seeing movement in peripheral vision, or hearing faint noises or voices. Auditory hallucinations are very common in schizophrenia. They may be benevolent (telling the subject good things about themselves) or malicious, cursing the subject. 55% of auditory hallucinations are malicious in content; for example, people talking about the subject rather than speaking to them directly. Like auditory hallucinations, the source of the visual counterpart can also be behind the subject, which can produce a feeling of being looked at or stared at, usually with malicious intent. Frequently, auditory hallucinations and their visual counterpart are experienced by the subject together.
Hypnagogic hallucinations and hypnopompic hallucinations are considered normal phenomena. Hypnagogic hallucinations can occur as one is falling asleep and hypnopompic hallucinations occur when one is waking up. Hallucinations can be associated with drug use (particularly deliriants), sleep deprivation, psychosis, neurological disorders, and delirium tremens. Many hallucinations happen also during sleep paralysis.
The word "hallucination" itself was introduced into the English language by the 17th-century physician Sir Thomas Browne in 1646 from the derivation of the Latin word alucinari meaning to wander in the mind. For Browne, hallucination means a sort of vision that is "depraved and receive[s] its objects erroneously".
Classification
Hallucinations may be manifested in a variety of forms. Various forms of hallucinations affect different senses, sometimes occurring simultaneously, creating multiple sensory hallucinations for those experiencing them.
Auditory
Auditory hallucinations (also known as paracusia) are the perception of sound without outside stimulus. Auditory hallucinations can be divided into elementary and complex, along with verbal and nonverbal. These hallucinations are the most common type of hallucination, with auditory verbal hallucinations being more common than nonverbal. Elementary hallucinations are the perception of sounds such as hissing, whistling, an extended tone, and more. In many cases, tinnitus is an elementary auditory hallucination. However, some people who experience certain types of tinnitus, especially pulsatile tinnitus, are actually hearing the blood rushing through vessels near the ear. Because the auditory stimulus is present in this situation, it does not qualify as a hallucination.
Complex hallucinations are those of voices, music, or other sounds that may or may not be clear, may or may not be familiar, and may be friendly or aggressive, among other possibilities. Hallucinations of one or more talking voices are particularly associated with psychotic disorders such as schizophrenia, and hold special significance in diagnosing these conditions.
In schizophrenia, voices are normally perceived as coming from outside the person, but in dissociative disorders they are perceived as originating from within the person, commenting in their head instead of behind their back. Differential diagnosis between schizophrenia and dissociative disorders is challenging due to many overlapping symptoms, especially Schneiderian first-rank symptoms such as hallucinations. However, many people who do not have a diagnosable mental illness may sometimes hear voices as well. One important example to consider when forming a differential diagnosis for a patient with paracusia is lateral temporal lobe epilepsy. Despite the tendency to associate hearing voices, or otherwise hallucinating, and psychosis with schizophrenia or other psychiatric illnesses, it is crucial to consider that, even if a person exhibits psychotic features, they do not necessarily have a psychiatric disorder: conditions such as Wilson's disease, various endocrine diseases, numerous metabolic disturbances, multiple sclerosis, systemic lupus erythematosus, porphyria, sarcoidosis, and many others can present with psychosis.
Musical hallucinations are also relatively common in terms of complex auditory hallucinations and may be the result of a wide range of causes ranging from hearing-loss (such as in musical ear syndrome, the auditory version of Charles Bonnet syndrome), lateral temporal lobe epilepsy, arteriovenous malformation, stroke, lesion, abscess, or tumor.
The Hearing Voices Movement is a support and advocacy group for people who hallucinate voices, but do not otherwise show signs of mental illness or impairment.
High caffeine consumption has been linked to an increase in likelihood of one experiencing auditory hallucinations. A study conducted by the La Trobe University School of Psychological Sciences revealed that as few as five cups of coffee a day (approximately 500 mg of caffeine) could trigger the phenomenon.
Visual
A visual hallucination is "the perception of an external visual stimulus where none exists". A separate but related phenomenon is a visual illusion, which is a distortion of a real external stimulus. Visual hallucinations are classified as simple or complex:
Simple visual hallucinations (SVH) are also referred to as non-formed visual hallucinations and elementary visual hallucinations. These terms refer to lights, colors, geometric shapes, and indiscrete objects. These can be further subdivided into phosphenes which are SVH without structure, and photopsias which are SVH with geometric structures.
Complex visual hallucinations (CVH) are also referred to as formed visual hallucinations. CVHs are clear, lifelike images or scenes such as people, animals, objects, places, etc.
For example, one may report hallucinating a giraffe. A simple visual hallucination is an amorphous figure that may have a similar shape or color to a giraffe (looks like a giraffe), while a complex visual hallucination is a discrete, lifelike image that is, unmistakably, a giraffe.
Command
Command hallucinations are hallucinations in the form of commands; they appear to be from an external source, or can appear coming from the subject's head. The contents of the hallucinations can range from the innocuous to commands to cause harm to the self or others. Command hallucinations are often associated with schizophrenia. People experiencing command hallucinations may or may not comply with the hallucinated commands, depending on the circumstances. Compliance is more common for non-violent commands.
Command hallucinations are sometimes used to defend a crime that has been committed, often homicides. In essence, it is a voice that one hears, and it tells the listener what to do. Sometimes the commands are quite benign directives such as "Stand up" or "Shut the door." Whether it is a command for something simple or something that is a threat, it is still considered a "command hallucination." Some helpful questions that can assist in determining whether someone may be experiencing this include: "What are the voices telling you to do?", "When did your voices first start telling you to do things?", "Do you recognize the person who is telling you to harm yourself (or others)?", "Do you think you can resist doing what the voices are telling you to do?"
Olfactory
Phantosmia (olfactory hallucinations), smelling an odor that is not actually there, and parosmia (olfactory illusions), inhaling a real odor but perceiving it as a different scent than remembered, are distortions to the sense of smell (olfactory system), and in most cases, are not caused by anything serious and will usually go away on their own in time. It can result from a range of conditions such as nasal infections, nasal polyps, dental problems, migraines, head injuries, seizures, strokes, or brain tumors. Environmental exposures can sometimes cause it as well, such as smoking, exposure to certain types of chemicals (e.g., insecticides or solvents), or radiation treatment for head or neck cancer. It can also be a symptom of certain mental disorders such as depression, bipolar disorder, intoxication, substance withdrawal, or psychotic disorders (e.g., schizophrenia). The perceived odors are usually unpleasant and commonly described as smelling burned, foul, spoiled, or rotten.
Tactile
Tactile hallucinations are the illusion of tactile sensory input, simulating various types of pressure to the skin or other organs. One subtype of tactile hallucination, formication, is the sensation of insects crawling underneath the skin and is frequently associated with prolonged cocaine use. However, formication may also be the result of normal hormonal changes such as menopause, or disorders such as peripheral neuropathy, high fevers, Lyme disease, skin cancer, and more.
Gustatory
This type of hallucination is the perception of taste without a stimulus. These hallucinations, which are typically strange or unpleasant, are relatively common among individuals who have certain types of focal epilepsy, especially temporal lobe epilepsy. The regions of the brain responsible for gustatory hallucination in this case are the insula and the superior bank of the sylvian fissure.
Sexual
Sexual hallucinations are the perception of erogenous or orgasmic stimuli. They may be unimodal or multimodal in nature and frequently involve sensation in the genital region, though not exclusively. Frequent examples of sexual hallucinations include the sensation of being penetrated, experiencing orgasm, feeling as if one is being touched in an erogenous zone, sensing stimulation in the genitals, feeling the fondling of one's breasts or buttocks, and tastes or smells related to sexual activity. Visualizations of sexual content and auditory voices making sexually explicit remarks may sometimes be included in this classification. While they feature components of other classifications, sexual hallucinations are distinct due to their orgasmic component and unique presentation.
The regions of the brain responsible differ by the subsection of sexual hallucination. In orgasmic auras, the mesial temporal lobe, right amygdala and hippocampus are involved. In males, genital-specific sensations are related to the postcentral gyrus, and arousal and ejaculation are linked to stimulation in the posterior frontal lobe. In females, however, these sensations are linked to the hippocampus and amygdala. Limited studies have been done to understand the mechanism of action behind sexual hallucinations in epilepsy, substance use, and post-traumatic stress disorder etiologies.
Somatic
Somatic hallucinations refer to an interoceptive sensory experience in the absence of stimulus. Somatic hallucinations can be broken down into further subcategories: general, algesic, kinesthetic, and cenesthopathic.
Cenesthopathic- Affecting the cenesthetic sensory modality, cenesthopathic hallucinations are a pathological alteration in the sense of bodily existence, caused by aberrant bodily sensations. Most often, cenesthopathic hallucinations refer to sensation in the visceral organs; they are therefore also known as visceral hallucinations. Manifestations are often subjective, hard to describe and unique to the sufferer. Common manifestations include pressure, burning, tickling, or tightening in various body systems. While these hallucinations can be experienced in a variety of psychiatric and neurological disorders, cenesthopathic schizophrenia is recognized by the ICD as a subtype of schizophrenia marked primarily by cenesthopathic hallucinations and other body image aberrations.
Kinesthetic- Kinesthetic hallucinations, affecting the sensory modality of the same name, are the sensation of movement of the limbs or other body parts without actual movement.
Algesic- Algesic hallucinations, affecting the algesic sensory modality, refer to a perceived sensation of pain.
General- General somatic hallucination refers to somatic hallucinations not otherwise categorized by the above subsections. Common examples include when an individual feels that their body is being mutilated, i.e. twisted, torn, or disemboweled. Other reported cases are invasion by animals in the person's internal organs, such as snakes in the stomach or frogs in the rectum. The general feeling that one's flesh is decomposing is also classified under this type of hallucination.
Multimodal
A hallucination involving multiple sensory modalities is called multimodal, analogous to unimodal hallucinations, which involve only one sensory modality. The multiple sensory modalities can occur at the same time (simultaneously) or with a delay (serial), be related or unrelated to each other, and be consistent with reality (congruent) or not (incongruent). For example, a person talking in a hallucination would be congruent with reality, but a cat talking would not be.
Multimodal hallucinations are correlated to poorer mental health outcomes, and are often experienced as feeling more real.
Cause
Hallucinations can be caused by a number of factors.
Hypnagogic hallucination
These hallucinations occur just before falling asleep and affect a high proportion of the population: in one survey 37% of the respondents experienced them twice a week. The hallucinations can last from seconds to minutes; all the while, the subject usually remains aware of the true nature of the images. These may be associated with narcolepsy. Hypnagogic hallucinations are sometimes associated with brainstem abnormalities, but this is rare.
Peduncular hallucinosis
Peduncular means pertaining to the peduncle, which is a neural tract running to and from the pons on the brain stem. These hallucinations usually occur in the evenings, but not during drowsiness, as in the case of hypnagogic hallucination. The subject is usually fully conscious and then can interact with the hallucinatory characters for extended periods of time. As in the case of hypnagogic hallucinations, insight into the nature of the images remains intact. The false images can occur in any part of the visual field, and are rarely polymodal.
Delirium tremens
One of the more enigmatic forms of visual hallucination is the highly variable, possibly polymodal delirium tremens. It is associated with withdrawal in alcohol use disorder. Individuals with delirium tremens may be agitated and confused, especially in the later stages of this disease. Insight is gradually reduced with the progression of this disorder. Sleep is disturbed, occurring for shorter periods and with disrupted rapid eye movement sleep.
Parkinson's disease and Lewy body dementia
Parkinson's disease is linked with Lewy body dementia because of their similar hallucinatory symptoms. Presence hallucinations can be an early indicator of cognitive decline in Parkinson's disease. The symptoms strike during the evening in any part of the visual field, and are rarely polymodal. The segue into hallucination may begin with illusions where sensory perception is greatly distorted, but no novel sensory information is present. These typically last for several minutes, during which time the subject may be either conscious and normal or drowsy/inaccessible. Insight into these hallucinations is usually preserved and REM sleep is usually reduced. Parkinson's disease is usually associated with a degraded substantia nigra pars compacta, but recent evidence suggests that PD affects a number of sites in the brain. Some places of noted degradation include the median raphe nuclei, the noradrenergic parts of the locus coeruleus, and the cholinergic neurons in the parabrachial area and pedunculopontine nuclei of the tegmentum.
Migraine coma
This type of hallucination is usually experienced during the recovery from a comatose state. The migraine coma can last for up to two days, and a state of depression is sometimes comorbid. The hallucinations occur during states of full consciousness, and insight into the hallucinatory nature of the images is preserved. It has been noted that ataxic lesions accompany the migraine coma.
Migraine attacks
Migraine attacks may result in visual hallucinations including auras and in rarer cases, auditory hallucinations.
Charles Bonnet syndrome
Charles Bonnet syndrome is the name given to visual hallucinations experienced by a partially or severely sight impaired person. The hallucinations can occur at any time and can distress people of any age, as they may not initially be aware that they are hallucinating. They may fear for their own mental health initially, which may delay them sharing with carers until they start to understand it themselves. The hallucinations can frighten and disconcert the sufferer, blurring what is real and what is not. The hallucinations can sometimes be dispersed by eye movements, or by reasoned logic such as, "I can see fire but there is no smoke and there is no heat from it" or perhaps, "We have an infestation of rats but they have pink ribbons with a bell tied on their necks." Over elapsed months and years, the hallucinations may become more or less frequent with changes in ability to see. The length of time that the sight impaired person can have these hallucinations varies according to the underlying speed of eye deterioration. A differential diagnosis is ophthalmopathic hallucination.
Focal epilepsy
Visual hallucinations due to focal seizures differ depending on the region of the brain where the seizure occurs. For example, visual hallucinations during occipital lobe seizures are typically visions of brightly colored, geometric shapes that may move across the visual field, multiply, or form concentric rings and generally persist from a few seconds to a few minutes. They are usually unilateral and localized to one part of the visual field on the contralateral side of the seizure focus, typically the temporal field. However, unilateral visions moving horizontally across the visual field begin on the contralateral side and move toward the ipsilateral side.
Temporal lobe seizures, on the other hand, can produce complex visual hallucinations of people, scenes, animals, and more as well as distortions of visual perception. Complex hallucinations may appear to be real or unreal, may or may not be distorted with respect to size, and may seem disturbing or affable, among other variables. One rare but notable type of hallucination is heautoscopy, a hallucination of a mirror image of one's self. These "other selves" may be perfectly still or performing complex tasks, may be an image of a younger self or the present self, and tend to be briefly present. Complex hallucinations are a relatively uncommon finding in temporal lobe epilepsy patients. Rarely, they may occur during occipital focal seizures or in parietal lobe seizures.
Distortions in visual perception during a temporal lobe seizure may include size distortion (micropsia or macropsia), distorted perception of movement (where moving objects may appear to be moving very slowly or to be perfectly still), a sense that surfaces such as ceilings and even entire horizons are moving farther away in a fashion similar to the dolly zoom effect, and other illusions. Even when consciousness is impaired, insight into the hallucination or illusion is typically preserved.
Drug-induced hallucination
Drug-induced hallucinations are caused by hallucinogens, dissociatives, and deliriants, including many drugs with anticholinergic actions and certain stimulants, which are known to cause visual and auditory hallucinations. Some psychedelics such as lysergic acid diethylamide (LSD) and psilocybin can cause hallucinations that range in the spectrum of mild to intense.
Hallucinations, pseudohallucinations, or an intensification of pareidolia, particularly auditory, are known side effects of opioids to different degrees. The effect may be associated with the absolute degree of agonism or antagonism of the kappa opioid receptor, sigma receptors, delta opioid receptor, and NMDA receptors in particular, or with the overall receptor-activation profile: synthetic opioids such as those of the pentazocine, levorphanol, fentanyl, pethidine, and methadone families are more associated with this side effect than natural opioids like morphine and codeine or semi-synthetics like hydromorphone, among which there also appears to be a stronger correlation with relative analgesic strength. Three opioids, cyclazocine (a benzomorphan relative of pentazocine) and the two levorphanol-related morphinans cyclorphan and dextrorphan, are classified as hallucinogens, and dextromethorphan as a dissociative. These drugs can also induce sleep (relating to hypnagogic hallucinations), and the pethidines in particular have atropine-like anticholinergic activity. That activity was possibly also a limiting factor in the practice of potentiating morphine, oxycodone, and other opioids with scopolamine, with its psychotomimetic side effects, used respectively in the Twilight Sleep technique and in the combination drug Skophedal (eukodal (oxycodone), scopolamine and ephedrine), called the "wonder drug of the 1930s" after its invention in Germany in 1928, but only rarely specially compounded today.
Sensory deprivation hallucination
Hallucinations can be caused by sensory deprivation when it occurs for prolonged periods of time; such hallucinations almost always occur in the modality being deprived (visual for blindfolded or dark conditions, auditory for muffled conditions, etc.).
Experimentally-induced hallucinations
Anomalous experiences, such as so-called benign hallucinations, may occur in a person in a state of good mental and physical health, even in the apparent absence of a transient trigger factor such as fatigue, intoxication or sensory deprivation.
The evidence for this statement has been accumulating for more than a century. Studies of benign hallucinatory experiences go back to 1886 and the early work of the Society for Psychical Research, which suggested approximately 10% of the population had experienced at least one hallucinatory episode in the course of their life. More recent studies have validated these findings; the precise incidence found varies with the nature of the episode and the criteria of "hallucination" adopted, but the basic finding is now well-supported.
Non-celiac gluten sensitivity
There is tentative evidence of a relationship with non-celiac gluten sensitivity, the so-called "gluten psychosis".
Pathophysiology
Dopaminergic and serotonergic hallucinations
It has been reported that in serotonergic hallucinations, the person maintains an awareness that they are hallucinating, unlike dopaminergic hallucinations.
Neuroanatomy
Hallucinations are associated with structural and functional abnormalities in primary and secondary sensory cortices. Reduced grey matter in regions of the superior temporal gyrus/middle temporal gyrus, including Broca's area, is associated with auditory hallucinations as a trait, while acute hallucinations are associated with increased activity in the same regions along with the hippocampus, parahippocampus, and the right hemispheric homologue of Broca's area in the inferior frontal gyrus. Grey and white matter abnormalities in visual regions are associated with hallucinations in diseases such as Alzheimer's disease, further supporting the notion of dysfunction in sensory regions underlying hallucinations.
One proposed model of hallucinations posits that over-activity in sensory regions, which is normally attributed to internal sources via feedforward networks to the inferior frontal gyrus, is interpreted as originating externally due to abnormal connectivity or functionality of the feedforward network. This is supported by cognitive studies of those with hallucinations, who have demonstrated abnormal attribution of self-generated stimuli.
Disruptions in thalamocortical circuitry may underlie the observed top-down and bottom-up dysfunction. Thalamocortical circuits, composed of projections between thalamic and cortical neurons and adjacent interneurons, underlie certain electrophysiological characteristics (gamma oscillations) that are associated with sensory processing. Cortical inputs to thalamic neurons enable attentional modulation of sensory neurons. Dysfunction in sensory afferents and abnormal cortical input may result in pre-existing expectations modulating sensory experience, potentially resulting in the generation of hallucinations. Hallucinations are associated with less accurate sensory processing; more intense stimuli with less interference are necessary for accurate processing and for the appearance of gamma oscillations (called "gamma synchrony"). Hallucinations are also associated with the absence of a reduction in P50 amplitude in response to a second stimulus presented after an initial stimulus; this is thought to represent a failure to gate sensory stimuli, and can be exacerbated by dopamine release agents.
Abnormal assignment of salience to stimuli may be one mechanism of hallucinations. Dysfunctional dopamine signaling may lead to abnormal top-down regulation of sensory processing, allowing expectations to distort sensory input.
Treatments
There are few treatments for many types of hallucinations. For hallucinations caused by mental illness, a psychologist or psychiatrist should be consulted, and treatment will be based on their observations. Antipsychotic and atypical antipsychotic medication may also be utilized if the symptoms are severe and cause significant distress. For other causes of hallucinations, no treatment has been scientifically tested and proven effective. However, abstaining from hallucinogenic and stimulant drugs, managing stress levels, living healthily, and getting plenty of sleep can help reduce the prevalence of hallucinations. In all cases of hallucinations, medical attention should be sought and informed of one's specific symptoms. Meta-analyses show that cognitive behavioral therapy and metacognitive training can also reduce the severity of hallucinations. Furthermore, there are recovery movements around the world that advocate for individuals with schizophrenia or voice-hearers (individuals who hear voices). The Hearing Voices Movement, which started in Europe, is a prominent example, combining the knowledge and experience of voice-hearers with the expertise of professionals such as psychiatrists.
Epidemiology
Prevalence of hallucinations varies depending on the underlying medical condition, the sensory modality affected, age, and culture. Auditory hallucinations are the most studied and most common sensory modality of hallucinations, with an estimated lifetime prevalence of 9.6%. Children and adolescents experience similar rates (12.7% and 12.4%, respectively), occurring mostly during late childhood and adolescence, compared with adults and those over 60 (5.8% and 4.8%, respectively). For those with schizophrenia, the lifetime prevalence of hallucinations is 80%; the estimated prevalence of visual hallucinations is 27%, compared with 79% for auditory hallucinations. A 2019 study suggested 16.2% of adults with hearing impairment experience hallucinations, with prevalence rising to 24% in the most severely hearing-impaired group.
A risk factor for multimodal hallucinations is prior experience of unimodal hallucinations. In 90% of cases of psychosis, a visual hallucination occurs in combination with another sensory modality, most often auditory or somatic. In schizophrenia, multimodal hallucinations are twice as common as unimodal ones.
A 2015 review of 55 publications from 1962 to 2014 found that 16–28.6% of those experiencing hallucinations report at least some religious content in them, along with 20–60% reporting some religious content in delusions. There is some evidence for delusions being a risk factor for religious hallucinations: 61.7% of people who had experienced any delusion, and 75.9% of those who had experienced a religious delusion, were found to also experience hallucinations.
| Biology and health sciences | Miscellaneous | null |
49172 | https://en.wikipedia.org/wiki/Interval%20%28mathematics%29 | Interval (mathematics) | In mathematics, a real interval is the set of all real numbers lying between two fixed endpoints with no "gaps". Each endpoint is either a real number or positive or negative infinity, indicating the interval extends without a bound. A real interval can contain neither endpoint, either endpoint, or both endpoints, excluding any endpoint which is infinite.
For example, the set of real numbers consisting of 0, 1, and all numbers in between is an interval, denoted [0, 1] and called the unit interval; the set of all positive real numbers is an interval, denoted (0, ∞); the set of all real numbers is an interval, denoted (−∞, ∞); and any single real number a is an interval, denoted [a, a].
Intervals are ubiquitous in mathematical analysis. For example, they occur implicitly in the epsilon-delta definition of continuity; the intermediate value theorem asserts that the image of an interval by a continuous function is an interval; integrals of real functions are defined over an interval; etc.
Interval arithmetic consists of computing with intervals instead of real numbers for providing a guaranteed enclosure of the result of a numerical computation, even in the presence of uncertainties of input data and rounding errors.
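As a rough illustration of this idea, the following Python sketch (a minimal toy, not any particular library; production interval arithmetic additionally uses directed rounding so the enclosure survives floating-point error) propagates intervals through addition and multiplication:

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float  # lower endpoint
    hi: float  # upper endpoint

    def __add__(self, other):
        # any x in [a, b] plus any y in [c, d] lies in [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the four endpoint products bound every product of members
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

# x measured as 2.0 +/- 0.1: every possible value of 3x + x^2
# is guaranteed to lie in the printed enclosure (about [9.31, 10.71])
x = Interval(1.9, 2.1)
print(Interval(3, 3) * x + x * x)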
Intervals are likewise defined on an arbitrary totally ordered set, such as integers or rational numbers. The notation of integer intervals is considered in the special section below.
Definitions and terminology
An interval is a subset of the real numbers that contains all real numbers lying between any two numbers of the subset.
The endpoints of an interval are its supremum, and its infimum, if they exist as real numbers. If the infimum does not exist, one often says that the corresponding endpoint is −∞. Similarly, if the supremum does not exist, one says that the corresponding endpoint is +∞.
Intervals are completely determined by their endpoints and whether each endpoint belongs to the interval. This is a consequence of the least-upper-bound property of the real numbers. This characterization is used to specify intervals by means of interval notation, which is described below.
An open interval does not include any endpoint, and is indicated with parentheses. For example, (0, 1) is the interval of all real numbers greater than 0 and less than 1. (This interval can also be denoted by ]0, 1[; see below.) The open interval (0, +∞) consists of real numbers greater than 0, i.e., positive real numbers. The open intervals are thus one of the forms
(a, b) = {x ∈ ℝ : a < x < b},
(−∞, b) = {x ∈ ℝ : x < b},
(a, +∞) = {x ∈ ℝ : a < x},
(−∞, +∞) = ℝ,
where a and b are real numbers such that a ≤ b. When a = b in the first case, the resulting interval is the empty set (a, a) = ∅, which is a degenerate interval (see below). The open intervals are those intervals that are open sets for the usual topology on the real numbers.
A closed interval is an interval that includes all its endpoints and is denoted with square brackets. For example, [0, 1] means greater than or equal to 0 and less than or equal to 1. Closed intervals have one of the following forms, in which a and b are real numbers such that a ≤ b:
[a, b] = {x ∈ ℝ : a ≤ x ≤ b},
(−∞, b] = {x ∈ ℝ : x ≤ b},
[a, +∞) = {x ∈ ℝ : a ≤ x},
(−∞, +∞) = ℝ.
The closed intervals are those intervals that are closed sets for the usual topology on the real numbers. The empty set and ℝ are the only intervals that are both open and closed.
A half-open interval has two endpoints and includes only one of them. It is said to be left-open or right-open depending on whether the excluded endpoint is on the left or on the right. These intervals are denoted by mixing notations for open and closed intervals. For example, (0, 1] means greater than 0 and less than or equal to 1, while [0, 1) means greater than or equal to 0 and less than 1. The half-open intervals have the form
(a, b] = {x ∈ ℝ : a < x ≤ b},
[a, b) = {x ∈ ℝ : a ≤ x < b}.
Every closed interval is a closed set of the real line, but an interval that is a closed set need not be a closed interval. For example, the intervals (−∞, b] and [a, +∞) are also closed sets in the real line. The intervals (a, b] and [a, b) are neither an open set nor a closed set. If one allows an endpoint in the closed side to be an infinity (such as (0, +∞]), the result will not be an interval, since it is not even a subset of the real numbers. Instead, the result can be seen as an interval in the extended real line, which occurs in measure theory, for example.
In summary, a set of the real numbers is an interval, if and only if it is an open interval, a closed interval, or a half-open interval.
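As a minimal computational sketch (hypothetical class and field names, assuming intervals are represented by two endpoints plus two inclusion flags), membership in any of these kinds of interval can be tested uniformly:

from dataclasses import dataclass
import math

@dataclass(frozen=True)
class RealInterval:
    a: float            # left endpoint (may be -math.inf)
    b: float            # right endpoint (may be +math.inf)
    left_closed: bool   # is the left endpoint included?
    right_closed: bool  # is the right endpoint included?

    def contains(self, x: float) -> bool:
        left_ok = self.a < x or (self.left_closed and x == self.a)
        right_ok = x < self.b or (self.right_closed and x == self.b)
        return left_ok and right_ok

I = RealInterval(0.0, 1.0, True, False)        # the half-open interval [0, 1)
print(I.contains(0.0), I.contains(1.0))        # True False
P = RealInterval(0.0, math.inf, False, False)  # the open interval (0, +inf)
print(P.contains(1e12))                        # True

An infinite endpoint should always carry a False flag, matching the convention above that infinite endpoints are excluded.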
A degenerate interval is any set consisting of a single real number (i.e., an interval of the form [a, a]). Some authors include the empty set in this definition. A real interval that is neither empty nor degenerate is said to be proper, and has infinitely many elements.
An interval is said to be left-bounded or right-bounded, if there is some real number that is, respectively, smaller than or larger than all its elements. An interval is said to be bounded, if it is both left- and right-bounded; and is said to be unbounded otherwise. Intervals that are bounded at only one end are said to be half-bounded. The empty set is bounded, and the set of all reals is the only interval that is unbounded at both ends. Bounded intervals are also commonly known as finite intervals.
Bounded intervals are bounded sets, in the sense that their diameter (which is equal to the absolute difference between the endpoints) is finite. The diameter may be called the length, width, measure, range, or size of the interval. The size of unbounded intervals is usually defined as +∞, and the size of the empty interval may be defined as 0 (or left undefined).
The centre (midpoint) of a bounded interval with endpoints a and b is (a + b)/2, and its radius is the half-length |a − b|/2. These concepts are undefined for empty or unbounded intervals.
An interval is said to be left-open if and only if it contains no minimum (an element that is smaller than all other elements); right-open if it contains no maximum; and open if it contains neither. The interval [0, 1), for example, is left-closed and right-open. The empty set and the set of all reals are both open and closed intervals, while the set of non-negative reals, [0, +∞), is a closed interval that is right-open but not left-open. The open intervals are open sets of the real line in its standard topology, and form a base of the open sets.
An interval is said to be left-closed if it has a minimum element or is left-unbounded, right-closed if it has a maximum or is right-unbounded; it is simply closed if it is both left-closed and right-closed. So, the closed intervals coincide with the closed sets in that topology.
The interior of an interval I is the largest open interval that is contained in I; it is also the set of points in I which are not endpoints of I. The closure of I is the smallest closed interval that contains I; it is also the set I augmented with its finite endpoints.
For any set X of real numbers, the interval enclosure or interval span of X is the unique interval that contains X and does not properly contain any other interval that also contains X.
An interval I is a subinterval of interval J if I is a subset of J. An interval I is a proper subinterval of J if I is a proper subset of J.
However, there is conflicting terminology for the terms segment and interval, which have been employed in the literature in two essentially opposite ways, resulting in ambiguity when these terms are used. The Encyclopedia of Mathematics defines interval (without a qualifier) to exclude both endpoints (i.e., open interval) and segment to include both endpoints (i.e., closed interval), while Rudin's Principles of Mathematical Analysis calls sets of the form [a, b] intervals and sets of the form (a, b) segments throughout. These terms tend to appear in older works; modern texts increasingly favor the term interval (qualified by open, closed, or half-open), regardless of whether endpoints are included.
Notations for intervals
The interval of numbers between a and b, including a and b, is often denoted [a, b]. The two numbers are called the endpoints of the interval. In countries where numbers are written with a decimal comma, a semicolon may be used as a separator to avoid ambiguity.
Including or excluding endpoints
To indicate that one of the endpoints is to be excluded from the set, the corresponding square bracket can be either replaced with a parenthesis, or reversed. Both notations are described in International standard ISO 31-11. Thus, in set-builder notation,
(a, b) = ]a, b[ = {x ∈ ℝ : a < x < b},
[a, b) = [a, b[ = {x ∈ ℝ : a ≤ x < b},
(a, b] = ]a, b] = {x ∈ ℝ : a < x ≤ b},
[a, b] = {x ∈ ℝ : a ≤ x ≤ b}.
Each interval (a, a), [a, a), and (a, a] represents the empty set, whereas [a, a] denotes the singleton set {a}. When a > b, all four notations are usually taken to represent the empty set.
Both notations may overlap with other uses of parentheses and brackets in mathematics. For instance, the notation (a, b) is often used to denote an ordered pair in set theory, the coordinates of a point or vector in analytic geometry and linear algebra, or (sometimes) a complex number in algebra. That is why Bourbaki introduced the notation ]a, b[ to denote the open interval. The notation [a, b] too is occasionally used for ordered pairs, especially in computer science.
Some authors, such as Yves Tillé, use ]a, b[ to denote the complement of the interval (a, b); namely, the set of all real numbers that are either less than or equal to a, or greater than or equal to b.
Infinite endpoints
In some contexts, an interval may be defined as a subset of the extended real numbers, the set of all real numbers augmented with −∞ and +∞.
In this interpretation, the notations [−∞, b], (−∞, b], [a, +∞], and [a, +∞) are all meaningful and distinct. In particular, (−∞, +∞) denotes the set of all ordinary real numbers, while [−∞, +∞] denotes the extended reals.
Even in the context of the ordinary reals, one may use an infinite endpoint to indicate that there is no bound in that direction. For example, (0, +∞) is the set of positive real numbers, also written as ℝ₊. The context affects some of the above definitions and terminology. For instance, the interval (−∞, +∞) = ℝ is closed in the realm of ordinary reals, but not in the realm of the extended reals.
Integer intervals
When a and b are integers, the notation ⟦a, b⟧, or [a .. b] or {a .. b} or just a .. b, is sometimes used to indicate the interval of all integers between a and b included. The notation [a .. b] is used in some programming languages; in Pascal, for example, it is used to formally define a subrange type, most frequently used to specify lower and upper bounds of valid indices of an array.
Another way to interpret integer intervals is as sets defined by enumeration, using ellipsis notation.
An integer interval that has a finite lower or upper endpoint always includes that endpoint. Therefore, the exclusion of endpoints can be explicitly denoted by writing a .. b − 1, a + 1 .. b, or a + 1 .. b − 1. Alternate-bracket notations like [a .. b) or [a .. b[ are rarely used for integer intervals.
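In Python, for instance (a general illustration, not one of the notations above), the built-in range follows the half-open convention, so the inclusive integer interval a .. b corresponds to range(a, b + 1):

a, b = 3, 7
closed = list(range(a, b + 1))   # a .. b, both endpoints included -> [3, 4, 5, 6, 7]
right_open = list(range(a, b))   # a .. b - 1, upper endpoint excluded -> [3, 4, 5, 6]
print(closed, right_open)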
Properties
The intervals are precisely the connected subsets of ℝ. It follows that the image of an interval by any continuous function from ℝ to ℝ is also an interval. This is one formulation of the intermediate value theorem.
The intervals are also the convex subsets of ℝ. The interval enclosure of a subset X ⊆ ℝ is also the convex hull of X.
The closure of an interval is the union of the interval and the set of its finite endpoints, and hence is also an interval. (The latter also follows from the fact that the closure of every connected subset of a topological space is a connected subset.) In other words, we have
cl(I) = I ∪ {finite endpoints of I}.
The intersection of any collection of intervals is always an interval. The union of two intervals is an interval if and only if they have a non-empty intersection or an open endpoint of one interval is a closed endpoint of the other; for example, (a, b) ∪ [b, c] = (a, c].
If ℝ is viewed as a metric space, its open balls are the open bounded intervals (c − r, c + r), and its closed balls are the closed bounded intervals [c − r, c + r]. In particular, the metric and order topologies in the real line coincide, which is the standard topology of the real line.
Any element x of an interval I defines a partition of I into three disjoint intervals I₁, I₂, I₃: respectively, the elements of I that are less than x, the singleton {x}, and the elements that are greater than x. The parts I₁ and I₃ are both non-empty (and have non-empty interiors), if and only if x is in the interior of I. This is an interval version of the trichotomy principle.
Dyadic intervals
A dyadic interval is a bounded real interval whose endpoints are j/2ⁿ and (j + 1)/2ⁿ, where j and n are integers. Depending on the context, either endpoint may or may not be included in the interval.
Dyadic intervals have the following properties:
The length of a dyadic interval is always an integer power of two.
Each dyadic interval is contained in exactly one dyadic interval of twice the length.
Each dyadic interval is spanned by two dyadic intervals of half the length.
If two open dyadic intervals overlap, then one of them is a subset of the other.
The dyadic intervals consequently have a structure that reflects that of an infinite binary tree.
Dyadic intervals are relevant to several areas of numerical analysis, including adaptive mesh refinement, multigrid methods and wavelet analysis. Another way to represent such a structure is p-adic analysis (for p = 2).
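The binary-tree structure can be made concrete with a small Python sketch (hypothetical helper names; a dyadic interval [j/2ⁿ, (j + 1)/2ⁿ) is encoded here as the pair (n, j)):

def endpoints(n, j):
    # the dyadic interval [j / 2**n, (j + 1) / 2**n)
    return (j / 2**n, (j + 1) / 2**n)

def parent(n, j):
    # the unique dyadic interval of twice the length containing (n, j)
    return (n - 1, j // 2)

def children(n, j):
    # the two dyadic intervals of half the length that span (n, j)
    return (n + 1, 2 * j), (n + 1, 2 * j + 1)

# [3/8, 1/2) sits inside [1/4, 1/2) and splits into [6/16, 7/16) and [7/16, 1/2)
print(endpoints(3, 3), parent(3, 3), children(3, 3))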
Generalizations
Balls
An open finite interval (a, b) is a 1-dimensional open ball with a center at (a + b)/2 and a radius of (b − a)/2. The closed finite interval [a, b] is the corresponding closed ball, and the interval's two endpoints {a, b} form a 0-dimensional sphere. Generalized to n-dimensional Euclidean space, a ball is the set of points whose distance from the center is less than the radius. In the 2-dimensional case, a ball is called a disk.
If a half-space is taken as a kind of degenerate ball (without a well-defined center or radius), a half-space can be taken as analogous to a half-bounded interval, with its boundary plane as the (degenerate) sphere corresponding to the finite endpoint.
Multi-dimensional intervals
A finite interval is (the interior of) a 1-dimensional hyperrectangle. Generalized to real coordinate space ℝⁿ, an axis-aligned hyperrectangle (or box) is the Cartesian product of n finite intervals. For n = 2 this is a rectangle; for n = 3 this is a rectangular cuboid (also called a "box").
Allowing for a mix of open, closed, and infinite endpoints, the Cartesian product of any n intervals, I = I₁ × I₂ × ⋯ × Iₙ, is sometimes called an n-dimensional interval.
A facet of such an interval I is the result of replacing any non-degenerate interval factor Iₖ by a degenerate interval consisting of a finite endpoint of Iₖ. The faces of I comprise I itself and all faces of its facets. The corners of I are the faces that consist of a single point of ℝⁿ.
Convex polytopes
Any finite interval can be constructed as the intersection of half-bounded intervals (with an empty intersection taken to mean the whole real line), and the intersection of any number of half-bounded intervals is a (possibly empty) interval. Generalized to n-dimensional affine space, an intersection of half-spaces (of arbitrary orientation) is (the interior of) a convex polytope, or in the 2-dimensional case a convex polygon.
Domains
An open interval is a connected open set of real numbers. Generalized to topological spaces in general, a non-empty connected open set is called a domain.
Complex intervals
Intervals of complex numbers can be defined as regions of the complex plane, either rectangular or circular.
Intervals in posets and preordered sets
Definitions
The concept of intervals can be defined in arbitrary partially ordered sets or, more generally, in arbitrary preordered sets. For a preordered set (X, ≲) and two elements a, b ∈ X, one similarly defines the intervals
[a, b] = {x ∈ X : a ≲ x and x ≲ b},
(a, b) = {x ∈ X : a < x and x < b},
together with the half-open and half-bounded variants,
where a < b means a ≲ b and not b ≲ a. Actually, the intervals with single or no endpoints are the same as the intervals with two endpoints in the larger preordered set
X ∪ {−∞, +∞}
defined by adding new smallest and greatest elements −∞ and +∞ (even if there already were ones), that are subsets of X. In the case of X = ℝ, one may take X ∪ {−∞, +∞} to be the extended real line.
Convex sets and convex components in order theory
A subset A of the preordered set (X, ≲) is (order-)convex if for every x, y ∈ A and every z ∈ X with x ≲ z ≲ y, we have z ∈ A. Unlike in the case of the real line, a convex set of a preordered set need not be an interval. For example, in the totally ordered set ℚ of rational numbers, the set
A = {x ∈ ℚ : x² < 2}
is convex, but not an interval of ℚ, since there is no square root of two in ℚ.
Let X be a preordered set and let Y ⊆ X. The convex sets of X contained in Y form a poset under inclusion. A maximal element of this poset is called a convex component of Y. By the Zorn lemma, any convex set of X contained in Y is contained in some convex component of Y, but such components need not be unique. In a totally ordered set, such a component is always unique. That is, the convex components of a subset of a totally ordered set form a partition.
Properties
A generalization of the characterizations of the real intervals follows. For a non-empty subset I of a linear continuum L, the following conditions are equivalent.
The set I is an interval.
The set I is order-convex.
The set I is a connected subset when L is endowed with the order topology.
For a subset A of a lattice L, the following conditions are equivalent.
The set A is a sublattice and an (order-)convex set.
There is an ideal I ⊆ L and a filter F ⊆ L such that A = I ∩ F.
Applications
In general topology
Every Tychonoff space is embeddable into a product space of copies of the closed unit interval [0, 1]. Actually, every Tychonoff space that has a base of cardinality κ is embeddable into the product of κ copies of the intervals.
The concepts of convex sets and convex components are used in a proof that every totally ordered set endowed with the order topology is completely normal, or, moreover, monotonically normal.
Topological algebra
Intervals can be associated with points of the plane, and hence regions of intervals can be associated with regions of the plane. Generally, an interval in mathematics corresponds to an ordered pair (x, y) taken from the direct product ℝ × ℝ of real numbers with itself, where it is often assumed that x ≤ y. For purposes of mathematical structure, this restriction is discarded, and "reversed intervals" where x > y are allowed. Then, the collection of all intervals [x, y] can be identified with the topological ring formed by the direct sum of ℝ with itself, where addition and multiplication are defined component-wise.
The direct sum algebra ℝ ⊕ ℝ has two ideals, { [x, 0] : x ∈ ℝ } and { [0, y] : y ∈ ℝ }. The identity element of this algebra is the condensed interval [1, 1]. If interval [x, y] is not in one of the ideals, then it has multiplicative inverse [1/x, 1/y]. Endowed with the usual topology, the algebra of intervals forms a topological ring. The group of units of this ring consists of four quadrants determined by the axes, or ideals in this case. The identity component of this group is quadrant I.
Every interval can be considered a symmetric interval around its midpoint. In a reconfiguration published in 1956 by M. Warmus, the axis of "balanced intervals" is used along with the axis of intervals that reduce to a point. Instead of the direct sum ℝ ⊕ ℝ, the ring of intervals has been identified with the hyperbolic numbers by M. Warmus and D. H. Lehmer through the identification
z = (x + y)/2 + j (y − x)/2,
where j² = 1.
This linear mapping of the plane, which amounts to a ring isomorphism, provides the plane with a multiplicative structure having some analogies to ordinary complex arithmetic, such as polar decomposition.
| Mathematics | Real analysis | null |
49176 | https://en.wikipedia.org/wiki/Conjugacy%20class | Conjugacy class | In mathematics, especially group theory, two elements a and b of a group are conjugate if there is an element g in the group such that b = gag⁻¹. This is an equivalence relation whose equivalence classes are called conjugacy classes. In other words, each conjugacy class is closed under b ↦ gbg⁻¹ for all elements g in the group.
Members of the same conjugacy class cannot be distinguished by using only the group structure, and therefore share many properties. The study of conjugacy classes of non-abelian groups is fundamental for the study of their structure. For an abelian group, each conjugacy class is a set containing one element (singleton set).
Functions that are constant for members of the same conjugacy class are called class functions.
Definition
Let G be a group. Two elements a, b ∈ G are conjugate if there exists an element g ∈ G such that b = gag⁻¹, in which case b is called a conjugate of a and a is called a conjugate of b.
In the case of the general linear group of invertible matrices, the conjugacy relation is called matrix similarity.
It can be easily shown that conjugacy is an equivalence relation and therefore partitions G into equivalence classes. (This means that every element of the group belongs to precisely one conjugacy class, and the classes Cl(a) and Cl(b) are equal if and only if a and b are conjugate, and disjoint otherwise.) The equivalence class that contains the element a ∈ G is
Cl(a) = { gag⁻¹ : g ∈ G }
and is called the conjugacy class of a. The class number of G is the number of distinct (nonequivalent) conjugacy classes. All elements belonging to the same conjugacy class have the same order.
Conjugacy classes may be referred to by describing them, or more briefly by abbreviations such as "6A", meaning "a certain conjugacy class with elements of order 6", and "6B" would be a different conjugacy class with elements of order 6; the conjugacy class 1A is the conjugacy class of the identity which has order 1. In some cases, conjugacy classes can be described in a uniform way; for example, in the symmetric group they can be described by cycle type.
Examples
The symmetric group S₃, consisting of the 6 permutations of three elements, has three conjugacy classes:
No change. The single member has order 1.
Transposing two elements. The 3 members all have order 2.
A cyclic permutation of all three elements. The 2 members both have order 3.
These three classes also correspond to the classification of the isometries of an equilateral triangle.
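The three classes can be recovered by brute force; the following Python sketch (hypothetical helper names, with permutations represented as tuples of images of 0, 1, 2) conjugates every element by every other and collects the orbits:

from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); permutations are tuples of images of 0..n-1
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))   # the symmetric group S3
classes = {frozenset(compose(compose(g, a), inverse(g)) for g in G) for a in G}
for cl in classes:
    print(sorted(cl))
# prints three classes, of sizes 1, 3 and 2, matching the list above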
The symmetric group S₄, consisting of the 24 permutations of four elements, has five conjugacy classes, listed with their description, cycle type, member order, and members:
No change. Cycle type = [1⁴]. Order = 1. Members = { (1, 2, 3, 4) }.
Interchanging two (other two remain unchanged). Cycle type = [1²2¹]. Order = 2. Members = { (1, 2, 4, 3), (1, 4, 3, 2), (1, 3, 2, 4), (4, 2, 3, 1), (3, 2, 1, 4), (2, 1, 3, 4) }.
A cyclic permutation of three (other one remains unchanged). Cycle type = [1¹3¹]. Order = 3. Members = { (1, 3, 4, 2), (1, 4, 2, 3), (3, 2, 4, 1), (4, 2, 1, 3), (4, 1, 3, 2), (2, 4, 3, 1), (3, 1, 2, 4), (2, 3, 1, 4) }.
A cyclic permutation of all four. Cycle type = [4¹]. Order = 4. Members = { (2, 3, 4, 1), (2, 4, 1, 3), (3, 1, 4, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 3, 1, 2) }.
Interchanging two, and also the other two. Cycle type = [2²]. Order = 2. Members = { (2, 1, 4, 3), (4, 3, 2, 1), (3, 4, 1, 2) }.
The proper rotations of the cube, which can be characterized by permutations of the body diagonals, are also described by conjugation in S₄.
In general, the number of conjugacy classes in the symmetric group Sₙ is equal to the number of integer partitions of n. This is because each conjugacy class corresponds to exactly one partition of {1, 2, ..., n} into cycles, up to permutation of the elements of {1, 2, ..., n}.
In general, the Euclidean group can be studied by conjugation of isometries in Euclidean space.
Example
Let G = S₃, the symmetric group on three elements, and write permutations in cycle notation. Take
a = (2 3)
x = (1 2 3)
x⁻¹ = (1 3 2)
Then
xax⁻¹ = (1 2 3)(2 3)(1 3 2) = (1 3),
so (1 3) is a conjugate of (2 3).
Properties
The identity element is always the only element in its class, that is Cl(e) = {e}.
If G is abelian, then gag⁻¹ = a for all a, g ∈ G, i.e. Cl(a) = {a} for all a ∈ G (and the converse is also true: if all conjugacy classes are singletons, then G is abelian).
If two elements a, b ∈ G belong to the same conjugacy class (that is, if they are conjugate), then they have the same order. More generally, every statement about a can be translated into a statement about b = gag⁻¹, because the map φ(x) = gxg⁻¹ is an automorphism of G called an inner automorphism. See the next property for an example.
If a and b are conjugate, then so are their powers aᵏ and bᵏ. (Proof: if b = gag⁻¹, then bᵏ = gaᵏg⁻¹.) Thus taking kth powers gives a map on conjugacy classes, and one may consider which conjugacy classes are in its preimage. For example, in the symmetric group, the square of an element of type (3)(2) (a 3-cycle and a 2-cycle) is an element of type (3); therefore one of the power-up classes of (3) is the class (3)(2) (where the class of a is a power-up class of the class of aᵏ).
An element a ∈ G lies in the center Z(G) of G if and only if its conjugacy class has only one element, a itself. More generally, if CG(a) denotes the centralizer of a ∈ G, i.e., the subgroup consisting of all elements g such that ga = ag, then the index [G : CG(a)] is equal to the number of elements in the conjugacy class of a (by the orbit-stabilizer theorem).
Take σ ∈ Sₙ and let m₁, m₂, ..., mₛ be the distinct integers which appear as lengths of cycles in the cycle type of σ (including 1-cycles). Let kᵢ be the number of cycles of length mᵢ in σ, for each i = 1, ..., s (so that k₁m₁ + k₂m₂ + ⋯ + kₛmₛ = n). Then the number of conjugates of σ is:
n! / ((k₁! m₁^k₁)(k₂! m₂^k₂) ⋯ (kₛ! mₛ^kₛ)).
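This count is easy to check by machine; here is a small Python sketch (hypothetical helper name) that evaluates the formula from a cycle type, written as a list of cycle lengths:

from math import factorial
from collections import Counter

def conjugacy_class_size(cycle_type):
    # cycle_type: cycle lengths including 1-cycles, summing to n
    n = sum(cycle_type)
    size = factorial(n)
    for length, count in Counter(cycle_type).items():
        size //= factorial(count) * length ** count
    return size

# in S4, the 3-cycles (cycle type [3, 1]) form a class of 8 elements and the
# double transpositions (cycle type [2, 2]) a class of 3, matching the classes
# of S4 listed above
print(conjugacy_class_size([3, 1]), conjugacy_class_size([2, 2]))   # 8 3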
Conjugacy as group action
For any two elements g, x ∈ G, let
g · x := gxg⁻¹.
This defines a group action of G on G. The orbits of this action are the conjugacy classes, and the stabilizer of a given element is the element's centralizer.
Similarly, we can define a group action of G on the set of all subsets of G, by writing
g · S := gSg⁻¹,
or on the set of the subgroups of G.
Conjugacy class equation
If G is a finite group, then for any group element a, the elements in the conjugacy class of a are in one-to-one correspondence with cosets of the centralizer CG(a). This can be seen by observing that any two elements b and c belonging to the same coset (and hence, b = cz for some z in the centralizer CG(a)) give rise to the same element when conjugating a:
bab⁻¹ = (cz)a(cz)⁻¹ = czaz⁻¹c⁻¹ = cac⁻¹.
That can also be seen from the orbit-stabilizer theorem, when considering the group as acting on itself through conjugation, so that orbits are conjugacy classes and stabilizer subgroups are centralizers. The converse holds as well.
Thus the number of elements in the conjugacy class of a is the index [G : CG(a)] of the centralizer CG(a) in G; hence the size of each conjugacy class divides the order of the group.
Furthermore, if we choose a single representative element xᵢ from every conjugacy class, we infer from the disjointness of the conjugacy classes that
|G| = Σᵢ [G : CG(xᵢ)],
where CG(xᵢ) is the centralizer of the element xᵢ. Observing that each element of the center Z(G) forms a conjugacy class containing just itself gives rise to the class equation:
|G| = |Z(G)| + Σᵢ [G : CG(xᵢ)],
where the sum is over a representative element from each conjugacy class that is not in the center.
Knowledge of the divisors of the group order can often be used to gain information about the order of the center or of the conjugacy classes.
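For a concrete check, take G = S₃. The center is trivial, the 3-cycles form one class of size 2 = [S₃ : CG(σ)] (the centralizer of a 3-cycle σ is ⟨σ⟩, of order 3), and the transpositions form one class of size 3 = [S₃ : CG(τ)] (the centralizer of a transposition τ is ⟨τ⟩, of order 2), so the class equation reads 6 = 1 + 2 + 3.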
Example
Consider a finite p-group G (that is, a group with order pⁿ, where p is a prime number and n > 0). We are going to prove that every finite p-group has a non-trivial center.
Since the order of any conjugacy class of G must divide the order of G, it follows that each conjugacy class that is not in the center has order some power of p, say p^(kᵢ), where 0 < kᵢ < n. But then the class equation requires that |G| = pⁿ = |Z(G)| + Σᵢ p^(kᵢ). From this we see that p must divide |Z(G)|, so |Z(G)| > 1.
In particular, when n = 2, G is an abelian group, since any non-trivial group element is of order p or p². If some element of G is of order p², then G is isomorphic to the cyclic group of order p², hence abelian. On the other hand, if every non-trivial element in G is of order p, then by the conclusion above |Z(G)| = p or p². We only need to consider the case when |Z(G)| = p: then there is an element b of G which is not in the center of G. Note that the centralizer CG(b) includes b and the center, which does not contain b but has at least p elements. Hence the order of CG(b) is strictly larger than p, therefore |CG(b)| = p², therefore b is an element of the center of G, a contradiction. Hence |Z(G)| = p², so G is abelian and in fact isomorphic to the direct product of two cyclic groups each of order p.
Conjugacy of subgroups and general subsets
More generally, given any subset S ⊆ G (S not necessarily a subgroup), define a subset T ⊆ G to be conjugate to S if there exists some g ∈ G such that T = gSg⁻¹. Let Cl(S) be the set of all subsets T ⊆ G such that T is conjugate to S.
A frequently used theorem is that, given any subset S ⊆ G, the index of N(S) (the normalizer of S) in G equals the cardinality of Cl(S):
|Cl(S)| = [G : N(S)].
This follows since, if g, h ∈ G, then gSg⁻¹ = hSh⁻¹ if and only if g⁻¹h ∈ N(S), in other words, if and only if g and h are in the same coset of N(S).
By using S = {a}, this formula generalizes the one given earlier for the number of elements in a conjugacy class.
The above is particularly useful when talking about subgroups of G. The subgroups can thus be divided into conjugacy classes, with two subgroups belonging to the same class if and only if they are conjugate.
Conjugate subgroups are isomorphic, but isomorphic subgroups need not be conjugate. For example, an abelian group may have two different subgroups which are isomorphic, but they are never conjugate.
Geometric interpretation
Conjugacy classes in the fundamental group of a path-connected topological space can be thought of as equivalence classes of free loops under free homotopy.
Conjugacy class and irreducible representations in finite group
In any finite group, the number of nonisomorphic irreducible representations over the complex numbers is precisely the number of conjugacy classes.
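For example, S₃ has three conjugacy classes and correspondingly three irreducible complex representations: the trivial representation and the sign representation (each of dimension 1) and the standard representation (of dimension 2); as must happen, the squared dimensions sum to the group order, 1² + 1² + 2² = 6.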
| Mathematics | Abstract algebra | null |