| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
13,244,943 | https://en.wikipedia.org/wiki/Acerinox%20accident | The Acerinox accident was a radioactive contamination accident in the province of Cádiz. In May 1998, a caesium-137 source managed to pass through the monitoring equipment in an Acerinox scrap metal reprocessing plant in Los Barrios, Spain. When melted, the caesium-137 caused the release of a radioactive cloud. The Acerinox chimney detectors failed to detect it, but it was eventually detected in France, Italy, Switzerland, Germany, and Austria. The activity concentrations measured were up to 1000 times higher than normal background levels, although the absolute values recorded are still regarded as negligible in terms of radiation protection.
The accident contaminated the scrap metal reprocessing plant, plus two other steel mills to which its waste was sent for decontamination. According to independent laboratories, the ashes produced by the Acerinox factory had activities between 640 and 1420 becquerels per gram (the Euratom norm is 10 Bq/g), high enough to be a threat to the public.
As for the radiological consequences of the event, six people were exposed to slight levels of caesium-137 contamination. The estimated total costs for clean-up, waste storage, and lost production at the factory were around 26 million US dollars (most of it due to the lost production).
See also
List of civilian radiation accidents
References
External links
http://www.iaea.org/Publications/Booklets/SealedRadioactiveSources/scrap_lessons.html
http://www10.antenna.nl/wise/495/4895.html
Report of the Council of Nuclear Security of Spain (Spanish)
Man-made disasters in Spain
1998 industrial disasters
1998 in Spain
1998 health disasters
Radiation accidents and incidents
Waste disposal incidents
Caesium
Radioactively contaminated areas
May 1998 events in Europe
Los Barrios
Pollution in Spain | Acerinox accident | [
"Chemistry",
"Technology"
] | 381 | [
"Radioactively contaminated areas",
"Soil contamination",
"Radioactive contamination"
] |
13,245,649 | https://en.wikipedia.org/wiki/Bubble%20point | In thermodynamics, the bubble point is the temperature (at a given pressure) at which the first bubble of vapor forms when heating a liquid consisting of two or more components. Given that the vapor will generally have a different composition from the liquid, the bubble point (along with the dew point) at different compositions is useful data when designing distillation systems.
For a single component the bubble point and the dew point are the same and are referred to as the boiling point.
Calculating the bubble point
At the bubble point, the following relationship holds:

\sum_i y_i = \sum_i K_i x_i = 1

where

K_i \equiv y_i / x_i .

K is the distribution coefficient or K factor, defined as the ratio of the mole fraction in the vapor phase (y_i) to the mole fraction in the liquid phase (x_i) at equilibrium.
When Raoult's law and Dalton's law hold for the mixture, the K factor is defined as the ratio of the vapor pressure to the total pressure of the system:

K_i = P_i^{sat} / P .
Given either of x_i or y_i and either the temperature or pressure of a two-component system, calculations can be performed to determine the unknown information.
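As a concrete illustration, the following is a minimal sketch of a bubble-point temperature calculation, assuming Raoult's law holds and using the Antoine equation with representative constants for a benzene–toluene mixture; the component names, constants, and the simple bisection solver are illustrative choices, not values taken from this article.

```python
def p_sat(T, A, B, C):
    """Antoine equation: saturation pressure in mmHg, with T in degrees Celsius."""
    return 10 ** (A - B / (C + T))

# Representative Antoine constants (mmHg, deg C); treat these as illustrative values.
ANTOINE = {
    "benzene": (6.90565, 1211.033, 220.790),
    "toluene": (6.95464, 1344.800, 219.482),
}

def bubble_point_T(x, P_total, T_lo=0.0, T_hi=200.0, tol=1e-6):
    """Solve sum_i x_i * Psat_i(T) = P_total for T by bisection (Raoult's law)."""
    def residual(T):
        return sum(xi * p_sat(T, *ANTOINE[name]) for name, xi in x.items()) - P_total
    T_mid = 0.5 * (T_lo + T_hi)
    for _ in range(100):
        T_mid = 0.5 * (T_lo + T_hi)
        if residual(T_mid) > 0:      # total vapor pressure too high -> try a lower T
            T_hi = T_mid
        else:
            T_lo = T_mid
        if T_hi - T_lo < tol:
            break
    return T_mid

x = {"benzene": 0.4, "toluene": 0.6}          # liquid-phase mole fractions
T_bub = bubble_point_T(x, P_total=760.0)      # total pressure of 1 atm, in mmHg
K = {name: p_sat(T_bub, *ANTOINE[name]) / 760.0 for name in x}
y = {name: K[name] * x[name] for name in x}   # vapor composition of the first bubble
print(f"Bubble point: {T_bub:.1f} deg C, vapor mole fractions: {y}")
```

At the returned temperature the K factors satisfy the bubble-point condition \sum_i K_i x_i = 1, and the y_i values give the composition of the first bubble of vapor.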
See also
Phase diagram
Azeotrope
Dew point
References
Temperature
Phase transitions
Gases | Bubble point | [
"Physics",
"Chemistry"
] | 224 | [
"Scalar physical quantities",
"Temperature",
"Physical phenomena",
"Phase transitions",
"Physical quantities",
"Gases",
"Thermodynamic properties",
"SI base quantities",
"Intensive quantities",
"Phases of matter",
"Critical phenomena",
"Thermodynamics",
"Statistical mechanics",
"Wikipedia ... |
42,967 | https://en.wikipedia.org/wiki/Ornithology | Ornithology is a branch of zoology that concerns the study of birds. Several aspects of ornithology differ from related disciplines, due partly to the high visibility and the aesthetic appeal of birds. It has also been an area with a large contribution made by amateurs in terms of time, resources, and financial support. Studies on birds have helped develop key concepts in biology, including in evolution, behaviour, and ecology, such as the definition of species, the process of speciation, instinct, learning, ecological niches, guilds, insular biogeography, phylogeography, and conservation.
While early ornithology was principally concerned with descriptions and distributions of species, ornithologists today seek answers to very specific questions, often using birds as models to test hypotheses or predictions based on theories. Most modern biological theories apply across life forms, and the number of scientists who identify themselves as "ornithologists" has therefore declined. A wide range of tools and techniques are used in ornithology, both inside the laboratory and out in the field, and innovations are constantly made. Most biologists who recognise themselves as "ornithologists" study specific biology research areas, such as anatomy, physiology, taxonomy (phylogenetics), ecology, or behaviour.
Definition and etymology
The word "ornithology" comes from the late 16th-century Latin ornithologia meaning "bird science" from the Greek ὄρνις ornis ("bird") and λόγος logos ("theory, science, thought").
History
The history of ornithology largely reflects the trends in the history of biology, as well as many other scientific disciplines, including ecology, anatomy, physiology, paleontology, and more recently, molecular biology. Trends include the move from mere descriptions to the identification of patterns, thus towards elucidating the processes that produce these patterns.
Early knowledge and study
Humans have had an observational relationship with birds since prehistory, with some stone-age drawings being amongst the oldest indications of an interest in birds. Birds were perhaps important as food sources, and bones of as many as 80 species have been found in excavations of early Stone Age settlements. Water bird and seabird remains have also been found in shell mounds on the island of Oronsay off the coast of Scotland.
Cultures around the world have rich vocabularies related to birds. Traditional bird names are often based on detailed knowledge of the behaviour, with many names being onomatopoeic, and still in use. Traditional knowledge may also involve the use of birds in folk medicine, and knowledge of these practices is passed on through oral traditions (see ethnoornithology). Hunting of wild birds as well as their domestication would have required considerable knowledge of their habits. Poultry farming and falconry were practised from early times in many parts of the world. Artificial incubation of poultry was practised in China around 246 BC and in Egypt by at least 400 BC. The Egyptians also made use of birds in their hieroglyphic scripts, many of which, though stylized, are still identifiable to species. Early written records provide valuable information on the past distributions of species. For instance, Xenophon records the abundance of the ostrich in Assyria (Anabasis, i. 5); this subspecies from Asia Minor is extinct and all extant ostrich races are today restricted to Africa. Other old writings such as the Vedas (1500–800 BC) demonstrate the careful observation of avian life histories and include the earliest reference to the habit of brood parasitism by the Asian koel (Eudynamys scolopaceus). Like writing, the early art of China, Japan, Persia, and India also demonstrates such knowledge, with examples of scientifically accurate bird illustrations.
Aristotle in 350 BC in his History of Animals noted the habit of bird migration, moulting, egg laying, and lifespans, as well as compiling a list of 170 different bird species. However, he also introduced and propagated several myths, such as the idea that swallows hibernated in winter, although he noted that cranes migrated from the steppes of Scythia to the marshes at the headwaters of the Nile. The idea of swallow hibernation became so well established that even as late as 1878, Elliott Coues could list as many as 182 contemporary publications dealing with the hibernation of swallows, with little published evidence to contradict the theory. Similar misconceptions existed regarding the breeding of barnacle geese. Their nests had not been seen, and they were believed to grow by transformations of goose barnacles, an idea that became prevalent from around the 11th century and was noted by Bishop Giraldus Cambrensis (Gerald of Wales) in Topographia Hiberniae (1187). Around 77 AD, Pliny the Elder described birds, among other creatures, in his Historia Naturalis.
The earliest record of falconry comes from the reign of Sargon II (722–705 BC) in Assyria. Falconry is thought to have made its entry to Europe only after AD 400, brought in from the east after invasions by the Huns and Alans. Starting from the eighth century, numerous Arabic works on the subject and general ornithology were written, as well as translations of the works of ancient writers from Greek and Syriac. In the 12th and 13th centuries, crusades and conquest had subjugated Islamic territories in southern Italy, central Spain, and the Levant under European rule, and for the first time translations into Latin of the great works of Arabic and Greek scholars were made with the help of Jewish and Muslim scholars, especially in Toledo, which had fallen into Christian hands in 1085 and whose libraries had escaped destruction. Michael Scotus from Scotland made a Latin translation of Aristotle's work on animals from Arabic here around 1215, which was disseminated widely and was the first time in a millennium that this foundational text on zoology became available to Europeans. Falconry was popular in the Norman court in Sicily, and a number of works on the subject were written in Palermo. Emperor Frederick II of Hohenstaufen (1194–1250) learned falconry during his youth in Sicily and later built up a menagerie and sponsored translations of Arabic texts, among which was the popular Arabic work known as the Liber Moaminus by an unknown author, translated into Latin by Theodore of Antioch from Syria in 1240–1241 as the De Scientia Venandi per Aves; Michael Scotus (who had moved to Palermo) also translated Ibn Sīnā's Kitāb al-Ḥayawān of 1027 for the Emperor, a commentary on and scientific update of Aristotle's work that formed part of Ibn Sīnā's massive Kitāb al-Šifāʾ. Frederick II eventually wrote his own treatise on falconry, the De arte venandi cum avibus, in which he related his ornithological observations and the results of the hunts and experiments his court enjoyed performing.
Several early German and French scholars compiled old works and conducted new research on birds. These included Guillaume Rondelet, who described his observations in the Mediterranean, and Pierre Belon, who described the fish and birds that he had seen in France and the Levant. Belon's Book of Birds (1555) is a folio volume with descriptions of some 200 species. His comparison of the skeleton of humans and birds is considered as a landmark in comparative anatomy. Volcher Coiter (1534–1576), a Dutch anatomist, made detailed studies of the internal structures of birds and produced a classification of birds, De Differentiis Avium (around 1572), that was based on structure and habits. Konrad Gesner wrote the Vogelbuch and Icones avium omnium around 1557. Like Gesner, Ulisse Aldrovandi, an encyclopedic naturalist, began a 14-volume natural history with three volumes on birds, entitled ornithologiae hoc est de avibus historiae libri XII, which was published from 1599 to 1603. Aldrovandi showed great interest in plants and animals, and his work included 3000 drawings of fruits, flowers, plants, and animals, published in 363 volumes. His Ornithology alone covers 2000 pages and includes such topics as the chicken and poultry-keeping techniques. He used a number of traits including behaviour, particularly bathing and dusting, to classify bird groups.
William Turner's Historia Avium (History of Birds), published at Cologne in 1544, was an early ornithological work from England. He noted the commonness of kites in English cities, where they snatched food out of the hands of children. He also included folk beliefs such as those of anglers, who believed that the osprey emptied their fishponds and would therefore kill the bird, mixing its flesh into their fish bait. Turner's work reflected the violent times in which he lived, and stands in contrast to later works such as Gilbert White's 1789 The Natural History and Antiquities of Selborne, which were written in a tranquil era.
In the 17th century, Francis Willughby (1635–1672) and John Ray (1627–1705) created the first major system of bird classification that was based on function and morphology rather than on form or behaviour. Willughby's Ornithologiae libri tres (1676), completed by John Ray, is sometimes considered to mark the beginning of scientific ornithology. Ray also worked on Ornithologia, which was published posthumously in 1713 as Synopsis methodica avium et piscium. The earliest list of British birds, Pinax Rerum Naturalium Britannicarum, was written by Christopher Merrett in 1667, but authors such as John Ray considered it of little value. Ray did, however, value the expertise of the naturalist Sir Thomas Browne (1605–82), who not only answered his queries on ornithological identification and nomenclature, but also those of Willughby and Merrett in letter correspondence. Browne himself in his lifetime kept an eagle, owl, cormorant, bittern, and ostrich, penned a tract on falconry, and introduced the words "incubation" and "oviparous" into the English language.
Towards the late 18th century, Mathurin Jacques Brisson (1723–1806) and Comte de Buffon (1707–1788) began new works on birds. Brisson produced a six-volume work, Ornithologie, in 1760, and Buffon's work on science, Histoire naturelle générale et particulière (1749–1804), included nine volumes on birds (volumes 16–24), the Histoire naturelle des oiseaux (1770–1785). Jacob Temminck sponsored François Le Vaillant (1753–1824) to collect bird specimens in Southern Africa, and Le Vaillant's six-volume Histoire naturelle des oiseaux d'Afrique (1796–1808) included many non-African birds. His other bird books, produced in collaboration with the artist Barraband, are considered among the most valuable illustrated guides ever produced. Louis Pierre Vieillot (1748–1831) spent 10 years studying North American birds and wrote the Histoire naturelle des oiseaux de l'Amerique septentrionale (1807–1808?). Vieillot pioneered the use of life histories and habits in classification. Alexander Wilson composed a nine-volume work, American Ornithology, published 1808–1814, which is the first such record of North American birds, significantly antedating Audubon. In the early 19th century, Lewis and Clark studied and identified many birds in the western United States. John James Audubon, born in 1785, observed and painted birds in France and later in the Ohio and Mississippi valleys. From 1827 to 1838, Audubon published The Birds of America, which was engraved by Robert Havell Sr. and his son Robert Havell Jr. Containing 435 engravings, it is often regarded as the greatest ornithological work in history.
Scientific studies
The emergence of ornithology as a scientific discipline began in the 18th century, when Mark Catesby published his two-volume Natural History of Carolina, Florida, and the Bahama Islands, a landmark work which included 220 hand-painted engravings and was the basis for many of the species Carl Linnaeus described in the 1758 Systema Naturae. Linnaeus' work revolutionised bird taxonomy by assigning every species a binomial name, categorising them into different genera. However, ornithology did not emerge as a specialised science until the Victorian era, with the popularization of natural history and the collection of natural objects such as bird eggs and skins. This specialization led to the formation in Britain of the British Ornithologists' Union in 1858. In 1859, the members founded its journal The Ibis. The sudden spurt in ornithology was also due in part to colonialism. A hundred years later, in 1959, R. E. Moreau noted that ornithology in this period had been preoccupied with the geographical distributions of various species of birds.
The bird collectors of the Victorian era observed the variations in bird forms and habits across geographic regions, noting local specialization and variation in widespread species. The collections of museums and private collectors grew with contributions from various parts of the world. The naming of species with binomials and the organization of birds into groups based on their similarities became the main work of museum specialists. The variations in widespread birds across geographical regions caused the introduction of trinomial names.
Many attempted to find patterns in the variations of birds. Friedrich Wilhelm Joseph Schelling (1775–1854), his student Johann Baptist von Spix (1781–1826), and several others believed that a hidden and innate mathematical order existed in the forms of birds. They believed that a "natural" classification was available and superior to "artificial" ones. A particularly popular idea was the Quinarian system popularised by Nicholas Aylward Vigors (1785–1840), William Sharp Macleay (1792–1865), William Swainson, and others. The idea was that nature followed a "rule of five" with five groups nested hierarchically. Some had attempted a rule of four, but Johann Jakob Kaup (1803–1873) insisted that the number five was special, noting that other natural entities such as the senses also came in fives. He followed this idea and demonstrated his view of the order within the crow family. Where he failed to find five genera, he left a blank insisting that a new genus would be found to fill these gaps. These ideas were replaced by more complex "maps" of affinities in works by Hugh Edwin Strickland and Alfred Russel Wallace. A major advance was made by Max Fürbringer in 1888, who established a comprehensive phylogeny of birds based on anatomy, morphology, distribution, and biology. This was developed further by Hans Gadow and others.
The Galapagos finches were especially influential in the development of Charles Darwin's theory of evolution. His contemporary Alfred Russel Wallace also noted these variations and the geographical separations between different forms leading to the study of biogeography. Wallace was influenced by the work of Philip Lutley Sclater on the distribution patterns of birds.
For Darwin, the problem was how species arose from a common ancestor, but he did not attempt to find rules for delineation of species. The species problem was tackled by the ornithologist Ernst Mayr, who was able to demonstrate that geographical isolation and the accumulation of genetic differences led to the splitting of species.
Early ornithologists were preoccupied with matters of species identification. Only systematics counted as true science and field studies were considered inferior through much of the 19th century. In 1901, Robert Ridgway wrote in the introduction to The Birds of North and Middle America that:
This early idea that the study of living birds was merely recreation held sway until ecological theories became the predominant focus of ornithological studies. The study of birds in their habitats was particularly advanced in Germany, with bird ringing stations established as early as 1903. By the 1920s, the Journal für Ornithologie included many papers on behaviour, ecology, anatomy, and physiology, many written by Erwin Stresemann. Stresemann changed the editorial policy of the journal, leading both to a unification of field and laboratory studies and a shift of research from museums to universities. Ornithology in the United States continued to be dominated by museum studies of morphological variations, species identities, and geographic distributions, until it was influenced by Stresemann's student Ernst Mayr. In Britain, some of the earliest ornithological works that used the word ecology appeared in 1915. The Ibis, however, resisted the introduction of these new methods of study, and no paper on ecology appeared until 1943. The work of David Lack on population ecology was pioneering. Newer quantitative approaches were introduced for the study of ecology and behaviour, and these were not readily accepted. For instance, Claud Ticehurst wrote:
David Lack's studies on population ecology sought to find the processes involved in the regulation of population based on the evolution of optimal clutch sizes. He concluded that population was regulated primarily by density-dependent controls, and also suggested that natural selection produces life-history traits that maximize the fitness of individuals. Others, such as Wynne-Edwards, interpreted population regulation as a mechanism that aided the "species" rather than individuals. This led to widespread and sometimes bitter debate on what constituted the "unit of selection". Lack also pioneered the use of many new tools for ornithological research, including the idea of using radar to study bird migration.
Birds were also widely used in studies of the niche hypothesis and Georgii Gause's competitive exclusion principle. Work on resource partitioning and the structuring of bird communities through competition was carried out by Robert MacArthur. Patterns of biodiversity also became a topic of interest. Work on the relationship of the number of species to area and its application in the study of island biogeography was pioneered by E. O. Wilson and Robert MacArthur. These studies led to the development of the discipline of landscape ecology.
John Hurrell Crook studied the behaviour of weaverbirds and demonstrated the links between ecological conditions, behaviour, and social systems. Principles from economics were introduced to the study of biology by Jerram L. Brown in his work on explaining territorial behaviour. This led to more studies of behaviour that made use of cost-benefit analyses. The rising interest in sociobiology also led to a spurt of bird studies in this area.
The study of imprinting behaviour in ducks and geese by Konrad Lorenz and the studies of instinct in herring gulls by Nicolaas Tinbergen led to the establishment of the field of ethology. The study of learning became an area of interest and the study of bird songs has been a model for studies in neuroethology. The study of hormones and physiology in the control of behaviour has also been aided by bird models. These have helped in finding the proximate causes of circadian and seasonal cycles. Studies on migration have attempted to answer questions on the evolution of migration, orientation, and navigation.
The growth of genetics and the rise of molecular biology led to the application of the gene-centered view of evolution to explain avian phenomena. Studies on kinship and altruism, such as helpers, became of particular interest. The idea of inclusive fitness was used to interpret observations on behaviour and life history, and birds were widely used as models for testing hypotheses based on theories postulated by W. D. Hamilton and others.
The new tools of molecular biology changed the study of bird systematics, which changed from being based on phenotype to the underlying genotype. The use of techniques such as DNA–DNA hybridization to study evolutionary relationships was pioneered by Charles Sibley and Jon Edward Ahlquist, resulting in what is called the Sibley–Ahlquist taxonomy. These early techniques have been replaced by newer ones based on mitochondrial DNA sequences and molecular phylogenetics approaches that make use of computational procedures for sequence alignment, construction of phylogenetic trees, and calibration of molecular clocks to infer evolutionary relationships. Molecular techniques are also widely used in studies of avian population biology and ecology.
Rise to popularity
The use of field glasses or telescopes for bird observation began in the 1820s and 1830s, with pioneers such as J. Dovaston (who also pioneered in the use of bird feeders), but instruction manuals did not begin to insist on the use of optical aids such as "a first-class telescope" or "field glass" until the 1880s.
The rise of field guides for the identification of birds was another major innovation. The early guides such as Thomas Bewick's two-volume guide and William Yarrell's three-volume guide were cumbersome, and mainly focused on identifying specimens in the hand. The earliest of the new generation of field guides was prepared by Florence Merriam, sister of Clinton Hart Merriam, the mammalogist. This was published in 1887 in a series Hints to Audubon Workers: Fifty Birds and How to Know Them in Grinnell's Audubon Magazine. These were followed by new field guides,
from the pioneering illustrated handbooks of Frank Chapman to the classic Field Guide to the Birds by Roger Tory Peterson in 1934, and Birds of the West Indies, published in 1936 by James Bond, whose name the amateur ornithologist Ian Fleming later borrowed for his famous literary spy.
The interest in birdwatching grew in popularity in many parts of the world, and the possibility for amateurs to contribute to biological studies was soon realized. As early as 1916, Julian Huxley wrote a two-part article in The Auk noting the tensions between amateurs and professionals and suggesting that the "vast army of bird lovers and bird watchers could begin providing the data scientists needed to address the fundamental problems of biology." The amateur ornithologist Harold F. Mayfield noted that the field was also funded by non-professionals: in 1975, 12% of the papers in American ornithology journals were written by persons who were not employed in biology-related work.
Organizations were started in many countries, and these grew rapidly in membership, most notable among them being the Royal Society for the Protection of Birds (RSPB) in Britain and the Audubon Society in the US, which started in 1885. Both these organizations were started with the primary objective of conservation. The RSPB, born in 1889, grew from a small Croydon-based group of women, including Eliza Phillips, Etta Lemon, Catherine Hall and Hannah Poland. Calling themselves the "Fur, Fin, and Feather Folk", the group met regularly and took a pledge "to refrain from wearing the feathers of any birds not killed for the purpose of food, the ostrich only exempted." The organization did not initially allow men as members, in retaliation for the British Ornithologists' Union's policy of keeping out women. Unlike the RSPB, which was primarily conservation oriented, the British Trust for Ornithology was started in 1933 with the aim of advancing ornithological research. Members were often involved in collaborative ornithological projects. These projects have resulted in atlases which detail the distribution of bird species across Britain. In Canada, citizen scientist Elsie Cassels studied migratory birds and was involved in establishing the Gaetz Lakes bird sanctuary. In the United States, the Breeding Bird Surveys, conducted by the United States Geological Survey, have also produced atlases with information on breeding densities and changes in the density and distribution over time. Other volunteer collaborative ornithology projects were subsequently established in other parts of the world.
Techniques
The tools and techniques of ornithology are varied, and new inventions and approaches are quickly incorporated. The techniques may be broadly divided into those that are applicable to specimens and those that are used in the field, but the division is rough, and many analysis techniques are usable both in the laboratory and the field, or may require a combination of field and laboratory techniques.
Collections
The earliest approaches to modern bird study involved the collection of eggs, a practice known as oology. While collecting became a pastime for many amateurs, the labels associated with these early egg collections made them unreliable for the serious study of bird breeding. To preserve eggs, a tiny hole was made and the contents extracted. This technique became standard with the invention of the blow drill around 1830. Egg collection is no longer popular; however, historic museum collections have been of value in determining the effects of pesticides such as DDT on physiology. Museum bird collections continue to act as a resource for taxonomic studies.
The use of bird skins to document species has been a standard part of systematic ornithology. Bird skins are prepared by retaining the key bones of the wings, legs, and skull along with the skin and feathers. In the past, they were treated with arsenic to prevent fungal and insect (mostly dermestid) attack. Arsenic, being toxic, was replaced by less-toxic borax. Amateur and professional collectors became familiar with these skinning techniques and started sending in their skins to museums, some of them from distant locations. This led to the formation of huge collections of bird skins in museums in Europe and North America. Many private collections were also formed. These became references for comparison of species, and the ornithologists at these museums were able to compare species from different locations, often places that they themselves never visited. Morphometrics of these skins, particularly the lengths of the tarsus, bill, tail, and wing became important in the descriptions of bird species. These skin collections have been used in more recent times for studies on molecular phylogenetics by the extraction of ancient DNA. The importance of type specimens in the description of species makes skin collections a vital resource for systematic ornithology. However, with the rise of molecular techniques, it has now become possible to establish the taxonomic status of new discoveries, such as the Bulo Burti boubou (Laniarius liberatus, no longer a valid species) and the Bugun liocichla (Liocichla bugunorum), using blood, DNA and feather samples as the holotype material.
Other methods of preservation include the storage of specimens in spirit. Such wet specimens have special value in physiological and anatomical study, apart from providing better quality of DNA for molecular studies. Freeze drying of specimens is another technique that has the advantage of preserving stomach contents and anatomy, although specimens tend to shrink, making them less reliable for morphometrics.
In the field
The study of birds in the field was helped enormously by improvements in optics. Photography made it possible to document birds in the field with great accuracy. High-power spotting scopes today allow observers to detect minute morphological differences that were earlier possible only by examination of the specimen "in the hand".
The capture and marking of birds enable detailed studies of life history. Techniques for capturing birds are varied and include the use of bird liming for perching birds, mist nets for woodland birds, cannon netting for open-area flocking birds, the bal-chatri trap for raptors, and decoys and funnel traps for water birds.
The bird in the hand may be examined and measurements can be made, including standard lengths and weights. Feather moult and skull ossification provide indications of age and health. Sex can be determined by examination of anatomy in some sexually nondimorphic species. Blood samples may be drawn to determine hormonal condition in studies of physiology, and to identify DNA markers for studying genetics and kinship in studies of breeding biology and phylogeography. Blood may also be used to identify pathogens and arthropod-borne viruses. Ectoparasites may be collected for studies of coevolution and zoonoses. In many cryptic species, measurements (such as the relative lengths of wing feathers in warblers) are vital in establishing identity. Captured birds are often marked for future recognition. Rings or bands provide long-lasting identification, but require capture for the information on them to be read. Field-identifiable marks such as coloured bands, wing tags, or dyes enable short-term studies where individual identification is required. Mark and recapture techniques make demographic studies possible. Ringing has traditionally been used in the study of migration. In recent times, satellite transmitters provide the ability to track migrating birds in near-real time.
Techniques for estimating population density include point counts, transects, and territory mapping. Observations are made in the field using carefully designed protocols, and the data may be analysed to estimate bird diversity, relative abundance, or absolute population densities. These methods may be used repeatedly over large timespans to monitor changes in the environment. Camera traps have been found to be a useful tool for the detection and documentation of elusive species and nest predators, and in the quantitative analysis of frugivory, seed dispersal, and behaviour.
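As a minimal illustration of how such count data can be summarized, the sketch below computes relative abundance and a Shannon diversity index from pooled point-count tallies; the species names and counts are invented, and the analysis shown is only one simple option among the methods mentioned above.

```python
from math import log

# Hypothetical pooled point-count tallies (individuals detected per species).
counts = {"great tit": 34, "blackbird": 21, "chaffinch": 55, "wren": 12}

total = sum(counts.values())
relative_abundance = {sp: n / total for sp, n in counts.items()}

# Shannon diversity index H' = -sum(p_i * ln p_i) over species proportions p_i.
shannon = -sum(p * log(p) for p in relative_abundance.values())

for sp, p in sorted(relative_abundance.items(), key=lambda kv: -kv[1]):
    print(f"{sp:>10}: {p:.2%}")
print(f"Shannon H' = {shannon:.2f}")
```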
In the laboratory
Many aspects of bird biology are difficult to study in the field. These include the study of behavioural and physiological changes that require a long duration of access to the bird. Nondestructive samples of blood or feathers taken during field studies may be studied in the laboratory. For instance, the variation in the ratios of stable hydrogen isotopes across latitudes makes establishing the origins of migrant birds possible using mass spectrometric analysis of feather samples. These techniques can be used in combination with other techniques such as ringing.
The first attenuated vaccine developed by Louis Pasteur, for fowl cholera, was tested on poultry in 1878. Anti-malarials were tested on birds which harbour avian-malarias. Poultry continues to be used as a model for many studies in non-mammalian immunology.
Studies in bird behaviour include the use of tamed and trained birds in captivity. Studies on bird intelligence and song learning have been largely laboratory-based. Field researchers may make use of a wide range of techniques, such as dummy owls to elicit mobbing behaviour, and dummy males or call playback to elicit territorial behaviour and thereby establish the boundaries of bird territories. Studies of bird migration, including aspects of navigation, orientation, and physiology, are often conducted using captive birds in special cages that record their activities. The Emlen funnel, for instance, makes use of a cage with an inkpad at the centre and a conical floor where the ink marks can be counted to identify the direction in which the bird attempts to fly. The funnel can have a transparent top, and visible cues such as the direction of sunlight may be controlled using mirrors, or the positions of the stars may be simulated in a planetarium.
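As an illustration of how such directional records might be summarized, the following sketch applies a standard circular-mean calculation to ink-mark counts per funnel sector; the sector counts are invented, and this is not a method described in this article, only a common way of reducing orientation data.

```python
from math import radians, sin, cos, atan2, degrees, sqrt

# Hypothetical ink-mark counts per 45-degree sector of an Emlen funnel (0 = north).
sector_counts = {0: 4, 45: 9, 90: 21, 135: 30, 180: 12, 225: 5, 270: 2, 315: 3}

# Vector-sum the sector directions weighted by counts to get the mean heading.
x = sum(n * cos(radians(a)) for a, n in sector_counts.items())
y = sum(n * sin(radians(a)) for a, n in sector_counts.items())
n_total = sum(sector_counts.values())

mean_heading = degrees(atan2(y, x)) % 360      # mean direction of attempted flight
concentration = sqrt(x**2 + y**2) / n_total    # mean vector length r (0 = scattered, 1 = unanimous)

print(f"Mean heading: {mean_heading:.0f} deg, concentration r = {concentration:.2f}")
```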
The entire genome of the domestic fowl (Gallus gallus) was sequenced in 2004, and was followed in 2008 by the genome of the zebra finch (Taeniopygia guttata). Such whole-genome sequencing projects allow for studies on evolutionary processes involved in speciation. Associations between the expression of genes and behaviour may be studied using candidate genes. Variations in the exploratory behaviour of great tits (Parus major) have been found to be linked with a gene orthologous to the human gene DRD4 (Dopamine receptor D4), which is known to be associated with novelty-seeking behaviour. The role of gene expression in developmental differences and morphological variations has been studied in Darwin's finches. Differences in the expression of Bmp4 have been shown to be associated with changes in the growth and shape of the beak.
The chicken has long been a model organism for studying vertebrate developmental biology. As the embryo is readily accessible, its development can be easily followed (unlike mice). This also allows the use of electroporation for studying the effect of adding or silencing a gene. Other tools for perturbing their genetic makeup are chicken embryonic stem cells and viral vectors.
Collaborative studies
Widespread interest in birds has made it possible to engage large numbers of people in collaborative ornithological projects that cover large geographic scales. These citizen science projects include nationwide projects such as the Christmas Bird Count, Backyard Bird Count, the North American Breeding Bird Survey, and the Canadian EPOQ, or regional projects such as the Asian Waterfowl Census and Spring Alive in Europe. These projects help to identify distributions of birds, their population densities and changes over time, arrival and departure dates of migration, breeding seasonality, and even population genetics. The results of many of these projects are published as bird atlases. Studies of migration using bird ringing or colour marking often involve the cooperation of people and organizations in different countries.
Applications
Wild birds impact many human activities, while domesticated birds are important sources of eggs, meat, feathers, and other products. Applied and economic ornithology aim to reduce the ill effects of problem birds and enhance gains from beneficial species.
The role of some species of birds as pests has been well known, particularly in agriculture. Granivorous birds such as the queleas in Africa are among the most numerous birds in the world, and foraging flocks can cause devastation. Many insectivorous birds are also noted as beneficial in agriculture. Many early studies on the benefits or damages caused by birds in fields were made by analysis of stomach contents and observation of feeding behaviour. Modern studies aimed at managing birds in agriculture make use of a wide range of principles from ecology. Intensive aquaculture has brought humans into conflict with fish-eating birds such as cormorants.
Large flocks of pigeons and starlings in cities are often considered a nuisance, and techniques to reduce their populations or their impacts are constantly being developed. Birds are also of medical importance, and their role as carriers of human diseases such as Japanese encephalitis, West Nile virus, and influenza H5N1 has been widely recognized. Bird strikes and the damage they cause in aviation are of particular importance, due to the potentially fatal consequences and the scale of the economic losses caused. The airline industry incurs worldwide damages of an estimated US$1.2 billion each year.
Many species of birds have been driven to extinction by human activities. Being conspicuous elements of the ecosystem, they have been considered as indicators of ecological health. They have also helped in gathering support for habitat conservation. Bird conservation requires specialized knowledge in aspects of biology and ecology, and may require the use of very location-specific approaches. Ornithologists contribute to conservation biology by studying the ecology of birds in the wild and identifying the key threats and ways of enhancing the survival of species. Critically endangered species such as the California condor have had to be captured and bred in captivity. Such ex situ conservation measures may be followed by reintroduction of the species into the wild.
See also
Avian ecology field methods
Bird observatory
List of birdwatchers
List of ornithological societies
List of ornithologists
List of ornithologists abbreviated names
List of ornithology awards
List of ornithology journals
References
Additional sources
(Reprinted from the 1884 Encyclopædia Britannica)
External links
Lewis, Daniel. The Feathery Tribe: Robert Ridgway and the Modern Study of Birds. Yale University Press.
Ornithologie (1773–1792) Francois Nicholas Martinet Digital Edition Smithsonian Digital Libraries
History of ornithology in North America
History of ornithology and ornithology collections in Victoria, Australia on Culture Victoria
History of ornithology in China
Hill ornithology collections
Subfields of zoology
Scoutcraft | Ornithology | [
"Biology"
] | 7,326 | [
"Subfields of zoology"
] |
42,975 | https://en.wikipedia.org/wiki/Hubble%27s%20law | Hubble's law, also known as the Hubble–Lemaître law, is the observation in physical cosmology that galaxies are moving away from Earth at speeds proportional to their distance. In other words, the farther a galaxy is from the Earth, the faster it moves away. A galaxy's recessional velocity is typically determined by measuring its redshift, a shift in the frequency of light emitted by the galaxy.
The discovery of Hubble's law is attributed to work published by Edwin Hubble in 1929, but the notion of the universe expanding at a calculable rate was first derived from general relativity equations in 1922 by Alexander Friedmann. The Friedmann equations showed the universe might be expanding, and presented the expansion speed if that were the case. Before Hubble, astronomer Carl Wilhelm Wirtz had, in 1922 and 1924, deduced with his own data that galaxies that appeared smaller and dimmer had larger redshifts and thus that more distant galaxies recede faster from the observer. In 1927, Georges Lemaître concluded that the universe might be expanding by noting the proportionality of the recessional velocity of distant bodies to their respective distances. He estimated a value for this ratio, which—after Hubble confirmed cosmic expansion and determined a more precise value for it two years later—became known as the Hubble constant. Hubble inferred the recession velocity of the objects from their redshifts, many of which were earlier measured and related to velocity by Vesto Slipher in 1917. Combining Slipher's velocities with Henrietta Swan Leavitt's intergalactic distance calculations and methodology allowed Hubble to better calculate an expansion rate for the universe.
Hubble's law is considered the first observational basis for the expansion of the universe, and is one of the pieces of evidence most often cited in support of the Big Bang model. The motion of astronomical objects due solely to this expansion is known as the Hubble flow. It is described by the equation v = H_0 D, with H_0 (the Hubble constant) the constant of proportionality between the "proper distance" D to a galaxy (which can change over time, unlike the comoving distance) and its speed of separation v, i.e. the derivative of proper distance with respect to the cosmic time coordinate. Though the Hubble constant H_0 is constant at any given moment in time, the Hubble parameter H(t), of which the Hubble constant is the current value, varies with time, so the term constant is sometimes thought of as somewhat of a misnomer.
The Hubble constant is most frequently quoted in km/s/Mpc, which gives, for a galaxy 1 megaparsec away, its recession speed in km/s directly as the numerical value of H_0. Simplifying the units of the generalized form v = H_0 D reveals that H_0 specifies a frequency (SI unit: s^{-1}), leading the reciprocal of H_0 to be known as the Hubble time (14.4 billion years). The Hubble constant can also be stated as a relative rate of expansion. In this form H_0 = 7%/Gyr, meaning that, at the current rate of expansion, it takes one billion years for an unbound structure to grow by 7%.
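As a rough numerical check of these statements, the sketch below converts the Hubble constant to a frequency, a Hubble time, and a percentage growth rate per gigayear; the value of 68 km/s/Mpc is an assumed illustrative value, not one quoted in this article.

```python
# Unit bookkeeping for the Hubble constant (assumed value, for illustration only).
H0_km_s_Mpc = 68.0                      # km/s per megaparsec (illustrative)
km_per_Mpc = 3.0857e19                  # kilometres in one megaparsec
seconds_per_Gyr = 3.1557e16             # seconds in one gigayear

H0_per_s = H0_km_s_Mpc / km_per_Mpc     # H0 as a frequency, in 1/s
hubble_time_Gyr = 1.0 / H0_per_s / seconds_per_Gyr
growth_percent_per_Gyr = H0_per_s * seconds_per_Gyr * 100.0

print(f"H0 = {H0_per_s:.3e} 1/s")
print(f"Hubble time = {hubble_time_Gyr:.1f} Gyr")              # roughly 14 Gyr
print(f"Expansion rate = {growth_percent_per_Gyr:.1f} %/Gyr")  # roughly 7 %/Gyr
```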
Discovery
A decade before Hubble made his observations, a number of physicists and mathematicians had established a consistent theory of an expanding universe by using Einstein field equations of general relativity. Applying the most general principles to the nature of the universe yielded a dynamic solution that conflicted with the then-prevalent notion of a static universe.
Slipher's observations
In 1912, Vesto M. Slipher measured the first Doppler shift of a "spiral nebula" (the obsolete term for spiral galaxies) and soon discovered that almost all such objects were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside the Milky Way galaxy.
FLRW equations
In 1922, Alexander Friedmann derived his Friedmann equations from Einstein field equations, showing that the universe might expand at a rate calculable by the equations. The parameter used by Friedmann is known today as the scale factor and can be considered as a scale invariant form of the proportionality constant of Hubble's law. Georges Lemaître independently found a similar solution in his 1927 paper discussed in the following section. The Friedmann equations are derived by inserting the metric for a homogeneous and isotropic universe into Einstein's field equations for a fluid with a given density and pressure. This idea of an expanding spacetime would eventually lead to the Big Bang and Steady State theories of cosmology.
Lemaître's equation
In 1927, two years before Hubble published his own article, the Belgian priest and astronomer Georges Lemaître was the first to publish research deriving what is now known as Hubble's law. According to the Canadian astronomer Sidney van den Bergh, "the 1927 discovery of the expansion of the universe by Lemaître was published in French in a low-impact journal. In the 1931 high-impact English translation of this article, a critical equation was changed by omitting reference to what is now known as the Hubble constant." It is now known that the alterations in the translated paper were carried out by Lemaître himself.
Shape of the universe
Before the advent of modern cosmology, there was considerable talk about the size and shape of the universe. In 1920, the Shapley–Curtis debate took place between Harlow Shapley and Heber D. Curtis over this issue. Shapley argued for a small universe the size of the Milky Way galaxy, and Curtis argued that the universe was much larger. The issue was resolved in the coming decade with Hubble's improved observations.
Cepheid variable stars outside the Milky Way
Edwin Hubble did most of his professional astronomical observing work at Mount Wilson Observatory, home to the world's most powerful telescope at the time. His observations of Cepheid variable stars in "spiral nebulae" enabled him to calculate the distances to these objects. Surprisingly, these objects were discovered to be at distances which placed them well outside the Milky Way. They continued to be called nebulae, and it was only gradually that the term galaxies replaced it.
Combining redshifts with distance measurements
The velocities and distances that appear in Hubble's law are not directly measured. The velocities are inferred from the redshift z of the radiation and the distance is inferred from the brightness. Hubble sought to correlate the brightness with the parameter z.
Combining his measurements of galaxy distances with Vesto Slipher and Milton Humason's measurements of the redshifts associated with the galaxies, Hubble discovered a rough proportionality between redshift of an object and its distance. Though there was considerable scatter (now known to be caused by peculiar velocities—the 'Hubble flow' is used to refer to the region of space far enough out that the recession velocity is larger than local peculiar velocities), Hubble was able to plot a trend line from the 46 galaxies he studied and obtain a value for the Hubble constant of 500 (km/s)/Mpc (much higher than the currently accepted value due to errors in his distance calibrations; see cosmic distance ladder for details).
Hubble diagram
Hubble's law can be easily depicted in a "Hubble diagram" in which the velocity (assumed approximately proportional to the redshift) of an object is plotted with respect to its distance from the observer. A straight line of positive slope on this diagram is the visual depiction of Hubble's law.
Cosmological constant abandoned
After Hubble's discovery was published, Albert Einstein abandoned his work on the cosmological constant, a term he had inserted into his equations of general relativity to coerce them into producing the static solution he previously considered the correct state of the universe. The Einstein equations in their simplest form model either an expanding or contracting universe, so Einstein introduced the constant to counter expansion or contraction and lead to a static and flat universe. After Hubble's discovery that the universe was, in fact, expanding, Einstein called his faulty assumption that the universe is static his "greatest mistake". On its own, general relativity could predict the expansion of the universe, which (through observations such as the bending of light by large masses, or the precession of the orbit of Mercury) could be experimentally observed and compared to his theoretical calculations using particular solutions of the equations he had originally formulated.
In 1931, Einstein went to Mount Wilson Observatory to thank Hubble for providing the observational basis for modern cosmology.
The cosmological constant has regained attention in recent decades as a hypothetical explanation for dark energy.
Interpretation
The discovery of the linear relationship between redshift and distance, coupled with a supposed linear relation between recessional velocity and redshift, yields a straightforward mathematical expression for Hubble's law as follows:

v = H_0 D

where
v is the recessional velocity, typically expressed in km/s.
H_0 is Hubble's constant and corresponds to the value of H(t) (often termed the Hubble parameter, which is a value that is time dependent and which can be expressed in terms of the scale factor) in the Friedmann equations taken at the time of observation denoted by the subscript 0. This value is the same throughout the universe for a given comoving time.
D is the proper distance (which can change over time, unlike the comoving distance, which is constant) from the galaxy to the observer, measured in megaparsecs (Mpc), in the 3-space defined by the given cosmological time. (Recession velocity is just v = dD/dt.)
Hubble's law is considered a fundamental relation between recessional velocity and distance. However, the relation between recessional velocity and redshift depends on the cosmological model adopted and is not established except for small redshifts.
For distances D larger than the radius of the Hubble sphere r_{HS}, objects recede at a rate faster than the speed of light (see Uses of the proper distance for a discussion of the significance of this):

r_{HS} = c / H_0 .
Since the Hubble "constant" is a constant only in space, not in time, the radius of the Hubble sphere may increase or decrease over various time intervals. The subscript '0' indicates the value of the Hubble constant today. Current evidence suggests that the expansion of the universe is accelerating (see Accelerating universe), meaning that for any given galaxy, the recession velocity is increasing over time as the galaxy moves to greater and greater distances; however, the Hubble parameter is actually thought to be decreasing with time, meaning that if we were to look at some distance and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones.
Redshift velocity and recessional velocity
Redshift can be measured by determining the wavelength of a known transition, such as hydrogen α-lines for distant quasars, and finding the fractional shift compared to a stationary reference. Thus, redshift is a quantity unambiguously acquired from observation. Care is required, however, in translating these to recessional velocities: for small redshift values, a linear relation of redshift to recessional velocity applies, but more generally the redshift-distance law is nonlinear, meaning the co-relation must be derived specifically for each given model and epoch.
Redshift velocity
The redshift is often described as a redshift velocity, which is the recessional velocity that would produce the same redshift if it were caused by a linear Doppler effect (which, however, is not the case, as the velocities involved are too large to use a non-relativistic formula for Doppler shift). This redshift velocity can easily exceed the speed of light. In other words, to determine the redshift velocity v_{rs}, the relation

v_{rs} \equiv c z

is used. That is, there is no fundamental difference between redshift velocity and redshift: they are rigidly proportional, and not related by any theoretical reasoning. The motivation behind the "redshift velocity" terminology is that the redshift velocity agrees with the velocity from a low-velocity simplification of the so-called Fizeau–Doppler formula

z = \lambda_o / \lambda_e - 1 \approx v / c .

Here, \lambda_o and \lambda_e are the observed and emitted wavelengths respectively. The "redshift velocity" v_{rs} is not so simply related to real velocity at larger velocities, however, and this terminology leads to confusion if interpreted as a real velocity. Next, the connection between redshift or redshift velocity and recessional velocity is discussed.
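To illustrate the point numerically, the sketch below compares the redshift velocity cz with the velocity obtained from the special-relativistic Doppler formula for a few arbitrary example redshifts; the comparison formula is itself only one possible model and is used here purely to show how the two quantities diverge at large z.

```python
# Compare the 'redshift velocity' cz with the special-relativistic Doppler velocity.
c = 299_792.458  # speed of light, km/s

def redshift_velocity(z):
    """Redshift velocity: rigidly proportional to z by definition (v_rs = c z)."""
    return c * z

def relativistic_doppler_velocity(z):
    """Velocity from the relativistic Doppler relation (1+z)^2 = (1+v/c)/(1-v/c)."""
    return c * ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)

for z in (0.01, 0.1, 1.0, 3.0):
    print(f"z = {z:4}:  cz = {redshift_velocity(z):>10.0f} km/s, "
          f"Doppler = {relativistic_doppler_velocity(z):>10.0f} km/s")
```

At z = 0.01 the two values nearly coincide, while at z = 3 the redshift velocity exceeds the speed of light even though the Doppler velocity does not, which is exactly why the terminology causes confusion.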
Recessional velocity
Suppose R(t) is called the scale factor of the universe, and increases as the universe expands in a manner that depends upon the cosmological model selected. Its meaning is that all measured proper distances D(t) between co-moving points increase proportionally to R. (The co-moving points are not moving relative to their local environments.) In other words:

D(t) / D(t_0) = R(t) / R(t_0) ,

where t_0 is some reference time. If light is emitted from a galaxy at time t_e and received by us at t_0, it is redshifted due to the expansion of the universe, and this redshift z is simply:

z = R(t_0) / R(t_e) - 1 .

Suppose a galaxy is at distance D, and this distance changes with time at a rate dD/dt. We call this rate of recession the "recession velocity" v_r:

v_r = dD/dt = (\dot{R} / R) D .

We now define the Hubble constant as

H \equiv \dot{R} / R ,

and discover the Hubble law:

v_r = H D .
From this perspective, Hubble's law is a fundamental relation between (i) the recessional velocity associated with the expansion of the universe and (ii) the distance to an object; the connection between redshift and distance is a crutch used to connect Hubble's law with observations. This law can be related to redshift z approximately by making a Taylor series expansion:

z = R(t_0) / R(t_e) - 1 \approx H_0 (t_0 - t_e) .

If the distance is not too large, all other complications of the model become small corrections, and the time interval is simply the distance divided by the speed of light:

t_0 - t_e \approx D / c ,

or

c z \approx H_0 D = v_r .

According to this approach, the relation c z = v_r is an approximation valid at low redshifts, to be replaced by a relation at large redshifts that is model-dependent. See velocity-redshift figure.
Observability of parameters
Strictly speaking, neither v nor D in the formula are directly observable, because they are properties of a galaxy now, whereas our observations refer to the galaxy in the past, at the time that the light we currently see left it.
For relatively nearby galaxies (redshift z much less than one), v and D will not have changed much, and v can be estimated using the formula v = z c, where c is the speed of light. This gives the empirical relation found by Hubble.
For distant galaxies, v (or D) cannot be calculated from z without specifying a detailed model for how H changes with time. The redshift is not even directly related to the recession velocity at the time the light set out, but it does have a simple interpretation: (1 + z) is the factor by which the universe has expanded while the photon was traveling towards the observer.
Expansion velocity vs. peculiar velocity
In using Hubble's law to determine distances, only the velocity due to the expansion of the universe can be used. Since gravitationally interacting galaxies move relative to each other independent of the expansion of the universe, these relative velocities, called peculiar velocities, need to be accounted for in the application of Hubble's law. Such peculiar velocities give rise to redshift-space distortions.
Time-dependence of Hubble parameter
The parameter H is commonly called the "Hubble constant", but that is a misnomer since it is constant in space only at a fixed time; it varies with time in nearly all cosmological models, and all observations of far distant objects are also observations into the distant past, when the "constant" had a different value. "Hubble parameter" is a more correct term, with H_0 denoting the present-day value.
Another common source of confusion is that the accelerating universe does not imply that the Hubble parameter is actually increasing with time; since H(t) \equiv \dot{a}(t) / a(t), in most accelerating models a increases relatively faster than \dot{a}, so H decreases with time. (The recession velocity of one chosen galaxy does increase, but different galaxies passing a sphere of fixed radius cross the sphere more slowly at later times.)
On defining the dimensionless deceleration parameter

q \equiv - \ddot{a} a / \dot{a}^2 ,

it follows that

dH/dt = -H^2 (1 + q) .

From this it is seen that the Hubble parameter is decreasing with time, unless q < -1; the latter can only occur if the universe contains phantom energy, regarded as theoretically somewhat improbable.
However, in the standard Lambda cold dark matter model (Lambda-CDM or ΛCDM model), q will tend to −1 from above in the distant future as the cosmological constant becomes increasingly dominant over matter; this implies that H will approach from above a constant value of ≈ 57 (km/s)/Mpc, and the scale factor of the universe will then grow exponentially in time.
Idealized Hubble's law
The mathematical derivation of an idealized Hubble's law for a uniformly expanding universe is a fairly elementary theorem of geometry in 3-dimensional Cartesian/Newtonian coordinate space, which, considered as a metric space, is entirely homogeneous and isotropic (properties do not vary with location or direction). Simply stated, the theorem is this: any two points which are moving away from some origin, each along straight lines and with speed proportional to distance from that origin, will be moving away from each other with a speed proportional to their distance apart.
In fact, this applies to non-Cartesian spaces as long as they are locally homogeneous and isotropic, specifically to the negatively and positively curved spaces frequently considered as cosmological models (see shape of the universe).
An observation stemming from this theorem is that seeing objects recede from us on Earth is not an indication that Earth is near to a center from which the expansion is occurring, but rather that every observer in an expanding universe will see objects receding from them.
Ultimate fate and age of the universe
The value of the Hubble parameter changes over time, either increasing or decreasing depending on the value of the so-called deceleration parameter q, which is defined by
q ≡ −(d^2a/dt^2)·a / (da/dt)^2.
In a universe with a deceleration parameter equal to zero, it follows that H = 1/t, where t is the time since the Big Bang. A non-zero, time-dependent value of q simply requires integration of the Friedmann equations backwards from the present time to the time when the comoving horizon size was zero.
It was long thought that q was positive, indicating that the expansion is slowing down due to gravitational attraction. This would imply an age of the universe less than 1/H (which is about 14 billion years). For instance, a value for q of 1/2 (once favoured by most theorists) would give the age of the universe as 2/(3H). The discovery in 1998 that q is apparently negative means that the universe could actually be older than 1/H. However, estimates of the age of the universe are very close to 1/H.
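The age estimates above follow from simple unit conversion. A minimal Python sketch, assuming an illustrative H0 of 70 (km/s)/Mpc, compares the Hubble time 1/H0 with the matter-dominated (q = 1/2) age 2/(3 H0).

H0_kms_mpc = 70.0            # assumed Hubble constant, (km/s)/Mpc
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16       # seconds in a billion years

H0_per_s = H0_kms_mpc / KM_PER_MPC             # ≈ 2.27e-18 per second
hubble_time_gyr = 1.0 / H0_per_s / SEC_PER_GYR
matter_age_gyr = (2.0 / 3.0) * hubble_time_gyr

print(round(hubble_time_gyr, 1))   # ≈ 14.0 Gyr
print(round(matter_age_gyr, 1))    # ≈ 9.3 Gyr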
Olbers' paradox
The expansion of space summarized by the Big Bang interpretation of Hubble's law is relevant to the old conundrum known as Olbers' paradox: If the universe were infinite in size, static, and filled with a uniform distribution of stars, then every line of sight in the sky would end on a star, and the sky would be as bright as the surface of a star. However, the night sky is largely dark.
Since the 17th century, astronomers and other thinkers have proposed many possible ways to resolve this paradox, but the currently accepted resolution depends in part on the Big Bang theory, and in part on the Hubble expansion: in a universe that existed for a finite amount of time, only the light of a finite number of stars has had enough time to reach us, and the paradox is resolved. Additionally, in an expanding universe, distant objects recede from us, which causes the light emanated from them to be redshifted and diminished in brightness by the time we see it.
Dimensionless Hubble constant
Instead of working with Hubble's constant, a common practice is to introduce the dimensionless Hubble constant, usually denoted by h and commonly referred to as "little h", then to write Hubble's constant H0 as h × 100 (km/s)/Mpc, all the relative uncertainty of the true value of H0 being then relegated to h. The dimensionless Hubble constant is often used when giving distances that are calculated from redshift z using the formula d ≈ (c / H0) × z. Since H0 is not precisely known, the distance is expressed as:
c z / H0 ≈ (2998 × z) h^-1 Mpc.
In other words, one calculates 2998 × z and one gives the units as h^-1 Mpc or Mpc h^-1.
Occasionally a reference value other than 100 may be chosen, in which case a subscript is presented after h to avoid confusion; e.g. h70 denotes H0 = 70 h70 (km/s)/Mpc, which implies h = 0.70 h70.
This should not be confused with the dimensionless value of Hubble's constant, usually expressed in terms of Planck units, obtained by multiplying by (from definitions of parsec and ), for example for , a Planck unit version of is obtained.
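A minimal Python sketch of the "little h" bookkeeping described above; the redshift and the value of h used at the end are illustrative assumptions.

def distance_h_inv_mpc(z):
    """Low-redshift distance in units of h^-1 Mpc (d ≈ 2998 z h^-1 Mpc)."""
    return 2998.0 * z        # c divided by 100 (km/s)/Mpc is ≈ 2998 Mpc

d = distance_h_inv_mpc(0.02)     # ≈ 60 h^-1 Mpc
h = 0.7                          # assumed value of little h
print(d, d / h)                  # ≈ 60 h^-1 Mpc, or ≈ 85.7 Mpc if h = 0.7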
Acceleration of the expansion
A value for the deceleration parameter q measured from standard candle observations of Type Ia supernovae, which was determined in 1998 to be negative, surprised many astronomers with the implication that the expansion of the universe is currently "accelerating" (although the Hubble factor is still decreasing with time, as mentioned above in the Interpretation section; see the articles on dark energy and the ΛCDM model).
Derivation of the Hubble parameter
Start with the Friedmann equation:
H^2 ≡ ((da/dt)/a)^2 = (8 π G / 3) ρ − k c^2 / a^2 + Λ c^2 / 3,
where H is the Hubble parameter, a is the scale factor, G is the gravitational constant, ρ is the total mass–energy density, k is the normalised spatial curvature of the universe (equal to −1, 0, or 1), and Λ is the cosmological constant.
Matter-dominated universe (with a cosmological constant)
If the universe is matter-dominated, then the mass density of the universe ρ can be taken to include just matter, so
ρ = ρ0 / a^3,
where ρ0 is the density of matter today. From the Friedmann equation and thermodynamic principles we know for non-relativistic particles that their mass density decreases proportional to the inverse volume of the universe, so the equation above must be true. We can also define (see density parameter for Ω_m)
ρ_c = 3 H0^2 / (8 π G)  and  Ω_m ≡ ρ0 / ρ_c;
therefore:
ρ = Ω_m ρ_c / a^3.
Also, by definition,
Ω_k ≡ −k c^2 / (a0 H0)^2  and  Ω_Λ ≡ Λ c^2 / (3 H0^2),
where the subscript 0 refers to the values today, and a0 = 1. Substituting all of this into the Friedmann equation at the start of this section and replacing a with a = 1/(1+z) gives
H^2(z) = H0^2 [ Ω_m (1+z)^3 + Ω_k (1+z)^2 + Ω_Λ ].
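The resulting expression can be evaluated directly. Below is a minimal Python sketch, with the density parameters chosen as illustrative assumptions rather than measured values.

import math

def hubble_parameter(z, H0=70.0, omega_m=0.3, omega_k=0.0, omega_lambda=0.7):
    """H(z) in (km/s)/Mpc for a matter + curvature + Lambda universe."""
    return H0 * math.sqrt(omega_m * (1 + z) ** 3
                          + omega_k * (1 + z) ** 2
                          + omega_lambda)

print(hubble_parameter(0.0))   # 70.0 today
print(hubble_parameter(1.0))   # ≈ 123 at z = 1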
Matter- and dark energy-dominated universe
If the universe is both matter-dominated and dark energy-dominated, then the above equation for the Hubble parameter will also be a function of the equation of state of dark energy. So now:
where ρ_de is the mass density of the dark energy. By definition, an equation of state in cosmology is P = w ρ c^2, and if this is substituted into the fluid equation, which describes how the mass density of the universe evolves with time, then
dρ_de / ρ_de = −3 (1 + w) da / a.
If w is constant, then integrating gives
ln ρ_de = −3 (1 + w) ln a + constant,
implying:
ρ_de = ρ_de,0 a^(−3 (1 + w)).
Therefore, for dark energy with a constant equation of state w, ρ_de(a) = ρ_de,0 a^(−3 (1 + w)). If this is substituted into the Friedmann equation in a similar way as before, but this time setting Ω_k = 0, which assumes a spatially flat universe, then (see shape of the universe)
H^2(a) = H0^2 [ Ω_m a^(−3) + Ω_de a^(−3 (1 + w)) ].
If the dark energy derives from a cosmological constant such as that introduced by Einstein, it can be shown that w = −1. The equation then reduces to the last equation in the matter-dominated universe section, with Ω_k set to zero. In that case the initial dark energy density ρ_de,0 is given by
ρ_de,0 = Λ c^2 / (8 π G).
If dark energy does not have a constant equation-of-state w, then
ρ_de(a) = ρ_de,0 exp[ −3 ∫ from 1 to a of (1 + w(a')) da'/a' ],
and to solve this, w(a) must be parametrized, for example with w(a) = w0 + wa (1 − a), giving
H^2(a) = H0^2 [ Ω_m a^(−3) + Ω_de a^(−3 (1 + w0 + wa)) e^(−3 wa (1 − a)) ].
Other ingredients have been formulated.
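As a concrete sketch of the dark-energy case, the following Python code assumes the w(a) = w0 + wa (1 − a) parametrization used as the example above and a spatially flat universe; all numerical parameter values are illustrative assumptions.

import math

def dark_energy_density_factor(a, w0=-1.0, wa=0.0):
    """rho_de(a) / rho_de(today) for w(a) = w0 + wa * (1 - a)."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

def hubble_parameter(a, H0=70.0, omega_m=0.3, w0=-1.0, wa=0.0):
    omega_de = 1.0 - omega_m                     # spatial flatness assumed
    return H0 * math.sqrt(omega_m * a ** -3.0
                          + omega_de * dark_energy_density_factor(a, w0, wa))

print(hubble_parameter(1.0))                    # 70.0 today (a = 1)
print(hubble_parameter(0.5, w0=-0.9, wa=0.1))   # an earlier epoch with varying w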
Units derived from the Hubble constant
Hubble time
The Hubble constant has units of inverse time; the Hubble time t_H is simply defined as the inverse of the Hubble constant, i.e.
t_H ≡ 1 / H0 ≈ 14.4 billion years.
This is slightly different from the age of the universe, which is approximately 13.8 billion years. The Hubble time is the age it would have had if the expansion had been linear, and it is different from the real age of the universe because the expansion is not linear; it depends on the energy content of the universe (see ).
We currently appear to be approaching a period where the expansion of the universe is exponential due to the increasing dominance of vacuum energy. In this regime, the Hubble parameter is constant, and the universe grows by a factor of e each Hubble time:
a(t) ∝ exp(H t).
Likewise, the generally accepted value of 2.27 Es−1 means that (at the current rate) the universe would grow by a factor of e^2.27 ≈ 9.7 in one exasecond.
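These unit conversions are easy to check numerically. A minimal Python sketch, with H0 set to an illustrative 70 (km/s)/Mpc:

import math

H0_kms_mpc = 70.0            # assumed value, (km/s)/Mpc
KM_PER_MPC = 3.0857e19

H0_per_s = H0_kms_mpc / KM_PER_MPC                  # ≈ 2.27e-18 per second, i.e. ≈ 2.27 per exasecond
hubble_time_s = 1.0 / H0_per_s                      # ≈ 4.4e17 s
growth_per_exasecond = math.exp(H0_per_s * 1e18)    # ≈ e^2.27 ≈ 9.7 at a constant rate

print(H0_per_s, hubble_time_s, growth_per_exasecond)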
Over long periods of time, the dynamics are complicated by general relativity, dark energy, inflation, etc., as explained above.
Hubble length
The Hubble length or Hubble distance is a unit of distance in cosmology, defined as c / H0, the speed of light multiplied by the Hubble time. It is equivalent to 4,420 million parsecs or 14.4 billion light years. (The numerical value of the Hubble length in light years is, by definition, equal to that of the Hubble time in years.) Substituting D = c / H0 into the equation for Hubble's law, v = H0 D, reveals that the Hubble distance specifies the distance from our location to those galaxies which are receding from us at the speed of light.
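A minimal Python sketch of the Hubble length, with the value of H0 again an illustrative assumption:

C_KM_S = 299_792.458
H0 = 67.8                    # assumed Hubble constant, (km/s)/Mpc

hubble_length_mpc = C_KM_S / H0                      # ≈ 4.42e3 Mpc
hubble_length_gly = hubble_length_mpc * 3.2616e-3    # 1 Mpc ≈ 3.2616e6 light years
print(round(hubble_length_mpc), round(hubble_length_gly, 1))   # ≈ 4422 Mpc, ≈ 14.4 billion light years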
Hubble volume
The Hubble volume is sometimes defined as a volume of the universe with a comoving size of c / H0. The exact definition varies: it is sometimes defined as the volume of a sphere with radius c / H0, or alternatively, a cube of side c / H0. Some cosmologists even use the term Hubble volume to refer to the volume of the observable universe, although this has a radius approximately three times larger.
Determining the Hubble constant
The value of the Hubble constant, , cannot be measured directly, but is derived from a combination of astronomical observations and model-dependent assumptions. Increasingly accurate observations and new models over many decades have led to two sets of highly precise values which do not agree. This difference is known as the "Hubble tension".
Earlier measurements
For the original 1929 estimate of the constant now bearing his name, Hubble used observations of Cepheid variable stars as "standard candles" to measure distance. The result he obtained was approximately 500 (km/s)/Mpc, much larger than the value astronomers currently calculate. Later observations by astronomer Walter Baade led him to realize that there were distinct "populations" for stars (Population I and Population II) in a galaxy. The same observations led him to discover that there are two types of Cepheid variable stars with different luminosities. Using this discovery, he recalculated the Hubble constant and the size of the known universe, doubling the previous calculation made by Hubble in 1929. He announced this finding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome.
For most of the second half of the 20th century, the value of H0 was estimated to be between 50 and 100 (km/s)/Mpc.
The value of the Hubble constant was the topic of a long and rather bitter controversy between Gérard de Vaucouleurs, who claimed the value was around 100, and Allan Sandage, who claimed the value was near 50. In one demonstration of vitriol shared between the parties, when Sandage and Gustav Andreas Tammann (Sandage's research colleague) formally acknowledged the shortcomings of confirming the systematic error of their method in 1975, Vaucouleurs responded "It is unfortunate that this sober warning was so soon forgotten and ignored by most astronomers and textbook writers". In 1996, a debate moderated by John Bahcall between Sidney van den Bergh and Gustav Tammann was held in similar fashion to the earlier Shapley–Curtis debate over these two competing values.
This previously wide variance in estimates was partially resolved with the introduction of the ΛCDM model of the universe in the late 1990s. Incorporating the ΛCDM model, observations of high-redshift clusters at X-ray and microwave wavelengths using the Sunyaev–Zel'dovich effect, measurements of anisotropies in the cosmic microwave background radiation, and optical surveys all gave a value of around 50–70 km/s/Mpc for the constant.
Precision cosmology and the Hubble tension
By the late 1990s, advances in ideas and technology allowed higher precision measurements.
However, two major categories of methods, each with high precision, fail to agree.
"Late universe" measurements using calibrated distance ladder techniques have converged on a value of approximately . Since 2000, "early universe" techniques based on measurements of the cosmic microwave background have become available, and these agree on a value near . (This accounts for the change in the expansion rate since the early universe, so is comparable to the first number.) Initially, this discrepancy was within the estimated measurement uncertainties and thus no cause for concern. However, as techniques have improved, the estimated measurement uncertainties have shrunk, but the discrepancies have not, to the point that the disagreement is now highly statistically significant. This discrepancy is called the Hubble tension.
An example of an "early" measurement, the Planck mission published in 2018 gives a value for H0 of 67.4 ± 0.5 (km/s)/Mpc. In the "late" camp is the higher value of about 73 (km/s)/Mpc determined by the Hubble Space Telescope
and confirmed by the James Webb Space Telescope in 2023.
The "early" and "late" measurements disagree at the >5 σ level, beyond a plausible level of chance. The resolution to this disagreement is an ongoing area of active research.
Reducing systematic errors
Since 2013 much effort has gone in to new measurements to check for possible systematic errors and improved reproducibility.
The "late universe" or distance ladder measurements typically employ three stages or "rungs". In the first rung distances to Cepheids are determined while trying to reduce luminosity errors from dust and correlations of metallicity with luminosity. The second rung uses
Type Ia supernovae, explosions of almost constant mass that therefore emit very similar amounts of light; the primary source of systematic error is the limited number of such objects that can be observed. The third rung of the distance ladder measures the redshift of supernovae to extract the Hubble flow and, from that, the constant. At this rung, corrections due to motion other than expansion are applied.
As an example of the kind of work needed to reduce systematic errors, photometry of extra-galactic Cepheids observed with the James Webb Space Telescope confirms the findings from the HST. The higher resolution avoided confusion from crowding of stars in the field of view but came to the same value for H0.
The "early universe" or inverse distance ladder measures the observable consequences of spherical sound waves on primordial plasma density. These pressure waves – called baryon acoustic oscillations (BAO) – cease once the universe cooled enough for electrons to stay bound to nuclei, ending the plasma and allowing the photons trapped by interaction with the plasma to escape. The pressure waves then become very small perturbations in density imprinted on the cosmic microwave background and on the large scale density of galaxies across the sky. Detailed structure in high precision measurements of the CMB can matched to physics models of the oscillations. These models depend upon the Hubble constant such that a match reveals a value for the constant. Similarly, the BAO affects the statistical distribution of matter, observed as distant galaxies across the sky. These two independent kinds of measurements produce similar values for the constant from the current models, giving strong evidence that systematic errors in the measurements themselves do not affect the result.
Other kinds of measurements
In addition to measurements based on calibrated distance ladder techniques or measurements of the CMB, other methods have been used to determine the Hubble constant.
In October 2018, scientists used information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817) as an independent way of determining the Hubble constant.
In July 2019, astronomers reported that a new method to determine the Hubble constant, and resolve the discrepancy of earlier methods, had been proposed based on the mergers of pairs of neutron stars, following the detection of the neutron star merger of GW170817, an event known as a dark siren. Their measurement of the Hubble constant is (km/s)/Mpc.
Also in July 2019, astronomers reported another new method, using data from the Hubble Space Telescope and based on distances to red giant stars calculated using the tip of the red-giant branch (TRGB) distance indicator. Their measurement of the Hubble constant is .
In February 2020, the Megamaser Cosmology Project published independent results based on astrophysical masers visible at cosmological distances and which do not require multi-step calibration. That work confirmed the distance ladder results and differed from the early-universe results at a statistical significance level of 95%.
In July 2020, measurements of the cosmic background radiation by the Atacama Cosmology Telescope predict that the Universe should be expanding more slowly than is currently observed.
In July 2023, an independent estimate of the Hubble constant was derived from a kilonova, the optical afterglow of a neutron star merger. Due to the blackbody nature of early kilonova spectra, such systems provide strongly constraining estimators of cosmic distance. Using the kilonova AT2017gfo (the aftermath of, once again, GW170817), these measurements indicate a local estimate of the Hubble constant of .
Possible resolutions of the Hubble tension
The cause of the Hubble tension is unknown, and there are many possible proposed solutions. The most conservative is that there is an unknown systematic error affecting either early-universe or late-universe observations. Although intuitively appealing, this explanation requires multiple unrelated effects regardless of whether early-universe or late-universe observations are incorrect, and there are no obvious candidates. Furthermore, any such systematic error would need to affect multiple different instruments, since both the early-universe and late-universe observations come from several different telescopes.
Alternatively, it could be that the observations are correct, but some unaccounted-for effect is causing the discrepancy. If the cosmological principle fails (see ), then the existing interpretations of the Hubble constant and the Hubble tension have to be revised, which might resolve the Hubble tension. In particular, we would need to be located within a very large void, up to about a redshift of 0.5, for such an explanation to conflate with supernovae and baryon acoustic oscillation observations. Yet another possibility is that the uncertainties in the measurements could have been underestimated, but given the internal agreements this is neither likely, nor resolves the overall tension.
Finally, another possibility is new physics beyond the currently accepted cosmological model of the universe, the ΛCDM model. There are very many theories in this category, for example, replacing general relativity with a modified theory of gravity could potentially resolve the tension, as can a dark energy component in the early universe, dark energy with a time-varying equation of state, or dark matter that decays into dark radiation. A problem faced by all these theories is that both early-universe and late-universe measurements rely on multiple independent lines of physics, and it is difficult to modify any of those lines while preserving their successes elsewhere. The scale of the challenge can be seen from how some authors have argued that new early-universe physics alone is not sufficient; while other authors argue that new late-universe physics alone is also not sufficient. Nonetheless, astronomers are trying, with interest in the Hubble tension growing strongly since the mid 2010s.
Measurements of the Hubble constant
See also
S8 tension- a similar problem from another parameter of the ΛCDM model.
Notes
References
Bibliography
External links
NASA's WMAP Big Bang Expansion: the Hubble Constant
The Hubble Key Project
The Hubble Diagram Project
Coming to terms with different Hubble Constants (Forbes; 3 May 2019)
Law
Eponymous laws of physics
Large-scale structure of the cosmos
Physical cosmology
Equations of astronomy | Hubble's law | [
"Physics",
"Astronomy"
] | 7,331 | [
"Astronomical sub-disciplines",
"Concepts in astronomy",
"Theoretical physics",
"Astrophysics",
"Equations of astronomy",
"Physical cosmology"
] |
42,986 | https://en.wikipedia.org/wiki/Alternating%20current | Alternating current (AC) is an electric current that periodically reverses direction and changes its magnitude continuously with time, in contrast to direct current (DC), which flows only in one direction. Alternating current is the form in which electric power is delivered to businesses and residences, and it is the form of electrical energy that consumers typically use when they plug kitchen appliances, televisions, fans and electric lamps into a wall socket. The abbreviations AC and DC are often used to mean simply alternating and direct, respectively, as when they modify current or voltage.
The usual waveform of alternating current in most electric power circuits is a sine wave, whose positive half-period corresponds with positive direction of the current and vice versa (the full period is called a cycle). "Alternating current" most commonly refers to power distribution, but a wide range of other applications are technically alternating current although it is less common to describe them by that term. In many applications, like guitar amplifiers, different waveforms are used, such as triangular waves or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. These types of alternating current carry information such as sound (audio) or images (video) sometimes carried by modulation of an AC carrier signal. These currents typically alternate at higher frequencies than those used in power transmission.
Transmission, distribution, and domestic power supply
Electrical energy is distributed as alternating current because AC voltage may be increased or decreased with a transformer. This allows the power to be transmitted through power lines efficiently at high voltage, which reduces the energy lost as heat due to resistance of the wire, and transformed to a lower, safer voltage for use. Use of a higher voltage leads to significantly more efficient transmission of power. The power losses (P_w) in the wire are a product of the square of the current (I) and the resistance (R) of the wire, described by the formula:
P_w = I^2 R.
This means that when transmitting a fixed power on a given wire, if the current is halved (i.e. the voltage is doubled), the power loss due to the wire's resistance will be reduced to one quarter.
The power transmitted is equal to the product of the current and the voltage (assuming no phase difference); that is,
P_t = I V.
Consequently, power transmitted at a higher voltage requires less loss-producing current than for the same power at a lower voltage. Power is often transmitted at hundreds of kilovolts on pylons, and transformed down to tens of kilovolts to be transmitted on lower level lines, and finally transformed down to 100 V – 240 V for domestic use.
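The scaling described above is easy to illustrate numerically. Below is a minimal Python sketch; the transmitted power, voltages, and line resistance are illustrative assumptions, not utility data.

P = 1.0e6        # transmitted power, W
R = 5.0          # line resistance, ohms

for V in (10_000.0, 20_000.0):       # doubling the transmission voltage
    I = P / V                        # current needed to deliver the same power
    loss = I ** 2 * R                # line loss P_w = I^2 R
    print(f"V = {V / 1e3:.0f} kV: I = {I:.0f} A, line loss = {loss / 1e3:.1f} kW")

# Doubling the voltage halves the current and cuts the I^2 R loss to one quarter.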
High voltages have disadvantages, such as the increased insulation required, and generally increased difficulty in their safe handling. In a power plant, energy is generated at a convenient voltage for the design of a generator, and then stepped up to a high voltage for transmission. Near the loads, the transmission voltage is stepped down to the voltages used by equipment. Consumer voltages vary somewhat depending on the country and size of load, but generally motors and lighting are built to use up to a few hundred volts between phases. The voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate. Standard power utilization voltages and percentage tolerance vary in the different mains power systems found in the world.
High-voltage direct-current (HVDC) electric power transmission systems have become more viable as technology has provided efficient means of changing the voltage of DC power. Transmission with high voltage direct current was not feasible in the early days of electric power transmission, as there was then no economically viable way to step the voltage of DC down for end user applications such as lighting incandescent bulbs.
Three-phase electrical generation is very common. The simplest way is to use three separate coils in the generator stator, physically offset by an angle of 120° (one-third of a complete 360° phase) to each other. Three current waveforms are produced that are equal in magnitude and 120° out of phase to each other. If coils are added opposite to these (60° spacing), they generate the same phases with reverse polarity and so can be simply wired together. In practice, higher pole orders are commonly used. For example, a 12-pole machine would have 36 coils (10° spacing). The advantage is that lower rotational speeds can be used to generate the same frequency. For example, a 2-pole machine running at 3600 rpm and a 12-pole machine running at 600 rpm produce the same frequency; the lower speed is preferable for larger machines. If the load on a three-phase system is balanced equally among the phases, no current flows through the neutral point. Even in the worst-case unbalanced (linear) load, the neutral current will not exceed the highest of the phase currents. Non-linear loads (e.g. the switch-mode power supplies widely used) may require an oversized neutral bus and neutral conductor in the upstream distribution panel to handle harmonics. Harmonics can cause neutral conductor current levels to exceed that of one or all phase conductors.
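The claim that a balanced linear load produces no neutral current can be checked with a small phasor calculation. A minimal Python sketch follows; the phase-current magnitudes are illustrative assumptions.

import cmath, math

def phase_currents(magnitudes):
    """Phasors for three phase currents spaced 120 degrees apart."""
    return [m * cmath.exp(1j * math.radians(-120 * k))
            for k, m in enumerate(magnitudes)]

balanced = phase_currents([10.0, 10.0, 10.0])
unbalanced = phase_currents([10.0, 8.0, 6.0])

print(round(abs(sum(balanced)), 3))     # ≈ 0 A: no neutral current
print(round(abs(sum(unbalanced)), 2))   # ≈ 3.46 A, below the largest phase current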
For three-phase at utilization voltages a four-wire system is often used. When stepping down three-phase, a transformer with a Delta (3-wire) primary and a Star (4-wire, center-earthed) secondary is often used so there is no need for a neutral on the supply side. For smaller customers (just how small varies by country and age of the installation) only a single phase and neutral, or two phases and neutral, are taken to the property. For larger installations, all three phases and neutral are taken to the main distribution panel. From the three-phase main panel, both single and three-phase circuits may lead off. Three-wire single-phase systems, with a single center-tapped transformer giving two live conductors, is a common distribution scheme for residential and small commercial buildings in North America. This arrangement is sometimes incorrectly referred to as two phase. A similar method is used for a different reason on construction sites in the UK. Small power tools and lighting are supposed to be supplied by a local center-tapped transformer with a voltage of 55 V between each power conductor and earth. This significantly reduces the risk of electric shock in the event that one of the live conductors becomes exposed through an equipment fault whilst still allowing a reasonable voltage of 110 V between the two conductors for running the tools.
An additional wire, called the bond (or earth) wire, is often connected between non-current-carrying metal enclosures and earth ground. This conductor provides protection from electric shock due to accidental contact of circuit conductors with the metal chassis of portable appliances and tools. Bonding all non-current-carrying metal parts into one complete system ensures there is always a low electrical impedance path to ground sufficient to carry any fault current for as long as it takes for the system to clear the fault. This low impedance path allows the maximum amount of fault current, causing the overcurrent protection device (breakers, fuses) to trip or burn out as quickly as possible, bringing the electrical system to a safe state. All bond wires are bonded to ground at the main service panel, as is the neutral/identified conductor if present.
AC power supply frequencies
The frequency of the electrical system varies by country and sometimes within a country; most electric power is generated at either 50 or 60 Hertz. Some countries have a mixture of 50 Hz and 60 Hz supplies, notably electricity power transmission in Japan.
Low frequency
A low frequency eases the design of electric motors, particularly for hoisting, crushing and rolling applications, and commutator-type traction motors for applications such as railways. However, low frequency also causes noticeable flicker in arc lamps and incandescent light bulbs. The use of lower frequencies also provided the advantage of lower transmission losses, which are proportional to frequency.
The original Niagara Falls generators were built to produce 25 Hz power, as a compromise between low frequency for traction and heavy induction motors, while still allowing incandescent lighting to operate (although with noticeable flicker). Most of the 25 Hz residential and commercial customers for Niagara Falls power were converted to 60 Hz by the late 1950s, although some 25 Hz industrial customers still existed as of the start of the 21st century. 16.7 Hz power (formerly 16 2/3 Hz) is still used in some European rail systems, such as in Austria, Germany, Norway, Sweden and Switzerland.
High frequency
Off-shore, military, textile industry, marine, aircraft, and spacecraft applications sometimes use 400 Hz, for benefits of reduced weight of apparatus or higher motor speeds. Computer mainframe systems were often powered by 400 Hz or 415 Hz for benefits of ripple reduction while using smaller internal AC to DC conversion units.
Effects at high frequencies
A direct current flows uniformly throughout the cross-section of a homogeneous electrically conducting wire. An alternating current of any frequency is forced away from the wire's center, toward its outer surface. This is because an alternating current (which is the result of the acceleration of electric charge) creates electromagnetic waves (a phenomenon known as electromagnetic radiation). Electric conductors are not conducive to electromagnetic waves (a perfect electric conductor prohibits all electromagnetic waves within its boundary), so a wire that is made of a non-perfect conductor (a conductor with finite, rather than infinite, electrical conductivity) pushes the alternating current, along with their associated electromagnetic fields, away from the wire's center. The phenomenon of alternating current being pushed away from the center of the conductor is called skin effect, and a direct current does not exhibit this effect, since a direct current does not create electromagnetic waves.
At very high frequencies, the current no longer flows in the wire, but effectively flows on the surface of the wire, within a thickness of a few skin depths. The skin depth is the thickness at which the current density is reduced by 63%. Even at relatively low frequencies used for power transmission (50 Hz – 60 Hz), non-uniform distribution of current still occurs in sufficiently thick conductors. For example, the skin depth of a copper conductor is approximately 8.57 mm at 60 Hz, so high-current conductors are usually hollow to reduce their mass and cost. This tendency of alternating current to flow predominantly in the periphery of conductors reduces the effective cross-section of the conductor. This increases the effective AC resistance of the conductor since resistance is inversely proportional to the cross-sectional area. A conductor's AC resistance is higher than its DC resistance, causing a higher energy loss due to ohmic heating (also called I2R loss).
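The quoted skin depth can be reproduced from the standard formula δ = sqrt(2ρ / (ωμ)). Below is a minimal Python sketch using commonly tabulated constants for copper (assumed values).

import math

rho = 1.72e-8                 # resistivity of copper, ohm·m (assumed)
mu = 4 * math.pi * 1e-7       # permeability of copper, approximately mu_0, H/m
f = 60.0                      # frequency, Hz

omega = 2 * math.pi * f
delta = math.sqrt(2 * rho / (omega * mu))
print(f"{delta * 1000:.2f} mm")   # ≈ 8.5 mm, close to the ≈ 8.57 mm quoted above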
Techniques for reducing AC resistance
For low to medium frequencies, conductors can be divided into stranded wires, each insulated from the others, with the relative positions of individual strands specially arranged within the conductor bundle. Wire constructed using this technique is called Litz wire. This measure helps to partially mitigate skin effect by forcing more equal current throughout the total cross section of the stranded conductors. Litz wire is used for making high-Q inductors, reducing losses in flexible conductors carrying very high currents at lower frequencies, and in the windings of devices carrying higher radio frequency current (up to hundreds of kilohertz), such as switch-mode power supplies and radio frequency transformers.
Techniques for reducing radiation loss
As written above, an alternating current is made of electric charge under periodic acceleration, which causes radiation of electromagnetic waves. Energy that is radiated is lost. Depending on the frequency, different techniques are used to minimize the loss due to radiation.
Twisted pairs
At frequencies up to about 1 GHz, pairs of wires are twisted together in a cable, forming a twisted pair. This reduces losses from electromagnetic radiation and inductive coupling. A twisted pair must be used with a balanced signaling system so that the two wires carry equal but opposite currents. Each wire in a twisted pair radiates a signal, but it is effectively canceled by radiation from the other wire, resulting in almost no radiation loss.
Coaxial cables
Coaxial cables are commonly used at audio frequencies and above for convenience. A coaxial cable has a conductive wire inside a conductive tube, separated by a dielectric layer. The current flowing on the surface of the inner conductor is equal and opposite to the current flowing on the inner surface of the outer tube. The electromagnetic field is thus completely contained within the tube, and (ideally) no energy is lost to radiation or coupling outside the tube. Coaxial cables have acceptably small losses for frequencies up to about 5 GHz. For microwave frequencies greater than 5 GHz, the losses (due mainly to the dielectric separating the inner and outer tubes being a non-ideal insulator) become too large, making waveguides a more efficient medium for transmitting energy. Coaxial cables often use a perforated dielectric layer to separate the inner and outer conductors in order to minimize the power dissipated by the dielectric.
Waveguides
Waveguides are similar to coaxial cables, as both consist of tubes, with the biggest difference being that waveguides have no inner conductor. Waveguides can have any arbitrary cross section, but rectangular cross sections are the most common. Because waveguides do not have an inner conductor to carry a return current, waveguides cannot deliver energy by means of an electric current, but rather by means of a guided electromagnetic field. Although surface currents do flow on the inner walls of the waveguides, those surface currents do not carry power. Power is carried by the guided electromagnetic fields. The surface currents are set up by the guided electromagnetic fields and have the effect of keeping the fields inside the waveguide and preventing leakage of the fields to the space outside the waveguide. Waveguides have dimensions comparable to the wavelength of the alternating current to be transmitted, so they are feasible only at microwave frequencies. In addition to this mechanical feasibility, electrical resistance of the non-ideal metals forming the walls of the waveguide causes dissipation of power (surface currents flowing on lossy conductors dissipate power). At higher frequencies, the power lost to this dissipation becomes unacceptably large.
Fiber optics
At frequencies greater than 200 GHz, waveguide dimensions become impractically small, and the ohmic losses in the waveguide walls become large. Instead, fiber optics, which are a form of dielectric waveguides, can be used. For such frequencies, the concepts of voltages and currents are no longer used.
Formulation
Alternating currents are accompanied (or caused) by alternating voltages. An AC voltage v can be described mathematically as a function of time by the following equation:
v(t) = V_peak × sin(ωt),
where
V_peak is the peak voltage (unit: volt),
ω is the angular frequency (unit: radians per second). The angular frequency is related to the physical frequency, f (unit: hertz), which represents the number of cycles per second, by the equation ω = 2πf.
t is the time (unit: second).
The peak-to-peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Since the maximum value of sin(ωt) is +1 and the minimum value is −1, an AC voltage swings between +V_peak and −V_peak. The peak-to-peak voltage, usually written as V_pp or V_P−P, is therefore 2 V_peak.
Root mean square voltage
Below an AC waveform (with no DC component) is assumed.
The RMS voltage is the square root of the mean over one cycle of the square of the instantaneous voltage. For a sinusoidal voltage v(t) = V_peak sin(ωt), this gives V_rms = V_peak / √2.
Power
The relationship between voltage and the power delivered is:
p(t) = v(t)^2 / R,
where R represents a load resistance.
Rather than using instantaneous power, p(t), it is more practical to use a time-averaged power (where the averaging is performed over any integer number of cycles). Therefore, AC voltage is often expressed as a root mean square (RMS) value, written as V_rms, because
P (time-averaged) = V_rms^2 / R.
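The sinusoidal RMS result mentioned above can be verified numerically. A minimal Python sketch, with the peak voltage chosen as an illustrative assumption:

import math

V_peak = 325.0           # illustrative peak voltage, V
N = 100_000              # samples over one cycle

samples = [V_peak * math.sin(2 * math.pi * k / N) for k in range(N)]
v_rms = math.sqrt(sum(v * v for v in samples) / N)

print(round(v_rms, 1))                     # ≈ 229.8 V
print(round(V_peak / math.sqrt(2), 1))     # ≈ 229.8 V, i.e. V_peak / sqrt(2)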
Power oscillation
Because the instantaneous power p(t) is proportional to the square of the instantaneous voltage, the AC power waveform for a sinusoidal voltage into a resistive load is a full-wave rectified sine, and its fundamental frequency is double that of the voltage.
Examples of alternating current
To illustrate these concepts, consider a 230 V AC mains supply used in many countries around the world. It is so called because its root mean square value is 230 V. This means that the time-averaged power delivered is equivalent to the power delivered by a DC voltage of 230 V. To determine the peak voltage (amplitude), we can rearrange the above equation to:
V_peak = √2 × V_rms.
For 230 V AC, the peak voltage is therefore 230 V × √2, which is about 325 V. During the course of one cycle the voltage rises from zero to +325 V, falls through zero to −325 V, and returns to zero. The instantaneous power delivered to a resistive load completes two such cycles in the same interval: it rises from zero to its peak value (twice the time-averaged power) and falls back to zero twice per voltage cycle.
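A minimal Python sketch of this example, assuming (purely for illustration) a resistive load sized to draw 1 kW on average from the 230 V supply and a 50 Hz supply frequency, shows the voltage and the twice-per-cycle power oscillation.

import math

V_rms = 230.0
V_peak = math.sqrt(2) * V_rms          # ≈ 325 V
R = V_rms ** 2 / 1000.0                # load resistance for 1 kW average, ≈ 52.9 ohms
f = 50.0                               # assumed supply frequency, Hz

for t in (0.0, 0.005, 0.010, 0.015):   # quarter-cycle steps of the voltage
    v = V_peak * math.sin(2 * math.pi * f * t)
    p = v ** 2 / R
    print(f"t = {t * 1000:4.1f} ms: v = {v:7.1f} V, p = {p:7.1f} W")

# The instantaneous power peaks at about 2 kW, twice the 1 kW average, twice per voltage cycle.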
Information transmission
Alternating current is used to transmit information, as in the cases of telephone and cable television. Information signals are carried over a wide range of AC frequencies. POTS telephone signals have a frequency of about 3 kHz, close to the baseband audio frequency. Cable television and other cable-transmitted information currents may alternate at frequencies of tens to thousands of megahertz. These frequencies are similar to the electromagnetic wave frequencies often used to transmit the same types of information over the air.
History
The first alternator to produce alternating current was an electric generator based on Michael Faraday's principles constructed by the French instrument maker Hippolyte Pixii in 1832. Pixii later added a commutator to his device to produce the (then) more commonly used direct current. The earliest recorded practical application of alternating current is by Guillaume Duchenne, inventor and developer of electrotherapy. In 1855, he announced that AC was superior to direct current for electrotherapeutic triggering of muscle contractions. Alternating current technology was developed further by the Hungarian Ganz Works company (1870s), and in the 1880s: Sebastian Ziani de Ferranti, Lucien Gaulard, and Galileo Ferraris.
In 1876, Russian engineer Pavel Yablochkov invented a lighting system where sets of induction coils were installed along a high-voltage AC line. Instead of changing voltage, the primary windings transferred power to the secondary windings which were connected to one or several electric candles (arc lamps) of his own design, used to keep the failure of one lamp from disabling the entire circuit. In 1878, the Ganz factory, Budapest, Hungary, began manufacturing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment.
Transformers
The development of the alternating current transformer to change voltage from low to high level and back, allowed generation and consumption at low voltages and transmission, over great distances, at high voltage, with savings in the cost of conductors and energy losses. A bipolar open-core power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. An AC system using their equipment, powering arc and incandescent lights, was installed along five railway stations of the Metropolitan Railway in London, and a single-phase multiple-user AC distribution system was exhibited in Turin in 1884. These early induction coils with open magnetic circuits were inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil. The direct current systems did not have these drawbacks, giving them significant advantages over early AC systems.
In the UK, Sebastian de Ferranti, who had been developing AC generators and transformers in London since 1882, redesigned the AC system at the Grosvenor Gallery power station in 1886 for the London Electric Supply Corporation (LESCo) including alternators of his own design and open core transformer designs with serial connections for utilization loads - similar to Gaulard and Gibbs. In 1890, he designed their power station at Deptford and converted the Grosvenor Gallery station across the Thames into an electrical substation, showing the way to integrate older plants into a universal AC supply system.
In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three engineers associated with the Ganz Works of Budapest, determined that open-core devices were impractical, as they were incapable of reliably regulating voltage. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments; In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around a ring core of iron wires or else surrounded by a core of iron wires. In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see toroidal cores). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs. The Ganz factory in 1884 shipped the world's first five high-efficiency AC transformers. This first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form.
The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 140 to 2000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces.
The other essential milestone was the introduction of 'voltage source, voltage intensive' (VSVI) systems' by the invention of constant voltage generators in 1885. In early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores. Ottó Bláthy also invented the first AC electricity meter.
Adoption
The AC power system was developed and adopted rapidly after 1886. In March of that year, Westinghouse engineer William Stanley, designing a system based on the Gaulard and Gibbs transformer, demonstrated a lighting system in Great Barrington: A Siemens generator's voltage of 500 volts was converted into 3000 volts, and then the voltage was stepped down to 500 volts by six Westinghouse transformers. With this setup, the Westinghouse company successfully powered thirty 100-volt incandescent bulbs in twenty shops along the main street of Great Barrington. By the fall of that year Ganz engineers installed a ZBD transformer power system with AC generators in Rome.
Based on Stanley's success, the new Westinghouse Electric went on to develop alternating current (AC) electric infrastructure throughout the United States. The spread of Westinghouse and other AC systems triggered a push back in late 1887 by Thomas Edison (a proponent of direct current), who attempted to discredit alternating current as too dangerous in a public campaign called the "war of the currents".
In 1888, alternating current systems gained further viability with the introduction of a functional AC motor, something these systems had lacked up till then. The design, an induction motor, was independently invented by Galileo Ferraris and Nikola Tesla (with Tesla's design being licensed by Westinghouse in the US). This design was independently further developed into the modern practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown in Germany on one side, and Jonas Wenström in Sweden on the other, though Brown favored the two-phase system.
The Ames Hydroelectric Generating Plant, constructed in 1890, was among the first hydroelectric alternating current power plants. A long-distance transmission of single-phase electricity from a hydroelectric generating plant in Oregon at Willamette Falls sent power fourteen miles downriver to downtown Portland for street lighting in 1890. In 1891, another transmission system was installed in Telluride Colorado. The first three-phase system was established in 1891 in Frankfurt, Germany. The Tivoli–Rome transmission was completed in 1892. The San Antonio Canyon Generator was the third commercial single-phase hydroelectric AC power plant in the United States to provide long-distance electricity. It was completed on December 31, 1892, by Almarian William Decker to provide power to the city of Pomona, California, which was 14 miles away. Meanwhile, the possibility of transferring electrical power from a waterfall at a distance was explored at the Grängesberg mine in Sweden. A fall at Hällsjön, Smedjebackens kommun, where a small iron work had been located, was selected. In 1893, a three-phase system was used to transfer 400 horsepower a distance of , becoming the first commercial application. In 1893, Westinghouse built an alternating current system for the Chicago World Exposition. In 1893, Decker designed the first American commercial three-phase power plant using alternating current—the hydroelectric Mill Creek No. 1 Hydroelectric Plant near Redlands, California. Decker's design incorporated 10 kV three-phase transmission and established the standards for the complete system of generation, transmission and motors used in USA today. The original Niagara Falls Adams Power Plant with three two-phase generators was put into operation in August 1895, but was connected to the remote transmission system only in 1896. The Jaruga Hydroelectric Power Plant in Croatia was set in operation two days later, on 28 August 1895. Its generator (42 Hz, 240 kW) was made and installed by the Hungarian company Ganz, while the transmission line from the power plant to the City of Šibenik was long, and the municipal distribution grid 3000 V/110 V included six transforming stations.
Alternating current circuit theory developed rapidly in the latter part of the 19th and early 20th century. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, Oliver Heaviside, and many others. Calculations in unbalanced three-phase systems were simplified by the symmetrical components methods discussed by Charles LeGeyt Fortescue in 1918.
See also
AC power
Electrical wiring
Heavy-duty power plugs
Hertz
Leading and lagging current
Mains electricity by country
AC power plugs and sockets
Utility frequency
War of the currents
AC/DC receiver design
References
Further reading
Willam A. Meyers, History and Reflections on the Way Things Were: Mill Creek Power Plant – Making History with AC, IEEE Power Engineering Review, February 1997, pp. 22–24
External links
"AC/DC: What's the Difference?". Edison's Miracle of Light, American Experience. (PBS)
"AC/DC: Inside the AC Generator ". Edison's Miracle of Light, American Experience. (PBS)
Professor Mark Csele's tour of the 25 Hz Rankine generating station
Blalock, Thomas J., "The Frequency Changer Era: Interconnecting Systems of Varying Cycles". The history of various frequencies and interconversion schemes in the US at the beginning of the 20th century
AC Power History and Timeline
Electrical engineering
Electric current
Electric power
AC power | Alternating current | [
"Physics",
"Engineering"
] | 5,720 | [
"Physical quantities",
"Electrical engineering",
"Power (physics)",
"Electric power",
"Electric current",
"Wikipedia categories named after physical quantities"
] |
43,001 | https://en.wikipedia.org/wiki/Guar%20gum | Guar gum, also called guaran, is a galactomannan polysaccharide extracted from guar beans that has thickening and stabilizing properties useful in food, feed, and industrial applications. The guar seeds are mechanically dehusked, hydrated, milled and screened according to application. It is typically produced as a free-flowing, off-white powder.
Production and trade
The guar bean is principally grown in India, Pakistan, the United States, Australia and Africa. India is the largest producer, accounting for nearly 80% of world production. In India, Rajasthan, Gujarat, and Haryana are the main producing regions. The US has produced 4,600 to 14,000 tonnes of guar over the last 5 years. Texas acreage since 1999 has fluctuated from about 7,000 to 50,000 acres. The world production for guar gum and its derivatives is about 1.0 million tonnes. Non-food guar gum accounts for about 40% of the total demand.
Properties
Chemical composition
Chemically, guar gum is an exo-polysaccharide composed of the sugars galactose and mannose. The backbone is a linear chain of β 1,4-linked mannose residues to which galactose residues are 1,6-linked at every second mannose, forming short side-branches. Guar gum has the ability to withstand temperatures of 80 °C (176 °F) for five minutes.
Solubility and viscosity
Guar gum is more soluble than locust bean gum due to its extra galactose branch points. Unlike locust bean gum, it is not self-gelling. Either borax or calcium can cross-link guar gum, causing it to gel. In water, it is nonionic and hydrocolloidal. It is not affected by ionic strength or pH, but will degrade at extreme pH and temperature (e.g., pH 3 at 50 °C). It remains stable in solution over pH range 5–7. Strong acids cause hydrolysis, and loss of viscosity and alkalis in strong concentration also tend to reduce viscosity. It is insoluble in most hydrocarbon solvents. The viscosity attained is dependent on time, temperature, concentration, pH, rate of agitation and particle size of the powdered gum used. The lower the temperature, the lower the rate at which viscosity increases, and the lower the final viscosity. Above 80°C, the final viscosity is slightly reduced. Finer guar powders swell more rapidly than larger particle size coarse powdered gum.
Guar gum shows a clear low shear plateau on the flow curve and is strongly shear-thinning. The rheology of guar gum is typical for a random coil polymer. It does not show the very high low shear plateau viscosities seen with more rigid polymer chains such as xanthan gum. It is very thixotropic above 1% concentration, but below 0.3%, the thixotropy is slight. Guar gum shows viscosity synergy with xanthan gum. Guar gum and micellar casein mixtures can be slightly thixotropic if a biphase system forms.
Thickening
One use of guar gum is as a thickening agent in foods and medicines for humans and animals. Because it is gluten-free, it is used as an additive to replace wheat flour in baked goods. It has been shown to reduce serum cholesterol and lower blood glucose levels.
Guar gum is also economical because it has almost eight times the water-thickening ability of other agents (e.g., cornstarch) and only a small quantity is needed for producing sufficient viscosity. Because less is required, costs are reduced.
In addition to guar gum's effects on viscosity, its high ability to flow, or deform, gives it favorable rheological properties. It forms breakable gels when cross-linked with boron. It is used in various multi-phase formulations for hydraulic fracturing, in some as an emulsifier because it helps prevent oil droplets from coalescing, and in others as a stabilizer to help prevent solid particles from settling and/or separating.
Fracking entails the pumping of sand-laden fluids into an oil or natural gas reservoir at high pressure and flow rate. This cracks the reservoir rock and then props the cracks open. Water alone is too thin to be effective at carrying proppant sand, so guar gum is one of the ingredients added to thicken the slurry mixture and improve its ability to carry proppant. Several properties are important:
1. Thixotropy: the fluid should be thixotropic, meaning it should gel within a few hours.
2. Gelling and de-gelling: the desired viscosity changes over the course of a few hours. When the fracking slurry is mixed, it needs to be thin enough to make it easier to pump. Then, as it flows down the pipe, the fluid needs to gel to support the proppant and flush it deep into the fractures. After that process, the gel has to break down so that it is possible to recover the fracking fluid but leave the proppant behind. This requires a chemical process which produces, then breaks, the gel cross-linking at a predictable rate. Guar plus boron plus proprietary chemicals can accomplish both of these goals at once.
Ice crystal growth
Guar gum retards ice crystal growth by slowing mass transfer across the solid/liquid interface. It shows good stability during freeze-thaw cycles, and is thus used in egg-free ice cream. Guar gum has synergistic effects with locust bean gum and sodium alginate, and may be synergistic with xanthan gum: together with xanthan gum, it produces a thicker product (0.5% guar gum / 0.35% xanthan gum), which is used in applications such as soups, which do not require clear results.
Guar gum is a hydrocolloid, hence is useful for making thick pastes without forming a gel, and for keeping water bound in a sauce or emulsion. Guar gum can be used for thickening cold and hot liquids, to make hot gels, light foams and as an emulsion stabilizer. Guar gum can be used for cottage cheeses, curds, yoghurt, sauces, soups and frozen desserts. Guar gum is also a good source of fiber, with 80% soluble dietary fiber on a dry weight basis.
Grading
Guar gum is analysed for
Guar gum powder standards are:
HS-Code- 130 232 30
CAS No.- 9000-30-0
EEC No.- E 412
BT No.- 1302 3290
EINECS No. - 232-536-8
Imco Code- Harmless
Manufacturing process
Depending upon the requirement of end product, various processing techniques are used. The commercial production of guar gum normally uses roasting, differential attrition, sieving, and polishing. Food-grade guar gum is manufactured in stages. Guar split selection is important in this process. The split is screened to clean it and then soaked to pre-hydrate it in a double-cone mixer. The prehydrating stage is very important because it determines the rate of hydration of the final product. The soaked splits, which have reasonably high moisture content, are passed through a flaker. The flaked guar split is ground and then dried. The powder is screened through rotary screens to deliver the required particle size. Oversize particles are either recycled to main ultra fine or reground in a separate regrind plant, according to the viscosity requirement.
This stage helps to reduce the load at the grinder. The soaked splits are difficult to grind. Direct grinding of those generates more heat in the grinder, which is not desired in the process, as it reduces the hydration of the product. Through the heating, grinding, and polishing process, the husk is separated from the endosperm halves and the refined guar split is obtained. Through the further grinding process, the refined guar split is then treated and converted into powder. The split manufacturing process yields husk and germ called “guar meal”, widely sold in the international market as cattle feed. It is high in protein and contains oil and albuminoids, about 50% in germ and about 25% in husks. The quality of the food-grade guar gum powder is defined from its particle size, rate of hydration, and microbial content.
Manufacturers define different grades and qualities of guar gum by the particle size, the viscosity generated with a given concentration, and the rate at which that viscosity develops. Coarse-mesh guar gums will typically, but not always, develop viscosity more slowly. They may achieve a reasonably high viscosity, but will take longer to achieve. On the other hand, they will disperse better than fine-mesh, all conditions being equal. A finer mesh, such as a 200 mesh, requires more effort to dissolve. Modified forms of guar gum are available commercially, including enzyme-modified, cationic and hydropropyl guar.
Industrial applications
Textile industry – sizing, finishing and printing
Paper industry – improved sheet formation, folding and denser surface for printing
Explosives industry – as waterproofing agent mixed with ammonium nitrate, nitroglycerin, etc.
Pharmaceutical industry – as binder or as disintegrator in tablets; main ingredient in some bulk-forming laxatives
Cosmetics and toiletries industries – thickener in toothpastes, conditioner in shampoos (usually in a chemically modified version)
Hydraulic fracturing – the shale oil and gas extraction industry consumes about 90% of the guar gum produced in India and Pakistan.
Fracturing fluids normally consist of many additives that serve two main purposes, firstly to enhance fracture creation and proppant carrying capability and secondly to minimize formation damage. Viscosifiers, such as polymers and crosslinking agents, temperature stabilizers, pH control agents, and fluid loss control materials are among the additives that assist fracture creation. Formation damage is minimized by incorporating breakers, biocides, and surfactants. More appropriate gelling agents are linear polysaccharides, such as guar gum, cellulose, and their derivatives.
Guar gums are preferred as thickeners for enhanced oil recovery (EOR). Guar gum and its derivatives account for most of the gelled fracturing fluids. Guar is more water-soluble than other gums, and it is also a better emulsifier, because it has more galactose branch points. Guar gum shows high low-shear viscosity, but it is strongly shear-thinning. Being non-ionic, it is not affected by ionic strength or pH but will degrade at low pH at moderate temperature (pH 3 at 50 °C). Guar's derivatives demonstrate stability in high temperature and pH environments. Guar use allows for achieving exceptionally high viscosities, which improves the ability of the fracturing liquid to transport proppant. Guar hydrates fairly rapidly in cold water to give highly viscous pseudoplastic solutions of, generally, greater low-shear viscosity than other hydrocolloids. The colloidal solids present in guar make fluids more efficient by creating less filter cake. Proppant pack conductivity is maintained by utilizing a fluid that has excellent fluid loss control, such as the colloidal solids present in guar gum.
Guar has up to eight times the thickening power of starch. Derivatization of guar gum leads to subtle changes in properties, such as decreased hydrogen bonding, increased solubility in water-alcohol mixture, and improved electrolyte compatibility. These changes in properties result in increased use in different fields, like textile printing, explosives, and oil-water fracturing applications.
Crosslinking guar
Guar molecules have a tendency to aggregate during the hydraulic fracturing process, mainly due to intermolecular hydrogen bonding. These aggregates are detrimental to oil recovery because they clog the fractures, restricting the flow of oil. Cross-linking guar polymer chains prevents aggregation by forming metal–hydroxyl complexes. The first crosslinked guar gels were developed in the late 1960s. Several metal additives have been used for crosslinking; among them are chromium, aluminium, antimony, zirconium, and the more commonly used boron. Boron, in the form of B(OH)4, reacts with the hydroxyl groups on the polymer in a two-step process to link two polymer strands together to form bis-diol complexes.
A 1:1 1,2-diol complex and a 1:1 1,3-diol complex place the negatively charged borate ion onto the polymer chain as a pendant group. Boric acid itself apparently does not complex with the polymer, so all bound boron is negatively charged. The primary form of crosslinking may be due to ionic association between the anionic borate complex and adsorbed cations on the second polymer chain. The development of cross-linked gels was a major advance in fracturing fluid technology. Viscosity is enhanced by tying together the low molecular weight strands, effectively yielding higher molecular weight strands and a rigid structure. Cross-linking agents are added to linear polysaccharide slurries to provide higher proppant transport performance, relative to linear gels.
Lower concentrations of guar gelling agents are needed when linear guar chains are cross-linked. It has been determined that reduced guar concentrations provide better and more complete breaks in a fracture. The breakdown of cross-linked guar gel after the fracturing process restores formation permeability and allows increased production flow of petroleum products.
Mining
Hydroseeding – formation of seed-bearing "guar tack"
Medical institutions, especially nursing homes - used to thicken liquids and foods for patients with dysphagia
Fire retardant industry – as a thickener in Phos-Chek
Nanoparticles industry – to produce silver or gold nanoparticles, or develop innovative medicine delivery mechanisms for drugs in pharmaceutical industry
Slime (toy), based on guar gum crosslinked with sodium tetraborate
Food applications
The largest market for guar gum is in the food industry. In the US, differing percentages are set for its allowable concentration in various food applications. In Europe, guar gum has EU food additive code E412. Xanthan gum and guar gum are the most frequently used gums in gluten-free recipes and gluten-free products.
Applications include:
In baked goods, it increases dough yield, gives greater resiliency, and improves texture and shelf life; in pastry fillings, it prevents "weeping" (syneresis) of the water in the filling, keeping the pastry crust crisp. It is primarily used in hypoallergenic recipes that use different types of whole-grain flours. Because the consistency of these flours allows the escape of gas released by leavening, guar gum is needed to improve the thickness of these flours, allowing them to rise as a normal flour would.
In dairy products, it thickens milk, yogurt, kefir, and liquid cheese products, and helps maintain homogeneity and texture of ice creams and sherbets. It is used for similar purposes in plant milks.
For meat, it functions as a binder.
In condiments, it improves the stability and appearance of salad dressings, barbecue sauces, relishes, ketchups and others.
In canned soup, it is used as a thickener and stabilizer.
It is also used in dry soups, instant oatmeal, sweet desserts, canned fish in sauce, frozen food items, and animal feed.
The FDA has banned guar gum as a weight loss pill due to reports of the substance swelling and obstructing the intestines and esophagus.
For cattle feed, as it enhances milk production as well as the percentage of fat in the milk.
Nutritional and medicinal effects
Guar gum, as a water-soluble fiber, acts as a bulk-forming laxative. Several studies have found it decreases cholesterol levels. These decreases are thought to be a function of its high soluble fiber content.
Moreover, its low digestibility lends its use in recipes as a filler, which can help to provide satiety or slow the digestion of a meal, thus lowering the glycemic index of that meal. In the late 1980s, guar gum was used and heavily promoted in several weight-loss drugs. The US Food and Drug Administration eventually recalled these due to reports of esophageal blockage from insufficient fluid intake, after one brand alone caused at least 10 users to be hospitalized, and a death. For this reason, guar gum is no longer approved for use in over-the-counter weight loss drugs in the United States, although this restriction does not apply to supplements. Moreover, a meta-analysis found guar gum supplements were not effective in reducing body weight.
Guar-based compounds, such as hydroxypropyl guar, have been used in artificial tears to treat dry eye.
Allergies
Some studies have found an allergic sensitivity to guar gum developed in a few individuals working in an industrial environment where airborne concentrations of the substance were present. In those affected by the inhalation of the airborne particles, common adverse reactions were occupational rhinitis and asthma.
Dioxin contamination
In July 2007, the European Commission issued a health warning to its member states after high levels of dioxins were detected in guar gum, which was used as a thickener in small quantities in meat, dairy, dessert and delicatessen products. The source was traced to guar gum from India that was contaminated with pentachlorophenol (PCP), a pesticide no longer in use. PCP contains dioxins, which damage the human immune system.
References
Natural gums
Edible thickening agents
Pyrotechnic chemicals
E-number additives | Guar gum | [
"Chemistry"
] | 3,826 | [
"Pyrotechnic chemicals"
] |
43,017 | https://en.wikipedia.org/wiki/Gruinard%20Island | Gruinard Island is a small, oval-shaped Scottish island located in Gruinard Bay, about halfway between Gairloch and Ullapool, a short distance offshore from the mainland. In 1942, the island became a sacrifice zone, and was dangerous for all mammals after military experiments with the anthrax bacterium, until it was decontaminated in 1990.
Early history
The island was mentioned by Dean Munro who travelled the area in the mid-16th century. He wrote that it was Clan MacKenzie territory, "full of woods" (it is treeless today), and that it was "guid for fostering of thieves and rebellis".
Historically, the counties of Ross-shire and Cromartyshire both laid claim to Gruinard Island due to its position between Gairloch and Ullapool. In the late 1780s, the surrounding villages became substantial fishing and sheep-farming communities, and Gruinard Island came to be used for grazing sheep or as a small dock for fishing. By 1881, the population of the island was 6; it soon became uninhabited, with no later record of any established population.
In 1926, Rosalynd Maitland purchased the Eilean Darach estate which included Gruinard Island. Rosalynd Maitland bequeathed the island to her niece Molly Dunphie who was friends with Winston Churchill.
Biological warfare testing
In 1942, during World War II, a biological warfare test was carried out on Gruinard by scientists from the Biology Department of Porton Down. The test was conducted as part of Operation Vegetarian, an ultimately unused plan which called for the dispersal of linseed cakes spiked with anthrax across the German countryside. It was recognised that tests would cause long-lasting contamination of the immediate area by anthrax spores, so a remote and uninhabited island was required. Gruinard was surveyed, deemed suitable, and requisitioned from its owners by the British government. Porton Down meteorologist Sir Oliver Graham Sutton was put in charge of a fifty-man team to conduct the trial, with David Willis Wilson Henderson in charge of the germ bomb. Biology Department head Paul Fildes made frequent visits.
The anthrax strain chosen was a highly virulent type called "Vollum 14578", named after R. L. Vollum, Professor of Bacteriology at the University of Oxford, who supplied it. Eighty sheep were taken to the island and bombs filled with anthrax spores were detonated close to where selected groups were tethered. The sheep became infected with anthrax and began to die within days of exposure.
Some of the experiments were recorded on 16 mm colour movie film, which was declassified in 1997. One sequence shows the detonation of an anthrax bomb fixed at the end of a tall pole supported with guy ropes. After the bomb explodes, a brownish aerosol cloud drifts away towards the target animals. A later sequence shows anthrax-infected sheep carcasses being burned in incinerators at the end of the experiment. After the tests were completed, scientists concluded that a large release of anthrax spores would thoroughly pollute German cities, rendering them uninhabitable for decades afterwards. Those conclusions were supported by the inability to decontaminate the island after the experiment—the spores were sufficiently durable to resist any efforts at decontamination.
In 1945, when the island's owner sought its return, the Ministry of Supply recognised that the island was contaminated, and so could not be de-requisitioned until it was deemed safe. In 1946, the government agreed to acquire the island and to take responsibility for it. The owner or their heirs would be able to repurchase the island for £500 when it was declared "fit for habitation by man and beast". For many years, it was judged too hazardous and expensive to decontaminate the island sufficiently to allow public access, and Gruinard Island was quarantined indefinitely. Visits to the island were prohibited, except for periodic checks by Porton Down personnel to determine the level of contamination.
Operation Dark Harvest
In 1981 newspapers began receiving messages with the heading "Operation Dark Harvest" which demanded that the government decontaminate the island, and reported that a "team of microbiologists from two universities" had landed on the island with the aid of local people and collected soil samples.
The group threatened to leave samples of the soil "at appropriate points that will ensure the rapid loss of indifference of the government and the equally rapid education of the general public". The same day a sealed package of soil was left outside the military research facility at Porton Down; tests revealed that it contained anthrax bacilli. A few days later another sealed package of soil was left in Blackpool, where the governing Conservative Party was holding its annual conference. The soil did not contain anthrax, but officials said that the soil was similar to that found on the island.
Decontamination
Starting in 1986, a determined effort was made to decontaminate Gruinard Island: 280 tonnes of formaldehyde solution diluted in sea water were sprayed over all 485 acres (196 hectares) of the island, and the worst-contaminated topsoil around the dispersal site was removed. Run-off from the formaldehyde seeped into the ocean and slowly led to the destruction of intertidal organisms such as barnacles, crustaceans, and seaweed. By 2000, research into the recovery of intertidal organisms had been launched and is still ongoing; researchers from that survey project said in 2007 that “recolonization is ongoing, rather than complete.”
A flock of sheep was placed on the island not long after the cleanup in 1987 and remained healthy.
On 24 April 1990, after 48 years of quarantine and four years after the solution was applied, junior defence minister Michael Neubert visited the island and announced its safety by removing the warning signs. On 1 May 1990, the island was repurchased by the heirs of the original owner for the original sale price of £500. There was some confusion because members of the public did not know it was being resold solely to its original owners, and people from around the world sent letters to the British government asking to purchase the island for £500.
Wildfire
On 26 March 2022, the island was burned "from one end to the other" by a wildfire. Eyewitnesses described the scene as "apocalyptic". The cause of the wildfire has not been confirmed, but around 200 hectares were destroyed by the fire. A spokeswoman for the Gruinard estate did not explicitly state the cause of the fire, saying only that "It hasn't caused any damage. It has done good." Local speculation is that the fire was set as an act of muirburn, a Scottish term for the burning of native moorland to provide fresh growth for game and livestock.
Popular culture references
Gruinard Island is mentioned in the novels The Anthrax Mutation by Alan Scott (1971), The Enemy by Desmond Bagley (1977), Isvik by Hammond Innes (1991), Sea of Death by Richard P. Henrick (1992), The Fist of God by Frederick Forsyth (1994), Quantico by Greg Bear (2005), The Big Over Easy by Jasper Fforde (2005), Forbidden Island by Malcolm Rose (2009), And then you die by Iris Johansen (1998), The Island by R. J. Price (better-known as the poet Richard Price) (2010), And the Land Lay Still by James Robertson, The Impossible Dead by Ian Rankin (2011), White Pines by Gemma Amor (2020), and Paying the Piper by Sharon McCrumb. It also features as the principal setting for the novel El año de gracia by Cristina Fernández Cubas, in which the protagonist spends a winter shipwrecked on the island. The island is the principal location in the novel Anthrax Island by D. L. Marshall (2021).
In issues 187–188 of the comic book Hellblazer, in a story titled "Bred in the Bone", the protagonist's niece finds herself on Gruinard surrounded by flesh-eating children. The issues were released in 2003 and were written by Mike Carey and illustrated by Doug Alexander Gregory.
An episode of the British wartime TV series Foyle's War entitled "Bad Blood" involved biological testing – a reference to the Gruinard testing.
The 1970 Hawaii Five-O episode "Three Dead Cows at Makapu, Part 2" featured a scientist played by Ed Flanders who threatened to unleash a deadly virus on the island of Oahu. When being interrogated, the scientist briefly mentions Gruinard Island and how it will be uninhabitable for a century due to anthrax experiments.
Outlying Islands, a Fringe First-winning play by Scottish dramatist David Greig, is a fictionalised account of two British scientists' visit to an island in Scotland where the government plans to test anthrax inspired by the story of Gruinard.
"Smallpox Island", off the north-west coast of Scotland, appears in the 2000 AD comic strip Caballistics, Inc., although the warnings of contamination from biological weapons are a cover for a top secret, high-security prison.
The 2006 Doctor Who audio drama Night Thoughts is set on the fictional Gravonax Island, the name and history of which are inspired by those of Gruinard.
The 2013 UK TV series Utopia describes the fictional outbreak of a new form of flu. During Episode 3, Dugdale visits the proposed origin of the virus at the, now quarantined, Island of Fetlar. On arrival, personnel at the island, wearing orange overalls, carry one of numerous covered bodies past on a stretcher in a scene that is nearly identical to that seen in the original test footage from Gruinard Island. In the dramatisation however, the personnel at Fetlar are seen wearing dust masks as opposed to the gas masks seen in the Gruinard footage; likely due to budget constraints (much of Utopia was not filmed where it claims to be).
The experiments are referred to in the storyline of "Trust", the third and fourth episodes of Series 16 of the BBC series Silent Witness.
See also
List of islands of Scotland
Footnotes
References
External links
Archive colour 16 mm footage from 1942, showing the Bioweapons testing on Gruinard island
The Plan that Never Was: Churchill and the 'Anthrax Bomb' by Julian Lewis
Gruinard Island photo
More footage of the testing done on Gruinard Island
Art Project based on Gruinard weapons testing
Biological warfare facilities
Uninhabited islands of Highland (council area)
Former populated places in Scotland
United Kingdom biological weapons program | Gruinard Island | [
"Biology"
] | 2,235 | [
"Biological warfare facilities",
"Biological warfare"
] |
43,024 | https://en.wikipedia.org/wiki/Levee | A levee, dike (American English), dyke (British English; see spelling differences), embankment, floodbank, or stop bank is an elevated ridge, natural or artificial, alongside the banks of a river, often intended to protect against flooding of the area adjoining the river. It is usually earthen and often runs parallel to the course of a river in its floodplain or along low-lying coastlines.
Naturally occurring levees form on river floodplains following flooding, where sediment and alluvium is deposited and settles, forming a ridge and increasing the river channel's capacity. Alternatively, levees can be artificially constructed from fill, designed to regulate water levels. In some circumstances, artificial levees can be environmentally damaging.
Ancient civilizations in the Indus Valley, ancient Egypt, Mesopotamia and China all built levees. Today, levees can be found around the world, and failures of levees due to erosion or other causes can be major disasters, such as the catastrophic 2005 levee failures in Greater New Orleans that occurred as a result of Hurricane Katrina.
Etymology
Speakers of American English use the word levee, from the French word levée (the feminine past participle of the French verb lever, 'to raise'). It originated in New Orleans a few years after the city's founding in 1718 and was later adopted by English speakers. The name derives from the trait of the levee's ridges being raised higher than both the channel and the surrounding floodplains.
The modern word dike or dyke most likely derives from the Dutch word dijk, with the construction of dikes well attested as early as the 11th century. The Westfriese Omringdijk, completed by 1250, was formed by connecting existing older dikes. The Roman chronicler Tacitus mentions that the rebellious Batavi pierced dikes to flood their land and to protect their retreat (70 CE). The word originally indicated both the trench and the bank. It closely parallels the English verb to dig.
In Anglo-Saxon, the word dīc already existed and was pronounced as dick in northern England and as ditch in the south. Similar to Dutch, the English origins of the word lie in digging a trench and forming the upcast soil into a bank alongside it. This practice has meant that the name may be given to either the excavation or to the bank. Thus Offa's Dyke is a combined structure and Car Dyke is a trench – though it once had raised banks as well. In the English Midlands and East Anglia, and in the United States, a dike is what a ditch is in the south of England, a property-boundary marker or drainage channel. Where it carries a stream, it may be called a running dike as in Rippingale Running Dike, which leads water from the catchwater drain, Car Dyke, to the South Forty Foot Drain in Lincolnshire (TF1427). The Weir Dike is a soak dike in Bourne North Fen, near Twenty and alongside the River Glen, Lincolnshire. In the Norfolk and Suffolk Broads, a dyke may be a drainage ditch or a narrow artificial channel off a river or broad for access or mooring, some longer dykes being named, e.g., Candle Dyke.
In parts of Britain, particularly Scotland and Northern England, a dyke may be a field wall, generally made with dry stone.
Uses
The main purpose of artificial levees is to prevent flooding of the adjoining countryside and to slow natural course changes in a waterway to provide reliable shipping lanes for maritime commerce over time; they also confine the flow of the river, resulting in higher and faster water flow. Levees can be mainly found along the sea, where dunes are not strong enough, along rivers for protection against high floods, along lakes or along polders. Furthermore, levees have been built for the purpose of impoldering, or as a boundary for an inundation area. The latter can be a controlled inundation by the military or a measure to prevent inundation of a larger area surrounded by levees. Levees have also been built as field boundaries and as military defences. More on this type of levee can be found in the article on dry-stone walls.
Levees can be permanent earthworks or emergency constructions (often of sandbags) built hastily in a flood emergency.
Some of the earliest levees were constructed by the Indus Valley civilization (in Pakistan and North India from ) on which the agrarian life of the Harappan peoples depended. Levees were also constructed over 3,000 years ago in ancient Egypt, where a system of levees was built along the left bank of the River Nile for more than , stretching from modern Aswan to the Nile Delta on the shores of the Mediterranean. The Mesopotamian civilizations and ancient China also built large levee systems. Because a levee is only as strong as its weakest point, the height and standards of construction have to be consistent along its length. Some authorities have argued that this requires a strong governing authority to guide the work and may have been a catalyst for the development of systems of governance in early civilizations. However, others point to evidence of large-scale water-control earthen works such as canals and/or levees dating from before King Scorpion in Predynastic Egypt, during which governance was far less centralized.
Another example of a historical levee, which protected the growing city-state of Mēxihco-Tenōchtitlan and the neighboring city of Tlatelōlco, was constructed during the early 1400s under the supervision of Nezahualcoyotl, the tlahtoani of the altepetl Texcoco. Its function was to separate the brackish waters of Lake Texcoco (ideal for the agricultural technique of chināmitls) from the fresh potable water supplied to the settlements. However, after the Europeans destroyed Tenochtitlan, the levee was also destroyed and flooding became a major problem, which resulted in the majority of the lake being drained in the 17th century.
Levees are usually built by piling earth on a cleared, level surface. Broad at the base, they taper to a level top, where temporary embankments or sandbags can be placed. Because flood discharge intensity increases in levees on both river banks, and because silt deposits raise the level of riverbeds, planning and auxiliary measures are vital. Sections are often set back from the river to form a wider channel, and flood valley basins are divided by multiple levees to prevent a single breach from flooding a large area. A levee made from stones laid in horizontal rows with a bed of thin turf between each of them is known as a spetchel.
Artificial levees require substantial engineering. Their surface must be protected from erosion, so they are planted with vegetation such as Bermuda grass in order to bind the earth together. On the land side of high levees, a low terrace of earth known as a banquette is usually added as another anti-erosion measure. On the river side, erosion from strong waves or currents presents an even greater threat to the integrity of the levee. The effects of erosion are countered by planting suitable vegetation or installing stones, boulders, weighted matting, or concrete revetments. Separate ditches or drainage tiles are constructed to ensure that the foundation does not become waterlogged.
River flood prevention
Prominent levee systems have been built along the Mississippi River and Sacramento River in the United States, and the Po, Rhine, Meuse River, Rhône, Loire, Vistula, the delta formed by the Rhine, Maas/Meuse and Scheldt in the Netherlands and the Danube in Europe. During the Chinese Warring States period, the Dujiangyan irrigation system was built by the Qin as a water conservation and flood control project. The system's infrastructure is located on the Min River, which is the longest tributary of the Yangtze River, in Sichuan, China.
The Mississippi levee system represents one of the largest such systems found anywhere in the world. It comprises an extensive network of levees along the Mississippi, stretching from Cape Girardeau, Missouri, to the Mississippi delta. They were begun by French settlers in Louisiana in the 18th century to protect the city of New Orleans. The first Louisiana levees were modest in height and covered a limited stretch of the riverside. The U.S. Army Corps of Engineers, in conjunction with the Mississippi River Commission, extended the levee system beginning in 1882 to cover the riverbanks from Cairo, Illinois to the mouth of the Mississippi delta in Louisiana. By the mid-1980s, they had reached their present extent. The Mississippi levees also include some of the longest continuous individual levees in the world; one such levee extends southwards from Pine Bluff, Arkansas. The scope and scale of the Mississippi levees have often been compared to the Great Wall of China.
The United States Army Corps of Engineers (USACE) recommends and supports cellular confinement technology (geocells) as a best management practice. Particular attention is given to the matter of surface erosion, overtopping prevention and protection of levee crest and downstream slope. Reinforcement with geocells provides tensile force to the soil to better resist instability.
Artificial levees can lead to an elevation of the natural riverbed over time; whether this happens or not and how fast, depends on different factors, one of them being the amount and type of the bed load of a river. Alluvial rivers with intense accumulations of sediment tend to this behavior. Examples of rivers where artificial levees led to an elevation of the riverbed, even up to a point where the riverbed is higher than the adjacent ground surface behind the levees, are found for the Yellow River in China and the Mississippi in the United States.
Coastal flood prevention
Levees are very common on the marshlands bordering the Bay of Fundy in New Brunswick and Nova Scotia, Canada. The Acadians who settled the area can be credited with the original construction of many of the levees in the area, created for the purpose of farming the fertile tidal marshlands. These levees are referred to as dykes. They are constructed with hinged sluice gates that open on the falling tide to drain freshwater from the agricultural marshlands and close on the rising tide to prevent seawater from entering behind the dyke. These sluice gates are called "aboiteaux". In the Lower Mainland around the city of Vancouver, British Columbia, there are levees (known locally as dikes, and also referred to as "the sea wall") to protect low-lying land in the Fraser River delta, particularly the city of Richmond on Lulu Island. There are also dikes to protect other locations which have flooded in the past, such as the Pitt Polder, land adjacent to the Pitt River, and other tributary rivers.
Coastal flood prevention levees are also common along the inland coastline behind the Wadden Sea, an area devastated by many historic floods. Thus the peoples and governments have erected increasingly large and complex flood protection levee systems to stop the sea even during storm floods. The biggest of these are the huge levees in the Netherlands, which have gone beyond just defending against floods, as they have aggressively taken back land that is below mean sea level.
Spur dykes or groynes
These typically man-made hydraulic structures are situated to protect against erosion. They are typically placed in alluvial rivers perpendicular, or at an angle, to the bank of the channel or the revetment, and are used widely along coastlines. There are two common types of spur dyke, permeable and impermeable, depending on the materials used to construct them.
Natural examples
Natural levees commonly form around lowland rivers and creeks without human intervention. They are elongated ridges of mud and/or silt that form on the river floodplains immediately adjacent to the cut banks. Like artificial levees, they act to reduce the likelihood of floodplain inundation.
Deposition of levees is a natural consequence of the flooding of meandering rivers which carry high proportions of suspended sediment in the form of fine sands, silts, and muds. Because the carrying capacity of a river depends in part on its depth, the shallow water that spills over the flooded banks of the channel can no longer keep as much fine sediment in suspension as the main thalweg. The extra fine sediment thus settles out quickly on the parts of the floodplain nearest to the channel. Over a significant number of floods, this will eventually build up ridges in these positions, reducing the likelihood of further flooding and of further episodes of levee building.
If aggradation continues to occur in the main channel, this will make levee overtopping more likely again, and the levees can continue to build up. In some cases, this can result in the channel bed eventually rising above the surrounding floodplains, penned in only by the levees around it; an example is the Yellow River in China near the sea, where oceangoing ships appear to sail high above the plain on the elevated river.
Levees are common in any river with a high suspended sediment fraction and thus are intimately associated with meandering channels, which also are more likely to occur where a river carries large fractions of suspended sediment. For similar reasons, they are also common in tidal creeks, where tides bring in large amounts of coastal silts and muds. High spring tides will cause flooding, and result in the building up of levees.
Failures and breaches
Both natural and man-made levees can fail in a number of ways. Factors that cause levee failure include overtopping, erosion, structural failures, and levee saturation. The most frequent (and dangerous) is a levee breach. Here, a part of the levee actually breaks or is eroded away, leaving a large opening for water to flood land otherwise protected by the levee. A breach can be a sudden or gradual failure, caused either by surface erosion or by subsurface weakness in the levee. A breach can leave a fan-shaped deposit of sediment radiating away from the breach, described as a crevasse splay. In natural levees, once a breach has occurred, the gap in the levee will remain until it is again filled in by levee building processes. This increases the chances of future breaches occurring in the same location. Breaches can be the location of meander cutoffs if the river flow direction is permanently diverted through the gap.
Sometimes levees are said to fail when water overtops the crest of the levee. This will cause flooding on the floodplains, but because it does not damage the levee, it has fewer consequences for future flooding.
Among the various failure mechanisms that cause levee breaches, soil erosion is found to be one of the most important factors. Predicting soil erosion and scour generation during overtopping is important in order to design stable levees and floodwalls. There have been numerous studies investigating the erodibility of soils. Briaud et al. (2008) used the Erosion Function Apparatus (EFA) test to measure the erodibility of soils; afterwards, numerical simulations were performed on the levee using Chen 3D software to determine the velocity vectors in the overtopping water and the scour generated when the overtopping water impinges on the levee. By analyzing the results of the EFA test, an erosion chart to categorize the erodibility of soils was developed. Hughes and Nadal (2009) studied the effect of the combination of wave overtopping and storm surge overflow on erosion and scour generation in levees. The study considered hydraulic parameters and flow characteristics such as flow thickness, wave intervals, and surge level above the levee crown in analyzing scour development. Based on the laboratory tests, empirical correlations related to average overtopping discharge were derived to analyze the resistance of levees against erosion. These correlations strictly apply only to conditions similar to those of the experimental tests, but they can give a reasonable estimate when applied to other conditions.
Osouli et al. (2014) and Karimpour et al. (2015) conducted lab-scale physical modeling of levees to evaluate scour characterization of different levees due to floodwall overtopping.
Another approach applied to prevent levee failures is electrical resistivity tomography (ERT). This non-destructive geophysical method can detect in advance critical saturation areas in embankments. ERT can thus be used in monitoring of seepage phenomena in earth structures and act as an early warning system, e.g., in critical parts of levees or embankments.
Negative impacts
Large scale structures designed to modify natural processes inevitably have some drawbacks or negative impacts.
Ecological impact
Levees interrupt floodplain ecosystems that developed under conditions of seasonal flooding. In many cases, the impact is two-fold, as reduced recurrence of flooding also facilitates land-use change from forested floodplain to farms.
Increased height
In a natural watershed, floodwaters spread over a landscape and slowly return to the river. Downstream, the delivery of water from the area of flooding is spread out in time. If levees keep the floodwaters inside a narrow channel, the water is delivered downstream over a shorter time period. The same volume of water over a shorter time interval means a higher river stage (height). As more levees are built upstream, high-water events downstream become more frequent and more severe, often requiring increases in levee height.
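A toy calculation (illustrative only, with assumed numbers) makes the arithmetic explicit: the same flood volume delivered over a shorter time must pass at a higher average discharge, which in turn demands a higher stage.

```python
# Toy calculation: the same flood volume delivered in less time implies a
# higher average discharge, and hence a higher river stage. All numbers are
# assumptions for illustration, not data from any real river.

FLOOD_VOLUME_M3 = 5.0e7  # assumed total volume of the flood wave

def mean_discharge_m3s(volume_m3: float, duration_hours: float) -> float:
    """Average discharge (m^3/s) when the volume passes in the given time."""
    return volume_m3 / (duration_hours * 3600.0)

# Without levees, floodplain storage spreads the wave out over days.
natural = mean_discharge_m3s(FLOOD_VOLUME_M3, duration_hours=72.0)
# With levees, the same volume is confined to the channel and passes quickly.
leveed = mean_discharge_m3s(FLOOD_VOLUME_M3, duration_hours=24.0)

print(f"natural: {natural:.0f} m^3/s, leveed: {leveed:.0f} m^3/s")
# The confined case must carry roughly three times the discharge, which
# requires a higher stage -- one reason levee heights tend to be raised over time.
```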
Levee breaches produce high-energy flooding
During natural flooding, water spilling over banks rises slowly. When a levee fails, a wall of water held back by the levee suddenly pours out over the landscape, much like a dam break. Impacted areas far from a breach may experience flooding similar to a natural event, while damage near a breach can be catastrophic, including carving out deep holes and channels in the nearby landscape.
Prolonged flooding after levee failure
Under natural conditions, floodwaters return quickly to the river channel as water-levels drop. During a levee breach, water pours out into the floodplain and moves down-slope where it is blocked from return to the river. Flooding is prolonged over such areas, waiting for floodwater to slowly infiltrate and evaporate.
Subsidence and seawater intrusion
Natural flooding adds a layer of sediment to the floodplain. The added weight of such layers over many centuries makes the crust sink deeper into the mantle, much like a floating block of wood is pushed deeper into the water if another board is added on top. The momentum of downward movement does not immediately stop when new sediment layers stop being added, resulting in subsidence (sinking of land surface). In coastal areas, this results in land dipping below sea level, the ocean migrating inland, and salt-water intruding into freshwater aquifers.
Coastal sediment loss
Where a large river spills out into the ocean, the velocity of the water suddenly slows and its ability to transport sand and silt decreases. Sediments begin to settle out, eventually forming a delta and extending to the coastline seaward. During subsequent flood events, water spilling out of the channel will find a shorter route to the ocean and begin building a new delta. Wave action and ocean currents redistribute some of the sediment to build beaches along the coast. When levees are constructed all the way to the ocean, sediments from flooding events are cut off, the river never migrates, and elevated river velocity delivers sediment to deep water where wave action and ocean currents cannot redistribute. Instead of a natural wedge shaped delta forming, a "birds-foot delta" extends far out into the ocean. The results for surrounding land include beach depletion, subsidence, salt-water intrusion, and land loss.
See also
Lava channel
Notes
References
External links
"Well Diggers Trick", June 1951, Popular Science article on how flood control engineers were using an old method to protect flood levees along rivers from seepage undermining the levee
"Design and Construction of Levees" US Army Engineer Manual EM-1110-2-1913
The International Levee Handbook
Flood control
Fluvial landforms
Riparian zone | Levee | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,165 | [
"Flood control",
"Riparian zone",
"Hydrology",
"Environmental engineering"
] |
43,036 | https://en.wikipedia.org/wiki/Scoville%20scale | The Scoville scale is a measurement of pungency (spiciness or "heat") of chili peppers and other substances, recorded in Scoville heat units (SHU). It is based on the concentration of capsaicinoids, among which capsaicin is the predominant component.
The scale is named after its creator, American pharmacist Wilbur Scoville, whose 1912 method is known as the Scoville organoleptic test. The Scoville organoleptic test is a subjective assessment derived from the capsaicinoid sensitivity by people experienced with eating hot chilis.
An alternative method, high-performance liquid chromatography (HPLC), can be used to analytically quantify the capsaicinoid content as an indicator of pungency.
Scoville organoleptic test
In the Scoville organoleptic test, an exact weight of dried pepper is dissolved in alcohol to extract the heat components (capsaicinoids), then diluted in a solution of sugar water. Decreasing concentrations of the extracted capsaicinoids are given to a panel of five trained tasters, until a majority (at least three) can no longer detect the heat in a dilution. The heat level is based on this dilution, rated in multiples of 100 SHU.
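A minimal sketch of the bookkeeping implied by the test, assuming the rating is simply the dilution factor at which a majority of tasters stop detecting heat, expressed to the nearest multiple of 100 SHU; the threshold value used below is hypothetical.

```python
# Bookkeeping behind the organoleptic test (illustrative only): the rating is
# the dilution factor at which a majority of tasters can no longer detect
# heat, expressed in multiples of 100 SHU. The threshold below is hypothetical.

def scoville_from_dilution(threshold_dilution: float) -> int:
    """Round a threshold dilution factor to the nearest multiple of 100 SHU."""
    return int(round(threshold_dilution / 100.0)) * 100

# Hypothetical panel result: heat last detected at 1 part extract in 3,480
# parts of sugar water.
print(scoville_from_dilution(3480))  # -> 3500
```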
Another source using subjective assessment stated, "Conventional methods used in determining the level of pungency or capsaicin concentration are using a panel of tasters (Scoville organoleptic test method). ... Pepper pungency is measured in Scoville heat units (SHU). This measurement is the highest dilution of a chili pepper extract at which heat can be detected by a taste panel."
A weakness of the Scoville organoleptic test is its imprecision due to human subjectivity, depending on the taster's palate and number of mouth heat receptors, which vary widely among subjects. Another shortcoming is sensory fatigue; the palate is quickly desensitized to capsaicinoids after tasting a few samples within a short time period. Results vary widely (up to ± 50%) between laboratories.
Quantification by HPLC
Since the 1980s, spice heat has been assessed quantitatively by high-performance liquid chromatography (HPLC), which measures the concentration of heat-producing capsaicinoids, typically with capsaicin content as the main measure. As stated in one review "the most reliable, rapid, and efficient method to identify and quantify capsaicinoids is HPLC; the results of which can be converted to Scoville heat units by multiplying the parts-per-million by 16."
HPLC method gives results in American Spice Trade Association 1985 "pungency units", which are defined as one part capsaicin equivalent per million parts dried pepper mass. This "parts per million of heat" (ppmH) is found with the following calculation:
Peak areas are calculated from HPLC traces of dry samples of the substance to be tested in 1 ml of acetonitrile. The standard used to calibrate the calculation is 1 gram of capsaicin. Scoville heat units are found by multiplying the ppmH value by a factor of 15. By this definition of ppmH, spicy compounds other than the two most important capsaicinoids are ignored, despite the ability of HPLC to measure these other compounds at the same time.
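The conversion described above can be sketched as follows; because the cited sources differ on whether the multiplier is 15 or 16, it is left as a parameter, and the ppmH value is hypothetical rather than a real measurement.

```python
# Illustrative conversion of HPLC results to Scoville heat units (SHU).
# Sources differ on the multiplier (15 above, 16 in the review quoted earlier),
# so it is a parameter; the ppmH value below is hypothetical, not real data.

def ppmh_to_shu(ppmh: float, factor: float = 15.0) -> float:
    """Convert ASTA-style pungency ("parts per million of heat") to SHU."""
    return ppmh * factor

sample_ppmh = 4000.0  # hypothetical dried-pepper measurement
print(f"{ppmh_to_shu(sample_ppmh):,.0f} SHU")             # 60,000 SHU (factor 15)
print(f"{ppmh_to_shu(sample_ppmh, factor=16):,.0f} SHU")  # 64,000 SHU (factor 16)
```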
Scoville ratings
Considerations
Since Scoville ratings are defined per unit of dry mass, comparison of ratings between products having different water content can be misleading. For example, typical fresh chili peppers have a water content around 90%, whereas Tabasco sauce has a water content of 95%. For law-enforcement-grade pepper spray, values from 500,000 up to 5 million SHU have been reported, but the actual strength of the spray depends on the dilution. This problem can be overcome by stating the water content along with the Scoville value. One way to do so is the "D-value", defined as total mass divided by dry mass.
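One way to make the role of water content concrete is sketched below, assuming the quoted rating is on a dry-mass basis and dividing by the D-value (total mass / dry mass) to express it per unit of whole product; the water contents and rating are illustrative, not measured values.

```python
# Illustration of why water content matters when comparing Scoville ratings.
# Ratings are defined per unit of DRY mass; dividing by the D-value
# (total mass / dry mass) puts them on a whole-product basis.
# Both products below are hypothetical.

def shu_per_total_mass(shu_dry_basis: float, water_fraction: float) -> float:
    """Rescale a dry-mass SHU rating to the whole (wet) product.

    water_fraction is the water content of the product (0-1);
    D-value = total mass / dry mass = 1 / (1 - water_fraction).
    """
    d_value = 1.0 / (1.0 - water_fraction)
    return shu_dry_basis / d_value

print(shu_per_total_mass(30_000, 0.90))  # fresh chili, ~90% water -> 3000.0
print(shu_per_total_mass(30_000, 0.95))  # hot sauce, ~95% water  -> 1500.0
```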
Numerical results for any specimen vary depending on its cultivation conditions and the uncertainty of the laboratory methods used to assess the capsaicinoid content. Pungency values for any pepper are variable, owing to expected variation within a species, possibly by a factor of 10 or more, depending on seed lineage, climate and humidity, and soil composition supplying nutrients. The inaccuracies described in the measurement methods also contribute to the imprecision of these values.
Capsicum peppers
Capsicum chili peppers are commonly used to add pungency in cuisines worldwide. The range of pepper heat reflected by a Scoville score is from 500 or less (sweet peppers) to over 2.6 million (Pepper X) (table below; Scoville scales for individual chili peppers are in the respective linked article). Some peppers such as the Guntur chilli and Rocoto are excluded from the list due to their very wide SHU range. Others such as Dragon's Breath and Chocolate 7-pot have not been officially verified.
The class of compounds causing pungency in plants such as chili peppers is called capsaicinoids, which display a linear correlation between concentration and Scoville scale, and may vary in content during ripening. Capsaicin is the major capsaicinoid in chili peppers.
The Scoville scale may be used to express the pungency of other, unrelated TRPV1 agonists, sometimes with extrapolation for much hotter compounds. One such substance is resiniferatoxin, an alkaloid present in the sap of some species of euphorbia plants (spurges). Since it is 1,000 times as hot as capsaicin, it would have a Scoville scale rating of 16 billion. In the table below, non-capsaicinoid compounds are italicized.
See also
List of capsaicinoids
Explanatory notes
References
1912 introductions
Scoville scale
Gustatory system
Scales
Spices
Units of measurement | Scoville scale | [
"Mathematics"
] | 1,249 | [
"Quantity",
"Units of measurement"
] |
43,039 | https://en.wikipedia.org/wiki/Omphalos | An omphalos is a religious stone artefact. In Ancient Greek, the word ὀμφαλός (omphalós) means "navel". Among the Ancient Greeks, it was a widespread belief that Delphi was the center of the world. According to the myths regarding the founding of the Delphic Oracle, Zeus, in his attempt to locate the center of the Earth, launched two eagles from the two ends of the world, and the eagles, starting simultaneously and flying at equal speed, crossed their paths above the area of Delphi, and so that was the place where Zeus placed the stone.
Omphalos is also the name of the stone given to Cronus.
Similar ideas of a particular geographical point being the center of the world (or its most important place) also surface in the major religions of the modern era. The Latin term is umbilicus mundi, 'navel of the world'.
Delphi
Most accounts locate the Delphi omphalos in the adyton (sacred part of the temple) near the Pythia (oracle). The stone sculpture itself, which may be a copy, has a carving of a knotted net covering its surface and a hollow center, widening towards the base. The omphalos represents the stone which Rhea wrapped in swaddling clothes, pretending it was Zeus, in order to deceive Cronus. (Cronus was the father who swallowed his children so as to prevent them from usurping him as he had deposed his own father, Uranus.)
Omphalos stones were believed to allow direct communication with the gods. Holland (1933) suggested that the stone was hollow to allow intoxicating vapours breathed by the Oracle to channel through it. Erwin Rohde wrote that the Python at Delphi was an earth spirit, who was conquered by Apollo and buried under the Omphalos. However, understanding of the use of the omphalos is uncertain due to destruction of the site by Theodosius I and Arcadius in the 4th century CE.
Jerusalem
Judaism
The Foundation Stone at the peak of the Temple Mount is considered in traditional Jewish sources to be the place from which the creation of the world began, with several further major biblical events connected to it. Jewish tradition holds that God revealed himself to His people through the Ark of the Covenant in the Temple in Jerusalem, which rested on the Foundation Stone marking the centre of the world.
Christianity
The omphalos is an important religious symbol in classical antiquity, with a similar level of significance as the Christian cross. The latter eventually gained more prominence.
In medieval Christian tradition, the omphalos at the Church of the Holy Sepulchre, Jerusalem, represents the navel of the world (the spiritual and cosmological centre of the world).
Art
Omphalos is a public art sculpture by Dimitri Hadzi formerly located in Harvard Square, Cambridge, Massachusetts under the Arts on the Line program. The sculpture has since been deinstalled; it will be relocated to Rockport, Massachusetts.
Omfalos is a concrete and rock sculpture by the conceptual artist Lars Vilks, previously standing in the Kullaberg nature reserve, Skåne County, Sweden. As of 2001, the sculpture belongs to the collections of Moderna Museet in Stockholm, Sweden.
Literature
In literature, the word omphalos has held various meanings but usually refers to the stone at Delphi. Authors who have used the term include: Homer, Pausanias, D.H. Lawrence, James Joyce, Philip K. Dick, Jacques Derrida, Ted Chiang, Sandy Hingston and Seamus Heaney. For example, Joyce uses the term in the novel, Ulysses:
In Ted Chiang's short story "Omphalos" (2019), the protagonist is forced to question her belief about where the center of the world is located.
In “The Toome Road”, a Seamus Heaney poem from the 1979 anthology Field Work, Heaney writes about an encounter with a convoy of armoured cars in Northern Ireland, “… O charioteers, above your dormant guns,
It stands here still, stands vibrant as you pass,
The invisible, untoppable omphalos.”
Omphalos syndrome
Omphalos syndrome refers to the belief that a place of geopolitical power and currency is the most important place in the world.
See also
Axis mundi
Apollo Omphalos
Benben stone
Black Stone
Kaaba
Lapis Niger
Lia Fáil
Lingam
Name of Mexico
Stone of Scone
Umbilicus urbis Romae
Sources
References
External links
Classical oracles
Phallic symbols
Stones
Yonic symbols
Sacred rocks | Omphalos | [
"Physics"
] | 939 | [
"Stones",
"Physical objects",
"Matter"
] |
43,050 | https://en.wikipedia.org/wiki/Neo-Darwinism | Neo-Darwinism is generally used to describe any integration of Charles Darwin's theory of evolution by natural selection with Gregor Mendel's theory of genetics. It mostly refers to evolutionary theory from either 1895 (for the combinations of Darwin's and August Weismann's theories of evolution) or 1942 ("modern synthesis"), but it can mean any new Darwinian- and Mendelian-based theory, such as the current evolutionary theory.
Original use
Darwin's theory of evolution by natural selection, as published in 1859, provided a selection mechanism for evolution, but not a trait transfer mechanism. Lamarckism was still a very popular candidate for this. August Weismann and Alfred Russel Wallace rejected the Lamarckian idea of inheritance of acquired characteristics that Darwin had accepted and later expanded upon in his writings on heredity. The basis for the complete rejection of Lamarckism was Weismann's germ plasm theory. Weismann realised that the cells that produce the germ plasm, or gametes (such as sperm and eggs in animals), separate from the somatic cells that go on to make other body tissues at an early stage in development. Since he could see no obvious means of communication between the two, he asserted that the inheritance of acquired characteristics was therefore impossible; a conclusion now known as the Weismann barrier.
It is, however, usually George Romanes who is credited with the first use of the word in a scientific context. Romanes used the term to describe the combination of natural selection and Weismann's germ plasm theory that evolution occurs solely through natural selection, and not by the inheritance of acquired characteristics resulting from use or disuse, thus using the word to mean "Darwinism without Lamarckism."
Following the development, from about 1918 to 1947, of the modern synthesis of evolutionary biology, the term neo-Darwinian started to be used to refer to that contemporary evolutionary theory.
Current meaning
Biologists, however, have not limited their application of the term neo-Darwinism to the historical synthesis. For example, Ernst Mayr wrote in 1984 that:
The term neo-Darwinism for the synthetic theory [of the early 20th century] is sometimes considered wrong, because the term neo-Darwinism was coined by Romanes in 1895 as a designation of Weismann's theory.
Publications such as Encyclopædia Britannica use neo-Darwinism to refer to current-consensus evolutionary theory, not the version prevalent during the early 20th century. Similarly, Richard Dawkins and Stephen Jay Gould have used neo-Darwinism in their writings and lectures to denote the forms of evolutionary biology that were contemporary when they were writing.
See also
History of evolutionary thought
References
Evolutionary biology | Neo-Darwinism | [
"Biology"
] | 568 | [
"Evolutionary biology"
] |
43,052 | https://en.wikipedia.org/wiki/Quantum%20evolution | Quantum evolution is a component of George Gaylord Simpson's multi-tempoed theory of evolution proposed to explain the rapid emergence of higher taxonomic groups in the fossil record. According to Simpson, evolutionary rates differ from group to group and even among closely related lineages. These different rates of evolutionary change were designated by Simpson as bradytelic (slow tempo), horotelic (medium tempo), and tachytelic (rapid tempo).
Quantum evolution differed from these styles of change in that it involved a drastic shift in the adaptive zones of certain classes of animals. The word "quantum" therefore refers to an "all-or-none reaction", where transitional forms are particularly unstable, and thereby perish rapidly and completely. Although quantum evolution may happen at any taxonomic level, it plays a much larger role in "the origin taxonomic units of relatively high rank, such as families, orders, and classes."
Quantum evolution in plants
Usage of the phrase "quantum evolution" in plants was apparently first articulated by Verne Grant in 1963 (pp. 458-459). He cited an earlier 1958 paper by Harlan Lewis and Peter H. Raven, wherein Grant asserted that Lewis and Raven gave a "parallel" definition of quantum evolution as defined by Simpson. Lewis and Raven postulated that species in the Genus Clarkia had a mode of speciation that resulted ...as a consequence of a rapid reorganization of the chromosomes due to the presence, at some time, of a genotype conducive to extensive chromosome breakage. A similar mode of origin by rapid reorganization of the chromosomes is suggested for the derivation of other species of Clarkia. In all of these examples the derivative populations grow adjacent to the parental species, which they resemble closely in morphology, but from which they are reproductively isolated because of multiple structural differences in their chromosomes. The spatial relationship of each parental species and its derivative suggests that differentiation has been recent. The repeated occurrence of the same pattern of differentiation in Clarkia suggests that a rapid reorganization of chromosomes has been an important mode of evolution in the genus. This rapid reorganization of the chromosomes is comparable to the systemic mutations proposed by Goldschmidt as a mechanism of macroevolution. In Clarkia, we have not observed marked changes in physiology and pattern of development that could be described as macroevolution. Reorganization of the genomes may, however, set the stage for subsequent evolution along a very different course from that of the ancestral populations
Harlan Lewis refined this concept in a 1962 paper where he coined the term "Catastrophic Speciation" to describe this mode of speciation, since he theorized that the reductions in population size and consequent inbreeding that led to chromosomal rearrangements occurred in small populations that were subject to severe drought.
Leslie D. Gottlieb in his 2003 summary of the subject in plants stated we can define quantum speciation as the budding off of a new and very different daughter species from a semi-isolated peripheral population of the ancestral species in a cross-fertilizing organism...as compared with geographical speciation, which is a gradual and conservative process, quantum speciation is rapid and radical in its phenotypic or genotypic effects or both. Gottlieb did not believe that sympatric speciation required disruptive selection to form a reproductive isolating barrier, as defined by Grant, and in fact Gottlieb stated that requiring disruptive selection was "unnecessarily restrictive" in identifying cases of sympatric speciation. In this 2003 paper Gottlieb summarized instances of quantum evolution in the plant species Clarkia, Layia, and Stephanomeria.
Mechanisms
According to Simpson (1944), quantum evolution resulted from Sewall Wright's model of random genetic drift. Simpson believed that major evolutionary transitions would arise when small populations, that were isolated and limited from gene flow, would fixate upon unusual gene combinations. This "inadaptive phase" (caused by genetic drift) would then (by natural selection) drive a deme population from one stable adaptive peak to another on the adaptive fitness landscape. However, in his Major Features of Evolution (1953) Simpson wrote that this mechanism was still controversial:
"whether prospective adaptation as prelude to quantum evolution arises adaptively or inadaptively. It was concluded above that it usually arises adaptively . . . . The precise role of, say, genetic drift in this process thus is largely speculative at present. It may have an essential part or none. It surely is not involved in all cases of quantum evolution, but there is a strong possibility that it is often involved. If or when it is involved, it is an initiating mechanism. Drift can only rarely, and only for lower categories, have completed the transition to a new adaptive zone."
This preference for adaptive over inadaptive forces led Stephen Jay Gould to call attention to the "hardening of the Modern Synthesis", a trend in the 1950s where adaptationism took precedence over the pluralism of mechanisms common in the 1930s and 40s.
Simpson considered quantum evolution his crowning achievement, being "perhaps the most important outcome of [my] investigation, but also the most controversial and hypothetical."
See also
Environmental niche modelling
Mutationism
Punctuated equilibrium
Quantum speciation
Rapid modes of evolution
Shifting balance theory
Sympatric speciation
References
Sources
Eldredge, Niles (1995). Reinventing Darwin. New York: John Wiley & Sons. pp. 20-26.
Gould, S. J. (1994). "Tempo and mode in the macroevolutionary reconstruction of Darwinism". PNAS USA 91(15): 6764-71.
Gould S.J. (2002). The Structure of Evolutionary Theory Cambridge MA: Harvard Univ. Press. pp. 529-31.
Mayr, Ernst (1976). Evolution and the Diversity of Life. Cambridge MA: Belknap Press. p. 206.
Mayr, Ernst (1982). The Growth of Biological Thought. Cambridge MA: Belknap Press. pp. 555, 609-10.
External links
George Gaylord Simpson - Biographical sketch.
Tempo and Mode in Evolution: Genetics and Paleontology 50 Years After Simpson
Evolutionary biology
Modern synthesis (20th century)
Rate of evolution | Quantum evolution | [
"Biology"
] | 1,286 | [
"Evolutionary biology"
] |
43,086 | https://en.wikipedia.org/wiki/Project%20Mogul | Project Mogul (sometimes referred to as Operation Mogul) was a top secret project by the US Army Air Forces involving microphones flown on high-altitude balloons, whose primary purpose was long-distance detection of sound waves generated by Soviet atomic bomb tests. The project was carried out from 1947 until early 1949. It was a classified portion of an unclassified project by New York University (NYU) atmospheric researchers. The project was moderately successful, but was very expensive and was superseded by a network of seismic detectors and air sampling for fallout, which were cheaper, more reliable, and easier to deploy and operate.
Project Mogul was conceived by Maurice Ewing who had earlier researched the deep sound channel in the oceans and theorized that a similar sound channel existed in the upper atmosphere: a certain height where the air pressure and temperature result in minimal speed of sound, so that sound waves would propagate and stay in that channel due to refraction. The project involved arrays of balloons carrying disc microphones and radio transmitters to relay the signals to the ground. It was supervised by James Peoples, who was assisted by Albert P. Crary.
One of the requirements of the balloons was that they maintain a relatively constant altitude over a prolonged period of time. Thus instrumentation had to be developed to maintain such constant altitudes, such as pressure sensors controlling the release of ballast.
The early Mogul balloons consisted of large clusters of rubber meteorological balloons; however, these were quickly replaced by enormous balloons made of polyethylene plastic. These were more durable, leaked less helium, and were better at maintaining a constant altitude than the early rubber balloons. Constant-altitude control and polyethylene balloons were the two major innovations of Project Mogul.
Subsequent programs
Project Mogul was the forerunner of the Skyhook balloon program, which started in the late 1940s, as well as two other espionage programs involving balloon overflights and photographic surveillance of the Soviet Union during the 1950s, Project Moby Dick and Project Genetrix. The spy balloon overflights raised storms of protest from the Soviets. The constant-altitude balloons also were used for scientific purposes such as cosmic ray experiments. Further development of nuclear detonation detection systems was extensive for decades afterward, culminating in worldwide systems by various countries to keep eyes and ears on detecting and verifying the others' nuclear weapon developments. There would also be fixed-wing United States aerial reconnaissance of the Soviet Union during the 1950s. Overflights would end in 1960 (once an aircraft had been shot down by SAMs), and reconnaissance would for decades afterward be handled mostly by reconnaissance satellites and to some extent by aircraft, such as the A-12 OXCART and SR-71 Blackbird (photography and radar) and RC-135U and similar aircraft (SIGINT including ELINT and COMINT).
Roswell incident
In 1947, a Project Mogul balloon NYU Flight 4, launched June 4, crashed in the desert near Roswell, New Mexico. The subsequent military cover-up of the true nature of the balloon and burgeoning conspiracy theories from UFO enthusiasts led to a celebrated "UFO" incident.
Unlike a weather balloon, the Project Mogul paraphernalia was massive and contained unusual types of materials, according to research conducted by The New York Times: "...squadrons of big balloons ... It was like having an elephant in your backyard and hoping that no one would notice it. ... To the untrained eye, the reflectors looked extremely odd, a geometrical hash of lightweight sticks and sharp angles made of metal foil. .. photographs of it, taken in 1947 and published in newspapers, show bits and pieces of what are obviously collapsed balloons and radar reflectors."
Legacy
Implementation of Mogul's experimental infrasound detection of nuclear tests exist today in ground-based detectors, part of so-called Geophysical MASINT (Measurement And Signal INTelligence). In 2013, this world-wide network of sound detectors picked up the large explosion of the Chelyabinsk meteor in Russia. The strength of the sound waves was used to estimate the size of the explosion.
References
External links
Obituary of the man who launched the balloon
Balloons (aeronautics)
Military projects of the United States
Roswell incident
Soviet Union–United States relations
Projects of the United States Air Force
Cold War military history of the United States
Articles containing video clips | Project Mogul | [
"Engineering"
] | 887 | [
"Military projects of the United States",
"Military projects"
] |
43,093 | https://en.wikipedia.org/wiki/Flagellum | A flagellum (; : flagella) (Latin for 'whip' or 'scourge') is a hair-like appendage that protrudes from certain plant and animal sperm cells, from fungal spores (zoospores), and from a wide range of microorganisms to provide motility. Many protists with flagella are known as flagellates.
A microorganism may have from one to many flagella. The gram-negative bacterium Helicobacter pylori, for example, uses its flagella to propel itself through the stomach to reach the mucous lining, where it may colonise the epithelium and potentially cause gastritis and ulcers – a risk factor for stomach cancer. In some swarming bacteria, the flagellum can also function as a sensory organelle, being sensitive to wetness outside the cell.
Across the three domains of Bacteria, Archaea, and Eukaryota, the flagellum has a different structure, protein composition, and mechanism of propulsion but shares the same function of providing motility. The Latin word means "whip" to describe its lash-like swimming motion. The flagellum in archaea is called the archaellum to note its difference from the bacterial flagellum.
Eukaryotic flagella and cilia are identical in structure but have different lengths and functions. Prokaryotic fimbriae and pili are smaller and thinner appendages with different functions. Cilia are attached to the cell surface and are used for swimming or for moving fluid from one region to another.
Types
The three types of flagella are bacterial, archaeal, and eukaryotic.
The flagella in eukaryotes have dynein and microtubules that move with a bending mechanism. Bacteria and archaea do not have dynein or microtubules in their flagella, and they move using a rotary mechanism.
Other differences among these three types are:
Bacterial flagella are helical filaments, each with a rotary motor at its base which can turn clockwise or counterclockwise. They provide two of several kinds of bacterial motility.
Archaeal flagella (archaella) are superficially similar to bacterial flagella in that they also have a rotary motor, but are different in many details and considered non-homologous.
Eukaryotic flagella—those of animal, plant, and protist cells—are complex cellular projections that lash back and forth. Eukaryotic flagella and motile cilia are identical in structure, but have different lengths, waveforms, and functions. Primary cilia are immotile, and have a structurally different 9+0 axoneme rather than the 9+2 axoneme found in both flagella and motile cilia.
Bacterial flagella
Structure and composition
The bacterial flagellum is made up of protein subunits of flagellin. Its shape is a 20-nanometer-thick hollow tube. It is helical and has a sharp bend just outside the outer membrane; this "hook" allows the axis of the helix to point directly away from the cell. A shaft runs between the hook and the basal body, passing through protein rings in the cell's membrane that act as bearings. Gram-positive organisms have two of these basal body rings, one in the peptidoglycan layer and one in the plasma membrane. Gram-negative organisms have four such rings: the L ring associates with the lipopolysaccharides, the P ring associates with peptidoglycan layer, the M ring is embedded in the plasma membrane, and the S ring is directly attached to the cytoplasm. The filament ends with a capping protein.
The flagellar filament is the long, helical screw that propels the bacterium when rotated by the motor, through the hook. In most bacteria that have been studied, including the gram-negative Escherichia coli, Salmonella typhimurium, Caulobacter crescentus, and Vibrio alginolyticus, the filament is made up of 11 protofilaments approximately parallel to the filament axis. Each protofilament is a series of tandem protein chains. However, Campylobacter jejuni has seven protofilaments.
The basal body has several traits in common with some types of secretory pores, such as the hollow, rod-like "plug" in their centers extending out through the plasma membrane. The similarities between bacterial flagella and bacterial secretory system structures and proteins provide scientific evidence supporting the theory that bacterial flagella evolved from the type-three secretion system (TTSS).
The atomic structure of both bacterial flagella as well as the TTSS injectisome have been elucidated in great detail, especially with the development of cryo-electron microscopy. The best understood parts are the parts between the inner and outer membrane, that is, the scaffolding rings of the inner membrane (IM), the scaffolding pairs of the outer membrane (OM), and the rod/needle (injectisome) or rod/hook (flagellum) sections.
Motor
The bacterial flagellum is driven by a rotary engine (Mot complex) made up of protein, located at the flagellum's anchor point on the inner cell membrane. The engine is powered by proton-motive force, i.e., by the flow of protons (hydrogen ions) across the bacterial cell membrane due to a concentration gradient set up by the cell's metabolism (Vibrio species have two kinds of flagella, lateral and polar, and some are driven by a sodium ion pump rather than a proton pump). The rotor transports protons across the membrane, and is turned in the process. The rotor alone can operate at 6,000 to 100,000 rpm, but with the flagellar filament attached usually only reaches 200 to 1000 rpm. The direction of rotation can be changed by the flagellar motor switch almost instantaneously, caused by a slight change in the position of a protein, FliG, in the rotor. Torque is transferred from the MotAB complexes to the torque helix on FliG's D5 domain, and as the demand for torque or speed increases, more MotAB complexes are recruited. Because the flagellar motor has no on-off switch, the protein epsE is used as a mechanical clutch to disengage the motor from the rotor, thus stopping the flagellum and allowing the bacterium to remain in one place.
The production and rotation of a flagellum can take up to 10% of an Escherichia coli cell's energy budget and has been described as an "energy-guzzling machine". Its operation generates reactive oxygen species that elevate mutation rates.
The cylindrical shape of flagella is suited to locomotion of microscopic organisms; these organisms operate at a low Reynolds number, where the viscosity of the surrounding water is much more important than its mass or inertia.
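To give a rough, illustrative sense of this regime (assuming a cell about 2 μm long swimming at roughly 30 μm/s, and taking the density and viscosity of water; these values are assumptions for the estimate, not measurements from the text), the Reynolds number is

$\mathrm{Re} = \frac{\rho v L}{\mu} \approx \frac{(10^{3}\,\mathrm{kg\,m^{-3}})(3\times10^{-5}\,\mathrm{m\,s^{-1}})(2\times10^{-6}\,\mathrm{m})}{10^{-3}\,\mathrm{Pa\,s}} \approx 6\times10^{-5},$

many orders of magnitude below the regime in which inertial effects matter.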
The rotational speed of flagella varies in response to the intensity of the proton-motive force, thereby permitting certain forms of speed control, and also permitting some types of bacteria to attain remarkable speeds in proportion to their size; some achieve roughly 60 cell lengths per second. At such a speed, a bacterium would take about 245 days to cover 1 km; although that may seem slow, the perspective changes when the concept of scale is introduced. In comparison to macroscopic life forms, it is very fast indeed when expressed in terms of number of body lengths per second. A cheetah, for example, only achieves about 25 body lengths per second.
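As an order-of-magnitude check on the figure above (assuming, purely for illustration, a swimming speed of about 50 μm/s), the time needed to cover 1 km is

$t = \frac{10^{9}\,\mu\mathrm{m}}{50\,\mu\mathrm{m\,s^{-1}}} = 2\times10^{7}\,\mathrm{s} \approx 230\ \text{days},$

consistent in order of magnitude with the roughly 245 days quoted above.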
Through use of their flagella, bacteria are able to move rapidly towards attractants and away from repellents, by means of a biased random walk, with runs and tumbles brought about by rotating its flagellum counterclockwise and clockwise, respectively. The two directions of rotation are not identical (with respect to flagellum movement) and are selected by a molecular switch. Clockwise rotation is called the traction mode with the body following the flagella. Counterclockwise rotation is called the thruster mode with the flagella lagging behind the body.
Assembly
During flagellar assembly, components of the flagellum pass through the hollow cores of the basal body and the nascent filament. During assembly, protein components are added at the flagellar tip rather than at the base. In vitro, flagellar filaments assemble spontaneously in a solution containing purified flagellin as the sole protein.
Evolution
At least 10 protein components of the bacterial flagellum share homologous proteins with the type three secretion system (T3SS) found in many gram-negative bacteria, hence one likely evolved from the other. Because the T3SS has a similar number of components as a flagellar apparatus (about 25 proteins), which one evolved first is difficult to determine. However, the flagellar system appears to involve more proteins overall, including various regulators and chaperones, hence it has been argued that flagella evolved from a T3SS. Alternatively, it has been suggested that the flagellum may have evolved first or that the two structures evolved in parallel. Early single-cell organisms' need for motility supports the idea that the more mobile flagella would be selected by evolution first, but the T3SS evolving from the flagellum can be seen as 'reductive evolution', and receives no topological support from the phylogenetic trees. The hypothesis that the two structures evolved separately from a common ancestor accounts for the protein similarities between the two structures, as well as their functional diversity.
Flagella and the intelligent design debate
Some authors have argued that flagella cannot have evolved, assuming that they can only function properly when all proteins are in place. In other words, the flagellar apparatus is "irreducibly complex". However, many proteins can be deleted or mutated and the flagellum still works, though sometimes at reduced efficiency. Moreover, the composition of bacterial flagella has proved more diverse than expected, with many proteins found in only some species and not others. Hence, the flagellar apparatus is clearly very flexible in evolutionary terms and perfectly able to lose or gain protein components. For instance, a number of mutations have been found that increase the motility of E. coli. Additional evidence for the evolution of bacterial flagella includes the existence of vestigial flagella, intermediate forms of flagella and patterns of similarities among flagellar protein sequences, including the observation that almost all of the core flagellar proteins have known homologies with non-flagellar proteins. Furthermore, several processes have been identified as playing important roles in flagellar evolution, including self-assembly of simple repeating subunits, gene duplication with subsequent divergence, recruitment of elements from other systems ('molecular bricolage') and recombination.
Flagellar arrangements
Different species of bacteria have different numbers and arrangements of flagella, named using the term tricho, from the Greek trichos meaning hair.
Monotrichous bacteria such as Vibrio cholerae have a single polar flagellum.
Amphitrichous bacteria have a single flagellum on each of two opposite ends (e.g., Campylobacter jejuni or Alcaligenes faecalis)—both flagella rotate but coordinate to produce coherent thrust.
Lophotrichous bacteria (lopho Greek combining term meaning crest or tuft) have multiple flagella located at the same spot on the bacterial surface such as Helicobacter pylori, which act in concert to drive the bacteria in a single direction. In many cases, the bases of multiple flagella are surrounded by a specialized region of the cell membrane, called the polar organelle.
Peritrichous bacteria have flagella projecting in all directions (e.g., E. coli).
Counterclockwise rotation of a monotrichous polar flagellum pushes the cell forward with the flagellum trailing behind, much like a corkscrew moving inside cork. At the microscopic scale, water behaves as if it were highly viscous, very different from water as experienced at everyday scales.
Spirochetes, in contrast, have flagella called endoflagella arising from opposite poles of the cell, and are located within the periplasmic space as shown by breaking the outer-membrane and also by electron cryotomography microscopy. The rotation of the filaments relative to the cell body causes the entire bacterium to move forward in a corkscrew-like motion, even through material viscous enough to prevent the passage of normally flagellated bacteria.
In certain large forms of Selenomonas, more than 30 individual flagella are organized outside the cell body, helically twining about each other to form a thick structure (easily visible with the light microscope) called a "fascicle".
In some Vibrio spp. (particularly Vibrio parahaemolyticus) and related bacteria such as Aeromonas, two flagellar systems co-exist, using different sets of genes and different ion gradients for energy. The polar flagella are constitutively expressed and provide motility in bulk fluid, while the lateral flagella are expressed when the polar flagella meet too much resistance to turn. These provide swarming motility on surfaces or in viscous fluids.
Bundling
Bundling is an event that can happen in multi-flagellated cells, bundling the flagella together and causing them to rotate in a coordinated manner.
Flagella are left-handed helices, and when rotated counter-clockwise by their rotors, they can bundle and rotate together. When the rotors reverse direction, thus rotating clockwise, the flagellum unwinds from the bundle. This may cause the cell to stop its forward motion and instead start twitching in place, referred to as tumbling. Tumbling results in a stochastic reorientation of the cell, causing it to change the direction of its forward swimming.
It is not known which stimuli drive the switch between bundling and tumbling, but the motor is highly adaptive to different signals. In the model describing chemotaxis ("movement on purpose") the clockwise rotation of a flagellum is suppressed by chemical compounds favorable to the cell (e.g. food). When moving in a favorable direction, the concentration of such chemical attractants increases and therefore tumbles are continually suppressed, allowing forward motion; likewise, when the cell's direction of motion is unfavorable (e.g., away from a chemical attractant), tumbles are no longer suppressed and occur much more often, with the chance that the cell will be thus reoriented in the correct direction.
Even if all flagella would rotate clockwise, however, they often cannot form a bundle due to geometrical and hydrodynamic reasons.
Eukaryotic flagella
Terminology
Aiming to emphasize the distinction between the bacterial flagella and the eukaryotic cilia and flagella, some authors attempted to replace the name of these two eukaryotic structures with "undulipodia" (e.g., all papers by Margulis since the 1970s) or "cilia" for both (e.g., Hülsmann, 1992; Adl et al., 2012; most papers of Cavalier-Smith), preserving "flagella" for the bacterial structure. However, the discriminative usage of the terms "cilia" and "flagella" for eukaryotes adopted in this article (see below) is still common (e.g., Andersen et al., 1991; Leadbeater et al., 2000).
Internal structure
The core of a eukaryotic flagellum, known as the axoneme is a bundle of nine fused pairs of microtubules known as doublets surrounding two central single microtubules (singlets). This 9+2 axoneme is characteristic of the eukaryotic flagellum. At the base of a eukaryotic flagellum is a basal body, "blepharoplast" or kinetosome, which is the microtubule organizing center for flagellar microtubules and is about 500 nanometers long. Basal bodies are structurally identical to centrioles. The flagellum is encased within the cell's plasma membrane, so that the interior of the flagellum is accessible to the cell's cytoplasm.
Besides the axoneme and basal body, relatively constant in morphology, other internal structures of the flagellar apparatus are the transition zone (where the axoneme and basal body meet) and the root system (microtubular or fibrilar structures that extend from the basal bodies into the cytoplasm), more variable and useful as indicators of phylogenetic relationships of eukaryotes. Other structures, more uncommon, are the paraflagellar (or paraxial, paraxonemal) rod, the R fiber, and the S fiber. For surface structures, see below.
Mechanism
Each of the outer 9 doublet microtubules extends a pair of dynein arms (an "inner" and an "outer" arm) to the adjacent microtubule; these produce force through ATP hydrolysis. The flagellar axoneme also contains radial spokes, polypeptide complexes extending from each of the outer nine microtubule doublets towards the central pair, with the "head" of the spoke facing inwards. The radial spoke is thought to be involved in the regulation of flagellar motion, although its exact function and method of action are not yet understood.
Flagella versus cilia
The regular beat patterns of eukaryotic cilia and flagella generate motion on a cellular level. Examples range from the propulsion of single cells such as the swimming of spermatozoa to the transport of fluid along a stationary layer of cells such as in the respiratory tract.
Although eukaryotic cilia and flagella are ultimately the same, they are sometimes classed by their pattern of movement, a tradition from before their structures were known. In the case of flagella, the motion is often planar and wave-like, whereas the motile cilia often perform a more complicated three-dimensional motion with a power and recovery stroke. Yet another traditional form of distinction is by the number of 9+2 organelles on the cell.
Intraflagellar transport
Intraflagellar transport, the process by which axonemal subunits, transmembrane receptors, and other proteins are moved up and down the length of the flagellum, is essential for proper functioning of the flagellum, in both motility and signal transduction.
Evolution and occurrence
Eukaryotic flagella or cilia, probably an ancestral characteristic, are widespread in almost all groups of eukaryotes, as a relatively perennial condition, or as a flagellated life cycle stage (e.g., zoids, gametes, zoospores, which may be produced continually or not).
The first situation is found either in specialized cells of multicellular organisms (e.g., the choanocytes of sponges, or the ciliated epithelia of metazoans), as in ciliates and many eukaryotes with a "flagellate condition" (or "monadoid level of organization", see Flagellata, an artificial group).
Flagellated lifecycle stages are found in many groups, e.g., many green algae (zoospores and male gametes), bryophytes (male gametes), pteridophytes (male gametes), some gymnosperms (cycads and Ginkgo, as male gametes), centric diatoms (male gametes), brown algae (zoospores and gametes), oomycetes (asexual zoospores and gametes), hyphochytrids (zoospores), labyrinthulomycetes (zoospores), some apicomplexans (gametes), some radiolarians (probably gametes), foraminiferans (gametes), plasmodiophoromycetes (zoospores and gametes), myxogastrids (zoospores), metazoans (male gametes), and chytrid fungi (zoospores and gametes).
Flagella or cilia are completely absent in some groups, probably due to a loss rather than being a primitive condition. The loss of cilia occurred in red algae, some green algae (Zygnematophyceae), the gymnosperms except cycads and Ginkgo, angiosperms, pennate diatoms, some apicomplexans, some amoebozoans, in the sperm of some metazoans, and in fungi (except chytrids).
Typology
A number of terms related to flagella or cilia are used to characterize eukaryotes. According to surface structures present, flagella may be:
whiplash flagella (= smooth, acronematic flagella): without hairs, e.g., in Opisthokonta
hairy flagella (= tinsel, flimmer, pleuronematic flagella): with hairs (= mastigonemes sensu lato), divided in:
with fine hairs (= non-tubular, or simple hairs): occurs in Euglenophyceae, Dinoflagellata, some Haptophyceae (Pavlovales)
with stiff hairs (= tubular hairs, retronemes, mastigonemes sensu stricto), divided in:
bipartite hairs: with two regions. Occurs in Cryptophyceae, Prasinophyceae, and some Heterokonta
tripartite (= straminipilous) hairs: with three regions (a base, a tubular shaft, and one or more terminal hairs). Occurs in most Heterokonta
stichonematic flagella: with a single row of hairs
pantonematic flagella: with two rows of hairs
acronematic: flagella with a single, terminal mastigoneme or flagellar hair (e.g., bodonids); some authors use the term as synonym of whiplash
with scales: e.g., Prasinophyceae
with spines: e.g., some brown algae
with undulating membrane: e.g., some kinetoplastids, some parabasalids
with proboscis (trunk-like protrusion of the cell): e.g., apusomonads, some bodonids
According to the number of flagella, cells may be: (remembering that some authors use "ciliated" instead of "flagellated")
uniflagellated: e.g., most Opisthokonta
biflagellated: e.g., all Dinoflagellata, the gametes of Charophyceae, of most bryophytes and of some metazoans
triflagellated: e.g., the gametes of some Foraminifera
quadriflagellated: e.g., some Prasinophyceae, Collodictyonidae
octoflagellated: e.g., some Diplomonada, some Prasinophyceae
multiflagellated: e.g., Opalinata, Ciliophora, Stephanopogon, Parabasalida, Hemimastigophora, Caryoblastea, Multicilia, the gametes (or zoids) of Oedogoniales (Chlorophyta), some pteridophytes and some gymnosperms
According to the place of insertion of the flagella:
opisthokont: cells with flagella inserted posteriorly, e.g., in Opisthokonta (Vischer, 1945). In Haptophyceae, flagella are laterally to terminally inserted, but are directed posteriorly during rapid swimming.
akrokont: cells with flagella inserted apically
subakrokont: cells with flagella inserted subapically
pleurokont: cells with flagella inserted laterally
According to the beating pattern:
gliding: a flagellum that trails on the substrate
heterodynamic: flagella with different beating patterns (usually with one flagellum functioning in food capture and the other functioning in gliding, anchorage, propulsion or "steering")
isodynamic: flagella beating with the same patterns
Other terms related to the flagellar type:
isokont: cells with flagella of equal length. It was also formerly used to refer to the Chlorophyta
anisokont: cells with flagella of unequal length, e.g., some Euglenophyceae and Prasinophyceae
heterokont: term introduced by Luther (1899) to refer to the Xanthophyceae, due to the pair of flagella of unequal length. It has taken on a specific meaning in referring to cells with an anterior straminipilous flagellum (with tripartite mastigonemes, in one or two rows) and a posterior usually smooth flagellum. It is also used to refer to the taxon Heterokonta
stephanokont: cells with a crown of flagella near its anterior end, e.g., the gametes and spores of Oedogoniales, the spores of some Bryopsidales. Term introduced by Blackman & Tansley (1902) to refer to the Oedogoniales
akont: cells without flagella. It was also used to refer to taxonomic groups, as Aconta or Akonta: the Zygnematophyceae and Bacillariophyceae (Oltmanns, 1904), or the Rhodophyceae (Christensen, 1962)
Archaeal flagella
The archaellum possessed by some species of Archaea is superficially similar to the bacterial flagellum; in the 1980s, they were thought to be homologous on the basis of gross morphology and behavior. Both flagella and archaella consist of filaments extending outside the cell, and rotate to propel the cell. Archaeal flagella have a unique structure which lacks a central channel. Similar to bacterial type IV pilins, the archaeal proteins (archaellins) are made with class 3 signal peptides and they are processed by a type IV prepilin peptidase-like enzyme. The archaellins are typically modified by the addition of N-linked glycans which are necessary for proper assembly or function.
Discoveries in the 1990s revealed numerous detailed differences between the archaeal and bacterial flagella. These include:
Bacterial flagella rotation is powered by the proton motive force – a flow of H+ ions or occasionally by the sodium-motive force – a flow of Na+ ions; archaeal flagella rotation is powered by ATP.
While bacterial cells often have many flagellar filaments, each of which rotates independently, the archaeal flagellum is composed of a bundle of many filaments that rotates as a single assembly.
Bacterial flagella grow by the addition of flagellin subunits at the tip; archaeal flagella grow by the addition of subunits to the base.
Bacterial flagella are thicker than archaella, and the bacterial filament has a large enough hollow "tube" inside that the flagellin subunits can flow up the inside of the filament and get added at the tip; the archaellum is too thin (12-15 nm) to allow this.
Many components of bacterial flagella share sequence similarity to components of the type III secretion systems, but the components of bacterial flagella and archaella share no sequence similarity. Instead, some components of archaella share sequence and morphological similarity with components of type IV pili, which are assembled through the action of type II secretion systems (the nomenclature of pili and protein secretion systems is not consistent).
These differences support the theory that the bacterial flagella and archaella are a classic case of biological analogy, or convergent evolution, rather than homology. Research into the structure of archaella made significant progress beginning in the early 2010s, with the first atomic resolution structure of an archaella protein, the discovery of additional functions of archaella, and the first reports of archaella in Nanoarchaeota and Thaumarchaeota.
Fungal
The only fungi to have a single flagellum on their spores are the chytrids. In Batrachochytrium dendrobatidis the flagellum is 19–20 μm long. A nonfunctioning centriole lies adjacent to the kinetosome. Nine interconnected props attach the kinetosome to the plasmalemma, and a terminal plate is present in the transitional zone. An inner ring-like structure attached to the tubules of the flagellar doublets within the transitional zone has been observed in transverse section.
Additional images
See also
Ciliopathy
RpoF
References
Further reading
External links
Cell Image Library - Flagella
Cell movement
Organelles
Protein complexes
Bacteria | Flagellum | [
"Biology"
] | 6,128 | [
"Prokaryotes",
"Microorganisms",
"Bacteria"
] |
43,126 | https://en.wikipedia.org/wiki/Callisto%20%28moon%29 | Callisto ( ), or Jupiter IV, is the second-largest moon of Jupiter, after Ganymede. In the Solar System it is the third-largest moon after Ganymede and Saturn's largest moon Titan, and nearly as large as the smallest planet Mercury. Callisto is, with a diameter of , roughly a third larger than Earth's Moon and orbits Jupiter on average at a distance of , which is about five times further out than the Moon orbiting Earth. It is the outermost of the four large Galilean moons of Jupiter, which were discovered in 1610 with one of the first telescopes, and is today visible from Earth with common binoculars.
The surface of Callisto is the oldest and most heavily cratered in the Solar System. Its surface is completely covered with impact craters. It does not show any signatures of subsurface processes such as plate tectonics or volcanism, with no signs that geological activity in general has ever occurred, and is thought to have evolved predominantly under the influence of impacts. Prominent surface features include multi-ring structures, variously shaped impact craters, and chains of craters (catenae) and associated scarps, ridges and deposits. At a small scale, the surface is varied and made up of small, sparkly frost deposits at the tips of high spots, surrounded by a low-lying, smooth blanket of dark material. This is thought to result from the sublimation-driven degradation of small landforms, which is supported by the general deficit of small impact craters and the presence of numerous small knobs, considered to be their remnants. The absolute ages of the landforms are not known.
Callisto is composed of approximately equal amounts of rock and ice, with a density of about 1.83 g/cm3, the lowest density and surface gravity of Jupiter's major moons. Compounds detected spectroscopically on the surface include water ice, carbon dioxide, silicates and organic compounds. Investigation by the Galileo spacecraft revealed that Callisto may have a small silicate core and possibly a subsurface ocean of liquid water at depths greater than 100 km.
It is not in an orbital resonance like the three other Galilean satellites—Io, Europa and Ganymede—and is thus not appreciably tidally heated. Callisto's rotation is tidally locked to its orbit around Jupiter, so that it always presents the same hemisphere to Jupiter; from that hemisphere, Jupiter appears to hang nearly motionless in the sky. It is less affected by Jupiter's magnetosphere than the other inner satellites because of its more remote orbit, located just outside Jupiter's main radiation belt. Callisto is surrounded by an extremely thin atmosphere composed of carbon dioxide and probably molecular oxygen, as well as by a rather intense ionosphere. Callisto is thought to have formed by slow accretion from the disk of the gas and dust that surrounded Jupiter after its formation. Callisto's gradual accretion and the lack of tidal heating meant that not enough heat was available for rapid differentiation. The slow convection in the interior of Callisto, which commenced soon after formation, led to partial differentiation and possibly to the formation of a subsurface ocean at a depth of 100–150 km and a small, rocky core.
The likely presence of an ocean within Callisto leaves open the possibility that it could harbor life. However, conditions are thought to be less favorable than on nearby Europa. Various space probes from Pioneers 10 and 11 to Galileo and Cassini have studied Callisto. Because of its low radiation levels, Callisto has long been considered the most suitable body on which to base possible future crewed missions for exploring the Jovian system.
History
Discovery
Callisto was discovered independently by Simon Marius and Galileo Galilei in 1610, along with the three other large Jovian moons—Ganymede, Io and Europa.
Name
Callisto, like all of Jupiter's moons, is named after one of Zeus's many lovers or other sexual partners in Greek mythology. Callisto was a nymph (or, according to some sources, the daughter of Lycaon) who was associated with the goddess of the hunt, Artemis. The name was suggested by Simon Marius soon after Callisto's discovery. Marius attributed the suggestion to Johannes Kepler.
However, the names of the Galilean satellites fell into disfavor for a considerable time, and were not revived in common use until the mid-20th century. In much of the earlier astronomical literature, Callisto is referred to by its Roman numeral designation, a system introduced by Galileo, as or as "the fourth satellite of Jupiter".
There is no established English adjectival form of the name. The adjectival form of Greek Καλλιστῴ Kallistōi is Καλλιστῴος Kallistōi-os, from which one might expect Latin Callistōius and English *Callistóian (with 5 syllables), parallel to Sapphóian (4 syllables) for Sapphōi and Letóian for Lētōi. However, the iota subscript is often omitted from such Greek names (cf. Inóan from Īnōi and Argóan from Argōi), and indeed the analogous form Callistoan is found.
In Virgil, a second oblique stem appears in Latin: Callistōn-, but the corresponding Callistonian has rarely appeared in English. One also sees ad hoc forms, such as Callistan, Callistian and Callistean.
Orbit and rotation
Callisto is the outermost of the four Galilean moons of Jupiter. It orbits at a distance of approximately 1,880,000 km (26.3 times the 71,492 km radius of Jupiter itself). This is significantly larger than the orbital radius—1,070,000 km—of the next-closest Galilean satellite, Ganymede. As a result of this relatively distant orbit, Callisto does not participate in mean-motion resonance—in which the three inner Galilean satellites are locked—and probably never has. Callisto is expected to be captured into the resonance in about 1.5 billion years, completing the 1:2:4:8 chain.
Like most other regular planetary moons, Callisto's rotation is locked to be synchronous with its orbit. The length of Callisto's day, simultaneously its orbital period, is about 16.7 Earth days. Its orbit is very slightly eccentric and inclined to the Jovian equator, with the eccentricity and inclination changing quasi-periodically due to solar and planetary gravitational perturbations on a timescale of centuries. The ranges of change are 0.0072–0.0076 and 0.20–0.60°, respectively. These orbital variations cause the axial tilt (the angle between the rotational and orbital axes) to vary between 0.4 and 1.6°.
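The quoted orbital period follows from Kepler's third law; taking the semi-major axis of about 1,880,000 km given above and a standard value for Jupiter's gravitational parameter, $GM_{J} \approx 1.267\times10^{17}\,\mathrm{m^{3}\,s^{-2}}$ (quoted here only for illustration),

$T = 2\pi\sqrt{\frac{a^{3}}{GM_{J}}} \approx 2\pi\sqrt{\frac{(1.88\times10^{9}\,\mathrm{m})^{3}}{1.267\times10^{17}\,\mathrm{m^{3}\,s^{-2}}}} \approx 1.44\times10^{6}\,\mathrm{s} \approx 16.7\ \text{days}.$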
The dynamical isolation of Callisto means that it has never been appreciably tidally heated, which has important consequences for its internal structure and evolution. Its distance from Jupiter also means that the charged-particle flux from Jupiter's magnetosphere at its surface is relatively low—about 300 times lower than, for example, that at Europa. Hence, unlike the other Galilean moons, charged-particle irradiation has had a relatively minor effect on Callisto's surface. The radiation level at Callisto's surface is equivalent to a dose of about 0.01 rem (0.1 mSv) per day, which is just over ten times higher than Earth's average background radiation, but less than in Low Earth Orbit or on Mars.
Physical characteristics
Composition
The average density of Callisto, 1.83 g/cm3, suggests a composition of approximately equal parts of rocky material and water ice, with some additional volatile ices such as ammonia. The mass fraction of ices is 49–55%. The exact composition of Callisto's rock component is not known, but is probably close to the composition of L/LL type ordinary chondrites, which are characterized by less total iron, less metallic iron and more iron oxide than H chondrites. The weight ratio of iron to silicon is 0.9–1.3 in Callisto, whereas the solar ratio is around 1.8.
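As a rough illustration of how the bulk density constrains the ice fraction (a simple two-component estimate that neglects compression and porosity, and assumes illustrative mean densities of about 1.3 g/cm3 for the ice component, allowing for high-pressure ice phases, and 3.5 g/cm3 for the rock component), the ice mass fraction $x$ satisfies

$\frac{1}{\rho} = \frac{x}{\rho_{\mathrm{ice}}} + \frac{1-x}{\rho_{\mathrm{rock}}} \;\Rightarrow\; x = \frac{1/\rho - 1/\rho_{\mathrm{rock}}}{1/\rho_{\mathrm{ice}} - 1/\rho_{\mathrm{rock}}} \approx \frac{0.546-0.286}{0.769-0.286} \approx 0.54,$

which falls within the 49–55% range quoted above; the result is quite sensitive to the densities assumed.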
Callisto's surface has an albedo of about 20%. Its surface composition is thought to be broadly similar to its composition as a whole. Near-infrared spectroscopy has revealed the presence of water ice absorption bands at wavelengths of 1.04, 1.25, 1.5, 2.0 and 3.0 micrometers. Water ice seems to be ubiquitous on the surface of Callisto, with a mass fraction of 25–50%. The analysis of high-resolution, near-infrared and UV spectra obtained by the Galileo spacecraft and from the ground has revealed various non-ice materials: magnesium- and iron-bearing hydrated silicates, carbon dioxide, sulfur dioxide, and possibly ammonia and various organic compounds. Spectral data indicate that Callisto's surface is extremely heterogeneous at the small scale. Small, bright patches of pure water ice are intermixed with patches of a rock–ice mixture and extended dark areas made of a non-ice material.
The Callistoan surface is asymmetric: the leading hemisphere is darker than the trailing one. This is different from other Galilean satellites, where the reverse is true. The trailing hemisphere of Callisto appears to be enriched in carbon dioxide, whereas the leading hemisphere has more sulfur dioxide. Many fresh impact craters like Lofn also show enrichment in carbon dioxide. Overall, the chemical composition of the surface, especially in the dark areas, may be close to that seen on D-type asteroids, whose surfaces are made of carbonaceous material.
Internal structure
Callisto's battered surface lies on top of a cold, stiff and icy lithosphere that is between 80 and 150 km thick. A salty ocean 150–200 km deep may lie beneath the crust, indicated by studies of the magnetic fields around Jupiter and its moons. It was found that Callisto responds to Jupiter's varying background magnetic field like a perfectly conducting sphere; that is, the field cannot penetrate inside Callisto, suggesting a layer of highly conductive fluid within it with a thickness of at least 10 km. The existence of an ocean is more likely if water contains a small amount of ammonia or other antifreeze, up to 5% by weight. In this case the water+ice layer can be as thick as 250–300 km. Failing an ocean, the icy lithosphere may be somewhat thicker, up to about 300 km.
Beneath the lithosphere and putative ocean, Callisto's interior appears to be neither entirely uniform nor particularly variable. Galileo orbiter data (especially the dimensionless moment of inertia—0.3549 ± 0.0042—determined during close flybys) suggest that, if Callisto is in hydrostatic equilibrium, its interior is composed of compressed rocks and ices, with the amount of rock increasing with depth due to partial settling of its constituents. In other words, Callisto may be only partially differentiated. The density and moment of inertia for an equilibrium Callisto are compatible with the existence of a small silicate core in the center of Callisto. The radius of any such core cannot exceed 600 km, and the density may lie between 3.1 and 3.6 g/cm3. In this case, Callisto's interior would be in stark contrast to that of Ganymede, which appears to be fully differentiated.
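For reference, a homogeneous sphere has a moment of inertia factor $C/(MR^{2}) = 2/5 = 0.4$; values below 0.4, such as the 0.3549 measured for Callisto, indicate that mass is concentrated toward the center, and the lower the value, the stronger the degree of differentiation.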
However, a 2011 reanalysis of Galileo data suggests that Callisto is not in hydrostatic equilibrium. In that case, the gravity data may be more consistent with a more thoroughly differentiated Callisto with a hydrated silicate core.
Surface features
The ancient surface of Callisto is one of the most heavily cratered in the Solar System. In fact, the crater density is close to saturation: any new crater will tend to erase an older one. The large-scale geology is relatively simple; on Callisto there are no large mountains, volcanoes or other endogenic tectonic features. The impact craters and multi-ring structures—together with associated fractures, scarps and deposits—are the only large features to be found on the surface.
Callisto's surface can be divided into several geologically different parts: cratered plains, light plains, bright and dark smooth plains, and various units associated with particular multi-ring structures and impact craters. The cratered plains make up most of the surface area and represent the ancient lithosphere, a mixture of ice and rocky material. The light plains include bright impact craters like Burr and Lofn, as well as the effaced remnants of old large craters called palimpsests, the central parts of multi-ring structures, and isolated patches in the cratered plains. These light plains are thought to be icy impact deposits. The bright, smooth plains make up a small fraction of Callisto's surface and are found in the ridge and trough zones of the Valhalla and Asgard formations and as isolated spots in the cratered plains. They were thought to be connected with endogenic activity, but the high-resolution Galileo images showed that the bright, smooth plains correlate with heavily fractured and knobby terrain and do not show any signs of resurfacing. The Galileo images also revealed small, dark, smooth areas with overall coverage less than 10,000 km2, which appear to embay the surrounding terrain. They are possible cryovolcanic deposits. Both the light and the various smooth plains are somewhat younger and less cratered than the background cratered plains.
Impact crater diameters seen range from 0.1 km—a limit defined by the imaging resolution—to over 100 km, not counting the multi-ring structures. Small craters, with diameters less than 5 km, have simple bowl or flat-floored shapes. Those 5–40 km across usually have a central peak. Larger impact features, with diameters in the range 25–100 km, have central pits instead of peaks, such as Tindr crater. The largest craters with diameters over 60 km can have central domes, which are thought to result from central tectonic uplift after an impact; examples include Doh and Hár craters. A small number of very large—more than 100 km in diameter—and bright impact craters show anomalous dome geometry. These are unusually shallow and may be a transitional landform to the multi-ring structures, as with the Lofn impact feature. Callisto's craters are generally shallower than those on the Moon.
The largest impact features on Callisto's surface are multi-ring basins. Two are enormous. Valhalla is the largest, with a bright central region 600 km in diameter, and rings extending as far as 1,800 km from the center (see figure). The second largest is Asgard, measuring about 1,600 km in diameter. Multi-ring structures probably originated as a result of a post-impact concentric fracturing of the lithosphere lying on a layer of soft or liquid material, possibly an ocean. The catenae—for example Gomul Catena—are long chains of impact craters lined up in straight lines across the surface. They were probably created by objects that were tidally disrupted as they passed close to Jupiter prior to the impact on Callisto, or by very oblique impacts. A historical example of a disruption was Comet Shoemaker–Levy 9.
As mentioned above, small patches of pure water ice with an albedo as high as 80% are found on the surface of Callisto, surrounded by much darker material. High-resolution Galileo images showed the bright patches to be predominately located on elevated surface features: crater rims, scarps, ridges and knobs. They are likely to be thin water frost deposits. Dark material usually lies in the lowlands surrounding and mantling bright features and appears to be smooth. It often forms patches up to 5 km across within the crater floors and in the intercrater depressions.
On a sub-kilometer scale the surface of Callisto is more degraded than the surfaces of other icy Galilean moons. Typically there is a deficit of small impact craters with diameters less than 1 km as compared with, for instance, the dark plains on Ganymede. Instead of small craters, the almost ubiquitous surface features are small knobs and pits. The knobs are thought to represent remnants of crater rims degraded by an as-yet uncertain process. The most likely candidate process is the slow sublimation of ice, which is enabled by a temperature of up to 165 K, reached at a subsolar point. Such sublimation of water or other volatiles from the dirty ice that is the bedrock causes its decomposition. The non-ice remnants form debris avalanches descending from the slopes of the crater walls. Such avalanches are often observed near and inside impact craters and termed "debris aprons". Sometimes crater walls are cut by sinuous valley-like incisions called "gullies", which resemble certain Martian surface features. In the ice sublimation hypothesis, the low-lying dark material is interpreted as a blanket of primarily non-ice debris, which originated from the degraded rims of craters and has covered a predominantly icy bedrock.
The relative ages of the different surface units on Callisto can be determined from the density of impact craters on them. The older the surface, the denser the crater population. Absolute dating has not been carried out, but based on theoretical considerations, the cratered plains are thought to be ~4.5 billion years old, dating back almost to the formation of the Solar System. The ages of multi-ring structures and impact craters depend on chosen background cratering rates and are estimated by different authors to vary between 1 and 4 billion years.
Atmosphere and ionosphere
Callisto has a very tenuous atmosphere composed of carbon dioxide and probably oxygen. It was detected by the Galileo Near Infrared Mapping Spectrometer (NIMS) from its absorption feature near the wavelength 4.2 micrometers. The surface pressure is estimated to be 7.5 picobar (0.75 μPa) and the particle number density about 4 × 10⁸ cm−3. Because such a thin atmosphere would be lost in only about four years (see atmospheric escape), it must be constantly replenished, possibly by slow sublimation of carbon dioxide ice from Callisto's icy crust, which would be compatible with the sublimation–degradation hypothesis for the formation of the surface knobs.
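As a consistency check on these figures (assuming, for illustration, a surface temperature of roughly 150 K), the ideal-gas relation gives

$p = n k_{B} T \approx (4\times10^{14}\,\mathrm{m^{-3}})(1.38\times10^{-23}\,\mathrm{J\,K^{-1}})(150\,\mathrm{K}) \approx 8\times10^{-7}\,\mathrm{Pa},$

of the same order as the quoted surface pressure of 0.75 μPa.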
Callisto's ionosphere was first detected during Galileo flybys; its high electron density of 7–17 × 10⁴ cm−3 cannot be explained by the photoionization of the atmospheric carbon dioxide alone. Hence, it is suspected that the atmosphere of Callisto is actually dominated by molecular oxygen (in amounts 10–100 times greater than the carbon dioxide). However, oxygen has not yet been directly detected in the atmosphere of Callisto. Observations with the Hubble Space Telescope (HST) placed an upper limit on its possible concentration in the atmosphere, based on lack of detection, which is still compatible with the ionospheric measurements. At the same time, HST was able to detect condensed oxygen trapped on the surface of Callisto.
Atomic hydrogen has also been detected in Callisto's atmosphere via recent analysis of 2001 Hubble Space Telescope data. Spectral images taken on 15 and 24 December 2001 were re-examined, revealing a faint signal of scattered light that indicates a hydrogen corona. The observed brightness from the scattered sunlight in Callisto's hydrogen corona is approximately two times larger when the leading hemisphere is observed. This asymmetry may originate from a difference in hydrogen abundance between the leading and trailing hemispheres. However, this hemispheric difference in Callisto's hydrogen corona brightness is likely to originate from the extinction of the signal in Earth's geocorona, which is greater when the trailing hemisphere is observed.
Origin and evolution
The partial differentiation of Callisto (inferred e.g. from moment of inertia measurements) means that it has never been heated enough to melt its ice component. Therefore, the most favorable model of its formation is a slow accretion in the low-density Jovian subnebula—a disk of the gas and dust that existed around Jupiter after its formation. Such a prolonged accretion stage would allow cooling to largely keep up with the heat accumulation caused by impacts, radioactive decay and contraction, thereby preventing melting and fast differentiation. The allowable timescale for the formation of Callisto lies then in the range 0.1 million–10 million years.
The further evolution of Callisto after accretion was determined by the balance of the radioactive heating, cooling through thermal conduction near the surface, and solid state or subsolidus convection in the interior. The details of subsolidus convection in the ice are the main source of uncertainty in the models of all icy moons. It is known to develop when the temperature is sufficiently close to the melting point, due to the temperature dependence of ice viscosity. Subsolidus convection in icy bodies is a slow process with ice motions of the order of 1 centimeter per year, but is, in fact, a very effective cooling mechanism on long timescales. It is thought to proceed in the so-called stagnant lid regime, where a stiff, cold outer layer of Callisto conducts heat without convection, whereas the ice beneath it convects in the subsolidus regime. For Callisto, the outer conductive layer corresponds to the cold and rigid lithosphere with a thickness of about 100 km. Its presence would explain the lack of any signs of endogenic activity on the Callistoan surface. The convection in the interior parts of Callisto may be layered, because under the high pressures found there, water ice exists in different crystalline phases beginning from the ice I on the surface to ice VII in the center. The early onset of subsolidus convection in the Callistoan interior could have prevented large-scale ice melting and any resulting differentiation that would have otherwise formed a large rocky core and icy mantle. Due to the convection process, however, very slow and partial separation and differentiation of rocks and ices inside Callisto has been proceeding on timescales of billions of years and may be continuing to this day.
The current understanding of the evolution of Callisto allows for the existence of a layer or "ocean" of liquid water in its interior. This is connected with the anomalous behavior of ice I phase's melting temperature, which decreases with pressure, achieving temperatures as low as 251 K at 2,070 bar (207 MPa). In all realistic models of Callisto the temperature in the layer between 100 and 200 km in depth is very close to, or exceeds slightly, this anomalous melting temperature. The presence of even small amounts of ammonia—about 1–2% by weight—almost guarantees the liquid's existence because ammonia would lower the melting temperature even further.
Although Callisto is very similar in bulk properties to Ganymede, it apparently had a much simpler geological history. The surface appears to have been shaped mainly by impacts and other exogenic forces. Unlike neighboring Ganymede with its grooved terrain, there is little evidence of tectonic activity. Explanations that have been proposed for the contrasts in internal heating and consequent differentiation and geologic activity between Callisto and Ganymede include differences in formation conditions, the greater tidal heating experienced by Ganymede, and the more numerous and energetic impacts that would have been suffered by Ganymede during the Late Heavy Bombardment. The relatively simple geological history of Callisto provides planetary scientists with a reference point for comparison with other more active and complex worlds.
Habitability
It is speculated that there could be life in Callisto's subsurface ocean. Like Europa and Ganymede, as well as Saturn's moons Enceladus, Dione and Titan and Neptune's moon Triton, a possible subsurface ocean might be composed of salt water.
It is possible that halophiles could thrive in the ocean.
As with Europa and Ganymede, the idea has been raised that habitable conditions and even extraterrestrial microbial life may exist in the salty ocean under the Callistoan surface. However, the environmental conditions necessary for life appear to be less favorable on Callisto than on Europa. The principal reasons are the lack of contact with rocky material and the lower heat flux from the interior of Callisto. Callisto's ocean is heated only by radioactive decay, while Europa's is also heated by tidal energy, as it is much closer to Jupiter. It is thought that of all of Jupiter's moons, Europa has the greatest chance of supporting microbial life.
Exploration
Past
The Pioneer 10 and Pioneer 11 Jupiter encounters in the early 1970s contributed little new information about Callisto in comparison with what was already known from Earth-based observations. The real breakthrough happened later with the Voyager 1 and Voyager 2 flybys in 1979. They imaged more than half of the Callistoan surface with a resolution of 1–2 km, and precisely measured its temperature, mass and shape. A second round of exploration lasted from 1994 to 2003, when the Galileo spacecraft had eight close encounters with Callisto; the last flyby, during the C30 orbit in 2001, came as close as 138 km to the surface. The Galileo orbiter completed the global imaging of the surface and delivered a number of pictures of selected areas of Callisto with a resolution as high as 15 meters. In 2000, the Cassini spacecraft en route to Saturn acquired high-quality infrared spectra of the Galilean satellites including Callisto. In February–March 2007, the New Horizons probe on its way to Pluto obtained new images and spectra of Callisto.
Future exploration
Callisto will be visited by three spacecraft in the near future.
The European Space Agency's Jupiter Icy Moons Explorer (JUICE), which launched on 14 April 2023, will perform 21 close flybys of Callisto between 2031 and 2034.
NASA's Europa Clipper, which launched on 14 October 2024, will conduct nine close flybys of Callisto beginning in 2030.
China's CNSA Tianwen-4 is planned to launch to Jupiter around 2030 before entering orbit around Callisto.
Old proposals
Formerly proposed for a launch in 2020, the Europa Jupiter System Mission (EJSM) was a joint NASA/ESA proposal for exploration of Jupiter's moons. In February 2009 it was announced that ESA/NASA had given this mission priority ahead of the Titan Saturn System Mission. At the time ESA's contribution still faced funding competition from other ESA projects. EJSM consisted of the NASA-led Jupiter Europa Orbiter, the ESA-led Jupiter Ganymede Orbiter and possibly a JAXA-led Jupiter Magnetospheric Orbiter.
Potential crewed exploration and habitation
In 2003 NASA conducted a conceptual study called Human Outer Planets Exploration (HOPE) regarding the future human exploration of the outer Solar System. The target chosen to consider in detail was Callisto.
The study proposed a possible surface base on Callisto that would produce rocket propellant for further exploration of the Solar System. Advantages of a base on Callisto include low radiation (due to its distance from Jupiter) and geological stability. Such a base could facilitate remote exploration of Europa, or be an ideal location for a Jovian system waystation servicing spacecraft heading farther into the outer Solar System, using a gravity assist from a close flyby of Jupiter after departing Callisto.
In December 2003, NASA reported that a crewed mission to Callisto might be possible in the 2040s.
See also
List of former planets
Jupiter's moons in fiction
List of craters on Callisto
List of geological features on Callisto
List of natural satellites
Notes
References
External links
Callisto Profile at NASA's Solar System Exploration site
Callisto page at The Nine Planets
Callisto page at Views of the Solar System
Callisto Crater Database from the Lunar and Planetary Institute
Images of Callisto at JPL's Planetary Photojournal
Movie of Callisto's rotation from the National Oceanic and Atmospheric Administration
Callisto map with feature names from Planetary Photojournal
Callisto nomenclature and Callisto map with feature names from the USGS planetary nomenclature page
Paul Schenk's 3D images and flyover videos of Callisto and other outer solar system satellites
Google Callisto 3D, interactive map of the moon
16100107
Discoveries by Galileo Galilei
Moons of Jupiter
Moons with a prograde orbit
Solar System | Callisto (moon) | [
"Astronomy"
] | 5,886 | [
"Outer space",
"Solar System"
] |
43,127 | https://en.wikipedia.org/wiki/Europa%20%28moon%29 | Europa , or Jupiter II, is the smallest of the four Galilean moons orbiting Jupiter, and the sixth-closest to the planet of all the 95 known moons of Jupiter. It is also the sixth-largest moon in the Solar System. Europa was discovered independently by Simon Marius and Galileo Galilei and was named (by Marius) after Europa, the Phoenician mother of King Minos of Crete and lover of Zeus (the Greek equivalent of the Roman god Jupiter).
Slightly smaller than Earth's Moon, Europa is made of silicate rock and has a water-ice crust and probably an iron–nickel core. It has a very thin atmosphere, composed primarily of oxygen. Its geologically young white-beige surface is striated by light tan cracks and streaks, with very few impact craters. In addition to Earth-bound telescope observations, Europa has been examined by a succession of space-probe flybys, the first occurring in the early 1970s. In September 2022, the Juno spacecraft flew within about 320 km (200 miles) of Europa for a more recent close-up view.
Europa has the smoothest surface of any known solid object in the Solar System. The apparent youth and smoothness of the surface are due to a water ocean beneath the surface, which could conceivably harbor extraterrestrial life, although such life would most likely consist of single-celled, bacteria-like organisms. The predominant model suggests that heat from tidal flexing causes the ocean to remain liquid and drives ice movement similar to plate tectonics, absorbing chemicals from the surface into the ocean below. Sea salt from a subsurface ocean may be coating some geological features on Europa, suggesting that the ocean is interacting with the sea floor. This may be important in determining whether Europa could be habitable. In addition, the Hubble Space Telescope detected water vapor plumes similar to those observed on Saturn's moon Enceladus, which are thought to be caused by erupting cryogeysers. In May 2018, astronomers provided supporting evidence of water plume activity on Europa, based on an updated analysis of data obtained from the Galileo space probe, which orbited Jupiter from 1995 to 2003. Such plume activity could help researchers in a search for life from the subsurface Europan ocean without having to land on the moon. In March 2024, astronomers reported that the surface of Europa may have much less oxygen than previously inferred.
The Galileo mission, launched in 1989, provides the bulk of current data on Europa. No spacecraft has yet landed on Europa, although there have been several proposed exploration missions. The European Space Agency's Jupiter Icy Moons Explorer (JUICE), launched on 14 April 2023, is a mission to Ganymede that will include two flybys of Europa. NASA's Europa Clipper was launched on 14 October 2024.
Discovery and naming
Europa, along with Jupiter's three other large moons, Io, Ganymede, and Callisto, was discovered by Galileo Galilei on 8 January 1610, and possibly independently by Simon Marius. On 7 January, Galileo had observed Io and Europa together using a 20×-magnification refracting telescope at the University of Padua, but the low resolution could not separate the two objects. The following night, he saw Io and Europa for the first time as separate bodies.
The moon is the namesake of Europa, in Greek mythology the daughter of the Phoenician king of Tyre. Like all the Galilean satellites, Europa is named after a lover of Zeus, the Greek counterpart of Jupiter. Europa was courted by Zeus and became the queen of Crete. The naming scheme was suggested by Simon Marius, who attributed the proposal to Johannes Kepler:
The names fell out of favor for a considerable time and were not revived in general use until the mid-20th century. In much of the earlier astronomical literature, Europa is simply referred to by its Roman numeral designation as Jupiter II (a system also introduced by Galileo) or as the "second satellite of Jupiter". In 1892, the discovery of Amalthea, whose orbit lay closer to Jupiter than those of the Galilean moons, pushed Europa to the third position. The Voyager probes discovered three more inner satellites in 1979, so Europa is now counted as Jupiter's sixth satellite, though it is still referred to as Jupiter II.
The adjectival form has stabilized as Europan.
Orbit and rotation
Europa orbits Jupiter in just over three and a half days, with an orbital radius of about 670,900 km. With an orbital eccentricity of only 0.009, the orbit itself is nearly circular, and the orbital inclination relative to Jupiter's equatorial plane is small, at 0.470°. Like its fellow Galilean satellites, Europa is tidally locked to Jupiter, with one hemisphere of Europa constantly facing Jupiter. Because of this, there is a sub-Jovian point on Europa's surface, from which Jupiter would appear to hang directly overhead. Europa's prime meridian is a line passing through this point. Research suggests that tidal locking may not be full, as a non-synchronous rotation has been proposed: Europa spins faster than it orbits, or at least did so in the past. This suggests an asymmetry in internal mass distribution and that a layer of subsurface liquid separates the icy crust from the rocky interior.
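As an illustrative consistency check (Jupiter's mass and the gravitational constant are assumed standard values, not figures given in this article), the quoted orbital radius reproduces Europa's period through Kepler's third law:

\[
T = 2\pi\sqrt{\frac{a^{3}}{G M_{J}}}
  = 2\pi\sqrt{\frac{(6.709\times10^{8}\ \mathrm{m})^{3}}{(6.674\times10^{-11}\ \mathrm{m^{3}\,kg^{-1}\,s^{-2}})(1.90\times10^{27}\ \mathrm{kg})}}
  \approx 3.07\times10^{5}\ \mathrm{s} \approx 3.55\ \mathrm{days},
\]

in agreement with the period of just over three and a half days stated above.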
The slight eccentricity of Europa's orbit, maintained by gravitational disturbances from the other Galileans, causes Europa's sub-Jovian point to oscillate around a mean position. As Europa comes slightly nearer to Jupiter, Jupiter's gravitational attraction increases, causing Europa to elongate towards and away from it. As Europa moves slightly away from Jupiter, Jupiter's gravitational force decreases, causing Europa to relax back into a more spherical shape, and creating tides in its ocean. The orbital eccentricity of Europa is continuously pumped by its mean-motion resonance with Io. Thus, the tidal flexing kneads Europa's interior and gives it a source of heat, possibly allowing its ocean to stay liquid while driving subsurface geological processes. The ultimate source of this energy is Jupiter's rotation, which is tapped by Io through the tides it raises on Jupiter and is transferred to Europa and Ganymede by the orbital resonance.
Analysis of the unique cracks lining Europa yielded evidence that it likely spun around a tilted axis at some point in time. If correct, this would explain many of Europa's features. Europa's immense network of crisscrossing cracks serves as a record of the stresses caused by massive tides in its global ocean. Europa's tilt could influence calculations of how much of its history is recorded in its frozen shell, how much heat is generated by tides in its ocean, and even how long the ocean has been liquid. Its ice layer must stretch to accommodate these changes. When there is too much stress, it cracks. A tilt in Europa's axis could suggest that its cracks may be much more recent than previously thought. The reason for this is that the direction of the spin pole may change by as much as a few degrees per day, completing one precession period over several months. A tilt could also affect estimates of the age of Europa's ocean. Tidal forces are thought to generate the heat that keeps Europa's ocean liquid, and a tilt in the spin axis would cause more heat to be generated by tidal forces. Such additional heat would have allowed the ocean to remain liquid for a longer time. However, it has not yet been determined when this hypothesized shift in the spin axis might have occurred.
Physical characteristics
Europa is slightly smaller than Earth's Moon. At just over in diameter, it is the sixth-largest moon and fifteenth-largest object in the Solar System. Though by a wide margin the least massive of the Galilean satellites, it is nonetheless more massive than all known moons in the Solar System smaller than itself combined. Its bulk density suggests that it is similar in composition to the terrestrial planets, being primarily composed of silicate rock.
Internal structure
It is estimated that Europa has an outer layer of water around thick – a part frozen as its crust and a part as a liquid ocean underneath the ice. Recent magnetic-field data from the Galileo orbiter showed that Europa has an induced magnetic field through interaction with Jupiter's, which suggests the presence of a subsurface conductive layer. This layer is likely to be a salty liquid-water ocean. Portions of the crust are estimated to have undergone a rotation of nearly 80°, nearly flipping over (see true polar wander), which would be unlikely if the ice were solidly attached to the mantle. Europa probably contains a metallic iron core.
Surface features
Europa is the smoothest known object in the Solar System, lacking large-scale features such as mountains and craters. The prominent markings crisscrossing Europa appear to be mainly albedo features that emphasize low topography. There are few craters on Europa, because its surface is tectonically too active and therefore young. Its icy crust has an albedo (light reflectivity) of 0.64, one of the highest of any moon. This indicates a young and active surface: based on estimates of the frequency of cometary bombardment that Europa experiences, the surface is about 20 to 180 million years old. There is no scientific consensus about the explanation for Europa's surface features.
It has been postulated that Europa's equator may be covered in icy spikes called penitentes, which may be up to 15 meters high. Their formation would be due to direct overhead sunlight near the equator causing the ice to sublime, forming vertical cracks. Although the imaging available from the Galileo orbiter does not have the resolution needed for confirmation, radar and thermal data are consistent with this speculation.
The ionizing radiation level at Europa's surface is equivalent to a daily dose of about 5.4 Sv (540 rem), an amount that would cause severe illness or death in human beings exposed for a single Earth day (24 hours). A Europan day is about 3.5 times as long as an Earth day.
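Scaling the figures above gives a sense of the cumulative exposure (a simple illustration, not a sourced value): over one Europan day of roughly 3.5 Earth days, the accumulated surface dose would be about

\[
5.4\ \mathrm{Sv/Earth\ day} \times 3.5 \approx 19\ \mathrm{Sv},
\]

several times the few-sievert acute whole-body doses that are generally fatal to humans without medical treatment.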
Lineae
Europa's most striking surface features are a series of dark streaks crisscrossing the entire globe, called lineae. Close examination shows that the edges of Europa's crust on either side of the cracks have moved relative to each other. The larger bands are more than across, often with dark, diffuse outer edges, regular striations, and a central band of lighter material.
The most likely hypothesis is that the lineae on Europa were produced by a series of eruptions of warm ice as Europa's crust slowly spreads open to expose warmer layers beneath. The effect would have been similar to that seen on Earth's oceanic ridges. These various fractures are thought to have been caused in large part by the tidal flexing exerted by Jupiter. Because Europa is tidally locked to Jupiter, and therefore always maintains approximately the same orientation towards Jupiter, the stress patterns should form a distinctive and predictable pattern. However, only the youngest of Europa's fractures conform to the predicted pattern; other fractures appear to occur at increasingly different orientations the older they are. This could be explained if Europa's surface rotates slightly faster than its interior, an effect that is possible due to the subsurface ocean mechanically decoupling Europa's surface from its rocky mantle and the effects of Jupiter's gravity tugging on Europa's outer ice crust. Comparisons of Voyager and Galileo spacecraft photos serve to put an upper limit on this hypothetical slippage. A full revolution of the outer rigid shell relative to the interior of Europa takes at least 12,000 years. Studies of Voyager and Galileo images have revealed evidence of subduction on Europa's surface, suggesting that, just as the cracks are analogous to ocean ridges, so plates of icy crust analogous to tectonic plates on Earth are recycled into the molten interior. This evidence of both crustal spreading at bands and convergence at other sites suggests that Europa may have active plate tectonics, similar to Earth. However, the physics driving these plate tectonics are not likely to resemble those driving terrestrial plate tectonics, as the forces resisting potential Earth-like plate motions in Europa's crust are significantly stronger than the forces that could drive them.
Chaos and lenticulae
Other features present on Europa are circular and elliptical lenticulae (Latin for "freckles"). Many are domes, some are pits and some are smooth, dark spots. Others have a jumbled or rough texture. The dome tops look like pieces of the older plains around them, suggesting that the domes formed when the plains were pushed up from below.
One hypothesis states that these lenticulae were formed by diapirs of warm ice rising up through the colder ice of the outer crust, much like magma chambers in Earth's crust. The smooth, dark spots could be formed by meltwater released when the warm ice breaks through the surface. The rough, jumbled lenticulae (called regions of "chaos"; for example, Conamara Chaos) would then be formed from many small fragments of crust, embedded in hummocky, dark material, appearing like icebergs in a frozen sea.
An alternative hypothesis suggests that lenticulae are actually small areas of chaos and that the claimed pits, spots and domes are artefacts resulting from the over-interpretation of early, low-resolution Galileo images. The implication is that the ice is too thin to support the convective diapir model of feature formation.
In November 2011, a team of researchers, including researchers at University of Texas at Austin, presented evidence suggesting that many "chaos terrain" features on Europa sit atop vast lakes of liquid water. These lakes would be entirely encased in Europa's icy outer shell and distinct from a liquid ocean thought to exist farther down beneath the ice shell. Full confirmation of the lakes' existence will require a space mission designed to probe the ice shell either physically or indirectly, e.g. using radar. Chaos features may also be a result of increased melting of the ice shell and deposition of marine ice at low latitudes as a result of heterogeneous heating.
Work published by researchers from Williams College suggests that chaos terrain may represent sites where impacting comets penetrated through the ice crust and into an underlying ocean.
Subsurface ocean
The scientific consensus is that a layer of liquid water exists beneath Europa's surface, and that heat from tidal flexing allows the subsurface ocean to remain liquid. Europa's surface temperature averages about at the equator and only at the poles, keeping Europa's icy crust as hard as granite. The first hints of a subsurface ocean came from theoretical considerations of tidal heating (a consequence of Europa's slightly eccentric orbit and orbital resonance with the other Galilean moons). Galileo imaging team members argue for the existence of a subsurface ocean from analysis of Voyager and Galileo images. The most dramatic example is "chaos terrain", a common feature on Europa's surface that some interpret as a region where the subsurface ocean has melted through the icy crust. This interpretation is controversial. Most geologists who have studied Europa favor what is commonly called the "thick ice" model, in which the ocean has rarely, if ever, directly interacted with the present surface. The best evidence for the thick-ice model is a study of Europa's large craters. The largest impact structures are surrounded by concentric rings and appear to be filled with relatively flat, fresh ice; based on this and on the calculated amount of heat generated by Europan tides, it is estimated that the outer crust of solid ice is approximately thick, including a ductile "warm ice" layer, which could mean that the liquid ocean underneath may be about deep. This leads to an estimated volume for Europa's ocean of 3×10¹⁸ m³, roughly two to three times the volume of Earth's oceans.
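A quick sanity check on that comparison (the Earth figure is an assumed standard value of roughly 1.3×10¹⁸ m³ for the total volume of Earth's oceans, not a number from the sources cited here):

\[
\frac{3\times10^{18}\ \mathrm{m^{3}}}{1.3\times10^{18}\ \mathrm{m^{3}}} \approx 2.3,
\]

which is consistent with the statement that Europa's ocean holds roughly two to three times as much water as all of Earth's oceans combined.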
The thin-ice model suggests that Europa's ice shell may be only a few kilometers thick. However, most planetary scientists conclude that this model considers only those topmost layers of Europa's crust that behave elastically when affected by Jupiter's tides. One example is flexure analysis, in which Europa's crust is modeled as a plane or sphere weighted and flexed by a heavy load. Models such as this suggest the outer elastic portion of the ice crust could be as thin as . If the ice shell of Europa is really only a few kilometers thick, this "thin ice" model would mean that regular contact of the liquid interior with the surface could occur through open ridges, causing the formation of areas of chaotic terrain. Large impacts going fully through the ice crust would also be a way that the subsurface ocean could be exposed.
Composition
The Galileo orbiter found that Europa has a weak magnetic moment, which is induced by the varying part of the Jovian magnetic field. The field strength at the magnetic equator (about 120 nT) created by this magnetic moment is about one-sixth the strength of Ganymede's field and six times the value of Callisto's. The existence of the induced moment requires a layer of a highly electrically conductive material in Europa's interior. The most plausible candidate for this role is a large subsurface ocean of liquid saltwater.
Since the Voyager spacecraft flew past Europa in 1979, scientists have worked to understand the composition of the reddish-brown material that coats fractures and other geologically youthful features on Europa's surface. Spectrographic evidence suggests that the darker, reddish streaks and features on Europa's surface may be rich in salts such as magnesium sulfate, deposited by evaporating water that emerged from within. Sulfuric acid hydrate is another possible explanation for the contaminant observed spectroscopically. In either case, because these materials are colorless or white when pure, some other material must also be present to account for the reddish color, and sulfur compounds are suspected.
Another hypothesis for the colored regions is that they are composed of abiotic organic compounds collectively called tholins. The morphology of Europa's impact craters and ridges is suggestive of fluidized material welling up from the fractures where pyrolysis and radiolysis take place. In order to generate colored tholins on Europa, there must be a source of materials (carbon, nitrogen, and water) and a source of energy to make the reactions occur. Impurities in the water ice crust of Europa are presumed both to emerge from the interior as cryovolcanic events that resurface the body, and to accumulate from space as interplanetary dust. Tholins bring important astrobiological implications, as they may play a role in prebiotic chemistry and abiogenesis.
The presence of sodium chloride in the internal ocean has been suggested by a 450 nm absorption feature, characteristic of irradiated NaCl crystals, that has been spotted in HST observations of the chaos regions, presumed to be areas of recent subsurface upwelling. Carbon from the subsurface ocean has also been detected on the surface ice as a concentration of carbon dioxide within Tara Regio, a geologically recently resurfaced terrain.
Sources of heat
Europa receives thermal energy from tidal heating, which occurs through the tidal friction and tidal flexing processes caused by tidal acceleration: orbital and rotational energy are dissipated as heat in the core of the moon, the internal ocean, and the ice crust.
Tidal friction
Ocean tides are converted to heat by frictional losses in the oceans and their interaction with the solid bottom and with the top ice crust. In late 2008, it was suggested that Jupiter may keep Europa's oceans warm by generating large planetary tidal waves on Europa because of its small but non-zero obliquity. This generates so-called Rossby waves that travel quite slowly, at just a few kilometers per day, but can generate significant kinetic energy. For the current axial tilt estimate of 0.1 degree, the resonance from Rossby waves would contain 7.3×10¹⁸ J of kinetic energy, which is two thousand times larger than that of the flow excited by the dominant tidal forces. Dissipation of this energy could be the principal heat source of Europa's ocean.
Tidal flexing
Tidal flexing kneads Europa's interior and ice shell, which becomes a source of heat. Depending on the amount of tilt, the heat generated by the ocean flow could be 100 to thousands of times greater than the heat generated by the flexing of Europa's rocky core in response to the gravitational pull from Jupiter and the other moons circling that planet. Europa's seafloor could be heated by the moon's constant flexing, driving hydrothermal activity similar to undersea volcanoes in Earth's oceans.
Experiments and ice modeling published in 2016 indicate that tidal flexing dissipation can generate one order of magnitude more heat in Europa's ice than scientists had previously assumed. Their results indicate that most of the heat generated by the ice actually comes from the deformation of its crystalline structure (lattice), and not from friction between the ice grains. The greater the deformation of the ice sheet, the more heat is generated.
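For readers who want the quantitative form, tidal dissipation in a synchronously rotating satellite on a slightly eccentric orbit is commonly estimated with the standard textbook expression (this formula is not taken from the 2016 study, and the values of k₂ and Q for Europa are poorly constrained):

\[
\dot{E}_{\mathrm{tidal}} \approx \frac{21}{2}\,\frac{k_{2}}{Q}\,\frac{G\,M_{J}^{2}\,n\,R^{5}\,e^{2}}{a^{6}},
\]

where k₂ is the tidal Love number, Q the dissipation factor, n the mean motion, R the satellite's radius, e the orbital eccentricity, and a the semi-major axis. The strong dependence on e and a shows why the Laplace resonance, which maintains Europa's small but non-zero eccentricity, is essential to sustaining the heating.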
Radioactive decay
In addition to tidal heating, the interior of Europa could also be heated by the decay of radioactive material (radiogenic heating) within the rocky mantle. However, the heat fluxes implied by models and observations are about one hundred times higher than those that could be produced by radiogenic heating alone, implying that tidal heating plays the leading role in Europa.
Plumes
The Hubble Space Telescope acquired an image of Europa in 2012 that was interpreted to be a plume of water vapour erupting from near its south pole. The image suggests the plume may be high, or more than 20 times the height of Mt. Everest, though recent observations and modeling suggest that typical Europan plumes may be much smaller. It has been suggested that if plumes exist, they are episodic and likely to appear when Europa is at its farthest point from Jupiter, in agreement with tidal force modeling predictions. Additional imaging evidence from the Hubble Space Telescope was presented in September 2016.
In May 2018, astronomers provided supporting evidence of water plume activity on Europa, based on an updated critical analysis of data obtained from the Galileo space probe, which orbited Jupiter between 1995 and 2003. Galileo flew by Europa in 1997 within of the moon's surface and the researchers suggest it may have flown through a water plume. Such plume activity could help researchers in a search for life from the subsurface Europan ocean without having to land on the moon.
The tidal forces are about 1,000 times stronger than the Moon's effect on Earth. The only other moon in the Solar System exhibiting water vapor plumes is Enceladus. The estimated eruption rate at Europa is about 7000 kg/s, compared to about 200 kg/s for the plumes of Enceladus. If confirmed, it would open the possibility of flying through the plume and obtaining a sample to analyze in situ, without having to use a lander and drill through kilometres of ice.
In November 2020, a study was published in the peer-reviewed scientific journal Geophysical Research Letters suggesting that the plumes may originate from water within the crust of Europa as opposed to its subsurface ocean. The study's model, using images from the Galileo space probe, proposed that a combination of freezing and pressurization may result in at least some of the cryovolcanic activity. The pressure generated by migrating briny water pockets would thus, eventually, burst through the crust, thereby creating these plumes. The hypothesis that cryovolcanism on Europa could be triggered by freezing and pressurization of liquid pockets in the icy crust was first proposed by Sarah Fagents at the University of Hawai'i at Mānoa, who in 2003, was the first to model and publish work on this process. A press release from NASA's Jet Propulsion Laboratory referencing the November 2020 study suggested that plumes sourced from migrating liquid pockets could potentially be less hospitable to life. This is due to a lack of substantial energy for organisms to thrive off, unlike proposed hydrothermal vents on the subsurface ocean floor.
Atmosphere
The atmosphere of Europa can be categorized as thin and tenuous (often called an exosphere), primarily composed of oxygen and trace amounts of water vapor. However, this quantity of oxygen is produced in a non-biological manner. Europa's surface is icy and consequently very cold; as solar ultraviolet radiation and charged particles (ions and electrons) from the Jovian magnetospheric environment collide with it, water vapor is created and immediately split into its oxygen and hydrogen constituents. The hydrogen is light enough to escape Europa's gravity, leaving behind only the oxygen. The surface-bounded atmosphere forms through radiolysis, the dissociation of molecules through radiation. This accumulated oxygen atmosphere can get to a height of above the surface of Europa. Molecular oxygen is the densest component of the atmosphere because it has a long lifetime; after returning to the surface, it does not stick (freeze) like a water or hydrogen peroxide molecule but rather desorbs from the surface and starts another ballistic arc. Molecular hydrogen never reaches the surface, as it is light enough to escape Europa's surface gravity. Europa is one of the few moons in the Solar System with a quantifiable atmosphere, along with Titan, Io, Triton, Ganymede and Callisto. Europa is also one of several moons in the Solar System with very large quantities of ice (volatiles), otherwise known as "icy moons". Europa is also considered to be geologically active because of the constant release of hydrogen–oxygen mixtures into space. As a result of this particle venting, the atmosphere requires continuous replenishment. Europa also has a small magnetosphere (approximately 25% of Ganymede's), which varies in size as Europa orbits through Jupiter's magnetic field. This confirms that a conductive element, such as a large ocean, likely lies below its icy surface. Several studies of Europa's atmosphere have concluded that not all of the oxygen molecules are released into the atmosphere; an unknown fraction may be absorbed into the surface and sink into the subsurface. Because the surface may interact with the subsurface ocean (considering the geological discussion above), this molecular oxygen may make its way to the ocean, where it could aid in biological processes. One estimate suggests that, given the turnover rate inferred from the apparent ~0.5 Gyr maximum age of Europa's surface ice, subduction of radiolytically generated oxidizing species might well lead to oceanic free oxygen concentrations that are comparable to those in terrestrial deep oceans.
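A back-of-the-envelope comparison illustrates why the hydrogen escapes while the oxygen is retained (Europa's mass of about 4.8×10²² kg, radius of about 1,560 km, and a surface temperature near 100 K are assumed standard values, not figures from this paragraph):

\[
v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}} \approx 2.0\ \mathrm{km/s}, \qquad
v_{\mathrm{rms}}(\mathrm{H_2}) = \sqrt{\frac{3k_{B}T}{m_{\mathrm{H_2}}}} \approx 1.1\ \mathrm{km/s}, \qquad
v_{\mathrm{rms}}(\mathrm{O_2}) \approx 0.28\ \mathrm{km/s}.
\]

Because the thermal speed of molecular hydrogen is a sizable fraction of the escape velocity, the fast tail of its velocity distribution is steadily lost to space (Jeans escape), whereas the far slower oxygen molecules remain gravitationally bound.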
Through the slow release of oxygen and hydrogen, a neutral torus is formed around Europa's orbital plane. This "neutral cloud" has been detected by both the Cassini and Galileo spacecraft, and has a greater content (number of atoms and molecules) than the neutral cloud surrounding Jupiter's inner moon Io. This torus was officially confirmed using Energetic Neutral Atom (ENA) imaging. Europa's torus ionizes through the process of neutral particles exchanging electrons with surrounding charged particles. Because Jupiter's magnetic field rotates faster than Europa orbits, these ions are swept along the magnetic field's trajectory, forming a plasma. It has been hypothesized that these ions are responsible for the plasma within Jupiter's magnetosphere.
On 4 March 2024, astronomers reported that the surface of Europa may have much less oxygen than previously inferred.
Discovery of atmosphere
The atmosphere of Europa was first discovered in 1995 by astronomers D. T. Hall and collaborators using the Goddard High Resolution Spectrograph instrument of the Hubble Space Telescope. This observation was further supported in 1997 by the Galileo orbiter during its mission within the Jovian system. The Galileo orbiter performed three radio occultation events of Europa, where the probe's radio contact with Earth was temporarily blocked by passing behind Europa. By analyzing the effects Europa's sparse atmosphere had on the radio signal just before and after the occultation, for a total of six events, a team of astronomers led by A. J. Kliore established the presence of an ionized layer in Europa's atmosphere.
Climate and weather
Despite the presence of a gas torus, Europa has no weather-producing clouds. As a whole, Europa has no wind, precipitation, or sky color, as its gravity is too low to hold an atmosphere substantial enough for such features. Europa's gravity is approximately 13% of Earth's. The temperature on Europa varies from −160 °C at the equator to −220 °C at either of its poles. Europa's subsurface ocean, however, is thought to be significantly warmer. It is hypothesized that, because of radioactive and tidal heating (as mentioned in the sections above), there are points in the depths of Europa's ocean that may be only slightly cooler than Earth's oceans. Studies have also concluded that Europa's ocean would have been rather acidic at first, with large concentrations of sulfate, calcium, and carbon dioxide. Over the course of 4.5 billion years, however, it became rich in chloride, thus resembling the roughly 1.94% chloride oceans on Earth.
Exploration
Exploration of Europa began with the Jupiter flybys of Pioneer 10 and 11 in 1973 and 1974, respectively. The first closeup photos were of low resolution compared to later missions. The two Voyager probes traveled through the Jovian system in 1979, providing more-detailed images of Europa's icy surface. The images caused many scientists to speculate about the possibility of a liquid ocean underneath.
Starting in 1995, the Galileo space probe orbited Jupiter for eight years, until 2003, and provided the most detailed examination of the Galilean moons to date. It included the "Galileo Europa Mission" and "Galileo Millennium Mission", with numerous close flybys of Europa. In 2007, New Horizons imaged Europa, as it flew by the Jovian system while on its way to Pluto. In 2022, the Juno orbiter flew by Europa at a distance of 352 km (219 mi).
In 2012, Jupiter Icy Moons Explorer (JUICE) was selected by the European Space Agency (ESA) as a planned mission. That mission includes two flybys of Europa, but is more focused on Ganymede. It was launched in 2023, and is expected to reach Jupiter in July 2031 after four gravity assists and eight years of travel.
In 2011, a Europa mission was recommended by the U.S. Planetary Science Decadal Survey. In response, NASA commissioned concept studies of a Europa lander in 2011, along with concepts for a Europa flyby (Europa Clipper), and a Europa orbiter. The orbiter element option concentrates on the "ocean" science, while the multiple-flyby element (Clipper) concentrates on the chemistry and energy science. On 13 January 2014, the House Appropriations Committee announced a new bipartisan bill that includes $80 million in funding to continue the Europa mission concept studies.
In July 2013 an updated concept for a flyby Europa mission called Europa Clipper was presented by the Jet Propulsion Laboratory (JPL) and the Applied Physics Laboratory (APL). In May 2015, NASA announced that it had accepted development of the Europa Clipper mission, and revealed the instruments it would use. The aim of Europa Clipper is to explore Europa in order to investigate its habitability, and to aid in selecting sites for a future lander. The Europa Clipper would not orbit Europa, but instead orbit Jupiter and conduct 45 low-altitude flybys of Europa during its envisioned mission. The probe would carry an ice-penetrating radar, short-wave infrared spectrometer, topographical imager, and an ion- and neutral-mass spectrometer. The mission was launched on 14 October 2024 aboard a Falcon Heavy.
Future missions
Conjectures regarding extraterrestrial life have ensured a high profile for Europa and have led to steady lobbying for future missions. The aims of these missions have ranged from examining Europa's chemical composition to searching for extraterrestrial life in its hypothesized subsurface oceans. Robotic missions to Europa need to endure the high-radiation environment around Jupiter. Because it is deeply embedded within Jupiter's magnetosphere, Europa receives about 5.40 Sv of radiation per day.
Europa Lander is a recent NASA concept mission under study. Research published in 2018 suggests Europa may be covered in tall, jagged ice spikes, presenting a problem for any potential landing on its surface.
Old proposals
In the early 2000s, Jupiter Europa Orbiter led by NASA and the Jupiter Ganymede Orbiter led by the ESA were proposed together as an Outer Planet Flagship Mission to Jupiter's icy moons called Europa Jupiter System Mission, with a planned launch in 2020. In 2009 it was given priority over Titan Saturn System Mission. At that time, there was competition from other proposals. Japan proposed Jupiter Magnetospheric Orbiter.
Jovian Europa Orbiter was an ESA Cosmic Vision concept study from 2007. Another concept was Ice Clipper, which would have used an impactor similar to the Deep Impact mission—it would make a controlled crash into the surface of Europa, generating a plume of debris that would then be collected by a small spacecraft flying through the plume.
Jupiter Icy Moons Orbiter (JIMO) was a partially developed fission-powered spacecraft with ion thrusters that was cancelled in 2006. It was part of Project Prometheus. The Europa Lander Mission proposed a small nuclear-powered Europa lander for JIMO. It would travel with the orbiter, which would also function as a communication relay to Earth.
Europa Orbiter – Its objective would be to characterize the extent of the ocean and its relation to the deeper interior. Instrument payload could include a radio subsystem, laser altimeter, magnetometer, Langmuir probe, and a mapping camera. The Europa Orbiter received the go-ahead in 1999 but was canceled in 2002. This orbiter featured a special ice-penetrating radar that would allow it to scan below the surface.
More ambitious ideas have been put forward including an impactor in combination with a thermal drill to search for biosignatures that might be frozen in the shallow subsurface.
Another proposal put forward in 2001 calls for a large nuclear-powered "melt probe" (cryobot) that would melt through the ice until it reached an ocean below. Once it reached the water, it would deploy an autonomous underwater vehicle (hydrobot) that would gather information and send it back to Earth. Both the cryobot and the hydrobot would have to undergo some form of extreme sterilization to prevent detection of Earth organisms instead of native life and to prevent contamination of the subsurface ocean. This suggested approach has not yet reached a formal conceptual planning stage.
Habitability
So far, there is no evidence that life exists on Europa, but the moon has emerged as one of the most likely locations in the Solar System for potential habitability. Life could exist in its under-ice ocean, perhaps in an environment similar to Earth's deep-ocean hydrothermal vents. Even if Europa lacks volcanic hydrothermal activity, a 2016 NASA study found that Earth-like levels of hydrogen and oxygen could be produced through processes related to serpentinization and ice-derived oxidants, which do not directly involve volcanism. In 2015, scientists announced that salt from a subsurface ocean may likely be coating some geological features on Europa, suggesting that the ocean is interacting with the seafloor. This may be important in determining if Europa could be habitable. The likely presence of liquid water in contact with Europa's rocky mantle has spurred calls to send a probe there.
The energy provided by tidal forces drives active geological processes within Europa's interior, just as it does to a far more obvious degree on its sister moon Io. Although Europa, like the Earth, may possess an internal energy source from radioactive decay, the energy generated by tidal flexing would be several orders of magnitude greater than any radiological source. Life on Europa could exist clustered around hydrothermal vents on the ocean floor, or below the ocean floor, where endoliths are known to live on Earth. Alternatively, it could exist clinging to the lower surface of Europa's ice layer, much like algae and bacteria in Earth's polar regions, or float freely in Europa's ocean. Should Europa's oceans be too cold, biological processes similar to those known on Earth could not occur; if they are too salty, only extreme halophiles could survive in that environment. In 2010, a model by Richard Greenberg of the University of Arizona proposed that irradiation of ice on Europa's surface could saturate its crust with oxygen and peroxide, which could then be transported by tectonic processes into the interior ocean. Such a process could render Europa's ocean as oxygenated as our own within just 12 million years, allowing the existence of complex, multicellular lifeforms.
Evidence suggests the existence of lakes of liquid water entirely encased in Europa's icy outer shell and distinct from a liquid ocean thought to exist farther down beneath the ice shell, as well as pockets of water that form M-shaped ice ridges when the water freezes on the surface – as in Greenland. If confirmed, the lakes and pockets of water could be yet another potential habitat for life. Evidence suggests that hydrogen peroxide is abundant across much of the surface of Europa. Because hydrogen peroxide decays into oxygen and water when combined with liquid water, the authors argue that it could be an important energy supply for simple life forms. Nonetheless, on 4 March 2024, astronomers reported that the surface of Europa may have much less oxygen than previously inferred.
Clay-like minerals (specifically, phyllosilicates), often associated with organic matter on Earth, have been detected on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet. Some scientists have speculated that life on Earth could have been blasted into space by asteroid collisions and arrived on the moons of Jupiter in a process called lithopanspermia.
See also
Moons of Jupiter
Galilean moons (the four biggest moons of Jupiter)
Jupiter's moons in fiction
List of craters on Europa
List of geological features on Europa
List of lineae on Europa
Snowball Earth hypothesis
Ocean world
Extraterrestrial water
Notes
References
Further reading
External links
Europa Profile at NASA
Europa Facts at The Nine Planets
Europa Facts at Views of the Solar System
Preventing Forward Contamination of Europa – USA Space Studies Board (2000)
Images of Europa at JPL's Planetary Photojournal
Movie of Europa's rotation from the National Oceanic and Atmospheric Administration
Europa map with feature names from Planetary Photojournal
Europa nomenclature and Europa map with feature names from the USGS planetary nomenclature page
Paul Schenk's 3D images and flyover videos of Europa and other outer Solar System satellites; see also
Large, high-resolution Galileo image mosaics of Europan terrain from Jason Perry at JPL: 1, 2, 3, 4, 5, 6, 7
Europa image montage from Galileo spacecraft NASA
View of Europa from Galileo flybys
Google Europa 3D, interactive map of the moon
High-resolution animation by Kevin M. Gill of a flyover of Europa; see album for more
16100108
Discoveries by Galileo Galilei
Discoveries by Simon Marius
Moons of Jupiter
Moons with a prograde orbit
Solar System | Europa (moon) | [
"Astronomy"
] | 8,116 | [
"Outer space",
"Solar System"
] |
43,138 | https://en.wikipedia.org/wiki/Siboglinidae | Siboglinidae is a family of polychaete annelid worms whose members made up the former phyla Pogonophora and Vestimentifera (the giant tube worms). The family is composed of around 100 species of vermiform creatures which live in thin tubes buried in sediment (Pogonophora) or in tubes attached to hard substratum (Vestimentifera) at ocean depths ranging from . They can also be found in association with hydrothermal vents, methane seeps, sunken plant material, and whale carcasses.
The first specimen was dredged from the waters of Indonesia in 1900. These specimens were given to French zoologist Maurice Caullery, who studied them for nearly 50 years.
Anatomy
Most siboglinids are less than in diameter, but in length. They inhabit tubular structures composed of chitin which are fixed to rocks or substrates. The tubes are often clustered together in large colonies.
Their bodies are divided into four regions. The anterior end is called the cephalic lobe, which bears from one to more than 200 thin branchial ciliated tentacles, each with tiny side branches known as pinnules. Behind this is a glandular forepart, which helps to secrete the tube. The main part of the body is the trunk, which is greatly elongated and bears various annuli, papillae, and ciliary tracts. Posterior to the trunk is the short, metamerically segmented opisthosoma, bearing external paired chaetae, which help to anchor the animal to the base of its tube.
The body cavity has a separate compartment in each of the first three regions of the body and extends into the tentacles. The opisthosoma has a coelomic chamber in each of its 5 to 23 segments, separated by septa. The worms have a complex closed circulatory system and a well-developed nervous system, but as adults, siboglinids completely lack a mouth, gut, and anus.
Evolution
The family Siboglinidae has been difficult to place in an evolutionary context. After examination of genetic differences between annelids, Siboglinidae were placed within the order Polychaeta by scientific consensus. The fossil record, along with molecular clocks, suggests the family has Mesozoic (250 – 66 Mya) or Cenozoic (66 Mya – recent) origins. However, some fossils of crystallized tubes attributed to early Siboglinidae date back to 500 Mya. The oldest definitive specimens referred to the family come from the Early Jurassic (Pliensbachian–Toarcian) Figueroa Sulfide deposits of the San Rafael Mountains and were found to be similar to modern Ridgeia. These tubes, known as ‘Figueroa tubes’, along with the ‘Troodos collared tubes’ (Cyprus, Turonian), were resolved among modern vestimentiferans. Molecular work aligning five genes has identified four distinct clades within Siboglinidae. The clades are Vestimentifera, Sclerolinum, Frenulata, and Osedax. Vestimentiferans live in vent and seep habitats. Separation of vestimentiferans into seep- and deep-sea-dwelling clades is still debated, because some phylogenies based on sequencing data place the genera along a continuum. Sclerolinum is a monogeneric clade (which may be called Monilifera) living on organic-rich remains. Frenulates live in organic-rich sediment habitats. Osedax is a monogeneric clade specialized in living on whale bones, although recent evidence shows them living on fish bones as well.
One probable relationship between the four clades is shown in the cladogram below. The position of Osedax is weakly supported.
Vestimentiferans
Like other tube worms, vestimentiferans are benthic marine creatures. Riftia pachyptila, a vestimentiferan, is known only from the hydrothermal vent systems.
Anatomy of vestimentiferans
Vestimentiferan bodies are divided into four regions: the obturaculum, vestimentum, trunk, and opisthosome. The main trunk of the body bears wing-like extensions. Unlike other siboglinids that never have a digestive tract, they have one that they completely lose during metamorphosis.
The obturaculum is the first anterior body part. It is possible that the obturaculum is actually an outgrowth of the vestimentum rather than a separate body segment which would distinguish it from other siboglinids.
The vestimentum, from which the group's name is derived, is a wing-like body part with glands that secrete the tube. The brain lies in a ventroanterior position in the vestimentum and is postulated to be simpler than that of relatives that retain a gut in the adult form. The opisthosome is the anchoring rear body part.
Vestimentiferan ecology
Their primary nutrition is derived from the sulfide-rich fluids emanating from the hydrothermal vents where they live. The sulfides are metabolized by symbiotic hydrogen sulfide- or methane-oxidizing bacteria living in an internal organ, the trophosome. One gram of trophosome tissue can contain one billion bacteria. The origin of this symbiotic relationship is not currently known. The bacteria appear to colonize the host animal larvae after they have settled on a surface, entering them through their skin. This method of entry, known as horizontal transmission, means that each organism may have different species of bacteria assisting in this symbiosis. However, these bacteria all play similar roles in sustaining the vestimentiferans. Endosymbionts have a wide variety of metabolic genes, which may allow them to switch between autotrophic and heterotrophic methods of nutrient acquisition. When the host dies, the bacteria are released and return to the free-living population in the seawater.
Discovery of the hydrothermal vents in the eastern Pacific Ocean was quickly followed by the discovery and description of new vestimentiferan tubeworm species. These tubeworms are among the most dominant organisms associated with the hydrothermal vents in the Pacific Ocean. Tubeworms anchor themselves to the substratum of the hydrocarbon seep by roots located at the basal portion of their bodies. Intact tubeworm roots have proven very difficult to obtain for study because they are extremely delicate, and often break off when a tubeworm is removed from hydrothermal vent regions. How long the roots of the tube worms can grow is unknown, but roots have been recovered longer than 30 m.
A single aggregation of tubeworms can contain thousands of individuals, and the roots produced by each tubeworm can become tangled with the roots of neighbouring tubeworms. These mats of roots are known as "ropes", and travel down the tubes of dead tubeworms, and run through holes in rocks. The diameter and wall thickness of the tubeworm roots do not appear to change with distance from the trunk portion of the tubeworm's body.
Like the trunk portion of the body, the roots of the vestimentiferan tubeworms are composed of chitin crystallites, which support and protect the tubeworm from predation and environmental stresses. Tubeworms build the external chitin structure themselves by secreting chitin from specialized glands located in their body walls.
Genera
Osedax
Clade Frenulata
Birsteinia
Bobmarleya
Choanophorus
Crassibrachia
Cyclobrachia
Diplobrachia
Galathealinum
Heptabrachia
Lamellisabella
Nereilinum
Oligobrachia
Paraescarpia
Polybrachia
Siboglinoides
Siboglinum
Siphonobrachia
Spirobrachia
Unibrachium
Volvobrachia
Zenkevitchiana
Clade Monilifera
Sclerolinum
Clade Vestimentifera
Alaysia
Arcovesia
Escarpia
Lamellibrachia
Oasisia
Ridgeia
Riftia
Tevnia
References
External links
Polychaetes
Chemosynthetic symbiosis
Annelid families | Siboglinidae | [
"Biology"
] | 1,692 | [
"Biological interactions",
"Chemosynthetic symbiosis",
"Behavior",
"Symbiosis"
] |
43,139 | https://en.wikipedia.org/wiki/Placozoa | Placozoa ( ; ) is a phylum of free-living (non-parasitic) marine invertebrates. They are blob-like animals composed of aggregations of cells. Moving in water by ciliary motion, eating food by engulfment, reproducing by fission or budding, placozoans are described as "the simplest animals on Earth." Structural and molecular analyses have supported them as among the most basal animals, thus, constituting a primitive metazoan phylum.
The first known placozoan, Trichoplax adhaerens, was discovered in 1883 by the German zoologist Franz Eilhard Schulze (1840–1921). Recognizing its uniqueness, another German, Karl Gottlieb Grell (1912–1994), erected a new phylum, Placozoa, for it in 1971. The phylum remained monotypic for over a century, until new species began to be added from 2018 onward. So far, three other extant species have been described, in two distinct classes: Uniplacotomia (Hoilungia hongkongensis in 2018 and Cladtertia collaboinventa in 2022) and Polyplacotomia (Polyplacotoma mediterranea, the most basal, in 2019). A single putative fossil species is known, the Middle Triassic Maculicorpus microbialis.
History
Trichoplax was discovered in 1883 by the German zoologist Franz Eilhard Schulze, in a seawater aquarium at the Zoological Institute in Graz, Austria. The generic name is derived from the classical Greek thrix (θρίξ), meaning "hair", and plax (πλάξ), "plate". The specific epithet adhaerens is Latin meaning "adherent", reflecting its propensity to stick to the glass slides and pipettes used in its examination. Schulze realized that the animal could not be a member of any existing phyla, and based on its simple structure and behaviour, concluded in 1891 that it must be an early metazoan. He also observed its reproduction by fission, its cell layers and its locomotion.
In 1893, Italian zoologist Francesco Saverio Monticelli described another animal which he named Treptoplax, the specimens of which he collected from Naples. He gave the species name T. reptans in 1896. Monticelli did not preserve them and no other specimens were found again, as a result of which the identification is ruled as doubtful, and the species rejected.
Schulze's description was opposed by other zoologists. For instance, in 1890, F.C. Noll argued that the animal was a flat worm (Turbellaria). In 1907, Thilo Krumbach published a hypothesis that Trichoplax is not a distinct animal but that it is a form of the planula larva of the anemone-like hydrozoan Eleutheria krohni. Although this was refuted in print by Schulze and others, Krumbach's analysis became the standard textbook explanation, and nothing was printed in zoological journals about Trichoplax until the 1960s.
The development of electron microscopy in the mid-20th century allowed in-depth observation of the cellular components of organisms, following which there was renewed interest in Trichoplax starting in 1966. The most important descriptions were made by Karl Gottlieb Grell at the University of Tübingen since 1971. That year, Grell revived Schulze's interpretation that the animals are unique and created a new phylum Placozoa. Grell derived the name from the placula hypothesis, Otto Bütschli's notion on the origin of metazoans.
Biology
Placozoans do not have well-defined body plans, much like amoebas, unicellular eukaryotes. As Andrew Masterson reported: "they are as close as it is possible to get to being simply a little living blob." An individual body measures about 0.55 mm in diameter. There are no body parts; as one of the researchers Michael Eitel described: "There's no mouth, there's no back, no nerve cells, nothing." Animals studied in laboratories have bodies consisting of everything from hundreds to millions of cells.
Placozoans have only three anatomical parts, arranged as tissue layers inside their bodies: the upper, intermediate (middle) and lower epithelia. There are at least six different cell types. The upper epithelium is the thinnest portion and essentially comprises flat cells with their cell bodies hanging underneath the surface, each cell bearing a cilium. Crystal cells are sparsely distributed near the marginal edge. A few cells have an unusually large number of mitochondria. The middle layer is the thickest and is made up of numerous fiber cells, which contain mitochondrial complexes, vacuoles and endosymbiotic bacteria in the endoplasmic reticulum. The lower epithelium consists of numerous monociliated cylinder cells along with a few endocrine-like gland cells and lipophil cells. Each lipophil cell contains numerous middle-sized granules, one of which is a secretory granule.
The body axes of Hoilungia and Trichoplax are overtly similar to the oral–aboral axis of cnidarians, animals from another phylum with which they are most closely related. Structurally, they can not be distinguished from other placozoans, so that identification is purely on genetic (mitochondrial DNA) differences. Genome sequencing has shown that each species has a set of unique genes and several uniquely missing genes.
Trichoplax is a small, flattened animal around across. An amorphous multi-celled body, analogous to a single-celled amoeba, it has no regular outline, although the lower surface is somewhat concave, and the upper surface is always flattened. The body consists of an outer layer of simple epithelium enclosing a loose sheet of stellate cells resembling the mesenchyme of some more complex animals. The epithelial cells bear cilia, which the animal uses to help it creep along the seafloor.
The lower surface engulfs small particles of organic detritus, on which the animal feeds. All placozoans can reproduce asexually, budding off smaller individuals, and the lower surface may also bud off eggs into the mesenchyme.
Sexual reproduction has been reported in one clade of placozoans, whose strain H8 was later found to belong to the genus Cladtertia; intergenic recombination was observed in this strain, along with other hallmarks of sexual reproduction.
Some Trichoplax species contain Rickettsiales bacteria as endosymbionts.
One of the at least 20 described species turned out to have two bacterial endosymbionts: Grellia, which lives in the animal's endoplasmic reticulum and is assumed to play a role in protein and membrane production, and the first described member of the Margulisbacteria, which lives inside the cells used for algal digestion. The latter appears to consume the fats and other lipids of the algae and provides its host with vitamins and amino acids in return.
Studies suggest that aragonite crystals in the crystal cells function like statoliths, allowing the animal to use gravity for spatial orientation.
The dorsal epithelium contains lipid granules called shiny spheres, which release a cocktail of venoms and toxins as an anti-predator defense and can induce paralysis or death in some predators. Genes have been found in Trichoplax that strongly resemble the venom genes of some venomous snakes, such as the American copperhead and the West African carpet viper.
The Placozoa show substantial evolutionary radiation in regard to sodium channels, of which they have 5–7 different types, more than any other invertebrate species studied to date.
Three modes of population dynamics have been observed, depending on the food source; these involve the induction of social behaviors, morphogenesis, and different reproductive strategies.
In addition to fission, representatives of all species produce "swarmers" (a distinct vegetative reproductive stage), which can also form from the lower epithelium, where cell-type diversity is greater.
Evolutionary relationships
There is no convincing fossil record of the Placozoa, although the Precambrian Ediacaran organism Dickinsonia appears somewhat similar to placozoans. Knaust (2021) reported the preservation of placozoan fossils in a microbialite bed from the Middle Triassic Muschelkalk of Germany.
Traditionally, classification was based on their level of organization: they possess no tissues or organs. However, this may be the result of secondary loss and is thus inadequate grounds for excluding them from relationships with more complex animals. More recent work has attempted to classify them based on the DNA sequences in their genome; this has placed the phylum between the sponges and the Eumetazoa. In such a feature-poor phylum, molecular data are considered to provide the most reliable approximation of the placozoans' phylogeny.
Their exact position on the phylogenetic tree would give important information about the origin of neurons and muscles. If the absence of these features is an original trait of the Placozoa, it would mean that a nervous system and muscles evolved three times, should placozoans and cnidarians be sister groups: once in the Ctenophora, once in the Cnidaria and once in the Bilateria. If placozoans branched off before the Cnidaria and Bilateria split, the neurons and muscles of those two groups would have a common origin.
Functional-morphology hypothesis
On the basis of their simple structure, the Placozoa were frequently viewed as a model organism for the transition from unicellular organisms to the multicellular animals (Metazoa), and were thus considered a sister taxon to all other metazoans.
According to a functional-morphology model, all or most animals are descended from a gallertoid, a free-living (pelagic) sphere in seawater, consisting of a single ciliated layer of cells supported by a thin, noncellular separating layer, the basal lamina. The interior of the sphere is filled with contractile fibrous cells and a gelatinous extracellular matrix. Both the modern Placozoa and all other animals then descended from this multicellular beginning stage via two different processes:
Infolding of the epithelium led to the formation of an internal system of ducts and thus to the development of a modified gallertoid from which the sponges (Porifera), Cnidaria and Ctenophora subsequently developed.
Other gallertoids, according to this model, made the transition over time to a benthic mode of life; that is, their habitat shifted from the open ocean to the ocean floor (benthic zone). This naturally results in a selective advantage for a flattened body, as can be seen in many benthic species.
While the probability of encountering food, potential sexual partners, or predators is the same in all directions for animals floating freely in the water, on the seafloor there is a clear difference between the functions useful on the body side facing the substrate and those useful on the side facing away from it. This leads sensory, defensive, and food-gathering cells to differentiate and orient along the vertical, the direction perpendicular to the substrate. In the proposed functional-morphology model, the Placozoa, and possibly several similar organisms known only from fossils, are descended from such a life form, which is now termed a placuloid.
Three different life strategies have accordingly led to three different possible lines of development:
Animals that live interstitially in the sand of the ocean floor were responsible for the fossil crawling traces that are considered the earliest evidence of animals, which are detectable even before the dawn of the Ediacaran Period. These traces are usually attributed to bilaterally symmetrical worms, but the hypothesis presented here views animals derived from placuloids, and thus close relatives of Trichoplax adhaerens, as the producers of the traces.
Animals that incorporated algae as photosynthetically active endosymbionts, i.e. primarily obtaining their nutrients from their partners in symbiosis, were accordingly responsible for the mysterious creatures of the Ediacara fauna, which are not assigned to any modern animal taxon and lived during the Ediacaran Period, before the start of the Paleozoic. However, recent work has shown that some Ediacaran assemblages (e.g. Mistaken Point) were in deep water, below the photic zone, and hence those individuals could not have depended on endosymbiotic photosynthesisers.
Animals that grazed on algal mats would ultimately have been the direct ancestors of the Placozoa. The advantages of an amoeboid multiplicity of shapes meant that a previously present basal lamina and gelatinous extracellular matrix could be lost secondarily. Pronounced differentiation between the surface facing the substrate (ventral) and the surface facing away from it (dorsal) accordingly led to the physiologically distinct cell layers of Trichoplax adhaerens that can still be seen today. Consequently, these are analogous, but not homologous, to the ectoderm and endoderm – the "external" and "internal" cell layers in eumetazoans; that is, according to the proposed hypothesis, the structures corresponding functionally to one another have no common evolutionary origin.
Should any of the analyses presented above turn out to be correct, Trichoplax adhaerens would be the oldest branch of the multicellular animals and a relic of the Ediacaran fauna, or even of the pre-Ediacaran fauna. Although very successful in their ecological niche, the developmental potential of these animals was limited by the absence of an extracellular matrix and basal lamina, which would explain the low rate of evolution of their phenotype (their outward form as adults), referred to as bradytely.
This hypothesis was supported by a recent analysis of the Trichoplax adhaerens mitochondrial genome in comparison with those of other animals. The hypothesis was, however, rejected in a statistical analysis of the Trichoplax adhaerens whole-genome sequence in comparison with the whole-genome sequences of six other animals and two related non-animal species, though only at a marginal level of statistical significance.
Epitheliozoa hypothesis
A concept based on purely morphological characteristics pictures the Placozoa as the nearest relative of the animals with true tissues (Eumetazoa). The taxon they share, called the Epitheliozoa, is itself construed to be a sister group to the sponges (Porifera).
The above view could be correct, although there is some evidence that the ctenophores, traditionally seen as Eumetazoa, may instead be the sister group to all other animals.
This is now a disputed classification. Placozoans are estimated to have emerged 750–800 million years ago, and the first modern neuron to have originated in the common ancestor of cnidarians and bilaterians about 650 million years ago (many of the genes expressed in modern neurons are absent in ctenophores, although some of these missing genes are present in placozoans).
The principal support for such a relationship comes from special cell-to-cell junctions, the belt desmosomes, that occur not just in the Placozoa but in all animals except the sponges: they enable the cells to join together in an unbroken layer like the epitheloid of the Placozoa. Trichoplax adhaerens also shares its ventral gland cells with most eumetazoans. Both characteristics can be considered evolutionarily derived features (apomorphies) and thus form the basis of a common taxon for all animals that possess them.
One possible scenario inspired by the proposed hypothesis starts with the idea that the monociliated cells of the epitheloid in Trichoplax adhaerens evolved by reduction of the collars in the collar cells (choanocytes) of sponges as the hypothesized ancestors of the Placozoa abandoned a filtering mode of life. The epitheloid would then have served as the precursor to the true epithelial tissue of the eumetazoans.
In contrast to the model based on functional morphology described earlier, in the Epitheliozoa hypothesis, the ventral and dorsal cell layers of the Placozoa are homologs of endoderm and ectoderm — the two basic embryonic cell layers of the eumetazoans. The digestive gastrodermis in the Cnidaria or the gut epithelium in the bilaterally symmetrical animals (Bilateria) may have developed from endoderm, whereas ectoderm is the precursor to the external skin layer (epidermis), among other things. The interior space pervaded by a fiber syncytium in the Placozoa would then correspond to connective tissue in the other animals. It is unclear whether the calcium ions stored in the syncytium would be related to the lime skeletons of many cnidarians.
As noted above, this hypothesis was supported in a statistical analysis of the Trichoplax adhaerens whole genome sequence, as compared to the whole-genome sequences of six other animals and two related non-animal species.
Eumetazoa hypothesis
A third hypothesis, based primarily on molecular genetics, views the Placozoa as highly simplified eumetazoans. According to this, Trichoplax adhaerens is descended from considerably more complex animals that already had muscles and nerve tissues. Both tissue types, as well as the basal lamina of the epithelium, were accordingly lost more recently by radical secondary simplification.
Various studies in this regard have so far yielded differing results in identifying the exact sister group: in one case the Placozoa would qualify as the nearest relatives of the Cnidaria, while in another they would be a sister group to the Ctenophora, and occasionally they are placed directly next to the Bilateria. Currently, they are typically placed within the Diploblasts, a clade for which the names Epitheliozoa and Eumetazoa are treated as synonyms, with the Ctenophora basal to it.
An argument raised against the proposed scenario is that it leaves morphological features of the animals completely out of consideration. The extreme degree of simplification that would have to be postulated for the Placozoa in this model, moreover, is only known for parasitic organisms, but would be difficult to explain functionally in a free-living species like Trichoplax adhaerens.
This version is supported by statistical analysis of the Trichoplax adhaerens whole-genome sequence in comparison with the whole-genome sequences of six other animals and two related non-animal species. However, the Ctenophora were not included in these analyses, which placed the placozoans outside of the sampled eumetazoans.
Cnidaria-sister hypothesis
DNA comparisons suggest that placozoans are related to the Cnidaria and derived from a planula larva (as seen in some Cnidaria). The Bilateria are also thought to be derived from planuloids. The body axes of Cnidaria and Placozoa are overtly similar, and placozoan and cnidarian cells respond to the same neuropeptide antibodies, despite extant placozoans not developing any neurons.
References
External links
The Trichoplax adhaerens Grell-BS-1999 v1.0 Genome Portal at the DOE Joint Genome Institute
The Trichoplax Genome Project at the Yale Peabody Museum
A Weird Wee Beastie: Trichoplax adhaerens
Research articles from the ITZ, TiHo Hannover
Information page from the University of California at Berkeley
Mitochondrial DNA and 16S rRNA analysis and phylogeny of Trichoplax adhaerens
Historical overview of Trichoplax research
Science Daily:Genome Of Simplest Animal Reveals Ancient Lineage, Confounding Array Of Complex Capabilities
Vicki Buchsbaum Pearse and Oliver Voigt (2007), "Field biology of placozoans (Trichoplax): distribution, diversity, biotic interactions", Integrative and Comparative Biology.
ParaHoxozoa
Animal phyla
Parazoa
Ediacaran first appearances | Placozoa | [
"Biology"
] | 4,311 | [
"Parazoa",
"ParaHoxozoa",
"Animals"
] |
43,165 | https://en.wikipedia.org/wiki/Toni%20Morrison | Chloe Anthony Wofford Morrison (born Chloe Ardelia Wofford; February 18, 1931 – August 5, 2019), known as Toni Morrison, was an American novelist and editor. Her first novel, The Bluest Eye, was published in 1970. The critically acclaimed Song of Solomon (1977) brought her national attention and won the National Book Critics Circle Award. In 1988, Morrison won the Pulitzer Prize for Beloved (1987); she was awarded the Nobel Prize in Literature in 1993.
Born and raised in Lorain, Ohio, Morrison graduated from Howard University in 1953 with a B.A. in English. Morrison earned a master's degree in American Literature from Cornell University in 1955. In 1957 she returned to Howard University, was married, and had two children before divorcing in 1964. Morrison became the first black female editor for fiction at Random House in New York City in the late 1960s. She developed her own reputation as an author in the 1970s and '80s. Her novel Beloved was made into a film in 1998. Morrison's works are praised for addressing the harsh consequences of racism in the United States and the Black American experience.
The National Endowment for the Humanities selected Morrison for the Jefferson Lecture, the U.S. federal government's highest honor for achievement in the humanities, in 1996. She was honored with the National Book Foundation's Medal of Distinguished Contribution to American Letters the same year. President Barack Obama presented her with the Presidential Medal of Freedom on May 29, 2012. She received the PEN/Saul Bellow Award for Achievement in American Fiction in 2016. Morrison was inducted into the National Women's Hall of Fame in 2020.
Early years
Toni Morrison was born Chloe Ardelia Wofford, the second of four children from a working-class, Black family, in Lorain, Ohio, to Ramah (née Willis) and George Wofford. Her mother was born in Greenville, Alabama, and moved north with her family as a child. She was a homemaker and a devout member of the African Methodist Episcopal Church. George Wofford grew up in Cartersville, Georgia. When Wofford was about 15 years old, a group of white people lynched two African-American businessmen who lived on his street. Morrison later said: "He never told us that he'd seen bodies. But he had seen them. And that was too traumatic, I think, for him." Soon after the lynching, George Wofford moved to the racially integrated town of Lorain, Ohio, in the hope of escaping racism and securing gainful employment in Ohio's burgeoning industrial economy. He worked odd jobs and as a welder for U.S. Steel. In a 2015 interview Morrison said that her father, traumatized by his experiences of racism, hated whites so much he would not let them in the house.
When Morrison was about two years old, her family's landlord set fire to the house in which they lived, while they were home, because her parents could not afford to pay rent. Her family responded to what she called this "bizarre form of evil" by laughing at the landlord rather than falling into despair. Morrison later said her family's response demonstrated how to keep your integrity and claim your own life in the face of acts of such "monumental crudeness".
Morrison's parents instilled in her a sense of heritage and language through telling traditional African-American folktales, ghost stories, and singing songs. She read frequently as a child; among her favorite authors were Jane Austen and Leo Tolstoy.
Morrison became a Catholic at the age of 12 and took the baptismal name Anthony (after Anthony of Padua), which led to her nickname, Toni. Attending Lorain High School, she was on the debate team, the yearbook staff, and in the drama club.
Career
Adulthood, Howard and Cornell years, and editing career: 1949–1975
In 1949, she enrolled at Howard University in Washington, D.C., seeking the company of fellow black intellectuals. She was the first person in her family to attend college. Initially a student in the drama program at Howard, she studied theatre with the celebrated drama teachers Anne Cooke Reid and Owen Dodson. It was while at Howard that she encountered racially segregated restaurants and buses for the first time. She graduated in 1953 with a B.A. in English and a minor in Classics, and was able to work with key figures of the Harlem Renaissance era such as Alain Locke and Sterling Brown. She also participated in the university's theater group, the Howard Players, with whom she traveled through the Deep South, a defining experience of her life.
Morrison went on to earn a Master of Arts degree in 1955 from Cornell University in Ithaca, New York. Her master's thesis was titled "Virginia Woolf's and William Faulkner's treatment of the alienated". She taught English, first at Texas Southern University in Houston from 1955 to 1957, and then at Howard University for the next seven years. While teaching at Howard, she met Harold Morrison, a Jamaican architect, whom she married in 1958. Their first son was born in 1961 and she was pregnant with their second son when she and Harold divorced in 1964.
After her divorce and the birth of her son Slade in 1965, Morrison began working as an editor for L. W. Singer, a textbook division of publisher Random House, in Syracuse, New York. Two years later, she transferred to Random House in New York City, where she became their first black woman senior editor in the fiction department.
In that capacity, Morrison played a vital role in bringing Black literature into the mainstream. One of the first books she worked on was the groundbreaking Contemporary African Literature (1972), a collection that included work by Nigerian writers Wole Soyinka, Chinua Achebe, and South African playwright Athol Fugard. She fostered a new generation of Afro-American writers, including poet and novelist Toni Cade Bambara, radical activist Angela Davis, Black Panther Huey Newton and novelist Gayl Jones, whose writing Morrison discovered. She also brought to publication the 1975 autobiography of the outspoken boxing champion Muhammad Ali, The Greatest: My Own Story. In addition, she published and promoted the work of Henry Dumas, a little-known novelist and poet who in 1968 had been shot to death by a transit officer in the New York City Subway.
Among other books that Morrison developed and edited is The Black Book (1974), an anthology of photographs, illustrations, essays, and documents of Black life in the United States from the time of slavery to the 1920s. Random House had been uncertain about the project but its publication met with a good reception. Alvin Beam reviewed the anthology for the Cleveland Plain Dealer, writing: "Editors, like novelists, have brain children – books they think up and bring to life without putting their own names on the title page. Mrs. Morrison has one of these in the stores now, and magazines and newsletters in the publishing trade are ecstatic, saying it will go like hotcakes."
First writings and teaching, 1970–1986
Morrison had begun writing fiction as part of an informal group of poets and writers at Howard University who met to discuss their work. She attended one meeting with a short story about a Black girl who longed to have blue eyes. Morrison later developed the story as her first novel, The Bluest Eye, getting up every morning at 4 am to write, while raising two children on her own.
The Bluest Eye was published by Holt, Rinehart, and Winston in 1970, when Morrison was aged 39. It was favorably reviewed in The New York Times by John Leonard, who praised Morrison's writing style as being "a prose so precise, so faithful to speech and so charged with pain and wonder that the novel becomes poetry ... But The Bluest Eye is also history, sociology, folklore, nightmare and music." The novel did not sell well at first, but the City University of New York put The Bluest Eye on its reading list for its new Black studies department, as did other colleges, which boosted sales. The book also brought Morrison to the attention of the acclaimed editor Robert Gottlieb at Knopf, an imprint of the publisher Random House. Gottlieb later edited all but one of Morrison's novels.
In 1975, Morrison's second novel Sula (1973), about a friendship between two Black women, was nominated for the National Book Award. Her third novel, Song of Solomon (1977), follows the life of Macon "Milkman" Dead III, from birth to adulthood, as he discovers his heritage. This novel brought her national acclaim, being a main selection of the Book of the Month Club, the first novel by a Black writer to be so chosen since Richard Wright's Native Son in 1940. Song of Solomon also won the National Book Critics Circle Award.
At its 1979 commencement ceremonies, Barnard College awarded Morrison its highest honor, the Barnard Medal of Distinction.
Morrison gave her next novel, Tar Baby (1981), a contemporary setting. In it, a looks-obsessed fashion model, Jadine, falls in love with Son, a penniless drifter who feels at ease with being Black.
Resigning from Random House in 1983, Morrison left publishing to devote more time to writing, while living in a converted boathouse on the Hudson River in Nyack, New York. She taught English at two branches of the State University of New York (SUNY) and at Rutgers University's New Brunswick campus. In 1984, she was appointed to an Albert Schweitzer chair at the University at Albany, SUNY.
Morrison's first play, Dreaming Emmett, is about the 1955 murder by white men of Black teenager Emmett Till. The play was commissioned by the New York State Writers Institute at the State University of New York at Albany, where she was teaching at the time. It was produced in 1986 by Capital Repertory Theatre and directed by Gilbert Moses. Morrison was also a visiting professor at Bard College from 1986 to 1988.
Beloved trilogy and the Nobel Prize: 1987–1998
In 1987, Morrison published her most celebrated novel, Beloved. It was inspired by the true story of an enslaved African-American woman, Margaret Garner, whose story Morrison had discovered when compiling The Black Book. Garner had escaped slavery but was pursued by slave hunters. Facing a return to slavery, Garner killed her two-year-old daughter but was captured before she could kill herself. Morrison's novel imagines the dead baby returning as a ghost, Beloved, to haunt her mother and family.
Beloved was a critical success and a bestseller for 25 weeks. The New York Times book reviewer Michiko Kakutani wrote that the scene of the mother killing her baby is "so brutal and disturbing that it appears to warp time before and after into a single unwavering line of fate". Canadian writer Margaret Atwood wrote in a review for The New York Times, "Ms. Morrison's versatility and technical and emotional range appear to know no bounds. If there were any doubts about her stature as a pre-eminent American novelist, of her own or any other generation, Beloved will put them to rest."
Some critics panned Beloved. African-American conservative social critic Stanley Crouch, for instance, complained in his review in The New Republic that the novel "reads largely like a melodrama lashed to the structural conceits of the miniseries", and that Morrison "perpetually interrupts her narrative with maudlin ideological commercials".
Despite overall high acclaim, Beloved failed to win the prestigious National Book Award or the National Book Critics Circle Award. Forty-eight Black critics and writers, among them Maya Angelou, protested the omission in a statement that The New York Times published on January 24, 1988. "Despite the international stature of Toni Morrison, she has yet to receive the national recognition that her five major works of fiction entirely deserve", they wrote. Two months later, Beloved won the Pulitzer Prize for Fiction. It also won an Anisfield-Wolf Book Award.
Beloved is the first of three novels about love and African-American history, sometimes called the Beloved Trilogy. Morrison said they are intended to be read together, explaining: "The conceptual connection is the search for the beloved – the part of the self that is you, and loves you, and is always there for you." The second novel in the trilogy, Jazz, came out in 1992. Told in language that imitates the rhythms of jazz music, the novel is about a love triangle during the Harlem Renaissance in New York City. According to Lyn Innes, "Morrison sought to change not just the content and audience for her fiction; her desire was to create stories which could be lingered over and relished, not 'consumed and gobbled as fast food', and at the same time to ensure that these stories and their characters had a strong historical and cultural base."
In 1992, Morrison also published her first book of literary criticism, Playing in the Dark: Whiteness and the Literary Imagination (1992), an examination of the African-American presence in White American literature. (In 2016, Time magazine noted that Playing in the Dark was among Morrison's most-assigned texts on U.S. college campuses, together with several of her novels and her 1993 Nobel Prize lecture.) Lyn Innes wrote in the Guardian obituary of Morrison, "Her 1990 series of Massey lectures at Harvard were published as Playing in the Dark: Whiteness and the Literary Imagination (1992), and explore the construction of a 'non-white Africanist presence and personae' in the works of Poe, Hawthorne, Melville, Cather and Hemingway, arguing that 'all of us are bereft when criticism remains too polite or too fearful to notice a disrupting darkness before its eyes'."
Before the third novel of the Beloved Trilogy was published, Morrison was awarded the Nobel Prize in Literature in 1993. The citation praised her as an author "who in novels characterized by visionary force and poetic import, gives life to an essential aspect of American reality". She was the first Black woman of any nationality to win the prize. In her acceptance speech, Morrison said: "We die. That may be the meaning of life. But we do language. That may be the measure of our lives."
In her Nobel lecture, Morrison talked about the power of storytelling. To make her point, she told a story. She spoke about a blind, old, Black woman who is approached by a group of young people. They demand of her, "Is there no context for our lives? No song, no literature, no poem full of vitamins, no history connected to experience that you can pass along to help us start strong? ... Think of our lives and tell us your particularized world. Make up a story."
In 1996, the National Endowment for the Humanities selected Morrison for the Jefferson Lecture, the U.S. federal government's highest honor for "distinguished intellectual achievement in the humanities". Morrison's lecture, entitled "The Future of Time: Literature and Diminished Expectations", began with the aphorism: "Time, it seems, has no future." She cautioned against the misuse of history to diminish expectations of the future. Morrison was also honored with the 1996 National Book Foundation's Medal of Distinguished Contribution to American Letters, which is awarded to a writer "who has enriched our literary heritage over a life of service, or a corpus of work".
The third novel of her Beloved Trilogy, Paradise, about citizens of an all-Black town, came out in 1997. The following year, Morrison was on the cover of Time magazine, making her only the second female writer of fiction and second Black writer of fiction to appear on what was perhaps the most significant U.S. magazine cover of the era.
Beloved onscreen and "the Oprah effect"
Also in 1998, the movie adaptation of Beloved was released, directed by Jonathan Demme and co-produced by Oprah Winfrey, who had spent ten years bringing it to the screen. Winfrey also stars as the main character, Sethe, alongside Danny Glover as Sethe's lover, Paul D, and Thandiwe Newton as Beloved.
The movie flopped at the box office. A review in The Economist opined that "most audiences are not eager to endure nearly three hours of a cerebral film with an original storyline featuring supernatural themes, murder, rape, and slavery". Film critic Janet Maslin, in her New York Times review "No Peace from a Brutal Legacy", called it a "transfixing, deeply felt adaptation of Toni Morrison's novel. ... Its linchpin is of course Oprah Winfrey, who had the clout and foresight to bring 'Beloved' to the screen and has the dramatic presence to hold it together." Film critic Roger Ebert suggested that Beloved was not a genre ghost story but the supernatural was used to explore deeper issues and the non-linear structure of Morrison's story had a purpose.
In 1996, television talk-show host Oprah Winfrey selected Song of Solomon for her newly launched Book Club, which became a popular feature on her Oprah Winfrey Show. An average of 13 million viewers watched the show's book club segments. As a result, when Winfrey selected Morrison's earliest novel The Bluest Eye in 2000, it sold another 800,000 paperback copies. John Young wrote in the African American Review in 2001 that Morrison's career experienced the boost of "The Oprah Effect, ... enabling Morrison to reach a broad, popular audience."
Winfrey selected a total of four of Morrison's novels over six years, giving Morrison's works a bigger sales boost than they received from her Nobel Prize win in 1993. The novelist also appeared three times on Winfrey's show. Winfrey said, "For all those who asked the question 'Toni Morrison again?'... I say with certainty there would have been no Oprah's Book Club if this woman had not chosen to share her love of words with the world." Morrison called the book club a "reading revolution".
Early 21st century
Morrison continued to explore different art forms, such as providing texts for original scores of classical music. She collaborated with André Previn on the song cycle Honey and Rue, which premiered with Kathleen Battle in January 1992, and on Four Songs, premiered at Carnegie Hall with Sylvia McNair in November 1994. Both Sweet Talk: Four Songs on Text and Spirits In the Well (1997) were written for Jessye Norman with music by Richard Danielpour, and, alongside Maya Angelou and Clarissa Pinkola Estés, Morrison provided the text for composer Judith Weir's woman.life.song commissioned by Carnegie Hall for Jessye Norman, which premiered in April 2000.
Morrison returned to Margaret Garner's life story, the basis of her novel Beloved, to write the libretto for a new opera, Margaret Garner. Completed in 2002, with music by Richard Danielpour, the opera was premièred on May 7, 2005, at the Detroit Opera House with Denyce Graves in the title role. Love, Morrison's first novel since Paradise, came out in 2003. In 2004, she put together a children's book called Remember to mark the 50th anniversary of the Brown v. Board of Education Supreme Court decision in 1954 that declared racially segregated public schools to be unconstitutional.
From 1997 to 2003, Morrison was an Andrew D. White Professor-at-Large at Cornell University.
In 2004, Morrison was invited by Wellesley College to deliver the commencement address, which has been described as "among the greatest commencement addresses of all time and a courageous counterpoint to the entire genre".
In June 2005, the University of Oxford awarded Morrison an honorary Doctor of Letters degree.
In the spring of 2006, The New York Times Book Review named Beloved the best work of American fiction published in the previous 25 years, as chosen by a selection of prominent writers, literary critics, and editors. In his essay about the choice, "In Search of the Best", critic A. O. Scott said: "Any other outcome would have been startling since Morrison's novel has inserted itself into the American canon more completely than any of its potential rivals. With remarkable speed, 'Beloved' has, less than 20 years after its publication, become a staple of the college literary curriculum, which is to say a classic. This triumph is commensurate with its ambition since it was Morrison's intention in writing it precisely to expand the range of classic American literature, to enter, as a living Black woman, the company of dead White males like Faulkner, Melville, Hawthorne and Twain."
In November 2006, Morrison visited the Louvre museum in Paris as the second in its "Grand Invité" program to guest-curate a month-long series of events across the arts on the theme of "The Foreigner's Home", about which The New York Times said: "In tapping her own African-American culture, Ms. Morrison is eager to credit 'foreigners' with enriching the countries where they settle."
Morrison's novel A Mercy, released in 2008, is set in the Virginia colonies of 1682. Diane Johnson, in her review in Vanity Fair, called A Mercy "a poetic, visionary, mesmerizing tale that captures, in the cradle of our present problems and strains, the natal curse put on us back then by the Indian tribes, Africans, Dutch, Portuguese, and English competing to get their footing in the New World against a hostile landscape and the essentially tragic nature of human experience."
Princeton years
From 1989 until her retirement in 2006, Morrison held the Robert F. Goheen Chair in the Humanities at Princeton University. She said she did not think much of modern fiction writers who reference their own lives instead of inventing new material, and she used to tell her creative writing students, "I don't want to hear about your little life, OK?" Similarly, she chose not to write about her own life in a memoir or autobiography.
Though based in the Creative Writing Program at Princeton, Morrison did not regularly offer writing workshops to students after the late 1990s, a fact that earned her some criticism. Rather, she conceived and developed the Princeton Atelier, a program that brings together students with writers and performing artists. Together the students and the artists produce works of art that are presented to the public after a semester of collaboration.
Inspired by her curatorship at the Louvre Museum, Morrison returned to Princeton in the fall of 2008 to lead a small seminar, also entitled "The Foreigner's Home".
On November 17, 2017, Princeton University dedicated Morrison Hall (a building previously called West College) in her honor.
Final years: 2010–2019
In May 2010, Morrison appeared at PEN World Voices for a conversation with Marlene van Niekerk and Kwame Anthony Appiah about South African literature and specifically van Niekerk's 2004 novel Agaat.
Morrison wrote books for children with her younger son, Slade Morrison, who was a painter and a musician. Slade died of pancreatic cancer on December 22, 2010, aged 45, when Morrison's novel Home (2012) was half-completed.
In May 2011, Morrison received an Honorary Doctor of Letters degree from Rutgers University–New Brunswick. During the commencement ceremony, she delivered a speech on the "pursuit of life, liberty, meaningfulness, integrity, and truth".
In 2011, Morrison worked with opera director Peter Sellars and Malian singer-songwriter Rokia Traoré on Desdemona, taking a fresh look at William Shakespeare's tragedy Othello. The trio focused on the relationship between Othello's wife Desdemona and her African nursemaid, Barbary, who is only briefly referenced in Shakespeare. The play, a mix of words, music and song, premiered in Vienna in 2011.
Morrison had stopped working on her latest novel when her son died in 2010, later explaining: "I stopped writing until I began to think, He would be really put out if he thought that he had caused me to stop. 'Please, Mom, I'm dead, could you keep going ...?'"
She completed Home and dedicated it to her son Slade. Published in 2012, it is the story of a Korean War veteran in the segregated United States of the 1950s who tries to save his sister from brutal medical experiments at the hands of a white doctor.
In August 2012, Oberlin College became the home base of the Toni Morrison Society, an international literary society founded in 1993, dedicated to scholarly research of Morrison's work.
Morrison's eleventh novel, God Help the Child, was published in 2015. It follows Bride, an executive in the fashion and beauty industry whose mother tormented her as a child for being dark-skinned, a trauma that has continued to dog Bride.
Morrison was a member of the editorial advisory board of The Nation, a magazine started in 1865 by Northern abolitionists.
Personal life
While teaching at Howard University from 1957 to 1964, she met Harold Morrison, a Jamaican architect, whom she married in 1958. She took his last name and became known as Toni Morrison. Their first son, Harold Ford, was born in 1961. She was pregnant when she and Harold divorced in 1964. Her second son, Slade Kevin, was born in 1965.
Her son Slade Morrison died of pancreatic cancer on December 22, 2010, when Morrison was halfway through writing her novel Home. She stopped work on the novel for a year or two before completing it; it was published in 2012.
Death
Morrison died at Montefiore Medical Center in The Bronx, New York City, on August 5, 2019, from complications of pneumonia. She was 88 years old.
A memorial tribute was held on November 21, 2019, at the Cathedral of St. John the Divine in the Morningside Heights neighborhood of Manhattan in New York City. Morrison was eulogized by, among others, Oprah Winfrey, Angela Davis, Michael Ondaatje, David Remnick, Fran Lebowitz, Ta-Nehisi Coates, and Edwidge Danticat. The jazz saxophonist David Murray performed a musical tribute.
Politics, literary reception, and legacy
Politics
Morrison spoke openly about American politics and race relations.
Writing about the 1998 impeachment of Bill Clinton, she argued that, since Whitewater, Clinton had been mistreated in the same way Black people often are, famously describing him as "our first Black president".
The phrase "our first Black president" was adopted as a positive by Bill Clinton supporters. When the Congressional Black Caucus honored the former president at its dinner in Washington, D.C., on September 29, 2001, for instance, Rep. Eddie Bernice Johnson (D-TX), the chair, told the audience that Clinton "took so many initiatives he made us think for a while we had elected the first black president".
In the context of the 2008 Democratic Primary campaign, Morrison stated to Time magazine: "People misunderstood that phrase. I was deploring the way in which President Clinton was being treated, vis-à-vis the sex scandal that was surrounding him. I said he was being treated like a black on the street, already guilty, already a perp. I have no idea what his real instincts are, in terms of race." In the Democratic primary contest for the 2008 presidential race, Morrison endorsed Senator Barack Obama over Senator Hillary Clinton, though expressing admiration and respect for the latter. When he won, Morrison said she felt like an American for the first time. She said, "I felt very powerfully patriotic when I went to the inauguration of Barack Obama. I felt like a kid."
In April 2015, speaking of the deaths of Michael Brown, Eric Garner and Walter Scott – three unarmed Black men killed by white police officers – Morrison said: "People keep saying, 'We need to have a conversation about race.' This is the conversation. I want to see a cop shoot a white unarmed teenager in the back. And I want to see a white man convicted for raping a Black woman. Then when you ask me, 'Is it over?', I will say yes."
After the 2016 election of Donald Trump as President of the United States, Morrison wrote an essay, "Mourning for Whiteness", published in the November 21, 2016 issue of The New Yorker. In it she argues that white Americans are so afraid of losing privileges afforded them by their race that white voters elected Trump, whom she described as being "endorsed by the Ku Klux Klan", in order to keep the idea of white supremacy alive.
Relationship to feminism
Although her novels typically concentrate on black women, Morrison did not identify her works as feminist. When asked in a 1998 interview, "Why distance oneself from feminism?" she replied: "In order to be as free as I possibly can, in my own imagination, I can't take positions that are closed. Everything I've ever done, in the writing world, has been to expand articulation, rather than to close it, to open doors, sometimes, not even closing the book – leaving the endings open for reinterpretation, revisitation, a little ambiguity." She went on to state that she thought it "off-putting to some readers, who may feel that I'm involved in writing some kind of feminist tract. I don't subscribe to patriarchy, and I don't think it should be substituted with matriarchy. I think it's a question of equitable access, and opening doors to all sorts of things."
In 2012, she responded to a question about the difference between black and white feminists in the 1970s. "Womanists is what black feminists used to call themselves", she explained. "They were not the same thing. And also the relationship with men. Historically, black women have always sheltered their men because they were out there, and they were the ones that were most likely to be killed."
W. S. Kottiswari writes in Postmodern Feminist Writers (2008) that Morrison exemplifies characteristics of "postmodern feminism" by "altering Euro-American dichotomies by rewriting a history written by mainstream historians" and by her usage of shifting narration in Beloved and Paradise. Kottiswari states: "Instead of western logocentric abstractions, Morrison prefers the powerful vivid language of women of color ... She is essentially postmodern since her approach to myth and folklore is re-visionist."
Contributions to Black feminism
Many of Toni Morrison's works have been cited by scholars as significant contributions to Black feminism, reflecting themes of race, gender, and sexual identity within her narratives.
Barbara Smith's 1977 essay "Toward a Black Feminist Criticism" argues that Toni Morrison's Sula is a work of Black feminism, as it presents a lesbian perspective that challenges heterosexual relationships and the conventional family unit. Smith states, “Consciously or not, Morrison's work poses both lesbian and feminist questions about Black women's autonomy and their impact upon each other's lives."
Hilton Als's 2003 profile in The New Yorker notes that “Before the late sixties, there was no real Black Studies curriculum in the academy—let alone a post-colonial-studies program or a feminist one. As an editor and author, Morrison, backed by the institutional power of Random House, provided the material for those discussions to begin.”
Toni Morrison consistently advocated for feminist ideas that challenge the dominance of the white patriarchal system, frequently rejecting the notion of writing from the perspective of the "white male gaze." Feminist political activist Angela Davis notes that “Toni Morrison's project resides precisely in the effort to discredit the notion that this white male gaze must be omnipresent.”
In a 1998 episode of Charlie Rose, Toni Morrison responded to a review of Sula, stating: "I remember a review of Sula in which the reviewer said, 'One day, she' – meaning me – 'will have to face up to the real responsibilities, and get mature, and write about the real confrontation for black people, which is white people.' As though our lives have no meaning and no depth without the white gaze, and I have spent my entire writing life trying to make sure that the white gaze was not the dominant one in any of my books."
In a 2015 interview with The New York Times Magazine, Toni Morrison reiterated her intention to write without the white gaze, stating, “What I’m interested in is writing without the gaze, without the white gaze. In so many earlier books by African-American writers, particularly the men, I felt that they were not writing to me. But what interested me was the African-American experience throughout whichever time I spoke of. It was always about African-American culture and people — good, bad, indifferent, whatever — but that was, for me, the universe.”
Regarding the racial environment in which she wrote, Toni Morrison stated, “Navigating a white male world was not threatening. It wasn’t even interesting. I was more interesting than they were. I knew more than they did. And I wasn’t afraid to show it.”
In a 1986 interview with Sandi Russell, Toni Morrison stated that she wrote primarily for Black women, explaining, “I write for black women. We are not addressing the men, as some white female writers do. We are not attacking each other, as both black and white men do. Black women writers look at things in an unforgiving/loving way. They are writing to repossess, re-name, re-own.”
In a 2003 interview, when asked about the labels "black" and "female" being attached to her work, Toni Morrison replied, "I can accept the labels because being a black woman writer is not a shallow place but a rich place to write from. It doesn’t limit my imagination; it expands it. It’s richer than being a white male writer because I know more and I’ve experienced more.”
In a 1987 article in The New York Times, Toni Morrison argued for the greatness of being a Black woman, stating: "I really think the range of emotions and perceptions I have had access to as a black person and as a female person are greater than those of people who are neither. I really do. So it seems to me that my world did not shrink because I was a black female writer. It just got bigger."
National Memorial for Peace and Justice
The National Memorial for Peace and Justice in Montgomery, Alabama, includes writing by Morrison. Visitors can see her quote after they have walked through the section commemorating individual victims of lynching.
Papers
The Toni Morrison Papers are part of the permanent library collections of Princeton University, where they are held in the Manuscripts Division, Department of Rare Books and Special Collections. Morrison's decision to offer her papers to Princeton instead of to her alma mater Howard University was criticized by some within the historically black colleges and universities community.
Opening in February 2023, an exhibition titled Toni Morrison: Sites of Memory, which was curated from her archives at Princeton University, commemorated the 30th anniversary of her winning the Nobel Prize. Running from the week after her birthday until June 4, the exhibition featured rare manuscripts, correspondence between Morrison and others, and unfinished projects, taking its name from a 1995 essay by Morrison in which she spoke of a "journey to a site to see what remains were left behind and to reconstruct the world that these remains imply."
Day and halls
In 2019, a resolution was passed in her hometown of Lorain, Ohio, to designate February 18, her birthday, as Toni Morrison Day. Additional legislation was introduced to also proclaim that date as "Toni Morrison Day" throughout the State of Ohio. The legislation, HB 325, was passed by the Ohio House of Representatives on December 2, 2020, and signed into law by Governor Mike DeWine on December 21.
In 2021, Cornell University opened Toni Morrison Hall, a 178,869-square-foot residence hall, followed in 2022 by Morrison Dining, an adjacent dining hall designed by ikon.5 Architects.
In December 2023, to celebrate the 30th anniversary of Morrison's Nobel win, the Toni Morrison Collective at Cornell University partnered with Calvary Baptist Church to give away free copies of two of Morrison's books and hold book talks in various locations. As explained by Anne V. Adams, professor emerita of Africana studies and comparative literature and chair of the Toni Morrison Collective: "The fact that Toni Morrison, during her first year as a master's student, lodged at a house just a couple of doors up the street from historic Calvary Baptist Church created a perfect context for a collaboration."
Documentary films
Morrison was interviewed by Margaret Busby in London for a 1988 documentary film by Sindamani Bridglal, entitled Identifiable Qualities, shown on Channel 4.
Morrison was the subject of a film titled Imagine – Toni Morrison Remembers, directed by Jill Nicholls and shown on BBC One television on July 15, 2015, in which Morrison talked to Alan Yentob about her life and work.
In 2016, Oberlin College received a grant to complete a documentary film begun in 2014, The Foreigner's Home, about Morrison's intellectual and artistic vision, explored in the context of the 2006 exhibition she guest-curated at the Louvre. The film's executive producer was Jonathan Demme. It was directed by Oberlin College Cinema Studies faculty Geoff Pingree and Rian Brown, and incorporates footage shot by Morrison's first-born son Harold Ford Morrison, who also consulted on the film.
In 2019, Timothy Greenfield-Sanders' documentary Toni Morrison: The Pieces I Am premiered at the Sundance Film Festival. Those featured in the film include Morrison, Angela Davis, Oprah Winfrey, Fran Lebowitz, Sonia Sanchez, and Walter Mosley, among others.
Awards
1975: Ohioana Book Award for Sula
1977: National Book Critics Circle Award for Song of Solomon
1977: American Academy and Institute of Arts and Letters Award
1981: Langston Hughes Medal, City College of New York
1982: Ohio Women's Hall of Fame inductee
1986: New York State Governor's Arts Award
1988: Robert F. Kennedy Book Award
1988: Helmerich Award
1988: American Book Award for Beloved
1988: Anisfield-Wolf Book Award in Race Relations for Beloved
1988: Pulitzer Prize for Fiction for Beloved
1988: Frederic G. Melcher Book Award for Beloved
1988: Honorary Doctor of Laws at University of Pennsylvania
1989: Honorary Doctor of Letters at Harvard University
1993: Nobel Prize in Literature
1993: Commander of the Arts and Letters, Paris
1994: Condorcet Medal, Paris
1994: Rhegium Julii Prize for Literature
1996: Jefferson Lecture
1996: National Book Foundation's Medal of Distinguished Contribution to American Letters
1997: Honorary Doctorate of Humane Letters from Gustavus Adolphus College
1998: Audie Award for Narration by the Author for Sula
2000: National Humanities Medal
2002: 100 Greatest African Americans, list by Molefi Kete Asante
2005: Golden Plate Award of the American Academy of Achievement
2005: Honorary Doctorate of Letters from University of Oxford
2005: Coretta Scott King Award for Remember: The Journey to School Integration
2008: New Jersey Hall of Fame inductee
2009: Norman Mailer Prize, Lifetime Achievement
2010: Officier de la Légion d'Honneur
2010: Institute for Arts and Humanities Medal for Distinguished Contributions to the Arts and Humanities from the Pennsylvania State University
2011: Library of Congress Creative Achievement Award for Fiction
2011: Honorary Doctor of Letters at Rutgers University Graduation Commencement
2011: Honorary Doctorate of Letters from the University of Geneva
2012: Presidential Medal of Freedom
2013: The Nichols-Chancellor's Medal awarded by Vanderbilt University
2013: Honorary Doctorate of Literature awarded by Princeton University
2013: PEN Oakland – Josephine Miles Literary Award for Home
2013: Writer in Residence at the American Academy in Rome
2014: Ivan Sandrof Lifetime Achievement Award given by the National Book Critics Circle
2016: PEN/Saul Bellow Award for Achievement in American Fiction
2016: The Charles Eliot Norton Professorship in Poetry (The Norton Lectures), Harvard University
2016: The Edward MacDowell Medal, awarded by the MacDowell Colony
2018: The Thomas Jefferson Medal, awarded by The American Philosophical Society
2020: National Women's Hall of Fame inductee
2020: Designation of "Toni Morrison Day" in Ohio, to be celebrated annually on her birthday, February 18
2021: Featured on "Cleveland is the Reason" mural in downtown Cleveland (with other notable Cleveland area figures)
2023: Featured on a USPS Forever stamp, designed by art director Ethel Kessler with photography by Deborah Feingold
Nomination
Who's Got Game? The Ant or the Grasshopper? The Lion or the Mouse? Poppy or the Snake? was nominated for the Grammy Award for Best Spoken Word Album for Children in 2008.
Bibliography
Novels
The Bluest Eye (1970)
Sula (1973)
Song of Solomon (1977)
Tar Baby (1981)
Beloved (1987)
Jazz (1992)
Paradise (1997)
Love (2003)
A Mercy (2008)
Home (2012)
God Help the Child (2015)
Children's books (with Slade Morrison)
The Big Box (1999)
The Book of Mean People (2002)
Remember: The Journey to School Integration (2004)
Who's Got Game? The Ant or the Grasshopper?, The Lion or the Mouse?, Poppy or the Snake? (2007)
Peeny Butter Fudge (2009)
Little Cloud and Lady Wind (2010)
Please, Louise (2014)
A Toni Morrison Treasury: The Big Box; The Ant or the Grasshopper?; The Lion or the Mouse?; Poppy or the Snake?; Peeny Butter Fudge; The Tortoise or the Hare; Little Cloud and Lady Wind; Please, Louise (2023)
Short fiction
"Recitatif", in Amiri Baraka and Amina Baraka (eds), Confirmation: An Anthology of African American Women (1983). A hardback book version, with an introduction by Zadie Smith, was published in February 2022 (US: Knopf; UK: Chatto & Windus).
Plays
N'Orleans: The Storyville Musical (aka New Orleans) (performed 1982) with Donald McKayle
Dreaming Emmett (performed 1986)
Desdemona (first performed May 15, 2011, in Vienna)
Poetry
Five Poems (2002, limited edition book with illustrations by Kara Walker)
Libretto
Margaret Garner (first performed May 2005)
Non-fiction
Foreword, The Black Photographers Annual Volume 1, edited by Joe Crawford (1973)
Foreword and Preface, The Black Book, edited by Harris, Levitt, Furman and Smith. Random House (1974)
Foreword, Race-ing Justice, En-gendering Power: Essays on Anita Hill, Clarence Thomas, and the Construction of Social Reality. Pantheon Books (1992)
Co-editor, Birth of a Nation'hood: Gaze, Script, and Spectacle in the O.J. Simpson Case (1997)
Remember: The Journey to School Integration (2004)
Playing in the Dark: Whiteness and the Literary Imagination (1992, 2007)
What Moves at the Margin: Selected Nonfiction, edited by Carolyn C. Denard (2008)
Editor, Burn This Book: PEN Writers Speak Out on the Power of the Word (2009)
The Origin of Others – The Charles Eliot Norton Lectures, Harvard University Press (2017)
Goodness and the Literary Imagination: Harvard Divinity School's 95th Ingersoll Lecture: With Essays on Morrison's Moral and Religious Vision. Edited by David Carrasco, Stephanie Paulsell, and Mara Willard. Charlottesville: University of Virginia Press (2019)
The Source of Self-Regard: Selected Essays, Speeches, and Meditations. New York: Alfred A. Knopf (2019). UK edition published as Mouth Full of Blood: Essays, Speeches, Meditations. London: Chatto & Windus (2019)
Articles
"Introduction." Mark Twain, Adventures of Huckleberry Finn. [1885] The Oxford Mark Twain, edited by Shelley Fisher Fishkin. New York: Oxford University Press, 1996, pp. xxxii–xli.
See also
American literature
African-American literature
List of black Nobel laureates
List of female Nobel laureates
Notes
References
External links
"Toni Morrison: Beloved". From the Bookworm archives, August 15, 2019.
Bookworm Interviews (Audio) with Michael Silverblatt
"Reading the Writing: A Conversation with Toni Morrison" (Cornell University video, March 7, 2013)
Toni Morrison at Random House Australia
Toni Morrison's oral history video excerpts at The National Visionary Leadership Project
Toni Morrison Papers at Princeton University Library Special Collections
Toni Morrison Society based at Oberlin College
1931 births
2019 deaths
20th-century African-American academics
20th-century African-American women writers
20th-century African-American writers
20th-century American academics
20th-century American novelists
20th-century American women writers
20th-century American essayists
21st-century African-American academics
21st-century African-American women writers
21st-century African-American writers
21st-century American academics
21st-century American non-fiction writers
21st-century American novelists
21st-century American women writers
African-American Catholics
African-American children's writers
African-American feminists
African-American novelists
African-American women musicians
American academic administrators
American Book Award winners
American book editors
American children's writers
American feminist writers
American Nobel laureates
American opera librettists
American recipients of the Legion of Honour
American women academics
American women anthologists
American women children's writers
American women essayists
American women novelists
Bard College faculty
Catholics from Ohio
Catholics from Texas
Converts to Roman Catholicism from Methodism
Cornell University alumni
Deaths from pneumonia in New York City
Howard University alumni
Magic realism writers
Members of the American Academy of Arts and Letters
Members of the American Philosophical Society
National Humanities Medal recipients
Nobel laureates in Literature
Novelists from New Jersey
Novelists from New York (state)
Novelists from Ohio
Officers of the Legion of Honour
PEN Oakland/Josephine Miles Literary Award winners
People from Lorain, Ohio
Postmodern feminists
American postmodern writers
Presidential Medal of Freedom recipients
Princeton University faculty
Pulitzer Prize for Fiction winners
The New Yorker people
University at Albany, SUNY faculty
Women Nobel laureates
Women opera librettists
Writers from Houston
Writers from New York City
Writers from Ohio
Writers from Syracuse, New York
Coretta Scott King Award winners
National Book Critics Circle Award winners | Toni Morrison | [
"Technology"
] | 9,480 | [
"Women Nobel laureates",
"Women in science and technology"
] |
43,184 | https://en.wikipedia.org/wiki/Lobopodia | Lobopodians are members of the informal group Lobopodia (from the Greek, meaning "blunt feet"), or the formally erected phylum Lobopoda Cavalier-Smith (1998). They are panarthropods with stubby legs called lobopods, a term which may also be used as a common name of this group as well. While the definition of lobopodians may differ between literatures, it usually refers to a group of soft-bodied, marine worm-like fossil panarthropods such as Aysheaia and Hallucigenia. However, other genera like Kerygmachela and Pambdelurion (which have features similar to other groups) are often referred to as “gilled lobopodians”.
The oldest near-complete fossil lobopodians date to the Lower Cambrian; some are also known from Ordovician, Silurian and Carboniferous Lagerstätten. Some bear toughened claws, plates or spines, which are commonly preserved as carbonaceous or mineralized microfossils in Cambrian strata. The grouping is considered to be paraphyletic, as the three living panarthropod groups (Arthropoda, Tardigrada and Onychophora) are thought to have evolved from lobopodian ancestors.
Definitions
The Lobopodian concept varies from author to author. Its most general sense refers to a suite of mainly Cambrian worm-like panarthropod taxa possessing lobopods – for example, Aysheaia, Hallucigenia, and Xenusion – which were traditionally united as "Xenusians" or "Xenusiids" (class Xenusia). Certain Dinocaridid genera, such as Opabinia, Pambdelurion, and Kerygmachela, may also be regarded as lobopodians, sometimes referred to more specifically as "gilled lobopodians" or "gilled lobopods". This traditional, informal usage of "Lobopodia" treats it as an evolutionary grade, including only extinct Panarthropods near the base of crown Panarthropoda. Crown Panarthropoda comprises the three extant Panarthropod phyla – Onychophora (velvet worms), Tardigrada (waterbears), and Arthropoda (arthropods) – as well as their most recent common ancestor and all of its descendants. Thus, in this usage, Lobopodia consists of various basal Panarthropods. This corresponds to "A" in the image to the left.
An alternative, broader definition of Lobopodia would also incorporate Onychophora and Tardigrada, the two living panarthropod phyla which still bear lobopodous limbs. This definition, corresponding to "C", is a morphological one, depending on the superficial similarity of appendages (the "lobopods"). Thus, it is paraphyletic, excluding the Euarthropods, which are descendants of certain Lobopodians, on the basis of their highly divergent limb morphology. "Lobopodia" has also been used to refer to a proposed sister clade to Arthropoda, consisting of the extant Onychophora and Tardigrada, as well as their most recent common ancestor and all of its descendants. This definition renders Lobopodia a monophyletic taxon, if indeed it is valid (that is, if Tardigrades and Onychophora are closer to one another than either is to Arthropoda), but would exclude all the Euarthropod-line taxa traditionally considered Lobopodians. Its validity is uncertain, however, as there are a number of hypotheses regarding the internal phylogeny of Panarthropoda. The broadest definition treats Lobopodia as a monophyletic superphylum equivalent in circumscription to Panarthropoda. By this definition, represented by "D" in the image, Lobopodia is no longer treated as an evolutionary grade but as a clade, containing not only the early, superficially "Lobopodian" forms but also all of their descendants, including the extant Panarthropods.
Lobopodia has, historically, sometimes included Pentastomida, a group of parasitic panarthropods which was traditionally thought to be a distinct phylum, but which was revealed by subsequent phylogenomic and anatomical studies to be a highly specialized taxon of crustaceans.
Representative taxa
The better-known genera include Aysheaia, which was discovered in the Canadian Burgess Shale, and Hallucigenia, known from both the Chengjiang Maotianshan Shale and the Burgess Shale. Aysheaia pedunculata has a morphology apparently basic for lobopodians — for example, a significantly annulated cuticle, a terminal mouth opening, specialized frontalmost appendages, and stubby lobopods with terminal claws. Hallucigenia sparsa is famous for having a complex history of interpretation — it was originally reconstructed with long, stilt-like legs and mysterious fleshy dorsal protuberances, and was long considered a prime example of the way in which nature experimented with the most diverse and bizarre body designs during the Cambrian. However, further discoveries showed that this reconstruction had placed the animal upside-down: interpreting the "stilts" as dorsal spines made it clear that the fleshy "dorsal" protuberances were actually elongated lobopods. A more recent reconstruction even exchanged the front and rear ends of the animal: it was revealed that the bulbous imprint previously thought to be a head was actually gut contents being expelled from the anus.
Microdictyon is another charismatic and speciose genus of lobopodians resembling Hallucigenia, but instead of spines, it bore pairs of net-like plates, which are often found disarticulated and are known as an example of small shelly fossils (SSF). Xenusion has the oldest fossil record amongst the described lobopodians, which may trace back to Cambrian Stage 2. Luolishania is an iconic example of lobopodians with multiple pairs of specialized appendages. The gilled lobopodians Kerygmachela and Pambdelurion shed light on the relationship between lobopodians and arthropods, as they have both lobopodian affinities and characteristics linked to the arthropod stem-group.
Morphology
Most lobopodians were only a few centimeters in length, while some genera grew to over 20 centimeters. Their bodies are annulated, although the presence of annulation may differ between body regions or taxa, and the annulations are sometimes difficult to discern due to their close spacing and low relief on the fossil material. Body and appendages are circular in cross-section.
Head
Due to the usually poor preservation, detailed reconstructions of the head region are only available for a handful of lobopodian species. The head of a lobopodian is more or less bulbous, and sometimes possesses a pair of pre-ocular, presumably protocerebral appendages – for example, primary antennae or well-developed frontal appendages, which are individualized from the trunk lobopods (with the exception of Antennacanthopodia, which have two pairs of head appendages instead of one). Mouthparts may consist of rows of teeth or a conical proboscis. The eyes may be represented by a single ocellus or by numerous pairs of simple ocelli, as has been shown in Luolishania (=Miraluolishania), Ovatiovermis, Onychodictyon, Hallucigenia, Facivermis, and less certainly Aysheaia as well. However, in gilled lobopodians like Kerygmachela, the eyes are relatively complex reflective patches that may have been compound in nature.
Trunk and lobopods
The trunk is elongated and composed of numerous body segments (somites), each bearing a pair of legs called lobopods or lobopodous limbs. The segmental boundaries are not as externally significant as those of arthropods, although they are indicated by heteronomous annulations (i.e., the alternation of annulation density corresponding to the position of segmental boundaries) in some species. The trunk segments may bear other external, segment-corresponding structures such as nodes (e.g. Hadranax, Kerygmachela), papillae (e.g. Onychodictyon), spine/plate-like sclerites (e.g. armoured lobopodians) or lateral flaps (e.g. gilled lobopodians). The trunk may terminate with a pair of lobopods (e.g. Aysheaia, Hallucigenia sparsa) or a tail-like extension (e.g. Paucipodia, Siberion, Jianshanopodia).
The lobopods are flexible and loosely conical in shape, tapering from the body to tips that may or may not bear claws. The claws, if present, are hardened structures with a shape resembling a hook or gently-curved spine. Claw-bearing lobopods usually have two claws, but single claws are known (e.g. posterior lobopods of luolishaniids), as are more than two (e.g. three in Tritonychus, seven in Aysheaia) depending on its segmental or taxonomical association. In some genera, the lobopods bear additional structures such as spines (e.g. Diania), fleshy outgrowths (e.g. Onychodictyon), or tubercules (e.g. Jianshanopodia). There is no sign of arthropodization (development of a hardened exoskeleton and segmental division on panarthropod appendages) in known members of lobopodians, even for those belonging to the arthropod stem-group (e.g. gilled lobopodians and siberiids), and the suspected case of arthropodization on the limbs of Diania is considered to be a misinterpretation.
Differentiation (tagmosis) between trunk somites barely occurs, except in hallucigenids and luolishaniids, where numerous pairs of their anterior lobopods are significantly slender (hallucigenids) or setose (luolishaniids) in contrast to their posterior counterparts.
Internal structures
The gut of lobopodians is often straight and undifferentiated. In some specimens the gut is found to be filled with sediment. The gut consists of a central tube occupying the full length of the lobopodian's trunk, which does not change much in width - at least not systematically. However, in some groups, specifically the gilled lobopodians and siberiids, the gut is surrounded by pairs of serially repeated, kidney-shaped gut diverticula (digestive glands). In some specimens, parts of the lobopodian gut are preserved in three dimensions. This cannot result from phosphatisation, which is usually responsible for 3-D gut preservation, because the phosphate content of the guts is under 1%; the contents comprise quartz and muscovite. The gut of the representative Paucipodia is variable in width, being widest at the centre of the body. Its position in the body cavity is only loosely fixed, so flexibility is possible.
Not much is known about the neural anatomy of lobopodians due to the sparse and mostly ambiguous fossil evidence. Possible traces of a nervous system were found in Paucipodia, Megadictyon and Antennacanthopodia. The first and so far the only confirmed evidence of lobopodian neural structures comes from the gilled lobopodian Kerygmachela in Park et al. 2018 — it presents a brain composed of only a protocerebrum (the frontal-most cerebral ganglion of panarthropods) that is directly connected to the nerves of the eyes and frontal appendages, suggesting the protocerebral ancestry of the head of lobopodians as well as the whole Panarthropoda.
In some extant ecdysozoans such as priapulids and onychophorans, there is a layer of outermost circular muscles and a layer of innermost longitudinal muscles. The onychophorans also have a third, intermediate, layer of interwoven oblique muscles. Musculature of the gilled lobopodian Pambdelurion shows a similar anatomy, but that of the lobopodian Tritonychus shows the opposite pattern: it is the outermost muscles that are longitudinal and the innermost layer that consists of circular muscles.
Categories
Based on external morphology, lobopodians may fall under different categories: for example, the general worm-like taxa as "xenusiids" or "xenusians"; xenusiids with sclerites as "armoured lobopodians"; and taxa with both robust frontal appendages and lateral flaps as "gilled lobopodians". Some of these groups were originally defined in a taxonomic sense (e.g. class Xenusia), but none of them is generally accepted as monophyletic in later studies.
Armoured lobopodians
"Armoured lobopodians" refers to xenusiid lobopodians which bore repeated sclerites such as spines or plates on their trunk (e.g. Hallucigenia, Microdictyon, Luolishania) or lobopods (e.g. Diania). In contrast, lobopodians without sclerites may be referred to as "unarmoured lobopodians". The sclerites have been interpreted as protective armor and/or muscle attachment points. In some cases, only the disarticulated sclerites of the animal were preserved, represented as a component of small shelly fossils (SSF). Armoured lobopodians were suggested to be onychophoran-related, and may even represent a clade in some previous studies, but their phylogenetic positions in later studies are controversial (see text).
Gilled lobopodians
Dinocaridids with lobopodian affinities (due to shared features like annulation and lobopods) are referred to as "gilled lobopodians" or "gilled lobopods". These forms sport a pair of flaps on each trunk segment, but otherwise no signs of arthropodization, in contrast to more derived dinocaridids like the Radiodonta that have robust and sclerotized frontal appendages. Gilled lobopodians cover at least four genera: Pambdelurion, Kerygmachela, Utahnax and Mobulavermis. Opabinia may also fall under this category in a broader sense, although the presence of lobopods in this genus is not definitively proven. Omnidens, a genus known only from Pambdelurion-like mouthparts and distal parts of the frontal appendages, may also be a gilled lobopodian. The body flaps may have functioned as both swimming appendages and gills, and are possibly homologous to the dorsal flaps of radiodonts and exites of Euarthropoda. Whether these genera were true lobopodians is still contested by some. However, they are widely accepted as stem-group arthropods just basal to radiodonts.
Siberion and similar taxa
Siberion, Megadictyon and Jianshanopodia may be grouped as siberiids (order Siberiida), jianshanopodians or "giant lobopodians" in some literature. They are generally large xenusiid lobopodians, with body lengths ranging between 7 and 22 centimeters (2¼ to 8⅔ inches), a widened trunk, stout trunk lobopods without evidence of claws, and most notably a pair of robust frontal appendages. With the possible exception of Siberion, they also have digestive glands like those of gilled lobopodians and basal euarthropods. Their anatomy represents transitional forms between typical xenusiids and gilled lobopodians, eventually placing them at the basalmost position of the arthropod stem-group.
Paleoecology
Lobopodians possibly occupied a wide range of ecological niches. Although most of them had undifferentiated appendages and straight guts, which would suggest a simple sediment-feeding lifestyle, the sophisticated digestive glands and large size of gilled lobopodians and siberiids would allow them to consume larger food items, and their robust frontal appendages may even suggest a predatory lifestyle. On the other hand, luolishaniids such as Luolishania and Ovatiovermis have elaborate feather-like lobopods that presumably formed 'baskets' for suspension or filter-feeding. Lobopods with curved terminal claws may have given some lobopodians the ability to climb on substrates.
Not much is known about the physiology of lobopodians. There is evidence to suggest that lobopodians moulted just like other ecdysozoan taxa, but the outline and ornamentation of the hardened sclerites did not vary during ontogeny. The gill-like structures on the body flaps of gilled lobopodians and the ramified extensions on the lobopods of Jianshanopodia may have provided a respiratory function (gills). Pambdelurion may have controlled the movement of its lobopods in a way similar to onychophorans.
Distribution
During the Cambrian, lobopodians displayed a substantial degree of biodiversity. One species is known from each of the Ordovician and Silurian periods, with a few more known from the Carboniferous (Mazon Creek) — this reflects the paucity of exceptional Lagerstätten in post-Cambrian deposits.
Phylogeny
The overall phylogenetic interpretation on lobopodians has changed dramatically since their discovery and first description. The reassignments are not only based on new fossil evidence, but also new embryological, neuroanatomical, and genomic (e.g. gene expression, phylogenomics) information observed from extant panarthropod taxa.
Based on their apparently onychophoran-like morphology (e.g. annulated cuticle, lobopodous appendages with claws), lobopodians were originally thought to represent a group of Paleozoic onychophorans. This interpretation was challenged after the discovery of lobopodians with arthropod- and tardigrade-like characteristics, suggesting that the similarity between lobopodians and onychophorans represents deeper panarthropod ancestral traits (plesiomorphies) instead of onychophoran-exclusive characteristics (synapomorphies). For example, the British palaeontologist Graham Budd sees the Lobopodia as representing a basal grade from which the phyla Onychophora and Arthropoda arose, with Aysheaia comparable to the ancestral plan, and with forms like Kerygmachela and Pambdelurion representing a transition that, via the dinocaridids, would lead to an arthropod body plan. Aysheaia's surface ornamentation, if homologous with palaeoscolecid sclerites, may represent a deeper link connecting it with cycloneuralian outgroups. Lobopodians are paraphyletic, and include the last common ancestor of arthropods, onychophorans and tardigrades.
Stem-group arthropods
Compared to other panarthropod stem-groups, interpretations of the lobopodian members of the arthropod stem-group are relatively consistent: siberiids like Megadictyon and Jianshanopodia occupy the basalmost position, the gilled lobopodians Pambdelurion and Kerygmachela branch next, and finally lead to a clade composed of Opabinia, Radiodonta and Euarthropoda (crown-group arthropods). Their positions within the arthropod stem-group are indicated by numerous arthropod groundplans and intermediate forms (e.g. arthropod-like digestive glands, radiodont-like frontal appendages, and dorso-ventral appendicular structures linked to arthropod biramous appendages). The lobopodian ancestry of arthropods is also reinforced by genomic studies on extant taxa: gene expression supports the homology between arthropod appendages and onychophoran lobopods, suggesting that modern, less-segmented arthropodized appendages evolved from annulated lobopodous limbs. On the other hand, the primary antennae and frontal appendages of lobopodians and dinocaridids may be homologous to the labrum/hypostome complex of euarthropods, an idea supported by their protocerebral origin and by the developmental pattern of the labrum of extant arthropods.
Diania, a genus of armoured lobopodian with stout and spiny legs, was originally thought to be associated with the arthropod stem-group based on its apparently arthropod-like (arthropodized) trunk appendages. However, this interpretation is questionable, as the data provided by the original description are not consistent with the suspected phylogenetic relationships. Further re-examination even revealed that the suspected arthropodization on the legs of Diania was a misinterpretation: although the spines may have been hardened, the remaining cuticle of Diania's legs was soft (neither hardened nor sclerotized), lacking any evidence of pivot joints and arthrodial membranes, suggesting the legs are lobopods with only widely spaced annulations. Thus, the re-examination eventually rejected the evidence of arthropodization (sclerotization, segmentation and articulation) on the appendages as well as the fundamental relationship between Diania and arthropods.
Stem-group onychophorans
While Antennacanthopodia is widely accepted as a stem-group onychophoran, the position of other xenusiid genera that were previously thought to be onychophoran-related is controversial — in further studies, most of them were either suggested to be stem-group onychophorans or basal panarthropods, with a few species (Aysheaia or Onychodictyon ferox) occasionally suggested to be stem-group tardigrades. A study in 2014 suggested that Hallucigenia is a stem-group onychophoran based on its claws, which have overlapping internal structures resembling those of an extant onychophoran. This interpretation was questioned by later studies, as the structures may be a panarthropod plesiomorphy.
Stem-group tardigrades
The lobopodian taxa of the tardigrade stem-group are unclear. Aysheaia and Onychodictyon ferox have been suggested as possible members, based on the high claw number (in Aysheaia) and/or terminal lobopods with anterior-facing claws (in both taxa). Although not widely accepted, there are even suggestions that Tardigrada itself represents the basalmost panarthropod lineage or branches within the arthropod stem-group. However, a paper in 2023 found luolishaniids to be the closest relatives of tardigrades using various morphological characteristics.
Stem-group panarthropods
It is unclear which lobopodians represent members of the panarthropod stem-group, which branched just before the last common ancestor of the extant panarthropod phyla. Aysheaia may have occupied this position based on its apparently basic morphology, while other studies instead suggest luolishaniids and hallucigenids, two lobopodian taxa which have also been resolved as members of the onychophoran stem-group.
Described genera
As of 2018, over 20 lobopodian genera have been described. The fossil materials described as the lobopodians Mureropodia apae and Aysheaia prolata are considered to be disarticulated frontal appendages of the radiodonts Caryosyntrips and Stanleycaris, respectively. Miraluolishania has been suggested to be a synonym of Luolishania by some studies. The enigmatic Facivermis was later revealed to be a highly specialized genus of luolishaniid lobopodians.
Acinocricus
Antennacanthopodia
Aysheaia
Carbotubulus
Cardiodictyon
Collinsium
Collinsovermis
Diania
Entothyreos
Facivermis
Fusuconcharium
Hadranax
Hallucigenia
Jianshanopodia
Kerygmachela?
Lenisambulatrix
Luolishania (=Miraluolishania)
Megadictyon
Microdictyon
Mobulavermis?
Omnidens?
Onychodictyon
Orstenotubulus
Ovatiovermis
Pambdelurion?
Parvibellus?
Paucipodia
Quadratapora
Siberion
Thanahita
Tritonychus
Utahnax?
Xenusion
Youti?
References
Prehistoric protostomes
†
Cambrian Series 2 first appearances
Paraphyletic groups | Lobopodia | [
"Biology"
] | 5,416 | [
"Phylogenetics",
"Paraphyletic groups"
] |
43,207 | https://en.wikipedia.org/wiki/Polychaete | Polychaeta () is a paraphyletic class of generally marine annelid worms, commonly called bristle worms or polychaetes (). Each body segment has a pair of fleshy protrusions called parapodia that bear many bristles, called chaetae, which are made of chitin. More than 10,000 species are described in this class. Common representatives include the lugworm (Arenicola marina) and the sandworm or clam worm Alitta.
Polychaetes as a class are robust and widespread, with species that live in the coldest ocean temperatures of the abyssal plain, to forms which tolerate the extremely high temperatures near hydrothermal vents. Polychaetes occur throughout the Earth's oceans at all depths, from forms that live as plankton near the surface, to a 2- to 3-cm specimen (still unclassified) observed by the robot ocean probe Nereus at the bottom of the Challenger Deep, the deepest known spot in the Earth's oceans. Only 168 species (less than 2% of all polychaetes) are known from fresh waters.
Description
Polychaetes are segmented worms, generally less than in length, although ranging at the extremes from to , in Eunice aphroditois. They can sometimes be brightly coloured, and may be iridescent or even luminescent. Each segment bears a pair of paddle-like and highly vascularized parapodia, which are used for movement and, in many species, act as the worm's primary respiratory surfaces. Bundles of bristles, called chaetae, project from the parapodia.
However, polychaetes vary widely from this generalized pattern, and can display a range of different body forms. The most generalised polychaetes are those that crawl along the bottom, but others have adapted to many different ecological niches, including burrowing, swimming, pelagic life, tube-dwelling or boring, commensalism, and parasitism, requiring various modifications to their body structures.
The head, or prostomium, is relatively well developed, compared with other annelids. It projects forward over the mouth, which therefore lies on the animal's underside. The head normally includes two to four pair of eyes, although some species are blind. These are typically fairly simple structures, capable of distinguishing only light and dark, although some species have large eyes with lenses that may be capable of more sophisticated vision, including the Alciopids' complex eyes which rival cephalopod and vertebrate eyes.
Many species show bioluminescence; eight families have luminous species.
The head also includes a pair of antennae, tentacle-like palps, and a pair of pits lined with cilia, known as "nuchal organs". These latter appear to be chemoreceptors, and help the worm to seek out food.
Internal anatomy and physiology
The outer surface of the body wall consists of a simple columnar epithelium covered by a thin cuticle. Underneath this, in order, are a thin layer of connective tissue, a layer of circular muscle, a layer of longitudinal muscle, and a peritoneum surrounding the body cavity. Additional oblique muscles move the parapodia. In most species the body cavity is divided into separate compartments by sheets of peritoneum between each segment, but in some species it is more continuous.
The mouth of polychaetes is located on the peristomium, the segment behind the prostomium, and varies in form depending on their diets, since the group includes predators, herbivores, filter feeders, scavengers, and parasites. In general, however, they possess a pair of jaws and a pharynx that can be rapidly everted, allowing the worms to grab food and pull it into their mouths. In some species, the pharynx is modified into a lengthy proboscis. The digestive tract is a simple tube, usually with a stomach part way along.
The smallest species, and those adapted to burrowing, lack gills, breathing only through their body surfaces. Most other species have external gills, often associated with the parapodia.
A simple but well-developed circulatory system is usually present. The two main blood vessels furnish smaller vessels to supply the parapodia and the gut. Blood flows forward in the dorsal vessel, above the gut, and returns down the body in the ventral vessel, beneath the gut. The blood vessels themselves are contractile, helping to push the blood along, so most species have no need of a heart. In a few cases, however, muscular pumps analogous to a heart are found in various parts of the system. Conversely, some species have little or no circulatory system at all, transporting oxygen in the coelomic fluid that fills their body cavities.
The blood may be colourless, or have any of three different respiratory pigments. The most common of these is haemoglobin, but some groups have haemerythrin or the green-coloured chlorocruorin, instead.
The nervous system consists of a single or double ventral nerve cord running the length of the body, with ganglia and a series of small nerves in each segment. The brain is relatively large, compared with that of other annelids, and lies in the upper part of the head. An endocrine gland is attached to the ventral posterior surface of the brain, and appears to be involved in reproductive activity. In addition to the sensory organs on the head, photosensitive eye spots, statocysts, and numerous additional sensory nerve endings, most likely involved with the sense of touch, also occur on the body.
Polychaetes have a varying number of protonephridia or metanephridia for excreting waste, which in some cases can be relatively complex in structure. The body also contains greenish "chloragogen" tissue, similar to that found in oligochaetes, which appears to function in metabolism, in a similar fashion to that of the vertebrate liver.
The cuticle is constructed from cross-linked fibres of collagen and may be 200 nm to 13 mm thick. Their jaws are formed from sclerotised collagen, and their setae from sclerotised chitin.
Ecology
Polychaetes are predominantly marine, but many species also live in freshwater, and a few in terrestrial environments. They are extremely variable in both form and lifestyle, and include a few taxa that swim among the plankton or above the abyssal plain. Most burrow or build tubes in the sediment, and some live as commensals. A few species, roughly 80 (less than 0.5% of species), are parasitic. These include both ectoparasites and endoparasites. Ectoparasitic polychaetes feed on skin, blood, and other secretions, and some are adapted to bore through hard, usually calcareous surfaces, such as the shells of mollusks. These "boring" polychaetes may be parasitic, but may be opportunistic or even obligate symbionts (commensals).
The mobile forms (Errantia) tend to have well-developed sense organs and jaws, while the stationary forms (Sedentaria) lack them, but may have specialized gills or tentacles used for respiration and deposit or filter feeding, e.g., fanworms.
Underwater polychaetes have eversible mouthparts used to capture prey. A few groups have evolved to live in terrestrial environments, like Namanereidinae with many terrestrial species, but are restricted to humid areas. Some have even evolved cutaneous invaginations for aerial gas exchange.
Notable polychaetes
One notable polychaete, the Pompeii worm (Alvinella pompejana), is endemic to the hydrothermal vents of the Pacific Ocean. Pompeii worms are among the most heat-tolerant complex animals known.
A recently discovered genus, Osedax, includes a species nicknamed the "bone-eating snot flower".
Another remarkable polychaete is Hesiocaeca methanicola, which lives on methane clathrate deposits.
Lamellibrachia luymesi is a cold seep tube worm that reaches lengths of over 3 m and may be the most long-lived annelid, being over 250 years old.
A still unclassified multilegged predatory polychaete worm was identified only by observation from the underwater vehicle Nereus at the bottom of the Challenger Deep, the greatest depth in the oceans, near in depth. It was about an inch long visually, but the probe failed to capture it, so it could not be studied in detail.
The Bobbit worm (Eunice aphroditois) is a predatory species that can achieve a length of , with an average diameter of .
Dimorphilus gyrociliatus has the smallest known genome of any annelid. The species shows extreme sexual dimorphism. Females measure ~1 mm long and have simplified bodies containing six segments, a reduced coelom, and no appendages, parapodia, or chaetae. The males are only 50 μm long and consist of just a few hundred cells. They lack a digestive system and have just 68 neurons, and only live for roughly a week.
Reproduction
Most polychaetes have separate sexes, rather than being hermaphroditic. The most primitive species have a pair of gonads in every segment, but most species exhibit some degree of specialisation. The gonads shed immature gametes directly into the body cavity, where they complete their development. Once mature, the gametes are shed into the surrounding water through ducts or openings that vary between species, or in some cases by the complete rupture of the body wall (and subsequent death of the adult). A few species copulate, but most fertilize their eggs externally.
The fertilized eggs typically hatch into trochophore larvae, which float among the plankton, and eventually metamorphose into the adult form by adding segments. A few species have no larval form, with the egg hatching into a form resembling the adult, and in many that do have larvae, the trochophore never feeds, surviving off the yolk that remains from the egg.
However, some polychaetes exhibit remarkable reproductive strategies. Some species reproduce by epitoky. For much of the year, these worms look like any other burrow-dwelling polychaete, but as the breeding season approaches, the worm undergoes a remarkable transformation as new, specialized segments begin to grow from its rear end until the worm can be clearly divided into two halves. The front half, the atoke, is asexual. The new rear half, responsible for breeding, is known as the epitoke. Each of the epitoke segments is packed with eggs and sperm and features a single eyespot on its surface. The beginning of the last lunar quarter is the cue for these animals to breed, and the epitokes break free from the atokes and float to the surface. The eye spots sense when the epitoke reaches the surface and the segments from millions of worms burst, releasing their eggs and sperm into the water.
A similar strategy is employed by the deep sea worm Syllis ramosa, which lives inside a sponge. The rear ends of the worm develop into "stolons" containing the eggs or sperm; these stolons then become detached from the parent worm and rise to the sea surface, where fertilisation takes place.
Fossil record
Stem-group polychaete fossils are known from the Sirius Passet Lagerstätte, a rich, sedimentary deposit in Greenland tentatively dated to the late Atdabanian (early Cambrian). The oldest found is Phragmochaeta canicularis. Many of the more famous Burgess Shale organisms, such as Canadia, may also have polychaete affinities. Wiwaxia, long interpreted as an annelid, is now considered to represent a mollusc. An even older fossil, Cloudina, dates to the terminal Ediacaran period; this has been interpreted as an early polychaete, although consensus is absent.
Being soft-bodied organisms, the fossil record of polychaetes is dominated by their fossilized jaws, known as scolecodonts, and the mineralized tubes that some of them secrete. Most important biomineralising polychaetes are serpulids, sabellids, and cirratulids. Polychaete cuticle does have some preservation potential; it tends to survive for at least 30 days after a polychaete's death. Although biomineralisation is usually necessary to preserve soft tissue after this time, the presence of polychaete muscle in the nonmineralised Burgess shale shows this need not always be the case. Their preservation potential is similar to that of jellyfish.
Taxonomy and systematics
Taxonomically, polychaetes are thought to be paraphyletic, meaning the group excludes some descendants of its most recent common ancestor. Groups that may be descended from the polychaetes include the clitellates (earthworms and leeches), sipunculans, and echiurans. The Pogonophora and Vestimentifera were once considered separate phyla, but are now classified in the polychaete family Siboglinidae.
Much of the classification below matches Rouse & Fauchald, 1998, although that paper does not apply ranks above family.
Older classifications recognize many more (sub)orders than the layout presented here. As comparatively few polychaete taxa have been subject to cladistic analysis, some groups which are usually considered invalid today may eventually be reinstated.
These divisions were shown to be mostly paraphyletic in recent years.
Basal or incertae sedis
Family Diurodrilidae
Family Histriobdellidae
Family Nerillidae
Family Parergodrilidae
Family Potamodrilidae
Family Psammodrilidae
Family Spintheridae
Family Protodriloididae
Family Saccocirridae
Order Haplodrili
Order Myzostomida
Family Endomyzostomatidae
Family Asteromyzostomatidae
Family Myzostomatidae
Subclass Palpata
Family Protodrilidae
Family Polygordiidae
Subclass Aciculata
Family Levidoridae
Order Amphinomida
Family Amphinomidae
Family Euphrosinidae
Order Eunicida
Family Dorvilleidae
Family Eunicidae
Family Hartmaniellidae
Family Ichthyotomidae
Family Lumbrineridae
Family Oenonidae
Family Onuphidae
Order Phyllodocida
Suborder Aphroditiformia
Family Acoetidae
Family Aphroditidae
Family Eulepethidae
Family Iphionidae
Family Pholoidae
Family Polynoidae
Family Sigalionidae
Suborder Glyceriformia
Family Glyceridae
Family Goniadidae
Family Lacydoniidae
Family Paralacydoniidae
Suborder Nereidiformia
Family Antonbruunidae
Family Chrysopetalidae
Family Hesionidae
Family Nereididae
Family Pilargidae
Family Syllidae
Suborder Phyllodocida incertae sedis
Family Iospilidae
Family Nautiliniellidae
Family Nephtyidae
Family Typhloscolecidae
Family Tomopteridae
Suborder Phyllodociformia
Family Alciopidae
Family Lopadorrhynchidae
Family Phyllodocidae
Family Pontodoridae
Subclass Sedentaria
Family Chaetopteridae
Infraclass Canalipalpata
Order Sabellida
Family Caobangidae
Family Fabriciidae
Family Oweniidae
Family Sabellariidae
Family Sabellidae
Family Serpulidae
Family Siboglinidae (formerly the phyla Pogonophora & Vestimentifera)
Order Spionida
Suborder Spioniformia
Family Apistobranchidae
Family Longosomatidae
Family Magelonidae
Family Poecilochaetidae
Family Spionidae
Family Trochochaetidae
Family Uncispionidae
Order Terebellida
Suborder Cirratuliformia
Family Acrocirridae (sometimes placed in Spionida)
Family Cirratulidae (sometimes placed in Spionida)
Family Ctenodrilidae (sometimes own suborder Ctenodrilida)
Family Fauveliopsidae (sometimes own suborder Fauveliopsida)
Family Flabelligeridae (sometimes suborder Flabelligerida)
Family Flotidae (sometimes included in Flabelligeridae)
Family Poeobiidae (sometimes own suborder Poeobiida or included in Flabelligerida)
Family Sternaspidae (sometimes own suborder Sternaspida)
Suborder Terebellomorpha
Family Alvinellidae
Family Ampharetidae
Family Pectinariidae
Family Terebellidae
Family Trichobranchidae
Infraclass Scolecida
Family Arenicolidae
Family Capitellidae
Family Cossuridae
Family Maldanidae
Family Opheliidae
Family Orbiniidae
Family Paraonidae
Family Scalibregmatidae
Order Capitellida (nomen dubium)
Order Cossurida (nomen dubium)
Order Opheliida (nomen dubium)
Order Orbiniida (nomen dubium)
Order Questida (nomen dubium)
Order Scolecidaformia (nomen dubium)
Subclass Echiura
Order Bonelliida
Family Bonelliidae
Family Ikedidae
Order Echiurida
Family Echiuridae
Family Thalassematidae
Family Urechidae
See also
Aelosoma
Edith Berkeley
Australonuphis
References
Bibliography
Campbell, Reece, and Mitchell. Biology. 1999.
Notes
External links
World Polychaeta Database
Special issue of Marine Ecology dedicated to polychaetes
Marine Polychaete Larva, a guide to the marine zooplankton of south eastern Australia
Key to Families of Polychaetes, Natural History Museum
Extant Cambrian first appearances
Paraphyletic groups | Polychaete | [
"Biology"
] | 3,771 | [
"Phylogenetics",
"Paraphyletic groups"
] |
43,218 | https://en.wikipedia.org/wiki/Zipf%27s%20law | Zipf's law (; ) is an empirical law stating that when a list of measured values is sorted in decreasing order, the value of the -th entry is often approximately inversely proportional to .
The best known instance of Zipf's law applies to the frequency table of words in a text or corpus of natural language:
It is usually found that the most common word occurs approximately twice as often as the next common one, three times as often as the third most common, and so on. For example, in the Brown Corpus of American English text, the word "the" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's law, the second-place word "of" accounts for slightly over 3.5% of words (36,411 occurrences), followed by "and" (28,852). It is often used in the following form, called Zipf-Mandelbrot law:
frequency ∝ 1 / (rank + b)^a,
where a and b are fitted parameters, with a ≈ 1, and b ≈ 2.7.
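As a rough illustration of the plain 1/rank form, the following Python sketch compares the Zipf prediction anchored at the most frequent word with the three Brown Corpus counts quoted above; the counts are taken from the text, nothing is fitted, and the script is a back-of-the-envelope check rather than a rigorous test.

```python
# Minimal sketch: compare the plain Zipf prediction f(n) ≈ f(1)/n with the
# Brown Corpus counts quoted above (taken from the text, not re-derived).
observed = {1: ("the", 69_971), 2: ("of", 36_411), 3: ("and", 28_852)}

f1 = observed[1][1]  # anchor the prediction at the most frequent word

for rank, (word, count) in observed.items():
    predicted = f1 / rank  # plain Zipf's law with exponent 1
    print(f"rank {rank} ({word!r}): observed {count}, predicted {predicted:.0f}")
```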
This law is named after the American linguist George Kingsley Zipf, and is still an important concept in quantitative linguistics. It has been found to apply to many other types of data studied in the physical and social sciences.
In mathematical statistics, the concept has been formalized as the Zipfian distribution: A family of related discrete probability distributions whose rank-frequency distribution is an inverse power law relation. They are related to Benford's law and the Pareto distribution.
Some sets of time-dependent empirical data deviate somewhat from Zipf's law. Such empirical distributions are said to be quasi-Zipfian.
History
In 1913, the German physicist Felix Auerbach observed an inverse proportionality between the population sizes of cities, and their ranks when sorted by decreasing order of that variable.
Zipf's law had been discovered before Zipf, first by the French stenographer Jean-Baptiste Estoup in 1916, and also by G. Dewey in 1923, and by E. Condon in 1928.
The same relation for frequencies of words in natural language texts was observed by George Zipf in 1932, but he never claimed to have originated it. In fact, Zipf did not like mathematics. In his 1932 publication, the author speaks with disdain about mathematical involvement in linguistics, a.o. ibidem, p. 21:
... let me say here for the sake of any mathematician who may plan to formulate the ensuing data more exactly, the ability of the highly intense positive to become the highly intense negative, in my opinion, introduces the devil into the formula in the form of √−1.
The only mathematical expression Zipf used looks like a·b² = constant, which he "borrowed" from Alfred J. Lotka's 1926 publication.
The same relationship was found to occur in many other contexts, and for other variables besides frequency. For example, when corporations are ranked by decreasing size, their sizes are found to be inversely proportional to the rank. The same relation is found for personal incomes (where it is called the Pareto principle), numbers of people watching the same TV channel, notes in music, cell transcriptomes, and more.
In 1992 bioinformatician Wentian Li published a short paper showing that Zipf's law emerges even in randomly generated texts. It included proof that the power law form of Zipf's law was a byproduct of ordering words by rank.
Formal definition
Formally, the Zipf distribution on N elements assigns to the element of rank k (counting from 1) the probability:
f(k; N) = (1/k) / H_N if 1 ≤ k ≤ N, and 0 otherwise,
where H_N is a normalization constant: the N-th harmonic number:
H_N = 1/1 + 1/2 + ... + 1/N.
The distribution is sometimes generalized to an inverse power law with exponent s instead of 1. Namely,
f(k; N, s) = (1/k^s) / H_{N,s},
where H_{N,s} is a generalized harmonic number:
H_{N,s} = 1/1^s + 1/2^s + ... + 1/N^s.
The generalized Zipf distribution can be extended to infinitely many items (N = ∞) only if the exponent s exceeds 1. In that case, the normalization constant H_{N,s} becomes Riemann's zeta function,
ζ(s) = 1/1^s + 1/2^s + 1/3^s + ... < ∞.
The infinite item case is characterized by the Zeta distribution and is called Lotka's law. If the exponent s is 1 or less, the normalization constant H_{N,s} diverges as N tends to infinity.
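A minimal Python sketch of the finite Zipf distribution defined above; the function names are illustrative choices, not part of any standard library, and the loop-based normalization is written for clarity rather than efficiency.

```python
def generalized_harmonic(N: int, s: float) -> float:
    """H_{N,s} = 1/1**s + 1/2**s + ... + 1/N**s, the normalization constant."""
    return sum(1.0 / k**s for k in range(1, N + 1))

def zipf_pmf(k: int, N: int, s: float = 1.0) -> float:
    """Probability of the element of rank k under the Zipf distribution."""
    if not 1 <= k <= N:
        return 0.0
    return (1.0 / k**s) / generalized_harmonic(N, s)

# Example: a 10-element Zipf distribution with exponent s = 1.
probs = [zipf_pmf(k, N=10) for k in range(1, 11)]
print(probs)       # decreasing, proportional to 1/k
print(sum(probs))  # sums to 1 (up to floating-point rounding)
```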
Empirical testing
Empirically, a data set can be tested to see whether Zipf's law applies by checking the goodness of fit of an empirical distribution to the hypothesized power law distribution with a Kolmogorov–Smirnov test, and then comparing the (log) likelihood ratio of the power law distribution to alternative distributions like an exponential distribution or lognormal distribution.
Zipf's law can be visualized by plotting the item frequency data on a log-log graph, with the axes being the logarithm of rank order and the logarithm of frequency. The data conform to Zipf's law with exponent s to the extent that the plot approximates a linear (more precisely, affine) function with slope -s. For exponent s = 1, one can also plot the reciprocal of the frequency (mean interword interval) against rank, or the reciprocal of rank against frequency, and compare the result with a straight line through the origin.
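The log-log check can be sketched in a few lines of Python; the rank-frequency values below are invented purely for illustration, and the least-squares slope is a quick diagnostic only, not a substitute for the maximum-likelihood fit and Kolmogorov–Smirnov test mentioned above.

```python
import math

# Invented rank-frequency data, roughly proportional to 1000 / rank.
freqs = [1000, 520, 330, 260, 195, 170, 150, 128, 115, 98]
ranks = range(1, len(freqs) + 1)

xs = [math.log(r) for r in ranks]   # log of rank
ys = [math.log(f) for f in freqs]   # log of frequency

# Ordinary least-squares slope of log(frequency) against log(rank).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
den = sum((x - mx) ** 2 for x in xs)
slope = num / den

print(f"fitted slope: {slope:.2f}")        # near -1 for Zipf-like data
print(f"implied exponent s: {-slope:.2f}")
```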
Statistical explanations
Although Zipf's Law holds for most natural languages, and even certain artificial ones such as Esperanto and Toki Pona, the reason is still not well understood. Recent reviews of generative processes for Zipf's law include Mitzenmacher, "A Brief History of Generative Models for Power Law and Lognormal Distributions", and Simkin, "Re-inventing Willis".
However, it may be partly explained by statistical analysis of randomly generated texts. Wentian Li has shown that in a document in which each character has been chosen randomly from a uniform distribution of all letters (plus a space character), the "words" with different lengths follow the macro-trend of Zipf's law (the more probable words are the shortest and have equal probability). In 1959, Vitold Belevitch observed that if any of a large class of well-behaved statistical distributions (not only the normal distribution) is expressed in terms of rank and expanded into a Taylor series, the first-order truncation of the series results in Zipf's law. Further, a second-order truncation of the Taylor series resulted in Mandelbrot's law.
The principle of least effort is another possible explanation:
Zipf himself proposed that neither speakers nor hearers using a given language wants to work any harder than necessary to reach understanding, and the process that results in approximately equal distribution of effort leads to the observed Zipf distribution.
A minimal explanation assumes that words are generated by monkeys typing randomly. If language is generated by a single monkey typing randomly, with fixed and nonzero probability of hitting each letter key or white space, then the words (letter strings separated by white spaces) produced by the monkey follow Zipf's law.
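The random-typing argument can be checked directly by simulation; the sketch below follows the spirit of Li's construction, with an arbitrary five-letter alphabet and a text length chosen only to keep the run fast.

```python
import random
from collections import Counter

random.seed(0)
alphabet = list("abcde")  # small alphabet chosen only for a quick demonstration

# Each keystroke is a space or a uniformly random letter, so "word" lengths
# follow a geometric distribution and shorter words are more probable.
chars = random.choices(alphabet + [" "], k=500_000)
words = "".join(chars).split()

counts = Counter(words)
for rank, (word, count) in enumerate(counts.most_common(10), start=1):
    print(f"rank {rank:2d}: {word!r:8s} count {count}")
# Counts fall off roughly as a power of rank, as in Zipf's law, with plateaus
# of equal-probability words of the same length.
```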
Another possible cause for the Zipf distribution is a preferential attachment process, in which the value x of an item tends to grow at a rate proportional to x (intuitively, "the rich get richer" or "success breeds success"). Such a growth process results in the Yule–Simon distribution, which has been shown to fit word frequency versus rank in language and population versus city rank better than Zipf's law. It was originally derived to explain population versus rank in species by Yule, and applied to cities by Simon.
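A toy "rich get richer" simulation in the spirit of the preferential attachment process just described; the step count and innovation probability are arbitrary illustrative choices, not parameters taken from Yule's or Simon's work.

```python
import random
from collections import Counter

random.seed(1)
new_item_prob = 0.05   # probability that a step introduces a brand-new item
stream = [0]           # start with a single occurrence of item 0
next_id = 1

for _ in range(200_000):
    if random.random() < new_item_prob:
        stream.append(next_id)   # innovation: a new item appears once
        next_id += 1
    else:
        # Copy an existing occurrence: an item is chosen with probability
        # proportional to how often it already occurs ("the rich get richer").
        stream.append(random.choice(stream))

for rank, (item, count) in enumerate(Counter(stream).most_common(5), start=1):
    print(f"rank {rank}: item {item} occurs {count} times")
# The resulting counts are heavy-tailed in rank, as in the Yule-Simon and
# Zipf-like distributions discussed above.
```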
A similar explanation is based on atlas models, systems of exchangeable positive-valued diffusion processes with drift and variance parameters that depend only on the rank of the process. It has been shown mathematically that Zipf's law holds for Atlas models that satisfy certain natural regularity conditions.
Related laws
A generalization of Zipf's law is the Zipf–Mandelbrot law, proposed by Benoit Mandelbrot, whose frequencies are:
f(k; N, q, s) = (1/C) · 1/(k + q)^s for ranks k = 1, ..., N.
The constant C is the corresponding normalizing sum over 1/(k + q)^s; for infinitely many items it is the Hurwitz zeta function ζ(s, q + 1).
Zipfian distributions can be obtained from Pareto distributions by an exchange of variables.
The Zipf distribution is sometimes called the discrete Pareto distribution because it is analogous to the continuous Pareto distribution in the same way that the discrete uniform distribution is analogous to the continuous uniform distribution.
The tail frequencies of the Yule–Simon distribution are approximately
f(k; ρ) ≈ [constant] / k^(ρ + 1)
for any choice of the parameter ρ > 0.
In the parabolic fractal distribution, the logarithm of the frequency is a quadratic polynomial of the logarithm of the rank. This can markedly improve the fit over a simple power-law relationship. Like fractal dimension, it is possible to calculate Zipf dimension, which is a useful parameter in the analysis of texts.
It has been argued that Benford's law is a special bounded case of Zipf's law, with the connection between these two laws being explained by their both originating from scale invariant functional relations from statistical physics and critical phenomena. The ratios of probabilities in Benford's law are not constant. The leading digits of data satisfying Zipf's law with s = 1 satisfy Benford's law.
Occurrences
City sizes
Following Auerbach's 1913 observation, there has been substantial examination of Zipf's law for city sizes. However, more recent empirical and theoretical studies have challenged the relevance of Zipf's law for cities.
Word frequencies in natural languages
In many texts in human languages, word frequencies approximately follow a Zipf distribution with exponent s close to 1; that is, the most common word occurs about n times as often as the n-th most common one.
The actual rank-frequency plot of a natural language text deviates to some extent from the ideal Zipf distribution, especially at the two ends of the range. The deviations may depend on the language, on the topic of the text, on the author, on whether the text was translated from another language, and on the spelling rules used. Some deviation is inevitable because of sampling error.
At the low-frequency end, where the rank approaches the total number of distinct words, the plot takes a staircase shape, because each word can occur only an integer number of times.
In some Romance languages, the frequencies of the dozen or so most frequent words deviate significantly from the ideal Zipf distribution, because those words include articles inflected for grammatical gender and number.
In many East Asian languages, such as Chinese, Tibetan, and Vietnamese, each morpheme (word or word piece) consists of a single syllable; a word of English often being translated to a compound of two such syllables. The rank-frequency table for those morphemes deviates significantly from the ideal Zipf law, at both ends of the range.
Even in English, the deviations from the ideal Zipf's law become more apparent as one examines large collections of texts. Analysis of a corpus of 30,000 English texts showed that only about 15% of the texts in it have a good fit to Zipf's law. Slight changes in the definition of Zipf's law can increase this percentage up to close to 50%.
In these cases, the observed frequency-rank relation can be modeled more accurately by separate Zipf–Mandelbrot distributions for different subsets or subtypes of words. This is the case for the frequency-rank plot of the first 10 million words of the English Wikipedia. In particular, the frequencies of the closed class of function words in English are better described with s lower than 1, while open-ended vocabulary growth with document size and corpus size requires s greater than 1 for convergence of the generalized harmonic series.
When a text is encrypted in such a way that every occurrence of each distinct plaintext word is always mapped to the same encrypted word (as in the case of simple substitution ciphers, like the Caesar ciphers, or simple codebook ciphers), the frequency-rank distribution is not affected. On the other hand, if separate occurrences of the same word may be mapped to two or more different words (as happens with the Vigenère cipher), the Zipf distribution will typically have a flat part at the high-frequency end.
Applications
Zipf's law has been used for extraction of parallel fragments of texts out of comparable corpora. Laurance Doyle and others have suggested the application of Zipf's law for detection of alien language in the search for extraterrestrial intelligence.
The frequency-rank word distribution is often characteristic of the author and changes little over time. This feature has been used in the analysis of texts for authorship attribution.
The word-like sign groups of the 15th-century codex Voynich Manuscript have been found to satisfy Zipf's law, suggesting that text is most likely not a hoax but rather written in an obscure language or cipher.
See also
Letter frequency
Most common words in English
Notes
References
Further reading
External links
—An article on Zipf's law applied to city populations
Seeing Around Corners (Artificial societies turn up Zipf's law)
PlanetMath article on Zipf's law
Distributions de type "fractal parabolique" dans la Nature (French, with English summary)
An analysis of income distribution
Zipf List of French words
Zipf list for English, French, Spanish, Italian, Swedish, Icelandic, Latin, Portuguese and Finnish from Gutenberg Project and online calculator to rank words in texts
Citations and the Zipf–Mandelbrot's law
Zipf's Law examples and modelling (1985)
Complex systems: Unzipping Zipf's law (2011)
Benford's law, Zipf's law, and the Pareto distribution by Terence Tao.
Discrete distributions
Computational linguistics
Power laws
Statistical laws
Empirical laws
Eponyms
Tails of probability distributions
Quantitative linguistics
Bibliometrics
Corpus linguistics
1949 introductions | Zipf's law | [
"Mathematics",
"Technology"
] | 2,801 | [
"Metrics",
"Bibliometrics",
"Quantity",
"Science and technology studies",
"Computational linguistics",
"Natural language and computing"
] |
43,221 | https://en.wikipedia.org/wiki/E%20number | E numbers, short for Europe numbers, are codes for substances used as food additives, including those found naturally in many foods, such as vitamin C, for use within the European Union (EU) and European Free Trade Association (EFTA). Commonly found on food labels, their safety assessment and approval are the responsibility of the European Food Safety Authority (EFSA). The fact that an additive has an E number implies that its use was at one time permitted in products for sale in the European Single Market; some of these additives are no longer allowed today.
A single unified list of food additives was first agreed upon in 1962, initially covering food colourings. Directives for preservatives were added in 1964, antioxidants in 1970, and emulsifiers, stabilisers, thickeners, and gelling agents in 1974.
Numbering schemes
The numbering scheme follows that of the International Numbering System (INS) as determined by the Codex Alimentarius committee, though only a subset of the INS additives are approved for use in the European Union as food additives. Outside the European continent plus Russia, E numbers are also encountered on food labelling in other jurisdictions, including the Cooperation Council for the Arab States of the Gulf, South Africa, Australia, New Zealand, Malaysia, Hong Kong, and India.
Colloquial use
In some European countries, the "E number" is used informally as a derogatory term for artificial food additives. For example, in the UK, food companies are required to include the "E number(s)" in the ingredients that are added as part of the manufacturing process. Many components of naturally occurring healthy foods and vitamins have assigned E numbers (and the number is a synonym for the chemical component), e.g. vitamin C (E300) and lycopene (E160d), found in tomatoes. At the same time, "E number" is sometimes misunderstood to imply approval for safe consumption. This is not necessarily the case, e.g. Avoparcin (E715) is an antibiotic once used in animal feed, but is no longer permitted in the EU, and has never been permitted for human consumption. Sodium nitrite (E250) is toxic. Sulfuric acid (E513) is caustic.
Classification by numeric range
Not all examples of a class fall into the given numeric range; moreover, certain chemicals (particularly in the E400–499 range) have a variety of purposes.
Full list
The list shows all components that have an E number assigned, even those no longer allowed in the EU; a small range-lookup sketch in Python follows the list of ranges below.
E100–E199 (colours)
E200–E299 (preservatives)
E300–E399 (antioxidants, acidity regulators)
E400–E499 (thickeners, stabilisers, emulsifiers)
E500–E599 (acidity regulators, anti-caking agents)
E600–E699 (flavour enhancers)
E700–E799 (antibiotics)
E900–E999 (glazing agents, gases and sweeteners)
E1000–E1599 (additional additives)
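The range-based classification above can be expressed as a simple lookup table. The following Python sketch is illustrative only: the dictionary and function names are invented for this example, and, as noted above, some additives fall outside their nominal range.

    # Broad E-number classes keyed by the start of each nominal range.
    E_NUMBER_CLASSES = {
        100: "colours",
        200: "preservatives",
        300: "antioxidants, acidity regulators",
        400: "thickeners, stabilisers, emulsifiers",
        500: "acidity regulators, anti-caking agents",
        600: "flavour enhancers",
        700: "antibiotics",
        900: "glazing agents, gases and sweeteners",
        1000: "additional additives",
    }

    def e_number_class(number: int) -> str:
        """Return the nominal class for an E number, e.g. 300 -> antioxidants."""
        if 1000 <= number <= 1599:
            return E_NUMBER_CLASSES[1000]
        base = (number // 100) * 100
        return E_NUMBER_CLASSES.get(base, "unassigned range")

    print(e_number_class(300))  # antioxidants, acidity regulators (vitamin C is E300)
    print(e_number_class(160))  # colours (lycopene is E160d)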
See also
Food Chemicals Codex
List of food additives
International Numbering System for Food Additives
Clean label
References
External links
CODEXALIMENTARIUS FAO-WHO, the international foods standards, established by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) in 1963
See also their document "Class Names and the International Numbering System for Food Additives" (Ref: CAC/GL #36 publ. in 1989, Revised in 2008, Amended in 2018, 2019, 2021)
Joint FAO/WHO Expert Committee on Food Additives (JECFA) publications at the World Health Organization (WHO)
Food Additive Index, JECFA, Food and Agriculture Organization (FAO)
E-codes and ingredients search engine with details/suggestions for Muslims
Databases of EU-approved food additives and flavoring substances
Food Additives in the European Union
The Food Additives and Ingredients Association, FAIA website, UK.
Chemical numbering schemes
Chemistry-related lists
Food additives
European Union food law
1962 introductions
1962 neologisms
Number-related lists | E number | [
"Chemistry",
"Mathematics"
] | 899 | [
"Mathematical objects",
"Chemical numbering schemes",
"nan",
"Numbers",
"Number-related lists"
] |
43,234 | https://en.wikipedia.org/wiki/Green%20flash | The green flash and green ray are meteorological optical phenomena that sometimes occur transiently around the moment of sunset or sunrise. When the conditions are right, a distinct green spot is briefly visible above the Sun's upper limb; the green appearance usually lasts for no more than two seconds. Rarely, the green flash can resemble a green ray shooting up from the sunset or sunrise point.
Green flashes occur because the Earth's atmosphere can cause the light from the Sun to separate, or refract, into different colors. Green flashes are a group of similar phenomena that stem from slightly different causes, and therefore, some types of green flashes are more common than others.
Observing
Green flashes may be observed from any altitude. They usually are seen at an unobstructed horizon, such as over the ocean, but are possible over cloud tops and mountain tops as well. They may occur at any latitude, although at the equator, the flash rarely lasts longer than a second.
The green flash also may be observed in association with the Moon and bright planets at the horizon, including Venus and Jupiter. With an unrestricted view of the horizon, green flashes are regularly seen by airline pilots, particularly when flying westwards as the sunset is slowed. If the atmosphere is layered, the green flash may appear as a series of flashes.
While observing at the Vatican Observatory in 1960, D.J.K. O'Connell produced the first color photograph of the green flash at sunset.
Explanation
The green flash occurs because the atmosphere refracts the Sun's light by an amount that depends on wavelength, dispersing it into its component colors. Green flashes are enhanced by mirages, which increase the refraction. A green flash is more likely to be seen in stable, clear air, when more of the light from the setting sun reaches the observer without being scattered. One might expect to see a blue flash, since blue light is refracted most of all and the blue component of the sun's light is therefore the last to disappear below the horizon, but the blue is preferentially scattered out of the line of sight, and the remaining light ends up appearing green.
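The size of the underlying dispersion effect can be estimated with a back-of-the-envelope calculation. The Python sketch below uses a standard horizon refraction of about 34 arcminutes and approximate Cauchy dispersion coefficients for air; all numerical values are rough, illustrative assumptions rather than precise atmospheric data.

    # Approximate Cauchy dispersion for air at standard conditions:
    #   n(lambda) - 1 ~= A * (1 + B / lambda**2), lambda in micrometres.
    # A and B are rough textbook values, used here only for illustration.
    A = 2.88e-4
    B = 5.67e-3

    def refractivity(lam_um: float) -> float:
        """Refractivity (n - 1) of air at wavelength lam_um in micrometres."""
        return A * (1.0 + B / lam_um**2)

    # Astronomical refraction at the horizon is roughly 34 arcminutes and is,
    # to first order, proportional to (n - 1).
    R_HORIZON_ARCMIN = 34.0

    red, green = 0.700, 0.530  # wavelengths in micrometres
    ratio = (refractivity(green) - refractivity(red)) / refractivity(red)
    delta_arcsec = R_HORIZON_ARCMIN * 60.0 * ratio
    print(f"green-red separation at the horizon: about {delta_arcsec:.0f} arcseconds")
    # Around 15-20 arcseconds, a tiny fraction of the Sun's ~1920-arcsecond
    # diameter, which is why mirage magnification is needed for a visible flash.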
With slight magnification, a green rim on the top of the solar disk may be seen on most clear-day sunsets, although the flash or ray effects require a stronger layering of the atmosphere and a mirage, which serves to magnify the green from a fraction of a second to a couple of seconds.
A far more speculative proposal attributes some of the flash's brightness to upconversion of near-infrared light. It draws on research at the Washington University School of Medicine showing that invisible infrared laser light can be perceived as bright green when two infrared photons are absorbed together by the retina (a "double hit" of photons). Since doubling the wavelength of green light yields roughly 1000–1100 nm infrared light, the suggestion is that infrared sunlight perceived in this way would appear green, or under other conditions blue or purple. This hypothesis is not part of the mainstream explanation, which rests on wavelength-dependent refraction, scattering, and mirage magnification.
Types
The "green flash" description relates to a group of optical phenomena, some of which are listed below:
The majority of flashes observed are inferior-mirage or mock-mirage effects, with the others constituting only 1% of reports. Some types not listed in the table above, such as the cloud-top flash (seen as the Sun sinks into a coastal fog, or at distant cumulus clouds), are not understood.
Blue flashes
On rare occasion, the amount of blue light is sufficient to be visible as a "blue flash".
Green rim
As an astronomical object sets or rises in relation to the horizon, the light it emits travels through Earth's atmosphere, which works as a prism separating the light into different colors. The color of the upper rim of an astronomical object can range from green to blue to violet, depending on the decrease in the concentration of pollutants as they spread through an increasing volume of atmosphere; the lower rim of an astronomical object is always red. A green rim is very thin and is difficult or impossible to see with the naked eye. Under usual conditions the green rim of an astronomical object grows fainter as the object sinks very low above the horizon because of atmospheric reddening, but sometimes the conditions are right to see a green rim just above the horizon.
Probably the longest recorded observation of a green rim, which at times may have been a green flash, was made by members of the Richard Evelyn Byrd party at the Antarctic Little America exploration base in 1934; they reported seeing it on and off for 35 minutes.
For the explorers to have seen a green rim on and off for 35 minutes, there must have been some mirage effect present.
A green rim is present at every sunset, but it is too thin to be seen with the naked eye. Often a green rim changes to a green flash and back again during the same sunset. The best time to observe a green rim is about 10 minutes before sunset, but at that point the Sun is still far too bright to look at directly through binoculars or a telescope without risking eye damage. (A magnified image can, however, be projected onto a sheet of paper for safe viewing.) As the Sun gets closer to the horizon, the green rim becomes fainter due to atmospheric reddening. Although a green rim is thus present during every sunset, a green flash is rarer because it requires the additional mirage.
In popular culture
Jules Verne's 1882 novel The Green Ray helped to popularize the green flash phenomenon.
In Éric Rohmer's 1986 film The Green Ray (French: Le rayon vert), the main character, Delphine, eavesdrops on a conversation about Jules Verne's novel and the significance of the green flash, eventually witnessing the phenomenon herself in the final scene.
In "Arthur's New Year's Eve" from the first season of Arthur in 1996, Arthur Read, having never stayed up until midnight on New Year's Eve before, talks with his friends about what happens when the New Year comes. Despite not actually having stayed up themselves, they each share their take on the matter, Prunella Deegan telling him that there is an amazing green flash at midnight, but if it doesn't happen, then it has to stay the same year for another whole year.
Walt Disney Pictures' 2007 movie Pirates of the Caribbean: At World's End references the green flash as a signal that a soul had returned from the dead.
The episode Trials and Determinations! of Pokémon the Series: Sun & Moon references the green flash when Ash's Rockruff evolves into Dusk Form Lycanroc after witnessing a green flash at sunset.
See also
Mirage of astronomical objects
Crown flash
Fogbow
References
Further reading
David Winsta "Atmospheric Refraction and the Last Rays of the Setting Sun", reported at the Manchester Literary & Philosophical Society Meeting, 7 October 1873
Sir Arthur Schuster, Letter to NATURE, 21 February 1915, referring to his observation of the phenomenon on a voyage in the Indian Ocean in 1875
Captain Alfred Carpenter & Captain D. Wilson-Barker, Nature Notes for Ocean Voyagers (London, 1915), reported on page 147
External links
A Green Flash Page, Andrew T. Young's page with comprehensive explanations and simulations
Green Flash – Atmospheric Optics, explanations and image gallery, Les Cowley's Atmospheric Optics site
06/03/2010 Photograph of a green flash over the Indian Ocean
Green Flash Videos
Atmospheric optical phenomena
Solar phenomena
Sky | Green flash | [
"Physics"
] | 1,627 | [
"Physical phenomena",
"Earth phenomena",
"Optical phenomena",
"Solar phenomena",
"Stellar phenomena",
"Atmospheric optical phenomena"
] |
43,246 | https://en.wikipedia.org/wiki/Coalition%20for%20Positive%20Sexuality | Coalition for Positive Sexuality is an internet-based comprehensive sexuality education website, for youth and young adults.
The most commonly used feature of the CPS website is the "Let's Talk" feature, which allows youth members to post anonymously and receive answers to sexual health-related questions from moderators.
Staff and moderators
CPS staff and moderators are graduate and doctoral-level health educators, researchers, legal professionals, counselors and advocates with years of professional experience addressing myriad adolescent health issues, including teen pregnancy prevention, sexual orientation, reproductive health laws, and sexually transmitted infections (STI) and HIV/AIDS prevention.
References
http://www.positive.org
Health education
Human sexuality | Coalition for Positive Sexuality | [
"Biology"
] | 142 | [
"Human sexuality",
"Behavior",
"Sexuality stubs",
"Sexuality",
"Human behavior"
] |
43,270 | https://en.wikipedia.org/wiki/Trace%20%28linear%20algebra%29 | In linear algebra, the trace of a square matrix , denoted , is the sum of the elements on its main diagonal, . It is only defined for a square matrix ().
The trace of a matrix is the sum of its eigenvalues (counted with multiplicities). Also, $\operatorname{tr}(\mathbf{AB}) = \operatorname{tr}(\mathbf{BA})$ for any matrices $\mathbf{A}$ and $\mathbf{B}$ of the same size. Thus, similar matrices have the same trace. As a consequence, one can define the trace of a linear operator mapping a finite-dimensional vector space into itself, since all matrices describing such an operator with respect to a basis are similar.
The trace is related to the derivative of the determinant (see Jacobi's formula).
Definition
The trace of an $n \times n$ square matrix $\mathbf{A}$ is defined as
$$\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \dots + a_{nn},$$
where $a_{ij}$ denotes the entry on the $i$th row and $j$th column of $\mathbf{A}$. The entries of $\mathbf{A}$ can be real numbers, complex numbers, or more generally elements of a field $F$. The trace is not defined for non-square matrices.
Example
Let $\mathbf{A}$ be a $3 \times 3$ matrix, with
$$\mathbf{A} = \begin{pmatrix} 1 & 0 & 3 \\ 11 & 5 & 2 \\ 6 & 12 & -5 \end{pmatrix}.$$
Then
$$\operatorname{tr}(\mathbf{A}) = a_{11} + a_{22} + a_{33} = 1 + 5 + (-5) = 1.$$
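As a quick check, the same computation in numpy (a sketch, assuming numpy is available):

    import numpy as np

    A = np.array([[1, 0, 3],
                  [11, 5, 2],
                  [6, 12, -5]])

    # The trace is the sum of the main-diagonal entries: 1 + 5 + (-5) = 1.
    print(np.trace(A))         # 1
    print(A.diagonal().sum())  # 1, the same sum computed explicitly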
Properties
Basic properties
The trace is a linear mapping. That is,
$$\operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B}), \qquad \operatorname{tr}(c\mathbf{A}) = c\operatorname{tr}(\mathbf{A}),$$
for all square matrices $\mathbf{A}$ and $\mathbf{B}$, and all scalars $c$.
A matrix and its transpose have the same trace:
$$\operatorname{tr}(\mathbf{A}) = \operatorname{tr}\big(\mathbf{A}^{\mathsf{T}}\big).$$
This follows immediately from the fact that transposing a square matrix does not affect elements along the main diagonal.
Trace of a product
The trace of a square matrix which is the product of two matrices can be rewritten as the sum of entry-wise products of their elements, i.e. as the sum of all elements of their Hadamard product. Phrased directly, if $\mathbf{A}$ and $\mathbf{B}$ are two $m \times n$ matrices, then:
$$\operatorname{tr}\big(\mathbf{A}^{\mathsf{T}}\mathbf{B}\big) = \operatorname{tr}\big(\mathbf{A}\mathbf{B}^{\mathsf{T}}\big) = \operatorname{tr}\big(\mathbf{B}^{\mathsf{T}}\mathbf{A}\big) = \operatorname{tr}\big(\mathbf{B}\mathbf{A}^{\mathsf{T}}\big) = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij} b_{ij}.$$
If one views any real $m \times n$ matrix as a vector of length $mn$ (an operation called vectorization) then the above operation on $\mathbf{A}$ and $\mathbf{B}$ coincides with the standard dot product. According to the above expression, $\operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{A})$ is a sum of squares and hence is nonnegative, equal to zero if and only if $\mathbf{A}$ is zero. Furthermore, as noted in the above formula, $\operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{B}) = \operatorname{tr}(\mathbf{B}^{\mathsf{T}}\mathbf{A})$. These demonstrate the positive-definiteness and symmetry required of an inner product; it is common to call $\operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{B})$ the Frobenius inner product of $\mathbf{A}$ and $\mathbf{B}$. This is a natural inner product on the vector space of all real matrices of fixed dimensions. The norm derived from this inner product is called the Frobenius norm, and it satisfies a submultiplicative property, as can be proven with the Cauchy–Schwarz inequality:
$$0 \le \big[\operatorname{tr}(\mathbf{A}\mathbf{B})\big]^2 \le \operatorname{tr}\big(\mathbf{A}^2\big)\operatorname{tr}\big(\mathbf{B}^2\big) \le \big[\operatorname{tr}(\mathbf{A})\big]^2 \big[\operatorname{tr}(\mathbf{B})\big]^2,$$
if $\mathbf{A}$ and $\mathbf{B}$ are real positive semi-definite matrices of the same size. The Frobenius inner product and norm arise frequently in matrix calculus and statistics.
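These identities are easy to verify numerically; the following numpy sketch (random matrices, names chosen for this example) checks them:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 4))
    B = rng.standard_normal((3, 4))

    # tr(A^T B) equals the sum of entrywise products (the Frobenius inner
    # product), which is also the dot product of the vectorized matrices.
    lhs = np.trace(A.T @ B)
    print(np.isclose(lhs, (A * B).sum()))          # True
    print(np.isclose(lhs, A.ravel() @ B.ravel()))  # True

    # tr(A^T A) is a sum of squares: the squared Frobenius norm.
    print(np.isclose(np.trace(A.T @ A), np.linalg.norm(A, "fro") ** 2))  # True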
The Frobenius inner product may be extended to a hermitian inner product on the complex vector space of all complex matrices of a fixed size, by replacing the transpose with the conjugate transpose.
The symmetry of the Frobenius inner product may be phrased more directly as follows: the matrices in the trace of a product can be switched without changing the result. If $\mathbf{A}$ and $\mathbf{B}$ are $m \times n$ and $n \times m$ real or complex matrices, respectively, then
$$\operatorname{tr}(\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A}).$$
This is notable both for the fact that $\mathbf{A}\mathbf{B}$ does not usually equal $\mathbf{B}\mathbf{A}$, and also since the trace of either does not usually equal $\operatorname{tr}(\mathbf{A})\operatorname{tr}(\mathbf{B})$. The similarity-invariance of the trace, meaning that $\operatorname{tr}(\mathbf{A}) = \operatorname{tr}(\mathbf{P}^{-1}\mathbf{A}\mathbf{P})$ for any square matrix $\mathbf{A}$ and any invertible matrix $\mathbf{P}$ of the same dimensions, is a fundamental consequence. This is proved by
$$\operatorname{tr}\big(\mathbf{P}^{-1}(\mathbf{A}\mathbf{P})\big) = \operatorname{tr}\big((\mathbf{A}\mathbf{P})\mathbf{P}^{-1}\big) = \operatorname{tr}(\mathbf{A}).$$
Similarity invariance is the crucial property of the trace in order to discuss traces of linear transformations as below.
Additionally, for real column vectors $\mathbf{a} \in \mathbb{R}^n$ and $\mathbf{b} \in \mathbb{R}^n$, the trace of the outer product is equivalent to the inner product:
$$\operatorname{tr}\big(\mathbf{b}\mathbf{a}^{\mathsf{T}}\big) = \mathbf{a}^{\mathsf{T}}\mathbf{b}.$$
Cyclic property
More generally, the trace is invariant under circular shifts, that is,
$$\operatorname{tr}(\mathbf{ABCD}) = \operatorname{tr}(\mathbf{BCDA}) = \operatorname{tr}(\mathbf{CDAB}) = \operatorname{tr}(\mathbf{DABC}).$$
This is known as the cyclic property.
Arbitrary permutations are not allowed: in general,
$$\operatorname{tr}(\mathbf{ABC}) \neq \operatorname{tr}(\mathbf{ACB}).$$
However, if products of three symmetric matrices are considered, any permutation is allowed, since:
$$\operatorname{tr}(\mathbf{ABC}) = \operatorname{tr}\big((\mathbf{ABC})^{\mathsf{T}}\big) = \operatorname{tr}(\mathbf{CBA}) = \operatorname{tr}(\mathbf{ACB}),$$
where the first equality is because the traces of a matrix and its transpose are equal. Note that this is not true in general for more than three factors.
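A numpy sketch of the cyclic property and its limits (random matrices; the symmetric case uses $\mathbf{M} + \mathbf{M}^{\mathsf{T}}$):

    import numpy as np

    rng = np.random.default_rng(1)
    A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

    # Circular shifts leave the trace unchanged ...
    print(np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A)))  # True
    print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))  # True

    # ... but an arbitrary swap generally changes it.
    print(np.isclose(np.trace(A @ B @ C), np.trace(A @ C @ B)))  # False (generically)

    # With three symmetric factors, any permutation gives the same trace.
    S, T, U = (M + M.T for M in (A, B, C))
    print(np.isclose(np.trace(S @ T @ U), np.trace(S @ U @ T)))  # True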
Trace of a Kronecker product
The trace of the Kronecker product of two matrices is the product of their traces:
$$\operatorname{tr}(\mathbf{A} \otimes \mathbf{B}) = \operatorname{tr}(\mathbf{A})\operatorname{tr}(\mathbf{B}).$$
Characterization of the trace
The following three properties:
$$\operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B}), \qquad \operatorname{tr}(c\mathbf{A}) = c\operatorname{tr}(\mathbf{A}), \qquad \operatorname{tr}(\mathbf{AB}) = \operatorname{tr}(\mathbf{BA}),$$
characterize the trace up to a scalar multiple in the following sense: If $f$ is a linear functional on the space of square matrices that satisfies $f(xy) = f(yx)$, then $f$ and $\operatorname{tr}$ are proportional.
For $n \times n$ matrices, imposing the normalization $f(\mathbf{I}) = n$ makes $f$ equal to the trace.
Trace as the sum of eigenvalues
Given any $n \times n$ matrix $\mathbf{A}$, there is
$$\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{n} \lambda_i,$$
where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $\mathbf{A}$ counted with multiplicity. This holds true even if $\mathbf{A}$ is a real matrix and some (or all) of the eigenvalues are complex numbers. This may be regarded as a consequence of the existence of the Jordan canonical form, together with the similarity-invariance of the trace discussed above.
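A quick numerical illustration (numpy sketch; the random seed is arbitrary):

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((5, 5))  # real matrix, generally complex eigenvalues

    # The trace equals the sum of the eigenvalues (with multiplicity); the
    # imaginary parts cancel in conjugate pairs for a real matrix.
    eig_sum = np.linalg.eigvals(A).sum()
    print(np.isclose(np.trace(A), eig_sum.real))  # True
    print(abs(eig_sum.imag) < 1e-10)              # True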
Trace of commutator
When both $\mathbf{A}$ and $\mathbf{B}$ are $n \times n$ matrices, the trace of the (ring-theoretic) commutator of $\mathbf{A}$ and $\mathbf{B}$ vanishes: $\operatorname{tr}([\mathbf{A}, \mathbf{B}]) = 0$, because $\operatorname{tr}(\mathbf{AB}) = \operatorname{tr}(\mathbf{BA})$ and $\operatorname{tr}$ is linear. One can state this as "the trace is a map of Lie algebras from operators to scalars", as the commutator of scalars is trivial (it is an Abelian Lie algebra). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices.
Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices. Moreover, any square matrix with zero trace is unitarily equivalent to a square matrix with diagonal consisting of all zeros.
Traces of special kinds of matrices
Relationship to the characteristic polynomial
The trace of an $n \times n$ matrix is the coefficient of $t^{n-1}$ in the characteristic polynomial, possibly up to a change of sign, according to the convention in the definition of the characteristic polynomial.
Relationship to eigenvalues
If $\mathbf{A}$ is a linear operator represented by a square matrix with real or complex entries and if $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $\mathbf{A}$ (listed according to their algebraic multiplicities), then
$$\operatorname{tr}(\mathbf{A}) = \sum_i \lambda_i.$$
This follows from the fact that $\mathbf{A}$ is always similar to its Jordan form, an upper triangular matrix having $\lambda_1, \ldots, \lambda_n$ on the main diagonal. In contrast, the determinant of $\mathbf{A}$ is the product of its eigenvalues; that is,
$$\det(\mathbf{A}) = \prod_i \lambda_i.$$
Everything in the present section applies as well to any square matrix with coefficients in an algebraically closed field.
Derivative relationships
If $\mathbf{A}$ is a square matrix with small entries and $\mathbf{I}$ denotes the identity matrix, then we have approximately
$$\det(\mathbf{I} + \mathbf{A}) \approx 1 + \operatorname{tr}(\mathbf{A}).$$
Precisely, this means that the trace is the derivative of the determinant function at the identity matrix. Jacobi's formula
$$d\det(\mathbf{A}) = \operatorname{tr}\big(\operatorname{adj}(\mathbf{A})\, d\mathbf{A}\big)$$
is more general and describes the differential of the determinant at an arbitrary square matrix, in terms of the trace and the adjugate of the matrix.
From this (or from the connection between the trace and the eigenvalues), one can derive a relation between the trace function, the matrix exponential function, and the determinant:
$$\det(\exp(\mathbf{A})) = \exp(\operatorname{tr}(\mathbf{A})).$$
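Both relations can be checked numerically; the sketch below assumes scipy is available for the matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    A = rng.standard_normal((4, 4))

    # det(exp(A)) = exp(tr(A))
    print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))  # True

    # First-order behaviour: det(I + eps*A) ~= 1 + eps*tr(A) for small eps.
    eps = 1e-6
    lhs = np.linalg.det(np.eye(4) + eps * A)
    print(np.isclose(lhs, 1 + eps * np.trace(A)))  # True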
A related characterization of the trace applies to linear vector fields. Given a matrix $\mathbf{A}$, define a vector field $\mathbf{F}$ on $\mathbb{R}^n$ by $\mathbf{F}(\mathbf{x}) = \mathbf{A}\mathbf{x}$. The components of this vector field are linear functions (given by the rows of $\mathbf{A}$). Its divergence $\operatorname{div}\mathbf{F}$ is a constant function, whose value is equal to $\operatorname{tr}(\mathbf{A})$.
By the divergence theorem, one can interpret this in terms of flows: if $\mathbf{F}(\mathbf{x})$ represents the velocity of a fluid at location $\mathbf{x}$ and $U$ is a region in $\mathbb{R}^n$, the net flow of the fluid out of $U$ is given by $\operatorname{tr}(\mathbf{A}) \cdot \operatorname{vol}(U)$, where $\operatorname{vol}(U)$ is the volume of $U$.
The trace is a linear operator, hence it commutes with the derivative:
$$d\operatorname{tr}(\mathbf{X}) = \operatorname{tr}(d\mathbf{X}).$$
Trace of a linear operator
In general, given some linear map $f : V \to V$ (where $V$ is a finite-dimensional vector space), we can define the trace of this map by considering the trace of a matrix representation of $f$, that is, choosing a basis for $V$ and describing $f$ as a matrix relative to this basis, and taking the trace of this square matrix. The result will not depend on the basis chosen, since different bases will give rise to similar matrices, allowing for the possibility of a basis-independent definition for the trace of a linear map.
Such a definition can be given using the canonical isomorphism between the space $\operatorname{End}(V)$ of linear maps on $V$ and $V \otimes V^{*}$, where $V^{*}$ is the dual space of $V$. Let $v$ be in $V$ and let $g$ be in $V^{*}$. Then the trace of the indecomposable element $v \otimes g$ is defined to be $g(v)$; the trace of a general element is defined by linearity. The trace of a linear map $f : V \to V$ can then be defined as the trace, in the above sense, of the element of $V \otimes V^{*}$ corresponding to $f$ under the above-mentioned canonical isomorphism. Using an explicit basis for $V$ and the corresponding dual basis for $V^{*}$, one can show that this gives the same definition of the trace as given above.
Numerical algorithms
Stochastic estimator
The trace can be estimated unbiasedly by "Hutchinson's trick": given any matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$, and any random vector $u$ with $\mathbb{E}[u u^{\mathsf{T}}] = \mathbf{I}$, we have $\mathbb{E}[u^{\mathsf{T}} \mathbf{A} u] = \operatorname{tr}(\mathbf{A})$. (Proof: expand the expectation directly.) Usually, the random vector is sampled from $N(0, \mathbf{I})$ (the normal distribution) or $\{\pm 1\}^n$ (the Rademacher distribution).
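A minimal sketch of this estimator with Rademacher probe vectors (numpy; the function name and sample count are choices made for this example):

    import numpy as np

    def hutchinson_trace(A, n_samples=10_000, seed=3):
        """Unbiased stochastic estimate of tr(A) using Rademacher probes u,
        for which E[u u^T] = I and hence E[u^T A u] = tr(A)."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        total = 0.0
        for _ in range(n_samples):
            u = rng.choice([-1.0, 1.0], size=n)
            total += u @ A @ u
        return total / n_samples

    A = np.random.default_rng(4).standard_normal((50, 50))
    print(np.trace(A), hutchinson_trace(A))  # the two values should be close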
More sophisticated stochastic estimators of trace have been developed.
Applications
If a 2 × 2 real matrix has zero trace, its square is a diagonal matrix (in fact, a scalar multiple of the identity).
The trace of a 2 × 2 complex matrix is used to classify Möbius transformations. First, the matrix is normalized to make its determinant equal to one. Then, if the square of the trace is 4, the corresponding transformation is parabolic. If the square is real and in the interval $[0, 4)$, it is elliptic. Finally, if the square is greater than 4, the transformation is loxodromic. See classification of Möbius transformations.
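A small sketch of this classification in Python (the function name and tolerance are choices made for this example):

    import numpy as np

    def classify_mobius(M, tol=1e-9):
        """Classify the Mobius transformation given by a 2x2 complex matrix M."""
        M = M / np.sqrt(np.linalg.det(M))  # normalize so that det(M) = 1
        t2 = np.trace(M) ** 2
        if abs(t2.imag) > tol:
            return "loxodromic"
        x = t2.real
        if abs(x - 4) <= tol:
            return "parabolic"
        if 0 <= x < 4:
            return "elliptic"
        return "loxodromic"  # tr^2 real and outside [0, 4]

    print(classify_mobius(np.array([[1, 1], [0, 1]], dtype=complex)))    # parabolic
    print(classify_mobius(np.array([[0, -1], [1, 0]], dtype=complex)))   # elliptic
    print(classify_mobius(np.array([[2, 0], [0, 0.5]], dtype=complex)))  # loxodromic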
The trace is used to define characters of group representations. Two representations $A, B : G \to GL(V)$ of a group $G$ are equivalent (up to change of basis on $V$) if $\operatorname{tr}(A(g)) = \operatorname{tr}(B(g))$ for all $g \in G$.
The trace also plays a central role in the distribution of quadratic forms.
Lie algebra
The trace is a map of Lie algebras $\operatorname{tr} : \mathfrak{gl}_n \to F$ from the Lie algebra of linear operators on an $n$-dimensional space ($n \times n$ matrices with entries in $F$) to the Lie algebra of scalars $F$; as $F$ is Abelian (the Lie bracket vanishes), the fact that this is a map of Lie algebras is exactly the statement that the trace of a bracket vanishes: $\operatorname{tr}([\mathbf{A}, \mathbf{B}]) = 0$ for all $\mathbf{A}, \mathbf{B} \in \mathfrak{gl}_n$.
The kernel of this map, the matrices whose trace is zero, are often said to be traceless or trace free, and these matrices form the simple Lie algebra $\mathfrak{sl}_n$, which is the Lie algebra of the special linear group of matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while the special linear Lie algebra is the matrices which do not alter the volume of infinitesimal sets.
In fact, there is an internal direct sum decomposition $\mathfrak{gl}_n = \mathfrak{sl}_n \oplus F$ of operators/matrices into traceless operators/matrices and scalar operators/matrices. The projection map onto scalar operators can be expressed in terms of the trace, concretely as:
$$\mathbf{A} \mapsto \frac{1}{n} \operatorname{tr}(\mathbf{A})\, \mathbf{I}.$$
Formally, one can compose the trace (the counit map) with the unit map $F \to \mathfrak{gl}_n$ of "inclusion of scalars" to obtain a map $\mathfrak{gl}_n \to \mathfrak{gl}_n$ mapping onto scalars, and multiplying by $n$. Dividing by $n$ makes this a projection, yielding the formula above.
In terms of short exact sequences, one has
$$0 \to \mathfrak{sl}_n \to \mathfrak{gl}_n \overset{\operatorname{tr}}{\longrightarrow} F \to 0,$$
which is analogous to
$$1 \to \mathrm{SL}_n \to \mathrm{GL}_n \overset{\det}{\longrightarrow} F^{*} \to 1$$
(where $F^{*} = F \setminus \{0\}$) for Lie groups. However, the trace splits naturally (via $1/n$ times scalars) so $\mathfrak{gl}_n = \mathfrak{sl}_n \oplus F$, but the splitting of the determinant would be as the $n$th root times scalars, and this does not in general define a function, so the determinant does not split and the general linear group does not decompose: $\mathrm{GL}_n \neq \mathrm{SL}_n \times F^{*}$.
Bilinear forms
The bilinear form (where $\mathbf{X}$, $\mathbf{Y}$ are square matrices)
$$B(\mathbf{X}, \mathbf{Y}) = \operatorname{tr}\big(\operatorname{ad}(\mathbf{X})\operatorname{ad}(\mathbf{Y})\big), \qquad \operatorname{ad}(\mathbf{X})\mathbf{Y} = [\mathbf{X}, \mathbf{Y}] = \mathbf{XY} - \mathbf{YX},$$
is called the Killing form, which is used for the classification of Lie algebras.
The trace defines a bilinear form:
$$(\mathbf{X}, \mathbf{Y}) \mapsto \operatorname{tr}(\mathbf{XY}).$$
The form is symmetric, non-degenerate and associative in the sense that:
$$\operatorname{tr}\big(\mathbf{X}[\mathbf{Y}, \mathbf{Z}]\big) = \operatorname{tr}\big([\mathbf{X}, \mathbf{Y}]\mathbf{Z}\big).$$
For a complex simple Lie algebra (such as $\mathfrak{sl}_n$), any two such bilinear forms are proportional to each other; in particular, each is proportional to the Killing form.
Two matrices $\mathbf{X}$ and $\mathbf{Y}$ are said to be trace orthogonal if
$$\operatorname{tr}(\mathbf{XY}) = 0.$$
There is a generalization to a general representation $(\rho, \mathfrak{g}, V)$ of a Lie algebra $\mathfrak{g}$, such that $\rho : \mathfrak{g} \to \operatorname{End}(V)$ is a homomorphism of Lie algebras. The trace form $\operatorname{tr}_V$ on $\operatorname{End}(V)$ is defined as above. The bilinear form
$$\phi(\mathbf{X}, \mathbf{Y}) = \operatorname{tr}_V\big(\rho(\mathbf{X})\rho(\mathbf{Y})\big)$$
is symmetric and invariant due to cyclicity.
Generalizations
The concept of trace of a matrix is generalized to the trace class of compact operators on Hilbert spaces, and the analog of the Frobenius norm is called the Hilbert–Schmidt norm.
If $\mathbf{A}$ is a trace-class operator, then for any orthonormal basis $(e_n)_n$, the trace is given by
$$\operatorname{tr}(\mathbf{A}) = \sum_n \langle e_n, \mathbf{A} e_n \rangle,$$
and is finite and independent of the orthonormal basis.
The partial trace is another generalization of the trace that is operator-valued. The trace of a linear operator $\mathbf{Z}$ which lives on a product space $A \otimes B$ is equal to the partial traces over $A$ and $B$:
$$\operatorname{tr}(\mathbf{Z}) = \operatorname{tr}_A\big(\operatorname{tr}_B(\mathbf{Z})\big) = \operatorname{tr}_B\big(\operatorname{tr}_A(\mathbf{Z})\big).$$
For more properties and a generalization of the partial trace, see traced monoidal categories.
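For finite-dimensional factors the partial trace can be computed by reshaping; a numpy sketch (the function and dimension names are chosen for this example):

    import numpy as np

    def partial_trace(Z, dimA, dimB, over="B"):
        """Partial trace of a (dimA*dimB) x (dimA*dimB) matrix Z on A tensor B."""
        T = Z.reshape(dimA, dimB, dimA, dimB)  # indices: (i, j) row, (k, l) column
        if over == "B":
            return np.einsum("ijkj->ik", T)  # trace out the B factor
        return np.einsum("ijil->jl", T)      # trace out the A factor

    rng = np.random.default_rng(5)
    Z = rng.standard_normal((6, 6))  # dimA = 2, dimB = 3
    # tr(Z) = tr_A(tr_B(Z)) = tr_B(tr_A(Z))
    print(np.isclose(np.trace(Z), np.trace(partial_trace(Z, 2, 3, over="B"))))  # True
    print(np.isclose(np.trace(Z), np.trace(partial_trace(Z, 2, 3, over="A"))))  # True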
If $A$ is a general associative algebra over a field $F$, then a trace on $A$ is often defined to be any map $\operatorname{tr} : A \to F$ which vanishes on commutators; that is, $\operatorname{tr}([a, b]) = 0$ for all $a, b \in A$. Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.
A supertrace is the generalization of a trace to the setting of superalgebras.
The operation of tensor contraction generalizes the trace to arbitrary tensors.
Gomme and Klein (2011) define a matrix trace operator that operates on block matrices and use it to compute second-order perturbation solutions to dynamic economic models without the need for tensor notation.
Traces in the language of tensor products
Given a vector space $V$, there is a natural bilinear map $V \times V^{*} \to F$ given by sending $(v, \varphi)$ to the scalar $\varphi(v)$. The universal property of the tensor product automatically implies that this bilinear map is induced by a linear functional on $V \otimes V^{*}$.
Similarly, there is a natural bilinear map $V \times V^{*} \to \operatorname{Hom}(V, V)$ given by sending $(v, \varphi)$ to the linear map $w \mapsto \varphi(w)v$. The universal property of the tensor product, just as used previously, says that this bilinear map is induced by a linear map $V \otimes V^{*} \to \operatorname{Hom}(V, V)$. If $V$ is finite-dimensional, then this linear map is a linear isomorphism. This fundamental fact is a straightforward consequence of the existence of a (finite) basis of $V$, and can also be phrased as saying that any linear map $V \to V$ can be written as the sum of (finitely many) rank-one linear maps. Composing the inverse of the isomorphism with the linear functional obtained above results in a linear functional on $\operatorname{Hom}(V, V)$. This linear functional is exactly the same as the trace.
Using the definition of trace as the sum of diagonal elements, the matrix formula $\operatorname{tr}(\mathbf{AB}) = \operatorname{tr}(\mathbf{BA})$ is straightforward to prove, and was given above. In the present perspective, one is considering linear maps $S$ and $T$, and viewing them as sums of rank-one maps, so that there are linear functionals $\varphi_i$ and $\psi_j$ and nonzero vectors $v_i$ and $w_j$ such that $S(u) = \sum_i \varphi_i(u) v_i$ and $T(u) = \sum_j \psi_j(u) w_j$ for any $u$ in $V$. Then
$$(S \circ T)(u) = \sum_i \varphi_i\Big(\sum_j \psi_j(u) w_j\Big) v_i = \sum_{i,j} \psi_j(u)\, \varphi_i(w_j)\, v_i$$
for any $u$ in $V$. The rank-one linear map $u \mapsto \psi_j(u) v_i$ has trace $\psi_j(v_i)$ and so
$$\operatorname{tr}(S \circ T) = \sum_{i,j} \varphi_i(w_j)\, \psi_j(v_i).$$
Following the same procedure with $S$ and $T$ reversed, one finds exactly the same formula, proving that $\operatorname{tr}(S \circ T)$ equals $\operatorname{tr}(T \circ S)$.
The above proof can be regarded as being based upon tensor products, given that the fundamental identity of $\operatorname{End}(V)$ with $V \otimes V^{*}$ is equivalent to the expressibility of any linear map as the sum of rank-one linear maps. As such, the proof may be written in the notation of tensor products. Then one may consider the multilinear map $V \times V^{*} \times V \times V^{*} \to V \otimes V^{*}$ given by sending $(v, \varphi, w, \psi)$ to $\varphi(w)\, v \otimes \psi$. Further composition with the trace map then results in $\varphi(w)\,\psi(v)$, and this is unchanged if one were to have started with $(w, \psi, v, \varphi)$ instead. One may also consider the bilinear map $\operatorname{End}(V) \times \operatorname{End}(V) \to \operatorname{End}(V)$ given by sending $(f, g)$ to the composition $f \circ g$, which is then induced by a linear map $\operatorname{End}(V) \otimes \operatorname{End}(V) \to \operatorname{End}(V)$. It can be seen that this coincides with the linear map $V \otimes V^{*} \otimes V \otimes V^{*} \to V \otimes V^{*}$. The established symmetry upon composition with the trace map then establishes the equality of the two traces.
For any finite-dimensional vector space $V$, there is a natural linear map $F \to V \otimes V^{*}$; in the language of linear maps, it assigns to a scalar $\lambda$ the linear map $\lambda \cdot \operatorname{id}_V$. Sometimes this is called the coevaluation map, and the trace $V \otimes V^{*} \to F$ is called the evaluation map. These structures can be axiomatized to define categorical traces in the abstract setting of category theory.
See also
Trace of a tensor with respect to a metric tensor
Characteristic function
Field trace
Golden–Thompson inequality
Singular trace
Specht's theorem
Trace class
Trace identity
Trace inequalities
von Neumann's trace inequality
Notes
References
External links
Linear algebra
Matrix theory
Trace theory | Trace (linear algebra) | [
"Mathematics"
] | 3,264 | [
"Linear algebra",
"Algebra"
] |
43,285 | https://en.wikipedia.org/wiki/Common%20Object%20Request%20Broker%20Architecture | The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA uses an object-oriented model although the systems that use the CORBA do not have to be object-oriented. CORBA is an example of the distributed object paradigm.
While briefly popular in the mid to late 1990s, CORBA's complexity, inconsistency, and high licensing costs have relegated it to being a niche technology.
Overview
CORBA enables communication between software written in different languages and running on different computers. Implementation details from specific operating systems, programming languages, and hardware platforms are all removed from the responsibility of developers who use CORBA. CORBA normalizes the method-call semantics between application objects residing either in the same address-space (application) or in remote address-spaces (same host, or remote host on a network). Version 1.0 was released in October 1991.
CORBA uses an interface definition language (IDL) to specify the interfaces that objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, C++11, COBOL, Java, Lisp, PL/I, Object Pascal, Python, Ruby, and Smalltalk. Non-standard mappings exist for C#, Erlang, Perl, Tcl, and Visual Basic implemented by object request brokers (ORBs) written for those languages. Versions of IDL have changed significantly with annotations replacing some pragmas.
The CORBA specification dictates there shall be an ORB through which an application would interact with other objects. This is how it is implemented in practice:
The application initializes the ORB, and accesses an internal Object Adapter, which maintains things like reference counting, object (and reference) instantiation policies, and object lifetime policies.
The Object Adapter is used to register instances of the generated code classes. Generated code classes are the result of compiling the user IDL code, which translates the high-level interface definition into an OS- and language-specific class base for use by the user application. This step is necessary in order to enforce CORBA semantics and provide a clean user process for interfacing with the CORBA infrastructure.
Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping requires the programmer to learn datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is easier to use, but requires heavy use of the STL. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features.
In order to build a system that uses or implements a CORBA-based distributed object interface, a developer must either obtain or write the IDL code that defines the object-oriented interface to the logic the system will use or implement. Typically, an ORB implementation includes a tool called an IDL compiler that translates the IDL interface into the target language for use in that part of the system. A traditional compiler then compiles the generated code to create the linkable-object files for use in the application. The generated code sits between the application and the CORBA infrastructure, marshalling requests on the client side and dispatching them on the server side.
At a high level, CORBA's paradigm for remote interprocess communication is that of a client invoking operations on a remote object through local stubs. The CORBA specification further addresses data typing, exceptions, network protocols, communication timeouts, etc. For example: normally the server side has the Portable Object Adapter (POA) that redirects calls either to the local servants or (to balance the load) to the other servers. The CORBA specification leaves various aspects of the distributed system to the application to define, including object lifetimes (although reference-counting semantics are available to applications), redundancy/fail-over, memory management, dynamic load balancing, and application-oriented models such as the separation between display/data/control semantics (e.g. see Model–view–controller), etc.
In addition to providing users with a language and a platform-neutral remote procedure call (RPC) specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models.
Versions history
The CORBA standard has been revised repeatedly since version 1.0 in 1991, through the CORBA 2.x series in the 1990s to the CORBA 3.x series.
Note that IDL changes have progressed with annotations (e.g. @unit, @topic) replacing some pragmas.
Servants
A servant is the invocation target containing methods for handling the remote method invocations. In the newer CORBA versions, the remote object (on the server side) is split into the object (that is exposed to remote invocations) and servant (to which the former part forwards the method calls). It can be one servant per remote object, or the same servant can support several (possibly all) objects, associated with the given Portable Object Adapter. The servant for each object can be set or found "once and forever" (servant activation) or dynamically chosen each time the method on that object is invoked (servant location). Both servant locator and servant activator can forward the calls to another server. In total, this system provides a very powerful means to balance the load, distributing requests between several machines. In the object-oriented languages, both remote object and its servant are objects from the viewpoint of the object-oriented programming.
Incarnation is the act of associating a servant with a CORBA object so that it may service requests. Incarnation provides a concrete servant form for the virtual CORBA object. Activation and deactivation refer only to CORBA objects, while the terms incarnation and etherealization refer to servants. However, the lifetimes of objects and servants are independent. Ordinarily one incarnates a servant before calling activate_object(), but the reverse is also possible: create_reference() activates an object without incarnating a servant, and servant incarnation is later done on demand with a Servant Manager.
The Portable Object Adapter (POA) is the CORBA object responsible for splitting the server side remote invocation handler into the remote object and its servant. The object is exposed for the remote invocations, while the servant contains the methods that are actually handling the requests. The servant for each object can be chosen either statically (once) or dynamically (for each remote invocation), in both cases allowing the call forwarding to another server.
On the server side, the POAs form a tree-like structure, where each POA is responsible for one or more objects being served. The branches of this tree can be independently activated/deactivated, have the different code for the servant location or activation and the different request handling policies.
Features
The following describes some of the most significant ways that CORBA can be used to facilitate communication among distributed objects.
Objects By Reference
A reference to a remote object is acquired either through a stringified Uniform Resource Locator (URL), through a NameService lookup (similar to a Domain Name System (DNS) lookup), or by being passed in as a method parameter during a call.
Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success, or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.
Data By Value
The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA Objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of Objects-by-reference and data-by-value provides the means to enforce great data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem-space.
Objects By Value (OBV)
Apart from remote objects, the CORBA and RMI-IIOP define the concept of the OBV and Valuetypes. The code inside the methods of Valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must be either a priori known for both sides or dynamically downloaded from the sender. To make this possible, the record, defining OBV, contains the Code Base that is a space-separated list of URLs whence this code should be downloaded. The OBV can also have the remote methods.
CORBA Component Model (CCM)
CORBA Component Model (CCM) is an addition to the family of CORBA definitions. It was introduced with CORBA 3 and it describes a standard application framework for CORBA components. Though not dependent on "language dependent Enterprise Java Beans (EJB)", it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.
The CCM has a component container, where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence, and transaction processing. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.
Portable interceptors
Portable interceptors are the "hooks", used by CORBA and RMI-IIOP to mediate the most important functions of the CORBA system. The CORBA standard defines the following types of interceptors:
IOR interceptors mediate the creation of the new references to the remote objects, presented by the current server.
Client interceptors usually mediate the remote method calls on the client (caller) side. If the object Servant exists on the same server where the method is invoked, they also mediate the local calls.
Server interceptors mediate the handling of the remote method calls on the server (handler) side.
The interceptors can attach the specific information to the messages being sent and IORs being created. This information can be later read by the corresponding interceptor on the remote side. Interceptors can also throw forwarding exceptions, redirecting request to another target.
General InterORB Protocol (GIOP)
The GIOP is an abstract protocol by which Object request brokers (ORBs) communicate. Standards associated with the protocol are maintained by the Object Management Group (OMG). The GIOP architecture provides several concrete protocols, including:
Internet InterORB Protocol (IIOP) – The Internet Inter-Orb Protocol is an implementation of the GIOP for use over the Internet, and provides a mapping between GIOP messages and the TCP/IP layer.
SSL InterORB Protocol (SSLIOP) – SSLIOP is IIOP over SSL, providing encryption and authentication.
HyperText InterORB Protocol (HTIOP) – HTIOP is IIOP over HTTP, providing transparent proxy bypassing.
Zipped IOP (ZIOP) – A zipped version of GIOP that reduces the bandwidth usage.
VMCID (Vendor Minor Codeset ID)
Each standard CORBA exception includes a minor code to designate the subcategory of the exception. Minor exception codes are of type unsigned long and consist of a 20-bit "Vendor Minor Codeset ID" (VMCID), which occupies the high order 20 bits, and the minor code proper which occupies the low order 12 bits.
Minor codes for the standard exceptions are prefaced by the VMCID assigned to OMG, defined as the unsigned long constant CORBA::OMGVMCID, which has the VMCID allocated to OMG occupying the high order 20 bits. The minor exception codes associated with the standard exceptions that are found in Table 3–13 on page 3-58 are or-ed with OMGVMCID to get the minor code value that is returned in the ex_body structure (see Section 3.17.1, "Standard Exception Definitions", on page 3-52 and Section 3.17.2, "Standard Minor Exception Codes", on page 3-58).
Within a vendor assigned space, the assignment of values to minor codes is left to the vendor. Vendors may request allocation of VMCIDs by sending email to tagrequest@omg.org. A list of currently assigned VMCIDs can be found on the OMG website at: https://www.omg.org/cgi-bin/doc?vendor-tags
The VMCID 0 and 0xfffff are reserved for experimental use. The VMCID OMGVMCID (Section 3.17.1, "Standard Exception Definitions", on page 3-52) and 1 through 0xf are reserved for OMG use.
The Common Object Request Broker: Architecture and Specification (CORBA 2.3)
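The 20-bit/12-bit layout described above can be expressed directly with bit operations. A minimal Python sketch (the helper names and the sample VMCID are illustrative, not values assigned by the OMG):

    # A CORBA minor exception code is an unsigned long whose high-order 20 bits
    # hold the Vendor Minor Codeset ID (VMCID) and whose low-order 12 bits hold
    # the minor code proper.
    def make_minor_code(vmcid, minor):
        assert 0 <= vmcid < (1 << 20) and 0 <= minor < (1 << 12)
        return (vmcid << 12) | minor

    def split_minor_code(code):
        return code >> 12, code & 0xFFF

    code = make_minor_code(0xABCDE, 3)  # illustrative VMCID, minor code 3
    print(hex(code))                    # 0xabcde003
    print([hex(x) for x in split_minor_code(code)])  # ['0xabcde', '0x3']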
Corba Location (CorbaLoc)
Corba Location (CorbaLoc) refers to a stringified object reference for a CORBA object that looks similar to a URL.
All CORBA products must support two OMG-defined URL formats: "corbaloc:" and "corbaname:". The purpose of these is to provide a human readable and editable way to specify a location where an IOR can be obtained.
An example of corbaloc is shown below:
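    corbaloc::example.com:2809/NameService

This form gives a host, a port, and an object key (here NameService); when the protocol token between the two colons is omitted, it defaults to IIOP.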
A CORBA product may optionally support the "http:", "ftp:", and "file:" formats. The semantics of these is that they provide details of how to download a stringified IOR (or, recursively, download another URL that will eventually provide a stringified IOR). Some ORBs do deliver additional formats which are proprietary for that ORB.
Benefits
CORBA's benefits include language- and OS-independence, freedom from technology-linked implementations, strong data-typing, high level of tunability, and freedom from the details of distributed data transfers.
Language independence: CORBA was designed to free engineers from the limitations of coupling their designs to a particular software language. Currently, there are many languages supported by various CORBA providers, the most popular being Java and C++. There are also C++11, C-only, Smalltalk, Perl, Ada, Ruby, and Python implementations, just to mention a few.
OS-independence CORBA's design is meant to be OS-independent. CORBA is available in Java (OS-independent), as well as natively for Linux/Unix, Windows, Solaris, OS X, OpenVMS, HPUX, Android, LynxOS, VxWorks, ThreadX, INTEGRITY, and others.
Freedom from technologies One of the main implicit benefits is that CORBA provides a neutral playing field for engineers to be able to normalize the interfaces between various new and legacy systems. When integrating C, C++, Object Pascal, Java, Fortran, Python, and any other language or OS into a single cohesive system design model, CORBA provides the means to level the field and allow disparate teams to develop systems and unit tests that can later be joined together into a whole system. This does not rule out the need for basic system engineering decisions, such as threading, timing, object lifetime, etc. These issues are part of any system regardless of technology. CORBA allows system elements to be normalized into a single cohesive system model. For example, the design of a multitier architecture is made simple using Java Servlets in the web server and various CORBA servers containing the business logic and wrapping the database accesses. This allows the implementations of the business logic to change, while the interface changes would need to be handled as in any other technology. For example, a database wrapped by a server can have its database schema change for the sake of improved disk usage or performance (or even whole-scale database vendor change), without affecting the external interfaces. At the same time, C++ legacy code can talk to C/Fortran legacy code and Java database code, and can provide data to a web interface.
Data-typing CORBA provides flexible data typing, for example an "ANY" datatype. CORBA also enforces tightly coupled data typing, reducing human errors. In a situation where Name-Value pairs are passed around, it is conceivable that a server provides a number where a string was expected. CORBA Interface Definition Language provides the mechanism to ensure that user-code conforms to method-names, return-, parameter-types, and exceptions.
High tunability Many implementations (e.g. ORBexpress (Ada, C++, and Java implementation) and OmniORB (open source C++ and Python implementation)) have options for tuning the threading and connection management features. Not all ORB implementations provide the same features.
Freedom from data-transfer details When handling low-level connection and threading, CORBA provides a high level of detail in error conditions. This is defined in the CORBA-defined standard exception set and the implementation-specific extended exception set. Through the exceptions, the application can determine if a call failed for reasons such as "Small problem, so try again", "The server is dead", or "The reference does not make sense." The general rule is: Not receiving an exception means that the method call completed successfully. This is a very powerful design feature.
Compression CORBA marshals its data in a binary form and supports compression. IONA, Remedy IT, and Telefónica have worked on an extension to the CORBA standard that delivers compression. This extension is called ZIOP and this is now a formal OMG standard.
Problems and criticism
While CORBA delivered much in the way code was written and software constructed, it has been the subject of criticism.
Much of the criticism of CORBA stems from poor implementations of the standard and not deficiencies of the standard itself. Some of the failures of the standard itself were due to the process by which the CORBA specification was created and the compromises inherent in the politics and business of writing a common standard sourced by many competing implementors.
Initial implementation incompatibilities
The initial specifications of CORBA defined only the IDL, not the on-the-wire format. This meant that source-code compatibility was the best that was available for several years. With CORBA 2 and later this issue was resolved.
Location transparency
CORBA's notion of location transparency has been criticized; that is, that objects residing in the same address space and accessible with a simple function call are treated the same as objects residing elsewhere (different processes on the same machine, or different machines). Critics regard this as a fundamental design flaw, as it makes all object access as complex as the most complex case (i.e., remote network call with a wide class of failures that are not possible in local calls). It also hides the inescapable differences between the two classes, making it impossible for applications to select an appropriate use strategy (that is, a call with 1 μs latency and guaranteed return will be used very differently from a call with 1 s latency with possible transport failure, in which the delivery status is potentially unknown and might take 30 s to time out).
Design and process deficiencies
The creation of the CORBA standard is also often cited for its process of design by committee. There was no process to arbitrate between conflicting proposals or to decide on the hierarchy of problems to tackle. Thus the standard was created by taking a union of the features in all proposals with no regard to their coherence. This made the specification complex, expensive to implement entirely, and often ambiguous.
A design committee composed of a mixture of implementation vendors and customers created a diverse set of interests. This diversity made difficult a cohesive standard. Standards and interoperability increased competition and eased customers' movement between alternative implementations. This led to much political fighting within the committee and frequent releases of revisions of the CORBA standard that some ORB implementors ensured were difficult to use without proprietary extensions. Less ethical CORBA vendors encouraged customer lock-in and achieved strong short-term results. Over time the ORB vendors that encourage portability took over market share.
Problems with implementations
Through its history, CORBA has been plagued by shortcomings in poor ORB implementations. Unfortunately many of the papers criticizing CORBA as a standard are simply criticisms of a particularly bad CORBA ORB implementation.
CORBA is a comprehensive standard with many features. Few implementations attempt to implement all of the specifications, and initial implementations were incomplete or inadequate. As there were no requirements to provide a reference implementation, members were free to propose features which were never tested for usefulness or implementability. Implementations were further hindered by the general tendency of the standard to be verbose, and the common practice of compromising by adopting the sum of all submitted proposals, which often created APIs that were incoherent and difficult to use, even if the individual proposals were perfectly reasonable.
Robust implementations of CORBA were very difficult to acquire in the past, but became much easier to find. Sun's Java SDK long shipped with CORBA built in, although the CORBA modules were deprecated in Java SE 9 and removed in Java SE 11. Some poorly designed implementations were complex, slow, incompatible, and incomplete. Robust commercial versions began to appear, but at significant cost. As good-quality free implementations became available, the bad commercial implementations died quickly.
Firewalls
CORBA (more precisely, GIOP) is not tied to any particular communications transport. A specialization of GIOP is the Internet Inter-ORB Protocol or IIOP. IIOP uses raw TCP/IP connections in order to transmit data.
If the client is behind a very restrictive firewall or transparent proxy server environment that only allows HTTP connections to the outside through port 80, communication may be impossible unless the proxy server in question allows the HTTP CONNECT method or SOCKS connections as well. At one time, it was difficult even to force implementations to use a single standard port; they tended to pick multiple random ports instead. Current ORBs no longer have these deficiencies. Due to such past difficulties, some users made increasing use of web services instead of CORBA. These communicate using XML/SOAP via port 80, which is normally left open or filtered through an HTTP proxy inside the organization for web browsing via HTTP. Recent CORBA implementations support SSL and can easily be configured to work on a single port. Some ORBs, such as TAO, omniORB, and JacORB, also support bidirectional GIOP, which gives CORBA the advantage of being able to use callback communication rather than the polling approach characteristic of web service implementations. Also, most modern firewalls support GIOP and IIOP and are thus CORBA-friendly.
See also
Software engineering
Component-based software engineering
Distributed computing
Portable object
Service-oriented architecture (SOA)
Component-based software technologies
Common Language Infrastructure – Current .NET cross-language cross-platform object model
Component Object Model (COM) – Microsoft Windows-only cross-language object model
DCOM (Distributed COM) – extension making COM able to work in networks
Freedesktop.org D-Bus – current open cross-language cross-platform object model
GNOME Bonobo – deprecated GNOME cross-language object model
IBM System Object Model SOM and DSOM – component systems from IBM used in OS/2 and AIX
Internet Communications Engine (ICE)
Java Platform, Enterprise Edition (Java EE)
Java remote method invocation (Java RMI)
JavaBean
KDE DCOP – deprecated KDE interprocess and software componentry communication system
KDE KParts – KDE component framework
OpenAIR
Remote procedure call (RPC)
Software Communications Architecture (SCA) – components for embedded systems, cross-language, cross-transport, cross-platform
Windows Communication Foundation (WCF)
XPCOM (Cross Platform Component Object Model) – developed by Mozilla for applications based on it (e.g. Mozilla Application Suite, SeaMonkey 1.x)
Language bindings
Application binary interface - ABI
Application programming interface - API
Calling convention
Comparison of application virtual machines
Dynamic Invocation Interface
Foreign function interface
Language binding
Name mangling
SWIG opensource automatic interfaces bindings generator from many languages to many languages
References
Further reading
External links
Official OMG CORBA Components page
Unofficial CORBA Component Model page
Comparing IDL to C++ with IDL to C++11
CORBA: Gone But (Hopefully) Not Forgotten
OMG XMI Specification
Component-based software engineering
GNOME
Inter-process communication
ISO standards
Object-oriented programming | Common Object Request Broker Architecture | [
"Technology"
] | 5,166 | [
"Component-based software engineering",
"Components"
] |
43,303 | https://en.wikipedia.org/wiki/NCSA%20Mosaic | NCSA Mosaic was among the first widely available web browsers, instrumental in popularizing the World Wide Web and the general Internet by integrating multimedia such as text and graphics. Mosaic was the first browser to display images inline with text (instead of a separate window).
Mosaic was named for its support of multiple Internet protocols, including the Hypertext Transfer Protocol, File Transfer Protocol, Network News Transfer Protocol, and Gopher. Its intuitive interface, reliability, personal computer support, and simple installation all contributed to its initial popularity. Although often mistakenly described as the first graphical web browser, it was preceded by WorldWideWeb, the lesser-known Erwise, and ViolaWWW.
Mosaic was developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana–Champaign beginning in late 1992, released in January 1993, with official development and support until January 1997. Mosaic lost market share to Netscape Navigator in late 1994, and had only a tiny fraction of users left by 1997, when the project was discontinued. Microsoft licensed one of the derivative commercial products, Spyglass Mosaic, to create Internet Explorer in 1995.
History
In December 1991, the High Performance Computing Act of 1991 was passed, which provided funding for new projects at the NCSA, where after trying ViolaWWW, David Thompson demonstrated it to the NCSA software design group. This inspired Marc Andreessen and Eric Bina – two programmers working at NCSA – to create Mosaic. Andreessen and Bina began developing Mosaic in December 1992 for Unix's X Window System, calling it xmosaic. Marc Andreessen announced the project's first release, the "alpha/beta version 0.5," on January 23, 1993. Version 1.0 was released on April 21, 1993. Ports to Microsoft Windows and Macintosh were released in September. A port of Mosaic to the Amiga was available by October 1993. NCSA Mosaic for Unix (X Window System) version 2.0 was released on November 10, 1993 and was notable for adding support for forms, thus enabling the creation of the first dynamic web pages. From 1994 to 1997, the National Science Foundation supported the further development of Mosaic.
Marc Andreessen, the leader of the team that developed Mosaic, left NCSA and, with James H. Clark, one of the founders of Silicon Graphics, Inc. (SGI), and four other former students and staff of the University of Illinois, started Mosaic Communications Corporation. Mosaic Communications eventually became Netscape Communications Corporation, producing Netscape Navigator. Mosaic's popularity as a separate browser began to decrease after the 1994 release of Netscape Navigator, the relevance of which was noted in The HTML Sourcebook: The Complete Guide to HTML: "Netscape Communications has designed an all-new WWW browser Netscape, that has significant enhancements over the original Mosaic program."
In 1994, SCO released Global Access, a modified version of SCO's Open Desktop Unix, which became the first commercial product to incorporate Mosaic. However, by 1998, the Mosaic user base had almost completely evaporated as users moved to other web browsers.
Licensing
The licensing terms for NCSA Mosaic were generous for a proprietary software program. In general, non-commercial use was free of charge for all versions (with certain limitations). Additionally, the X Window System/Unix version publicly provided source code (source code for the other versions was available after agreements were signed). Despite persistent rumors to the contrary, however, Mosaic was never released as open source software during its brief reign as a major browser; there were always constraints on permissible uses without payment.
License holders at the time included:
Amdahl Corporation
Fujitsu Limited (Product: Infomosaic, a Japanese version of Mosaic. Price: ¥5,000, approx. US$50)
Infoseek Corporation (Product: No commercial Mosaic. May use Mosaic as part of a commercial database effort)
Quadralay Corporation (Consumer version of Mosaic. Also using Mosaic in its online help and information product, GWHIS. Price: US$249)
Quarterdeck Office Systems Inc.
The Santa Cruz Operation Inc. (Product: Incorporating Mosaic into "SCO Global Access", a communications package for Unix machines that works with SCO's Open Server. Runs a graphical e-mail service and accesses newsgroups.)
SPRY Inc. (Products: A communication suite: Air Mail, Air News, Air Mosaic, etc. Also producing Internet In a Box with O'Reilly & Associates. Price: US$149–$399 for Air Series.)
Spyglass, Inc. (Product: Spyglass Mosaic, essentially licensing the Mosaic name, as it was written from scratch, not using NCSA's Mosaic code. Relicensing to other vendors. Signed deal with Digital Equipment Corp. to ship Mosaic with all its machines. Signed a deal with Microsoft to license Spyglass' code to develop Internet Explorer)
Features
Robert Reid notes that Andreessen's team hoped:
Mosaic is based on the libwww library and thus supported a wide variety of Internet protocols included in the library: Archie, FTP, gopher, HTTP, NNTP, telnet, WAIS.
Mosaic was not the first web browser for Microsoft Windows; that was Thomas R. Bruce's little-known Cello. The Unix version of Mosaic was already famous before the Microsoft Windows, Amiga, and Mac versions were released. Other than displaying images embedded in the text (rather than in a separate window), Mosaic's original feature set is similar to the browsers on which it was modeled, such as ViolaWWW. But Mosaic was the first browser written and supported by a team of full-time programmers, was reliable and easy enough for novices to install, and the inline graphics proved immensely appealing. Mosaic is said to have made the Internet accessible to the ordinary person.
Mosaic was the first browser to explore the concept of collaborative annotation, in 1993, but the feature never made it past the test stage.
Mosaic was the first browser that could submit forms to a server.
Impact
Mosaic led to the Internet boom of the 1990s. Other browsers existed during this period, such as Erwise, ViolaWWW, MidasWWW, and tkWWW, but did not have the same effect as Mosaic on public use of the Internet.
In the October 1994 issue of Wired magazine, Gary Wolfe notes in the article titled "The (Second Phase of the) Revolution Has Begun: Don't look now, but Prodigy, AOL, and CompuServe are all suddenly obsolete – and Mosaic is well on its way to becoming the world's standard interface":
Reid also refers to Matthew K. Gray's website, Internet Statistics: Growth and Usage of the Web and the Internet, which indicates a dramatic leap in web use around the time of Mosaic's introduction.
David Hudson concurs with Reid:
Ultimately, web browsers such as Mosaic became the killer applications of the 1990s. Web browsers were the first to bring a graphical interface to the search tools for the Internet's burgeoning wealth of distributed information services. A mid-1994 guide lists Mosaic alongside the traditional, text-oriented information search tools of the time – Archie and Veronica, Gopher, and WAIS – but Mosaic quickly subsumed and displaced them all. Joseph Hardin, the director of the NCSA group within which Mosaic was developed, said downloads were up to 50,000 a month in mid-1994.
In November 1992, there were twenty-six websites in the world and each one attracted attention. In its release year of 1993, Mosaic had a What's New page, and about one new link was being added per day. This was a time when access to the Internet was expanding rapidly outside its previous domain of academia and large industrial research institutions. Yet it was the availability of Mosaic and Mosaic-derived graphical browsers themselves that drove the explosive growth of the Web to over 10,000 sites by August 1995 and millions by 1998. Metcalfe expressed the pivotal role of Mosaic this way:
Legacy
Netscape Navigator was later developed by Netscape, which employed many of the original Mosaic authors; however, it intentionally shared no code with Mosaic. Netscape Navigator's code descendant is Mozilla Firefox.
Spyglass, Inc. licensed the technology and trademarks from NCSA for producing its own web browser but never used any of the NCSA Mosaic source code. Microsoft licensed Spyglass Mosaic in 1995 for US$2 million, modified it, and renamed it Internet Explorer. After a later auditing dispute, Microsoft paid Spyglass $8 million. The 1995 user guide The HTML Sourcebook: The Complete Guide to HTML, specifically states, in a section called Coming Attractions, that Internet Explorer "will be based on the Mosaic program". Versions of Internet Explorer before version 7 stated "Based on NCSA Mosaic" in the About box. Internet Explorer 7 was audited by Microsoft to ensure that it contained no Spyglass Mosaic code, and thus no longer credits Spyglass or Mosaic.
After NCSA stopped work on Mosaic, development of the NCSA Mosaic for the X Window System source code was continued by several independent groups. These independent development efforts include mMosaic (multicast Mosaic) which ceased development in early 2004, and Mosaic-CK and VMS Mosaic.
VMS Mosaic, a version specifically targeting the OpenVMS operating system, is one of the longest-lived efforts to maintain Mosaic. Using the VMS support already built into the original version (Bjorn S. Nilsson ported Mosaic 1.2 to VMS in the summer of 1993), developers incorporated a substantial part of the HTML engine from mMosaic, another defunct flavor of the browser. As of the most recent version (4.2), released in 2007, VMS Mosaic supported HTML 4.0, OpenSSL, cookies, and various image formats including GIF, JPEG, PNG, BMP, TGA, TIFF, and JPEG 2000. The browser works on VAX, Alpha, and Itanium platforms.
Another long-lived version, Mosaic-CK, developed by Cameron Kaiser, was last released (version 2.7ck9) on July 11, 2010; a maintenance release with minor compatibility fixes (version 2.7ck10) was released on January 9, 2015, followed by another one (2.7ck11) in October 2015. The stated goal of the project is "Lynx with graphics", and it runs on Mac OS X, Power MachTen, Linux and other compatible Unix-like OSs.
Release history
The X, Windows, and Mac versions of Mosaic all had separate development teams and code bases.
NCSA Mosaic for X
NCSA Mosaic for Windows
NCSA Mosaic for Macintosh
See also
History of the World Wide Web
History of the web browser
Comparison of web browsers
List of web browsers
Usage share of web browsers
References
Further reading
External links
1993 software
Cross-platform software
Discontinued web browsers
Gopher clients
History of software
History of the Internet
History of web browsers
Macintosh web browsers
OS/2 web browsers
POSIX web browsers
Windows web browsers
1993 in Internet culture | NCSA Mosaic | [
"Technology"
] | 2,282 | [
"History of software",
"History of computing"
] |
43,314 | https://en.wikipedia.org/wiki/List%20of%20free%20and%20open-source%20software%20packages | This is a list of free and open-source software packages (FOSS), computer software licensed under free software licenses and open-source licenses. Software that fits the Free Software Definition may be more appropriately called free software; the GNU project in particular objects to their works being referred to as open-source. For more information about the philosophical background for open-source software, see free software movement and Open Source Initiative. However, nearly all software meeting the Free Software Definition also meets the Open Source Definition and vice versa. A small fraction of the software that meets either definition is listed here. Some of the open-source applications are also the basis of commercial products, shown in the List of commercial open-source applications and services.
Artificial intelligence
General AI
OpenCog – A project that aims to build an artificial general intelligence (AGI) framework. OpenCog Prime is a specific set of interacting components designed to give rise to human-equivalent artificial general intelligence.
Computer vision
AForge.NET – computer vision, artificial intelligence and robotics library for the .NET framework
OpenCV – computer vision library in C++
Machine learning
See List of open-source machine learning software
See Data Mining below
See R programming language – packages of statistical learning and analysis tools
Planning
TREX – Reactive planning
Robotics
Robot Operating System (ROS)
Webots – Robot simulator
Assistive technology
Speech (synthesis and recognition)
CMU Sphinx – Speech recognition software from Carnegie Mellon University
Emacspeak – Audio desktop
ESpeak – Compact software speech synthesizer for English and other languages
Festival Speech Synthesis System – General multilingual speech synthesis
Modular Audio Recognition Framework – Voice, audio, speech NLP processing
NonVisual Desktop Access – (NVDA) Screen reader, for Windows
Text2Speech – Lightweight, easy-to-use Text-To-Speech (TTS) Software
Other assistive technology
Dasher – Unique text input software
Gnopernicus – AT suite for GNOME 2
Virtual Magnifying Glass – A multi-platform screen magnification tool
Biology
AMAP
BAli-Phy
BLAST, CS-BLAST, BLAT
Bowtie
Clustal
DECIPHER
FASTA
Fast statistical alignment
HMMER
HH-suite
JAligner
MAFFT
MAVID
MUSCLE
Nextflow
Phyloscan
Probalign
ProbCons
Stemloc
T-Coffee
UGENE
Yass
CAD
FreeCAD – Parametric 3D CAD modeler with a focus on mechanical engineering, BIM, and product design.
LibreCAD – 2D CAD software using AutoCAD-like interface and file format.
SolveSpace – 2D and 3D CAD, constraint-based parametric modeler with simple mechanical simulation abilities.
BRL-CAD – a constructive solid geometry (CSG) solid modeling computer-aided design (CAD) system.
OpenSCAD – A scripting based 3D CAD software.
Open Cascade Technology (OCCT) – a CAD kernel for 3D CAD, CAM, CAE, etc.
Blender
Wings 3D
Art of Illusion
MeshLab
MakeHuman
Sweet Home 3D
Finite Element Analysis (FEA)
Gmsh – A three-dimensional finite element mesh generator with built-in pre- and post-processing facilities.
Electronic design automation (EDA)
Electric
FreePCB
Fritzing – a CAD software for the design of electronics hardware to build more permanent circuits from prototypes
gEDA
GNU Circuit Analysis Package (Gnucap)
Icarus Verilog
KiCad – a suite for electronic design automation (EDA) for schematic capture, PCB layout, manufacturing file viewing, SPICE simulation, and engineering calculation
KTechLab
Magic
Ngspice
pcb-rnd
Oregano
Quite Universal Circuit Simulator (QUCS)
Verilator
XCircuit
Computer simulation
Blender – 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, and motion graphics.
FreeCAD – an alternative to Blender oriented toward mechanical engineering
OpenFOAM – open-source software used for computational fluid dynamics (or CFD).
FlightGear – atmospheric and orbital flight simulator with a flight dynamics engine (JSBSim) that was used in a 2015 NASA benchmark to judge new simulation code against space industry standards.
SimPy – Queue-theoretic event-based simulator written in Python
Salome – a generic platform for Pre- and Post-Processing for numerical simulation
Cybersecurity
Antivirus
ClamAV – cross-platform antimalware toolkit written in C and C++, able to detect many types of malware including viruses
ClamWin – free and open-source antivirus tool for Windows and written in C, C++
Lynis – Security audit tool (set of shell scripts) for Unix and Linux
Data loss prevention
MyDLP
Data recovery
dvdisaster
ddrescue
Foremost
PhotoRec
TestDisk
Forensics
The Coroner's Toolkit
The Sleuth Kit
Anti-forensics
USBKill
Tails
BusKill
Disk erasing
DBAN
srm
Encryption
Bouncy Castle
GnuPG
GnuTLS
KGPG
NaCl
OpenSSL
Seahorse
Signal
stunnel
TextSecure
wolfCrypt
7-Zip
Disk encryption
dm-crypt
CrossCrypt
FreeOTFE and FreeOTFE Explorer
eCryptfs
VeraCrypt
Firewall
Firewalld
Uncomplicated Firewall (ufw)
Firestarter
IPFilter
ipfw
iptables
nftables
M0n0wall
PeerGuardian
PF
pfSense
OPNsense
Rope
Shorewall
SmoothWall
Vyatta
VyOS
Network and security monitoring
Snort – Network intrusion detection system (IDS) and intrusion prevention system (IPS)
OpenVAS – software framework of several services and tools offering vulnerability scanning and vulnerability management
Secure Shell (SSH)
Cyberduck – macOS and Windows client (since version 4.0)
Lsh – Server and client, with support for SRP and Kerberos authentication
OpenSSH – Client and server
PuTTY – Client-only
Password management
Bitwarden
KeePass
KeePassXC (multiplatform fork able to open KeePass databases)
Password Safe
Mitro
Pass
Other cybersecurity programs
Data storage and management
Disk cleaning utilities
BleachBit
Backup software
Database management systems (including administration)
Apache Cassandra – A NoSQL database from Apache Software Foundation that offers support for clusters spanning multiple datacenters
Apache CouchDB – A NoSQL database from Apache Software Foundation with multi-master replication
MariaDB – A community-developed relational database management system with pluggable storage engines and commercial support
PostGIS – Adds support for geographic objects to the PostgreSQL as per Open Geospatial Consortium (OGC)
PostgreSQL – A relational database management system that emphasizes extensibility and SQL compliance; available for Windows, Linux, FreeBSD, and OpenBSD
Data mining
Environment for DeveLoping KDD-Applications Supported by Index-Structures (ELKI) – Data mining software framework written in Java with a focus on clustering and outlier detection methods
FrontlineSMS – Information distribution and collecting via text messaging (SMS)
Konstanz Information Miner (KNIME)
OpenNN – Open-source neural network software library written in C++
Orange (software) – Data visualization and data mining for novice and experts, through visual programming or Python scripting. Extensions for bioinformatics and text mining
RapidMiner – Data mining software written in Java, fully integrating Weka, featuring 350+ operators for preprocessing, machine learning, visualization, etc. – the prior version is available as open-source
Scriptella ETL – ETL (Extract-Transform-Load) and script execution tool. Supports integration with J2EE and Spring. Provides connectors to CSV, LDAP, XML, JDBC/ODBC, and other data sources
Weka – Data mining software written in Java featuring machine learning operators for classification, regression, and clustering
JasperSoft – Data mining with programmable abstraction layer
Data Visualization Components
ParaView – Plotting and visualization functions developed by Sandia National Laboratory; capable of massively parallel flow visualization utilizing multiple computer processors
VTK – Toolkit for 3D computer graphics, image processing, and visualisation.
Digital Asset Management software system
Disk partitioning software
GParted
FIPS (computer program)
TestDisk
Enterprise search engines
ApexKB, formerly known as Jumper
Lucene
Nutch
Solr
Xapian
ETLs (Extract Transform Load)
Konstanz Information Miner (KNIME)
Pentaho
File archivers
PeaZip
7-Zip
File systems
OpenAFS – Distributed file system supporting a very wide variety of operating systems
Tahoe-LAFS – Distributed file system/Cloud storage system with integrated privacy and security features
CephFS – Distributed file system included in the Ceph storage platform.
Desktop publishing
Collabora Online Draw and Writer – Enterprise-ready edition of LibreOffice accessible from a web browser. The Draw application is for flyers, newsletters, brochures and more, Writer has most of the functionality too.
Scribus – Designed for layout, typesetting, and preparation of files for professional-quality image-setting equipment. It can also create animated and interactive PDF presentations and forms.
LyX – A "What You See Is What You Mean" document creation system, LyX makes use of the LaTeX markup macro system for TeX, allowing the elegant creation of documents which match up with the layouts in it for various document classes.
E-book management and editing
Calibre – Cross-platform suite of ebook software
Collabora Online Writer – Enterprise-ready edition of LibreOffice accessible from a web browser. Allows exporting in the EPUB format.
Sigil – Editing software for e-books in the EPUB format
Education
E-learning, learning support
ATutor – Web-based Learning Content Management System (LCMS)
Chamilo – Web-based e-learning and content management system
Claroline – Collaborative Learning Management System
DoceboLMS – SAAS/cloud platform for learning
eFront – Icon-based learning management system
H5P – Framework for creating and sharing interactive HTML5 content
IUP Portfolio – Educational platform for Swedish schools
ILIAS – Web-based learning management system (LMS)
Moodle – Free and open-source learning management system
OLAT – Web-based Learning Content Management System
Omeka – Content management system for online digital collections
openSIS – Web-based Student Information and School Management system
Sakai Project – Web-based learning management system
SWAD – Web-based learning management system
Academic advising
FlightPath – Academic advising software for universities
Educational suites for children
Tux Paint – Painting application for 3–12 year olds
GCompris – Educational entertainment, aimed at children aged 2–10
Language
Alpheios Project
Anki (software)
FirstVoices
Kiten
Operating systems
Linux - Unix-based general use OS
UberStudent – Linux-based operating system and software suite for academic studies
MAX (operating system)
Edubuntu
Mind mapping & others
Vym (software)
Compendium (software)
Gnaural – Brainwave entrainment software
Offline learning & Open data
Kiwix: A free and open-source offline web browser that lets users download the entire content of Wikipedia for offline learning. It was later expanded with repositories for the Wikimedia Foundation, public domain texts from Project Gutenberg, many of the Stack Exchange sites, and other resources.
OpenStreetMap: Developed in 2004, it uses open data contributed by users through crowdsourcing and web mapping to create a complete and downloadable alternative to other online maps. This allows users to enter data where none is available because of a lack of governance or economic interest, or because the mapped area has a low population.
Typing
KTouch – Touch typing lessons with a variety of keyboard layouts
Tux Typing – Typing tutor for children, featuring two games to improve typing speed
File managers
Finance
Accounting
GnuCash – Double-entry book-keeping
HomeBank – Personal accounting software
KMyMoney – Double-entry book-keeping
LedgerSMB – Double-entry book-keeping
RCA open-source application – management accounting application
SQL Ledger – Double-entry book-keeping
TurboCASH – Double-entry book-keeping for Windows
Wave Accounting – Double-entry book-keeping
Cryptocurrency
Bitcoin – Blockchain platform, peer-to-peer decentralised digital currency
Ethereum – Blockchain platform with smart contract functionality
CRM
CiviCRM – Constituent Relationship Management software aimed at NGOs
iDempiere – Business Suite, ERP and CRM
SuiteCRM – Web-based CRM
ERP
Adempiere – Enterprise resource planning (ERP) business suite
Apache OFBiz – A suite of enterprise applications from Apache Software Foundation
Compiere – ERP solution automates accounting, supply chain, inventory, and sales orders
Dolibarr – Web-based ERP system
ERPNext – Web-based open-source ERP system for managing accounting and finance
ERP5 – Single Unified Business Model based system written with Python and Zope
iDempiere – Fully navigable on PCs, tablets and smartphones driven only by a community of supporters
Ino erp – Dynamic pull-based ERP system
JFire – An ERP business suite written with Java and JDO
LedgerSMB – A double entry accounting and ERP system written with Perl
metasfresh – ERP Software
Odoo – Open-source ERP, CRM and CMS
Openbravo – Web-based ERP
Tryton – Open-source ERP
Human resources
OrangeHRM – Commercial human resource management
Microfinance
Mifos – Microfinance Institution management software
Process management
Bonita Open Solution – Business Process Management
Games
Action
Nexuiz – First-person shooter.
OpenArena – First-person shooter.
Red Eclipse – First-person shooter.
Tremulous – First-person shooter.
Unvanquished – First-person shooter.
Xonotic – First-person shooter that runs on a heavily modified version of the Quake engine known as the DarkPlaces engine
Warsow – Fast-paced arena first-person shooter that runs on the Qfusion engine
Application layer
WINE – Allows Windows applications to be run on Unix-like operating systems
Chess
ChessV
Fairy-Max
GNU Chess
PyChess
XBoard
Lichess
Educational games
GCompris – software suite comprising educational entertainment software for children aged 2 to 10
Tux, of Math Command
Tux Paint
Video game emulation
MAME – Multi-platform emulator designed to recreate the hardware of arcade game systems
MESS – Multi-platform emulator designed to recreate the hardware of video game consoles
RetroArch – Cross-platform front-end for emulators, game engines and video games
Snes9x – A Super Nintendo emulator
Stella – Atari 2600 emulator
PCSX – A PlayStation emulator designed to recreate the hardware of the original PlayStation system
PCSX2 – A PlayStation 2 emulator designed to recreate the hardware of PlayStation 2 system
PPSSPP – A PlayStation Portable emulator designed to recreate the hardware of PlayStation Portable system
Project64 – A Nintendo 64 emulator
RPCS3 – A PlayStation 3 emulator designed to recreate the hardware of PlayStation 3 system
Dolphin (emulator) – A GameCube and Wii emulator designed to recreate the hardware of GameCube and Wii systems
Citra (emulator) – A Nintendo 3DS and Wii emulator designed to recreate the hardware of Nintendo 3DS systems
Cemu – A Wii U emulator designed to recreate the hardware of Wii U systems
Music video games
Frets on Fire
Karaoke
UltraStar
Rhythm game
StepMania
Puzzle
Pingus – Lemmings alternative with penguins instead of lemmings
Sandbox
Luanti – An open source voxel game engine
Snake games
GLtron
Simulation
Endless Sky – Space trading and combat simulation
FlightGear – Flight simulator
OpenTTD – Business simulation game in which players try to earn money via transporting passengers and freight by road, rail, water and air
SuperTuxKart – Kart racing game that features mascots of various open-source projects
Strategy
0 A.D. – Real-time strategy video game
Freeciv – Turn-based strategy game inspired by proprietary Sid Meier's Civilization series
Glest
The Battle for Wesnoth – Turn-based strategy video game with fantasy setting
Genealogy
Gramps (software) – a free and open source genealogy software
Geographic information systems
QGIS – cross-platform desktop geographic information system (GIS) application to view, edit, and analyse geospatial data
Graphical user interface
Desktop environments
Window managers
Windowing system
Groupware
Content management systems
Wiki software
Healthcare software
Integrated library management software
Evergreen – Integrated Library System initially developed for the Georgia Public Library Service's PINES catalog
Koha – SQL-based library management
NewGenLib
OpenBiblio
PMB
refbase – Web-based institutional repository and reference management software
Image editor
Darktable – Digital image workflow management, including RAW photo processing
digiKam – Integrated photography toolkit including editing abilities
GIMP – Raster graphics editor aimed at image retouching/editing
Inkscape – Vector graphics editor
Karbon – Scalable vector drawing application in KDE
Krita – Digital painting, sketching and 2D animation application, with a variety of brush engines
LazPaint – Lightweight raster and vector graphics editor, aimed at being simpler to use than GIMP
LightZone – Free, open-source digital photo editor software application.
RawTherapee – Digital image workflow management aimed at RAW photo processing
Maps & Navigation
OpenStreetMap – open geographic database updated and maintained by a community of volunteers via open collaboration.
Mathematics
ALTRAN
FriCAS
GAP (computer algebra system)
GiNaC
gnuplot
Maxima
Mathomatic
Normaliz
SageMath
Singular (software)
SymPy
Yacas
Computer algebra systems
Axiom
Cadabra
Cambridge Algebra System
CPMP-Tools
CoCoA
Erable
PARI/GP
Reduce
Xcas
symbolic manipulation systems
FORM (symbolic manipulation system)
Statistics
R – Statistics software
Numerical analysis
Octave – Numerical analysis software
Scilab – Numerical analysis software
Geometry
Geogebra – Geometry and algebra
Spreadsheet
LibreOffice Calc – spreadsheet component of the LibreOffice package
Gnumeric – spreadsheet program of the GNOME Project
Calligra Sheets – spreadsheet component of the Calligra Suite in KDE
Pyspread – spreadsheet which uses Python for macro programming, and allows each cell to contain data, the results of a calculation, a Python program, or the results of a Python program.
Mobile software
Celestia (Android, iOS)
Calligra (Android)
Collabora Online – LibreOffice-based suite for online collaboration and mobile devices (Android)
Conversations (Android)
F-Droid (Android) – app store and software repository
I2P (Android) – anonymous network layer (implemented as a mix network) that allows for censorship-resistant, peer-to-peer communication.
Kiwix: Offline web browser that allows users to download the entire content of Wikipedia for offline learning purposes. (Android)
Krita (Android)
Linphone (Android, iOS)
Maps.me (Android)
Monal (iOS)
NetHunter App Store (Android) – fork of F-Droid for Kali NetHunter
OpenVPN (Android, iOS) – virtual private network (VPN) system that implements techniques to create secure point-to-point or site-to-site connections in routed or bridged configurations and remote access facilities. It implements both client and server applications.
Orbot (Android, iOS) – free proxy app that provides anonymity on the Internet for users of the Android and iOS operating systems. It allows traffic from apps such as web browsers, email clients, map programs, and others to be routed via the Tor network.
Organic Maps (Android, iOS)
OsmAnd (Android)
Quicksy (Android)
Stellarium (Android, iOS)
Tor Browser – onion-routed browser by The Tor Project, based on Firefox ESR
VLC (Android, iOS)
Wikipedia (Android, iOS) – See also: List of Wikipedia mobile applications
Media
Audio editors, audio management
Audacity
Ardour: Professional digital audio workstation
LMMS: Digital audio workstation
CD/USB-writing software
Brasero (software)
cdrtools
K3b
X-CD-Roast
Flash animation
Pencil2D – For animations
SWFTools – For scripting
Game engines
Blender Game Engine – Discontinued 2019
Godot – Application for the design of cross-platform video games
MonoGame – C# framework
Open3DEngine – Based on Amazon Lumberyard
Stride – (prev. Xenko) 2D and 3D cross-platform game engine originally developed by Silicon Studio
Chess engines
KnightCap
Leela Chess Zero – Universal Chess Interface chess engine
Stockfish – Universal Chess Interface chess engine
Graphics
2D
Pencil2D – Simple 2D graphics and animation program
Synfig – 2D vector graphics and timeline based animation
TupiTube (formerly KTooN) – Application for the design and creation of animation
OpenToonz – Part of a family of 2D animation software
Krita – Digital painting, sketching and 2D animation application, with a variety of brush engines
Blender – Computer graphics software, Blender's Grease Pencil tools allow for 2D animation within a full 3D pipeline.
mtPaint – raster graphics editor for creating icons, pixel art
3D
Blender – Computer graphics software featuring modeling, sculpting, texturing, rigging, simulation, rendering, camera tracking, video editing, and compositing
MakeHuman
OpenFX – Modeling and animation software with a variety of built-in post processing effects
Seamless3d – Node-driven 3D modeling software
Wings 3D – subdivision modeler inspired by Nendo and Mirai from Izware.
Image galleries
Shotwell
Image viewers
Eye of GNOME
F-spot
feh
Geeqie
Gthumb
Gwenview
KPhotoAlbum
Opticks
Maps
GeoDa
GeoServer
GeoTools
GRASS GIS
GvSIG
ILWIS
JUMP GIS
Kosmo (GIS)
Libre Map Project
MapWindow GIS
Mapnik
MapServer
Marble
OpenStreetMap
OpenLayers
PostGIS
QGIS
SAGA GIS
uDig
Whitebox Geospatial Analysis Tools
Multimedia codecs, containers, splitters
Subtitle
Aegisub
Gnome Subtitles
Subtitle Composer (KDE)
Subtitle Edit
Television
Video converters
Dr. DivX
FFmpeg
MEncoder
OggConvert
Video editing
Avidemux
AviSynth
Blender
Cinelerra
Flowblade
Kdenlive
Kino
LiVES
LosslessCut
Natron
Olive
OpenShot
Open Movie Editor
Pitivi
Shotcut
VirtualDub
VirtualDubMod
VideoLAN Movie Creator
DVD authoring
DeVeDe
DVD Flick
DVDStyler
Other media packages
Celtx – Media pre-production software
Open Broadcaster Software (OBS) – Cross-platform streaming and recording program
Ripping
K9Copy
Thoggen
Video encoders
Avidemux
HandBrake
FFmpeg
OggConvert
Video players
Media Player Classic
VLC media player
mpv
Networking and Internet
Advertising
Revive Adserver
Communication-related
Asterisk – Telephony and VoIP server
Ekiga – Video conferencing application for GNOME and Microsoft Windows
ConferenceXP – video conferencing application for Windows XP or later
Dino – XMPP client supporting both OMEMO encryption and the Jingle audio/video protocol, on Windows, Linux, and BSD.
FreePBX – Front-end and advanced PBX configuration for Asterisk
FreeSWITCH – Telephony platform
Gajim – xmpp client
I2P – anonymous network layer (implemented as a mix network) that allows for censorship-resistant, peer-to-peer communication.
Jami – Cross-platform, peer to peer instant-messaging and video-calling protocol that offers end-to-end encryption and SIP client
Jitsi – Java VoIP and Instant Messaging client
QuteCom – Voice, video, and IM client application
Enterprise Communications System sipXecs – SIP Communications Server
Slrn – Newsreader
Telegram
Twinkle – VoIP softphone
Tox – Cross-platform, peer-to-peer instant-messaging and video-calling protocol that offers end-to-end encryption
E-mail
Amavis – Email content filter
Claws Mail – Email Client
Fetchmail – Email Retrieval
Geary – Email client based on WebKitGTK+
GNUMail – Cross-platform email client
Hula – Discontinued mail and calendar project
K-9 Mail – Android Email Client
MailScanner – Email security system
MH Message Handling System – Email Client
Modest – Email Client
Mozilla Mail & Newsgroups – Email Client that was part of the now discontinued Mozilla Application Suite
Mozilla Thunderbird – Email, news, RSS, and chat client
POPFile – Cross-platform mail filter
Roundcube – Web-based IMAP email client
Sylpheed – Email and News Client
Sympa – MLA software
Vpopmail – Email management software
File transfer
FTP open-source software
Clients
FileZilla
Servers
Grid and distributed processing
GNU Queue
HTCondor
pexec
Instant messaging
IRC Clients
Middleware
Apache Axis2 – Web service framework (implementations are available in both Java & C)
Apache Geronimo – Application server
Bonita Open Solution – a J2EE web application and java BPMN2 compliant engine
GlassFish – Application server
Apache Tomcat – Servlet container and standalone webserver
JBoss – Application server
OpenRemote – IoT Middleware
TAO (software) – C++ implementation of the OMG's CORBA standard
Enduro/X – C/C++ middleware platform based on X/Open group's XATMI and XA standards
RSS, Atom readers, aggregators
Akregator – Platforms running KDE
Liferea – Platforms running GNOME
NetNewsWire – macOS, iOS
RSS Bandit – Windows, using .NET framework
RSSOwl – Windows, macOS, Solaris, Linux using Java SWT Eclipse
Sage (Mozilla Firefox extension)
Peer-to-peer file sharing
I2P – anonymous network layer (implemented as a mix network) that allows for censorship-resistant, peer-to-peer communication.
Popcorn Time – Multi-platform, free, and open-source media player
qBittorrent – Alternative to popular clients such as μTorrent
Transmission – BitTorrent client
Deluge – BitTorrent client
Portal Server
Drupal
Liferay
Sun Java System Portal Server
uPortal
Remote access and management
FreeNX
OpenVPN
rdesktop
Synergy
VNC (RealVNC, TightVNC, UltraVNC)
Remmina (based on FreeRDP)
Routing software
Web browsers
Graphical
Chromium – web browser using the custom Blink engine from which Google Chrome draws its source code
Brave – privacy-focused web browser based on Chromium browser
Falkon – web browser based on Blink engine, a KDE project
Firefox – Mozilla-developed web browser using Gecko layout engine
Waterfox – Firefox fork supporting legacy extensions, 64-bit only
Pale Moon – a customizable fork of Firefox
Tor Browser – onion-routed browser by The Tor Project, based on Firefox ESR
GNOME Web – WebKit-based web browser for the GNOME desktop environment
Midori – Lightweight web browser using the WebKit layout engine
qutebrowser – keyboard operated Webkit-based browser with vi-like keybindings
SeaMonkey Navigator – the SeaMonkey internet suite's web browser
Surf – a minimal tab-less browser by suckless.org using WebKitGTK
Firefox Focus – privacy-focused mobile web browser from Mozilla available for Android and iOS
Text-based
Lynx – a text-based web browser developed since 1992
Links – a text-based browser with a framebuffer-based graphical mode
ELinks – fork of Links with JavaScript support
Webcam
Cheese – GNOME webcam application
Guvcview – Linux webcam application
Webgrabber
cURL
HTTrack
Wget
Web-related
Apache Cocoon – A web application framework
Apache – The most popular web server
AWStats – Log file parser and analyzer
BookmarkSync – Tool for browsers
Caddy – an extensible, cross-platform, open-source web server written in Go.
Cherokee – Fast, feature-rich HTTP server
curl-loader – Powerful HTTP/HTTPS/FTP/FTPS loading and testing tool
FileZilla – FTP
Hiawatha – Secure, high performance, and easy-to-configure HTTP server
HTTP File Server – User-friendly file server software, with a drag-and-drop interface
lighttpd – Resource-sparing, but also fast and full-featured, HTTP Server
Lucee – CFML application server
Nginx – Lightweight, high performance web server/reverse proxy and e-mail (IMAP/POP3) proxy
NetKernel – Internet application server
Qcodo – PHP5 framework
Squid – Web proxy cache
Vaadin – Fast, Java-based framework for creating web applications
Varnish – High-performance web application accelerator/reverse proxy and load balancer/HTTP router
XAMPP – Package of web applications including Apache and MariaDB
Zope – Web application server
Web search engines
Searx – Self-hostable metasearch engine
YaCy – P2P-based search engine
Other networking programs
JXplorer – LDAP client
Nextcloud – A fork of ownCloud
OpenLDAP – LDAP server
ownCloud – File share and sync server
Wireshark – Network monitor
Office software
Text editors
Spreadsheet software
Office suites
Apache OpenOffice – The cross platform office productivity suite from Apache Software Foundation (ASF) consists of programs for word processing, spreadsheets, presentation, diagrams and drawings, databases, etc.
Calligra Suite – The office productivity suite from KDE consists of programs for word processing, spreadsheets, presentation, databases, vector graphics, and digital painting
Collabora Online – Enterprise-ready edition of LibreOffice, web application, mobile phone, tablet, Chromebook and desktop (Windows, macOS, Linux)
LibreOffice – The cross platform office productivity suite from The Document Foundation (TDF) consists of programs for word processing, spreadsheets, presentation, diagrams and drawings, databases, etc.
OnlyOffice Desktop Editors – An open-source offline edition of the cloud-based ONLYOFFICE suite
Operating systems
Be advised that available distributions of these systems can contain, or offer to build and install, added software that is neither free software nor open-source.
BSD: FreeBSD, OpenBSD, NetBSD, GhostBSD, TrueNAS, MidnightBSD, DragonFly BSD, OPNsense, pfSense, XigmaNAS, among others.
GrapheneOS
Kali NetHunter
Linux: Debian, Ubuntu, Manjaro, Fedora, openSUSE, antiX, NixOS, Kali, Alpine, Tails, Mageia, Slackware, Gentoo, BlackArch, among others.
LineageOS: An android-based operative system for tablets and mobile phones.
GNU Hurd
Mobian
Plasma Mobile
PostmarketOS
PureOS
Ubuntu Touch
Redox OS
FreeDOS – a free OS compatible with IBM PC DOS and Microsoft's MS-DOS
ReactOS – an open-source OS intended to run the same software as Windows, originally designed to simulate Windows NT 4.0, later aiming at Windows 7 compatibility. It has been in the development stage since 1996.
Emulation and Virtualization
AppleWin
DOSBox – DOS programs emulator (including PC games)
GNOME Boxes
Hercules (emulator)
Kernel-based Virtual Machine
QEMU
VirtualBox – hosted hypervisor for x86 virtualization
Personal information managers
Chandler – Developed by the Open Source Applications Foundation (OSAF)
KAddressBook
Kontact
KOrganizer
Mozilla Calendar – Mozilla-based, multi-platform calendar program
GNOME Evolution
Perkeep – Personal data store for pictures
Project.net – Commercial project management
TeamLab – Platform for project management and collaboration
Programming language support
Bug trackers
Bugzilla
Mantis
Mindquarry
Redmine
Trac
Code generators
Bison
CodeSynthesis XSD – XML Data Binding compiler for C++
CodeSynthesis XSD/e – Validating XML parser/serializer and C++ XML Data Binding generator for mobile and embedded systems
Flex lexical analyser – Generates lexical analyzers
Open Scene Graph – 3D graphics application programming interface
OpenSCDP – Open Smart Card Development Platform
SableCC – Parser generator for Java and .NET
SWIG – Simplified Wrapper and Interface Generator for several languages
^txt2regex$
xmlbeansxx – XML Data Binding code generator for C++
YAKINDU Statechart Tools – Statechart code generator for C++ and Java
Documentation generators
Doxygen – Tool for writing software reference documentation. The documentation is written within code
Mkd – Extracts software documentation from source code files, pseudocode, or comments
Natural Docs – Claims to use a more natural language as input from the comments, hence its name
Configuration software
Autoconf
Automake
CMake
Debuggers (for testing and trouble-shooting)
GNU Debugger – A portable debugger that runs on many Unix-like systems
Memtest86 – Stress-tests RAM on x86 machines
Xnee – Record and replay tests
Integrated development environments
Version control systems
Reference management software
Risk Management
Active Agenda – Operational risk management and Rapid application development platform
Science
Bioinformatics
Cheminformatics
Chemistry Development Kit
JOELib
OpenBabel
Electronic lab notebooks
Jupyter
Geographic information systems
Geoscience
Grid computing
Microscope image processing
CellProfiler – Automatic microscopic analysis, aimed at individuals lacking training in computer vision
Endrov – Java-based plugin architecture designed to analyse complex spatio-temporal image data
Fiji – ImageJ-based image processing
Ilastik – Image-classification and segmentation software
ImageJ – Image processing application developed at the National Institutes of Health
IMOD – 2D and 3D analysis of electron microscopy data
ITK – Development framework used for creation of image segmentation and registration programs
KNIME – Data analytics, reporting, and integration platform
VTK – C++ toolkit for 3D computer graphics, image processing, and visualisation
3DSlicer – Medical image analysis and visualisation
Molecular dynamics
GROMACS – Protein, lipid, and nucleic acid simulation
LAMMPS – Molecular dynamics software
MDynaMix – General-purpose molecular dynamics, simulating mixtures of molecules
ms2 – molecular dynamics and Monte Carlo simulation package to predict thermophysical properties of fluids
NWChem – Quantum chemical and molecular dynamics software
Molecule viewer
Avogadro – Plugin-extensible molecule visualisation
BALLView – Molecular modeling and visualisation
Jmol – 3D representation of molecules in many formats, for teaching use
Molekel – Molecule viewing software
MeshLab – Able to import PDB dataset and build up surfaces from them
PyMOL – High-quality representations of small molecules and biological macromolecules
QuteMol – Interactive molecule representations offering an array of innovative OpenGL visual effects
RasMol – Visualizes biological macromolecules
Nanotechnology
Ninithi – Visualise and analyse carbon allotropes, such as carbon nanotube, Fullerene, graphene nanoribbons
Plotting
Veusz
Quantum chemistry
CP2K – Atomistic and molecular simulation of solid-state, liquid, molecular, and biological systems
Screencast
recordMyDesktop
Screensavers
BOINC
Electric Sheep
XScreenSaver
Simulation software
List of free and open source simulation software
Statistics
R – Statistics software
LimeSurvey – Online survey system
Theology
Bible study tools
Go Bible – A free Bible viewer application for Java mobile phones
Marcion – Coptic–English/Czech dictionary
OpenLP – A worship presentation program licensed under the GNU General Public License
The SWORD Project – The CrossWire Bible Society's free software project
Typesetting
Web conferencing
Jitsi Meet
OpenMeetings
Conference XP
Jami
BigBlueButton
See also
Open-source software
Open-source license
GNOME Core Applications
List of GNU packages
List of KDE applications
List of formerly proprietary software
List of Unix commands
General directories
AlternativeTo
CodePlex
Free Software Directory
Freecode
Open Hub
SourceForge
References
External links
Open Source Software Directory (OSSD), a collection of FOSS organized by target audience.
List of open-source programs (LOOP) for Windows, maintained by the Ubuntu Documentation Project.
The OSSwin Project, a list of free and open-source software for Windows
Apache Project List
Apache Projects Directory
Software - GNU Project - Free Software Foundation
Free Software Directory
Free software lists and comparisons
Lists of software | List of free and open-source software packages | [
"Technology"
] | 7,600 | [
"Computing-related lists",
"Lists of software"
] |
43,315 | https://en.wikipedia.org/wiki/List%20of%20mail%20server%20software | This is a list of mail server software: mail transfer agents, mail delivery agents, and other computer software which provide e-mail.
Product statistics
All such figures are necessarily estimates because data about mail server share is difficult to obtain; there are few reliable primary sources—and no agreed methodologies for its collection.
Surveys probing Internet-exposed systems typically attempt to identify systems via their banner, or other identifying features. In such surveys, Postfix and Exim appeared to be the overwhelming leaders in mail server types, with greater than 92% share between them, having come to prominence before 2010 in each case. While such methods are effective at identifying mail server share for receiving systems, most large-scale sending environments are not listening for traffic on the public internet and will not be counted using such methodologies.
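As a rough illustration of banner-based identification (this is a hedged sketch, not part of any cited survey methodology; the host name below is a placeholder), a probe can simply open TCP port 25 and read the SMTP greeting, which frequently names the server software:

```python
# Minimal sketch of banner grabbing for mail server identification.
# "example.org" is a placeholder host, not a real survey target.
import socket

def read_smtp_banner(host, port=25, timeout=5):
    """Connect to the SMTP port and return the server's greeting line."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        banner = sock.recv(512).decode("ascii", errors="replace").strip()
    return banner   # e.g. a greeting of the form "220 mail.example.org ESMTP Postfix"

print(read_smtp_banner("example.org"))
```

Real surveys add retries and parsing of the greeting text, and by construction they only see hosts that accept inbound connections.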
SMTP
POP/IMAP
JMAP
Mail filtering
Mail server packages
See also
Comparison of mail servers
Message transfer agent
References
Notes
Message transfer agents
Mail servers | List of mail server software | [
"Technology"
] | 190 | [
"Computing-related lists",
"Internet-related lists"
] |
43,322 | https://en.wikipedia.org/wiki/William%20W.%20Tunnicliffe | William Warren Tunnicliffe (April 22, 1922 – September 12, 1996) is credited by Charles Goldfarb as being the first person (1967) to articulate the idea of separating the definition of formatting from the structure of content in electronic documents (separation of presentation and content).
In September 1967, during a meeting at the Canadian Government Printing Office, Tunnicliffe gave a presentation on the separation of information content of documents from their format. In the 1970s, Tunnicliffe led the development of a standard called GenCode for the publishing industry. He served as the first chair of the International Organization for Standardization committee that developed the first international standard for markup languages, ISO 8879.
Tunnicliffe was a member and former chairman of the Printing Industries of America, and held the rank of captain in the US Navy and Navy Reserves until 1982.
See also
Markup language
References
Sources
The SGML Handbook, Goldfarb, pg 567, on the Generic Coding Concept.
External links
SGML: In memory of William W. Tunnicliffe
1922 births
1996 deaths
United States Navy captains
United States Navy reservists
Worcester Polytechnic Institute alumni
Harvard University alumni
20th-century American engineers | William W. Tunnicliffe | [
"Technology"
] | 242 | [
"Computing stubs",
"Computer specialist stubs"
] |
43,325 | https://en.wikipedia.org/wiki/Probability%20space | In probability theory, a probability space or a probability triple is a mathematical construct that provides a formal model of a random process or "experiment". For example, one can define a probability space which models the throwing of a .
A probability space consists of three elements:
A sample space, Ω, which is the set of all possible outcomes.
An event space, ℱ, which is a set of events, an event being a set of outcomes in the sample space.
A probability function, P, which assigns, to each event in the event space, a probability, which is a number between 0 and 1 (inclusive).
In order to provide a model of probability, these elements must satisfy probability axioms.
In the example of the throw of a standard die,
The sample space is typically the set {1, 2, 3, 4, 5, 6}, where each element in the set is a label which represents the outcome of the die landing on that label. For example, 1 represents the outcome that the die lands on 1.
The event space could be the set of all subsets of the sample space, which would then contain simple events such as {5} ("the die lands on 5"), as well as complex events such as {2, 4, 6} ("the die lands on an even number").
The probability function would then map each event to the number of outcomes in that event divided by 6 – so for example, {5} would be mapped to 1/6, and {2, 4, 6} would be mapped to 3/6 = 1/2 (see the sketch after this list).
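A minimal Python sketch of this die example (illustrative only; the names Omega, power_set, and prob are not from the article) builds the sample space, the power-set event space, and the uniform probability function:

```python
# Illustrative sketch of the fair-die probability space.
from itertools import chain, combinations
from fractions import Fraction

Omega = frozenset({1, 2, 3, 4, 5, 6})      # sample space of the die

def power_set(s):
    """All subsets of s: the largest possible event space."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

events = power_set(Omega)                   # event space: every subset is an event

def prob(event):
    """Uniform probability: number of outcomes in the event divided by 6."""
    return Fraction(len(event), len(Omega))

print(len(events))                          # 64 events in total
print(prob(frozenset({5})))                 # 1/6  ("the die lands on 5")
print(prob(frozenset({2, 4, 6})))           # 1/2  ("the die lands on an even number")
```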
When an experiment is conducted, it results in exactly one outcome ω from the sample space Ω. All the events in the event space that contain the selected outcome ω are said to "have occurred". The probability function must be so defined that if the experiment were repeated arbitrarily many times, the number of occurrences of each event, as a fraction of the total number of experiments, will most likely tend towards the probability assigned to that event.
The Soviet mathematician Andrey Kolmogorov introduced the notion of a probability space and the axioms of probability in the 1930s. In modern probability theory, there are alternative approaches for axiomatization, such as the algebra of random variables.
Introduction
A probability space is a mathematical triplet (Ω, ℱ, P) that presents a model for a particular class of real-world situations.
As with other models, its author ultimately defines which elements Ω, ℱ, and P will contain.
The sample space is the set of all possible outcomes. An outcome is the result of a single execution of the model. Outcomes may be states of nature, possibilities, experimental results and the like. Every instance of the real-world situation (or run of the experiment) must produce exactly one outcome. If outcomes of different runs of an experiment differ in any way that matters, they are distinct outcomes. Which differences matter depends on the kind of analysis we want to do. This leads to different choices of sample space.
The σ-algebra is a collection of all the events we would like to consider. This collection may or may not include each of the elementary events. Here, an "event" is a set of zero or more outcomes; that is, a subset of the sample space. An event is considered to have "happened" during an experiment when the outcome of the latter is an element of the event. Since the same outcome may be a member of many events, it is possible for many events to have happened given a single outcome. For example, when the trial consists of throwing two dice, the set of all outcomes with a sum of 7 pips may constitute an event, whereas outcomes with an odd number of pips may constitute another event. If the outcome is the element of the elementary event of two pips on the first die and five on the second, then both of the events, "7 pips" and "odd number of pips", are said to have happened.
The probability measure P is a set function returning an event's probability. A probability is a real number between zero (impossible events have probability zero, though probability-zero events are not necessarily impossible) and one (the event happens almost surely, with almost total certainty). Thus P is a function P : ℱ → [0, 1]. The probability measure function must satisfy two simple requirements: First, the probability of a countable union of mutually exclusive events must be equal to the countable sum of the probabilities of each of these events. For example, the probability of the union of the mutually exclusive events {heads} and {tails} in the random experiment of one coin toss, P({heads} ∪ {tails}), is the sum of the probability for {heads} and the probability for {tails}, P({heads}) + P({tails}). Second, the probability of the sample space Ω must be equal to 1 (which accounts for the fact that, given an execution of the model, some outcome must occur). In the previous example the probability of the set of outcomes {heads, tails} must be equal to one, because it is entirely certain that the outcome will be either heads or tails (the model neglects any other possibility) in a single coin toss.
Not every subset of the sample space must necessarily be considered an event: some of the subsets are simply not of interest, others cannot be "measured". This is not so obvious in a case like a coin toss. In a different example, one could consider javelin throw lengths, where the events typically are intervals like "between 60 and 65 meters" and unions of such intervals, but not sets like the "irrational numbers between 60 and 65 meters".
Definition
In short, a probability space is a measure space such that the measure of the whole space is equal to one.
The expanded definition is the following: a probability space is a triple (Ω, ℱ, P) consisting of:
the sample space Ω – an arbitrary non-empty set,
the σ-algebra ℱ (also called σ-field) – a set of subsets of Ω, called events, such that:
ℱ contains the sample space: Ω ∈ ℱ,
ℱ is closed under complements: if A ∈ ℱ, then also (Ω ∖ A) ∈ ℱ,
ℱ is closed under countable unions: if Aᵢ ∈ ℱ for i = 1, 2, …, then also (A₁ ∪ A₂ ∪ …) ∈ ℱ.
The corollary from the previous two properties and De Morgan's law is that ℱ is also closed under countable intersections: if Aᵢ ∈ ℱ for i = 1, 2, …, then also (A₁ ∩ A₂ ∩ …) ∈ ℱ.
the probability measure P – a function P : ℱ → [0, 1] such that:
P is countably additive (also called σ-additive): if {Aᵢ} ⊆ ℱ is a countable collection of pairwise disjoint sets, then P(A₁ ∪ A₂ ∪ …) = P(A₁) + P(A₂) + ⋯,
the measure of the entire sample space is equal to one: P(Ω) = 1 (see the sketch after this list).
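These requirements can be checked mechanically on a small finite space, where countable unions reduce to finite unions. The following hedged Python sketch (all helper names are illustrative, and only finite additivity is tested) verifies the σ-algebra closure properties and additivity for a single fair coin toss:

```python
# Hedged sketch: axiom checks on a finite space; countable unions become finite.
from itertools import chain, combinations
from fractions import Fraction

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_sigma_algebra(Omega, F):
    F = set(F)
    if frozenset(Omega) not in F:                      # contains the sample space
        return False
    if any(frozenset(Omega) - A not in F for A in F):  # closed under complements
        return False
    # closed under unions of every (finite) sub-collection of events
    return all(frozenset().union(*c) in F
               for r in range(1, len(F) + 1)
               for c in combinations(list(F), r))

def is_finitely_additive_probability(Omega, F, P):
    if P[frozenset(Omega)] != 1:                       # P(Omega) = 1
        return False
    return all(P[A | B] == P[A] + P[B]                 # additivity on disjoint events
               for A in F for B in F if not (A & B))

Omega = {"H", "T"}
F = subsets(Omega)                                     # the power set of Omega
P = {A: Fraction(len(A), 2) for A in F}                # fair coin
print(is_sigma_algebra(Omega, F))                      # True
print(is_finitely_additive_probability(Omega, F, P))   # True
```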
Discrete case
Discrete probability theory needs only at most countable sample spaces Ω. Probabilities can be ascribed to points of Ω by the probability mass function p : Ω → [0, 1] such that Σω∈Ω p(ω) = 1. All subsets of Ω can be treated as events (thus, ℱ = 2^Ω is the power set). The probability measure takes the simple form
P(A) = Σω∈A p(ω)  for all A ⊆ Ω.  (⁎)
The greatest σ-algebra ℱ = 2^Ω describes the complete information. In general, a σ-algebra ℱ ⊆ 2^Ω corresponds to a finite or countable partition Ω = B₁ ⊔ B₂ ⊔ …, the general form of an event A ∈ ℱ being a (finite or countable) union of some of the parts Bᵢ. See also the examples.
The case p(ω) = 0 is permitted by the definition, but rarely used, since such ω can safely be excluded from the sample space.
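A short Python sketch of the discrete case (the probability mass function below is invented for illustration) shows how a mass function p induces the measure of equation (⁎):

```python
# Hedged sketch; the mass function values are made up for illustration.
from fractions import Fraction

p = {"rain": Fraction(1, 2), "sun": Fraction(1, 3), "snow": Fraction(1, 6)}
assert sum(p.values()) == 1                 # a valid probability mass function

def P(A):
    """Measure induced by p, as in equation (⁎): P(A) = sum of p(w) for w in A."""
    return sum(p[w] for w in A)

print(P({"rain", "snow"}))                  # 2/3
print(P(set(p)))                            # 1, the whole sample space
```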
General case
If Ω is uncountable, still, it may happen that P({ω}) ≠ 0 for some ω; such ω are called atoms. They are an at most countable (maybe empty) set, whose probability is the sum of probabilities of all atoms. If this sum is equal to 1 then all other points can safely be excluded from the sample space, returning us to the discrete case. Otherwise, if the sum of probabilities of all atoms is between 0 and 1, then the probability space decomposes into a discrete (atomic) part (maybe empty) and a non-atomic part.
Non-atomic case
If P({ω}) = 0 for all ω ∈ Ω (in this case, Ω must be uncountable, because otherwise P(Ω) = 1 could not be satisfied), then equation (⁎) fails: the probability of a set is not necessarily the sum over the probabilities of its elements, as summation is only defined for countable numbers of elements. This makes the probability space theory much more technical. A formulation stronger than summation, measure theory, is applicable. Initially the probabilities are ascribed to some "generator" sets (see the examples). Then a limiting procedure allows assigning probabilities to sets that are limits of sequences of generator sets, or limits of limits, and so on. All these sets form the σ-algebra ℱ. For technical details see Carathéodory's extension theorem. Sets belonging to ℱ are called measurable. In general they are much more complicated than generator sets, but much better than non-measurable sets.
Complete probability space
A probability space (Ω, ℱ, P) is said to be a complete probability space if for all B ∈ ℱ with P(B) = 0 and all A ⊆ B one has A ∈ ℱ. Often, the study of probability spaces is restricted to complete probability spaces.
Examples
Discrete examples
Example 1
If the experiment consists of just one flip of a fair coin, then the outcome is either heads or tails: Ω = {H, T}. The σ-algebra ℱ = 2^Ω contains 2² = 4 events, namely: {H} ("heads"), {T} ("tails"), ∅ ("neither heads nor tails"), and {H, T} ("either heads or tails"); in other words, ℱ = {∅, {H}, {T}, {H, T}}. There is a fifty percent chance of tossing heads and fifty percent for tails, so the probability measure in this example is P(∅) = 0, P({H}) = 0.5, P({T}) = 0.5, P({H, T}) = 1.
Example 2
The fair coin is tossed three times. There are 8 possible outcomes: Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT} (here "HTH" for example means that the first time the coin landed heads, the second time tails, and the last time heads again). The complete information is described by the σ-algebra ℱ = 2^Ω of 2⁸ = 256 events, where each of the events is a subset of Ω.
Alice knows the outcome of the second toss only. Thus her incomplete information is described by the partition Ω = A₁ ⊔ A₂ = {HHH, HHT, THH, THT} ⊔ {HTH, HTT, TTH, TTT}, where ⊔ is the disjoint union, and the corresponding σ-algebra ℱ_Alice = {∅, A₁, A₂, Ω}. Bryan knows only the total number of tails. His partition contains four parts: Ω = B₀ ⊔ B₁ ⊔ B₂ ⊔ B₃; accordingly, his σ-algebra ℱ_Bryan contains 2⁴ = 16 events.
The two σ-algebras are incomparable: neither ℱ_Alice ⊆ ℱ_Bryan nor ℱ_Bryan ⊆ ℱ_Alice; both are sub-σ-algebras of 2^Ω.
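The correspondence between partitions and σ-algebras in this example can be made concrete with a small Python sketch (the names below, such as sigma_algebra_from_partition, are mine and not standard library functions): the σ-algebra generated by a finite partition consists of all unions of its parts.

```python
# Illustrative sketch: sigma-algebras generated by Alice's and Bryan's partitions.
from itertools import chain, combinations, product

Omega = {"".join(t) for t in product("HT", repeat=3)}        # the 8 outcomes

def sigma_algebra_from_partition(parts):
    """All unions of parts of the partition (including the empty union)."""
    parts = [frozenset(p) for p in parts]
    return {frozenset().union(*c) for c in chain.from_iterable(
        combinations(parts, r) for r in range(len(parts) + 1))}

# Alice's partition: by the outcome of the second toss.
alice = [{w for w in Omega if w[1] == "H"}, {w for w in Omega if w[1] == "T"}]
# Bryan's partition: by the total number of tails (0, 1, 2 or 3).
bryan = [{w for w in Omega if w.count("T") == k} for k in range(4)]

F_alice = sigma_algebra_from_partition(alice)
F_bryan = sigma_algebra_from_partition(bryan)
print(len(F_alice), len(F_bryan))              # 4 16
print(F_alice <= F_bryan, F_bryan <= F_alice)  # False False: incomparable
```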
Example 3
If 100 voters are to be drawn randomly from among all voters in California and asked whom they will vote for governor, then the set of all sequences of 100 Californian voters would be the sample space Ω. We assume that sampling without replacement is used: only sequences of 100 different voters are allowed. For simplicity an ordered sample is considered, that is a sequence (Alice, Bryan) is different from (Bryan, Alice). We also take for granted that each potential voter knows exactly his/her future choice, that is he/she does not choose randomly.
Alice knows only whether or not Arnold Schwarzenegger has received at least 60 votes. Her incomplete information is described by the σ-algebra that contains: (1) the set of all sequences in Ω where at least 60 people vote for Schwarzenegger; (2) the set of all sequences where fewer than 60 vote for Schwarzenegger; (3) the whole sample space Ω; and (4) the empty set ∅.
Bryan knows the exact number of voters who are going to vote for Schwarzenegger. His incomplete information is described by the corresponding partition Ω = B₀ ⊔ B₁ ⊔ … ⊔ B₁₀₀, and the σ-algebra ℱ_Bryan consists of 2¹⁰¹ events.
In this case, Alice's σ-algebra is a subset of Bryan's: ℱ_Alice ⊆ ℱ_Bryan. Bryan's σ-algebra is in turn a subset of the much larger "complete information" σ-algebra 2^Ω consisting of 2^(n(n−1)⋯(n−99)) events, where n is the number of all potential voters in California.
Non-atomic examples
Example 4
A number between 0 and 1 is chosen at random, uniformly. Here Ω = [0,1], ℱ is the σ-algebra of Borel sets on Ω, and P is the Lebesgue measure on [0,1].
In this case, the open intervals of the form (a, b), where 0 < a < b < 1, could be taken as the generator sets. Each such set can be ascribed the probability P((a, b)) = b − a, which generates the Lebesgue measure on [0,1], and the Borel σ-algebra on Ω.
Example 5
A fair coin is tossed endlessly. Here one can take Ω = {0,1}∞, the set of all infinite sequences of numbers 0 and 1. Cylinder sets may be used as the generator sets. Each such set describes an event in which the first n tosses have resulted in a fixed sequence (a₁, …, aₙ), and the rest of the sequence may be arbitrary. Each such event can be naturally given the probability of 2⁻ⁿ.
These two non-atomic examples are closely related: a sequence (x₁, x₂, …) ∈ {0,1}∞ leads to the number 2⁻¹x₁ + 2⁻²x₂ + ⋯ ∈ [0,1]. This is not a one-to-one correspondence between {0,1}∞ and [0,1] however: it is an isomorphism modulo zero, which allows for treating the two probability spaces as two forms of the same probability space. In fact, all non-pathological non-atomic probability spaces are the same in this sense. They are so-called standard probability spaces. Basic applications of probability spaces are insensitive to standardness. However, non-discrete conditioning is easy and natural on standard probability spaces, otherwise it becomes obscure.
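A brief Python sketch (function names are illustrative) makes the correspondence concrete: the cylinder set fixing the first n tosses has probability 2⁻ⁿ, which equals the length of the dyadic interval of numbers whose binary expansion begins with that prefix.

```python
# Hedged sketch of the correspondence between Examples 4 and 5.
from fractions import Fraction

def cylinder_probability(prefix):
    """P of the event 'the first n fair tosses equal prefix'."""
    return Fraction(1, 2 ** len(prefix))

def dyadic_interval(prefix):
    """Interval [a, b) of reals whose binary expansion begins with prefix."""
    a = sum(Fraction(bit, 2 ** (i + 1)) for i, bit in enumerate(prefix))
    return a, a + Fraction(1, 2 ** len(prefix))

prefix = (1, 0, 1)
a, b = dyadic_interval(prefix)
print(cylinder_probability(prefix), b - a)   # both 1/8
print(a, b)                                  # 5/8 3/4
```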
Related concepts
Probability distribution
Random variables
A random variable X is a measurable function X: Ω → S from the sample space Ω to another measurable space S called the state space.
If A ⊂ S, the notation Pr(X ∈ A) is a commonly used shorthand for P({ω ∈ Ω : X(ω) ∈ A}).
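On a discrete space this shorthand can be computed directly, as in the following minimal sketch (the parity variable X and the other names are illustrative, not from the article):

```python
# Minimal sketch: Pr(X in A) is the probability of the preimage {w : X(w) in A}.
from fractions import Fraction

Omega = {1, 2, 3, 4, 5, 6}
P = {w: Fraction(1, 6) for w in Omega}       # fair die
X = lambda w: w % 2                          # state space S = {0, 1}: parity of the roll

def pr_X_in(A):
    return sum(P[w] for w in Omega if X(w) in A)

print(pr_X_in({1}))                          # 1/2: probability the roll is odd
```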
Defining the events in terms of the sample space
If Ω is countable, we almost always define ℱ as the power set of Ω, i.e. ℱ = 2^Ω, which is trivially a σ-algebra and the biggest one we can create using Ω. We can therefore omit ℱ and just write (Ω, P) to define the probability space.
On the other hand, if Ω is uncountable and we use ℱ = 2^Ω we get into trouble defining our probability measure P because ℱ is too "large", i.e. there will often be sets to which it will be impossible to assign a unique measure. In this case, we have to use a smaller σ-algebra ℱ, for example the Borel algebra of Ω, which is the smallest σ-algebra that makes all open sets measurable.
Conditional probability
Kolmogorov's definition of probability spaces gives rise to the natural concept of conditional probability. Every set A with non-zero probability (that is, P(A) > 0) defines another probability measure
P(B | A) = P(A ∩ B) / P(A)
on the space. This is usually pronounced as the "probability of B given A".
For any event A such that P(A) > 0, the function Q defined by Q(B) = P(B | A) for all events B is itself a probability measure.
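A minimal sketch (with an assumed single-die setup) showing that conditioning on an event of positive probability yields a new probability measure:

```python
from fractions import Fraction

omega = range(1, 7)                          # one fair die
P = {w: Fraction(1, 6) for w in omega}
A = {w for w in omega if w % 2 == 0}         # "the roll is even"
B = {w for w in omega if w > 3}              # "the roll exceeds 3"

def prob(E):
    return sum(P[w] for w in E)

def cond(E, given):
    return prob(E & given) / prob(given)     # P(E | given)

print(cond(B, A))                            # Fraction(2, 3)
print(sum(cond({w}, A) for w in omega))      # total mass 1: a probability measure
```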
Independence
Two events, A and B, are said to be independent if P(A ∩ B) = P(A)P(B).
Two random variables, X and Y, are said to be independent if any event defined in terms of X is independent of any event defined in terms of Y. Formally, they generate independent σ-algebras, where two σ-algebras G and H, which are subsets of F, are said to be independent if any element of G is independent of any element of H.
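A short check of this definition on an assumed two-dice space: an event defined in terms of the first die and an event defined in terms of the second satisfy the product rule.

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))
P = {w: Fraction(1, 36) for w in omega}

def prob(pred):
    return sum(P[w] for w in omega if pred(w))

first_low = lambda w: w[0] <= 2              # depends only on the first die
second_mid = lambda w: w[1] in (3, 4, 5)     # depends only on the second die

lhs = prob(lambda w: first_low(w) and second_mid(w))
print(lhs, prob(first_low) * prob(second_mid))   # both are Fraction(1, 6)
```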
Mutual exclusivity
Two events, A and B, are said to be mutually exclusive or disjoint if the occurrence of one implies the non-occurrence of the other, i.e., their intersection is empty (A ∩ B = ∅). This is a stronger condition than the probability of their intersection being zero.
If A and B are disjoint events, then P(A ∪ B) = P(A) + P(B). This extends to a (finite or countably infinite) sequence of events. However, the probability of the union of an uncountable set of events is not the sum of their probabilities. For example, if Z is a normally distributed random variable, then P(Z = x) is 0 for any x, but P(Z ∈ R) = 1.
The event A ∩ B is referred to as "A and B", and the event A ∪ B as "A or B".
See also
Space (mathematics)
Measure space
Fuzzy measure theory
Filtered probability space
Talagrand's concentration inequality
References
Bibliography
Pierre Simon de Laplace (1812) Analytical Theory of Probability
The first major treatise blending calculus with probability theory, originally in French: Théorie Analytique des Probabilités.
Andrei Nikolajevich Kolmogorov (1950) Foundations of the Theory of Probability
The modern measure-theoretic foundation of probability theory; the original German version (Grundbegriffe der Wahrscheinlichkeitrechnung) appeared in 1933.
Harold Jeffreys (1939) The Theory of Probability
An empiricist, Bayesian approach to the foundations of probability theory.
Edward Nelson (1987) Radically Elementary Probability Theory
Foundations of probability theory based on nonstandard analysis. Downloadable. http://www.math.princeton.edu/~nelson/books.html
Patrick Billingsley: Probability and Measure, John Wiley and Sons, New York, Toronto, London, 1979.
Henk Tijms (2004) Understanding Probability
A lively introduction to probability theory for the beginner, Cambridge Univ. Press.
David Williams (1991) Probability with martingales
An undergraduate introduction to measure-theoretic probability, Cambridge Univ. Press.
External links
Animation demonstrating probability space of dice
Virtual Laboratories in Probability and Statistics (principal author Kyle Siegrist), especially, Probability Spaces
Citizendium
Complete probability space
Experiment (probability theory)
Space (mathematics) | Probability space | [
"Mathematics"
] | 3,420 | [
"Mathematical structures",
"Mathematical objects",
"Space (mathematics)"
] |
43,327 | https://en.wikipedia.org/wiki/Borel%20set | In mathematics, a Borel set is any set in a topological space that can be formed from open sets (or, equivalently, from closed sets) through the operations of countable union, countable intersection, and relative complement. Borel sets are named after Émile Borel.
For a topological space X, the collection of all Borel sets on X forms a σ-algebra, known as the Borel algebra or Borel σ-algebra. The Borel algebra on X is the smallest σ-algebra containing all open sets (or, equivalently, all closed sets).
Borel sets are important in measure theory, since any measure defined on the open sets of a space, or on the closed sets of a space, must also be defined on all Borel sets of that space. Any measure defined on the Borel sets is called a Borel measure. Borel sets and the associated Borel hierarchy also play a fundamental role in descriptive set theory.
In some contexts, Borel sets are defined to be generated by the compact sets of the topological space, rather than the open sets. The two definitions are equivalent for many well-behaved spaces, including all Hausdorff σ-compact spaces, but can be different in more pathological spaces.
Generating the Borel algebra
In the case that X is a metric space, the Borel algebra in the first sense may be described generatively as follows.
For a collection T of subsets of X (that is, for any subset of the power set P(X) of X), let
T_σ be all countable unions of elements of T
T_δ be all countable intersections of elements of T
Now define by transfinite induction a sequence G^m, where m is an ordinal number, in the following manner:
For the base case of the definition, let G^0 be the collection of open subsets of X.
If i is not a limit ordinal, then i has an immediately preceding ordinal i − 1. Let G^i = [G^(i−1)]_δσ, that is, all countable unions of countable intersections of elements of G^(i−1).
If i is a limit ordinal, set G^i = ∪_(j < i) G^j.
The claim is that the Borel algebra is G^(ω1), where ω1 is the first uncountable ordinal number. That is, the Borel algebra can be generated from the class of open sets by iterating the operation G ↦ [G]_δσ
to the first uncountable ordinal.
To prove this claim, note that any open set in a metric space is the union of an increasing sequence of closed sets. In particular, complementation of sets maps G^m into itself for any limit ordinal m; moreover, if m is an uncountable limit ordinal, G^m is closed under countable unions.
For each Borel set B, there is some countable ordinal α_B such that B can be obtained by iterating the operation over α_B steps. However, as B varies over all Borel sets, α_B will vary over all the countable ordinals, and thus the first ordinal at which all the Borel sets are obtained is ω1, the first uncountable ordinal.
The resulting sequence of sets is termed the Borel hierarchy.
Example
An important example, especially in the theory of probability, is the Borel algebra on the set of real numbers. It is the algebra on which the Borel measure is defined. Given a real random variable defined on a probability space, its probability distribution is by definition also a measure on the Borel algebra.
The Borel algebra on the reals is the smallest σ-algebra on R that contains all the intervals.
In the construction by transfinite induction, it can be shown that, in each step, the number of sets is, at most, the cardinality of the continuum. So, the total number of Borel sets is less than or equal to ℵ1 · 2^ℵ0 = 2^ℵ0.
In fact, the cardinality of the collection of Borel sets is equal to that of the continuum (compare to the number of Lebesgue measurable sets that exist, which is strictly larger and equal to 2^(2^ℵ0)).
Standard Borel spaces and Kuratowski theorems
Let X be a topological space. The Borel space associated to X is the pair (X,B), where B is the σ-algebra of Borel sets of X.
George Mackey defined a Borel space somewhat differently, writing that it is "a set together with a distinguished σ-field of subsets called its Borel sets." However, modern usage is to call the distinguished sub-algebra the measurable sets and such spaces measurable spaces. The reason for this distinction is that the Borel sets are the σ-algebra generated by open sets (of a topological space), whereas Mackey's definition refers to a set equipped with an arbitrary σ-algebra. There exist measurable spaces that are not Borel spaces, for any choice of topology on the underlying space.
Measurable spaces form a category in which the morphisms are measurable functions between measurable spaces. A function f : X → Y is measurable if it pulls back measurable sets, i.e., for all measurable sets B in Y, the set f^(−1)(B) is measurable in X.
Theorem. Let X be a Polish space, that is, a topological space such that there is a metric d on X that defines the topology of X and that makes X a complete separable metric space. Then X as a Borel space is isomorphic to one of
R,
Z,
a finite space.
(This result is reminiscent of Maharam's theorem.)
Considered as Borel spaces, the real line R, the union of R with a countable set, and R^n are isomorphic.
A standard Borel space is the Borel space associated to a Polish space. A standard Borel space is characterized up to isomorphism by its cardinality, and any uncountable standard Borel space has the cardinality of the continuum.
For subsets of Polish spaces, Borel sets can be characterized as those sets that are the ranges of continuous injective maps defined on Polish spaces. Note however, that the range of a continuous noninjective map may fail to be Borel. See analytic set.
Every probability measure on a standard Borel space turns it into a standard probability space.
Non-Borel sets
An example of a subset of the reals that is non-Borel, due to Lusin, is described below. In contrast, an example of a non-measurable set cannot be exhibited, although the existence of such a set is implied, for example, by the axiom of choice.
Every irrational number has a unique representation by an infinite simple continued fraction
x = a0 + 1/(a1 + 1/(a2 + 1/(a3 + ⋯)))
where a0 is some integer and all the other numbers ak are positive integers. Let A be the set of all irrational numbers that correspond to sequences (a0, a1, …) with the following property: there exists an infinite subsequence (a_(k0), a_(k1), …) such that each element is a divisor of the next element. This set A is not Borel. However, it is analytic (all Borel sets are also analytic), and complete in the class of analytic sets. For more details see descriptive set theory and the book by A. S. Kechris (see References), especially Exercise (27.2) on page 209, Definition (22.9) on page 169, Exercise (3.4)(ii) on page 14, and on page 196.
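For illustration only (this is not part of the construction itself), the sketch below computes the first few terms a0, a1, a2, … of the simple continued fraction of a number; floating-point input limits how many terms are reliable.

```python
from fractions import Fraction

def continued_fraction(x, terms=8):
    """First `terms` simple-continued-fraction coefficients of a positive number x."""
    coeffs = []
    for _ in range(terms):
        a = int(x)                 # floor, since x stays positive here
        coeffs.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return coeffs

print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]
print(continued_fraction(2 ** 0.5))             # begins [1, 2, 2, 2, ...]
```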
It is important to note that while the Zermelo–Fraenkel axioms (ZF) are sufficient to formalize the construction of A, it cannot be proven in ZF alone that A is non-Borel. In fact, it is consistent with ZF that the real line is a countable union of countable sets, so that any subset of the real line is a Borel set.
Another non-Borel set is an inverse image of an infinite parity function f : {0, 1}^ω → {0, 1}. However, this is a proof of existence (via the axiom of choice), not an explicit example.
Alternative non-equivalent definitions
According to Paul Halmos, a subset of a locally compact Hausdorff topological space is called a Borel set if it belongs to the smallest σ-ring containing all compact sets.
Norberg and Vervaat redefine the Borel algebra of a topological space X as the σ-algebra generated by its open subsets and its compact saturated subsets. This definition is well-suited for applications in the case where X is not Hausdorff. It coincides with the usual definition if X is second countable or if every compact saturated subset is closed (which is the case in particular if X is Hausdorff).
See also
Notes
References
William Arveson, An Invitation to C*-algebras, Springer-Verlag, 1981. (See Chapter 3 for an excellent exposition of Polish topology)
Richard Dudley, Real Analysis and Probability. Wadsworth, Brooks and Cole, 1989
See especially Sect. 51 "Borel sets and Baire sets".
Halsey Royden, Real Analysis, Prentice Hall, 1988
Alexander S. Kechris, Classical Descriptive Set Theory, Springer-Verlag, 1995 (Graduate texts in Math., vol. 156)
External links
Formal definition of Borel Sets in the Mizar system, and the list of theorems that have been formally proved about it.
Topology
Descriptive set theory | Borel set | [
"Physics",
"Mathematics"
] | 1,876 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
43,333 | https://en.wikipedia.org/wiki/Measurable%20space | In mathematics, a measurable space or Borel space is a basic object in measure theory. It consists of a set and a σ-algebra, which defines the subsets that will be measured.
It captures and generalises intuitive notions such as length, area, and volume with a set of 'points' in the space, but regions of the space are the elements of the σ-algebra, since the intuitive measures are not usually defined for points. The algebra also captures the relationships that might be expected of regions: that a region can be defined as an intersection of other regions, a union of other regions, or the space with the exception of another region.
Definition
Consider a set X and a σ-algebra Σ on X. Then the tuple (X, Σ) is called a measurable space. The elements of Σ are called measurable sets within the measurable space.
Note that in contrast to a measure space, no measure is needed for a measurable space.
Example
Look at the set X = {1, 2, 3}.
One possible σ-algebra would be Σ1 = {X, ∅}.
Then (X, Σ1) is a measurable space. Another possible σ-algebra would be the power set on X: Σ2 = P(X).
With this, a second measurable space on the set X is given by (X, Σ2).
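A small sketch (function and variable names are invented) verifying the defining properties for the two σ-algebras above; on a finite set, closure under complements and pairwise unions suffices.

```python
from itertools import combinations

X = frozenset({1, 2, 3})

def is_sigma_algebra(sets):
    """On a finite set, closure under complement and pairwise union is enough."""
    sets = set(sets)
    return (X in sets
            and all(X - s in sets for s in sets)
            and all(a | b in sets for a in sets for b in sets))

sigma_1 = {frozenset(), X}                                 # the trivial sigma-algebra
sigma_2 = {frozenset(c) for r in range(len(X) + 1)
           for c in combinations(X, r)}                    # the power set of X

print(is_sigma_algebra(sigma_1), is_sigma_algebra(sigma_2))   # True True
print(is_sigma_algebra({frozenset(), frozenset({1}), X}))     # False: {2, 3} is missing
```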
Common measurable spaces
If X is finite or countably infinite, the σ-algebra is most often the power set on X, so Σ = P(X). This leads to the measurable space (X, P(X)).
If X is a topological space, the σ-algebra is most commonly the Borel σ-algebra B(X), so Σ = B(X). This leads to the measurable space (X, B(X)) that is common for all topological spaces such as the real numbers.
Ambiguity with Borel spaces
The term Borel space is used for different types of measurable spaces. It can refer to
any measurable space, so it is a synonym for a measurable space as defined above
a measurable space that is Borel isomorphic to a measurable subset of the real numbers (again with the Borel σ-algebra)
See also
Category of measurable spaces
References
Measure theory
Space (mathematics) | Measurable space | [
"Mathematics"
] | 385 | [
"Mathematical structures",
"Mathematical objects",
"Space (mathematics)"
] |
43,336 | https://en.wikipedia.org/wiki/X.25 | X.25 is an ITU-T standard protocol suite for packet-switched data communication in wide area networks (WAN). It was originally defined by the International Telegraph and Telephone Consultative Committee (CCITT, now ITU-T) in a series of drafts and finalized in a publication known as The Orange Book in 1976.
The protocol suite is designed as three conceptual layers, which correspond closely to the lower three layers of the seven-layer OSI Reference Model, although it was developed several years before the OSI model (1984). It also supports functionality not found in the OSI network layer. An X.25 WAN consists of packet-switching exchange (PSE) nodes as the networking hardware, and leased lines, plain old telephone service connections, or ISDN connections as physical links.
X.25 was popular with telecommunications companies for their public data networks from the late 1970s to 1990s, which provided worldwide coverage. It was also used in financial transaction systems, such as automated teller machines, and by the credit card payment industry. However, most users have since moved to the Internet Protocol Suite (TCP/IP). X.25 is still used, for example by the aviation industry.
History
The CCITT (later ITU-T), the organization responsible for international standardization of telecom services, began developing a standard for packet-switched data communication in the mid-1970s based upon a number of emerging data network projects. Participants in the design of X.25 included engineers from Canada, France, Japan, the UK, and the USA, representing a mix of national PTTs (France, Japan, UK) and private operators (Canada, USA). In particular, the work of Rémi Després contributed significantly to the standard, which was based on a virtual circuit service. A few minor changes, which complemented the proposed specification, were accommodated to enable Larry Roberts to join the agreement. Various updates and additions were worked into the standard, eventually recorded in the ITU series of technical books describing the telecommunication systems. These books were published every fourth year with different-colored covers. The X.25 specification is part of the larger X-Series of recommendations.
How the CCITT standardized virtual circuits
The CCITT appointed a special Rapporteur on packet switching, Halvor Bothner-By, who held an initial meeting in January 1974. This resulted in a question, to be answered by study group (SG) VII for the next CCITT plenary in 1976, which was “Should the packet-mode of operation be provided on public data networks and, if so, how should it be implemented?”. A list of packet switching networks “to be considered” was provided: ARPANET (of the ARPA in the USA), EIN (of the European COST), EPSS (of the British Post Office Telecommunications), RCP (of the French PTT), CYCLADES (of IRIA in France), the NPL network (of the NPL in the UK), the SWIFT network (of the international SWIFT society), and the SITA network (of the international SITA company).
The second Rapporteur meeting, hosted in Oslo by the Norwegian Telecommunications Administration in November 1974, gathered 24 participants, including representatives of other international organizations (ISO, IFIP, ECMA). A document submitted by France “with the active support of a number of European administrations” served as “main basis for discussion in this meeting”. It was then “agreed that two types of services should be considered, a ‘datagram’ service and a ‘virtual call’ service”.
At the third meeting, focus had moved from whether there should be packet-mode networks to whether there could be “a standard for the interface between the network and the computers”.
Starting in January 1975, several bilateral and multilateral meetings took place between network operators having commitments for a packet switching service, in view to draft a common interface specification. Meetings started between the Canadian DATAPAC and the French TRANSPAC, continued with the startup Telenet of the USA, and then with the BPO of the UK.
In March 1975, Halvor Bothner-By produced a list of recommendations to be created, or simply updated, for a packet switching standard to become possible. It was used as a framework at a drafting meeting in Ottawa between engineers of the four operators in the USA, Canada, France, the UK and Japan wishing to have a standard as soon as possible. They prepared contributions to be submitted to SG VII in their name by administrations having voting rights in CCITT. One contribution was an X.2x interface specification, the first version of what would become X.25.
The fourth Rapporteur meeting, in May 1975 in Geneva, had 45 participants and 27 new documents. The Rapporteur asked whether packet switching recommendations should be issued "with a view to making international interworking possible”, "the French administration answered in the affirmative and Canada strongly supported the French proposal". No firm conclusion was however obtained yet.
The fifth Rapporteur meeting, in September 1975 in Geneva, had about 60 participants. After discussions on the proposed virtual circuit interface, numerous issues were left unresolved. Concerning datagrams, ‘It was proposed by Larry Roberts of the US delegation and supported by representatives from France and Canada respectively that the datagram classification be changed from “E” to “A”’, i.e., from essential "to be available internationally” to additional that "may be available in certain countries and internationally”. The Rapporteur's last report expressed doubts “that a standard would be ready for adoption by SG VII”.
At the last meeting of the full SG VII before the CCITT plenary of September 1976, the available draft X.25 raised numerous clarification questions and/or technical objections. SG VII's chairman Vern MacDonald appointed an editor and provided a meeting facility for the weekend. After intense work during the weekend, all issues had been dealt with. For an approval by the full study group, a challenge remained: copies of the updated X.25 draft had to be available in two languages. To get them in due time, Tony Rybczynski of DATAPAC and Paul Guinaudeau of TRANSPAC spent a full night handwriting all negotiated amendments and assembling them with paste and scissors into clean documents. COM VII then reviewed distributed copies, and unanimously approved them for submission to the forthcoming CCITT plenary. At this plenary of September 1976, the X.25 recommendation and the other 10 of SG VII were unanimously approved.
As requested by the USA, an optional datagram service was added to the revised X.25 of 1980, together with an alignment of its link layer, now called LAPB, with a recent evolution of HDLC in ISO. In absence of any public network operator implementing this option, datagrams were finally deleted from X.25 in its update of 1984.
Worldwide public data networks
Publicly accessible X.25 networks, commonly called public data networks, were set up in many countries during the late 1970s and 1980s to lower the cost of accessing various online services. Examples include Iberpac, TRANSPAC, Compuserve, Tymnet, Telenet, Euronet, PSS, Datapac, Datanet 1 and AUSTPAC as well as the International Packet Switched Service. Their combined network had large global coverage during the 1980s and into the 1990s.
Beginning in the early 1990s, in North America, use of X.25 networks (predominated by Telenet and Tymnet) started to be replaced by Frame Relay services offered by national telephone companies. Most systems that required X.25 now use TCP/IP; however, it is possible to transport X.25 over TCP/IP when necessary.
X.25 networks are still in use throughout the world. A variant called AX.25 is used widely by amateur packet radio. Racal Paknet, now known as Widanet, remains in operation in many regions of the world, running on an X.25 protocol base. In some countries, like the Netherlands or Germany, it is possible to use a stripped version of X.25 via the D-channel of an ISDN-2 (or ISDN BRI) connection for low-volume applications such as point-of-sale terminals; but, the future of this service in the Netherlands is uncertain.
X.25 is still used in the aeronautical business (especially in Asia) even though a transition to modern protocols is increasingly important as X.25 hardware becomes increasingly rare and costly. As recently as March 2006, the United States National Airspace Data Interchange Network has used X.25 to interconnect remote airfields with air route traffic control centers.
France was one of the last remaining countries where a commercial end-user service based on X.25 operated. Known as Minitel, it was based on Videotex, itself running on X.25. In 2002, Minitel had about 9 million users; in 2011, when France Télécom announced it would shut down the service by 30 June 2012, it still had about 2 million users in France. As planned, the service was terminated on 30 June 2012, with 800,000 terminals still in operation at the time. An X.25 service was still purchasable from BT in the United Kingdom in 2019.
Architecture
The general concept of the X.25 was to create a universal and global packet-switched network. Much of the X.25 system is a description of the rigorous error correction needed to achieve this, as well as more efficient sharing of capital-intensive physical resources.
The X.25 specification defines only the interface between a subscriber (DTE) and an X.25 network (DCE). X.75, a protocol very similar to X.25, defines the interface between two X.25 networks to allow connections to traverse two or more networks. X.25 does not specify how the network operates internally; many X.25 network implementations used something very similar to X.25 or X.75 internally, but others used quite different protocols. The ISO protocol equivalent to X.25, ISO 8208, is compatible with X.25, but additionally includes provision for two X.25 DTEs to be directly connected to each other with no network in between. By separating the Packet-Layer Protocol, ISO 8208 permits operation over additional networks such as ISO 8802 LLC2 (ISO LAN) and the OSI data link layer.
X.25 originally defined three basic protocol levels or architectural layers. In the original specifications these were referred to as levels and also had a level number, whereas all ITU-T X.25 recommendations and ISO 8208 standards released after 1984 refer to them as layers. The layer numbers were dropped to avoid confusion with the OSI Model layers.
Physical layer: This layer specifies the physical, electrical, functional and procedural characteristics to control the physical link between a DTE and a DCE. Common implementations use X.21, EIA-232, EIA-449 or other serial protocols.
Data link layer: The data link layer consists of the link access procedure for data interchange on the link between a DTE and a DCE. In its implementation, the Link Access Procedure, Balanced (LAPB) is a data link protocol that manages a communication session and controls the packet framing. It is a bit-oriented protocol that provides error correction and orderly delivery.
Packet layer: This layer defined a packet-layer protocol for exchanging control and user data packets to form a packet-switching network based on virtual calls, according to the Packet Layer Protocol.
The X.25 model was based on the traditional telephony concept of establishing reliable circuits through a shared network, but using software to create "virtual calls" through the network. These calls interconnect "data terminal equipment" (DTE) providing endpoints to users, which looked like point-to-point connections. Each endpoint can establish many separate virtual calls to different endpoints.
For a brief period, the specification also included a connectionless datagram service, but this was dropped in the next revision. The "fast select with restricted response facility" is intermediate between full call establishment and connectionless communication. It is widely used in query-response transaction applications involving a single request and response limited to 128 bytes of data carried each way. The data is carried in an extended call request packet and the response is carried in an extended field of the call reject packet, with a connection never being fully established.
Closely related to the X.25 protocol are the protocols to connect asynchronous devices (such as dumb terminals and printers) to an X.25 network: X.3, X.28 and X.29. This functionality was performed using a packet assembler/disassembler or PAD (also known as a triple-X device, referring to the three protocols used).
Relation to the OSI Reference Model
Although X.25 predates the OSI Reference Model (OSIRM), the physical layer of the OSI model corresponds to the X.25 physical layer, the data link layer to the X.25 data link layer, and the network layer to the X.25 packet layer. The X.25 data link layer, LAPB, provides a reliable data path across a data link (or multiple parallel data links, multilink) which may not be reliable itself. The X.25 packet layer provides the virtual call mechanisms, running over X.25 LAPB. The packet layer includes mechanisms to maintain virtual calls and to signal data errors in the event that the data link layer cannot recover from data transmission errors. All but the earliest versions of X.25 include facilities which provide for OSI network layer Addressing (NSAP addressing, see below).
User device support
X.25 was developed in the era of computer terminals connecting to host computers, although it also can be used for communications between computers. Instead of dialing directly “into” the host computer (which would require the host to have its own pool of modems and phone lines, and require non-local callers to make long-distance calls), the host could have an X.25 connection to a network service provider. Now dumb-terminal users could dial into the network's local “PAD” (packet assembly/disassembly facility), a gateway device connecting modems and serial lines to the X.25 link as defined by the X.29 and X.3 standards.
Having connected to the PAD, the dumb-terminal user tells the PAD which host to connect to, by giving a phone-number-like address in the X.121 address format (or by giving a host name, if the service provider allows for names that map to X.121 addresses). The PAD then places an X.25 call to the host, establishing a virtual call. Note that X.25 provides for virtual calls, so appears to be a circuit switched network, even though in fact the data itself is packet switched internally, similar to the way TCP provides connections even though the underlying data is packet switched. Two X.25 hosts could, of course, call one another directly; no PAD is involved in this case. In theory, it doesn't matter whether the X.25 caller and X.25 destination are both connected to the same carrier, but in practice it was not always possible to make calls from one carrier to another.
For the purpose of flow-control, a sliding window protocol is used with the default window size of 2. The acknowledgements may have either local or end to end significance. A D bit (Data Delivery bit) in each data packet indicates if the sender requires end to end acknowledgement. When D=1, it means that the acknowledgement has end to end significance and must take place only after the remote DTE has acknowledged receipt of the data. When D=0, the network is permitted (but not required) to acknowledge before the remote DTE has acknowledged or even received the data.
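A toy sketch of the window-2 constraint only (it models neither the X.25 packet formats nor the D bit): the sender may have at most two unacknowledged packets outstanding at any time.

```python
from collections import deque

WINDOW = 2

def send_all(payloads):
    outstanding = deque()
    log = []
    for seq, _payload in enumerate(payloads):
        if len(outstanding) == WINDOW:       # window full: wait for an acknowledgement
            log.append(f"ack {outstanding.popleft()}")
        outstanding.append(seq)
        log.append(f"send {seq}")
    log.extend(f"ack {s}" for s in outstanding)   # remaining acknowledgements arrive
    return log

print(send_all(["a", "b", "c", "d"]))
# ['send 0', 'send 1', 'ack 0', 'send 2', 'ack 1', 'send 3', 'ack 2', 'ack 3']
```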
While the PAD function defined by X.28 and X.29 specifically supported asynchronous character terminals, PAD equivalents were developed to support a wide range of proprietary intelligent communications devices, such as those for IBM System Network Architecture (SNA).
Error control
Error recovery procedures at the packet layer assume that the data link layer is responsible for retransmitting data received in error. Packet layer error handling focuses on resynchronizing the information flow in calls, as well as clearing calls that have gone into unrecoverable states:
Level 3 Reset packets, which re-initializes the flow on a virtual call (but does not break the virtual call).
Restart packet, which clears down all virtual calls on the data link and resets all permanent virtual circuits on the data link.
Addressing and virtual circuits
X.25 supports two types of virtual circuits; virtual calls (VC) and permanent virtual circuits (PVC). Virtual calls are established on an as-needed basis. For example, a VC is established when a call is placed and torn down after the call is complete. VCs are established through a call establishment and clearing procedure. On the other hand, permanent virtual circuits are preconfigured into the network. PVCs are seldom torn down and thus provide a dedicated connection between end points.
VCs may be established using X.121 addresses. The X.121 address consists of a three-digit data country code (DCC) plus a network digit, together forming the four-digit data network identification code (DNIC), followed by the national terminal number (NTN) of at most ten digits. Note the use of a single network digit, seemingly allowing for only 10 network carriers per country, but some countries are assigned more than one DCC to avoid this limitation. Networks often used fewer than the full NTN digits for routing, and made the spare digits available to the subscriber (sometimes called the sub-address) where they could be used to identify applications or for further routing on the subscriber's networks.
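The structure just described can be sketched as a simple parser; the function name and the sample address below are made up for illustration.

```python
def parse_x121(address: str):
    """Split an X.121 address into DCC, network digit, DNIC and NTN."""
    if not address.isdigit() or not 4 <= len(address) <= 14:
        raise ValueError("expected 4 to 14 digits")
    return {
        "dcc": address[:3],          # three-digit data country code
        "network_digit": address[3],
        "dnic": address[:4],         # four-digit data network identification code
        "ntn": address[4:],          # national terminal number, at most ten digits
    }

print(parse_x121("23421234567890"))
```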
NSAP addressing facility was added in the X.25(1984) revision of the specification, and this enabled X.25 to better meet the requirements of OSI Connection Oriented Network Service (CONS). Public X.25 networks were not required to make use of NSAP addressing, but, to support OSI CONS, were required to carry the NSAP addresses and other ITU-T specified DTE facilities transparently from DTE to DTE. Later revisions allowed multiple addresses in addition to X.121 addresses to be carried on the same DTE-DCE interface: Telex addressing (F.69), PSTN addressing (E.163), ISDN addressing (E.164), Internet Protocol addresses (IANA ICP), and local IEEE 802.2 MAC addresses.
PVCs are permanently established in the network and therefore do not require the use of addresses for call setup. PVCs are identified at the subscriber interface by their logical channel identifier (see below). However, in practice not many of the national X.25 networks supported PVCs.
One DTE-DCE interface to an X.25 network has a maximum of 4095 logical channels on which it is allowed to establish virtual calls and permanent virtual circuits, although networks are not expected to support the full 4095 virtual circuits. For identifying the channel with which a packet is associated, each packet contains a 12-bit logical channel identifier made up of an 8-bit logical channel number and a 4-bit logical channel group number. Logical channel identifiers remain assigned to a virtual circuit for the duration of the connection. Logical channel identifiers identify a specific logical channel between the DTE (subscriber appliance) and the DCE (network), and only have local significance on the link between the subscriber and the network. The other end of the connection at the remote DTE is likely to have assigned a different logical channel identifier. The range of possible logical channels is split into 4 groups: channels assigned to permanent virtual circuits, assigned to incoming virtual calls, two-way (incoming or outgoing) virtual calls, and outgoing virtual calls. (Directions refer to the direction of virtual call initiation as viewed by the DTE; they all carry data in both directions.) The ranges allowed a subscriber to be configured to handle significantly differing numbers of calls in each direction while reserving some channels for calls in one direction. All international networks are required to implement support for permanent virtual circuits, two-way logical channels and one-way logical channels outgoing; one-way logical channels incoming is an additional optional facility. DTE-DCE interfaces are not required to support more than one logical channel. Logical channel identifier zero will not be assigned to a permanent virtual circuit or virtual call. The logical channel identifier of zero is used for packets which don't relate to a specific virtual circuit (e.g. packet layer restart, registration, and diagnostic packets).
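A sketch of packing and unpacking the 12-bit logical channel identifier described above; placing the group number in the high-order bits is assumed here for illustration, and the exact X.25 header layout is not reproduced.

```python
def pack_lci(group: int, channel: int) -> int:
    """Combine a 4-bit group number and an 8-bit channel number into a 12-bit LCI."""
    assert 0 <= group < 16 and 0 <= channel < 256
    return (group << 8) | channel

def unpack_lci(lci: int):
    return lci >> 8, lci & 0xFF      # (group number, channel number)

lci = pack_lci(group=1, channel=42)
print(lci, unpack_lci(lci))          # 298 (1, 42)
```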
Billing
In public networks, X.25 was typically billed as a flat monthly service fee depending on link speed, and then a price-per-segment on top of this. Link speeds varied, typically from 2400 bit/s up to 2 Mbit/s, although speeds above 64 kbit/s were uncommon in the public networks. A segment was 64 bytes of data (rounded up, with no carry-over between packets), charged to the caller (or callee in the case of reverse charged calls, where supported). Calls invoking the Fast Select facility (allowing 128 bytes of data in call request, call confirmation and call clearing phases) would generally attract an extra charge, as might use of some of the other X.25 facilities. PVCs would have a monthly rental charge and a lower price-per-segment than VCs, making them cheaper only where large volumes of data are passed.
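The per-segment charging rule described above can be sketched as follows; the packet sizes and the per-segment price are invented figures.

```python
import math

def billable_segments(packet_lengths):
    """64-byte segments, rounded up per packet, with no carry-over between packets."""
    return sum(math.ceil(n / 64) for n in packet_lengths if n > 0)

packets = [10, 64, 65, 200]                  # payload lengths in bytes
segments = billable_segments(packets)        # 1 + 1 + 2 + 4 = 8 segments
print(segments, segments * 0.001)            # cost at an assumed price per segment
```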
X.25 packet types
X.25 details
The network may allow the selection of the maximal length in the range 16 to 4096 octets (powers of two, i.e. 2^n values, only) per virtual circuit by negotiation as part of the call setup procedure. The maximal length may be different at the two ends of the virtual circuit.
Data terminal equipment constructs control packets which are encapsulated into data packets. The packets are sent to the data circuit-terminating equipment, using LAPB Protocol.
Data circuit-terminating equipment strips the layer-2 headers in order to encapsulate packets to the internal network protocol.
X.25 facilities
X.25 provides a set of user facilities defined and described in ITU-T Recommendation X.2. The X.2 user facilities fall into five categories:
Essential facilities;
Additional facilities;
Conditional facilities;
Mandatory facilities; and,
Optional facilities.
X.25 also provides X.25 and ITU-T specified DTE optional user facilities defined and described in ITU-T Recommendation X.7. The X.7 optional user facilities fall into four categories of user facilities that require:
Subscription only;
Subscription followed by dynamic invocation;
Subscription or dynamic invocation; and,
Dynamic invocation only.
X.25 protocol versions
The CCITT/ITU-T versions of the protocol specifications are for public data networks (PDN). The ISO/IEC versions address additional features for private networks (e.g. local area networks (LAN) use) while maintaining compatibility with the CCITT/ITU-T specifications.
The user facilities and other features supported by each version of X.25 and ISO/IEC 8208 have varied from edition to edition. Several major protocol versions of X.25 exist:
CCITT Recommendation X.25 (1976) Orange Book
CCITT Recommendation X.25 (1980) Yellow Book
CCITT Recommendation X.25 (1984) Red Book
CCITT Recommendation X.25 (1988) Blue Book
ITU-T Recommendation X.25 (1993) White Book
ITU-T Recommendation X.25 (1996) Grey Book
The X.25 Recommendation allows many options for each network to choose when deciding which features to support and how certain operations are performed. This means each network needs to publish its own document giving the specification of its X.25 implementation, and most networks required DTE appliance manufacturers to undertake protocol conformance testing, which included testing for strict adherence and enforcement of their network-specific options. (Network operators were particularly concerned about the possibility of a badly behaving or misconfigured DTE appliance taking out parts of the network and affecting other subscribers.) Therefore, subscribers' DTE appliances have to be configured to match the specification of the particular network to which they are connecting. Most of these were sufficiently different to prevent interworking if the subscriber didn't configure their appliance correctly or the appliance manufacturer didn't include specific support for that network. In spite of protocol conformance testing, this often led to interworking problems when initially attaching an appliance to a network.
In addition to the CCITT/ITU-T versions of the protocol, four editions of ISO/IEC 8208 exist:
ISO/IEC 8208:1987, First Edition, compatible with X.25 (1980) and (1984)
ISO/IEC 8208:1990, Second Edition, compatible with 1st Ed. and X.25 (1988)
ISO/IEC 8208:1995, Third Edition, compatible with 2nd Ed. and X.25 (1993)
ISO/IEC 8208:2000, Fourth Edition, compatible with 3rd Ed. and X.25 (1996)
Legacy
The X.25 protocol had a lot of overhead to deal with data loss, since circuits at the time ran over poor-grade cabling and suffered many single-bit errors. As circuits became more and more reliable, the overhead was no longer needed, and less expensive Frame Relay took over. Frame Relay has its technical base in X.25, but does not attempt to correct errors.
The world-wide public data networks based on X.25 helped grow IP as a protocol riding on top.
X.25 was also available in niche applications such as Retronet that allow vintage computers to use the Internet.
See also
History of the Internet
OSI protocol suite
Packet switched networks – are networks, including X.25, that have protocols using "packets"
Protocol Wars
XOT – is an "X.25 Over TCP" protocol, i.e. with X.25 encapsulation on TCP/IP networks
X.PC
AX.25
References
Further reading
Computer Communications, lecture notes by Prof. Chaim Ziegler PhD, Brooklyn College
External links
Recommendation X.25 at ITU-T
Cisco X.25 Reference
An X.25 Networking Guide with comparisons to TCP/IP
X.25 – Directory & Informational Resource
RFCs and other resources by Open Directory
Computer-related introductions in 1976
History of computer networks
Network layer protocols
OSI protocols
Wide area networks
ITU-T recommendations
ITU-T X Series Recommendations
Telecommunication protocols | X.25 | [
"Technology"
] | 5,541 | [
"History of computer networks",
"History of computing"
] |
43,339 | https://en.wikipedia.org/wiki/Packet%20switching | In telecommunications, packet switching is a method of grouping data into short messages in fixed format, i.e. packets, that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.
During the early 1960s, American engineer Paul Baran developed a concept he called distributed adaptive message block switching, with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages as part of a research program at the RAND Corporation, funded by the United States Department of Defense. His ideas contradicted then-established principles of pre-allocation of network bandwidth, exemplified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of Welsh computer scientist Donald Davies at the National Physical Laboratory in 1965. Davies coined the modern term packet switching and inspired numerous packet switching networks in the decade following, including the incorporation of the concept into the design of the ARPANET in the United States and the CYCLADES network in France. The ARPANET and CYCLADES were the primary precursor networks of the modern Internet.
Concept
A simple definition of packet switching is:
Packet switching allows delivery of variable bit rate data streams, realized as sequences of short messages in fixed format, i.e. packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware, such as switches and routers, packets are received, buffered, queued, and retransmitted (stored and forwarded), resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Packets are normally forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. Packet-based communication may be implemented with or without intermediate forwarding nodes (switches and routers). In case of a shared physical medium (such as radio or 10BASE5), the packets may be delivered according to a multiple access scheme.
Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth specifically for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages.
A packet switch has four components: input ports, output ports, routing processor, and switching fabric.
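A highly simplified sketch of store-and-forward switching with first-in, first-out buffering; the port names, addresses and routing table are invented, and a real switch would also handle queueing delay, scheduling and drops.

```python
from collections import deque

routing_table = {"10.0.0.0/8": "port1", "192.168.0.0/16": "port2"}

def route(dst: str) -> str:
    return routing_table.get(dst, "drop")

input_queue = deque([
    {"dst": "10.0.0.0/8", "payload": b"hello"},
    {"dst": "192.168.0.0/16", "payload": b"world"},
    {"dst": "172.16.0.0/12", "payload": b"lost"},
])
output_queues = {"port1": deque(), "port2": deque()}

while input_queue:                       # receive, buffer, route, then forward
    packet = input_queue.popleft()
    port = route(packet["dst"])
    if port != "drop":
        output_queues[port].append(packet)

print({p: [pkt["payload"] for pkt in q] for p, q in output_queues.items()})
```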
History
Invention and development
The concept of switching small blocks of data was first explored independently by Paul Baran at the RAND Corporation during the early 1960s in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK in 1965.
In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment (SAGE) radar defense system. Recognizing vulnerabilities in this network, the Air Force sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first strike advantage by enemies (see Mutual assured destruction). In the early 1960s, Baran invented the concept of distributed adaptive message block switching in support of the Air Force initiative. The concept was first presented to the Air Force in the summer of 1961 as briefing B-265, later published as RAND report P-2626 in 1962, and finally in report RM 3420 in 1964. The reports describe a general architecture for a large-scale, distributed, survivable communications network. The proposal was composed of three key ideas: use of a decentralized network with multiple paths between any two points; dividing user messages into message blocks; and delivery of these messages by store and forward switching. Baran's network design was focused on digital communication of voice messages using switches that were low-cost electronics.
Christopher Strachey, who became Oxford University's first Professor of Computation, filed a patent application in the United Kingdom for time-sharing in February 1959. In June that year, he gave a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris where he passed the concept on to J. C. R. Licklider. Licklider (along with John McCarthy) was instrumental in the development of time-sharing. After conversations with Licklider about time-sharing with remote computers in 1965, Davies independently invented a similar data communication concept, using short messages in fixed format with high data transmission rates to achieve rapid communications. He went on to develop a more advanced design for a hierarchical, high-speed computer network including interface computers and communication protocols. He coined the term packet switching, and proposed building a commercial nationwide data network in the UK. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence (MoD) told him about Baran's work.
Roger Scantlebury, a member of Davies' team, presented their work (and referenced that of Baran) at the October 1967 Symposium on Operating Systems Principles (SOSP). At the conference, Scantlebury proposed packet switching for use in the ARPANET and persuaded Larry Roberts that the economics were favorable compared to message switching. Davies had chosen some of the same parameters for his original network design as did Baran, such as a packet size of 1024 bits. To deal with packet permutations (due to dynamically updated route preferences) and datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control", thus inventing what came to be known as the end-to-end principle. Davies proposed that a local-area network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. After a pilot experiment in early 1969, the NPL Data Communications Network began service in 1970. Davies was invited to Japan to give a series of lectures on packet switching. The NPL team carried out simulation work on datagrams and congestion in networks on a scale to provide data communication across the United Kingdom.
Larry Roberts made the key decisions in the request for proposal to build the ARPANET. Roberts met Baran in February 1967, but did not discuss networks. He asked Frank Westervelt to explore the questions of message size and contents for the network, and to write a position paper on the intercomputer communication protocol including “conventions for character and block transmission, error checking and retransmission, and computer and user identification”. Roberts revised his initial design, which was to connect the host computers directly, to incorporate Wesley Clark's idea to use Interface Message Processors (IMPs) to create a message switching network, which he presented at SOSP. Roberts was known for making decisions quickly. Immediately after SOSP, he incorporated Davies' and Baran's concepts and designs for packet switching to enable the data communications on the network.
A contemporary of Roberts' from MIT, Leonard Kleinrock had researched the application of queueing theory in the field of message switching for his doctoral dissertation in 1961–62 and published it as a book in 1964. Davies, in his 1966 paper on packet switching, applied Kleinrock's techniques to show that "there is an ample margin between the estimated performance of the [packet-switched] system and the stated requirement" in terms of a satisfactory response time for a human user. This addressed a key question about the viability of computer networking. Larry Roberts brought Kleinrock into the ARPANET project informally in early 1967. Roberts and Taylor recognized the issue of response time was important, but did not apply Kleinrock's methods to assess this and based their design on a store-and-forward system that was not intended for real-time computing. After SOSP, and after Roberts' direction to use packet switching, Kleinrock sought input from Baran and proposed to retain Baran and RAND as advisors. The ARPANET working group assigned Kleinrock responsibility to prepare a report on software for the IMP. In 1968, Roberts awarded Kleinrock a contract to establish a Network Measurement Center (NMC) at UCLA to measure and model the performance of packet switching in the ARPANET.
Bolt Beranek & Newman (BBN) won the contract to build the network. Designed principally by Bob Kahn, it was the first wide-area packet-switched network with distributed control. The BBN "IMP Guys" independently developed significant aspects of the network's internal operation, including the routing algorithm, flow control, software design, and network control. The UCLA NMC and the BBN team also investigated network congestion. The Network Working Group, led by Steve Crocker, a graduate student of Kleinrock's at UCLA, developed the host-to-host protocol, the Network Control Program, which was approved by Barry Wessler for ARPA, after he ordered certain more exotic elements to be dropped. In 1970, Kleinrock extended his earlier analytic work on message switching to packet switching in the ARPANET. His work influenced the development of the ARPANET and packet-switched networks generally.
The ARPANET was demonstrated at the International Conference on Computer Communication (ICCC) in Washington in October 1972. However, fundamental questions about the design of packet-switched networks remained.
Roberts presented the idea of packet switching to communication industry professionals in the early 1970s. Before the ARPANET was operating, they argued that the router buffers would quickly run out. After the ARPANET was operating, they argued that packet switching would never be economic without government subsidy. Baran had faced the same rejection and had thus failed to convince the military to construct a packet switching network in the 1960s.
The CYCLADES network was designed by Louis Pouzin in the early 1970s to study internetworking. It was the first to implement the end-to-end principle of Davies, and make the host computers responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself. His team was thus the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort service, an early contribution to what would become the Transmission Control Protocol (TCP).
Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking.
In May 1974, Vint Cerf and Bob Kahn described the Transmission Control Program, an internetworking protocol for sharing resources using packet-switching among the nodes. The specifications of the TCP were then published in RFC 675 (Specification of Internet Transmission Control Program), written by Vint Cerf, Yogen Dalal and Carl Sunshine in December 1974.
The X.25 protocol, developed by Rémi Després and others, was built on the concept of virtual circuits. In the mid-late 1970s and early 1980s, national and international public data networks emerged using X.25 which was developed with participation from France, the UK, Japan, USA and Canada. It was complemented with X.75 to enable internetworking.
Packet switching was shown to be optimal in the Huffman coding sense in 1978.
In the late 1970s, the monolithic Transmission Control Program was layered as the Transmission Control Protocol (TCP), atop the Internet Protocol (IP). Many Internet pioneers developed this into the Internet protocol suite and the associated Internet architecture and governance that emerged in the 1980s.
For a period in the 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which of the Internet protocol suite and the OSI model would result in the best and most robust computer networks.
Leonard Kleinrock's research work during the 1970s addressed packet switching networks, packet radio networks, local area networks, broadband networks, nomadic computing, peer-to-peer networks, and intelligent software agents. His theoretical work on hierarchical routing with student Farouk Kamoun became critical to the operation of the Internet. Kleinrock published hundreds of research papers, which ultimately launched a new field of research on the theory and application of queuing theory to computer networks.
Complementary metal–oxide–semiconductor (CMOS) VLSI (very-large-scale integration) technology led to the development of high-speed broadband packet switching during the 1980s and 1990s.
The "paternity dispute"
Roberts claimed in later years that, by the time of the October 1967 SOSP, he already had the concept of packet switching in mind (although not yet named and not written down in his paper published at the conference, which a number of sources describe as "vague"), and that this originated with his old colleague, Kleinrock, who had written about such concepts in his Ph.D. research in 1961-2. In 1997, along with seven other Internet pioneers, Roberts and Kleinrock co-wrote "Brief History of the Internet" published by the Internet Society. In it, Kleinrock is described as having "published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964". Many sources about the history of the Internet began to reflect these claims as uncontroversial facts. This became the subject of what Katie Hafner called a "paternity dispute" in The New York Times in 2001.
The disagreement about Kleinrock's contribution to packet switching dates back to a version of the above claim made on Kleinrock's profile on the UCLA Computer Science department website sometime in the 1990s. Here, he was referred to as the "Inventor of the Internet Technology". The webpage's depictions of Kleinrock's achievements provoked anger among some early Internet pioneers. The dispute over priority became a public issue after Donald Davies posthumously published a paper in 2001 in which he denied that Kleinrock's work was related to packet switching. Davies also described ARPANET project manager Larry Roberts as supporting Kleinrock, referring to Roberts' writings online and Kleinrock's UCLA webpage profile as "very misleading". Walter Isaacson wrote that Kleinrock's claims "led to an outcry among many of the other Internet pioneers, who publicly attacked Kleinrock and said that his brief mention of breaking messages into smaller pieces did not come close to being a proposal for packet switching".
Davies' paper reignited a previous dispute over who deserves credit for getting the ARPANET online between engineers at Bolt, Beranek, and Newman (BBN) who had been involved in building and designing the ARPANET IMP on the one side, and ARPA-related researchers on the other. This earlier dispute is exemplified by BBN's Will Crowther, who in a 1990 oral history described Paul Baran's packet switching design (which he called hot-potato routing), as "crazy" and non-sensical, despite the ARPA team having advocated for it. The reignited debate caused other former BBN employees to make their concerns known, including Alex McKenzie, who followed Davies in disputing that Kleinrock's work was related to packet switching, stating "... there is nothing in the entire 1964 book that suggests, analyzes, or alludes to the idea of packetization".
Former IPTO director Bob Taylor also joined the debate, stating that "authors who have interviewed dozens of Arpanet pioneers know very well that the Kleinrock-Roberts claims are not believed". Walter Isaacson notes that "until the mid-1990s Kleinrock had credited [Baran and Davies] with coming up with the idea of packet switching".
A subsequent version of Kleinrock's biography webpage was copyrighted in 2009 by Kleinrock. He was called on to defend his position over subsequent decades. In 2023, he acknowledged that his published work in the early 1960s was about message switching and claimed he was thinking about packet switching. Primary sources and historians recognize Baran and Davies for independently inventing the concept of digital packet switching used in modern computer networking including the ARPANET and the Internet.
Kleinrock has received many awards for his ground-breaking applied mathematical research on packet switching, carried out in the 1970s, which was an extension of his pioneering work in the early 1960s on the optimization of message delays in communication networks. However, Kleinrock's claims that his work in the early 1960s originated the concept of packet switching and that his work was a source of the packet switching concepts used in the ARPANET have affected sources on the topic, which has created methodological challenges in the historiography of the Internet. Historian Andrew L. Russell said "'Internet history' also suffers from a third, methodological, problem: it tends to be too close to its sources. Many Internet pioneers are alive, active, and eager to shape the histories that describe their accomplishments. Many museums and historians are equally eager to interview the pioneers and to publicize their stories".
Connectionless and connection-oriented modes
Packet switching may be classified into connectionless packet switching, also known as datagram switching, and connection-oriented packet switching, also known as virtual circuit switching. Examples of connectionless systems are Ethernet, IP, and the User Datagram Protocol (UDP). Connection-oriented systems include X.25, Frame Relay, Multiprotocol Label Switching (MPLS), and TCP.
In connectionless mode each packet is labeled with a destination address, source address, and port numbers. It may also be labeled with the sequence number of the packet. This information eliminates the need for a pre-established path to help the packet find its way to its destination, but means that more information is needed in the packet header, which is therefore larger. The packets are routed individually, sometimes taking different paths resulting in out-of-order delivery. At the destination, the original message may be reassembled in the correct order, based on the packet sequence numbers. Thus a virtual circuit carrying a byte stream is provided to the application by a transport layer protocol, although the network only provides a connectionless network layer service.
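A minimal sketch of this bookkeeping in Python (the class, field names, and the tiny MTU are illustrative, not any real protocol's format): each packet carries full addressing plus a sequence number, and the receiver sorts by sequence number to rebuild the message even if packets arrive out of order.

```python
from dataclasses import dataclass

@dataclass
class Datagram:
    """Illustrative connectionless packet: every packet is self-describing."""
    src_addr: str      # source address
    dst_addr: str      # destination address
    src_port: int
    dst_port: int
    seq: int           # sequence number of this fragment of the message
    payload: bytes

def fragment(message: bytes, src, dst, sport, dport, mtu=4):
    """Split a message into individually addressed datagrams."""
    return [Datagram(src, dst, sport, dport, i, message[i * mtu:(i + 1) * mtu])
            for i in range((len(message) + mtu - 1) // mtu)]

def reassemble(packets):
    """Packets may arrive out of order; sort by sequence number to rebuild."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

if __name__ == "__main__":
    pkts = fragment(b"hello, packet switching", "10.0.0.1", "10.0.0.2", 5000, 80)
    pkts.reverse()                      # simulate out-of-order delivery
    assert reassemble(pkts) == b"hello, packet switching"
```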
Connection-oriented transmission requires a setup phase to establish the parameters of communication before any packet is transferred. The signaling protocols used for setup allow the application to specify its requirements and discover link parameters. Acceptable values for service parameters may be negotiated. The packets transferred may include a connection identifier rather than address information and the packet header can be smaller, as it only needs to contain this code and any information, such as length, timestamp, or sequence number, which is different for different packets. In this case, address information is only transferred to each node during the connection setup phase, when the route to the destination is discovered and an entry is added to the switching table in each network node through which the connection passes. When a connection identifier is used, routing a packet requires the node to look up the connection identifier in a table.
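The table-driven forwarding can be sketched as follows; the node class, port numbers, and connection identifiers are invented for illustration and do not correspond to any particular virtual circuit protocol.

```python
class VirtualCircuitNode:
    """Toy packet switch that forwards by connection identifier."""

    def __init__(self, name):
        self.name = name
        self.table = {}   # (in_port, in_vc_id) -> (out_port, out_vc_id)

    def add_circuit(self, in_port, in_vc, out_port, out_vc):
        """Entry installed during the connection setup phase."""
        self.table[(in_port, in_vc)] = (out_port, out_vc)

    def forward(self, in_port, in_vc, payload):
        """Data packets carry only a small VC identifier, not full addresses."""
        out_port, out_vc = self.table[(in_port, in_vc)]
        return out_port, out_vc, payload

if __name__ == "__main__":
    node = VirtualCircuitNode("A")
    node.add_circuit(in_port=1, in_vc=17, out_port=3, out_vc=42)
    print(node.forward(1, 17, b"data"))   # -> (3, 42, b'data')
```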
Connection-oriented transport layer protocols such as TCP provide a connection-oriented service by using an underlying connectionless network. In this case, the end-to-end principle dictates that the end nodes, not the network itself, are responsible for the connection-oriented behavior.
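The contrast is visible at the end hosts in the socket API. The loopback example below (ports are chosen by the operating system; the message is arbitrary) sends one self-contained datagram over UDP, while the TCP exchange must first perform a connection setup via connect() and accept().

```python
import socket, threading

MSG = b"hello"

def udp_demo():
    # Connectionless: one datagram, no setup handshake.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))
    port = srv.getsockname()[1]
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.sendto(MSG, ("127.0.0.1", port))
    data, _ = srv.recvfrom(1024)
    srv.close(); cli.close()
    return data

def tcp_demo():
    # Connection-oriented: connect() performs a setup phase before data flows.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0)); srv.listen(1)
    port = srv.getsockname()[1]
    result = {}

    def serve():
        conn, _ = srv.accept()
        result["data"] = conn.recv(1024)
        conn.close()

    t = threading.Thread(target=serve); t.start()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.sendall(MSG); cli.close()
    t.join(); srv.close()
    return result["data"]

if __name__ == "__main__":
    assert udp_demo() == MSG and tcp_demo() == MSG
```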
Packet switching in networks
In telecommunication networks, packet switching is used to optimize the usage of channel capacity and increase robustness. Compared to circuit switching, packet switching is highly dynamic, allocating channel capacity based on usage instead of explicit reservations. This can reduce wasted capacity caused by underutilized reservations at the cost of removing bandwidth guarantees. In practice, congestion control is generally used in IP networks to dynamically negotiate capacity between connections. Packet switching may also increase the robustness of networks in the face of failures. If a node fails, connections do not need to be interrupted, as packets may be routed around the failure.
Packet switching is used in the Internet and most local area networks. The Internet is implemented by the Internet Protocol Suite using a variety of link layer technologies. For example, Ethernet and Frame Relay are common. Newer mobile phone technologies (e.g., GSM, LTE) also use packet switching. Packet switching is associated with connectionless networking because, in these systems, no connection agreement needs to be established between communicating parties prior to exchanging data.
X.25, the international CCITT standard of 1976, is a notable use of packet switching in that it provides to users a service of flow-controlled virtual circuits. These virtual circuits reliably carry variable-length packets with data order preservation. DATAPAC in Canada was the first public network to support X.25, followed by TRANSPAC in France.
Asynchronous Transfer Mode (ATM) is another virtual circuit technology. It differs from X.25 in that it uses small fixed-length packets (cells), and that the network imposes no flow control to users.
Technologies such as MPLS and the Resource Reservation Protocol (RSVP) create virtual circuits on top of datagram networks. MPLS and its predecessors, as well as ATM, have been called "fast packet" technologies. MPLS, indeed, has been called "ATM without cells". Virtual circuits are especially useful in building robust failover mechanisms and allocating bandwidth for delay-sensitive applications.
Packet-switched networks
Donald Davies' work on data communications and computer network design became well known in the United States, Europe and Japan and was the "cornerstone" that inspired numerous packet switching networks in the decade following.
The history of packet-switched networks can be divided into three overlapping eras: early networks before the introduction of X.25; the X.25 era when many postal, telephone, and telegraph (PTT) companies provided public data networks with X.25 interfaces; and the Internet era which initially competed with the OSI model.
Early networks
Research into packet switching at the National Physical Laboratory (NPL) began with a proposal for a wide-area network in 1965, and a local-area network in 1966. ARPANET funding was secured in 1966 by Bob Taylor, and planning began in 1967 when he hired Larry Roberts. The NPL network followed by the ARPANET became operational in 1969, the first two networks to use packet switching. Larry Roberts said many of the packet switching networks built in the 1970s were similar "in nearly all respects" to Donald Davies' original 1965 design.
Before the introduction of X.25 in 1976, about twenty different network technologies had been developed. Two fundamental differences involved the division of functions and tasks between the hosts at the edge of the network and the network core. In the datagram system, operating according to the end-to-end principle, the hosts have the responsibility to ensure orderly delivery of packets. In the virtual call system, the network guarantees sequenced delivery of data to the host. This results in a simpler host interface but complicates the network. The X.25 protocol suite uses this network type.
AppleTalk
AppleTalk is a proprietary suite of networking protocols developed by Apple in 1985 for Apple Macintosh computers. It was the primary protocol used by Apple devices through the 1980s and 1990s. AppleTalk included features that allowed local area networks to be established ad hoc without the requirement for a centralized router or server. The AppleTalk system automatically assigned addresses, updated the distributed namespace, and configured any required inter-network routing. It was a plug-n-play system.
AppleTalk implementations were also released for the IBM PC and compatibles, and the Apple IIGS. AppleTalk support was available in most networked printers, especially laser printers, some file servers and routers.
The protocol was designed to be simple, autoconfiguring, and not require servers or other specialized services to work. These benefits also created drawbacks, as AppleTalk tended not to use bandwidth efficiently. AppleTalk support was terminated in 2009.
ARPANET
The ARPANET was a progenitor network of the Internet and one of the first networks, along with ARPA's SATNET, to run the TCP/IP suite using packet switching technologies.
BNRNET
BNRNET was a network which Bell-Northern Research developed for internal use. It initially had only one host but was designed to support many hosts. BNR later made major contributions to the CCITT X.25 project.
Cambridge Ring
The Cambridge Ring was an experimental ring network developed at the Computer Laboratory, University of Cambridge. It operated from 1974 until the 1980s.
CompuServe
CompuServe developed its own packet switching network, implemented on DEC PDP-11 minicomputers acting as network nodes that were installed throughout the US (and later, in other countries) and interconnected. Over time, the CompuServe network evolved into a complicated multi-tiered network incorporating ATM, Frame Relay, IP and X.25 technologies.
CYCLADES
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the early ARPANET design and to support network research generally. It was the first network to use the end-to-end principle and make the hosts responsible for reliable delivery of data, rather than the network itself. Concepts of this network influenced later ARPANET architecture.
DECnet
DECnet is a suite of network protocols created by Digital Equipment Corporation, originally released in 1975 in order to connect two PDP-11 minicomputers. It evolved into one of the first peer-to-peer network architectures, thus transforming DEC into a networking powerhouse in the 1980s. Initially built with three layers, it later (1982) evolved into a seven-layer OSI-compliant networking protocol. The DECnet protocols were designed entirely by Digital Equipment Corporation. However, DECnet Phase II (and later) were open standards with published specifications, and several implementations were developed outside DEC, including one for Linux.
DDX-1
DDX-1 was an experimental network from Nippon PTT. It mixed circuit switching and packet switching. It was succeeded by DDX-2.
EIN
The European Informatics Network (EIN), originally called COST 11, was a project beginning in 1971 to link networks in Britain, France, Italy, Switzerland and Euratom. Six other European countries also participated in the research on network protocols. Derek Barber directed the project, and Roger Scantlebury led the UK technical contribution; both were from NPL. The contract for its implementation was awarded to an Anglo French consortium led by the UK systems house Logica and Sesa and managed by Andrew Karney. Work began in 1973 and it became operational in 1976 including nodes linking the NPL network and CYCLADES. Barber proposed and implemented a mail protocol for EIN. The transport protocol of the EIN helped to launch the INWG and X.25 protocols. EIN was replaced by Euronet in 1979.
EPSS
The Experimental Packet Switched Service (EPSS) was an experiment of the UK Post Office Telecommunications. It was the first public data network in the UK when it began operating in 1976. Ferranti supplied the hardware and software. The handling of link control messages (acknowledgements and flow control) was different from that of most other networks.
GEIS
As General Electric Information Services (GEIS), General Electric was a major international provider of information services. The company originally designed a telephone network to serve as its internal (albeit continent-wide) voice telephone network.
In 1965, at the instigation of Warner Sinback, a data network based on this voice-phone network was designed to connect GE's four computer sales and service centers (Schenectady, New York, Chicago, and Phoenix) to facilitate a computer time-sharing service.
After going international some years later, GEIS created a network data center near Cleveland, Ohio. Very little has been published about the internal details of their network. The design was hierarchical with redundant communication links.
IPSANET
IPSANET was a semi-private network constructed by I. P. Sharp Associates to serve their time-sharing customers. It became operational in May 1976.
IPX/SPX
The Internetwork Packet Exchange (IPX) and Sequenced Packet Exchange (SPX) are Novell networking protocols from the 1980s derived from Xerox Network Systems' IDP and SPP protocols, respectively which date back to the 1970s. IPX/SPX was used primarily on networks using the Novell NetWare operating systems.
Merit Network
Merit Network, an independent nonprofit organization governed by Michigan's public universities, was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP; additionally, public universities in Michigan joined the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.
NPL
Donald Davies of the National Physical Laboratory (United Kingdom) designed and proposed a national commercial data network based on packet switching in 1965. The proposal was not taken up nationally but the following year, he designed a local network using "interface computers", today known as routers, to serve the needs of NPL and prove the feasibility of packet switching.
By 1968 Davies had begun building the NPL network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. In 1969, the NPL, followed by the ARPANET, were the first two networks to use packet switching. By 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986. NPL was the first to use high-speed links.
Octopus
Octopus was a local network at Lawrence Livermore National Laboratory. It connected sundry hosts at the lab to interactive terminals and various computer peripherals including a bulk storage system.
Philips Research
Philips Research Laboratories in Redhill, Surrey developed a packet switching network for internal use. It was a datagram network with a single switching node.
PUP
PARC Universal Packet (PUP or Pup) was one of the two earliest internetworking protocol suites; it was created by researchers at Xerox PARC in the mid-1970s. The entire suite provided routing and packet delivery, as well as higher level functions such as a reliable byte stream, along with numerous applications. Further developments led to Xerox Network Systems (XNS).
RCP
RCP was an experimental network created by the French PTT. It was used to gain experience with packet switching technology before the specification of TRANSPAC was frozen. RCP was a virtual-circuit network in contrast to CYCLADES which was based on datagrams. RCP emphasised terminal-to-host and terminal-to-terminal connection; CYCLADES was concerned with host-to-host communication. RCP influenced the X.25 specification, which was deployed on TRANSPAC and other public data networks.
RETD
Red Especial de Transmisión de Datos (RETD) was a network developed by Compañía Telefónica Nacional de España. It became operational in 1972 and was thus the first public packet-switched data network.
SCANNET
"The experimental packet-switched Nordic telecommunication network SCANNET was implemented in Nordic technical libraries in the 1970s, and it included first Nordic electronic journal Extemplo. Libraries were also among first ones in universities to accommodate microcomputers for public use in the early 1980s."
SITA HLN
SITA is a consortium of airlines. Its High Level Network (HLN) became operational in 1969. Although organised to act like a packet-switching network, it still used message switching. As with many non-academic networks, very little has been published about it.
SRCnet/SERCnet
A number of computer facilities serving the Science Research Council (SRC) community in the United Kingdom developed beginning in the early 1970s. Each had their own star network (ULCC London, UMRCC Manchester, Rutherford Appleton Laboratory). There were also regional networks centred on Bristol (on which work was initiated in the late 1960s) followed in the mid-late 1970s by Edinburgh, the Midlands and Newcastle. These groups of institutions shared resources to provide better computing facilities than could be afforded individually. The networks were each based on one manufacturer's standards and were mutually incompatible and overlapping. In 1981, the SRC was renamed the Science and Engineering Research Council (SERC). In the early 1980s a standardisation and interconnection effort started, hosted on an expansion of the SERCnet research network and based on the Coloured Book protocols, later evolving into JANET.
Systems Network Architecture
Systems Network Architecture (SNA) is IBM's proprietary networking architecture created in 1974. An IBM customer could acquire hardware and software from IBM and lease private lines from a common carrier to construct a private network.
Telenet
Telenet was the first FCC-licensed public data network in the United States. It was incorporated in 1973 and started operations in 1975, founded by Bolt Beranek & Newman with Larry Roberts as CEO as a means of making packet switching technology public. Telenet initially used a proprietary virtual circuit host interface, but changed it to X.25, and the terminal interface to X.29, after their standardization in CCITT. It went public in 1979 and was then sold to GTE.
Tymnet
Tymnet was an international data communications network headquartered in San Jose, CA. In 1969, it began installing a network based on minicomputers to connect timesharing terminals to its central computers. The network used store-and-forward switching and voice-grade lines. Routing was not distributed; rather, it was established by a central supervisor on a call-by-call basis.
X.25 era
There were two kinds of X.25 networks. Some, such as DATAPAC and TRANSPAC, were initially implemented with an X.25 external interface. Older networks, such as TELENET and TYMNET, were modified to provide an X.25 host interface in addition to older host connection schemes. DATAPAC was developed by Bell-Northern Research, a joint venture of Bell Canada (a common carrier) and Northern Telecom (a telecommunications equipment supplier). Northern Telecom sold several DATAPAC clones to foreign PTTs including the Deutsche Bundespost. X.75 and X.121 allowed the interconnection of national X.25 networks.
AUSTPAC
AUSTPAC was an Australian public X.25 network operated by Telstra. Established by Telstra's predecessor Telecom Australia in the early 1980s, AUSTPAC was Australia's first public packet-switched data network and supported applications such as on-line betting, financial applications—the Australian Tax Office made use of AUSTPAC—and remote terminal access to academic institutions, which in some cases maintained their connections to AUSTPAC until the mid-to-late 1990s. Access was via a dial-up terminal to a PAD or by linking a permanent X.25 node to the network.
ConnNet
ConnNet was a network operated by the Southern New England Telephone Company serving the state of Connecticut. Launched on March 11, 1985, it was the first local public packet-switched network in the United States.
Datanet 1
Datanet 1 was the public switched data network operated by the Dutch PTT Telecom (now known as KPN). Strictly speaking, Datanet 1 referred only to the network and the users connected via leased lines (using the X.121 DNIC 2041), but the name also referred to the public PAD service Telepad (using the DNIC 2049). Because the main Videotex service used the network and modified PAD devices as its infrastructure, the name Datanet 1 was used for that service as well.
DATAPAC
DATAPAC was the first operational X.25 network (1976). It covered major Canadian cities and was eventually extended to smaller centers.
Datex-P
Deutsche Bundespost operated the Datex-P national network in Germany. The technology was acquired from Northern Telecom.
Eirpac
Eirpac is the Irish public switched data network supporting X.25 and X.28. It was launched in 1984, replacing Euronet. Eirpac is run by Eircom.
Euronet
Nine member states of the European Economic Community contracted with Logica and the French company SESA to set up a joint venture in 1975 to undertake the Euronet development, using X.25 protocols to form virtual circuits. It was to replace EIN and established a network in 1979 linking a number of European countries until 1984 when the network was handed over to national PTTs.
HIPA-NET
Hitachi designed a private network system for sale as a turnkey package to multi-national organizations. In addition to providing X.25 packet switching, message switching software was also included. Messages were buffered at the nodes adjacent to the sending and receiving terminals. Switched virtual calls were not supported, but through the use of logical ports an originating terminal could have a menu of pre-defined destination terminals.
Iberpac
Iberpac is the Spanish public packet-switched network, providing X.25 services. It was based on RETD which was operational since 1972. Iberpac was run by Telefonica.
IPSS
In 1978, X.25 provided the first international and commercial packet-switching network, the International Packet Switched Service (IPSS).
JANET
JANET was the UK academic and research network, linking all universities, higher education establishments, and publicly funded research laboratories following its launch in 1984. The X.25 network, which used the Coloured Book protocols, was based mainly on GEC 4000 series switches, and ran X.25 links at up to in its final phase before being converted to an IP-based network in 1991. The JANET network grew out of the 1970s SRCnet, later called SERCnet.
PSS
Packet Switch Stream (PSS) was the Post Office Telecommunications (later to become British Telecom) national X.25 network with a DNIC of 2342. British Telecom renamed PSS Global Network Service (GNS), but the PSS name has remained better known. PSS also included public dial-up PAD access, and various InterStream gateways to other services such as Telex.
REXPAC
REXPAC was the nationwide experimental packet switching data network in Brazil, developed by the research and development center of Telebrás, the state-owned public telecommunications provider.
SITA Data Transport Network
SITA is a consortium of airlines. Its Data Transport Network adopted X.25 in 1981, becoming the world's most extensive packet-switching network. As with many non-academic networks, very little has been published about it.
TRANSPAC
TRANSPAC was the national X.25 network in France. It was developed locally at about the same time as DATAPAC in Canada. The development was done by the French PTT and influenced by the experimental RCP network. It began operation in 1978, and served commercial users and, after Minitel began, consumers.
Tymnet
Tymnet utilized virtual call packet switched technology including X.25, SNA/SDLC, BSC and ASCII interfaces to connect host computers (servers) at thousands of large companies, educational institutions, and government agencies. Users typically connected via dial-up connections or dedicated asynchronous serial connections. The business consisted of a large public network that supported dial-up users and a private network business that allowed government agencies and large companies (mostly banks and airlines) to build their own dedicated networks. The private networks were often connected via gateways to the public network to reach locations not on the private network. Tymnet was also connected to dozens of other public networks in the U.S. and internationally via X.25/X.75 gateways.
UNINETT
UNINETT was a wide-area Norwegian packet-switched network established through a joint effort between Norwegian universities, research institutions and the Norwegian Telecommunication administration. The original network was based on X.25; Internet protocols were adopted later.
VENUS-P
VENUS-P was an international X.25 network that operated from April 1982 through March 2006. At its subscription peak in 1999, VENUS-P connected 207 networks in 87 countries.
XNS
Xerox Network Systems (XNS) was a protocol suite promulgated by Xerox, which provided routing and packet delivery, as well as higher level functions such as a reliable stream, and remote procedure calls. It was developed from PARC Universal Packet (PUP).
Internet era
When Internet connectivity was made available to anyone who could pay for an Internet service provider subscription, the distinctions between national networks blurred. The user no longer saw network identifiers such as the DNIC. Some older technologies such as circuit switching have resurfaced with new names such as fast packet switching. Researchers have created some experimental networks to complement the existing Internet.
CSNET
The Computer Science Network (CSNET) was a computer network funded by the NSF that began operation in 1981. Its purpose was to extend networking benefits for computer science departments at academic and research institutions that could not be directly connected to ARPANET due to funding or authorization limitations. It played a significant role in spreading awareness of, and access to, national networking and was a major milestone on the path to the development of the global Internet.
Internet2
Internet2 is a not-for-profit United States computer networking consortium led by members from the research and education communities, industry, and government. The Internet2 community, in partnership with Qwest, built the first Internet2 Network, called Abilene, in 1998 and was a prime investor in the National LambdaRail (NLR) project. In 2006, Internet2 announced a partnership with Level 3 Communications to launch a brand new nationwide network, boosting its capacity from 10 to . In October, 2007, Internet2 officially retired Abilene and now refers to its new, higher capacity network as the Internet2 Network.
NSFNET
The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the NSF beginning in 1985 to promote advanced research and education networking in the United States. NSFNET was also the name given to several nationwide backbone networks, operating at speeds of , (T1), and (T3), that were constructed to support NSF's networking initiatives from 1985 to 1995. Initially created to link researchers to the nation's NSF-funded supercomputing centers, through further public funding and private industry partnerships it developed into a major part of the Internet backbone.
NSFNET regional networks
In addition to the five NSF supercomputer centers, NSFNET provided connectivity to eleven regional networks and through these networks to many smaller regional and campus networks in the United States. The NSFNET regional networks were:
BARRNet, the Bay Area Regional Research Network in Palo Alto, California;
CERFnet, California Education and Research Federation Network in San Diego, California, serving California and Nevada;
CICNet, the Committee on Institutional Cooperation Network via the Merit Network in Ann Arbor, Michigan and later as part of the T3 upgrade via Argonne National Laboratory outside of Chicago, serving the Big Ten Universities and the University of Chicago in Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin;
Merit/MichNet in Ann Arbor, Michigan serving Michigan, formed in 1966, still in operation;
MIDnet in Lincoln, Nebraska serving Arkansas, Iowa, Kansas, Missouri, Nebraska, Oklahoma, and South Dakota;
NEARNET, the New England Academic and Research Network in Cambridge, Massachusetts, added as part of the upgrade to T3, serving Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont, established in late 1988, operated by BBN under contract to MIT, BBN assumed responsibility for NEARNET on 1 July 1993;
NorthWestNet in Seattle, Washington, serving Alaska, Idaho, Montana, North Dakota, Oregon, and Washington, founded in 1987;
NYSERNet, New York State Education and Research Network in Ithaca, New York;
JVNCNet, the John von Neumann National Supercomputer Center Network in Princeton, New Jersey, serving Delaware and New Jersey;
SESQUINET, the Sesquicentennial Network in Houston, Texas, founded during the 150th anniversary of the State of Texas;
SURAnet, the Southeastern Universities Research Association network in College Park, Maryland and later as part of the T3 upgrade in Atlanta, Georgia serving Alabama, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, South Carolina, Tennessee, Virginia, and West Virginia, sold to BBN in 1994; and
Westnet in Salt Lake City, Utah and Boulder, Colorado, serving Arizona, Colorado, New Mexico, Utah, and Wyoming.
National LambdaRail
The National LambdaRail (NLR) was launched in September 2003. It was a 12,000-mile high-speed national computer network owned and operated by the US research and education community, running over fiber-optic lines, and was the first transcontinental 10 Gigabit Ethernet network. NLR ceased operations in March 2014.
TransPAC2 and TransPAC3
TransPAC2 is a high-speed international Internet service connecting research and education networks in the Asia-Pacific region to those in the US. TransPAC3 is part of the NSF's International Research Network Connections (IRNC) program.
Very high-speed Backbone Network Service (vBNS)
The Very high-speed Backbone Network Service (vBNS) came on line in April 1995 as part of an NSF-sponsored project to provide high-speed interconnection between NSF-sponsored supercomputing centers and select access points in the United States. The network was engineered and operated by MCI Telecommunications under a cooperative agreement with the NSF. By 1998, the vBNS had grown to connect more than 100 universities and research and engineering institutions via 12 national points of presence with DS-3, OC-3c, and OC-12 links on an all OC-12 backbone, a substantial engineering feat for that time. The vBNS installed one of the first ever production OC-48 IP links in February 1999 and went on to upgrade the entire backbone to OC-48.
In June 1999 MCI WorldCom introduced vBNS+ which allowed attachments to the vBNS network by organizations that were not approved by or receiving support from NSF. After the expiration of the NSF agreement, the vBNS largely transitioned to providing service to the government. Most universities and research centers migrated to the Internet2 educational backbone. In January 2006, when MCI and Verizon merged, vBNS+ became a service of Verizon Business.
See also
Multi-bearer network
Optical burst switching
Packet radio
Transmission delay
Virtual private network
References
Bibliography
Lawrence Roberts, The Evolution of Packet Switching (Proceedings of the IEEE, November, 1978)
Primary sources
Paul Baran et al., On Distributed Communications, Volumes I-XI (RAND Corporation Research Documents, August, 1964)
Paul Baran, On Distributed Communications: I Introduction to Distributed Communications Network (RAND Memorandum RM-3420-PR. August 1964)
Paul Baran, On Distributed Communications Networks, (IEEE Transactions on Communications Systems, Vol. CS-12 No. 1, pp. 1–9, March 1964)
D. W. Davies, K. A. Bartlett, R. A. Scantlebury, and P. T. Wilkinson, A digital communications network for computers giving rapid response at remote terminals (ACM Symposium on Operating Systems Principles. October 1967)
R. A. Scantlebury, P. T. Wilkinson, and K. A. Bartlett, The design of a message switching Centre for a digital communication network (IFIP 1968)
Further reading
External links
Oral history interview with Paul Baran. Charles Babbage Institute University of Minnesota, Minneapolis. Baran describes his working environment at RAND, as well as his initial interest in survivable communications, and the evolution, writing and distribution of his eleven-volume work, "On Distributed Communications". Baran discusses his interaction with the group at ARPA who were responsible for the later development of the ARPANET.
NPL Data Communications Network NPL video, 1970s
Packet Switching History and Design, site reviewed by Baran, Roberts, and Kleinrock
Paul Baran and the Origins of the Internet
Computer networking
History of the Internet
Network protocols | Packet switching | [
"Technology",
"Engineering"
] | 10,171 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
43,342 | https://en.wikipedia.org/wiki/IPsec | In computing, Internet Protocol Security (IPsec) is a secure network protocol suite that authenticates and encrypts packets of data to provide secure encrypted communication between two computers over an Internet Protocol network. It is used in virtual private networks (VPNs).
IPsec includes protocols for establishing mutual authentication between agents at the beginning of a session and negotiation of cryptographic keys to use during the session. IPsec can protect data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).
IPsec uses cryptographic security services to protect communications over Internet Protocol (IP) networks. It supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and protection from replay attacks.
Because the protocol was designed by committee rather than through a competition, it accumulated many options and considerable complexity, which critics regard as damaging for a security standard. The NSA is also alleged to have interfered in its design in order to weaken its security features.
History
Starting in the early 1970s, the Advanced Research Projects Agency sponsored a series of experimental ARPANET encryption devices, at first for native ARPANET packet encryption and subsequently for TCP/IP packet encryption; some of these were certified and fielded. From 1986 to 1991, the NSA sponsored the development of security protocols for the Internet under its Secure Data Network Systems (SDNS) program. This brought together various vendors including Motorola who produced a network encryption device in 1988. The work was openly published from about 1988 by NIST and, of these, Security Protocol at Layer 3 (SP3) would eventually morph into the ISO standard Network Layer Security Protocol (NLSP).
In 1992, the US Naval Research Laboratory (NRL) was funded by DARPA CSTO to implement IPv6 and to research and implement IP encryption in 4.4 BSD, supporting both SPARC and x86 CPU architectures. DARPA made its implementation freely available via MIT. Under NRL's DARPA-funded research effort, NRL developed the IETF standards-track specifications (RFC 1825 through RFC 1827) for IPsec. NRL's IPsec implementation was described in their paper in the 1996 USENIX Conference Proceedings. NRL's open-source IPsec implementation was made available online by MIT and became the basis for most initial commercial implementations.
The Internet Engineering Task Force (IETF) formed the IP Security Working Group in 1992 to standardize openly specified security extensions to IP, called IPsec. The NRL developed standards were published by the IETF as RFC 1825 through RFC 1827.
Security architecture
The initial IPv4 suite was developed with few security provisions. As a part of the IPv4 enhancement, IPsec is an end-to-end security scheme operating at layer 3 of the OSI model, the internet layer. In contrast to other widely used Internet security systems that operate above the network layer, such as Transport Layer Security (TLS), which operates above the transport layer, and Secure Shell (SSH), which operates at the application layer, IPsec can automatically secure applications at the internet layer.
IPsec is an open standard as a part of the IPv4 suite and uses the following protocols to perform various functions:
Authentication Header (AH) provides connectionless data integrity and data origin authentication for IP datagrams and provides protection against IP header modification attacks and replay attacks.
Encapsulating Security Payload (ESP) provides confidentiality, connectionless data integrity, data origin authentication, an anti-replay service (a form of partial sequence integrity), and limited traffic-flow confidentiality.
Internet Security Association and Key Management Protocol (ISAKMP) provides a framework for authentication and key exchange, with actual authenticated keying material provided either by manual configuration with pre-shared keys, Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK), or IPSECKEY DNS records. The purpose is to generate the security associations (SA) with the bundle of algorithms and parameters necessary for AH and/or ESP operations.
Authentication Header
The Security Authentication Header (AH) was developed at the US Naval Research Laboratory in the early 1990s and is derived in part from previous IETF standards' work for authentication of the Simple Network Management Protocol (SNMP) version 2. Authentication Header (AH) is a member of the IPsec protocol suite. AH ensures connectionless integrity by using a hash function and a secret shared key in the AH algorithm. AH also guarantees the data origin by authenticating IP packets. Optionally a sequence number can protect the IPsec packet's contents against replay attacks, using the sliding window technique and discarding old packets.
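A rough sketch of such a sliding-window replay check appears below; the window size and the use of a Python set are illustrative, whereas real IPsec implementations typically track the window with a bitmap.

```python
class ReplayWindow:
    """Illustrative sliding-window replay check for sequence numbers."""

    def __init__(self, size=64):
        self.size = size
        self.highest = 0          # highest sequence number accepted so far
        self.seen = set()         # accepted numbers within the window

    def accept(self, seq):
        if seq + self.size <= self.highest:
            return False          # too old: outside the window, discard
        if seq in self.seen:
            return False          # duplicate within the window: replay
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # Drop state that has fallen out of the window.
            self.seen = {s for s in self.seen if s + self.size > self.highest}
        return True

if __name__ == "__main__":
    w = ReplayWindow(size=8)
    assert w.accept(1) and w.accept(3) and w.accept(2)
    assert not w.accept(2)        # replayed packet is rejected
    assert w.accept(100)          # window slides forward
    assert not w.accept(5)        # now too old to be accepted
```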
In IPv4, AH prevents option-insertion attacks. In IPv6, AH protects both against header insertion attacks and option insertion attacks.
In IPv4, the AH protects the IP payload and all header fields of an IP datagram except for mutable fields (i.e. those that might be altered in transit), and also IP options such as the IP Security Option. Mutable (and therefore unauthenticated) IPv4 header fields are DSCP/ToS, ECN, Flags, Fragment Offset, TTL and Header Checksum.
In IPv6, the AH protects most of the IPv6 base header, AH itself, non-mutable extension headers after the AH, and the IP payload. Protection for the IPv6 header excludes the mutable fields: DSCP, ECN, Flow Label, and Hop Limit.
AH operates directly on top of IP, using IP protocol number 51.
An AH packet consists of a Next Header field, a payload-length field, a reserved field, the Security Parameters Index (SPI) identifying the security association, a sequence number, and a variable-length Integrity Check Value (ICV) computed over the protected fields.
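The layout can be sketched by serializing those fields in Python; the helper below follows the field widths of RFC 4302, but the ICV argument is a placeholder rather than the output of a negotiated integrity algorithm.

```python
import struct

def build_ah_header(next_header, spi, seq, icv):
    """Serialize an Authentication Header (field layout per RFC 4302).

    next_header: protocol number of the payload that follows
    spi:         Security Parameters Index (identifies the SA)
    seq:         monotonically increasing sequence number
    icv:         Integrity Check Value (length must be a multiple of 4 bytes)
    """
    # Payload Len is the AH length in 32-bit words minus 2.
    payload_len = (12 + len(icv)) // 4 - 2
    fixed = struct.pack("!BBHII", next_header, payload_len, 0, spi, seq)
    return fixed + icv

if __name__ == "__main__":
    hdr = build_ah_header(next_header=6, spi=0x1234, seq=1, icv=b"\x00" * 12)
    assert len(hdr) == 24   # 12 fixed bytes + 12-byte (96-bit) ICV
```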
Encapsulating Security Payload
The IP Encapsulating Security Payload (ESP) was developed at the Naval Research Laboratory starting in 1992 as part of a DARPA-sponsored research project, and was openly published by IETF SIPP Working Group drafted in December 1993 as a security extension for SIPP. This ESP was originally derived from the US Department of Defense SP3D protocol, rather than being derived from the ISO Network-Layer Security Protocol (NLSP). The SP3D protocol specification was published by NIST in the late 1980s, but designed by the Secure Data Network System project of the US Department of Defense.
Encapsulating Security Payload (ESP) is a member of the IPsec protocol suite. It provides origin authenticity through source authentication, data integrity through hash functions and confidentiality through encryption protection for IP packets. ESP also supports encryption-only and authentication-only configurations, but using encryption without authentication is strongly discouraged because it is insecure.
Unlike Authentication Header (AH), ESP in transport mode does not provide integrity and authentication for the entire IP packet. However, in tunnel mode, where the entire original IP packet is encapsulated with a new packet header added, ESP protection is afforded to the whole inner IP packet (including the inner header) while the outer header (including any outer IPv4 options or IPv6 extension headers) remains unprotected.
ESP operates directly on top of IP, using IP protocol number 50.
An ESP packet consists of a Security Parameters Index (SPI), a sequence number, the payload data (encrypted in typical configurations), padding, a pad-length field, a Next Header field, and an optional Integrity Check Value (ICV).
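A similar sketch of the ESP framing follows; no encryption or integrity protection is applied here, so it only illustrates the field layout, not a secure implementation.

```python
import struct

def build_esp_packet(spi, seq, payload, next_header, block_size=4):
    """Frame a payload in the ESP layout of RFC 4303 (no crypto applied).

    Padding is added so that payload + padding + 2 trailer bytes align to
    block_size; a real implementation pads to the cipher's block size,
    encrypts payload|padding|trailer, and appends an ICV.
    """
    pad_len = (-(len(payload) + 2)) % block_size
    padding = bytes(range(1, pad_len + 1))          # default monotonic padding
    header = struct.pack("!II", spi, seq)
    trailer = struct.pack("!BB", pad_len, next_header)
    return header + payload + padding + trailer

if __name__ == "__main__":
    pkt = build_esp_packet(spi=0x1001, seq=7, payload=b"hello", next_header=6)
    assert (len(pkt) - 8) % 4 == 0   # body after the 8-byte header is aligned
```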
Security association
The IPsec protocols use a security association, where the communicating parties establish shared security attributes such as algorithms and keys. As such, IPsec provides a range of options once it has been determined whether AH or ESP is used. Before exchanging data, the two hosts agree on which symmetric encryption algorithm is used to encrypt the IP packet, for example AES or ChaCha20, and which hash function is used to ensure the integrity of the data, such as BLAKE2 or SHA256. These parameters are agreed for the particular session, for which a lifetime and a session key must also be agreed.
The algorithm for authentication is also agreed before the data transfer takes place and IPsec supports a range of methods. Authentication is possible through pre-shared key, where a symmetric key is already in the possession of both hosts, and the hosts send each other hashes of the shared key to prove that they are in possession of the same key. IPsec also supports public key encryption, where each host has a public and a private key, they exchange their public keys and each host sends the other a nonce encrypted with the other host's public key. Alternatively if both hosts hold a public key certificate from a certificate authority, this can be used for IPsec authentication.
The security associations of IPsec are established using the Internet Security Association and Key Management Protocol (ISAKMP). ISAKMP is implemented by manual configuration with pre-shared secrets, Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK), and the use of IPSECKEY DNS records. RFC 5386 defines Better-Than-Nothing Security (BTNS) as an unauthenticated mode of IPsec using an extended IKE protocol. C. Meadows, C. Cremers, and others have used formal methods to identify various anomalies which exist in IKEv1 and also in IKEv2.
In order to decide what protection is to be provided for an outgoing packet, IPsec uses the Security Parameter Index (SPI), an index to the security association database (SADB), along with the destination address in a packet header, which together uniquely identifies a security association for that packet. A similar procedure is performed for an incoming packet, where IPsec gathers decryption and verification keys from the security association database.
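A toy model of that lookup is sketched below; the record fields and the dictionary keyed on (SPI, destination address) are illustrative simplifications of a real security association database.

```python
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    spi: int
    dst: str
    protocol: str        # "ESP" or "AH"
    cipher: str          # e.g. an AEAD algorithm agreed for the session
    key: bytes

class SADB:
    """Toy security association database keyed by (SPI, destination)."""

    def __init__(self):
        self._db = {}

    def add(self, sa: SecurityAssociation):
        self._db[(sa.spi, sa.dst)] = sa

    def lookup(self, spi: int, dst: str) -> SecurityAssociation:
        """Inbound processing: the SPI in the packet header plus the
        destination address identify the SA to use for this packet."""
        return self._db[(spi, dst)]

if __name__ == "__main__":
    sadb = SADB()
    sadb.add(SecurityAssociation(0x2001, "192.0.2.1", "ESP", "AES-GCM", b"k" * 32))
    sa = sadb.lookup(0x2001, "192.0.2.1")
    assert sa.cipher == "AES-GCM"
```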
For IP multicast a security association is provided for the group, and is duplicated across all authorized receivers of the group. There may be more than one security association for a group, using different SPIs, thereby allowing multiple levels and sets of security within a group. Indeed, each sender can have multiple security associations, allowing authentication, since a receiver can only know that someone knowing the keys sent the data. Note that the relevant standard does not describe how the association is chosen and duplicated across the group; it is assumed that a responsible party will have made the choice.
Keepalives
To ensure that the connection between two endpoints has not been interrupted, endpoints exchange keepalive messages at regular intervals, which can also be used to automatically reestablish a tunnel lost due to connection interruption.
Dead Peer Detection (DPD) is a method of detecting a dead Internet Key Exchange (IKE) peer. The method uses IPsec traffic patterns to minimize the number of messages required to confirm the availability of a peer. DPD is used to reclaim the lost resources in case a peer is found dead and it is also used to perform IKE peer failover.
UDP keepalive is an alternative to DPD.
Modes of operation
The IPsec protocols AH and ESP can be implemented in a host-to-host transport mode, as well as in a network tunneling mode.
Transport mode
In transport mode, only the payload of the IP packet is usually encrypted or authenticated. The routing is intact, since the IP header is neither modified nor encrypted; however, when the authentication header is used, the IP addresses cannot be modified by network address translation, as this always invalidates the hash value. The transport and application layers are always secured by a hash, so they cannot be modified in any way, for example by translating the port numbers.
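Why address translation conflicts with the authentication hash can be shown in a few lines of Python: if the source address is included in the authenticated data, rewriting it changes the integrity check value. The HMAC construction below is illustrative and is not AH's exact input format.

```python
import hmac, hashlib

KEY = b"shared-secret"

def icv(src, dst, payload):
    """Illustrative integrity check over immutable header fields + payload."""
    return hmac.new(KEY, src.encode() + dst.encode() + payload,
                    hashlib.sha256).digest()

if __name__ == "__main__":
    original = icv("10.0.0.5", "192.0.2.1", b"data")
    # A NAT device rewrites the private source address to its public one...
    translated = icv("203.0.113.7", "192.0.2.1", b"data")
    # ...so the receiver's verification against the original ICV now fails.
    assert not hmac.compare_digest(original, translated)
```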
A means to encapsulate IPsec messages for NAT traversal (NAT-T) has been defined, covering negotiation of NAT-T in IKE and UDP encapsulation of ESP packets.
Tunnel mode
In tunnel mode, the entire IP packet is encrypted and authenticated. It is then encapsulated into a new IP packet with a new IP header. Tunnel mode is used to create virtual private networks for network-to-network communications (e.g. between routers to link sites), host-to-network communications (e.g. remote user access) and host-to-host communications (e.g. private chat).
Tunnel mode supports NAT traversal.
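Tunnel-mode encapsulation can be pictured as wrapping the whole original packet as opaque payload behind a new outer header naming the two gateways; the sketch below uses an invented textual "header" purely for illustration.

```python
def encapsulate(original_packet: bytes, gw_src: str, gw_dst: str) -> bytes:
    """Tunnel mode: the entire original IP packet becomes the payload of a
    new packet whose outer header names the two security gateways."""
    outer_header = f"{gw_src}->{gw_dst}|".encode()   # stand-in for a real IP header
    return outer_header + original_packet

def decapsulate(tunnel_packet: bytes) -> bytes:
    """The far gateway strips the outer header and forwards the inner packet."""
    _, inner = tunnel_packet.split(b"|", 1)
    return inner

if __name__ == "__main__":
    inner = b"\x45...original IP packet with its own src and dst..."
    assert decapsulate(encapsulate(inner, "198.51.100.1", "198.51.100.2")) == inner
```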
Algorithms
Symmetric encryption algorithms
Cryptographic algorithms defined for use with IPsec include:
HMAC-SHA1/SHA2 for integrity protection and authenticity.
TripleDES-CBC for confidentiality.
AES-CBC and AES-CTR for confidentiality.
AES-GCM and ChaCha20-Poly1305 providing confidentiality and authentication together efficiently.
Refer to RFC 8221 for details.
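As an illustration of combined-mode (AEAD) operation, the widely used Python cryptography package (assuming it is installed; this demonstrates the primitives only and is not an IPsec implementation) exposes AES-GCM and ChaCha20-Poly1305 with a single call that both encrypts and authenticates.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

plaintext = b"attack at dawn"
aad = b"unencrypted but authenticated header fields"

# AES-256-GCM: one call both encrypts and produces an authentication tag.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                 # 96-bit nonce; must never repeat per key
ct = AESGCM(key).encrypt(nonce, plaintext, aad)
assert AESGCM(key).decrypt(nonce, ct, aad) == plaintext

# ChaCha20-Poly1305 offers the same interface.
key2 = ChaCha20Poly1305.generate_key()
ct2 = ChaCha20Poly1305(key2).encrypt(nonce, plaintext, aad)
assert ChaCha20Poly1305(key2).decrypt(nonce, ct2, aad) == plaintext
```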
Key exchange algorithms
Diffie–Hellman (RFC 3526)
ECDH (RFC 4753)
Authentication algorithms
RSA
ECDSA (RFC 4754)
PSK (RFC 6617)
EdDSA (RFC 8420)
Implementations
IPsec can be implemented in the IP stack of an operating system. This method of implementation is done for hosts and security gateways. Various IPsec-capable IP stacks are available from companies such as HP or IBM. An alternative is the so-called bump-in-the-stack (BITS) implementation, where the operating system source code does not have to be modified. Here IPsec is installed between the IP stack and the network drivers. This way operating systems can be retrofitted with IPsec. This method of implementation is also used for both hosts and gateways. However, when retrofitting IPsec, the encapsulation of IP packets may cause problems for automatic path MTU discovery, where the maximum transmission unit (MTU) size on the network path between two IP hosts is established. If a host or gateway has a separate cryptoprocessor, which is common in the military and can also be found in commercial systems, a so-called bump-in-the-wire (BITW) implementation of IPsec is possible.
When IPsec is implemented in the kernel, the key management and ISAKMP/IKE negotiation is carried out from user space. The NRL-developed and openly specified "PF_KEY Key Management API, Version 2" is often used to enable the application-space key management application to update the IPsec security associations stored within the kernel-space IPsec implementation. Existing IPsec implementations usually include ESP, AH, and IKE version 2. Existing IPsec implementations on Unix-like operating systems, for example, Solaris or Linux, usually include PF_KEY version 2.
Embedded IPsec can be used to ensure the secure communication among applications running over constrained resource systems with a small overhead.
Standards status
IPsec was developed in conjunction with IPv6 and was originally required to be supported by all standards-compliant implementations of IPv6 before RFC 6434 made it only a recommendation. IPsec is also optional for IPv4 implementations. IPsec is most commonly used to secure IPv4 traffic.
IPsec protocols were originally defined in RFC 1825 through RFC 1829, which were published in 1995. In 1998, these documents were superseded by RFC 2401 and RFC 2412 with a few incompatible engineering details, although they were conceptually identical. In addition, a mutual authentication and key exchange protocol Internet Key Exchange (IKE) was defined to create and manage security associations. In December 2005, new standards were defined in RFC 4301 and RFC 4309 which are largely a superset of the previous editions with a second version of the Internet Key Exchange standard IKEv2. These third-generation documents standardized the abbreviation of IPsec to uppercase "IP" and lowercase "sec". "ESP" generally refers to RFC 4303, which is the most recent version of the specification.
Since mid-2008, an IPsec Maintenance and Extensions (ipsecme) working group is active at the IETF.
Alleged NSA interference
In 2013, as part of the Snowden leaks, it was revealed that the US National Security Agency had been actively working to "Insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets" as part of the Bullrun program. There are allegations that IPsec was a targeted encryption system.
The OpenBSD IPsec stack came later on and also was widely copied. In a letter which OpenBSD lead developer Theo de Raadt received on 11 Dec 2010 from Gregory Perry, it is alleged that Jason Wright and others, working for the FBI, inserted "a number of backdoors and side channel key leaking mechanisms" into the OpenBSD crypto code. In the forwarded email from 2010, Theo de Raadt did not at first express an official position on the validity of the claims, apart from the implicit endorsement from forwarding the email. Jason Wright's response to the allegations: "Every urban legend is made more real by the inclusion of real names, dates, and times. Gregory Perry's email falls into this category. ... I will state clearly that I did not add backdoors to the OpenBSD operating system or the OpenBSD Cryptographic Framework (OCF)." Some days later, de Raadt commented that "I believe that NETSEC was probably contracted to write backdoors as alleged. ... If those were written, I don't believe they made it into our tree." This was published before the Snowden leaks.
An alternative explanation put forward by the authors of the Logjam attack suggests that the NSA compromised IPsec VPNs by undermining the Diffie-Hellman algorithm used in the key exchange. In their paper, they allege the NSA specially built a computing cluster to precompute multiplicative subgroups for specific primes and generators, such as for the second Oakley group defined in RFC 2409. As of May 2015, 90% of addressable IPsec VPNs supported the second Oakley group as part of IKE. If an organization were to precompute this group, they could derive the keys being exchanged and decrypt traffic without inserting any software backdoors.
A second alternative explanation that was put forward was that the Equation Group used zero-day exploits against several manufacturers' VPN equipment which were validated by Kaspersky Lab as being tied to the Equation Group and validated by those manufacturers as being real exploits, some of which were zero-day exploits at the time of their exposure. The Cisco PIX and ASA firewalls had vulnerabilities that were used for wiretapping by the NSA.
Furthermore, IPsec VPNs using "Aggressive Mode" settings send a hash of the PSK in the clear. This can be and apparently is targeted by the NSA using offline dictionary attacks.
See also
Dynamic Multipoint Virtual Private Network
Information security
NAT traversal
Opportunistic encryption
tcpcrypt
Tunneling protocol
References
Further reading
Standards track
: The ESP DES-CBC Transform
: The Use of HMAC-MD5-96 within ESP and AH
: The Use of HMAC-SHA-1-96 within ESP and AH
: The ESP DES-CBC Cipher Algorithm With Explicit IV
: The NULL Encryption Algorithm and Its Use With IPsec
: The ESP CBC-Mode Cipher Algorithms
: The Use of HMAC-RIPEMD-160-96 within ESP and AH
: More Modular Exponential (MODP) Diffie-Hellman groups for Internet Key Exchange (IKE)
: The AES-CBC Cipher Algorithm and Its Use with IPsec
: Using Advanced Encryption Standard (AES) Counter Mode With IPsec Encapsulating Security Payload (ESP)
: Negotiation of NAT-Traversal in the IKE
: UDP Encapsulation of IPsec ESP Packets
: The Use of Galois/Counter Mode (GCM) in IPsec Encapsulating Security Payload (ESP)
: Security Architecture for the Internet Protocol
: IP Authentication Header
: IP Encapsulating Security Payload
: Extended Sequence Number (ESN) Addendum to IPsec Domain of Interpretation (DOI) for Internet Security Association and Key Management Protocol (ISAKMP)
: Cryptographic Algorithms for Use in the Internet Key Exchange Version 2 (IKEv2)
: Cryptographic Suites for IPsec
: Using Advanced Encryption Standard (AES) CCM mode with IPsec Encapsulating Security Payload (ESP)
: The Use of Galois Message Authentication Code (GMAC) in IPsec ESP and AH
: IKEv2 Mobility and Multihoming Protocol (MOBIKE)
: Online Certificate Status Protocol (OCSP) Extensions to IKEv2
: Using HMAC-SHA-256, HMAC-SHA-384, and HMAC-SHA-512 with IPsec
: The Internet IP Security PKI Profile of IKEv1/ISAKMP, IKEv2, and PKIX
: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile
: Using Authenticated Encryption Algorithms with the Encrypted Payload of the Internet Key Exchange version 2 (IKEv2) Protocol
: Better-Than-Nothing Security: An Unauthenticated Mode of IPsec
: Modes of Operation for Camellia for Use with IPsec
: Redirect Mechanism for the Internet Key Exchange Protocol Version 2 (IKEv2)
: Internet Key Exchange Protocol Version 2 (IKEv2) Session Resumption
: IKEv2 Extensions to Support Robust Header Compression over IPsec
: IPsec Extensions to Support Robust Header Compression over IPsec
: Internet Key Exchange Protocol Version 2 (IKEv2)
: Cryptographic Algorithm Implementation Requirements and Usage Guidance for Encapsulating Security Payload (ESP) and Authentication Header (AH)
: Internet Key Exchange Protocol Version 2 (IKEv2) Message Fragmentation
: Signature Authentication in the Internet Key Exchange Version 2 (IKEv2)
: ChaCha20, Poly1305, and Their Use in the Internet Key Exchange Protocol (IKE) and IPsec
Experimental RFCs
: Repeated Authentication in Internet Key Exchange (IKEv2) Protocol
Informational RFCs
: PF_KEY Interface
: The OAKLEY Key Determination Protocol
: A Traffic-Based Method of Detecting Dead Internet Key Exchange (IKE) Peers
: IPsec-Network Address Translation (NAT) Compatibility Requirements
: Design of the IKEv2 Mobility and Multihoming (MOBIKE) Protocol
: Requirements for an IPsec Certificate Management Profile
: Problem and Applicability Statement for Better-Than-Nothing Security (BTNS)
: Integration of Robust Header Compression over IPsec Security Associations
: Using Advanced Encryption Standard Counter Mode (AES-CTR) with the Internet Key Exchange version 02 (IKEv2) Protocol
: IPsec Cluster Problem Statement
: IPsec and IKE Document Roadmap
: Suite B Cryptographic Suites for IPsec
: Suite B Profile for Internet Protocol Security (IPsec)
: Secure Password Framework for Internet Key Exchange Version 2 (IKEv2)
Best current practice RFCs
: Guidelines for Specifying the Use of IPsec Version 2
Obsolete/historic RFCs
: Security Architecture for the Internet Protocol (obsoleted by RFC 2401)
: IP Authentication Header (obsoleted by RFC 2402)
: IP Encapsulating Security Payload (ESP) (obsoleted by RFC 2406)
: IP Authentication using Keyed MD5 (historic)
: Security Architecture for the Internet Protocol (IPsec overview) (obsoleted by RFC 4301)
: IP Encapsulating Security Payload (ESP) (obsoleted by RFC 4303 and RFC 4305)
: The Internet IP Security Domain of Interpretation for ISAKMP (obsoleted by RFC 4306)
: The Internet Key Exchange (obsoleted by RFC 4306)
: Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH) (obsoleted by RFC 4835)
: Internet Key Exchange (IKEv2) Protocol (obsoleted by RFC 5996)
: IKEv2 Clarifications and Implementation Guidelines (obsoleted by RFC 7296)
: Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH) (obsoleted by RFC 7321)
: Internet Key Exchange Protocol Version 2 (IKEv2) (obsoleted by RFC 7296)
External links
All IETF active security WGs
IETF ipsecme WG ("IP Security Maintenance and Extensions" Working Group)
IETF btns WG ("Better-Than-Nothing Security" Working Group) (chartered to work on unauthenticated IPsec, IPsec APIs, connection latching)
Securing Data in Transit with IPsec WindowsSecurity.com article by Deb Shinder
IPsec on Microsoft TechNet
Microsoft IPsec Diagnostic Tool on Microsoft Download Center
An Illustrated Guide to IPsec by Steve Friedl
Security Architecture for IP (IPsec) Data Communication Lectures by Manfred Lindner Part IPsec
Creating VPNs with IPsec and SSL/TLS Linux Journal article by Rami Rosen
Cryptographic protocols
Internet protocols
Network layer protocols
Tunneling protocols | IPsec | [
"Engineering"
] | 5,195 | [
"Computer networks engineering",
"Tunneling protocols"
] |
43,395 | https://en.wikipedia.org/wiki/Alfred%20North%20Whitehead | Alfred North Whitehead (15 February 1861 – 30 December 1947) was an English mathematician and philosopher. He created the philosophical school known as process philosophy, which has been applied in a wide variety of disciplines, including ecology, theology, education, physics, biology, economics, and psychology.
In his early career Whitehead wrote primarily on mathematics, logic, and physics. He wrote the three-volume Principia Mathematica (1910–1913), with his former student Bertrand Russell. Principia Mathematica is considered one of the twentieth century's most important works in mathematical logic, and placed 23rd in a list of the top 100 English-language nonfiction books of the twentieth century by Modern Library.
Beginning in the late 1910s and early 1920s, Whitehead gradually turned his attention from mathematics to philosophy of science, and finally to metaphysics. He developed a comprehensive metaphysical system which radically departed from most of Western philosophy. Whitehead argued that reality consists of processes rather than material objects, and that processes are best defined by their relations with other processes, thus rejecting the theory that reality is fundamentally constructed by bits of matter that exist independently of one another. Whitehead's philosophical works – particularly Process and Reality – are regarded as the foundational texts of process philosophy.
Whitehead's process philosophy argues that "there is urgency in coming to see the world as a web of interrelated processes of which we are integral parts, so that all of our choices and actions have consequences for the world around us." For this reason, one of the most promising applications of Whitehead's thought in the 21st century has been in the area of ecological civilization and environmental ethics pioneered by John B. Cobb.
Life
Childhood and education
Alfred North Whitehead was born in Ramsgate, Kent, England, in 1861. His father, Alfred Whitehead, became an Anglican minister after being headmaster of Chatham House Academy, a school for boys previously headed by Alfred's father, Thomas Whitehead. Whitehead himself recalled both of them as being very successful schoolmasters, with his grandfather being the more "remarkable" man.
Whitehead's mother was Maria Sarah Buckmaster. Her maternal great-grandmother was Jane North (1776–1847), whose maiden surname was given to Whitehead and, over time, to several other members of his family. Maria Buckmaster had eleven siblings; the son of her brother Thomas, Walter Selby Buckmaster, twice won an Olympic silver medal in polo for Britain (1900, 1908) and is said to be "one of the finest polo players England has ever produced". Whitehead does not appear to have been close to his mother, although he and Evelyn (full name: Evelyn Ada Maud Rice Willoughby Wade), whom he married in 1890, are recorded in the English Census of 1891 as living with Alfred's mother and father. Lowe notes that there appears to have been mutual dislike between Whitehead's wife, Evelyn, and his mother, Maria.
Griffin relates how Bertrand Russell, Whitehead's colleague and collaborator, was a very close friend of both Whitehead and his wife, Evelyn. Griffin retells Russell's story of how, one evening in 1901, "they found Evelyn Whitehead in the middle of what appeared to be a dangerous and acutely painful angina attack. ... [but] It seems that she suffered from a psychosomatic disorder ... [and] the danger was illusory." Griffin posits that Russell exaggerated the drama of her illness, and that both Evelyn and Russell were habitually given to melodrama. Intensity of emotion was encouraged by their avant-garde associates in the turbulent Bloomsbury Group, which "discussed aesthetic and philosophical questions in a spirit of agnosticism and were strongly influenced by G.E. Moore's Principia Ethica (1903) and by A. N. Whitehead's and Bertrand Russell's Principia Mathematica (1910–13), in the light of which they searched for definitions of the good, the true, and the beautiful".
Alfred's brother Henry became Bishop of Madras and wrote the closely observed ethnographic account Village Gods of South-India (Calcutta: Association Press, 1921).
Whitehead was educated at Sherborne, a prominent English public school, where he excelled in sports and mathematics and was head prefect of his class.
In 1880, he began attending Trinity College, Cambridge, and studied mathematics. His academic advisor was Edward Routh. He earned his B.A. from Trinity in 1884, writing his dissertation on James Clerk Maxwell's A Treatise on Electricity and Magnetism, and graduated as fourth wrangler.
Career
Elected a fellow of Trinity in 1884, Whitehead would teach and write on mathematics and physics at the college until 1910, spending the 1890s writing his Treatise on Universal Algebra (1898), and the 1900s collaborating with his former pupil, Bertrand Russell, on the first edition of Principia Mathematica. He was a Cambridge Apostle.
In 1910, Whitehead resigned his senior lectureship in mathematics at Trinity and moved to London without first obtaining another job. After being unemployed for a year, he accepted a position as lecturer in applied mathematics and mechanics at University College London but was passed over a year later for the Goldsmid Chair of Applied Mathematics and Mechanics, a position for which he had hoped to be seriously considered.
In 1914, Whitehead accepted a position as professor of applied mathematics at the newly chartered Imperial College London, where his old friend Andrew Forsyth had recently been appointed chief professor of mathematics.
In 1918, Whitehead's academic responsibilities began to seriously expand as he accepted a number of high administrative positions within the University of London system, of which Imperial College London was a member at the time. He was elected dean of the Faculty of Science at the University of London in late 1918 (a post he held for four years), a member of the University of London's Senate in 1919, and chairman of the Senate's Academic (leadership) Council in 1920, a post which he held until he departed for America in 1924. Whitehead was able to exert his newfound influence to successfully lobby for a new history of science department, help establish a Bachelor of Science degree (previously only Bachelor of Arts degrees had been offered), and make the school more accessible to less wealthy students.
Toward the end of his time in England, Whitehead turned his attention to philosophy. Though he had no advanced training in philosophy, his philosophical work soon became highly regarded. After publishing The Concept of Nature in 1920, he served as president of the Aristotelian Society from 1922 to 1923.
Move to the United States, 1924
In 1924, Henry Osborn Taylor invited the 63-year-old Whitehead to join the faculty at Harvard University as a professor of philosophy. The Whiteheads would spend the rest of their lives in the United States.
During his time at Harvard, Whitehead produced his most important philosophical contributions. In 1925, he wrote Science and the Modern World, which was immediately hailed as an alternative to the Cartesian dualism then prevalent in popular science. He was elected to the American Academy of Arts and Sciences that same year, and to the American Philosophical Society in 1926. His lectures of 1927–28 were published in 1929 as Process and Reality, a work that has been compared to Immanuel Kant's Critique of Pure Reason.
Family and death
In 1890, Whitehead married Evelyn Wade, an Irishwoman raised in France; they had a daughter, Jessie, and two sons, Thomas and Eric. Thomas followed his father to Harvard in 1931, to teach at the Business School. Eric died in action at the age of 19, while serving in the Royal Flying Corps during World War I.
From 1910, the Whiteheads had a cottage in the village of Lockeridge, near Marlborough, Wiltshire; from there he completed Principia Mathematica.
The Whiteheads remained in the United States after moving to Harvard in 1924. Alfred retired from Harvard in 1937 and remained in Cambridge, Massachusetts, until his death on 30 December 1947.
Legacy
The two-volume biography of Whitehead by Victor Lowe is the most definitive presentation of the life of Whitehead. However, many details of Whitehead's life remain obscure because he left no Nachlass (personal archive); his family carried out his instructions that all of his papers be destroyed after his death. Additionally, Whitehead was known for his "almost fanatical belief in the right to privacy," and for writing very few personal letters of the kind that would help to gain insight on his life. Wrote Lowe in his preface, "No professional biographer in his right mind would touch him."
Led by Executive Editor Brian G. Henning and General Editor George R. Lucas Jr., the Whitehead Research Project of the Center for Process Studies is currently working on a critical edition of Whitehead's published and unpublished works. The first volume of the Edinburgh Critical Edition of the Complete Works of Alfred North Whitehead was published in 2017 by Paul A. Bogaard and Jason Bell as The Harvard Lectures of Alfred North Whitehead, 1924–1925: The Philosophical Presuppositions of Science.
Mathematics and logic
In addition to numerous articles on mathematics, Whitehead wrote three major books on the subject: A Treatise on Universal Algebra (1898), Principia Mathematica (co-written with Bertrand Russell and published in three volumes between 1910 and 1913), and An Introduction to Mathematics (1911). The former two books were aimed exclusively at professional mathematicians, while the latter book was intended for a larger audience, covering the history of mathematics and its philosophical foundations. Principia Mathematica in particular is regarded as one of the most important works in mathematical logic of the 20th century.
In addition to his legacy as a co-writer of Principia Mathematica, Whitehead's theory of "extensive abstraction" is considered foundational for the branch of ontology and computer science known as "mereotopology," a theory describing spatial relations among wholes, parts, parts of parts, and the boundaries between parts.
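To give a concrete flavour of what such a theory formalizes, the following is a minimal sketch in Lean of a generic "parthood plus connection" signature of the kind found in the mereotopology literature; the axioms shown are common but not universal, the names are our own illustrative choices, and this is not Whitehead's own method of extensive abstraction.

```lean
-- A minimal illustrative sketch of a mereotopological signature (not Whitehead's
-- own formalism): regions carry a parthood relation and a connection relation.
class Mereotopology (Region : Type) where
  part : Region → Region → Prop        -- "x is a part of y"
  conn : Region → Region → Prop        -- "x is connected to y"
  part_refl  : ∀ x, part x x                              -- parthood is reflexive
  part_trans : ∀ x y z, part x y → part y z → part x z    -- and transitive
  conn_refl  : ∀ x, conn x x                              -- connection is reflexive
  conn_symm  : ∀ x y, conn x y → conn y x                 -- and symmetric
  conn_mono  : ∀ x y z, part x y → conn z x → conn z y    -- parts pass connections to wholes
```

From such a signature one can state relations among wholes, parts, and boundaries purely in terms of `part` and `conn`, which is the kind of analysis the field pursues.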
A Treatise on Universal Algebra
In A Treatise on Universal Algebra (1898), the term universal algebra had essentially the same meaning that it has today: the study of algebraic structures themselves, rather than examples ("models") of algebraic structures. Whitehead credits William Rowan Hamilton and Augustus De Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself.
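For a modern reader, the distinction between a structure and its models can be illustrated with a short Lean sketch; the class and instance below are our own illustrative choices, expressed in present-day terminology rather than the notation of Whitehead's treatise.

```lean
-- Illustrative only: "universal algebra" in the modern sense studies a signature
-- and its laws in the abstract, while particular algebras are models of it.

-- An abstract structure: a carrier type with one associative binary operation.
class Semigroup' (α : Type) where
  op : α → α → α
  assoc : ∀ a b c : α, op (op a b) c = op a (op b c)

-- One concrete model of that structure: the natural numbers under addition.
instance : Semigroup' Nat where
  op := (· + ·)
  assoc := Nat.add_assoc

#eval Semigroup'.op (2 : Nat) 3   -- 5, computed in the Nat model

-- A fact proved from the abstract law alone holds in every model.
example {α : Type} [Semigroup' α] (a b c d : α) :
    Semigroup'.op (Semigroup'.op (Semigroup'.op a b) c) d
      = Semigroup'.op a (Semigroup'.op b (Semigroup'.op c d)) := by
  simp only [Semigroup'.assoc]
```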
At the time, structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but rather the comparative study of their several structures." In a separate review, G. B. Mathews wrote, "It possesses a unity of design which is really remarkable, considering the variety of its themes."
A Treatise on Universal Algebra sought to examine Hermann Grassmann's theory of extension ("Ausdehnungslehre"), Boole's algebra of logic, and Hamilton's quaternions (this last number system was to be taken up in Volume II, which was never finished due to Whitehead's work on Principia Mathematica). Whitehead wrote in the preface:
Whitehead, however, had no results of a general nature. His hope of "form[ing] a uniform method of interpretation of the various algebras" presumably would have been developed in Volume II, had Whitehead completed it. Further work on the subject was minimal until the early 1930s when Garrett Birkhoff and Øystein Ore began publishing on universal algebras.
Principia Mathematica
Principia Mathematica (1910–1913) is Whitehead's most famous mathematical work. Written with former student Bertrand Russell, Principia Mathematica is considered one of the twentieth century's most important works in mathematics, and placed 23rd in a list of the top 100 English-language nonfiction books of the twentieth century by Modern Library.
Principia Mathematica's purpose was to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. Whitehead and Russell were working on such a foundational level of mathematics and logic that it took them until page 86 of Volume II to prove that 1+1=2, a proof humorously accompanied by the comment, "The above proposition is occasionally useful."
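For readers curious what a fully formal derivation of this fact looks like today, here is a minimal sketch in the Lean proof assistant, written in a Peano-style spirit; it is our own illustrative construction and does not reproduce Principia's symbolism or its actual chain of theorems.

```lean
-- Illustrative only: a from-scratch Peano-style encoding, not Principia's notation.
inductive N where
  | zero : N
  | succ : N → N

-- Addition defined by recursion on the second argument.
def add : N → N → N
  | n, N.zero   => n
  | n, N.succ m => N.succ (add n m)

def one : N := N.succ N.zero
def two : N := N.succ one

-- With these definitions in place, "1 + 1 = 2" follows by unfolding them.
theorem one_add_one_eq_two : add one one = two := rfl
```

The point of the comparison is only that, once numbers and addition are defined from logical primitives, the familiar arithmetic fact becomes a theorem to be derived rather than an assumption.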
Whitehead and Russell had thought originally that Principia Mathematica would take a year to complete; it ended up taking them ten years. When it came time for publication, the three-volume work was so long (more than 2,000 pages) and its audience so narrow (professional mathematicians) that it was initially published at a loss of 600 pounds, 300 of which was paid by Cambridge University Press, 200 by the Royal Society of London, and 50 apiece by Whitehead and Russell themselves. Despite the initial loss, today there is likely no major academic library in the world which does not hold a copy of Principia Mathematica.
The ultimate substantive legacy of Principia Mathematica is mixed. It is generally accepted that Kurt Gödel's incompleteness theorem of 1931 definitively demonstrated that for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths of mathematics which could not be deduced from them, and hence that Principia Mathematica could never achieve its aims. However, Gödel could not have come to this conclusion without Whitehead and Russell's book. In this way, Principia Mathematica's legacy might be described as its key role in disproving the possibility of achieving its own stated goals. But beyond this somewhat ironic legacy, the book popularized modern mathematical logic and drew important connections between logic, epistemology, and metaphysics.
An Introduction to Mathematics
Unlike Whitehead's previous two books on mathematics, An Introduction to Mathematics (1911) was not aimed exclusively at professional mathematicians but was intended for a larger audience. The book covered the nature of mathematics, its unity and internal structure, and its applicability to nature. Whitehead wrote in the opening chapter:
The book can be seen as an attempt to understand the growth in unity and interconnection of mathematics as a whole, as well as an examination of the mutual influence of mathematics and philosophy, language, and physics. Although the book is little-read, in some ways it prefigures certain points of Whitehead's later work in philosophy and metaphysics.
Views on education
Whitehead showed a deep concern for educational reform at all levels. In addition to his numerous individually written works on the subject, in 1921 Whitehead was appointed by Britain's Prime Minister, David Lloyd George, to a 20-person committee charged with investigating the educational systems and practices of the UK and recommending reform.
Whitehead's most complete work on education is the 1929 book The Aims of Education and Other Essays, which collected numerous essays and addresses by Whitehead on the subject published between 1912 and 1927. The essay from which Aims of Education derived its name was delivered as an address in 1916 when Whitehead was president of the London Branch of the Mathematical Association. In it, he cautioned against the teaching of what he called "inert ideas" – ideas that are disconnected scraps of information, with no application to real life or culture. He opined that "education with inert ideas is not only useless: it is, above all things, harmful."
Rather than teach small parts of a large number of subjects, Whitehead advocated teaching a relatively few important concepts that the student could organically link to many different areas of knowledge, discovering their application in actual life. For Whitehead, education should be the exact opposite of the multidisciplinary, value-free school model – it should be transdisciplinary, and laden with values and general principles that provide students with a bedrock of wisdom and help them to make connections between areas of knowledge that are usually regarded as separate.
In order to make this sort of teaching a reality, however, Whitehead pointed to the need to minimize the importance of (or radically alter) standard examinations for school entrance. Whitehead writes:
Whitehead argued that a curriculum should be developed specifically for its own students by its own staff, or else risk total stagnation, interrupted only by occasional movements from one group of inert ideas to another.
Above all else in his educational writings, Whitehead emphasized the importance of imagination and the free play of ideas. In his essay "Universities and Their Function", Whitehead writes provocatively on imagination:
Whitehead's philosophy of education might adequately be summarized in his statement that "knowledge does not keep any better than fish". In other words, bits of disconnected knowledge are meaningless; all knowledge must find some imaginative application to the students' own lives, or else it becomes useless trivia, and the students themselves become good at parroting facts but not thinking for themselves.
Philosophy and metaphysics
Whitehead did not begin his career as a philosopher. In fact, he never had any formal training in philosophy beyond his undergraduate education. Early in his life, he showed great interest in and respect for philosophy and metaphysics, but it is evident that he considered himself a rank amateur. In one letter to his friend and former student Bertrand Russell, after discussing whether science aimed to be explanatory or merely descriptive, he wrote: "This further question lands us in the ocean of metaphysic, onto which my profound ignorance of that science forbids me to enter." Ironically, in later life, Whitehead would become one of the 20th century's foremost metaphysicians.
However, interest in metaphysics – the philosophical investigation of the nature of the universe and existence – had become unfashionable by the time Whitehead began writing in earnest about it in the 1920s. The ever-more impressive accomplishments of empirical science had led to a general consensus in academia that the development of comprehensive metaphysical systems was a waste of time because they were not subject to empirical testing.
Whitehead was unimpressed by this objection. In the notes of one of his students for a 1927 class, Whitehead was quoted as saying: "Every scientific man in order to preserve his reputation has to say he dislikes metaphysics. What he means is he dislikes having his metaphysics criticized." In Whitehead's view, scientists and philosophers make metaphysical assumptions about how the universe works all the time, but such assumptions are not easily seen precisely because they remain unexamined and unquestioned. While Whitehead acknowledged that "philosophers can never hope finally to formulate these metaphysical first principles", he argued that people need to continually reimagine their basic assumptions about how the universe works if philosophy and science are to make any real progress, even if that progress remains permanently asymptotic. For this reason, Whitehead regarded metaphysical investigations as essential to both good science and good philosophy.
Perhaps foremost among what Whitehead considered faulty metaphysical assumptions was the Cartesian idea that reality is fundamentally constructed of bits of matter that exist totally independently of one another, which he rejected in favour of an event-based or "process" ontology in which events are primary and are fundamentally interrelated and dependent on one another. He also argued that the most basic elements of reality can all be regarded as experiential, indeed that everything is constituted by its experience. He used the term "experience" very broadly so that even inanimate processes such as electron collisions are said to manifest some degree of experience. In this, he went against Descartes' separation of two different kinds of real existence, either exclusively material or else exclusively mental. Whitehead referred to his metaphysical system as the "philosophy of organism," but it would become known more widely as "process philosophy."
Whitehead's philosophy was highly original, and soon garnered interest in philosophical circles. After publishing The Concept of Nature in 1920, he served as president of the Aristotelian Society from 1922 to 1923, and Henri Bergson was quoted as saying that Whitehead was "the best philosopher writing in English." So impressive and different was Whitehead's philosophy that in 1924 he was invited to join the faculty at Harvard University as a professor of philosophy at 63 years of age.
This is not to say that Whitehead's thought was widely accepted or even well understood. His philosophical work is generally considered to be among the most difficult to understand in all of the Western canon. Even professional philosophers struggled to follow Whitehead's writings. One famous story illustrating the level of difficulty of Whitehead's philosophy centres around the delivery of Whitehead's Gifford lectures in 1927–28 – following Arthur Eddington's lectures of the year previous – which Whitehead would later publish as Process and Reality:
It may not be inappropriate to speculate that some fair portion of the respect generally shown to Whitehead by his philosophical peers at the time arose from their sheer bafflement. The Chicago theologian Shailer Mathews once remarked of Whitehead's 1926 book Religion in the Making: "It is infuriating, and I must say embarrassing as well, to read page after page of relatively familiar words without understanding a single sentence."
However, Mathews' frustration with Whitehead's books did not negatively affect his interest. In fact, there were numerous philosophers and theologians at Chicago's Divinity School who perceived the importance of what Whitehead was doing without fully grasping all of the details and implications. In 1927, they invited one of America's only Whitehead experts, Henry Nelson Wieman, to Chicago to give a lecture explaining Whitehead's thoughts. Wieman's lecture was so brilliant that he was promptly hired to the faculty and taught there for twenty years, and for at least thirty years afterwards Chicago's Divinity School was closely associated with Whitehead's thought.
Shortly after Whitehead's book Process and Reality appeared in 1929, Wieman famously wrote in his 1930 review:
Wieman's words proved prophetic. Though Process and Reality has been called "arguably the most impressive single metaphysical text of the twentieth century," it has been little-read and little-understood, partly because it demands – as Isabelle Stengers puts it – "that its readers accept the adventure of the questions that will separate them from every consensus." Whitehead questioned Western philosophy's most dearly held assumptions about how the universe works – but in doing so, he managed to anticipate a number of 21st century scientific and philosophical problems and provide novel solutions.
Whitehead's conception of reality
Whitehead was convinced that the scientific notion of matter was misleading as a way of describing the ultimate nature of things. In his 1925 book Science and the Modern World, he wrote that:
In Whitehead's view, there are a number of problems with this notion of "irreducible brute matter". First, it obscures and minimizes the importance of change. By thinking of any material thing (like a rock, or a person) as being fundamentally the same thing throughout time, with any changes to it being secondary to its "nature", scientific materialism hides the fact that nothing ever stays the same. For Whitehead, change is fundamental and inescapable; he emphasizes that "all things flow".
In Whitehead's view, then, concepts such as "quality", "matter", and "form" are problematic. These "classical" concepts fail to adequately account for change, and overlook the active and experiential nature of the most basic elements of the world. They are useful abstractions but are not the world's basic building blocks. What is ordinarily conceived of as a single person, for instance, is philosophically described as a continuum of overlapping events. After all, people change all the time, if only because they have aged by another second and had some further experience. These occasions of experience are logically distinct but are progressively connected in what Whitehead calls a "society" of events. By assuming that enduring objects are the most real and fundamental things in the universe, materialists have mistaken the abstract for the concrete (what Whitehead calls the "fallacy of misplaced concreteness").
To put it another way, a thing or person is often seen as having a "defining essence" or a "core identity" that is unchanging, and describes what the thing or person really is. In this way of thinking, things and people are seen as fundamentally the same through time, with any changes being qualitative and secondary to their core identity (e.g., "Mark's hair has turned grey as he has gotten older, but he is still the same person"). But in Whitehead's cosmology, the only fundamentally existent things are discrete "occasions of experience" that overlap one another in time and space, and jointly make up the enduring person or thing. On the other hand, what ordinary thinking often regards as "the essence of a thing" or "the identity/core of a person" is an abstract generalization of what is regarded as that person or thing's most important or salient features across time. Identities do not define people; people define identities. Everything changes from moment to moment and to think of anything as having an "enduring essence" misses the fact that "all things flow," though it is often a useful way of speaking.
Whitehead pointed to the limitations of language as one of the main culprits in maintaining a materialistic way of thinking and acknowledged that it may be difficult to ever wholly move past such ideas in everyday speech. After all, every moment of each person's life can hardly be given a different proper name, and it is easy and convenient to think of people and objects as remaining fundamentally the same things, rather than constantly keeping in mind that each thing is a different thing from what it was a moment ago. Yet the limitations of everyday living and everyday speech should not prevent people from realizing that "material substances" or "essences" are a convenient generalized description of a continuum of particular, concrete processes. No one questions that a ten-year-old person is quite different by the time he or she turns thirty years old, and in many ways is not the same person at all; Whitehead points out that it is not philosophically or ontologically sound to think that a person is the same from one second to the next.
A second problem with materialism is that it obscures the importance of relations. It sees every object as distinct and discrete from all other objects. Each object is simply an inert clump of matter that is only externally related to other things. The idea of matter as primary makes people think of objects as being fundamentally separate in time and space, and not necessarily related to anything. But in Whitehead's view, relations take a primary role, perhaps even more important than the relata themselves. A student taking notes in one of Whitehead's fall 1924 classes wrote that, "Reality applies to connections, and only relatively to the things connected. (A) is real for (B), and (B) is real for (A), but [they are] not absolutely real independent of each other." In fact, Whitehead describes any entity as in some sense nothing more and nothing less than the sum of its relations to other entities – its synthesis of and reaction to the world around it. A real thing is just that which forces the rest of the universe to in some way conform to it; that is to say, if theoretically, a thing made strictly no difference to any other entity (i.e., it was not related to any other entity), it could not be said to really exist. Relations are not secondary to what a thing is; they are what the thing is.
To Whitehead, an entity is not merely a sum of its relations, but also a valuation of them and reaction to them. For Whitehead, creativity is the absolute principle of existence, and every entity (whether it is a human being, a tree, or an electron) has some degree of novelty in how it responds to other entities and is not fully determined by causal or mechanistic laws. Most entities do not have consciousness. As a human being's actions cannot always be predicted, the same can be said of where a tree's roots will grow, or how an electron will move, or whether it will rain tomorrow. Moreover, the inability to predict an electron's movement (for instance) is not due to faulty understanding or inadequate technology; rather, the fundamental creativity/freedom of all entities means that there will always remain phenomena that are unpredictable.
The other side of creativity/freedom as the absolute principle is that every entity is constrained by the social structure of existence (i.e., its relations); each actual entity must conform to the settled conditions of the world around it. Freedom always exists within limits. But an entity's uniqueness and individuality arise from its own self-determination as to just how it will take account of the world within the limits that have been set for it.
In summary, Whitehead rejects the idea of separate and unchanging bits of matter as the most basic building blocks of reality, in favour of the idea of reality as interrelated events in process. He conceives of reality as composed of processes of dynamic "becoming" rather than static "being", emphasizing that all physical things change and evolve and that changeless "essences" such as matter are mere abstractions from the interrelated events that are the final real things that make up the world.
Theory of perception
Since Whitehead's metaphysics described a universe in which all entities experience, he needed a new way of describing perception that was not limited to living, self-conscious beings. The term he coined was "prehension," which comes from the Latin prehensio, meaning "to seize". The term is meant to indicate a kind of perception that can be conscious or unconscious, applying to people as well as electrons. It is also intended to make clear Whitehead's rejection of the theory of representative perception, in which the mind only has private ideas about other entities. For Whitehead, the term "prehension" indicates that the perceiver actually incorporates aspects of the perceived thing into itself. In this way, entities are constituted by their perceptions and relations, rather than being independent of them. Further, Whitehead regards perception as occurring in two modes, causal efficacy (or "physical prehension") and presentational immediacy (or "conceptual prehension").
Whitehead describes causal efficacy as "the experience dominating the primitive living organisms, which have a sense for the fate from which they have emerged, and the fate towards which they go." It is, in other words, the sense of causal relations between entities, a feeling of being influenced and affected by the surrounding environment, unmediated by the senses. Presentational immediacy, on the other hand, is what is usually referred to as "pure sense perception", unmediated by any causal or symbolic interpretation, even unconscious interpretation. In other words, it is pure appearance, which may or may not be delusive (e.g., mistaking an image in a mirror for "the real thing").
In higher organisms (like people), these two modes of perception combine into what Whitehead terms "symbolic reference", which links appearance with causation in a process that is so automatic that both people and animals have difficulty refraining from it. By way of illustration, Whitehead uses the example of a person's encounter with a chair. An ordinary person looks up, sees a coloured shape, and immediately infers that it is a chair. However, an artist, Whitehead supposes, "might not have jumped to the notion of a chair", but instead "might have stopped at the mere contemplation of a beautiful colour and a beautiful shape." This is not the normal human reaction; most people place objects in categories by habit and instinct, without even thinking about it. Moreover, animals do the same thing. Using the same example, Whitehead points out that a dog "would have acted immediately on the hypothesis of a chair and would have jumped onto it by way of using it as such." In this way, symbolic reference is a fusion of pure sense perceptions on the one hand and causal relations on the other, and it is in fact the causal relationships that dominate the more basic mentality (as the dog illustrates), while it is the sense perceptions which indicate a higher grade mentality (as the artist illustrates).
Evolution and value
Whitehead believed that when asking questions about the basic facts of existence, questions about value and purpose can never be fully escaped. This is borne out in his thoughts on abiogenesis, or the hypothetical natural process by which life arises from simple organic compounds.
Whitehead makes the startling observation that "life is comparatively deficient in survival value." If humans can only exist for about a hundred years, and rocks for eight hundred million, then one is forced to ask why complex organisms ever evolved in the first place; as Whitehead humorously notes, "they certainly did not appear because they were better at that game than the rocks around them." He then observes that the mark of higher forms of life is that they are actively engaged in modifying their environment, an activity which he theorizes is directed toward the three-fold goal of living, living well, and living better. In other words, Whitehead sees life as directed toward the purpose of increasing its own satisfaction. Without such a goal, he sees the rise of life as totally unintelligible.
For Whitehead, there is no such thing as wholly inert matter. Instead, all things have some measure of freedom or creativity, however small, which allows them to be at least partly self-directed. The process philosopher David Ray Griffin coined the term "panexperientialism" (the idea that all entities experience) to describe Whitehead's view, and to distinguish it from panpsychism (the idea that all matter has consciousness).
God
Whitehead's idea of God differs from traditional monotheistic notions. Perhaps his most famous and pointed criticism of the Christian conception of God is that "the Church gave unto God the attributes which belonged exclusively to Caesar." Here, Whitehead is criticizing Christianity for defining God as primarily a divine king who imposes his will on the world, and whose most important attribute is power. As opposed to the most widely accepted forms of Christianity, Whitehead emphasized an idea of God that he called "the brief Galilean vision of humility":
For Whitehead, God is not necessarily tied to religion. Rather than springing primarily from religious faith, Whitehead saw God as necessary for his metaphysical system. His system required that an order exist among possibilities, an order that allowed for novelty in the world and provided an aim to all entities. Whitehead posited that these ordered potentials exist in what he called the primordial nature of God. However, Whitehead was also interested in religious experience. This led him to reflect more intensively on what he saw as the second nature of God, the consequent nature. Whitehead's conception of God as a "dipolar" entity has called for fresh theological thinking.
The primordial nature he described as "the unlimited conceptual realization of the absolute wealth of potentiality" – i.e., the unlimited possibility of the universe. This primordial nature is eternal and unchanging, providing entities in the universe with possibilities for realization. Whitehead also calls this primordial aspect "the lure for feeling, the eternal urge of desire," pulling the entities in the universe toward as-yet unrealized possibilities.
God's consequent nature, on the other hand, is anything but unchanging; it is God's reception of the world's activity. As Whitehead puts it, "[God] saves the world as it passes into the immediacy of his own life. It is the judgment of a tenderness which loses nothing that can be saved." In other words, God saves and cherishes all experiences forever, and those experiences go on to change the way God interacts with the world. In this way, God is really changed by what happens in the world and the wider universe, lending the actions of finite creatures an eternal significance.
Whitehead thus sees God and the world as fulfilling one another. He sees entities in the world as fluent and changing things that yearn for a permanence which only God can provide by taking them into God's self, thereafter changing God and affecting the rest of the universe throughout time. On the other hand, he sees God as permanent but as deficient in actuality and change: alone, God is merely eternally unrealized possibilities and requires the world to actualize them. God gives creatures permanence, while the creatures give God actuality and change. Here it is worthwhile to quote Whitehead at length:
The above is some of Whitehead's most evocative writing about God, and was powerful enough to inspire the movement known as process theology, a vibrant theological school of thought that continues to thrive today.
Religion
For Whitehead, the core of religion was individual. While he acknowledged that individuals cannot ever be fully separated from their society, he argued that life is an internal fact for its own sake before it is an external fact relating to others. His most famous remark on religion is that "religion is what the individual does with his own solitariness ... and if you are never solitary, you are never religious." Whitehead saw religion as a system of general truths that transformed a person's character. He took special care to note that while religion is often a good influence, it is not necessarily good – an idea which he called a "dangerous delusion" (e.g., a religion might encourage the violent extermination of a rival religion's adherents).
However, while Whitehead saw religion as beginning in solitariness, he also saw religion as necessarily expanding beyond the individual. In keeping with his process metaphysics in which relations are primary, he wrote that religion necessitates the realization of "the value of the objective world which is a community derivative from the interrelations of its component individuals." In other words, the universe is a community which makes itself whole through the relatedness of each individual entity to all the others; meaning and value do not exist for the individual alone, but only in the context of the universal community. Whitehead writes further that each entity "can find no such value till it has merged its individual claim with that of the objective universe. Religion is world loyalty. The spirit at once surrenders itself to this universal claim and appropriates it for itself." In this way, the individual and universal/social aspects of religion are mutually dependent.
A connection between the works of William DeWitt Hyde and Whitehead further elucidates this necessary duality of social and individual roles in religious experience.
Whitehead also described religion more technically as "an ultimate craving to infuse into the insistent particularity of emotion that non-temporal generality which primarily belongs to conceptual thought alone." In other words, religion takes deeply felt emotions and contextualizes them within a system of general truths about the world, helping people to identify their wider meaning and significance. For Whitehead, religion served as a kind of bridge between philosophy and the emotions and purposes of a particular society. It is the task of religion to make philosophy applicable to the everyday lives of ordinary people.
Influence
Isabelle Stengers wrote that "Whiteheadians are recruited among both philosophers and theologians, and the palette has been enriched by practitioners from the most diverse horizons, from ecology to feminism, practices that unite political struggle and spirituality with the sciences of education." In recent decades, attention to Whitehead's work has become more widespread, with interest extending to intellectuals in Europe and China, and coming from such diverse fields as ecology, physics, biology, education, economics, and psychology. One of the first theologians to attempt to interact with Whitehead's thought was the future Archbishop of Canterbury, William Temple. In Temple's Gifford Lectures of 1932–1934 (subsequently published as "Nature, Man and God"), Whitehead is one of a number of philosophers of the emergent evolution approach with which Temple interacts. However, it was not until the 1970s and 1980s that Whitehead's thought drew much attention outside of a small group of philosophers and theologians, primarily Americans, and even today he is not considered especially influential outside of relatively specialized circles.
Early followers of Whitehead were found primarily at the University of Chicago Divinity School, where Henry Nelson Wieman initiated an interest in Whitehead's work that would last for about thirty years. Professors such as Wieman, Charles Hartshorne, Bernard Loomer, Bernard Meland, and Daniel Day Williams made Whitehead's philosophy arguably the most important intellectual thread running through the divinity school. They taught generations of Whitehead scholars, the most notable of whom is John B. Cobb.
Although interest in Whitehead has since faded at Chicago's divinity school, Cobb effectively grabbed the torch and planted it firmly in Claremont, California, where he began teaching at Claremont School of Theology in 1958 and founded the Center for Process Studies with David Ray Griffin in 1973. Largely due to Cobb's influence, today Claremont remains strongly identified with Whitehead's process thought.
But while Claremont remains the most concentrated hub of Whiteheadian activity, the place where Whitehead's thought currently seems to be growing the most quickly is in China. In order to address the challenges of modernization and industrialization, China has begun to blend traditions of Taoism, Buddhism, and Confucianism with Whitehead's "constructive post-modern" philosophy in order to create an "ecological civilization". To date, the Chinese government has encouraged the building of twenty-three university-based centres for the study of Whitehead's philosophy, and books by process philosophers John Cobb and David Ray Griffin are becoming required reading for Chinese graduate students. Cobb has attributed China's interest in process philosophy partly to Whitehead's stress on the mutual interdependence of humanity and nature, as well as his emphasis on an educational system that includes the teaching of values rather than simply bare facts.
Overall, however, Whitehead's influence is very difficult to characterize. In English-speaking countries, his primary works are little-studied outside of Claremont and a select number of liberal graduate-level theology and philosophy programs. Outside of these circles, his influence is relatively small and diffuse and has tended to come chiefly through the work of his students and admirers rather than Whitehead himself. For instance, Whitehead was a teacher and long-time friend and collaborator of Bertrand Russell, and he also taught and supervised the dissertation of Willard Van Orman Quine, both of whom are important figures in analytic philosophy – the dominant strain of philosophy in English-speaking countries in the 20th century. Whitehead has also had high-profile admirers in the continental tradition, such as French post-structuralist philosopher Gilles Deleuze, who once dryly remarked of Whitehead that "he stands provisionally as the last great Anglo-American philosopher before Wittgenstein's disciples spread their misty confusion, sufficiency, and terror." French sociologist and anthropologist Bruno Latour even went so far as to call Whitehead "the greatest philosopher of the 20th century."
Deleuze's and Latour's opinions, however, are minority ones, as Whitehead has not been recognized as particularly influential within the most dominant philosophical schools. It is impossible to say exactly why Whitehead's influence has not been more widespread, but it may be partly due to his metaphysical ideas seeming somewhat counterintuitive (such as his assertion that matter is an abstraction), or his inclusion of theistic elements in his philosophy, or the perception of metaphysics itself as passé, or simply the sheer difficulty and density of his prose.
Process philosophy and theology
Historically, Whitehead's work has been most influential in the field of American progressive theology. The most important early proponent of Whitehead's thought in a theological context was Charles Hartshorne, who spent a semester at Harvard as Whitehead's teaching assistant in 1925, and is widely credited with developing Whitehead's process philosophy into a full-blown process theology. Other notable process theologians include John B. Cobb, David Ray Griffin, Marjorie Hewitt Suchocki, C. Robert Mesle, Roland Faber, and Catherine Keller.
Process theology typically stresses God's relational nature. Rather than seeing God as impassive or emotionless, process theologians view God as "the fellow sufferer who understands," and as the being who is supremely affected by temporal events. Hartshorne points out that people would not praise a human ruler who was unaffected by either the joys or sorrows of his followers – so why would this be a praiseworthy quality in God? Instead, as the being who is most affected by the world, God is the being who can most appropriately respond to the world. However, process theology has been formulated in a wide variety of ways. C. Robert Mesle, for instance, advocates a "process naturalism" – i.e., a process theology without God.
In fact, process theology is difficult to define because process theologians are so diverse and transdisciplinary in their views and interests. John B. Cobb is a process theologian who has also written books on biology and economics. Roland Faber and Catherine Keller integrate Whitehead with poststructuralist, postcolonialist, and feminist theory. Charles Birch was both a theologian and a geneticist. Franklin I. Gamwell writes on theology and political theory. In Syntheism – Creating God in The Internet Age, futurologists Alexander Bard and Jan Söderqvist repeatedly credit Whitehead for the process theology they see rising out of the participatory culture expected to dominate the digital era.
Process philosophy is even more difficult to pin down than process theology. In practice, the two fields cannot be neatly separated. The 32-volume State University of New York series in constructive postmodern thought edited by process philosopher and theologian David Ray Griffin displays the range of areas in which different process philosophers work, including physics, ecology, medicine, public policy, nonviolence, politics, and psychology.
One philosophical school which has historically had a close relationship with process philosophy is American pragmatism. Whitehead himself thought highly of William James and John Dewey, and acknowledged his indebtedness to them in the preface to Process and Reality. Charles Hartshorne (along with Paul Weiss) edited the collected papers of Charles Sanders Peirce, one of the founders of pragmatism. Noted neopragmatist Richard Rorty was in turn a student of Hartshorne.
Science
Scientists of the early 20th century for whom Whitehead's work has been influential include physical chemist Ilya Prigogine, biologist Conrad Hal Waddington, and geneticists Charles Birch and Sewall Wright.
Henry Murray dedicated his "Explorations in Personality" to Whitehead, a contemporary at Harvard.
In physics, Whitehead's theory of gravitation articulated a view that might be regarded as dual to Albert Einstein's general relativity, and it has been severely criticized. Yutaka Tanaka suggested that the gravitational constant disagrees with experimental findings, and proposed that Einstein's work does not actually refute Whitehead's formulation. Whitehead's view has since been rendered obsolete by the discovery of gravitational waves, phenomena observed locally that largely violate the kind of local flatness of space that Whitehead assumes. Consequently, Whitehead's cosmology must be regarded as a local approximation, and his assumption of a uniform spatio-temporal geometry, Minkowskian in particular, as an often-locally-adequate approximation. An exact replacement of Whitehead's cosmology would need to admit a Riemannian geometry. Also, although Whitehead himself gave only secondary consideration to quantum theory, his metaphysics of processes has proved attractive to some physicists in that field. Henry Stapp and David Bohm are among those whose work has been influenced by Whitehead.
In the 21st century, Whiteheadian thought is still a stimulating influence: Timothy E. Eastman and Hank Keeton's Physics and Whitehead (2004) and Michael Epperson's Quantum Mechanics and the Philosophy of Alfred North Whitehead (2004) and Foundations of Relational Realism: A Topological Approach to Quantum Mechanics and the Philosophy of Nature (2013), aim to offer Whiteheadian approaches to physics. Brian G. Henning, Adam Scarfe, and Dorion Sagan's Beyond Mechanism (2013) and Rupert Sheldrake's Science Set Free (2012) are examples of Whiteheadian approaches to biology.
Ecology, economy, and sustainability
One of the most promising applications of Whitehead's thought in recent years has been in the area of ecological civilization, sustainability, and environmental ethics.
"Because Whitehead's holistic metaphysics of value lends itself so readily to an ecological point of view, many see his work as a promising alternative to the traditional mechanistic worldview, providing a detailed metaphysical picture of a world constituted by a web of interdependent relations."
This work has been pioneered by John B. Cobb, whose book Is It Too Late? A Theology of Ecology (1971) was the first single-authored book on environmental ethics. Cobb also co-authored a book with leading ecological economist and steady-state theorist Herman Daly entitled For the Common Good: Redirecting the Economy toward Community, the Environment, and a Sustainable Future (1989), which applied Whitehead's thought to economics, and received the Grawemeyer Award for Ideas Improving World Order. Cobb followed this with a second book, Sustaining the Common Good: A Christian Perspective on the Global Economy (1994), which aimed to challenge "economists' zealous faith in the great god of growth."
Education
Whitehead is widely known for his influence in education theory. His philosophy inspired the formation of the Association for Process Philosophy of Education (APPE), which published eleven volumes of a journal titled Process Papers on process philosophy and education from 1996 to 2008. Whitehead's theories on education also led to the formation of new modes of learning and new models of teaching.
One such model is the ANISA model developed by Daniel C. Jordan, which sought to address a lack of understanding of the nature of people in current education systems. As Jordan and Raymond P. Shepard put it: "Because it has not defined the nature of man, education is in the untenable position of having to devote its energies to the development of curricula without any coherent ideas about the nature of the creature for whom they are intended."
Another model is the FEELS model developed by Xie Bangxiu and deployed successfully in China. "FEELS" stands for five things in curriculum and education: Flexible-goals, Engaged-learner, Embodied-knowledge, Learning-through-interactions, and Supportive-teacher. It is used for understanding and evaluating educational curriculum under the assumption that the purpose of education is to "help a person become whole." This work is in part the product of cooperation between Chinese government organizations and the Institute for the Postmodern Development of China.
Whitehead's philosophy of education has also found institutional support in Canada, where the University of Saskatchewan created a Process Philosophy Research Unit and sponsored several conferences on process philosophy and education. Howard Woodhouse at the University of Saskatchewan remains a strong proponent of Whiteheadian education.
Three recent books which further develop Whitehead's philosophy of education include: Modes of Learning: Whitehead's Metaphysics and the Stages of Education (2012) by George Allan; The Adventure of Education: Process Philosophers on Learning, Teaching, and Research (2009) by Adam Scarfe; and "Educating for an Ecological Civilization: Interdisciplinary, Experiential, and Relational Learning" (2017) edited by Marcus Ford and Stephen Rowe. "Beyond the Modern University: Toward a Constructive Postmodern University," (2002) is another text that explores the importance of Whitehead's metaphysics for thinking about higher education.
Business administration
Whitehead has had some influence on the philosophy of business administration and organizational theory. This has led in part to a focus on identifying and investigating the effect of temporal events (as opposed to static things) within organizations through an "organization studies" discourse that accommodates a variety of 'weak' and 'strong' process perspectives from a number of philosophers. One of the leading figures having an explicitly Whiteheadian and panexperientialist stance towards management is Mark Dibben, who works in what he calls "applied process thought" to articulate a philosophy of management and business administration as part of a wider examination of the social sciences through the lens of process metaphysics. For Dibben, this allows "a comprehensive exploration of life as perpetually active experiencing, as opposed to occasional – and thoroughly passive – happening." Dibben has published two books on applied process thought, Applied Process Thought I: Initial Explorations in Theory and Research (2008), and Applied Process Thought II: Following a Trail Ablaze (2009), as well as other papers in this vein in the fields of philosophy of management and business ethics.
Margaret Stout and Carrie M. Staton have also written recently on the mutual influence of Whitehead and Mary Parker Follett, a pioneer in the fields of organizational theory and organizational behaviour. Stout and Staton see both Whitehead and Follett as sharing an ontology that "understands becoming as a relational process; difference as being related, yet unique; and the purpose of becoming as harmonizing difference." This connection is further analyzed by Stout and Jeannine M. Love in Integrative Process: Follettian Thinking from Ontology to Administration.
Political views
Whitehead's political views sometimes appear to be libertarian without the label. He wrote:
On the other hand, many Whitehead scholars read his work as providing a philosophical foundation for the social liberalism of the New Liberal movement that was prominent throughout Whitehead's adult life. Morris wrote that "... there is good reason for claiming that Whitehead shared the social and political ideals of the new liberals." However, Whitehead's comment addresses means and methods, not "ideals" or pretexts or excuses.
Primary works
Books written by Whitehead, listed by date of publication.
A Treatise on Universal Algebra with Applications. Cambridge: Cambridge University Press, 1898. Available online via Internet Archive
The Axioms of Descriptive Geometry. Cambridge: Cambridge University Press, 1907. Available online at http://quod.lib.umich.edu/u/umhistmath/ABN2643.0001.001.
with Bertrand Russell. Principia Mathematica, Volume I. Cambridge: Cambridge University Press, 1910. Available online at http://www.hti.umich.edu/cgi/b/bib/bibperm?q1=AAT3201.0001.001. Vol. 1 to *56 is available as a CUP paperback.
An Introduction to Mathematics. Cambridge: Cambridge University Press, 1911. Available online at http://quod.lib.umich.edu/u/umhistmath/AAW5995.0001.001. Vol. 56 of the Great Books of the Western World series.
with Bertrand Russell. Principia Mathematica, Volume II. Cambridge: Cambridge University Press, 1912. Available online at http://www.hti.umich.edu/cgi/b/bib/bibperm?q1=AAT3201.0002.001.
with Bertrand Russell. Principia Mathematica, Volume III. Cambridge: Cambridge University Press, 1913. Available online at http://www.hti.umich.edu/cgi/b/bib/bibperm?q1=AAT3201.0003.001.
The Organization of Thought Educational and Scientific. London: Williams & Norgate, 1917. Available online at https://archive.org/details/organisationofth00whit.
An Enquiry Concerning the Principles of Natural Knowledge. Cambridge: Cambridge University Press, 1919. Available online at https://archive.org/details/enquiryconcernpr00whitrich.
The Concept of Nature. Cambridge: Cambridge University Press, 1920. Based on the November 1919 Tarner Lectures delivered at Trinity College. Available online at https://archive.org/details/cu31924012068593.
The Principle of Relativity with Applications to Physical Science. Cambridge: Cambridge University Press, 1922.
Science and the Modern World. New York: Macmillan Company, 1925. Vol. 55 of the Great Books of the Western World series.
Religion in the Making. New York: Macmillan Company, 1926. Based on the 1926 Lowell Lectures.
Symbolism, Its Meaning and Effect. New York: Macmillan Co., 1927. Based on the 1927 Barbour-Page Lectures delivered at the University of Virginia.
Process and Reality: An Essay in Cosmology. New York: Macmillan Company, 1929. Based on the 1927–28 Gifford Lectures delivered at the University of Edinburgh. The 1978 Free Press "corrected edition" edited by David Ray Griffin and Donald W. Sherburne corrects many errors in both the British and American editions and also provides a comprehensive index.
The Aims of Education and Other Essays. New York: Macmillan Company, 1929.
The Function of Reason. Princeton: Princeton University Press, 1929. Based on the March 1929 Louis Clark Vanuxem Foundation Lectures delivered at Princeton University.
Adventures of Ideas. New York: Macmillan Company, 1933. Also published by Cambridge: Cambridge University Press, 1933.
Nature and Life. Chicago: University of Chicago Press, 1934.
Modes of Thought. New York: MacMillan Company, 1938.
"Mathematics and the Good." In The Philosophy of Alfred North Whitehead, edited by Paul Arthur Schilpp, 666–681. Evanston and Chicago: Northwestern University Press, 1941.
"Immortality." In The Philosophy of Alfred North Whitehead, edited by Paul Arthur Schilpp, 682–700. Evanston and Chicago: Northwestern University Press, 1941.
Essays in Science and Philosophy. London: Philosophical Library, 1947.
with Allison Heartz Johnson, ed. The Wit and Wisdom of Whitehead. Boston: Beacon Press, 1948.
In addition, the Whitehead Research Project of the Center for Process Studies is currently working on a critical edition of Whitehead's writings, which is set to include notes taken by Whitehead's students during his Harvard classes, correspondence, and corrected editions of his books.
Paul A. Bogaard and Jason Bell, eds. The Harvard Lectures of Alfred North Whitehead, 1924–1925: Philosophical Presuppositions of Science. Cambridge: Cambridge University Press, 2017.
See also
Great refusal
Pancreativism
Relationalism
Speculative realism
A.N. Whitehead at Sherborne School
References
Further reading
For the most comprehensive list of resources related to Whitehead, see the thematic bibliography of the Center for Process Studies.
Casati, Roberto, and Achille C. Varzi. Parts and Places: The Structures of Spatial Representation. Cambridge, Massachusetts: The MIT Press, 1999.
Ford, Lewis. Emergence of Whitehead's Metaphysics, 1925–1929. Albany: State University of New York Press, 1985.
Hartshorne, Charles. Whitehead's Philosophy: Selected Essays, 1935–1970. Lincoln and London: University of Nebraska Press, 1972.
Henning, Brian G. The Ethics of Creativity: Beauty, Morality, and Nature in a Processive Cosmos. Pittsburgh: University of Pittsburgh Press, 2005.
Holtz, Harald and Ernest Wolf-Gazo, eds. Whitehead und der Prozeßbegriff / Whitehead and The Idea of Process. Proceedings of the First International Whitehead-Symposion. Verlag Karl Alber, Freiburg i. B. / München, 1984.
Jones, Judith A. Intensity: An Essay in Whiteheadian Ontology. Nashville: Vanderbilt University Press, 1998.
Kraus, Elizabeth M. The Metaphysics of Experience. New York: Fordham University Press, 1979.
Malik, Charles H. The Systems of Whitehead's Metaphysics. Zouq Mosbeh, Lebanon: Notre Dame Louaize, 2016. 436 pp.
McDaniel, Jay. What is Process Thought?: Seven Answers to Seven Questions. Claremont: P&F Press, 2008.
McHenry, Leemon. The Event Universe: The Revisionary Metaphysics of Alfred North Whitehead. Edinburgh: Edinburgh University Press, 2015.
Nobo, Jorge L. Whitehead's Metaphysics of Extension and Solidarity. Albany: State University of New York Press, 1986.
Price, Lucien. Dialogues of Alfred North Whitehead. New York: Mentor Books, 1956.
Quine, Willard Van Orman. "Whitehead and the rise of modern logic." In The Philosophy of Alfred North Whitehead, edited by Paul Arthur Schilpp, 125–163. Evanston and Chicago: Northwestern University Press, 1941.
Rapp, Friedrich and Reiner Wiehl, eds. Whiteheads Metaphysik der Kreativität. Internationales Whitehead-Symposium Bad Homburg 1983. Verlag Karl Alber, Freiburg i. B. / München, 1986.
Rescher, Nicholas. Process Metaphysics. Albany: State University of New York Press, 1995.
Rescher, Nicholas. Process Philosophy: A Survey of Basic Issues. Pittsburgh: University of Pittsburgh Press, 2001.
Roelker, Nancy Lyman. An Application Of Whitehead's Concepts Of Conformity and Novelty to the Philosophy of History. Unpublished dissertation, 1940, Harvard University. Held in John Hay Library's Special Collections at Brown University.
Schilpp, Paul Arthur, ed. The Philosophy of Alfred North Whitehead. Evanston and Chicago: Northwestern University Press, 1941. Part of the Library of Living Philosophers series.
Siebers, Johan. The Method of Speculative Philosophy: An Essay on the Foundations of Whitehead's Metaphysics. Kassel: Kassel University Press GmbH, 2002.
Smith, Olav Bryant. Myths of the Self: Narrative Identity and Postmodern Metaphysics. Lanham: Lexington Books, 2004. It contains a section called Alfred North Whitehead: Toward a More Fundamental Ontology which is an overview of Whitehead's metaphysics.
Weber, Michel. Whitehead's Pancreativism – The Basics. Frankfurt: Ontos Verlag, 2006.
Weber, Michel. Whitehead's Pancreativism – Jamesian Applications, Frankfurt / Paris: Ontos Verlag, 2011.
Weber, Michel and Will Desmond (eds.). Handbook of Whiteheadian Process Thought, Frankfurt / Lancaster: Ontos Verlag, 2008.
Alan Van Wyk and Michel Weber (eds.). Creativity and Its Discontents. The Response to Whitehead's Process and Reality, Frankfurt / Lancaster: Ontos Verlag, 2009.
Will, Clifford. Theory and Experiment in Gravitational Physics. Cambridge: Cambridge University Press, 1993.
External links
The Philosophy of Organism in Philosophy Now magazine. An accessible summary of Alfred North Whitehead's philosophy.
Center for Process Studies in Claremont, California. A faculty research center of Claremont School of Theology, in association with Claremont Graduate University. The Center organizes conferences and events and publishes materials pertaining to Whitehead and process thought. It also maintains extensive Whitehead-related bibliographies.
Summary of Whitehead's Philosophy A Brief Introduction to Whitehead's Metaphysics
Society for the Study of Process Philosophies, a scholarly society that holds periodic meetings in conjunction with each of the divisional meetings of the American Philosophical Association, as well as at the annual meeting of the Society for the Advancement of American Philosophy.
"Alfred North Whitehead" in the MacTutor History of Mathematics archive, by John J. O'Connor and Edmund F. Robertson.
"Alfred North Whitehead: New World Philosopher" at the Harvard Square Library.
Jesus, Jazz, and Buddhism: Process Thinking for a More Hospitable World
"What is Process Thought?" an introductory video series to process thought by Jay McDaniel.
Centre de philosophie pratique « Chromatiques whiteheadiennes »
"Whitehead's Principle of Relativity" by John Lighton Synge on arXiv.org
Whitehead at Monoskop.org, with extensive bibliography.
1861 births
1947 deaths
19th-century American mathematicians
19th-century American non-fiction writers
19th-century American philosophers
19th-century American theologians
19th-century English mathematicians
19th-century English philosophers
19th-century English theologians
19th-century English writers
19th-century English essayists
19th-century mystics
20th-century American mathematicians
20th-century American philosophers
20th-century American theologians
20th-century American writers
20th-century English mathematicians
20th-century English philosophers
20th-century English theologians
20th-century American essayists
20th-century English non-fiction writers
20th-century mystics
Academics of Imperial College London
Academics of University College London
Alumni of Trinity College, Cambridge
American logicians
American male essayists
American male non-fiction writers
American theologians
American philosophers of technology
Analytic philosophers
Aristotelian philosophers
Cambridge University Moral Sciences Club
Consciousness researchers and theorists
English essayists
English logicians
English male non-fiction writers
English theologians
Environmental philosophers
Environmental writers
British epistemologists
Fellows of the British Academy
Fellows of the Royal Society
Former atheists and agnostics
Harvard University Department of Philosophy faculty
Mathematical logicians
Mathematics popularizers
Metaphilosophers
British metaphysicians
Metaphysics writers
Ontologists
People educated at Sherborne School
People from Ramsgate
Philosophers from Massachusetts
Philosophers of economics
British philosophers of education
British philosophers of language
British philosophers of logic
Philosophers of mathematics
British philosophers of mind
Philosophers of psychology
British philosophers of religion
British philosophers of science
Philosophical theists
British philosophy academics
Philosophers of time
Philosophy writers
Presidents of the Aristotelian Society
Writers about religion and science
British relativity theorists
20th-century American male writers | Alfred North Whitehead | [
"Mathematics"
] | 13,478 | [] |
43,405 | https://en.wikipedia.org/wiki/Actuary | An actuary is a professional with advanced mathematical skills who deals with the measurement and management of risk and uncertainty. These risks can affect both sides of the balance sheet and require asset management, liability management, and valuation skills. Actuaries provide assessments of financial security systems, with a focus on their complexity, their mathematics, and their mechanisms. The name of the corresponding academic discipline is actuarial science.
While the concept of insurance dates to antiquity, the concepts needed to scientifically measure and mitigate risks have their origins in the 17th century studies of probability and annuities. Actuaries of the 21st century require analytical skills, business knowledge, and an understanding of human behavior and information systems to design programs that manage risk, by determining whether the cost of implementing a strategy proposed to mitigate a potential risk would exceed the expected cost of the risk itself should it materialize. The steps needed to become an actuary, including education and licensing, are specific to a given country, with various additional requirements applied by regional administrative units; however, almost all processes impart universal principles of risk assessment, statistical analysis, and risk mitigation, involving rigorously structured training and examination schedules, taking many years to complete.
The profession has consistently been ranked as one of the most desirable. In various studies in the United States, being an actuary was ranked first or second multiple times since 2010, and in the top 20 for most of the past decade.
Responsibilities
Actuaries use skills primarily in mathematics, particularly calculus-based probability and mathematical statistics, but also economics, computer science, finance, and business. For this reason, actuaries are essential to the insurance and reinsurance industries, either as staff employees or as consultants; to other businesses, including sponsors of pension plans; and to government agencies such as the Government Actuary's Department in the United Kingdom or the Social Security Administration in the United States of America. Actuaries assemble and analyze data to estimate the probability and likely cost of the occurrence of an event such as death, sickness, injury, disability, or loss of property. Actuaries also address financial questions, including those involving the level of pension contributions required to produce a certain retirement income and the way in which a company should invest resources to maximize its return on investments in light of potential risk. Using their broad knowledge, actuaries help design and price insurance policies, pension plans, and other financial strategies in a manner that will help ensure that the plans are maintained on a sound financial basis.
Disciplines
Most traditional actuarial disciplines fall into two main categories: life and non-life.
Life actuaries, which includes health and pension actuaries, primarily deal with mortality risk, morbidity risk, and investment risk. Products prominent in their work include life insurance, annuities, pensions, short and long term disability insurance, health insurance, health savings accounts, and long-term care insurance. In addition to these risks, social insurance programs are influenced by public opinion, politics, budget constraints, changing demographics, and other factors such as medical technology, inflation, and cost of living considerations.
Non-life actuaries, also known as "property and casualty" (mainly US) or "general insurance" (mainly UK) actuaries, deal with both physical and legal risks that affect people or their property. Products prominent in their work include auto insurance, homeowners insurance, commercial property insurance, workers' compensation, malpractice insurance, product liability insurance, marine insurance, terrorism insurance, and other types of liability insurance.
Actuaries are also called upon for their expertise in enterprise risk management. This can involve dynamic financial analysis, stress testing, the formulation of corporate risk policy, and the setting up and running of corporate risk departments. Actuaries are also involved in other areas in the economic and financial field, such as analyzing securities offerings or market research.
Traditional employment
On both the life and casualty sides, the classical function of actuaries is to calculate premiums and reserves for insurance policies covering various risks. On the casualty side, this analysis often involves quantifying the probability of a loss event, called the frequency, and the size of that loss event, called the severity. The amount of time that occurs before the loss event is important, as the insurer will not have to pay anything until after the event has occurred. On the life side, the analysis often involves quantifying how much a potential sum of money or a financial liability will be worth at different points in the future. Since neither of these kinds of analysis are purely deterministic processes, stochastic models are often used to determine frequency and severity distributions and the parameters of these distributions. Forecasting interest yields and currency movements also plays a role in determining future costs, especially on the life side.
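In the collective risk model that underlies much of this analysis, the aggregate loss is written as the random sum S = X_1 + X_2 + ... + X_N, where N is the claim frequency and the X_i are the claim severities. The following textbook identities, shown here as a general illustration rather than a description of any particular insurer's method, hold when the X_i are independent and identically distributed and independent of N:

\mathrm{E}[S] = \mathrm{E}[N]\,\mathrm{E}[X], \qquad \operatorname{Var}(S) = \mathrm{E}[N]\operatorname{Var}(X) + \operatorname{Var}(N)\,\mathrm{E}[X]^2

The first factorization is why frequency and severity can be modeled separately and then combined, and the variance formula shows how uncertainty in both the number and the size of claims contributes to the uncertainty of the total.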
Actuaries do not always attempt to predict aggregate future events. Often, their work may relate to determining the cost of financial liabilities that have already occurred, called retrospective reinsurance, or the development or re-pricing of new products.
Actuaries also design and maintain products and systems. They are involved in financial reporting of companies' assets and liabilities. They must communicate complex concepts to clients who may not share their language or depth of knowledge. Actuaries work under a code of ethics that covers their communications and work products.
Non-traditional employment
As an outgrowth of their more traditional roles, actuaries also work in the fields of risk management and enterprise risk management for both financial and non-financial corporations. Actuaries in traditional roles study and use the tools and data previously in the domain of finance. The Basel II accord for financial institutions (2004), and its analogue, the Solvency II accord for insurance companies (in force since 2016), require institutions to account for operational risk separately from, and in addition to, credit, reserve, asset, and insolvency risk. Actuarial skills are well suited to this environment because of their training in analyzing various forms of risk, and judging the potential for upside gain, as well as downside loss associated with these forms of risk.
Actuaries are also involved in investment advice and asset management, and can be general business managers and chief financial officers. They analyze business prospects with their financial skills in valuing or discounting risky future cash flows, and apply their pricing expertise from insurance to other lines of business. For example, insurance securitization requires both actuarial and finance skills. Actuaries also act as expert witnesses by applying their analysis in court trials to estimate the economic value of losses such as lost profits or lost wages.
History
Need for insurance
The basic requirements of communal interests gave rise to risk sharing since the dawn of civilization. For example, people who lived their entire lives in a camp had the risk of fire, which would leave their band or family without shelter. After barter came into existence, more complex risks emerged and new forms of risk manifested. Merchants embarking on trade journeys bore the risk of losing goods entrusted to them, their own possessions, or even their lives. Intermediaries developed to warehouse and trade goods, which exposed them to financial risk. The primary providers in extended families or households ran the risk of premature death, disability or infirmity, which could leave their dependents to starve. Credit procurement was difficult if the creditor worried about repayment in the event of the borrower's death or infirmity. Alternatively, people sometimes lived too long from a financial perspective, exhausting their savings, if any, or becoming a burden on others in the extended family or society.
Early attempts
In the ancient world there was not always room for the sick, suffering, disabled, aged, or the poor—these were often not part of the cultural consciousness of societies. Early methods of protection, aside from the normal support of the extended family, involved charity; religious organizations or neighbors would collect for the destitute and needy. By the middle of the 3rd century, charitable operations in Rome supported 1,500 suffering people. Charitable protection remains an active form of support in the modern era, but receiving charity is uncertain and often accompanied by social stigma.
Elementary mutual aid agreements and pensions did arise in antiquity. Early in the Roman empire, associations were formed to meet the expenses of burial, cremation, and monuments—precursors to burial insurance and friendly societies. A small sum was paid into a communal fund on a weekly basis, and upon the death of a member, the fund would cover the expenses of rites and burial. These societies sometimes sold shares in the building of columbāria, or burial vaults, owned by the fund. Other early examples of mutual surety and assurance pacts can be traced back to various forms of fellowship within the Saxon clans of England and their Germanic forebears, and to Celtic society.
Non-life insurance started as a hedge against loss of cargo during sea travel. Anecdotal reports of such guarantees occur in the writings of Demosthenes, who lived in the 4th century BCE. The earliest records of an official non-life insurance policy come from Sicily, where there is record of a 14th-century contract to insure a shipment of wheat. In 1350, Lenardo Cattaneo assumed "all risks from act of God, or of man, and from perils of the sea" that may occur to a shipment of wheat from Sicily to Tunis up to a maximum of 300 florins. For this he was paid a premium of 18%.
Development of theory
During the 17th century, a more scientific basis for risk management was being developed. In 1662, a London draper named John Graunt showed that there were predictable patterns of longevity and death in a defined group, or cohort, of people, despite the uncertainty about the future longevity or mortality of any one individual. This study became the basis for the original life table. Combining this idea with that of compound interest and annuity valuation, it became possible to set up an insurance scheme to provide life insurance or pensions for a group of people, and to calculate with some degree of accuracy each member's necessary contributions to a common fund, assuming a fixed rate of interest. The first person to correctly calculate these values was Edmond Halley. In his work, Halley demonstrated a method of using his life table to calculate the premium someone of a given age should pay to purchase a life-annuity.
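In modern actuarial notation, offered here only as a hedged restatement of the method Halley demonstrated rather than his own notation, the net single premium for a whole-life annuity of 1 per year payable to a person now aged x is

a_x = \sum_{t=1}^{\omega - x} v^t \, \frac{l_{x+t}}{l_x}, \qquad v = \frac{1}{1+i}

where l_x is the number of survivors at age x in the life table, i is the assumed fixed rate of interest, and \omega is the limiting age of the table. Each term discounts a payment of 1 due at time t both for interest and for the probability of living to receive it, which is exactly the combination of the life table, compound interest, and annuity valuation described above.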
Early actuaries
James Dodson's pioneering work on the level premium system led to the formation of the Society for Equitable Assurances on Lives and Survivorship (now commonly known as Equitable Life) in London in 1762. This was the first life insurance company to use premium rates that were calculated scientifically for long-term life policies, using Dodson's work. After Dodson's death in 1757, Edward Rowe Mores took over the leadership of the group that eventually became the Society for Equitable Assurances. It was he who specified that the chief official should be called an actuary. Previously, the use of the term had been restricted to an official who recorded the decisions, or acts, of ecclesiastical courts, in ancient times originally the secretary of the Roman senate, responsible for compiling the Acta Senatus. Other companies that did not originally use such mathematical and scientific methods most often failed or were forced to adopt the methods pioneered by Equitable.
Development of the modern profession
In the 18th and 19th centuries, computational complexity was limited to manual calculations. The calculations required to compute fair insurance premiums can be burdensome. The actuaries of that time developed methods to construct easily used tables, using arithmetical short-cuts called commutation functions, to facilitate timely, accurate, manual calculations of premiums. In the mid-19th century, professional bodies were founded to support and further both actuaries and actuarial science, and to protect the public interest by ensuring competency and ethical standards. Since calculations were cumbersome, actuarial shortcuts were commonplace.
Non-life actuaries followed in the footsteps of their life compatriots in the early 20th century. In the United States, the 1920 revision to workers' compensation rates took over two months of around-the-clock work by day and night teams of actuaries. In the 1930s and 1940s, rigorous mathematical foundations for stochastic processes were developed. Actuaries began to forecast losses using models of random events instead of deterministic methods. Computers further revolutionized the actuarial profession. From pencil-and-paper to punchcards to microcomputers, the modeling and forecasting ability of the actuary has grown vastly.
Another modern development is the convergence of modern finance theory with actuarial science. In the early 20th century, some economists and actuaries were developing techniques that can be found in modern financial theory, but for various historical reasons, these developments did not achieve much recognition. In the late 1980s and early 1990s, there was a distinct effort for actuaries to combine financial theory and stochastic methods into their established models. In the 21st century, the profession, both in practice and in the educational syllabi of many actuarial organizations, combines tables, loss models, stochastic methods, and financial theory, but is still not completely aligned with modern financial economics.
Remuneration and ranking
As there are relatively few actuaries in the world compared to other professions, actuaries are in high demand, and are highly paid for the services they render.
The actuarial profession has been consistently ranked for decades as one of the most desirable. Actuaries work comparatively reasonable hours, in comfortable conditions, without the need for physical exertion that may lead to injury, are well paid, and the profession consistently has a good hiring outlook. Not only has the overall profession ranked highly, but it also is considered one of the best professions for women, and one of the best recession-proof professions. In the United States, the profession was rated as the best profession by CareerCast, which uses five key criteria to rank jobs—environment, income, employment outlook, physical demands, and stress, in 2010, 2013, and 2015. In other years, it remained in the top 20.
Credentialing and exams
Becoming a fully credentialed actuary requires passing a rigorous series of professional examinations, usually taking several years. In some countries, such as Denmark, most study takes place in a university setting. In others, such as the US, most study takes place during employment through a series of examinations. In the UK, and countries based on its process, there is a hybrid university-exam structure.
Exam support
As these qualifying exams are extremely rigorous, support is usually available to people progressing through the exams. Often, employers provide paid on-the-job study time and paid attendance at seminars designed for the exams. Also, many companies that employ actuaries have automatic pay raises or promotions when exams are passed. As a result, actuarial students have strong incentives for devoting adequate study time during off-work hours. A common rule of thumb for exam students is that, for the Society of Actuaries examinations, roughly 400 hours of study time are necessary for each four-hour exam. Thus, thousands of hours of study time should be anticipated over several years, assuming no failures.
Pass marks and pass rates
Historically, the actuarial profession has been reluctant to specify the pass marks for its examinations. To address concerns that there are pre-existing pass/fail quotas, a former chairman of the Board of Examiners of the Institute and Faculty of Actuaries stated: "Although students find it hard to believe, the Board of Examiners does not have fail quotas to achieve. Accordingly, pass rates are free to vary (and do). They are determined by the quality of the candidates sitting the examination and in particular how well prepared they are. Fitness to pass is the criterion, not whether you can achieve a mark in the top 40% of candidates sitting." In 2000, the Casualty Actuarial Society (CAS) decided to start releasing pass marks for the exams it offers. The CAS's policy is also not to grade to specific pass ratios; the CAS board affirmed in 2001 that "the CAS shall use no predetermined pass ratio as a guideline for setting the pass mark for any examination. If the CAS determines that 70% of all candidates have demonstrated sufficient grasp of the syllabus material, then those 70% should pass. Similarly, if the CAS determines that only 30% of all candidates have demonstrated sufficient grasp of the syllabus material, then only those 30% should pass."
Notable actuaries
Nathaniel Bowditch (1773–1838)
Early American mathematician remembered for his work on ocean navigation. In 1804, Bowditch became what was probably the United States of America's second insurance actuary as president of the Essex Fire and Marine Insurance Company in Salem, Massachusetts
Harald Cramér (1893–1985)
Swedish actuary and probabilist notable for his contributions in mathematical statistics, such as the Cramér–Rao inequality. Cramér was an Honorary President of the Swedish Actuarial Society
James Dodson (c. 1705 – 1757)
Head of the Royal Mathematical School, and Stone's School, Dodson built on the statistical mortality tables developed by Edmund Halley in 1693
Edmond Halley (1656–1742)
While Halley actually predated much of what is now considered the start of the actuarial profession, he was the first to rigorously calculate premiums for a life insurance policy mathematically and statistically
James C. Hickman (1927–2006)
American actuarial educator, researcher, and author
Oswald Jacoby (1902–1984)
American actuary best known as a contract bridge player, he was the youngest person ever to pass four examinations of the Society of Actuaries
David X. Li
Canadian qualified actuary who in the first decade of the 21st century pioneered the use of Gaussian copula models for the pricing of collateralized debt obligations (CDOs)
Edward Rowe Mores (1731–1778)
First person to use the title 'actuary' with respect to a business position
William Morgan (1750–1833)
Morgan was the appointed Actuary of the Society for Equitable Assurances in 1775. He expanded on Mores's and Dodson's work, and may be considered the father of the actuarial profession in that his title became applied to the field as a whole.
Robert J. Myers (1912–2010)
American actuary who was instrumental in the creation of the U.S. Social Security program
Frank Redington (1906–1984)
British actuary who developed the Redington Immunization Theory.
Isaac M. Rubinow (1875–1936)
Founder and first president of the Casualty Actuarial Society.
Elizur Wright (1804–1885)
American actuary and abolitionist, professor of mathematics at Western Reserve College (Ohio). He campaigned for laws that required life insurance companies to hold sufficient reserves to guarantee that policies would be paid.
Fictional actuaries
Actuaries have appeared in works of fiction including literature, theater, television, and film. At times, they have been portrayed as "math-obsessed, socially disconnected individuals with shockingly bad comb-overs", which has resulted in a mixed response amongst actuaries themselves.
Citations
Works cited
External links
Be an Actuary: The SOA and CAS jointly sponsored web site
Actuarial science
Financial services occupations
Mathematical science occupations | Actuary | [
"Mathematics"
] | 3,944 | [
"Applied mathematics",
"Actuarial science"
] |
43,410 | https://en.wikipedia.org/wiki/VHDL | VHDL (VHSIC Hardware Description Language) is a hardware description language that can model the behavior and structure of digital systems at multiple levels of abstraction, ranging from the system level down to that of logic gates, for design entry, documentation, and verification purposes. The language was developed for the US military VHSIC program in the 1980s, and has been standardized by the Institute of Electrical and Electronics Engineers (IEEE) as IEEE Std 1076; the latest version of which is IEEE Std 1076-2019. To model analog and mixed-signal systems, an IEEE-standardized HDL based on VHDL called VHDL-AMS (officially IEEE 1076.1) has been developed.
History
In 1983, VHDL was originally developed at the behest of the U.S. Department of Defense in order to document the behavior of the ASICs that supplier companies were including in equipment. The standard MIL-STD-454N in Requirement 64 in section 4.5.1 "ASIC documentation in VHDL" explicitly requires documentation of "Microelectronic Devices" in VHDL.
The idea of being able to simulate the ASICs from the information in this documentation was so obviously attractive that logic simulators were developed that could read the VHDL files. The next step was the development of logic synthesis tools that read the VHDL and output a definition of the physical implementation of the circuit.
Due to the Department of Defense requiring as much of the syntax as possible to be based on Ada, in order to avoid re-inventing concepts that had already been thoroughly tested in the development of Ada, VHDL borrows heavily from the Ada programming language in both concept and syntax.
The initial version of VHDL, designed to IEEE standard IEEE 1076–1987, included a wide range of data types, including numerical (integer and real), logical (bit and Boolean), character and time, plus arrays of bit called bit_vector and of character called string.
A problem not solved by this edition, however, was "multi-valued logic", where a signal's drive strength (none, weak or strong) and unknown values are also considered. This required IEEE standard 1164, which defined the 9-value logic types: scalar std_logic and its vector version std_logic_vector. Being a resolved subtype of its std_ulogic parent type, std_logic-typed signals allow multiple driving for modeling bus structures, whereby the connected resolution function handles conflicting assignments adequately.
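A minimal sketch of this multiple driving, with entity and signal names chosen only for illustration: two concurrent assignments drive the same std_logic signal, and the IEEE 1164 resolution function decides the outcome. Here the released ('Z', high-impedance) driver yields to the active driver, as on a tri-state bus; two conflicting active drivers ('0' against '1') would instead resolve to 'X'.

library IEEE;
use IEEE.std_logic_1164.all;

entity BUS_DEMO is
end entity BUS_DEMO;

architecture SIM of BUS_DEMO is
  -- std_logic is resolved, so multiple drivers on one signal are legal
  signal data_bus : std_logic;
begin
  driver_a : data_bus <= '1'; -- active driver
  driver_b : data_bus <= 'Z'; -- released (high-impedance) driver
  -- the resolution function computes resolved('1', 'Z') = '1'
end architecture SIM;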
The updated IEEE 1076, in 1993, made the syntax more consistent, allowed more flexibility in naming, extended the character type to allow ISO-8859-1 printable characters, added the xnor operator, etc.
Minor changes in the standard (2000 and 2002) added the idea of protected types (similar to the concept of class in C++) and removed some restrictions from port mapping rules.
In addition to IEEE standard 1164, several child standards were introduced to extend functionality of the language. IEEE standard 1076.2 added better handling of real and complex data types. IEEE standard 1076.3 introduced signed and unsigned types to facilitate arithmetical operations on vectors. IEEE standard 1076.1 (known as VHDL-AMS) provided analog and mixed-signal circuit design extensions.
Some other standards support wider use of VHDL, notably VITAL (VHDL Initiative Towards ASIC Libraries) and microwave circuit design extensions.
In June 2006, the VHDL Technical Committee of Accellera (delegated by IEEE to work on the next update of the standard) approved so-called Draft 3.0 of VHDL-2006. While maintaining full compatibility with older versions, this proposed standard provides numerous extensions that make writing and managing VHDL code easier. Key changes include incorporation of child standards (1164, 1076.2, 1076.3) into the main 1076 standard, an extended set of operators, more flexible syntax of case and generate statements, incorporation of VHPI (VHDL Procedural Interface) (interface to C/C++ languages) and a subset of PSL (Property Specification Language). These changes should improve quality of synthesizable VHDL code, make testbenches more flexible, and allow wider use of VHDL for system-level descriptions.
In February 2008, Accellera approved VHDL 4.0, also informally known as VHDL 2008, which addressed more than 90 issues discovered during the trial period for version 3.0 and includes enhanced generic types. In 2008, Accellera released VHDL 4.0 to the IEEE for balloting for inclusion in IEEE 1076–2008. The VHDL standard IEEE 1076-2008 was published in January 2009.
Standardization
The IEEE Standard 1076 defines the VHSIC Hardware Description Language, or VHDL. It was originally developed under contract F33615-83-C-1003 from the United States Air Force awarded in 1983 to a team of Intermetrics, Inc. as language experts and prime contractor, Texas Instruments as chip design experts and IBM as computer-system design experts. The language has undergone numerous revisions and has a variety of sub-standards associated with it that augment or extend it in important ways.
1076 was and continues to be a milestone in the design of electronic systems.
Revisions
IEEE 1076-1987 First standardized revision of ver 7.2 of the language from the United States Air Force.
IEEE 1076-1993 (also published with ). Significant improvements resulting from several years of feedback. Probably the most widely used version with the greatest vendor tool support.
IEEE 1076–2000. Minor revision. Introduces the use of protected types.
IEEE 1076–2002. Minor revision of 1076–2000. Rules with regard to buffer ports are relaxed.
IEC 61691-1-1:2004. IEC adoption of IEEE 1076–2002.
IEEE 1076c-2007. Introduced VHPI, the VHDL procedural interface, which provides software with the means to access the VHDL model. The VHDL language required minor modifications to accommodate the VHPI.
IEEE 1076-2008 (previously referred to as 1076-200x). Major revision released on 2009-01-26. Among other changes, this standard incorporates a basic subset of PSL, allows for generics on packages and subprograms and introduces the use of external names.
IEC 61691-1-1:2011. IEC adoption of IEEE 1076–2008.
IEEE 1076–2019. Major revision.
Related standards
IEEE 1076.1 VHDL Analog and Mixed-Signal (VHDL-AMS)
IEEE 1076.1.1 VHDL-AMS Standard Packages (stdpkgs)
IEEE 1076.2 VHDL Math Package
IEEE 1076.3 VHDL Synthesis Package (vhdlsynth) (numeric std)
IEEE 1076.3 VHDL Synthesis Package – Floating Point (fphdl)
IEEE 1076.4 Timing (VHDL Initiative Towards ASIC Libraries: vital)
IEEE 1076.6 VHDL Synthesis Interoperability (withdrawn in 2010)
IEEE 1164 VHDL Multivalue Logic (std_logic_1164) Packages
Design
VHDL is generally used to write text models that describe a logic circuit. Such a model is processed by a synthesis program only if it is part of the logic design. A simulation program is used to test the logic design using simulation models to represent the logic circuits that interface to the design. This collection of simulation models is commonly called a testbench.
A VHDL simulator is typically an event-driven simulator. This means that each transaction is added to an event queue for a specific scheduled time. E.g. if a signal assignment should occur after 1 nanosecond, the event is added to the queue for time +1ns. Zero delay is also allowed, but still needs to be scheduled: for these cases delta delay is used, which represents an infinitely small time step. The simulation alternates between two modes: statement execution, where triggered statements are evaluated, and event processing, where events in the queue are processed.
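The following fragment, with illustrative signal names, shows why delta delays matter. Both assignments execute at the same simulation time, but each new value only becomes visible one delta cycle later, so B receives the old value of A and the process behaves as a two-stage shift register rather than two copies of the input.

process (CLK)
begin
  if rising_edge(CLK) then
    A <= IN_SIG; -- new value of A is scheduled one delta cycle ahead
    B <= A;      -- reads the current, not-yet-updated value of A
  end if;
end process;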
VHDL has constructs to handle the parallelism inherent in hardware designs, but these constructs (processes) differ in syntax from the parallel constructs in Ada (tasks). Like Ada, VHDL is strongly typed and is not case sensitive. In order to directly represent operations which are common in hardware, there are many features of VHDL which are not found in Ada, such as an extended set of Boolean operators including nand and nor.
VHDL has file input and output capabilities, and can be used as a general-purpose language for text processing, but files are more commonly used by a simulation testbench for stimulus or verification data. There are some VHDL compilers which build executable binaries. In this case, it might be possible to use VHDL to write a testbench to verify the functionality of the design using files on the host computer to define stimuli, to interact with the user, and to compare results with those expected. However, most designers leave this job to the simulator.
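A hedged sketch of such testbench file input using the standard std.textio package; the file name stimulus.txt, the signal DATA_IN (a bit_vector), and the 10 ns pacing are assumptions made only for this example.

use std.textio.all;

-- inside a testbench architecture:
stimulus : process
  file vectors : text open read_mode is "stimulus.txt";
  variable l : line;
  variable v : bit_vector(7 downto 0);
begin
  while not endfile(vectors) loop
    readline(vectors, l); -- read one line of the stimulus file
    read(l, v);           -- parse it as an 8-bit vector
    DATA_IN <= v;         -- drive the design under test
    wait for 10 ns;
  end loop;
  wait; -- suspend forever once the file is exhausted
end process stimulus;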
It is relatively easy for an inexperienced developer to produce code that simulates successfully but that cannot be synthesized into a real device, or is too large to be practical. One particular pitfall is the accidental production of transparent latches rather than D-type flip-flops as storage elements.
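For example, the following process is presumably intended as combinational logic, but because Q is not assigned on every path through it, synthesis must preserve the old value of Q and infers a transparent latch (signal names are illustrative):

process (EN, D)
begin
  if EN = '1' then
    Q <= D;
  end if;
  -- missing "else" branch: when EN = '0', Q must keep its previous
  -- value, so a latch is inferred; adding a default assignment such
  -- as "else Q <= '0';" would yield purely combinational logic
end process;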
One can design hardware in a VHDL IDE (for FPGA implementation such as Xilinx ISE, Altera Quartus, Synopsys Synplify or Mentor Graphics HDL Designer) to produce the RTL schematic of the desired circuit. After that, the generated schematic can be verified using simulation software which shows the waveforms of inputs and outputs of the circuit after generating the appropriate testbench. To generate an appropriate testbench for a particular circuit or VHDL code, the inputs have to be defined correctly. For example, for clock input, a loop process or an iterative statement is required.
A final point is that when a VHDL model is translated into the "gates and wires" that are mapped onto a programmable logic device such as a CPLD or FPGA, then it is the actual hardware being configured, rather than the VHDL code being "executed" as if on some form of a processor chip.
Advantages
The key advantage of VHDL, when used for systems design, is that it allows the behavior of the required system to be described (modeled) and verified (simulated) before synthesis tools translate the design into real hardware (gates and wires).
Another benefit is that VHDL allows the description of a concurrent system. VHDL is a dataflow language in which every statement is considered for execution simultaneously, unlike procedural computing languages such as BASIC, C, and assembly code, where a sequence of statements is run sequentially one instruction at a time.
A VHDL project is multipurpose. Once created, a calculation block can be used in many other projects. Moreover, many formational and functional block parameters can be tuned (capacity parameters, memory size, element base, block composition and interconnection structure).
A VHDL project is portable. Once created for one element base, a computing device project can be ported to another element base, for example to VLSI with various technologies.
A big advantage of VHDL compared to original Verilog is that VHDL has a full type system. Designers can use the type system to write much more structured code (especially by declaring record types).
Design examples
In VHDL, a design consists at a minimum of an entity which describes the interface and an architecture which contains the actual implementation. In addition, most designs import library modules. Some designs also contain multiple architectures and configurations.
A simple AND gate in VHDL would look something like
-- (this is a VHDL comment)
/*
this is a block comment (VHDL-2008)
*/
-- import std_logic from the IEEE library
library IEEE;
use IEEE.std_logic_1164.all;
-- this is the entity
entity ANDGATE is
port (
I1 : in std_logic;
I2 : in std_logic;
O : out std_logic);
end entity ANDGATE;
-- this is the architecture
architecture RTL of ANDGATE is
begin
O <= I1 and I2;
end architecture RTL;
(Notice that RTL stands for Register transfer level design.) While the example above may seem verbose to HDL beginners, many parts are either optional or need to be written only once. Generally simple functions like this are part of a larger behavioral module, instead of having a separate module for something so simple. In addition, use of elements such as the std_logic type might at first seem to be overkill. One could easily use the built-in bit type and avoid the library import in the beginning. However, using a form of many-valued logic, specifically 9-valued logic (U,X,0,1,Z,W,H,L,-), instead of simple bits (0,1) offers a very powerful simulation and debugging tool to the designer which currently does not exist in any other HDL.
In the examples that follow, you will see that VHDL code can be written in a very compact form. However, more experienced designers usually avoid these compact forms and use a more verbose coding style for the sake of readability and maintainability.
Synthesizable constructs and VHDL templates
VHDL is frequently used for two different goals: simulation of electronic designs and synthesis of such designs. Synthesis is a process where VHDL code is compiled and mapped into an implementation technology such as an FPGA or an ASIC.
Not all constructs in VHDL are suitable for synthesis. For example, most constructs that explicitly deal with timing such as wait for 10 ns; are not synthesizable despite being valid for simulation. While different synthesis tools have different capabilities, there exists a common synthesizable subset of VHDL that defines what language constructs and idioms map into common hardware for many synthesis tools. IEEE 1076.6 defines a subset of the language that is considered the official synthesis subset. It is generally considered a "best practice" to write very idiomatic code for synthesis as results can be incorrect or suboptimal for non-standard constructs.
MUX template
The multiplexer, or 'MUX' as it is usually called, is a simple construct very common in hardware design. The example below demonstrates a simple two to one MUX, with inputs A and B, selector S and output X. Note that there are many other ways to express the same MUX in VHDL.
X <= A when S = '1' else B;
A more complex example of a MUX with 4x3 inputs and a 2-bit selector:
library IEEE;
use IEEE.std_logic_1164.all;
entity mux4 is
port(
a1 : in std_logic_vector(2 downto 0);
a2 : in std_logic_vector(2 downto 0);
a3 : in std_logic_vector(2 downto 0);
a4 : in std_logic_vector(2 downto 0);
sel : in std_logic_vector(1 downto 0);
b : out std_logic_vector(2 downto 0)
);
end mux4;
architecture rtl of mux4 is
-- declarative part: empty
begin
p_mux : process(a1,a2,a3,a4,sel)
begin
case sel is
when "00" => b <= a1 ;
when "01" => b <= a2 ;
when "10" => b <= a3 ;
when others => b <= a4 ;
end case;
end process p_mux;
end rtl;
Latch template
A transparent latch is basically one bit of memory which is updated when an enable signal is raised. Again, there are many other ways this can be expressed in VHDL.
-- latch template 1:
Q <= D when Enable = '1' else Q;
-- latch template 2:
process(all)
begin
Q <= D when(Enable);
end process;
D-type flip-flops
The D-type flip-flop samples an incoming signal at the rising (or falling) edge of a clock. This example has an asynchronous, active-high reset, and samples at the rising clock edge.
DFF : process(all) is
begin
if RST then
Q <= '0';
elsif rising_edge(CLK) then
Q <= D;
end if;
end process DFF;
Another common way to write edge-triggered behavior in VHDL is with the 'event' signal attribute. A single apostrophe has to be written between the signal name and the name of the attribute.
DFF : process(RST, CLK) is
begin
if RST then
Q <= '0';
elsif CLK'event and CLK = '1' then
Q <= D;
end if;
end process DFF;
VHDL also lends itself to "one-liners" such as
DFF : Q <= '0' when RST = '1' else D when rising_edge(clk);
or
DFF : process(all) is
begin
if rising_edge(CLK) then
Q <= D;
end if;
if RST then
Q <= '0';
end if;
end process DFF;
or:
Library IEEE;
USE IEEE.Std_logic_1164.all;
entity RisingEdge_DFlipFlop_SyncReset is
port(
Q : out std_logic;
Clk : in std_logic;
sync_reset : in std_logic;
D : in std_logic
);
end RisingEdge_DFlipFlop_SyncReset;
architecture Behavioral of RisingEdge_DFlipFlop_SyncReset is
begin
process(Clk)
begin
if (rising_edge(Clk)) then
if (sync_reset='1') then
Q <= '0';
else
Q <= D;
end if;
end if;
end process;
end Behavioral;
This can be useful if not all signals (registers) driven by this process should be reset.
Example: a counter
The following example is an up-counter with asynchronous reset, parallel load and configurable width. It demonstrates the use of the 'unsigned' type, type conversions between 'unsigned' and 'std_logic_vector' and VHDL generics. The generics are very close to arguments or templates in other traditional programming languages like C++. The example is in VHDL 2008 language.
library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all; -- for the unsigned type
entity COUNTER is
generic (
WIDTH : in natural := 32);
port (
RST : in std_logic;
CLK : in std_logic;
LOAD : in std_logic;
DATA : in std_logic_vector(WIDTH-1 downto 0);
Q : out std_logic_vector(WIDTH-1 downto 0));
end entity COUNTER;
architecture RTL of COUNTER is
begin
process(all) is
begin
if RST then
Q <= (others => '0');
elsif rising_edge(CLK) then
if LOAD='1' then
Q <= DATA;
else
Q <= std_logic_vector(unsigned(Q) + 1);
end if;
end if;
end process;
end architecture RTL;
More complex counters may add if/then/else statements within the rising_edge(CLK) elsif to add other functions, such as count enables, stopping or rolling over at some count value, generating output signals like terminal count signals, etc. Care must be taken with the ordering and nesting of such controls if used together, in order to produce the desired priorities and minimize the number of logic levels needed.
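A hedged sketch of such an extended counter, with generic and port names invented for this example: it adds a synchronous count enable CE, rolls over at a configurable MAX_COUNT, and produces a terminal-count flag TC. LOAD is tested before CE, so loading takes priority over counting, illustrating the ordering concern mentioned above.

library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.numeric_std.all;

entity COUNTER_CE is
  generic (
    WIDTH     : natural := 8;
    MAX_COUNT : natural := 199); -- counts 0..199, then rolls over
  port (
    RST  : in  std_logic;
    CLK  : in  std_logic;
    LOAD : in  std_logic;
    CE   : in  std_logic; -- count enable
    DATA : in  std_logic_vector(WIDTH-1 downto 0);
    Q    : out std_logic_vector(WIDTH-1 downto 0);
    TC   : out std_logic); -- terminal count
end entity COUNTER_CE;

architecture RTL of COUNTER_CE is
  signal count : unsigned(WIDTH-1 downto 0);
begin
  process (RST, CLK) is
  begin
    if RST = '1' then
      count <= (others => '0');
    elsif rising_edge(CLK) then
      if LOAD = '1' then            -- load has priority over counting
        count <= unsigned(DATA);
      elsif CE = '1' then
        if count = MAX_COUNT then
          count <= (others => '0'); -- synchronous rollover
        else
          count <= count + 1;
        end if;
      end if;
    end if;
  end process;

  Q  <= std_logic_vector(count);
  TC <= '1' when count = MAX_COUNT else '0';
end architecture RTL;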
Simulation-only constructs
A large subset of VHDL cannot be translated into hardware. This subset is known as the non-synthesizable or the simulation-only subset of VHDL and can only be used for prototyping, simulation and debugging. For example, the following code will generate a clock with a frequency of 50 MHz. It can, for example, be used to drive a clock input in a design during simulation. It is, however, a simulation-only construct and cannot be implemented in hardware. In actual hardware, the clock is generated externally; it can be scaled down internally by user logic or dedicated hardware.
process
begin
CLK <= '1'; wait for 10 NS;
CLK <= '0'; wait for 10 NS;
end process;
The simulation-only constructs can be used to build complex waveforms in very short time. Such waveform can be used, for example, as test vectors for a complex design or as a prototype of some synthesizer logic that will be implemented in the future.
process
begin
wait until START = '1'; -- wait until START is high
for i in 1 to 10 loop -- then wait for a few clock periods...
wait until rising_edge(CLK);
end loop;
for i in 1 to 10 loop -- write numbers 1 to 10 to DATA, 1 every cycle
DATA <= to_unsigned(i, 8);
wait until rising_edge(CLK);
end loop;
-- wait until the output changes
wait on RESULT;
-- now raise ACK for clock period
ACK <= '1';
wait until rising_edge(CLK);
ACK <= '0';
-- and so on...
end process;
VHDL-2008 Features
Hierarchical Aliases
library ieee;
use ieee.std_logic_1164.all;
entity bfm is end entity;
architecture beh of bfm is
signal en :std_logic;
begin
-- insert implementation here
end architecture;
------------------------------------------
library ieee;
use ieee.std_logic_1164.all;
entity test1 is end entity;
architecture beh of test1 is
begin
ibfm: entity work.bfm;
-- The testbench process
process
alias probe_en is <<signal .test1.ibfm.en :std_logic>>;
begin
probe_en <= '1';
wait for 100 ns;
probe_en <= '0';
wait for 100 ns;
probe_en <= '1';
wait for 100 ns;
std.env.stop(0);
end process;
end architecture;
Standard libraries
Also referred to as standard packages.
IEEE Standard Package
The IEEE Standard Package includes the following:
numeric_std
std_logic_1164
std_logic_arith
std_logic_unsigned
std_logic_signed
std_logic_misc
Of these, std_logic_1164 and numeric_std are defined by IEEE standards, while std_logic_arith, std_logic_unsigned, std_logic_signed, and std_logic_misc originated as vendor packages (from Synopsys) that are conventionally compiled into the IEEE library even though they are not part of the IEEE standard.
VHDL simulators
Commercial:
Aldec Active-HDL
Cadence Incisive
Mentor Graphics ModelSim
Mentor Graphics Questa Advanced Simulator
Synopsys VCS-MX
Xilinx Vivado Design Suite (features the Vivado Simulator)
Other:
EDA Playground - Free web browser-based VHDL IDE (uses Synopsys VCS, Cadence Incisive, Aldec Riviera-PRO and GHDL for VHDL simulation)
GHDL is an open source VHDL compiler that can execute VHDL programs.
boot by freerangefactory.org is a VHDL compiler and simulator based on GHDL and GTKWave
VHDL Simili by Symphony EDA is a free commercial VHDL simulator.
nvc by Nick Gasson is an open source VHDL compiler and simulator
freehdl by Edwin Naroska was an open source VHDL simulator, abandoned since 2001.
See also
References
Notes
Further reading
Peter J. Ashenden, "The Designer's Guide to VHDL, Third Edition (Systems on Silicon)", 2008. (The VHDL reference book written by one of the lead developers of the language)
Bryan Mealy, Fabrizio Tappero (February 2012). . The no-frills guide to writing powerful VHDL code for your digital implementations. Archived from the original Free Range VHDL on 2015-02-13.
— Sandstrom presents a table relating VHDL constructs to Verilog constructs.
Janick Bergeron, "Writing Testbenches: Functional Verification of HDL Models", 2000, . (The HDL Testbench Bible)
External links
VHDL Analysis and Standardization Group (VASG)
Hardware description languages
IEEE standards
IEC standards
Ada programming language family
Domain-specific programming languages
Programming languages created in 1983 | VHDL | [
"Technology",
"Engineering"
] | 5,324 | [
"Hardware description languages",
"Computer standards",
"IEC standards",
"Electronic engineering",
"IEEE standards"
] |
43,411 | https://en.wikipedia.org/wiki/Very%20High%20Speed%20Integrated%20Circuit%20Program | The Very High Speed Integrated Circuit (VHSIC) Program was a United States Department of Defense (DOD) research program that ran from 1980 to 1990. Its mission was to research and develop very high-speed integrated circuits for the United States Armed Forces.
Program
VHSIC was launched in 1980 as a joint tri-service (Army/Navy/Air Force) program. The program led to advances in integrated circuit materials, lithography, packaging, testing, and algorithms, and created numerous computer-aided design (CAD) tools. A well-known part of the program's contribution is VHDL (VHSIC Hardware Description Language), a hardware description language (HDL). The program also redirected the military's interest in GaAs ICs back toward the commercial mainstream of CMOS circuits.
More than $1 billion in total was spent for the VHSIC program for silicon integrated circuit technology development.
A DARPA project which ran concurrently, the VLSI Project, having begun two years earlier in 1978, contributed BSD Unix, the RISC processor, the MOSIS research design fab, and greatly furthered the Mead and Conway revolution in VLSI design automation. By contrast, the VHSIC program was comparatively less cost-effective for the funds invested over a contemporaneous time frame, though the projects had different final objectives and are not entirely comparable for that reason.
By the time the program ended in 1990, commercial processors were far outperforming what the Pentagon's program had produced; however, it did manage to subsidize US semiconductor equipment manufacturing, stimulating an industry that shipped much of its product abroad (mainly to Asia).
References
Integrated circuits
Science and technology in the United States | Very High Speed Integrated Circuit Program | [
"Technology",
"Engineering"
] | 352 | [
"Computer engineering",
"Integrated circuits"
] |
43,421 | https://en.wikipedia.org/wiki/Henry%20David%20Thoreau | Henry David Thoreau (July 12, 1817May 6, 1862) was an American naturalist, essayist, poet, and philosopher. A leading transcendentalist, he is best known for his book Walden, a reflection upon simple living in natural surroundings, and his essay "Civil Disobedience" (originally published as "Resistance to Civil Government"), an argument in favor of citizen disobedience against an unjust state.
Thoreau's books, articles, essays, journals, and poetry amount to more than 20 volumes. Among his lasting contributions are his writings on natural history and philosophy, in which he anticipated the methods and findings of ecology and environmental history, two sources of modern-day environmentalism. His literary style interweaves close observation of nature, personal experience, pointed rhetoric, symbolic meanings, and historical lore, while displaying a poetic sensibility, philosophical austerity, and attention to practical detail. He was also deeply interested in the idea of survival in the face of hostile elements, historical change, and natural decay; at the same time he advocated abandoning waste and illusion in order to discover life's true essential needs.
Thoreau was a lifelong abolitionist, delivering lectures that attacked the fugitive slave law while praising the writings of Wendell Phillips and defending the abolitionist John Brown. Thoreau's philosophy of civil disobedience later influenced the political thoughts and actions of notable figures such as Leo Tolstoy, Mahatma Gandhi, and Martin Luther King Jr.
Thoreau is sometimes referred to retrospectively as an anarchist, but may perhaps be more properly regarded as a proto-anarchist. In his seminal essay, "Civil Disobedience", Thoreau wrote as follows:
"I heartily accept the 'That government is best which governs least;' and I should like to see it acted up to more rapidly and systematically. Carried out, it finally amounts to this, which also I 'That government is best which governs not at all;' and when men are prepared for it, that will be the kind of government which they will have.... But, to speak practically and as a citizen, unlike those who call themselves no-government men, I ask for, not at once no government, but at once a better government."
Pronunciation of his name
Amos Bronson Alcott and Thoreau's aunt each wrote that "Thoreau" is pronounced like the word thorough. Edward Waldo Emerson wrote that the name should be pronounced "Thó-row", with the h sounded and stress on the first syllable. Among modern-day American English speakers, it is perhaps more commonly pronounced with stress on the second syllable.
Physical appearance
Thoreau had a distinctive appearance, with a nose that he called his "most prominent feature". Of his appearance and disposition, Ellery Channing wrote:
His face, once seen, could not be forgotten. The features were quite marked: the nose aquiline or very Roman, like one of the portraits of Caesar (more like a beak, as was said); large overhanging brows above the deepest set blue eyes that could be seen, in certain lights, and in others gray,—eyes expressive of all shades of feeling, but never weak or near-sighted; the forehead not unusually broad or high, full of concentrated energy and purpose; the mouth with prominent lips, pursed up with meaning and thought when silent, and giving out when open with the most varied and unusual instructive sayings.
Life
Early life and education, 1817–1837
Henry David Thoreau was born David Henry Thoreau in Concord, Massachusetts, into the "modest New England family" of John Thoreau, a pencil maker, and Cynthia Dunbar. His father was of French Protestant descent. His paternal grandfather had been born on the UK crown dependency island of Jersey. His maternal grandfather, Asa Dunbar, led Harvard's 1766 student "Butter Rebellion", the first recorded student protest in the American colonies. David Henry was named after his recently deceased paternal uncle, David Thoreau. He began to call himself Henry David after he finished college; he never petitioned to make a legal name change.
He had two older siblings, Helen and John Jr., and a younger sister, Sophia Thoreau. None of the children married. Helen (1812–1849) died at age 37, from tuberculosis. John Jr. (1814–1842) died at age 27, of tetanus after cutting himself while shaving. Henry David (1817–1862) died at age 44, of tuberculosis. Sophia (1819–1876) survived him by 14 years, dying at age 56, of tuberculosis.
He studied at Harvard College between 1833 and 1837. He lived in Hollis Hall and took courses in rhetoric, classics, philosophy, mathematics, and science. He was a member of the Institute of 1770 (now the Hasty Pudding Club). According to legend, Thoreau refused to pay the five-dollar fee for a Harvard master's diploma, which he described thus: Harvard College offered it to graduates "who proved their physical worth by being alive three years after graduating, and their saving, earning, or inheriting quality or condition by having Five Dollars to give the college". He commented, "Let every sheep keep its own skin", a reference to the tradition of using sheepskin vellum for diplomas.
Thoreau's birthplace still exists on Virginia Road in Concord. The house has been restored by the Thoreau Farm Trust, a nonprofit organization, and is now open to the public.
Return to Concord, 1837–1844
The traditional professions open to college graduates—law, the church, business, medicine—did not interest Thoreau, so in 1835 he took a leave of absence from Harvard, during which he taught at a school in Canton, Massachusetts, living for two years at an earlier version of today's Colonial Inn in Concord. His grandfather owned the earliest of the three buildings that were later combined. After he graduated in 1837, Thoreau joined the faculty of the Concord public school, but he resigned after a few weeks rather than administer corporal punishment. He and his brother John then opened the Concord Academy, a grammar school in Concord, in 1838. They introduced several progressive concepts, including nature walks and visits to local shops and businesses. The school closed when John became fatally ill from tetanus in 1842 after cutting himself while shaving. He died in Henry's arms.
Upon graduation Thoreau returned home to Concord, where he met Ralph Waldo Emerson through a mutual friend. Emerson, who was 14 years his senior, took a paternal and at times patron-like interest in Thoreau, advising the young man and introducing him to a circle of local writers and thinkers, including Ellery Channing, Margaret Fuller, Bronson Alcott, and Nathaniel Hawthorne and his son Julian Hawthorne, who was a boy at the time.
Emerson urged Thoreau to contribute essays and poems to a quarterly periodical, The Dial, and lobbied the editor, Margaret Fuller, to publish those writings. Thoreau's first essay published in The Dial was "Aulus Persius Flaccus", an essay on the Roman poet and satirist, in July 1840. It consisted of revised passages from his journal, which he had begun keeping at Emerson's suggestion. The first journal entry, on October 22, 1837, reads, "'What are you doing now?' he asked. 'Do you keep a journal?' So I make my first entry to-day."
Thoreau was a philosopher of nature and its relation to the human condition. In his early years he followed transcendentalism, a loose and eclectic idealist philosophy advocated by Emerson, Fuller, and Alcott. They held that an ideal spiritual state transcends, or goes beyond, the physical and empirical, and that one achieves that insight via personal intuition rather than religious doctrine. In their view, Nature is the outward sign of inward spirit, expressing the "radical correspondence of visible things and human thoughts", as Emerson wrote in Nature (1836).
On April 18, 1841, Thoreau moved in with the Emersons. There, from 1841 to 1844, he served as the children's tutor; he was also an editorial assistant, repairman and gardener. For a few months in 1843, he moved to the home of William Emerson on Staten Island, and tutored the family's sons while seeking contacts among literary men and journalists in the city who might help publish his writings, including his future literary representative Horace Greeley.
Thoreau returned to Concord and worked in his family's pencil factory, which he would continue to do alongside his writing and other work for most of his adult life. He resurrected the process of making good pencils with inferior graphite by using clay as a binder. The process of mixing graphite and clay, known as the Conté process, had been first patented by Nicolas-Jacques Conté in 1795. Thoreau made profitable use of a graphite source found in New Hampshire that had been purchased in 1821 by his uncle, Charles Dunbar. The company's other source of graphite had been Tantiusques, a mine operated by Native Americans in Sturbridge, Massachusetts. Later, Thoreau converted the pencil factory to produce plumbago, a name for graphite at the time, which was used in the electrotyping process.
Once back in Concord, Thoreau went through a restless period. In April 1844 he and his friend Edward Hoar accidentally set a fire that consumed a large tract of Walden Woods.
"Civil Disobedience" and the Walden years, 1845–1850
Thoreau felt a need to concentrate and work more on his writing. In 1845, Ellery Channing told Thoreau, "Go out upon that, build yourself a hut, & there begin the grand process of devouring yourself alive. I see no other alternative, no other hope for you." Thus, on July 4, 1845, Thoreau embarked on a two-year experiment in simple living, moving to a small house he had built on land owned by Emerson in a second-growth forest around the shores of Walden Pond; an earlier request to build a hut on Flints Pond, near that of his friend Charles Stearns Wheeler, had been denied by the landowners due to the Fairhaven Bay incident. The house was in "a pretty pasture and woodlot" that Emerson had bought, not far from his family home. While there, Thoreau wrote his only extended piece of literary criticism, "Thomas Carlyle and His Works".
On July 24 or July 25, 1846, Thoreau ran into the local tax collector, Sam Staples, who asked him to pay six years of delinquent poll taxes. Thoreau refused because of his opposition to the Mexican–American War and slavery, and he spent a night in jail because of this refusal. The next day Thoreau was freed when someone, likely his aunt, paid the tax against his wishes. The experience had a strong impact on Thoreau. In January and February 1848, he delivered lectures on "The Rights and Duties of the Individual in relation to Government", explaining his tax resistance at the Concord Lyceum. Bronson Alcott attended the lecture and recorded it in his journal on January 26.
Thoreau revised the lecture into an essay titled "Resistance to Civil Government" (also known as "Civil Disobedience"). It was published by Elizabeth Peabody in the Aesthetic Papers in May 1849. Thoreau had taken up a version of Percy Shelley's principle in the political poem "The Mask of Anarchy" (1819), which begins with the powerful images of the unjust forms of authority of his time and then imagines the stirrings of a radically new form of social action.
At Walden Pond, Thoreau completed a first draft of A Week on the Concord and Merrimack Rivers, an elegy to his brother John, describing their trip to the White Mountains in 1839. Thoreau did not find a publisher for the book and instead printed 1,000 copies at his own expense; fewer than 300 were sold. He self-published on the advice of Emerson, using Emerson's publisher, Munroe, who did little to publicize the book.
In August 1846, Thoreau briefly left Walden to make a trip to Mount Katahdin in Maine, a journey that was later recorded in "Ktaadn", the first part of The Maine Woods.
Thoreau left Walden Pond on September 6, 1847. At Emerson's request, he immediately moved back to the Emerson house to help Emerson's wife, Lidian, manage the household while her husband was on an extended trip to Europe. Over several years, as he worked to pay off his debts, he continuously revised the manuscript of what he eventually published as Walden, or Life in the Woods in 1854, recounting the two years, two months, and two days he had spent at Walden Pond. The book compresses that time into a single calendar year, using the passage of the four seasons to symbolize human development. Part memoir and part spiritual quest, Walden at first won few admirers, but later critics have regarded it as a classic American work that explores natural simplicity, harmony, and beauty as models for just social and cultural conditions.
The American poet Robert Frost wrote of Thoreau, "In one book ... he surpasses everything we have had in America."
The American author John Updike said of the book, "A century and a half after its publication, Walden has become such a totem of the back-to-nature, preservationist, anti-business, civil-disobedience mindset, and Thoreau so vivid a protester, so perfect a crank and hermit saint, that the book risks being as revered and unread as the Bible."
Thoreau moved out of Emerson's house in July 1848 and stayed at a house on nearby Belknap Street. In 1850, he moved into a house at 255 Main Street, where he lived until his death.
In the summer of 1850, Thoreau and Channing journeyed from Boston to Montreal and Quebec City. These would be Thoreau's only travels outside the United States. It is as a result of this trip that he developed lectures that eventually became A Yankee in Canada. He jested that all he got from this adventure "was a cold". In fact, this proved an opportunity to contrast American civic spirit and democratic values with a colony apparently ruled by illegitimate religious and military power. Whereas his own country had had its revolution, in Canada history had failed to turn.
Later years, 1851–1862
In 1851, Thoreau became increasingly fascinated with natural history and narratives of travel and expedition. He read avidly on botany and often wrote observations on this topic into his journal. He admired William Bartram and Charles Darwin's Voyage of the Beagle. He kept detailed observations on Concord's nature lore, recording everything from how the fruit ripened over time to the fluctuating depths of Walden Pond and the days certain birds migrated. The point of this task was to "anticipate" the seasons of nature, in his word.
He became a land surveyor and continued to write increasingly detailed observations on the natural history of the town and its surroundings in his journal, a two-million-word document he kept for 24 years. He also kept a series of notebooks, and these observations became the source of his late writings on natural history, such as "Autumnal Tints", "The Succession of Trees", and "Wild Apples", an essay lamenting the destruction of the local wild apple species.
With the rise of environmental history and ecocriticism as academic disciplines, several new readings of Thoreau began to emerge, showing him to have been both a philosopher and an analyst of ecological patterns in fields and woodlots. For instance, "The Succession of Forest Trees" shows that he used experimentation and analysis to explain how forests regenerate after fire or human destruction, through the dispersal of seeds by winds or animals. In this lecture, first presented to a cattle show in Concord and considered his greatest contribution to ecology, Thoreau explained why one species of tree can grow in a place where a different tree did previously. He observed that squirrels often carry nuts far from the tree from which they fell to create stashes. These seeds are likely to germinate and grow should the squirrel die or abandon the stash. He credited the squirrel for performing a "great service ... in the economy of the universe."
He traveled to Canada East once, Cape Cod four times, and Maine three times; these landscapes inspired his "excursion" books, A Yankee in Canada, Cape Cod, and The Maine Woods, in which travel itineraries frame his thoughts about geography, history and philosophy. Other travels took him southwest to Philadelphia and New York City in 1854 and west across the Great Lakes region in 1861, when he visited Niagara Falls, Detroit, Chicago, Milwaukee, St. Paul and Mackinac Island. He was provincial in his own travels, but he read widely about travel in other lands. He devoured all the first-hand travel accounts available in his day, at a time when the last unmapped regions of the earth were being explored. He read Magellan and James Cook; the arctic explorers John Franklin, Alexander Mackenzie and William Parry; David Livingstone and Richard Francis Burton on Africa; Lewis and Clark; and hundreds of lesser-known works by explorers and literate travelers. Astonishing amounts of reading fed his endless curiosity about the peoples, cultures, religions and natural history of the world and left its traces as commentaries in his voluminous journals. He processed everything he read, in the local laboratory of his Concord experience. Among his famous aphorisms is his advice to "live at home like a traveler".
After John Brown's raid on Harpers Ferry, many prominent voices in the abolitionist movement distanced themselves from Brown or damned him with faint praise. Thoreau was disgusted by this, and he composed a key speech, "A Plea for Captain John Brown", which was uncompromising in its defense of Brown and his actions. Thoreau's speech proved persuasive: the abolitionist movement began to accept Brown as a martyr, and by the time of the American Civil War entire armies of the North were literally singing Brown's praises. As a biographer of Brown put it, "If, as Alfred Kazin suggests, without John Brown there would have been no Civil War, we would add that without the Concord Transcendentalists, John Brown would have had little cultural impact."
Tuberculosis and death
Thoreau contracted tuberculosis in 1835 and suffered from it sporadically afterwards. In 1860, following a late-night excursion to count the rings of tree stumps during a rainstorm, he became ill with bronchitis. His health declined, with brief periods of remission, and he eventually became bedridden. Recognizing the terminal nature of his disease, Thoreau spent his last years revising and editing his unpublished works, particularly The Maine Woods and Excursions, and petitioning publishers to print revised editions of A Week and Walden. He wrote letters and journal entries until he became too weak to continue. His friends were alarmed at his diminished appearance and were fascinated by his tranquil acceptance of death. When his aunt Louisa asked him in his last weeks if he had made his peace with God, Thoreau responded, "I did not know we had ever quarreled."
Aware he was dying, Thoreau's last words were "Now comes good sailing", followed by two lone words, "moose" and "Indian". He died on May 6, 1862, at age 44. Amos Bronson Alcott planned the service and read selections from Thoreau's works, and Channing presented a hymn. Emerson wrote the eulogy spoken at the funeral. Thoreau was buried in the Dunbar family plot; his remains and those of members of his immediate family were eventually moved to Sleepy Hollow Cemetery in Concord, Massachusetts.
Nature and human existence
Thoreau was an early advocate of recreational hiking and canoeing, of conserving natural resources on private land, and of preserving wilderness as public land. He was a highly skilled canoeist; Nathaniel Hawthorne, after a ride with him, noted that "Mr. Thoreau managed the boat so perfectly, either with two paddles or with one, that it seemed instinct with his own will, and to require no physical effort to guide it."
He was not a strict vegetarian, though he said he preferred that diet and advocated it as a means of self-improvement. He wrote in Walden, "The practical objection to animal food in my case was its uncleanness; and besides, when I had caught and cleaned and cooked and eaten my fish, they seemed not to have fed me essentially. It was insignificant and unnecessary, and cost more than it came to. A little bread or a few potatoes would have done as well, with less trouble and filth."
Thoreau neither rejected civilization nor fully embraced wilderness. Instead he sought a middle ground, the pastoral realm that integrates nature and culture. His philosophy required that he be a didactic arbitrator between the wilderness he based so much on and the spreading mass of humanity in North America. He decried the latter endlessly but felt that a teacher needs to be close to those who needed to hear what he wanted to tell them. The wildness he enjoyed was the nearby swamp or forest, and he preferred "partially cultivated country". His idea of being "far in the recesses of the wilderness" of Maine was to "travel the logger's path and the Indian trail", but he also hiked on pristine land.
In an essay titled, "Henry David Thoreau, Philosopher", environmental historian Roderick Nash wrote, "Thoreau left Concord in 1846 for the first of three trips to northern Maine. His expectations were high because he hoped to find genuine, primeval America. But contact with real wilderness in Maine affected him far differently than had the idea of wilderness in Concord. Instead of coming out of the woods with a deepened appreciation of the wilds, Thoreau felt a greater respect for civilization and realized the necessity of balance."
Of alcohol, Thoreau wrote, "I would fain keep sober always. ... I believe that water is the only drink for a wise man; wine is not so noble a liquor. ... Of all ebriosity, who does not prefer to be intoxicated by the air he breathes?"
Sexuality
Thoreau never married and was childless. In 1840, when he was 23, he proposed to eighteen-year-old Ellen Sewall, but she refused him on the advice of her father. Sophia Foord proposed to him, but he rejected her.
Thoreau's sexuality has long been the subject of speculation, including by his contemporaries. Critics have called him heterosexual, homosexual, or asexual. There is no evidence to suggest he had physical relations with anyone, man or woman. Bronson Alcott wrote that Thoreau "seemed to have no temptations. All those strong wants that do battle with other men's nature, he knew not." Some scholars have suggested that homoerotic sentiments run through his writings and concluded that he was homosexual. The elegy "Sympathy" was inspired by the eleven-year-old Edmund Sewall, who had just spent five days in the Thoreau household in 1839. One scholar has suggested that he wrote the poem to Edmund because he could not bring himself to write it to Edmund's sister Anna, and another that Thoreau's "emotional experiences with women are memorialized under a camouflage of masculine pronouns", but other scholars dismiss this. It has been argued that the long paean in Walden to the French-Canadian woodchopper Alek Therien, which includes allusions to Achilles and Patroclus, is an expression of conflicted desire. In some of Thoreau's writing there is the sense of a secret self. In 1840 he writes in his journal: "My friend is the apology for my life. In him are the spaces which my orbit traverses". Thoreau was strongly influenced by the moral reformers of his time, and this may have instilled anxiety and guilt over sexual desire.
Politics
Thoreau was fervently against slavery and actively supported the abolitionist movement. He participated as a conductor in the Underground Railroad, delivered lectures that attacked the Fugitive Slave Law, and in opposition to the popular opinion of the time, supported radical abolitionist militia leader John Brown and his party. Two weeks after the ill-fated raid on Harpers Ferry and in the weeks leading up to Brown's execution, Thoreau delivered a speech to the citizens of Concord, Massachusetts, in which he compared the American government to Pontius Pilate and likened Brown's execution to the crucifixion of Jesus Christ.
In "The Last Days of John Brown", Thoreau described the words and deeds of John Brown as noble and an example of heroism. In addition, he lamented the newspaper editors who dismissed Brown and his scheme as "crazy".
Thoreau was a proponent of limited government and individualism. Although he was hopeful that mankind could potentially have, through self-betterment, the kind of government which "governs not at all", he distanced himself from contemporary "no-government men" (anarchists), writing: "I ask for, not at once no government, but at once a better government."
Thoreau deemed the evolution from absolute monarchy to limited monarchy to democracy as "a progress toward true respect for the individual" and theorized about further improvements "towards recognizing and organizing the rights of man". Echoing this belief, he went on to write: "There will never be a really free and enlightened State until the State comes to recognize the individual as a higher and independent power, from which all its power and authority are derived, and treats him accordingly."
It is on this basis that Thoreau could so strongly inveigh against the British administration and Catholicism in A Yankee in Canada. Despotic authority, Thoreau argued, had crushed the people's sense of ingenuity and enterprise; the Canadian habitants had been reduced, in his view, to a perpetual childlike state. Ignoring the recent rebellions, he argued that there would be no revolution in the St. Lawrence River valley.
Although Thoreau believed resistance to unjustly exercised authority could be both violent (exemplified in his support for John Brown) and nonviolent (his own example of tax resistance as described in "Resistance to Civil Government"), he regarded pacifist nonresistance as temptation to passivity, writing: "Let not our Peace be proclaimed by the rust on our swords, or our inability to draw them from their scabbards; but let her at least have so much work on her hands as to keep those swords bright and sharp." Furthermore, in a formal lyceum debate in 1841, he debated the subject "Is it ever proper to offer forcible resistance?", arguing the affirmative.
Likewise, his condemnation of the Mexican–American War did not stem from pacifism, but rather because he considered Mexico "unjustly overrun and conquered by a foreign army" as a means to expand the slave territory.
Thoreau was ambivalent towards industrialization and capitalism. On one hand he regarded commerce as "unexpectedly confident and serene, adventurous, and unwearied" and expressed admiration for its associated cosmopolitanism; on the other hand, he wrote disparagingly of the factory system.
Thoreau also favored the protection of animals and wild areas, free trade, and taxation for schools and highways, and espoused views that at least in part align with what is today known as bioregionalism. He disapproved of the subjugation of Native Americans, slavery, philistinism, technological utopianism, and what can be regarded in today's terms as consumerism, mass entertainment, and frivolous applications of technology.
Intellectual interests, influences, and affinities
Indian sacred texts and philosophy
Thoreau was influenced by Indian spiritual thought. In Walden, there are many overt references to the sacred texts of India. For example, in the first chapter ("Economy"), he writes: "How much more admirable the Bhagvat-Geeta than all the ruins of the East!" American Philosophy: An Encyclopedia classes him as one of several figures who "took a more pantheist or pandeist approach by rejecting views of God as separate from the world", also a characteristic of Hinduism.
Furthermore, in "The Pond in Winter", he equates Walden Pond with the sacred Ganges river, writing:
Thoreau was aware his Ganges imagery could have been factual: he wrote about ice harvesting at Walden Pond, and he knew that New England's ice merchants were shipping ice to foreign ports, including Calcutta.
Additionally, Thoreau followed various Hindu customs, including a diet largely consisting of rice ("It was fit that I should live on rice, mainly, who loved so well the philosophy of India."), flute playing (reminiscent of the favorite musical pastime of Krishna), and yoga.
In an 1849 letter to his friend H.G.O. Blake, he wrote about yoga and its meaning to him.
Biology
Thoreau read contemporary works in the new science of biology, including the works of Alexander von Humboldt, Charles Darwin, and Asa Gray (Charles Darwin's staunchest American ally). Thoreau was deeply influenced by Humboldt, especially his work Cosmos.
In 1859, Thoreau purchased and read Darwin's On the Origin of Species. Unlike many natural historians at the time, including Louis Agassiz, who publicly opposed Darwinism in favor of a static view of nature, Thoreau was immediately enthusiastic about the theory of evolution by natural selection and endorsed it.
Influence
Thoreau's political writings had little impact during his lifetime, as "his contemporaries did not see him as a theorist or as a radical", viewing him instead as a naturalist. They either dismissed or ignored his political essays, including "Civil Disobedience". The only two complete books (as opposed to essays) that were published in his lifetime, Walden and A Week on the Concord and Merrimack Rivers (1849), both dealt with Nature, in which he "loved to wander". His obituary was lumped in with others, rather than as a separate article, in an 1862 yearbook. Critics and the public continued either to disdain or to ignore Thoreau for years, but the publication of extracts from his journal in the 1880s by his friend H.G.O. Blake, and of a definitive set of Thoreau's works by the Riverside Press between 1893 and 1906, led to the rise of what literary historian F. L. Pattee called a "Thoreau cult".
Thoreau's writings went on to influence many public figures. Political leaders and reformers like Mohandas Gandhi, U.S. President John F. Kennedy, American civil rights activist Martin Luther King Jr., U.S. Supreme Court Justice William O. Douglas, and Russian author Leo Tolstoy all spoke of being strongly affected by Thoreau's work, particularly "Civil Disobedience", as did "right-wing theorist Frank Chodorov [who] devoted an entire issue of his monthly, Analysis, to an appreciation of Thoreau".
Thoreau also influenced many artists and authors including Edward Abbey, Willa Cather, Marcel Proust, William Butler Yeats, Sinclair Lewis, Ernest Hemingway, Upton Sinclair, E. B. White, Lewis Mumford, Frank Lloyd Wright, Alexander Posey, and Gustav Stickley. Thoreau also influenced naturalists like John Burroughs, John Muir, E. O. Wilson, Edwin Way Teale, Joseph Wood Krutch, B. F. Skinner, David Brower, and Loren Eiseley, who Publishers Weekly called "the modern Thoreau".
Thoreau's friend William Ellery Channing published his first biography, Thoreau the Poet-Naturalist, in 1873. English writer Henry Stephens Salt wrote a biography of Thoreau in 1890, which popularized Thoreau's ideas in Britain: George Bernard Shaw, Edward Carpenter, and Robert Blatchford were among those who became Thoreau enthusiasts as a result of Salt's advocacy.
Mohandas Gandhi first read Walden in 1906, while working as a civil rights activist in Johannesburg, South Africa. Gandhi first read "Civil Disobedience" while he sat in a South African prison for the crime of nonviolently protesting discrimination against the Indian population in the Transvaal. The essay galvanized Gandhi, who wrote and published a synopsis of Thoreau's argument, calling what he termed its "incisive logic ... unanswerable" and referring to Thoreau as "one of the greatest and most moral men America has produced." He told American reporter Webb Miller, "[Thoreau's] ideas influenced me greatly. I adopted some of them and recommended the study of Thoreau to all of my friends who were helping me in the cause of Indian Independence. Why I actually took the name of my movement from Thoreau's essay 'On the Duty of Civil Disobedience', written about 80 years ago."
Martin Luther King Jr. noted in his autobiography that his first encounter with the idea of nonviolent resistance was reading "On Civil Disobedience" in 1944 while attending Morehouse College. He wrote:
Here, in this courageous New Englander's refusal to pay his taxes and his choice of jail rather than support a war that would spread slavery's territory into Mexico, I made my first contact with the theory of nonviolent resistance. Fascinated by the idea of refusing to cooperate with an evil system, I was so deeply moved that I reread the work several times. I became convinced that noncooperation with evil is as much a moral obligation as is cooperation with good. No other person has been more eloquent and passionate in getting this idea across than Henry David Thoreau. As a result of his writings and personal witness, we are the heirs of a legacy of creative protest. The teachings of Thoreau came alive in our civil rights movement; indeed, they are more alive than ever before. Whether expressed in a sit-in at lunch counters; a freedom ride into Mississippi; a peaceful protest in Albany, Georgia; a bus boycott in Montgomery, Alabama; these are outgrowths of Thoreau's insistence that evil must be resisted and that no moral man can patiently adjust to injustice.
American psychologist B. F. Skinner wrote that he carried a copy of Thoreau's Walden with him in his youth. In Walden Two (published in 1948), Skinner wrote about a fictional utopian community of about 1,000 members inspired by the life of Henry Thoreau. Thoreau and his fellow Transcendentalists from Concord, Massachusetts were also a major inspiration for the American composer Charles Ives, whose 1915 Piano Sonata No. 2, known as the Concord Sonata, features "impressionistic pictures of Emerson and Thoreau", and includes a part for flute, Thoreau's instrument, in its 4th movement.
Actor Ron Thompson did a dramatic portrayal of Henry David Thoreau in the 1976 NBC television series The Rebels.
Thoreau's ideas have impacted and resonated with various strains in the anarchist movement, with Emma Goldman referring to him as "the greatest American anarchist". Green anarchism and anarcho-primitivism in particular have both derived inspiration and ecological points-of-view from the writings of Thoreau. John Zerzan included Thoreau's text "Excursions" (1863) in his edited compilation of works in the anarcho-primitivist tradition titled Against civilization: Readings and reflections. Additionally, Murray Rothbard, the founder of anarcho-capitalism, has opined that Thoreau was one of the "great intellectual heroes" of his movement. Thoreau was also an important influence on late 19th-century anarchist naturism. Globally, Thoreau's concepts also held importance within individualist anarchist circles in Spain, France, and Portugal.
For the 200th anniversary of his birth, publishers released several new editions of his work: a recreation of the illustrated 1902 edition of Walden, a picture book with excerpts from Walden, and an annotated collection of Thoreau's essays on slavery. The United States Postal Service issued a commemorative stamp honoring Thoreau on May 23, 2017, in Concord, Massachusetts.
Critical reception
Thoreau's work and career received little attention from his contemporaries until 1865, when the North American Review published James Russell Lowell's review of various papers of Thoreau's that Emerson had collected and edited as Letters to Various Persons. Lowell's essay, which he republished as a chapter in his book My Study Windows, derided Thoreau as a humorless poseur trafficking in commonplaces, a sentimentalist lacking in imagination, a "Diogenes in his barrel", resentfully criticizing what he could not attain. Lowell's caustic analysis influenced Scottish author Robert Louis Stevenson, who criticized Thoreau as a "skulker", saying "He did not wish virtue to go out of him among his fellow-men, but slunk into a corner to hoard it for himself."
Nathaniel Hawthorne had mixed feelings about Thoreau. He noted that "He is a keen and delicate observer of nature—a genuine observer—which, I suspect, is almost as rare a character as even an original poet; and Nature, in return for his love, seems to adopt him as her especial child, and shows him secrets which few others are allowed to witness." On the other hand, he also wrote that Thoreau "repudiated all regular modes of getting a living, and seems inclined to lead a sort of Indian life among civilized men".
In a similar vein, poet John Greenleaf Whittier detested what he deemed to be the "wicked" and "heathenish" message of Walden, claiming that Thoreau wanted man to "lower himself to the level of a woodchuck and walk on four legs".
In response to such criticisms, the English novelist George Eliot, writing decades later for the Westminster Review, characterized such critics as uninspired and narrow-minded.
Thoreau himself also responded to the criticism in a paragraph of his work Walden by highlighting what he felt was the irrelevance of their inquiries.
Recent criticism has accused Thoreau of hypocrisy, misanthropy, and being sanctimonious, based on his writings in Walden, although these criticisms have been regarded as highly selective.
Selected works
Many of Thoreau's works were not published during his lifetime, including his journals and numerous unfinished manuscripts.
"Aulus Persius Flaccus" (1840)
The Service (1840)
"A Walk to Wachusett" (1842)
"Paradise (to be) Regained" (1843)
"The Landlord" (1843)
"Sir Walter Raleigh" (1844)
"Herald of Freedom" (1844)
"Wendell Phillips Before the Concord Lyceum" (1845)
"Reform and the Reformers" (1846–48)
"Thomas Carlyle and His Works" (1847)
A Week on the Concord and Merrimack Rivers (1849)
"Resistance to Civil Government", or "Civil Disobedience"", or "On the Duty of Civil Disobedience"" (1849)
"An Excursion to Canada" (1853)
"Slavery in Massachusetts" (1854)
Walden (1854)
"A Plea for Captain John Brown" (1859)
"Remarks After the Hanging of John Brown" (1859)
"The Last Days of John Brown" (1860)
"Walking" (1862)
"Autumnal Tints" (1862)
"Wild Apples: The History of the Apple Tree" (1862)
"The Fall of the Leaf" (1863)
Excursions (1863)
"Life Without Principle" (1863)
"Night and Moonlight" (1863)
"The Highland Light" (1864)
"The Maine Woods" (1864) Fully Annotated Edition. Jeffrey S. Cramer, ed., Yale University Press, 2009
"Cape Cod" (1865)
"Letters to Various Persons" (1865)
A Yankee in Canada, with Anti-Slavery and Reform Papers (1866)
"Early Spring in Massachusetts" (1881)
"Summer" (1884)
"Winter" (1888)
"Autumn" (1892)
Miscellanies (1894)
Familiar Letters of Henry David Thoreau (1894)
Poems of Nature (1895)
Some Unpublished Letters of Henry D. and Sophia E. Thoreau (1898)
The First and Last Journeys of Thoreau (1905)
Journal of Henry David Thoreau (1906)
The Correspondence of Henry David Thoreau edited by Walter Harding and Carl Bode (Washington Square: New York University Press, 1958)
"I Was Made Erect and Lone"
"The Bluebird Carries the Sky on His Back" (Stanyan, 1970)
"The Dispersion of Seeds" published as Faith in a Seed (Island Press, 1993)
The Indian Notebooks (1847–1861) selections by Richard F. Fleck
Wild Fruits (Unfinished at his death, W.W. Norton, 1999)
See also
American philosophy
List of American philosophers
List of peace activists
Thoreau Society
Walden Woods Project
References
Further reading
Balthrop‐Lewis, Alda. "Exemplarist Environmental Ethics: Thoreau's Political Ascetism against Solution Thinking." Journal of Religious Ethics 47.3 (2019): 525–550.
Bode, Carl. Best of Thoreau's Journals. Southern Illinois University Press. 1967.
Botkin, Daniel. No Man's Garden
Buell, Lawrence. The Environmental Imagination: Thoreau, Nature Writing, and the Formation of American Culture (Harvard UP, 1995)
Cafaro, Philip. Thoreau's Living Ethics: "Walden" and the Pursuit of Virtue (U of Georgia Press, 2004)
Chodorov, Frank. The Disarming Honesty of Henry David Thoreau
Conrad, Randall. Who He Was & Why He Matters
Cramer, Jeffrey S. Solid Seasons: The Friendship of Henry David Thoreau and Ralph Waldo Emerson (Counterpoint Press, 2019).
Dean, Bradley P. ed., Letters to a Spiritual Seeker. New York: W. W. Norton & Company, 2004.
Finley, James S., ed. Henry David Thoreau in Context (Cambridge UP, 2017).
Furtak, Rick, Ellsworth, Jonathan, and Reid, James D., eds. Thoreau's Importance for Philosophy. New York: Fordham University Press, 2012.
Gionfriddo, Michael. "Thoreau, the Work of Breathing, and Building Castles in the Air: Reading Walden's 'Conclusion'." The Concord Saunterer 25 (2017): 49–90.
Guhr, Sebastian. Mr. Lincoln & Mr. Thoreau. S. Marix Verlag, Wiesbaden 2021.
Harding, Walter. The Days of Henry Thoreau. Princeton University Press, 1982.
Hendrick, George. "The Influence of Thoreau's 'Civil Disobedience' on Gandhi's Satyagraha." The New England Quarterly 29, no. 4 (December 1956). 462–471.
Howarth, William. The Book of Concord: Thoreau's Life as a Writer. Viking Press, 1982
Judd, Richard W. Finding Thoreau: The Meaning of Nature in the Making of an Environmental Icon (2018)
McGregor, Robert Kuhn. A Wider View of the Universe: Henry Thoreau's Study of Nature (U of Illinois Press, 1997).
Marble, Annie Russell. Thoreau: His Home, Friends and Books. New York: AMS Press. 1969 [1902]
Myerson, Joel et al. The Cambridge Companion to Henry David Thoreau. Cambridge University Press. 1995
Nash, Roderick. Henry David Thoreau, Philosopher
Paolucci, Stefano. "The Foundations of Thoreau's 'Castles in the Air'", Thoreau Society Bulletin, No. 290 (Summer 2015), 10.
Parrington, Vernon. Main Currents in American Thought, Vol. 2. 1927.
Parrington, Vernon L. Henry Thoreau: Transcendental Economist
Petroski, Henry. "H. D. Thoreau, Engineer." American Heritage of Invention and Technology, Vol. 5, No. 2, pp. 8–16
Petrulionis, Sandra Harbert, ed., Thoreau in His Own Time: A Biographical Chronicle of His Life, Drawn From Recollections, Interviews, and Memoirs by Family, Friends, and Associates. Iowa City: University of Iowa Press, 2012.
Richardson, Robert D. Henry Thoreau: A Life of the Mind. University of California Press Berkeley and Los Angeles. 1986.
Ridl, Jack. "Moose. Indian." Scintilla (poem on Thoreau's last words)
Schneider, Richard. Civilizing Thoreau: Human Ecology and the Emerging Social Sciences in the Major Works. Rochester, New York: Camden House. 2016.
Smith, David C. "The Transcendental Saunterer: Thoreau and the Search for Self." Savannah, Georgia: Frederic C. Beil, 1997.
Sullivan, Mark W. "Henry David Thoreau in the American Art of the 1950s." The Concord Saunterer: A Journal of Thoreau Studies, New Series, Vol. 18 (2010), pp. 68–89.
Sullivan, Mark W. Picturing Thoreau: Henry David Thoreau in American Visual Culture. Lanham, Maryland: Lexington Books, 2015
Tauber, Alfred I. Henry David Thoreau and the Moral Agency of Knowing. University of California, Berkeley. 2001.
Henry David Thoreau – Internet Encyclopedia of Philosophy
Henry David Thoreau – Stanford Encyclopedia of Philosophy
Thorson, Robert M. The Boatman: Henry David Thoreau's River Years (Harvard UP, 2017), on his scientific study of the Concord River in the late 1850s.
Thorson, Robert M. Walden's Shore: Henry David Thoreau and Nineteenth-Century Science (2015).
Thorson, Robert M. The Guide to Walden Pond: An Exploration of the History, Nature, Landscape, and Literature of One of America's Most Iconic Places (2018).
Walls, Laura Dassow. Seeing New Worlds: Henry David Thoreau and 19th Century Science. University of Wisconsin. 1995.
Walls, Laura Dassow. Henry David Thoreau: A Life. The University of Chicago Press. 2017.
Ward, John William. 1969 Red, White, and Blue: Men, Books, and Ideas in American Culture. New York: Oxford University Press
External links
The Thoreau Society
The Thoreau Edition
"Writings of Emerson and Thoreau" from C-SPAN's American Writers: A Journey Through History
Texts
Works by Thoreau at Open Library
Poems by Thoreau at the Academy of American Poets
The Thoreau Reader by The Thoreau Society
The Writings of Henry David Thoreau at The Walden Woods Project
Scans of Thoreau's Land Surveys at the Concord Free Public Library
Henry David Thoreau Online– The Works and Life of Henry D. Thoreau
19th-century American essayists
19th-century American diarists
19th-century American non-fiction writers
19th-century American poets
19th-century American philosophers
19th-century American naturalists
American abolitionists
Underground Railroad people
American anarchists
American environmentalists
American lecturers
American male essayists
American male non-fiction writers
American male poets
American nature writers
American naturists
American nomads
American non-fiction environmental writers
American opinion journalists
American political philosophers
American spiritual writers
American surveyors
American tax resisters
American travel writers
American anarchist writers
Proto-anarchists
Anti-consumerists
Critics of work and the work ethic
Hall of Fame for Great Americans inductees
Harvard College alumni
Hasty Pudding alumni
Hikers
Pantheists
Simple living advocates
American philosophers of culture
Philosophers of history
Philosophers of love
American philosophers of mind
American philosophers of science
Philosophers from Massachusetts
Poets from Massachusetts
Writers from Massachusetts
People from Concord, Massachusetts
American people of French descent
Burials at Sleepy Hollow Cemetery (Concord, Massachusetts)
Tuberculosis deaths in Massachusetts
19th-century deaths from tuberculosis
1817 births
1862 deaths | Henry David Thoreau | [
"Environmental_science"
] | 9,997 | [] |
43,476 | https://en.wikipedia.org/wiki/Operations%20research | Operations research (U.S. Air Force Specialty Code: Operations Analysis), often shortened to the initialism OR, is a branch of applied mathematics that deals with the development and application of analytical methods to improve decision-making. Although the term management science is sometimes used similarly, the two fields differ in their scope and emphasis.
Employing techniques from other mathematical sciences, such as modeling, statistics, and optimization, operations research arrives at optimal or near-optimal solutions to decision-making problems. Because of its emphasis on practical applications, operations research has overlapped with many other disciplines, notably industrial engineering. Operations research is often concerned with determining the extreme values of some real-world objective: the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost). Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.
Overview
Operations research (OR) encompasses the development and the use of a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, ordinal priority approach, neural networks, expert systems, decision analysis, and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power, or develop a new technique specific to the problem at hand (and, afterwards, to that type of problem).
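As a minimal illustration of one of these techniques, the sketch below evaluates a single-server (M/M/1) queue using the standard closed-form steady-state results; the arrival and service rates are hypothetical figures chosen only for the example.

```python
# A minimal sketch of one classic OR technique: steady-state analysis of an
# M/M/1 queue (Poisson arrivals, exponential service times, one server).
# The rates below are hypothetical, chosen only for illustration.

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Return standard steady-state metrics for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("Queue is unstable: arrival rate must be below service rate.")
    rho = arrival_rate / service_rate      # server utilization
    L = rho / (1 - rho)                    # mean number in system
    W = 1 / (service_rate - arrival_rate)  # mean time in system (Little's law: L = lambda * W)
    Lq = rho**2 / (1 - rho)                # mean number waiting in queue
    Wq = Lq / arrival_rate                 # mean waiting time in queue
    return {"utilization": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# Example: customers arrive at 8 per hour; the server handles 10 per hour.
print(mm1_metrics(8.0, 10.0))
# -> utilization 0.8, about 4 in system, 0.5 h in system, 3.2 waiting, 0.4 h waiting
```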
The major sub-disciplines in modern operational research, as identified by the journal Operations Research and The Journal of the Operational Research Society, include (but are not limited to):
Computing and information technologies
Financial engineering
Manufacturing, service sciences, and supply chain management
Policy modeling and public sector work
Revenue management
Simulation
Stochastic models
Transportation theory
Game theory for strategies
Linear programming
Nonlinear programming
Integer programming (NP-complete in general, notably 0–1 integer linear programming over binary variables)
Dynamic programming, applied in aerospace engineering and economics
Information theory, used in cryptography and quantum computing
Quadratic programming, for optimizing quadratic objective functions
History
In the decades after the two world wars, the tools of operations research were more widely applied to problems in business, industry, and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize complex systems, and has become an area of active academic and industrial research.
Historical origins
In the 17th century, mathematicians Blaise Pascal and Christiaan Huygens solved problems involving complex decisions (the problem of points) by using game-theoretic ideas and expected values; others, such as Pierre de Fermat and Jacob Bernoulli, solved these types of problems using combinatorial reasoning instead. Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge. Beginning in the 20th century, the study of inventory management could be considered the origin of modern operations research, with the economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I (convoy theory and Lanchester's laws). Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these to the social sciences.
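Harris's economic order quantity is compact enough to show directly. The sketch below computes the classic EOQ formula, Q* = sqrt(2DK/h); the demand and cost figures are hypothetical.

```python
from math import sqrt

# Ford W. Harris's economic order quantity (EOQ), the classic 1913 result:
# the order size Q* that minimizes the sum of ordering and holding costs.
# The demand and cost figures below are hypothetical.

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Q* = sqrt(2*D*K/h): D units/year, K cost per order, h holding cost per unit per year."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Example: 12,000 units/year demand, $50 per order, $2 per unit per year to hold.
q_star = eoq(12_000, 50, 2)
print(f"Optimal order quantity: {q_star:.0f} units")  # about 775 units per order
```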
Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe and Robert Watson-Watt. Rowe conceived the idea as a means to analyse and improve the working of the UK's early-warning radar system, code-named "Chain Home" (CH). Initially, Rowe analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken.
Scientists in the United Kingdom (including Patrick Blackett (later Lord Blackett OM PRS), Cecil Gordon, Solly Zuckerman, (later Baron Zuckerman OM, KCB, FRS), C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson), and in the United States (George Dantzig) looked for ways to make better decisions in such areas as logistics and training schedules.
Second World War
The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included operational analysis (UK Ministry of Defence from 1962) and quantitative management.
During the Second World War close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army.
Patrick Blackett worked for several different organizations during the war. Early in the war while working for the Royal Aircraft Establishment (RAE) he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an average of over 20,000 at the start of the Battle of Britain to 4,000 in 1941.
In 1941, Blackett moved from the RAE to the Navy, after first working with RAF Coastal Command, in 1941 and then early in 1942 to the Admiralty. Blackett's team at Coastal Command's Operational Research Section (CC-ORS) included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of crucial analyses that aided the war effort. Britain introduced the convoy system to reduce shipping losses, but while the principle of using warships to accompany merchant ships was generally accepted, it was unclear whether it was better for convoys to be small or large. Convoys travel at the speed of the slowest member, so small convoys can travel faster. It was also argued that small convoys would be harder for German U-boats to detect. On the other hand, large convoys could deploy more warships against an attacker. Blackett's staff showed that the losses suffered by convoys depended largely on the number of escort vessels present, rather than the size of the convoy. Their conclusion was that a few large convoys are more defensible than many small ones.
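A toy calculation can illustrate the convoy reasoning. The sketch below is not Blackett's actual analysis; it simply assumes, following the finding above, that a convoy's expected losses depend on its escort count rather than its size, using a made-up loss function.

```python
# A toy model of the convoy question, not Blackett's actual analysis: assume
# (as his staff found) that losses per convoy depend mainly on the number of
# escorts, not on convoy size. The loss function below is a hypothetical
# illustration of that assumption.

def expected_losses_per_convoy(escorts: int, base_losses: float = 6.0) -> float:
    """Hypothetical: each added escort cuts a convoy's expected losses by a fixed share."""
    return base_losses * 0.8**escorts

ships, warships = 200, 20

for n_convoys in (10, 5, 2):  # many small convoys vs. few large ones
    escorts_each = warships // n_convoys
    total_losses = n_convoys * expected_losses_per_convoy(escorts_each)
    print(f"{n_convoys:2d} convoys of {ships // n_convoys:3d} ships, "
          f"{escorts_each:2d} escorts each -> expected losses {total_losses:.1f}")

# Under this assumption, concentrating the same warships into fewer, larger
# convoys yields lower total expected losses.
```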
While performing an analysis of the methods used by RAF Coastal Command to hunt and destroy submarines, one of the analysts asked what colour the aircraft were. As most of them were from Bomber Command, they were painted black for night-time operations. At the suggestion of CC-ORS, a test was run to see if that was the best colour to camouflage the aircraft for daytime operations in the grey North Atlantic skies. Tests showed that aircraft painted white were on average not spotted until they were 20% closer than those painted black. This change indicated that 30% more submarines would be attacked and sunk for the same number of sightings. As a result of these findings, Coastal Command changed its aircraft to use white undersurfaces.
Other work by the CC-ORS indicated that, on average, if the trigger depth of aerial-delivered depth charges was changed from 100 to 25 feet, the kill ratios would go up. The reason was that if a U-boat saw an aircraft only shortly before it arrived over the target, then at 100 feet the charges would do no damage (because the U-boat wouldn't have had time to descend as far as 100 feet), and if it saw the aircraft a long way from the target, it had time to alter course under water, so the chances of it being within the 20-foot kill zone of the charges were small. It was more efficient to attack those submarines close to the surface, when the targets' locations were better known, than to attempt their destruction at greater depths, when their positions could only be guessed. Before the change of settings from 100 to 25 feet, 1% of submerged U-boats were sunk and 14% damaged. After the change, 7% were sunk and 11% damaged; if submarines were caught on the surface but had time to submerge just before being attacked, the numbers rose to 11% sunk and 15% damaged. Blackett observed "there can be few cases where such a great operational gain had been obtained by such a small and simple change of tactics".
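The quoted percentages translate directly into relative effectiveness; a minimal calculation using only the figures above:

```python
# Relative effectiveness of the depth-charge setting change, using the rates
# quoted above (fraction of attacked U-boats sunk / damaged).
before = {"sunk": 0.01, "damaged": 0.14}   # 100-ft trigger depth
after = {"sunk": 0.07, "damaged": 0.11}    # 25-ft trigger depth

print(f"Sinkings per attack rose {after['sunk'] / before['sunk']:.0f}x "
      f"(from {before['sunk']:.0%} to {after['sunk']:.0%})")
print(f"Sunk or damaged: {before['sunk'] + before['damaged']:.0%} -> "
      f"{after['sunk'] + after['damaged']:.0%}")
```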
Bomber Command's Operational Research Section (BC-ORS) analyzed a report of a survey carried out by RAF Bomber Command. For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defenses was noted, and the recommendation was given that armor be added in the most heavily damaged areas. This recommendation was not adopted because the fact that the aircraft were able to return with these areas damaged indicated the areas were not vital, and adding armor to non-vital areas where damage is acceptable reduces aircraft performance. Their suggestion to remove some of the crew, so that an aircraft loss would result in fewer personnel losses, was also rejected by RAF command. Blackett's team made the logical recommendation that the armor be placed in the areas which were completely untouched by damage in the bombers that returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain; the areas untouched in returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft. The attribution of this story has been disputed, as a similar damage assessment study was completed in the US by the Statistical Research Group at Columbia University, the result of work done by Abraham Wald.
When Germany organized its air defences into the Kammhuber Line, it was realized by the British that if the RAF bombers were to fly in a bomber stream they could overwhelm the night fighters who flew in individual cells directed to their targets by ground controllers. It was then a matter of calculating the statistical loss from collisions against the statistical loss from night fighters to calculate how close the bombers should fly to minimize RAF losses.
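The trade-off just described is a simple one-variable optimization. The sketch below uses entirely hypothetical loss curves, not the RAF's figures, to show how a loss-minimizing stream density emerges once collision losses rise with density while night-fighter losses fall.

```python
# A toy version of the bomber-stream calculation described above, not the
# RAF's actual figures: collision losses rise with stream density while
# night-fighter losses fall, so some intermediate density minimizes the sum.
# Both loss curves are hypothetical.

def total_losses(density: float) -> float:
    collision = 0.002 * density  # closer spacing -> more collisions
    fighters = 5.0 / density     # denser stream -> fewer fighter interceptions
    return collision + fighters

densities = [d / 10 for d in range(10, 1001)]  # 1.0 to 100.0 in steps of 0.1
best = min(densities, key=total_losses)
print(f"Loss-minimizing density (toy units): {best:.0f}, "
      f"expected losses {total_losses(best):.2f}")
```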
The "exchange rate" ratio of output to input was a characteristic feature of operational research. By comparing the number of flying hours put in by Allied aircraft to the number of U-boat sightings in a given area, it was possible to redistribute aircraft to more productive patrol areas. Comparison of exchange rates established "effectiveness ratios" useful in planning. The ratio of 60 mines laid per ship sunk was common to several campaigns: German mines in British ports, British mines on German routes, and United States mines in Japanese routes.
Operational research doubled the on-target bomb rate of B-29s bombing Japan from the Marianas Islands by increasing the training ratio from 4 to 10 percent of flying hours; revealed that wolf-packs of three United States submarines were the most effective number to enable all members of the pack to engage targets discovered on their individual patrol stations; revealed that glossy enamel paint was more effective camouflage for night fighters than conventional dull camouflage paint finish, and a smooth paint finish increased airspeed by reducing skin friction.
On land, the operational research sections of the Army Operational Research Group (AORG) of the Ministry of Supply (MoS) were landed in Normandy in 1944, and they followed British forces in the advance across Europe. They analyzed, among other topics, the effectiveness of artillery, aerial bombing and anti-tank shooting.
After World War II
In 1947, under the auspices of the British Association, a symposium was organized in Dundee. In his opening address, Watson-Watt offered a definition of the aims of OR:
"To examine quantitatively whether the user organization is getting from the operation of its equipment the best attainable contribution to its overall objective."
With expanded techniques and growing awareness of the field at the close of the war, operational research was no longer limited to operational matters but was extended to encompass equipment procurement, training, logistics and infrastructure. Operations research also grew in many areas other than the military once scientists learned to apply its principles to the civilian sector. The simplex algorithm for linear programming was developed in 1947.
In the 1950s, the term Operations Research was used to describe heterogeneous mathematical methods such as game theory, dynamic programming, linear programming, warehousing, spare-parts theory, queueing theory, simulation and production control, which were used primarily in civilian industry. Scientific societies and journals on the subject of operations research were founded in the 1950s, such as the Operations Research Society of America (ORSA) in 1952 and the Institute of Management Sciences (TIMS) in 1953. Philip Morse, the head of the Weapons Systems Evaluation Group of the Pentagon, became the first president of ORSA and attracted the companies of the military-industrial complex to ORSA, which soon had more than 500 members. In the 1960s, ORSA reached 8,000 members. Consulting companies also founded OR groups. In 1953, Abraham Charnes and William Cooper published the first textbook on linear programming.
In the 1950s and 1960s, chairs of operations research were established in the U.S. and the United Kingdom (from 1964 in Lancaster) in the management faculties of universities. Further influences from the U.S. on the development of operations research in Western Europe can be traced here. The authoritative OR textbooks from the U.S. were published in Germany in German and in France in French (but not in Italian), such as the book by George Dantzig, "Linear Programming" (1963), and the book by C. West Churchman et al., "Introduction to Operations Research" (1957). The latter was also published in Spanish in 1973, opening Operations Research to Latin American readers at the same time. NATO gave important impetus to the spread of Operations Research in Western Europe; NATO headquarters (SHAPE) organised four conferences on OR in the 1950s – the one in 1956 with 120 participants – bringing OR to mainland Europe. Within NATO, OR was also known as "Scientific Advisory" (SA) and was grouped together in the Advisory Group for Aeronautical Research and Development (AGARD). SHAPE and AGARD organized an OR conference in April 1957 in Paris. When France withdrew from the NATO military command structure, the transfer of NATO headquarters from France to Belgium led to the institutionalization of OR in Belgium, where Jacques Drèze founded CORE, the Center for Operations Research and Econometrics, at the Catholic University of Leuven in 1966.
With the development of computers over the next three decades, Operations Research can now solve problems with hundreds of thousands of variables and constraints. Moreover, the large volumes of data required for such problems can be stored and manipulated very efficiently. Much of operations research (modernly known as 'analytics') relies upon stochastic variables and therefore upon access to truly random numbers. Fortunately, the cybernetics field also required the same level of randomness. The development of increasingly better random number generators has been a boon to both disciplines. Modern applications of operations research include city planning, football strategies, emergency planning, optimization of all facets of industry and economy, and, in all likelihood, terrorist attack planning as well as counterterrorist attack planning. More recently, the research approach of operations research, which dates back to the 1950s, has been criticized for producing collections of mathematical models that lack an empirical basis of data collection for applications. How to collect data is not presented in the textbooks, and because of this lack of data, there are also no computer applications in the textbooks.
Problems addressed
Critical path analysis or project planning: identifying those processes in a multiple-dependency project which affect the overall duration of the project
Floorplanning: designing the layout of equipment in a factory or components on a computer chip to reduce manufacturing time (thereby reducing cost)
Network optimization: for instance, setup of telecommunications or power system networks to maintain quality of service during outages
Resource allocation problems
Facility location
Assignment Problems:
Assignment problem
Generalized assignment problem
Quadratic assignment problem
Weapon target assignment problem
Bayesian search theory: looking for a target
Optimal search
Routing, such as determining the routes of buses so that as few buses are needed as possible
Supply chain management: managing the flow of raw materials and products based on uncertain demand for the finished products
Project production activities: managing the flow of work activities in a capital project in response to system variability through operations research tools for variability reduction and buffer allocation using a combination of allocation of capacity, inventory and time
Efficient messaging and customer response tactics
Automation: automating or integrating robotic systems in human-driven operations processes
Globalization: globalizing operations processes in order to take advantage of cheaper materials, labor, land or other productivity inputs
Transportation: managing freight transportation and delivery systems (Examples: LTL shipping, intermodal freight transport, travelling salesman problem, driver scheduling problem)
Scheduling:
Personnel staffing
Manufacturing steps
Project tasks
Network data traffic: these are known as queueing models or queueing systems.
Sports events and their television coverage
Blending of raw materials in oil refineries
Determining optimal prices, in many retail and B2B settings, within the disciplines of pricing science
Cutting stock problem: Cutting small items out of bigger ones.
Finding the optimal parameter (weights) setting of an algorithm that generates the realisation of a figured bass in Baroque compositions (classical music) by using weighted local cost and transition cost rules
Operational research is also used extensively in government where evidence-based policy is used.
Management science
The field of management science (MS) is concerned with the use of operations research models in business; Stafford Beer characterized it this way in 1967. Like operational research itself, management science is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links with economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to sometimes complex decision problems. Management scientists help businesses to achieve their goals using the scientific methods of operational research.
The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. Of course, the techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups.
Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as designing and developing new and better models of organizational excellence.
Related fields
Some of the fields that have considerable overlap with Operations Research and Management Science include:
Artificial Intelligence
Business analytics
Computer science
Data mining/Data science/Big data
Decision analysis
Decision intelligence
Engineering
Financial engineering
Forecasting
Game theory
Geography/Geographic information science
Graph theory
Industrial engineering
Inventory control
Logistics
Mathematical modeling
Mathematical optimization
Probability and statistics
Project management
Policy analysis
Queueing theory
Simulation
Social network/Transportation forecasting models
Stochastic processes
Supply chain management
Systems engineering
Applications
Applications are abundant such as in airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which it has contributed insights and solutions is vast. It includes:
Scheduling (of airlines, trains, buses etc.)
Assignment (assigning crew to flights, trains or buses; employees to projects; commitment and dispatch of power generation facilities)
Facility location (deciding most appropriate location for new facilities such as warehouses; factories or fire station)
Hydraulics & Piping Engineering (managing flow of water from reservoirs)
Health Services (information and supply chain management)
Game Theory (identifying, understanding, and developing strategies adopted by companies)
Urban Design
Computer Network Engineering (packet routing; timing; analysis)
Telecom & Data Communication Engineering (packet routing; timing; analysis)
Management is also concerned with so-called soft operational analysis, which covers methods for strategic planning, strategic decision support, and problem structuring.
In dealing with these sorts of challenges, mathematical modeling and simulation may not be appropriate or may not suffice. Therefore, during the past 30 years, a number of non-quantified modeling methods have been developed. These include:
stakeholder based approaches including metagame analysis and drama theory
morphological analysis and various forms of influence diagrams
cognitive mapping
strategic choice
robustness analysis
Societies and journals
Societies
The International Federation of Operational Research Societies (IFORS) is an umbrella organization for operational research societies worldwide, representing approximately 50 national societies including those in the US, UK, France, Germany, Italy, Canada, Australia, New Zealand, Philippines, India, Japan and South Africa. For the institutionalization of Operations Research, the foundation of IFORS in 1960 was of decisive importance, which stimulated the foundation of national OR societies in Austria, Switzerland and Germany. IFORS has held important international conferences every three years since 1957. The constituent members of IFORS form regional groups, such as that in Europe, the Association of European Operational Research Societies (EURO). Other important operational research organizations are the Simulation Interoperability Standards Organization (SISO) and the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC).
In 2004, the US-based organization INFORMS began an initiative to market the OR profession better, including a website entitled The Science of Better which provides an introduction to OR and examples of successful applications of OR to industrial problems. This initiative has been adopted by the Operational Research Society in the UK, including a website entitled Learn About OR.
Journals of INFORMS
The Institute for Operations Research and the Management Sciences (INFORMS) publishes thirteen scholarly journals about operations research, including the top two journals in their class, according to 2005 Journal Citation Reports. They are:
Decision Analysis
Information Systems Research
INFORMS Journal on Computing
INFORMS Transactions on Education (an open access journal)
Interfaces
Management Science
Manufacturing & Service Operations Management
Marketing Science
Mathematics of Operations Research
Operations Research
Organization Science
Service Science
Transportation Science
Other journals
These are listed in alphabetical order of their titles.
4OR-A Quarterly Journal of Operations Research: jointly published by the Belgian, French and Italian Operations Research Societies (Springer);
Decision Sciences published by Wiley-Blackwell on behalf of the Decision Sciences Institute
European Journal of Operational Research (EJOR): Founded in 1975, it is presently by far the largest operational research journal in the world, with around 9,000 pages of published papers per year. In 2004, its total number of citations was the second largest amongst Operational Research and Management Science journals;
INFOR Journal: published and sponsored by the Canadian Operational Research Society;
Journal of Defense Modeling and Simulation (JDMS): Applications, Methodology, Technology: a quarterly journal devoted to advancing the science of modeling and simulation as it relates to the military and defense.
Journal of the Operational Research Society (JORS): an official journal of The OR Society; this is the oldest continuously published journal of OR in the world, published by Taylor & Francis;
Military Operations Research (MOR): published by the Military Operations Research Society;
Omega - The International Journal of Management Science;
Operations Research Letters;
Opsearch: official journal of the Operational Research Society of India;
OR Insight: a quarterly journal of The OR Society published by Palgrave;
Pesquisa Operacional, the official journal of the Brazilian Operations Research Society
Production and Operations Management, the official journal of the Production and Operations Management Society
TOP: the official journal of the Spanish Statistics and Operations Research Society.
See also
Operations research topics
Black box analysis
Dynamic programming
Inventory theory
Optimal maintenance
Real options valuation
Artificial intelligence
Operations researchers
Operations researchers (category)
George Dantzig
Leonid Kantorovich
Tjalling Koopmans
Russell L. Ackoff
Stafford Beer
Alfred Blumstein
C. West Churchman
William W. Cooper
Robert Dorfman
Richard M. Karp
Ramayya Krishnan
Frederick W. Lanchester
Thomas L. Magnanti
Alvin E. Roth
Peter Whittle
Related fields
Behavioral operations research
Big data
Business engineering
Business process management
Database normalization
Engineering management
Geographic information systems
Industrial engineering
Industrial organization
Managerial economics
Military simulation
Operational level of war
Power system simulation
Project production management
Reliability engineering
Scientific management
Search-based software engineering
Simulation modeling
Strategic management
Supply chain engineering
System safety
Wargaming
References
Further reading
Classic books and articles
R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton, 1957
Abraham Charnes, William W. Cooper, Management Models and Industrial Applications of Linear Programming, Volumes I and II, New York, John Wiley & Sons, 1961
Abraham Charnes, William W. Cooper, A. Henderson, An Introduction to Linear Programming, New York, John Wiley & Sons, 1953
C. West Churchman, Russell L. Ackoff & E. L. Arnoff, Introduction to Operations Research, New York: J. Wiley and Sons, 1957
George B. Dantzig, Linear Programming and Extensions, Princeton, Princeton University Press, 1963
Lester K. Ford, Jr., D. Ray Fulkerson, Flows in Networks, Princeton, Princeton University Press, 1962
Jay W. Forrester, Industrial Dynamics, Cambridge, MIT Press, 1961
L. V. Kantorovich, "Mathematical Methods of Organizing and Planning Production" Management Science, 4, 1960, 266–422
Ralph Keeney, Howard Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, New York, John Wiley & Sons, 1976
H. W. Kuhn, "The Hungarian Method for the Assignment Problem," Naval Research Logistics Quarterly, 1–2, 1955, 83–97
H. W. Kuhn, A. W. Tucker, "Nonlinear Programming," pp. 481–492 in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability
B. O. Koopman, Search and Screening: General Principles and Historical Applications, New York, Pergamon Press, 1980
Tjalling C. Koopmans, editor, Activity Analysis of Production and Allocation, New York, John Wiley & Sons, 1951
Charles C. Holt, Franco Modigliani, John F. Muth, Herbert A. Simon, Planning Production, Inventories, and Work Force, Englewood Cliffs, NJ, Prentice-Hall, 1960
Philip M. Morse, George E. Kimball, Methods of Operations Research, New York, MIT Press and John Wiley & Sons, 1951
Robert O. Schlaifer, Howard Raiffa, Applied Statistical Decision Theory, Cambridge, Division of Research, Harvard Business School, 1961
Classic textbooks
Taha, Hamdy A., "Operations Research: An Introduction", Pearson, 10th Edition, 2016
Frederick S. Hillier & Gerald J. Lieberman, Introduction to Operations Research, McGraw-Hill: Boston MA; 10th Edition, 2014
Robert J. Thierauf & Richard A. Grosse, "Decision Making Through Operations Research", John Wiley & Sons, INC, 1970
Harvey M. Wagner, Principles of Operations Research, Englewood Cliffs, Prentice-Hall, 1969
Wentzel (Ventsel), E. S. Introduction to Operations Research, Moscow: Soviet Radio Publishing House, 1964.
History
Saul I. Gass, Arjang A. Assad, An Annotated Timeline of Operations Research: An Informal History. New York, Kluwer Academic Publishers, 2005.
Saul I. Gass (Editor), Arjang A. Assad (Editor), Profiles in Operations Research: Pioneers and Innovators. Springer, 2011
Maurice W. Kirby (Operational Research Society (Great Britain)). Operational Research in War and Peace: The British Experience from the 1930s to 1970, Imperial College Press, 2003. ,
J. K. Lenstra, A. H. G. Rinnooy Kan, A. Schrijver (editors) History of Mathematical Programming: A Collection of Personal Reminiscences, North-Holland, 1991
Charles W. McArthur, Operations Analysis in the U.S. Army Eighth Air Force in World War II, History of Mathematics, Vol. 4, Providence, American Mathematical Society, 1990
C. H. Waddington, O. R. in World War 2: Operational Research Against the U-boat, London, Elek Science, 1973.
Richard Vahrenkamp: Mathematical Management – Operations Research in the United States and Western Europe, 1945 – 1990, in: Management Revue – Socio-Economic Studies, vol. 34 (2023), issue 1, pp. 69–91.
External links
What is Operations Research?
International Federation of Operational Research Societies
The Institute for Operations Research and the Management Sciences (INFORMS)
Occupational Outlook Handbook, U.S. Department of Labor Bureau of Labor Statistics
Industrial engineering
Mathematical optimization in business
Applied statistics
Engineering disciplines
Mathematical and quantitative methods (economics)
Mathematical economics
Decision-making | Operations research | [
"Mathematics",
"Engineering"
] | 5,917 | [
"Applied mathematics",
"Industrial engineering",
"Operations research",
"nan",
"Mathematical economics",
"Applied statistics"
] |
43,487 | https://en.wikipedia.org/wiki/Probability%20density%20function | In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. Probability density is the probability per unit length, in other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample.
More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and the area under the entire curve is equal to 1.
The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.
Example
Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on.
In this example, the ratio (probability of living during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour−1). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour−1. This quantity 2 hour−1 is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour−1) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour−1)×(1 nanosecond) ≈ 6×10^−13 (using the unit conversion 3.6×10^12 nanoseconds = 1 hour).
There is a probability density function f with f(5 hours) = 2 hour−1. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
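A small numerical sketch of this idea (the piecewise-constant density below is invented solely to make the arithmetic concrete, taking the value 2 hour−1 near 5 hours): the probability over a short window is approximately the density times the window length, and over a longer window it is the integral of the density.

```python
# Toy density of death time, in units of 1/hour (invented for illustration):
# f(t) = 2 on [4.75, 5.25] and 0 elsewhere, so it integrates to 1.
def f(t):
    return 2.0 if 4.75 <= t <= 5.25 else 0.0

# Short window: probability ~ density * window length.
print(f(5.0) * 0.01)        # ~0.02 = P(die between 5 and 5.01 hours)

# Longer window: integrate the density numerically (midpoint rule).
def prob(a, b, n=10_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(prob(4.75, 5.25))     # ~1.0 over the whole support
```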
Absolutely continuous univariate distributions
A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable has density , where is a non-negative Lebesgue-integrable function, if:
Hence, if is the cumulative distribution function of , then:
and (if is continuous at )
Intuitively, one can think of as being the probability of falling within the infinitesimal interval .
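In standard notation, with X the random variable, f_X its density, and F_X its cumulative distribution function (the symbols are chosen here for illustration), these relations read:

```latex
% Standard defining relations for an absolutely continuous density
% (notation X, f_X, F_X chosen here for illustration).
\Pr[a \le X \le b] = \int_a^b f_X(x)\,dx, \qquad
F_X(x) = \int_{-\infty}^{x} f_X(u)\,du, \qquad
f_X(x) = \frac{d}{dx} F_X(x) \quad \text{(wherever } F_X \text{ is differentiable)}.
```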
Formal definition
(This definition may be extended to any probability distribution using the measure-theoretic definition of probability.)
A random variable with values in a measurable space (usually with the Borel sets as measurable subsets) has as probability distribution the pushforward measure X∗P on : the density of with respect to a reference measure on is the Radon–Nikodym derivative:
That is, f is any measurable function with the property that:
for any measurable set
Discussion
In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof).
It is not possible to define a density with reference to an arbitrary measure (e.g. one can not choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere.
Further details
Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere.
The standard normal distribution has probability density f(x) = exp(−x^2/2) / √(2π).
If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as E[X] = ∫ x f(x) dx, with the integral taken over the whole real line.
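A quick numerical check of the uniform example above, assuming the interval [0, 1/2] with density 2: the density exceeds one, yet it integrates to one, and the expected value comes out to 1/4.

```python
import numpy as np

# Uniform density on [0, 1/2]: f(x) = 2 there and 0 elsewhere (assumed example).
x = np.linspace(0.0, 0.5, 100_001)
f = np.full_like(x, 2.0)

print(f.max())              # 2.0  -- a density may exceed 1
print(np.trapz(f, x))       # ~1.0 -- but it still integrates to 1
print(np.trapz(x * f, x))   # ~0.25 -- E[X] = integral of x f(x) dx
```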
Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point.
A distribution has a density function if its cumulative distribution function is absolutely continuous. In this case the cumulative distribution function is almost everywhere differentiable, and its derivative can be used as the probability density.
If a probability distribution admits a density, then the probability of every one-point set is zero; the same holds for finite and countable sets.
Two probability densities and represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero.
In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:
If is an infinitely small number, the probability that is included within the interval is equal to , or:
Link between discrete and continuous distributions
It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above; it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability 1/2 each. The density of probability associated with this variable is:
More generally, if a discrete variable can take different values among real numbers, then the associated probability density function is:
where are the discrete values accessible to the variable and are the probabilities associated with these values.
This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability.
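Written out explicitly (with δ the Dirac delta; the value and probability symbols below are chosen for illustration), the Rademacher case and the general discrete case read:

```latex
% Rademacher variable, and a general discrete variable taking values x_1, ..., x_n
% with probabilities p_1, ..., p_n (symbols chosen here for illustration).
f(x) = \tfrac{1}{2}\,\delta(x+1) + \tfrac{1}{2}\,\delta(x-1), \qquad
f(x) = \sum_{i=1}^{n} p_i\,\delta(x - x_i), \quad p_i = \Pr[X = x_i].
```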
Families of densities
It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by and respectively, giving the family of densities
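A standard way to write this family, with μ denoting the mean and σ² the variance:

```latex
% Normal family of densities, parametrized by mean mu and variance sigma^2.
f(x \mid \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}
  \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right).
```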
Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring— equals 1). This normalization factor is outside the kernel of the distribution.
Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones.
Densities associated with multiple variables
For continuous random variables , it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the variables, such that, for any domain in the -dimensional space of the values of the variables , the probability that a realisation of the set variables falls inside the domain is
If is the cumulative distribution function of the vector , then the joint probability density function can be computed as a partial derivative
Marginal densities
For , let be the probability density function associated with variable alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables by integrating over all values of the other variables:
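Writing the variables as X_1, …, X_n (symbols chosen here for illustration), the marginal density of X_1, for instance, is:

```latex
% Marginal density of X_1: integrate the joint density over all other variables.
f_{X_1}(x_1) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty}
  f_{X_1,\dots,X_n}(x_1, x_2, \dots, x_n)\; dx_2 \cdots dx_n .
```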
Independence
Continuous random variables admitting a joint density are all independent from each other if
Corollary
If the joint probability density function of a vector of random variables can be factored into a product of functions of one variable
(where each is not necessarily a density) then the variables in the set are all independent from each other, and the marginal probability density function of each of them is given by
Example
This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call a 2-dimensional random vector of coordinates : the probability to obtain in the quarter plane of positive and is
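As a concrete stand-in for this example (the particular joint density is chosen here for illustration), take two independent standard normal coordinates; the probability of landing in the quarter plane of positive coordinates can then be evaluated numerically and comes out to about 1/4.

```python
import numpy as np

# Illustrative joint density (our choice): two independent standard normal
# coordinates, so f(x, y) = phi(x) * phi(y).
def phi(t):
    return np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

x = np.linspace(0, 8, 2001)          # truncating the tail beyond 8 is negligible
y = np.linspace(0, 8, 2001)
X, Y = np.meshgrid(x, y)
joint = phi(X) * phi(Y)

# Double integral over the quarter plane x > 0, y > 0.
p = np.trapz(np.trapz(joint, y, axis=0), x)
print(p)   # ~0.25: each coordinate is positive with probability 1/2, independently
```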
Function of random variables and change of variables in the probability density function
If the probability density function of a random variable (or vector) is given as , it is possible (but often not necessary; see below) to calculate the probability density function of some variable . This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape using a known (for instance, uniform) random number generator.
It is tempting to think that in order to find the expected value , one must first find the probability density of the new random variable . However, rather than computing
one may find instead
The values of the two integrals are the same in all cases in which both and actually have probability density functions. It is not necessary that be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician.
Scalar to scalar
Let be a monotonic function, then the resulting density function is
Here denotes the inverse function.
This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,
or
For functions that are not monotonic, the probability density function for is
where is the number of solutions in for the equation , and are these solutions.
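A small simulation sketch of the monotonic case (the particular transform is chosen here for illustration): if X is standard normal and Y = exp(X), the rule gives f_Y(y) = f_X(ln y)/y, which matches an empirical histogram of the transformed samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monotone change of variable (illustrative choice): X ~ N(0, 1), Y = exp(X).
# Then f_Y(y) = f_X(ln y) * |d(ln y)/dy| = f_X(ln y) / y.
samples = np.exp(rng.standard_normal(200_000))

def f_X(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def f_Y(y):
    return f_X(np.log(y)) / y

# Compare an empirical density estimate of Y with the transformed density.
counts, edges = np.histogram(samples, bins=60, range=(0.01, 6.0))
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = counts / (len(samples) * np.diff(edges))
print(np.max(np.abs(empirical - f_Y(centers))))   # small: binning plus Monte Carlo error
```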
Vector to vector
Suppose is an -dimensional random variable with joint density . If , where is a bijective, differentiable function, then has density :
with the differential regarded as the Jacobian of the inverse of , evaluated at .
For example, in the 2-dimensional case , suppose the transform is given as , with inverses , . The joint distribution for y = (y1, y2) has density
Vector to scalar
Let be a differentiable function and be a random vector taking values in , be the probability density function of and be the Dirac delta function. It is possible to use the formulas above to determine , the probability density function of , which will be given by
This result leads to the law of the unconscious statistician:
Proof:
Let be a collapsed random variable with probability density function (i.e., a constant equal to zero). Let the random vector and the transform be defined as
It is clear that is a bijective mapping, and the Jacobian of is given by:
which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that
which if marginalized over leads to the desired probability density function.
Sums of independent random variables
The probability density function of the sum of two independent random variables and , each of which has a probability density function, is the convolution of their separate density functions:
It is possible to generalize the previous relation to a sum of N independent random variables, with densities :
This can be derived from a two-way change of variables involving and , similarly to the example below for the quotient of independent random variables.
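A numerical sketch of the two-variable case (the uniform densities are chosen here for illustration): convolving the density of a Uniform(0, 1) variable with itself reproduces the triangular density of the sum of two independent such variables.

```python
import numpy as np

# Density of Uniform(0, 1), sampled on a grid (illustrative choice).
dx = 0.001
x = np.arange(0.0, 1.0, dx)
f = np.ones_like(x)

# Numerical convolution approximates the density of the sum on [0, 2].
conv = np.convolve(f, f) * dx
grid = np.arange(len(conv)) * dx

# Analytic density of the sum: triangular, min(t, 2 - t) on [0, 2].
triangular = np.where(grid <= 1.0, grid, 2.0 - grid)
print(np.max(np.abs(conv - triangular)))   # ~1e-3, on the order of the grid spacing
```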
Products and quotients of independent random variables
Given two independent random variables and , each of which has a probability density function, the density of the product and quotient can be computed by a change of variables.
Example: Quotient distribution
To compute the quotient of two independent random variables and , define the following transformation:
Then, the joint density can be computed by a change of variables from U,V to Y,Z, and can be derived by marginalizing out from the joint density.
The inverse transformation is
The absolute value of the Jacobian matrix determinant of this transformation is:
Thus:
And the distribution of can be computed by marginalizing out :
This method crucially requires that the transformation from U,V to Y,Z be bijective. The above transformation meets this because can be mapped directly back to , and for a given the quotient is monotonic. This is similarly the case for the sum , difference and product .
Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables.
Example: Quotient of two standard normals
Given two standard normal variables and , the quotient can be computed as follows. First, the variables have the following density functions:
We transform as described above:
This leads to:
This is the density of a standard Cauchy distribution.
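A quick simulation check of this result (sample size and binning are chosen here for illustration): the empirical density of the ratio of two independent standard normal samples closely tracks the standard Cauchy density 1/(π(1 + y^2)).

```python
import numpy as np

rng = np.random.default_rng(1)

# Ratio of two independent standard normal samples.
u = rng.standard_normal(500_000)
v = rng.standard_normal(500_000)
ratio = u / v

def cauchy_pdf(t):
    return 1.0 / (np.pi * (1.0 + t**2))

# Empirical density on [-8, 8], normalized by the full sample size.
counts, edges = np.histogram(ratio, bins=81, range=(-8, 8))
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = counts / (len(ratio) * np.diff(edges))
print(np.max(np.abs(empirical - cauchy_pdf(centers))))   # small Monte Carlo error
```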
See also
Uses as position probability density:
References
Further reading
Chapters 7 to 9 are about continuous variables.
External links
Functions related to probability distributions
Equations of physics
Probability theory | Probability density function | [
"Physics",
"Mathematics"
] | 3,079 | [
"Mathematical objects",
"Equations of physics",
"Equations"
] |
43,519 | https://en.wikipedia.org/wiki/List%20of%20hypothetical%20Solar%20System%20objects | A hypothetical Solar System object is a planet, natural satellite, subsatellite or similar body in the Solar System whose existence is not known, but has been inferred from observational scientific evidence. Over the years a number of hypothetical planets have been proposed, and many have been disproved. However, even today there is scientific speculation about the possibility of planets yet unknown that may exist beyond the range of our current knowledge.
Planets
Counter-Earth, a planet situated on the other side of the Sun from that of the Earth.
Fifth planet (hypothetical), historical speculation about a planet between the orbits of Mars and Jupiter.
Phaeton, a planet situated between the orbits of Mars and Jupiter whose destruction supposedly led to the formation of the asteroid belt. This hypothesis is now considered unlikely, since the asteroid belt has far too little mass to have resulted from the explosion of a large planet. In 2018, a study from researchers at the University of Florida found the asteroid belt was created from the fragments of at least five or six ancient planetary-sized objects instead of a single planet.
Krypton, named after the destroyed native world of Superman, theorized by Michael Ovenden to have been a gas giant between Mars and Jupiter nearly as large as Saturn and also attributed for the formation of the asteroid belt
Planet V, a planet thought by John Chambers and Jack Lissauer to have once existed between Mars and the asteroid belt, based on computer simulations.
Various planets beyond Neptune:
Planet Nine, a planet proposed to explain apparent alignments in the orbits of a number of distant trans-Neptunian objects.
Planet X, a hypothetical planet beyond Neptune. Initially employed to account for supposed perturbations (systematic deviations) in the orbits of Uranus and Neptune, belief in its existence ultimately inspired the search for Pluto. Though the concept has since been abandoned following more precise measurements of Neptune's mass, which accounted for all observed perturbations, it has been re-applied to account for supposed deviations in the motions of Kuiper belt objects. Such explanations are still controversial, however.
Hyperion, a large distant 10th planet theorized in 2000 to have had an effect on Kuiper Belt formation.
Tyche, a hypothetical planet in the Oort Cloud supposedly responsible for producing the statistical excess in long period comets in a band. Results from the WISE telescope survey in 2014 have ruled it out.
Up to three planets at 42 (named Oceanus), 56, and 72 AU (both unnamed) from the sun respectively, proposed by Thomas Jefferson Jackson See in 1909.
Brahma and Vishnu, proposed by Venkatesh P. Ketakar.
Hades, proposed by Theodor Grigull
"Planet Ten" as proposed by Volk and Malhotra, a Mars-sized planetoid believed to be responsible for the inclination of Kuiper Belt objects beyond the Kuiper cliff at 50 AU
"Planet Ten" as proposed by Sverre Aarseth and Carlos and Raúl de la Fuente Marcos, which they believe stabilizes the orbits of other Kuiper Belt objects
Planets O, P, Q, R, S, T, and U, proposed by William Henry Pickering
A Trans-Plutonian planet proposed by Tadashi Mukai and Patryk Sofia Lykawka, roughly the size of Earth or Mars with an eccentric orbit between 100 and 200 AU
Another Trans-Neptunian planet at 1,500 AU away from the Sun, proposed by Rodney Gomes in 2012
Theia or Orpheus, a Mars-sized impactor believed to have collided with the Earth roughly 4.5 billion years ago; an event which created the Moon. Evidence from 2019 suggests that it may have originated in the outer Solar System.
Vulcan, a hypothetical planet once believed to exist inside the orbit of Mercury. Initially proposed as the cause for the perturbations in the orbit of Mercury, some astronomers spent many years searching for it, with many instances of people claiming to have found it. The perturbations in Mercury's orbit were later accounted for via Einstein's General Theory of Relativity.
Vulcanoids, asteroids that may exist within a gravitationally stable region inside Mercury's orbit. They may have originated as debris resulting from a collision between Mercury and another protoplanet, stripping away much of Mercury's inner crust and mantle. None have been detected by STEREO or SOHO.
The lack of vulcanoids led to a suggestion in 2016 that a super-Earth planet that once orbited the Sun closer to Mercury was able to clear its neighborhood before spiraling down into the Sun.
The Fifth Giant is a hypothetical fifth giant planet originally in an orbit between Saturn and Uranus that was ejected from the Solar System into interstellar space after a close encounter with Jupiter, resulting in a rapid divergence of Jupiter's and Saturn's orbits which may have ensured the orbital stability of the terrestrial planets in the inner Solar System. It may also have precipitated the Late Heavy Bombardment of the inner Solar System. The Fifth Giant may instead have been retained as the hypothetical Planet Nine if either the gravity of a nearby star or drag from the gaseous remnants of the solar nebula reduced the eccentricity of its orbit.
A and B, two super-Earth (or even supergiant) planets theorized by Michael Woolfson as part of his Capture theory on Solar System formation. Originally the Solar System's two innermost planets, these two collided, ejecting A (save its moons Mars, the Moon, Pluto, and the other dwarf planets) out of the Solar System and shattering B to form the Earth, Venus, Mercury, asteroid belt, and comets.
A captured planet from another solar system was proposed to exist in the Oort cloud much further than the hypothetical Planet Nine.
Moons
Chiron, a moon of Saturn supposedly sighted by Hermann Goldschmidt in 1861 but never observed by anyone else.
Chrysalis, a hypothetical moon of Saturn, named in 2022 by scientists of the Massachusetts Institute of Technology using data from the Cassini–Huygens mission, thought to have been torn apart by Saturn's tidal forces somewhere between 100 and 200 million years ago, with up to 99% of its mass being swallowed by Saturn and the remaining 1% forming the rings of Saturn.
Other moons of Earth, such as Petit's moon, Lilith, Waltemath's moons and Bagby's moons.
Mercury's moon, hypothesised to account for an unusual pattern of radiation detected by Mariner 10 in the vicinity of Mercury. Subsequent data from the mission revealed the actual source to be the star 31 Crateris.
Neith, a purported moon of Venus, falsely detected by a number of telescopic observers in the 17th and 18th centuries. Now known not to exist, the object has been explained as a series of misidentified stars and internal reflections inside the optics of particular telescope designs. It was also alternatively proposed by Jean-Charles Houzeau to be a heliocentric planet that orbited the Sun every 283 days and be in conjunction with Venus every 1080 days.
Themis, a moon of Saturn which astronomer William Pickering claimed to have discovered in 1905, but which was never observed again.
Stars
Nemesis, a brown or red dwarf whose existence was suggested in 1984 by physicist Richard A. Muller, based on purported periodicities in mass extinctions within Earth's fossil record. Its regular passage through the Solar System's Oort cloud would send large numbers of comets towards Earth, massively increasing the chances of an impact. It was also believed to be the cause of the minor planet Sedna's unusually elongated orbit. The existence of Nemesis in the modern Solar System was ruled out in 2014 after the infrared survey performed by the WISE spacecraft found no brown dwarf up to from the Sun.
Raymond Arthur Lyttleton's model on the formation of the Solar System had a former binary star system by the Sun, which merged and broke into two due to rotational instability forming Jupiter and Saturn.
Fred Hoyle's model on Solar System formation had a former and more massive binary companion to the Sun that exploded in a supernova due to nuclear fusion failing within its interior and it collapsing as a result (which had not yet been verified at the time). The star's supernova remnant would be captured by the Sun and shaped into a protoplanetary disk, from which the planets formed.
One assumption suggests that the hypothetical Planet Nine is actually a primordial black hole.
See also
Subsatellite
Oort cloud
Planets beyond Neptune
Nebular hypothesis
Tenth planet (disambiguation)
Theoretical planetology
Trans-Neptunian object
Trans-Neptunian objects in fiction
References
Planetary science
Solar System | List of hypothetical Solar System objects | [
"Astronomy"
] | 1,770 | [
"Astronomical hypotheses",
"Astronomical sub-disciplines",
"Outer space",
"Astronomical myths",
"Hypothetical astronomical objects",
"Planetary science",
"Astronomical objects",
"Solar System"
] |
43,530 | https://en.wikipedia.org/wiki/Schist | Schist ( ) is a medium-grained metamorphic rock showing pronounced schistosity (named for the rock). This means that the rock is composed of mineral grains easily seen with a low-power hand lens, oriented in such a way that the rock is easily split into thin flakes or plates. This texture reflects a high content of platy minerals, such as mica, talc, chlorite, or graphite. These are often interleaved with more granular minerals, such as feldspar or quartz.
Schist typically forms during regional metamorphism accompanying the process of mountain building (orogeny) and usually reflects a medium grade of metamorphism. Schist can form from many different kinds of rocks, including sedimentary rocks such as mudstones and igneous rocks such as tuffs. Schist metamorphosed from mudstone is particularly common and is often very rich in mica (a mica schist). Where the type of the original rock (the protolith) is discernible, the schist is usually given a name reflecting its protolith, such as schistose metasandstone. Otherwise, the names of the constituent minerals will be included in the rock name, such as quartz-felspar-biotite schist.
Schist bedrock can pose a challenge for civil engineering because of its pronounced planes of weakness.
Etymology
The word schist is derived ultimately from the Greek word σχίζειν (schízein), meaning "to split", which refers to the ease with which schists can be split along the plane in which the platy minerals lie.
Definition
Before the mid-19th century, the terms slate, shale and schist were not sharply differentiated by those involved with mining. Geologists define schist as medium-grained metamorphic rock that shows well-developed schistosity. Schistosity is a thin layering of the rock produced by metamorphism (a foliation) that permits the rock to easily be split into flakes or slabs less than thick. The mineral grains in a schist are typically from in size and so are easily seen with a 10× hand lens. Typically, over half the mineral grains in a schist show a preferred orientation. Schists make up one of the three divisions of metamorphic rock by texture, with the other two divisions being gneiss, which has poorly developed schistosity and thicker layering, and granofels, which has no discernible schistosity.
Schists are defined by their texture without reference to their composition, and while most are a result of medium-grade metamorphism, they can vary greatly in mineral makeup. However, schistosity normally develops only when the rock contains abundant platy minerals, such as mica or chlorite. Grains of these minerals are strongly oriented in a preferred direction in schist, often also forming very thin parallel layers. The ease with which the rock splits along the aligned grains accounts for the schistosity. Though not a defining characteristic, schists very often contain porphyroblasts (individual crystals of unusual size) of distinctive minerals, such as garnet, staurolite, kyanite, sillimanite, or cordierite.
Because schists are a very large class of metamorphic rock, geologists will formally describe a rock as a schist only when the original type of the rock prior to metamorphism (the protolith) is unknown and its mineral content is not yet determined. Otherwise, the modifier schistose will be applied to a more precise type name, such as schistose semipelite (when the rock is known to contain moderate amounts of mica) or a schistose metasandstone (if the protolith is known to have been a sandstone). If all that is known is that the protolith was a sedimentary rock, the schist will be described as a paraschist, while if the protolith was an igneous rock, the schist will be described as an orthoschist. Mineral qualifiers are important when naming a schist. For example, a quartz-feldspar-biotite schist is a schist of uncertain protolith that contains biotite mica, feldspar, and quartz in order of apparent decreasing abundance.
Lineated schist has a strong linear fabric in a rock which otherwise has well-developed schistosity.
Formation
Schistosity is developed at elevated temperature when the rock is more strongly compressed in one direction than in other directions (nonhydrostatic stress). Nonhydrostatic stress is characteristic of regional metamorphism where mountain building is taking place (an orogenic belt). The schistosity develops perpendicular to the direction of greatest compression, also called the shortening direction, as platy minerals are rotated or recrystallized into parallel layers. While platy or elongated minerals are most obviously reoriented, even quartz or calcite may take up preferred orientations. At the microscopic level, schistosity is divided into internal schistosity, in which inclusions within porphyroblasts take a preferred orientation, and external schistosity, which is the orientation of grains in the surrounding medium-grained rock.
The composition of the rock must permit formation of abundant platy minerals. For example, the clay minerals in mudstone are metamorphosed to mica, producing a mica schist. Early stages of metamorphism convert mudstone to a very fine-grained metamorphic rock called slate, which with further metamorphism becomes fine-grained phyllite. Further recrystallization produces medium-grained mica schist. If the metamorphism proceeds further, the mica schist experiences dehydration reactions that convert platy minerals to granular minerals such as feldspars, decreasing schistosity and turning the rock into a gneiss.
Other platy minerals found in schists include chlorite, talc, and graphite. Chlorite schist is typically formed by metamorphism of ultramafic igneous rocks, as is talc schist. Talc schist also forms from metamorphism of talc-bearing carbonate rocks formed by hydrothermal alteration. Graphite schist is uncommon but can form from metamorphism of sedimentary beds containing abundant organic carbon. This may be of algal origin. Graphite schist is known to have experienced greenschist facies metamorphism, for example in the northern Andes.
Metamorphism of felsic volcanic rock, such as tuff, can produce quartz-muscovite schist.
Engineering considerations
In geotechnical engineering a schistosity plane often forms a discontinuity that may have a large influence on the mechanical behavior (strength, deformation, etc.) of rock masses in, for example, tunnel, foundation, or slope construction. A hazard may exist even in undisturbed terrain. On August 17, 1959, a magnitude 7.2 earthquake destabilized a mountain slope near Hebgen Lake, Montana, composed of schist. This caused a massive landslide that killed 26 people camping in the area.
See also
References
External links
Photographs of Manhattan schist.
Metamorphic rocks
Natural materials | Schist | [
"Physics"
] | 1,560 | [
"Natural materials",
"Materials",
"Matter"
] |
43,536 | https://en.wikipedia.org/wiki/Beeswax | Italic text
Beeswax (also known as cera alba) is a natural wax produced by honey bees of the genus Apis. The wax is formed into scales by eight wax-producing glands in the abdominal segments of worker bees, which discard it in or at the hive. The hive workers collect and use it to form cells for honey storage and larval and pupal protection within the beehive. Chemically, beeswax consists mainly of esters of fatty acids and various long-chain alcohols.
Beeswax has been used since prehistory as the first plastic, as a lubricant and waterproofing agent, in lost wax casting of metals and glass, as a polish for wood and leather, for making candles, as an ingredient in cosmetics and as an artistic medium in encaustic painting.
Beeswax is edible, having similarly negligible toxicity to plant waxes, and is approved for food use in most countries and in the European Union under the E number E901. However, due to its inability to be broken down by the human digestive system, it has insignificant nutritional value.
Production
Beeswax is formed by worker bees, which secrete it from eight wax-producing mirror glands on the inner sides of the sternites (the ventral shield or plate of each segment of the body) on abdominal segments 4 to 7. The sizes of these wax glands depend on the age of the worker, and after many daily flights, these glands gradually begin to atrophy.
The new wax is initially glass-clear and colorless, becoming opaque after chewing and being contaminated with pollen by the hive worker bees, becoming progressively yellower or browner by incorporation of pollen oils and propolis. The wax scales are about across and thick, and about 1100 are needed to make a gram of wax. Worker bees use the beeswax to build honeycomb cells. For the wax-making bees to secrete wax, the ambient temperature in the hive must be .
The book Beeswax Production, Harvesting, Processing and Products suggests of beeswax is sufficient to store of honey. Another study estimated that of wax can store of honey.
Sugars from honey are metabolized into beeswax in wax-gland-associated fat cells. The amount of honey used by bees to produce wax has not been accurately determined, but according to Whitcomb's 1946 experiment, of honey yields of wax.
Processing
Beeswax as a product for human use may come from cappings cut off the cells in the process of extraction, from old comb that is scrapped, or from unwanted burr comb and brace comb removed from a hive. Its color varies from nearly white to brownish, but most often is a shade of yellow, depending on purity, the region, and the type of flowers gathered by the bees. The wax from the brood comb of the honey bee hive tends to be darker than wax from the honeycomb because impurities accumulate more quickly in the brood comb. Due to the impurities, the wax must be rendered before further use. The leftovers are called slumgum, which is derived from old breeding rubbish (pupa casings, cocoons, shed larva skins, etc.), bee droppings, propolis, and general rubbish.
The wax may be clarified further by heating in water. As with petroleum waxes, it may be softened by dilution with mineral oil or vegetable oil to make it more workable at room temperature.
Physical characteristics
Beeswax is a fragrant solid at room temperature. The colors are light yellow, medium yellow, or dark brown and white. Beeswax is a tough wax formed from a mixture of several chemical compounds.
Beeswax has a relatively low melting point range of 62 to 64 °C (144 to 147 °F). If beeswax is heated above 85 °C (185 °F), discoloration occurs. The flash point of beeswax is 204.4 °C (399.9 °F).
When natural beeswax is cold, it is brittle, and its fracture is dry and granular. At room temperature (conventionally taken as about 20 °C), it is tenacious and it softens further at human body temperature (37 °C).
Chemical composition
An approximate chemical formula for beeswax is C15H31COOC30H61. Its main constituents are palmitate, palmitoleate, and oleate esters of long-chain (30–32 carbons) aliphatic alcohols, with the ratio of triacontanyl palmitate CH3(CH2)29O-CO-(CH2)14CH3 to cerotic acid CH3(CH2)24COOH, the two principal constituents, being 6:1. Beeswax can be classified generally into European and Oriental types. The saponification value is lower (3–5) for European beeswax, and higher (8–9) for Oriental types. The analytical characterization can be done by high-temperature gas chromatography.
Adulteration
Beeswax faces challenges in the market due to the presence of various suppliers, making it difficult to distinguish authentic from fake variants. Adulterated beeswax often contains paraffin and other toxic additives, posing potential health risks and lacking the genuine honey-scented aroma of pure beeswax.
Pharmaceutical grades of pure beeswax are distributed in the form of pellets for the cosmetic, pharmaceutical and food industries, among other uses.
Production
In 2020, world production of beeswax was 62,116 tonnes, led by India with 38% of the total.
Uses
Candle-making has long involved the use of beeswax, which burns readily and cleanly, and this material was traditionally prescribed for the making of the Paschal candle or "Easter candle". Beeswax candles are purported to be superior to other wax candles, because they burn brighter and longer, do not bend, and burn cleaner. It is further recommended for the making of other candles used in the liturgy of the Roman Catholic Church. Beeswax is also the candle constituent of choice in the Eastern Orthodox Church.
Refined beeswax plays a prominent role in art materials both as a binder in encaustic paint and as a stabilizer in oil paint to add body.
Beeswax is an ingredient in surgical bone wax, which is used during surgery to control bleeding from bone surfaces; shoe polish and furniture polish can both use beeswax as a component, dissolved in turpentine or sometimes blended with linseed oil or tung oil; modeling waxes can also use beeswax as a component; pure beeswax can also be used as an organic surfboard wax. Beeswax blended with pine rosin is used for waxing, and can serve as an adhesive to attach reed plates to the structure inside a squeezebox. It can also be used to make Cutler's resin, an adhesive used to glue handles onto cutlery knives. It is used in Eastern Europe in egg decoration; it is used for writing, via resist dyeing, on batik eggs (as in pysanky) and for making beaded eggs.
Beeswax is used by percussionists to make a surface on tambourines for thumb rolls. It can also be used as a metal injection moulding binder component along with other polymeric binder materials.
Beeswax was formerly used in the manufacture of phonograph cylinders. It may still be used to seal formal legal or royal decrees and academic parchments, such as by placing an awarding stamp or imprimatur of the university upon completion of postgraduate degrees.
Purified and bleached beeswax is used in the production of food, cosmetics, and pharmaceuticals. The three main types of beeswax products are yellow, white, and beeswax absolute. Yellow beeswax is the crude product obtained from the honeycomb, white beeswax is bleached or filtered yellow beeswax, and beeswax absolute is yellow beeswax treated with alcohol. In food preparation, it is used as a coating for cheese; by sealing out the air, protection is given against spoilage (mold growth). Beeswax may also be used as a food additive E901, in small quantities acting as a glazing agent, which serves to prevent water loss, or used to provide surface protection for some fruits. Soft gelatin capsules and tablet coatings may also use E901. Beeswax is also a common ingredient of natural chewing gum. The wax monoesters in beeswax are poorly hydrolysed in the guts of humans and other mammals, so they have insignificant nutritional value. Some birds, such as honeyguides, can digest beeswax. Beeswax is the main diet of wax moth larvae.
The use of beeswax in skin care and cosmetics has been increasing. A German study found beeswax to be superior to similar barrier creams (usually mineral oil-based creams such as petroleum jelly), when used according to its protocol.
Beeswax is used in lip balm, lip gloss, hand creams, salves, and moisturizers; and in cosmetics such as eye shadow, blush, and eye liner. Beeswax is also an important ingredient in moustache wax and hair pomades, which make hair look sleek and shiny.
In oil spill control, beeswax is processed to create Petroleum Remediation Product (PRP). It is used to absorb oil or petroleum-based pollutants from water.
Historical uses
Beeswax was among the first plastics to be used, alongside other natural polymers such as gutta-percha, horn, tortoiseshell, and shellac. For thousands of years, beeswax has had a wide variety of applications; it has been found in the tombs of Egypt, in wrecked Viking ships, and in Roman ruins. Beeswax never goes bad and can be heated and reused. Historically, it has been used:
As candles - the oldest intact beeswax candles north of the Alps were found in the Alamannic graveyard of Oberflacht, Germany, dating to 6th/7th century AD
In the manufacture of cosmetics
As a modelling material in the lost-wax casting process, or cire perdue
For wax tablets used for a variety of writing purposes
In encaustic paintings such as the Fayum mummy portraits
In bow making
To strengthen and preserve sewing thread, cordage, shoe laces, etc.
As a component of sealing wax
To strengthen and to forestall splitting and cracking of wind instrument reeds
To form the mouthpieces of a didgeridoo, and the frets on the Philippine kutiyapi – a type of boat lute
As a sealant or lubricant for bullets in cap and ball firearms
To stabilize the military explosive Torpex – before being replaced by a petroleum-based product
In producing Javanese batik
As an ancient form of dental tooth filling
As the joint filler in the slate bed of pool and billiard tables.
See also
Carnauba wax
Candelilla wax
Paraffin wax
Ozokerite (ceresin)
Spermaceti
References
External links
The chemistry of bees Joel Loveridge, School of Chemistry, University of Bristol, accessed November 2005
Bee products
Animal glandular products
Waxes
Biodegradable materials
Sewing equipment
Articles containing video clips
E-number additives | Beeswax | [
"Physics",
"Chemistry"
] | 2,318 | [
"Biodegradable materials",
"Biodegradation",
"Materials",
"Matter",
"Waxes"
] |
43,551 | https://en.wikipedia.org/wiki/Ruby | Ruby is a pinkish red to blood-red colored gemstone, a variety of the mineral corundum (aluminium oxide). Ruby is one of the most popular traditional jewelry gems and is very durable. Other varieties of gem-quality corundum are called sapphires. Ruby is one of the traditional cardinal gems, alongside amethyst, sapphire, emerald, and diamond. The word ruby comes from ruber, Latin for red. The color of a ruby is due to the element chromium.
Some gemstones that are popularly or historically called rubies, such as the Black Prince's Ruby in the British Imperial State Crown, are actually spinels. These were once known as "Balas rubies".
The quality of a ruby is determined by its color, cut, and clarity, which, along with carat weight, affect its value. The brightest and most valuable shade of red, called blood-red or pigeon blood, commands a large premium over other rubies of similar quality. After color follows clarity: similar to diamonds, a clear stone will command a premium, but a ruby without any needle-like rutile inclusions may indicate that the stone has been treated. Ruby is the traditional birthstone for July and is usually pinker than garnet, although some rhodolite garnets have a similar pinkish hue to most rubies. The world's most valuable ruby to be sold at auction is the Estrela de Fura, which sold for US$34.8 million.
Physical properties
Rubies have a hardness of 9.0 on the Mohs scale of mineral hardness. Among the natural gems, only moissanite and diamond are harder, with diamond having a Mohs hardness of 10.0 and moissanite falling somewhere in between corundum (ruby) and diamond in hardness. Sapphire, ruby, and pure corundum are α-alumina, the most stable form of Al2O3, in which 3 electrons leave each aluminium ion to join the regular octahedral group of six nearby O2− ions; in pure corundum this leaves all of the aluminium ions with a very stable configuration of no unpaired electrons or unfilled energy levels, and the crystal is perfectly colorless and transparent except for flaws.
When a chromium atom replaces an occasional aluminium atom, it too loses 3 electrons to become a Cr3+ ion to maintain the charge balance of the Al2O3 crystal. However, the Cr3+ ions are larger and have electron orbitals in different directions than aluminium. The octahedral arrangement of the O2− ions is distorted, and the energy levels of the different orbitals of those Cr3+ ions are slightly altered because of the directions to the O2− ions. Those energy differences correspond to absorption in the ultraviolet, violet, and yellow-green regions of the spectrum.
If one percent of the aluminium ions are replaced by chromium in ruby, the yellow-green absorption results in a red color for the gem. Additionally, absorption at any of the above wavelengths stimulates fluorescent emission of 694-nanometer-wavelength red light, which adds to its red color and perceived luster. The chromium concentration in artificial rubies can be adjusted (in the crystal growth process) to be ten to twenty times less than in the natural gemstones. Theodore Maiman says that "because of the low chromium level in these crystals they display a lighter red color than gemstone ruby and are referred to as pink ruby."
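As a back-of-the-envelope check (not part of the original article), the energy of a 694-nanometer fluorescence photon follows from the Planck relation E = hc/λ; the short Python sketch below uses standard physical constants:
h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
wavelength = 694e-9       # fluorescence wavelength quoted above, m
E_joule = h * c / wavelength
E_eV = E_joule / 1.602176634e-19
print(E_joule, E_eV)      # about 2.9e-19 J, or roughly 1.8 eV – a red photon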
After absorbing short-wavelength light, there is a short interval of time when the crystal lattice of ruby is in an excited state before fluorescence occurs. If 694-nanometer photons pass through the crystal during that time, they can stimulate more fluorescent photons to be emitted in-phase with them, thus strengthening the intensity of that red light. By arranging mirrors or other means to pass emitted light repeatedly through the crystal, a ruby laser in this way produces a very high intensity of coherent red light.
All natural rubies have imperfections in them, including color impurities and inclusions of rutile needles known as "silk". Gemologists use these needle inclusions found in natural rubies to distinguish them from synthetics, simulants, or substitutes. Usually, the rough stone is heated before cutting. These days, almost all rubies are treated in some form, with heat treatment being the most common practice. Untreated rubies of high quality command a large premium.
Some rubies show a three-point or six-point asterism or "star". These rubies are cut into cabochons to display the effect properly. Asterisms are best visible with a single-light source and move across the stone as the light moves or the stone is rotated. Such effects occur when light is reflected off the "silk" (the structurally oriented rutile needle inclusions) in a certain way. This is one example where inclusions increase the value of a gemstone. Furthermore, rubies can show color changes—though this occurs very rarely—as well as chatoyancy or the "cat's eye" effect.
Versus pink sapphire
Generally, gemstone-quality corundum in all shades of red, including pink, are called rubies. However, in the United States, a minimum color saturation must be met to be called a ruby; otherwise, the stone will be called a pink sapphire. Drawing a distinction between rubies and pink sapphires is relatively new, having arisen sometime in the 20th century. Often, the distinction between ruby and pink sapphire is not clear and can be debated. As a result of the difficulty and subjectiveness of such distinctions, trade organizations such as the International Colored Gemstone Association (ICGA) have adopted the broader definition for ruby which encompasses its lighter shades, including pink.
Occurrence and mining
Historically, rubies have been mined in Thailand, in the Pailin and Samlout District of Cambodia, as well as in Afghanistan, Australia, Brazil, Colombia, India, Namibia, Japan, and Scotland. After the Second World War, ruby deposits were found in Madagascar, Mozambique, Nepal, Pakistan, Tajikistan, Tanzania, and Vietnam.
The Republic of North Macedonia is the only country in mainland Europe to have naturally occurring rubies. They can mainly be found around the city of Prilep. Macedonian rubies have a unique raspberry color.
A few rubies have been found in the U.S. states of Montana, North Carolina, South Carolina and Wyoming.
Spinel, another red gemstone, is sometimes found along with rubies in the same gem gravel or marble. Red spinels may be mistaken for rubies by those lacking experience with gems. However, the finest red spinels, now heavily sought, can have values approaching all but the finest examples of ruby.
The Mogok Valley in Upper Myanmar (Burma) was for centuries the world's main source for rubies. That region has produced some exceptional rubies; however, in recent years few good rubies have been found. In central Myanmar, the area of Mong Hsu began producing rubies during the 1990s and rapidly became the world's main ruby mining area. The most recently found ruby deposit in Myanmar is in Namya (Namyazeik) located in the northern state of Kachin.
In Pakistani Kashmir there are vast proven reserves of millions of rubies, worth up to half a billion dollars. However, as of 2017 there was only one mine (at Chitta Katha) due to lack of investment. In Afghanistan, rubies are mined at Jegdalek. The Aappaluttoq mine in Greenland began operating in 2017.
The rubies in Greenland are said to be among the oldest in the world at approximately 3 billion years old. The Aappaluttoq mine in Greenland is located 160 kilometers south of Nuuk, the capital of Greenland. The rubies are traceable from mine to market.
The Montepuez ruby mine in northeastern Mozambique is situated on one of the most significant ruby deposits in the world, although, rubies were only discovered here for the first time in 2009. In less than a decade, Mozambique has become the world's most productive source for gem-quality ruby.
Factors affecting value
Rubies, as with other gemstones, are graded using criteria known as the four Cs, namely color, cut, clarity and carat weight. Rubies are also evaluated on the basis of their geographic origin.
Color
In the evaluation of colored gemstones, color is the most important factor. Color divides into three components: hue, saturation and tone. Hue refers to color as we normally use the term. Transparent gemstones occur in the pure spectral hues of red, orange, yellow, green, blue, violet. In nature, there are rarely pure hues, so when speaking of the hue of a gemstone, we speak of primary and secondary and sometimes tertiary hues. Ruby is defined to be red. All other hues of the gem species corundum are called sapphire. Ruby may exhibit a range of secondary hues, including orange, purple, violet, and pink.
Clarity
Because rubies host many inclusions, their clarity is evaluated by the inclusions’ size, number, location, and visibility. Rubies with the highest clarity grades are known as “eye-clean,” because their inclusions are the least visible to the naked human eye. Rubies may also have thin, intersecting inclusions called silk. Silk can scatter light, brightening the gem's appearance, and the presence of silk can also show whether a ruby has been previously heat treated, since intense heat will degrade a ruby's silk.
Treatments and enhancements
Improving the quality of gemstones by treating them is common practice. Some treatments are used in almost all cases and are therefore considered acceptable. During the late 1990s, a large supply of low-cost materials caused a sudden surge in supply of heat-treated rubies, leading to a downward pressure on ruby prices.
Improvements used include color alteration, improving transparency by dissolving rutile inclusions, healing of fractures (cracks) or even completely filling them.
The most common treatment is the application of heat. Most rubies at the lower end of the market are heat treated to improve color, remove purple tinge, blue patches, and silk. These heat treatments typically occur around temperatures of 1800 °C (3300 °F). Some rubies undergo a process of low tube heat, when the stone is heated over charcoal of a temperature of about 1300 °C (2400 °F) for 20 to 30 minutes. The silk is partially broken, and the color is improved.
Another treatment, which has become more frequent in recent years, is lead glass filling. Filling the fractures inside the ruby with lead glass (or a similar material) dramatically improves the transparency of the stone, making previously unsuitable rubies fit for applications in jewelry. The process is done in four steps:
The rough stones are pre-polished to eradicate all surface impurities that may affect the process
The rough is cleaned with hydrogen fluoride
The first heating process during which no fillers are added. The heating process eradicates impurities inside the fractures. Although this can be done at temperatures up to 1400 °C (2500 °F) it most likely occurs at a temperature of around 900 °C (1600 °F) since the rutile silk is still intact.
The second heating process in an electrical oven with different chemical additives. Different solutions and mixes have shown to be successful; however, mostly lead-containing glass-powder is used at present. The ruby is dipped into oils, then covered with powder, embedded on a tile and placed in the oven where it is heated at around 900 °C (1600 °F) for one hour in an oxidizing atmosphere. The orange colored powder transforms upon heating into a transparent to yellow-colored paste, which fills all fractures. After cooling the color of the paste is fully transparent and dramatically improves the overall transparency of the ruby.
If a color needs to be added, the glass powder can be "enhanced" with copper or other metal oxides as well as elements such as sodium, calcium, potassium etc.
The second heating process can be repeated three to four times, even applying different mixtures. When jewelry containing rubies is heated (for repairs) it should not be coated with boracic acid or any other substance, as this can etch the surface; it does not have to be "protected" like a diamond.
The treatment can be identified by noting bubbles in cavities and fractures using a 10× loupe.
Synthesis and imitation
In 1837, Gaudin made the first synthetic rubies by fusing potash alum at a high temperature with a little chromium as a pigment. In 1847, Ebelmen made white sapphire by fusing alumina in boric acid. In 1877, Edmond Frémy and industrial glass-maker Charles Feil made crystal corundum from which small stones could be cut. In 1887, Frémy and Auguste Verneuil manufactured artificial ruby by fusing BaF2 and Al2O3 with a little chromium at red heat.
In 1903, Verneuil announced he could produce synthetic rubies on a commercial scale using this flame fusion process, later also known as the Verneuil process. By 1910, Verneuil's laboratory had expanded into a 30-furnace production facility, annual gemstone production having already reached a substantial scale by 1907.
Other processes in which synthetic rubies can be produced are through Czochralski's pulling process, flux process, and the hydrothermal process. Most synthetic rubies originate from flame fusion, due to the low costs involved. Synthetic rubies may have no imperfections visible to the naked eye but magnification may reveal curved striae and gas bubbles. The fewer the number and the less obvious the imperfections, the more valuable the ruby is; unless there are no imperfections (i.e., a perfect ruby), in which case it will be suspected of being artificial. Dopants are added to some manufactured rubies so they can be identified as synthetic, but most need gemological testing to determine their origin.
Synthetic rubies have technological uses as well as gemological ones. Rods of synthetic ruby are used to make ruby lasers and masers. The first working laser was made by Theodore H. Maiman in 1960. Maiman used a solid-state light-pumped synthetic ruby to produce red laser light at a wavelength of 694 nanometers (nm). Ruby lasers are still in use.
Rubies are also used in applications where high hardness is required such as at wear-exposed locations in mechanical clockworks, or as scanning probe tips in a coordinate measuring machine.
Imitation rubies are also marketed. Red spinels, red garnets, and colored glass have been falsely claimed to be rubies. Imitations go back to Roman times and already in the 17th century techniques were developed to color foil red—by burning scarlet wool in the bottom part of the furnace—which was then placed under the imitation stone. Trade terms such as balas ruby for red spinel and rubellite for red tourmaline can mislead unsuspecting buyers. Such terms are therefore discouraged from use by many gemological associations such as the Laboratory Manual Harmonisation Committee (LMHC).
Records and famous examples
The Smithsonian's National Museum of Natural History in Washington, D.C. has some of the world's largest and finest ruby gemstones. The Burmese ruby, set in a platinum ring with diamonds, was donated by businessman and philanthropist Peter Buck in memory of his late wife Carmen Lúcia. This gemstone displays a richly saturated red color combined with an exceptional transparency. The finely proportioned cut provides vivid red reflections. The stone was mined from the Mogok region of Burma (now Myanmar) in the 1930s.
In 2007, the London jeweler Garrard & Co featured a heart-shaped 40.63-carat ruby on their website.
On 13/14 December 2011, Elizabeth Taylor's complete jewelry collection was auctioned by Christie's. Several ruby-set pieces were included in the sale, notably a ring set with an 8.24 ct gem that broke the 'price-per-carat' record for rubies (US$512,925 per carat – i.e., over US$4.2 million in total), and a necklace that sold for over US$3.7 million.
The Liberty Bell Ruby is the largest mined ruby in the world. It was stolen in a heist in 2011.
The Sunrise Ruby was the world's most expensive ruby, most expensive colored gemstone, and most expensive gemstone other than a diamond when it sold at auction in Switzerland to an anonymous buyer for US$30 million in May 2015.
A synthetic ruby crystal became the gain medium in the world's first optical laser, conceived, designed and constructed by Theodore H. "Ted" Maiman, on 16 May 1960 at Hughes Research Laboratories.
The concept of electromagnetic radiation amplification through the mechanism of stimulated emission had already been successfully demonstrated in the laboratory by way of the maser, using other materials such as ammonia and, later, ruby, but the ruby laser was the first device to work at optical (694.3 nm) wavelengths. Maiman's prototype laser is still in working order.
Historical and cultural references
The Old Testament of the Bible mentions ruby repeatedly, in the Book of Exodus, in the Book of Proverbs, and in various other passages. It is not certain that the Biblical words mean 'ruby' as distinct from other jewels.
An early recorded transport and trading of rubies arises in the literature on the North Silk Road of China, wherein about 200 BC rubies were carried along this ancient trackway moving westward from China.
Rubies have always been held in high esteem in Asian countries. They were used to ornament armor, scabbards, and harnesses of noblemen in India and China. Rubies were laid beneath the foundation of buildings to secure good fortune to the structure.
A traditional Hindu astrological belief holds rubies as the "gemstone of the Sun and also the heavenly deity Surya, the leader of the nine heavenly bodies (Navagraha)." The belief is that worshiping and wearing rubies causes the Sun to be favorable to the wearer.
In the Marvel comic books, the Godstone is a ruby that the son of J. Jonah Jameson, John Jameson found on the Moon that becomes activated by moonlight, grafts itself to his chest which turns him into the Man-Wolf.
See also
Anyolite
List of individual gemstones
List of minerals
Shelby Gem Factory
Verneuil process
Emerald
References
External links
International Colored Stone Association's ruby overview page
Webmineral crystallographic and mineral info
Aluminium minerals
Oxide minerals
Superhard materials
Trigonal minerals
Minerals in space group 167
Luminescent minerals
Corundum gemstones | Ruby | [
"Physics",
"Chemistry"
] | 3,894 | [
"Luminescence",
"Luminescent minerals",
"Materials",
"Superhard materials",
"Matter"
] |
43,589 | https://en.wikipedia.org/wiki/Fluorite | Fluorite (also called fluorspar) is the mineral form of calcium fluoride, CaF2. It belongs to the halide minerals. It crystallizes in isometric cubic habit, although octahedral and more complex isometric forms are not uncommon.
The Mohs scale of mineral hardness, based on scratch hardness comparison, defines value 4 as fluorite.
Pure fluorite is colourless and transparent, both in visible and ultraviolet light, but impurities usually make it a colorful mineral and the stone has ornamental and lapidary uses. Industrially, fluorite is used as a flux for smelting, and in the production of certain glasses and enamels. The purest grades of fluorite are a source of fluoride for hydrofluoric acid manufacture, which is the intermediate source of most fluorine-containing fine chemicals. Optically clear transparent fluorite has anomalous partial dispersion, that is, its refractive index varies with the wavelength of light in a manner that differs from that of commonly used glasses, so fluorite is useful in making apochromatic lenses, and particularly valuable in photographic optics. Fluorite optics are also usable in the far-ultraviolet and mid-infrared ranges, where conventional glasses are too opaque for use. Fluorite also has low dispersion, and a high refractive index for its density.
History and etymology
The word fluorite is derived from the Latin verb fluere, meaning to flow. The mineral is used as a flux in iron smelting to decrease the viscosity of slag. The term flux comes from the Latin adjective fluxus, meaning flowing, loose, slack. The mineral fluorite was originally termed fluorspar and was first discussed in print in a 1530 work Bermannvs sive de re metallica dialogus [Bermannus; or dialogue about the nature of metals], by Georgius Agricola, as a mineral noted for its usefulness as a flux. Agricola, a German scientist with expertise in philology, mining, and metallurgy, named fluorspar as a Neo-Latinization of the German Flussspat from Fluss (stream, river) and Spat (meaning a nonmetallic mineral akin to gypsum, spærstān, spear stone, referring to its crystalline projections).
In 1852, fluorite gave its name to the phenomenon of fluorescence, which is prominent in fluorites from certain locations, due to certain impurities in the crystal. Fluorite also gave the name to its constitutive element fluorine. Currently, the word "fluorspar" is most commonly used for fluorite as an industrial and chemical commodity, while "fluorite" is used mineralogically and in most other senses.
In archeology, gemmology, classical studies, and Egyptology, the Latin terms murrina and myrrhina refer to fluorite. In book 37 of his Naturalis Historia, Pliny the Elder describes it as a precious stone with purple and white mottling, and noted that the Romans prized objects carved from it.
Structure
Fluorite crystallizes in a cubic motif. Crystal twinning is common and adds complexity to the observed crystal habits. Fluorite has four perfect cleavage planes that help produce octahedral fragments. The structural motif adopted by fluorite is so common that the motif is called the fluorite structure. Element substitution for the calcium cation often includes strontium and certain rare-earth elements (REE), such as yttrium and cerium.
Occurrence and mining
Fluorite forms as a late-crystallizing mineral in felsic igneous rocks typically through hydrothermal activity. It is particularly common in granitic pegmatites. It may occur as a vein deposit formed through hydrothermal activity particularly in limestones. In such vein deposits it can be associated with galena, sphalerite, barite, quartz, and calcite. Fluorite can also be found as a constituent of sedimentary rocks either as grains or as the cementing material in sandstone.
It is a common mineral mainly distributed in South Africa, China, Mexico, Mongolia, the United Kingdom, the United States, Canada, Tanzania, Rwanda and Argentina.
The world reserves of fluorite are estimated at 230 million tonnes (Mt) with the largest deposits being in South Africa (about 41 Mt), Mexico (32 Mt) and China (24 Mt). China leads world production with about 3 Mt annually (as of 2010), followed by Mexico (1.0 Mt), Mongolia (0.45 Mt), Russia (0.22 Mt), South Africa (0.13 Mt), Spain (0.12 Mt) and Namibia (0.11 Mt).
One of the largest deposits of fluorspar in North America is located on the Burin Peninsula, Newfoundland, Canada. The first official recognition of fluorspar in the area was recorded by geologist J.B. Jukes in 1843. He noted an occurrence of "galena" or lead ore and fluoride of lime on the west side of St. Lawrence harbour. It is recorded that interest in the commercial mining of fluorspar began in 1928 with the first ore being extracted in 1933. Eventually, at Iron Springs Mine, the shafts reached considerable depths. In the St. Lawrence area, the veins are persistent for great lengths and several of them have wide lenses. The area containing veins of known workable size is extensive.
In 2018, Canada Fluorspar Inc. commenced mine production again in St. Lawrence; in spring 2019, the company planned to develop a new shipping port on the west side of the Burin Peninsula as a more affordable means of moving their product to markets, and they successfully sent the first shipload of ore from the new port on July 31, 2021. This marks the first time in 30 years that ore has been shipped directly out of St. Lawrence.
Cubic crystals up to 20 cm across have been found at Dalnegorsk, Russia. The largest documented single crystal of fluorite was a cube 2.12 meters in size and weighing approximately 16 tonnes.
In Asturias (Spain) there are several fluorite deposits known internationally for the quality of the specimens they have yielded. In the area of Berbes, Ribadesella, fluorite appears as cubic crystals, sometimes with dodecahedral modifications, which can reach a size of up to 10 cm on edge, with internal colour zoning, almost always violet in colour. It is associated with quartz and leafy aggregates of barite. In the Emilio mine, in Loroñe, Colunga, the fluorite crystals, cubes with small modifications of other forms, are colourless and transparent. They can reach 10 cm on edge. In the Moscona mine, in Villabona, the fluorite crystals, cubic without modifications of other shapes, are yellow, up to 3 cm on edge. They are associated with large crystals of calcite and barite.
"Blue John"
One of the most famous of the older-known localities of fluorite is Castleton in Derbyshire, England, where, under the name of "Derbyshire Blue John", purple-blue fluorite was extracted from several mines or caves. During the 19th century, this attractive fluorite was mined for its ornamental value. The mineral Blue John is now scarce, and only a few hundred kilograms are mined each year for ornamental and lapidary use. Mining still takes place in Blue John Cavern and Treak Cliff Cavern.
Recently discovered deposits in China have produced fluorite with coloring and banding similar to the classic Blue John stone.
Fluorescence
George Gabriel Stokes named the phenomenon of fluorescence from fluorite, in 1852.
Many samples of fluorite exhibit fluorescence under ultraviolet light, a property that takes its name from fluorite. Many minerals, as well as other substances, fluoresce. Fluorescence involves the elevation of electron energy levels by quanta of ultraviolet light, followed by the progressive falling back of the electrons into their previous energy state, releasing quanta of visible light in the process. In fluorite, the visible light emitted is most commonly blue, but red, purple, yellow, green, and white also occur. The fluorescence of fluorite may be due to mineral impurities, such as yttrium and ytterbium, or organic matter, such as volatile hydrocarbons in the crystal lattice. In particular, the blue fluorescence seen in fluorites from certain parts of Great Britain responsible for the naming of the phenomenon of fluorescence itself, has been attributed to the presence of inclusions of divalent europium in the crystal. Natural samples containing rare earth impurities such as erbium have also been observed to display upconversion fluorescence, in which infrared light stimulates emission of visible light, a phenomenon usually only reported in synthetic materials.
One fluorescent variety of fluorite is chlorophane, which is reddish or purple in color and fluoresces brightly in emerald green when heated (thermoluminescence), or when illuminated with ultraviolet light.
The color of visible light emitted when a sample of fluorite is fluorescing depends on where the original specimen was collected, since different impurities are incorporated into the crystal lattice in different places. Nor does all fluorite fluoresce equally brightly, even from the same locality. Therefore, ultraviolet light is not a reliable tool for the identification of specimens, nor for quantifying the mineral in mixtures. For example, among British fluorites, those from Northumberland, County Durham, and eastern Cumbria are the most consistently fluorescent, whereas fluorites from Yorkshire, Derbyshire, and Cornwall, if they fluoresce at all, are generally only feebly fluorescent.
Fluorite also exhibits the property of thermoluminescence.
Color
Fluorite is allochromatic, meaning that it can be tinted with elemental impurities. Fluorite comes in a wide range of colors and has consequently been dubbed "the most colorful mineral in the world". Every color of the rainbow in various shades is represented by fluorite samples, along with white, black, and clear crystals. The most common colors are purple, blue, green, yellow, or colorless. Less common are pink, red, white, brown, and black. Color zoning or banding is commonly present. The color of the fluorite is determined by factors including impurities, exposure to radiation, and the presence or absence of color centers.
Uses
Source of fluorine and fluoride
Fluorite is a major source of hydrogen fluoride, a commodity chemical used to produce a wide range of materials. Hydrogen fluoride is liberated from the mineral by the action of concentrated sulfuric acid:
CaF2(s) + H2SO4 → CaSO4(s) + 2 HF(g)
The resulting HF is converted into fluorine, fluorocarbons, and diverse fluoride materials. As of the late 1990s, five billion kilograms were mined annually.
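A hedged stoichiometric sketch of the reaction above, assuming an idealised pure CaF2 feed and complete conversion (real acid-grade fluorspar is about 97% CaF2), gives the approximate HF yield per tonne of mineral:
M_CaF2 = 40.08 + 2 * 19.00      # g/mol, calcium fluoride
M_HF = 1.008 + 19.00            # g/mol, hydrogen fluoride
mol_CaF2 = 1.0e6 / M_CaF2       # moles of CaF2 in one tonne (1e6 g)
tonnes_HF = 2 * mol_CaF2 * M_HF / 1.0e6   # two moles of HF per mole of CaF2
print(round(tonnes_HF, 3))      # roughly 0.51 t of HF per tonne of CaF2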
There are three principal types of industrial use for natural fluorite, commonly referred to as "fluorspar" in these industries, corresponding to different grades of purity. Metallurgical grade fluorite (60–85% CaF2), the lowest of the three grades, has traditionally been used as a flux to lower the melting point of raw materials in steel production to aid the removal of impurities, and later in the production of aluminium. Ceramic grade fluorite (85–95% CaF2) is used in the manufacture of opalescent glass, enamels, and cooking utensils. The highest grade, "acid grade fluorite" (97% or more CaF2), accounts for about 95% of fluorite consumption in the US where it is used to make hydrogen fluoride and hydrofluoric acid by reacting the fluorite with sulfuric acid.
Internationally, acid-grade fluorite is also used in the production of AlF3 and cryolite (Na3AlF6), which are the main fluorine compounds used in aluminium smelting. Alumina is dissolved in a bath that consists primarily of molten Na3AlF6, AlF3, and fluorite (CaF2) to allow electrolytic recovery of aluminium. Fluorine losses are replaced entirely by the addition of AlF3, the majority of which react with excess sodium from the alumina to form Na3AlF6.
Niche uses
Lapidary uses
Natural fluorite mineral has ornamental and lapidary uses. Fluorite may be drilled into beads and used in jewelry, although due to its relative softness it is not widely used as a semiprecious stone. It is also used for ornamental carvings, with expert carvings taking advantage of the stone's zonation.
Optics
In the laboratory, calcium fluoride is commonly used as a window material for both infrared and ultraviolet wavelengths, since it is transparent in these regions (about 0.15 μm to 9 μm) and exhibits an extremely low change in refractive index with wavelength. Furthermore, the material is attacked by few reagents. At wavelengths as short as 157 nm, a common wavelength used for semiconductor stepper manufacture for integrated circuit lithography, the refractive index of calcium fluoride shows some non-linearity at high power densities, which has inhibited its use for this purpose. In the early years of the 21st century, the stepper market for calcium fluoride collapsed, and many large manufacturing facilities have been closed. Canon and other manufacturers have used synthetically grown crystals of calcium fluoride components in lenses to aid apochromatic design, and to reduce light dispersion. This use has largely been superseded by newer glasses and computer-aided design. As an infrared optical material, calcium fluoride is widely available and was sometimes known by the Eastman Kodak trademarked name "Irtran-3", although this designation is obsolete.
Fluorite should not be confused with fluoro-crown (or fluorine crown) glass, a type of low-dispersion glass that has special optical properties approaching fluorite. True fluorite is not a glass but a crystalline material. Lenses or optical groups made using this low dispersion glass as one or more elements exhibit less chromatic aberration than those utilizing conventional, less expensive crown glass and flint glass elements to make an achromatic lens. Optical groups employ a combination of different types of glass; each type of glass refracts light in a different way. By using combinations of different types of glass, lens manufacturers are able to cancel out or significantly reduce unwanted characteristics; chromatic aberration being the most important. The best of such lens designs are often called apochromatic (see above). Fluoro-crown glass (such as Schott FK51) usually in combination with an appropriate "flint" glass (such as Schott KzFSN 2) can give very high performance in telescope objective lenses, as well as microscope objectives, and camera telephoto lenses. Fluorite elements are similarly paired with complementary "flint" elements (such as Schott LaK 10). The refractive qualities of fluorite and of certain flint elements provide a lower and more uniform dispersion across the spectrum of visible light, thereby keeping colors focused more closely together. Lenses made with fluorite are superior to fluoro-crown based lenses, at least for doublet telescope objectives; but are more difficult to produce and more costly.
The use of fluorite for prisms and lenses was studied and promoted by Victor Schumann near the end of the 19th century. Naturally occurring fluorite crystals without optical defects were only large enough to produce microscope objectives.
With the advent of synthetically grown fluorite crystals in the 1950s - 60s, it could be used instead of glass in some high-performance optical telescope and camera lens elements. In telescopes, fluorite elements allow high-resolution images of astronomical objects at high magnifications. Canon Inc. produces synthetic fluorite crystals that are used in their better telephoto lenses. The use of fluorite for telescope lenses has declined since the 1990s, as newer designs using fluoro-crown glass, including triplets, have offered comparable performance at lower prices. Fluorite and various combinations of fluoride compounds can be made into synthetic crystals which have applications in lasers and special optics for UV and infrared.
Exposure tools for the semiconductor industry make use of fluorite optical elements for ultraviolet light at wavelengths of about 157 nanometers. Fluorite has a uniquely high transparency at this wavelength. Fluorite objective lenses are manufactured by the larger microscope firms (Nikon, Olympus, Carl Zeiss and Leica). Their transparence to ultraviolet light enables them to be used for fluorescence microscopy. The fluorite also serves to correct optical aberrations in these lenses. Nikon has previously manufactured at least one fluorite and synthetic quartz element camera lens (105 mm f/4.5 UV) for the production of ultraviolet images. Konica produced a fluorite lens for their SLR cameras – the Hexanon 300 mm f/6.3.
Source of fluorine gas in nature
In 2012, the first source of naturally occurring fluorine gas was found in fluorite mines in Bavaria, Germany. It was previously thought that fluorine gas did not occur naturally because it is so reactive, and would rapidly react with other chemicals. Fluorite is normally colorless, but some varied forms found nearby look black, and are known as 'fetid fluorite' or antozonite. The minerals, containing small amounts of uranium and its daughter products, release radiation sufficiently energetic to induce oxidation of fluoride anions within the structure, to fluorine that becomes trapped inside the mineral. The color of fetid fluorite is predominantly due to the calcium atoms remaining. Solid-state fluorine-19 NMR carried out on the gas contained in the antozonite revealed a peak at 425 ppm, which is consistent with F2.
Gallery
See also
List of countries by fluorite production
List of minerals
Magnesium fluoride – also used in UV optics
References
External links
Educational article about the different colors of fluorites crystals from Asturias, Spain
An educational tour of Weardale Fluorite
Illinois State Geologic Survey
Illinois state mineral
Barber Cup and Crawford Cup, related Roman cups at British Museum
Cubic minerals
Minerals in space group 225
Evaporite
Fluorine minerals
Luminescent minerals
Industrial minerals
Symbols of Illinois | Fluorite | [
"Chemistry"
] | 3,878 | [
"Luminescence",
"Luminescent minerals"
] |
43,590 | https://en.wikipedia.org/wiki/Flux | Flux describes any effect that appears to pass or travel (whether it actually moves or not) through a surface or substance. Flux is a concept in applied mathematics and vector calculus which has many applications in physics. For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In vector calculus flux is a scalar quantity, defined as the surface integral of the perpendicular component of a vector field over a surface.
Terminology
The word flux comes from Latin: fluxus means "flow", and fluere is "to flow". As fluxion, this term was introduced into differential calculus by Isaac Newton.
The concept of heat flux was a key contribution of Joseph Fourier, in the analysis of heat transfer phenomena. His seminal treatise Théorie analytique de la chaleur (The Analytical Theory of Heat), defines fluxion as a central quantity and proceeds to derive the now well-known expressions of flux in terms of temperature differences across a slab, and then more generally in terms of temperature gradients or differentials of temperature, across other geometries. One could argue, based on the work of James Clerk Maxwell, that the transport definition precedes the definition of flux used in electromagnetism. The specific quote from Maxwell is:
According to the transport definition, flux may be a single vector, or it may be a vector field / function of position. In the latter case flux can readily be integrated over a surface. By contrast, according to the electromagnetism definition, flux is the integral over a surface; it makes no sense to integrate a second-definition flux for one would be integrating over a surface twice. Thus, Maxwell's quote only makes sense if "flux" is being used according to the transport definition (and furthermore is a vector field rather than single vector). This is ironic because Maxwell was one of the major developers of what we now call "electric flux" and "magnetic flux" according to the electromagnetism definition. Their names in accordance with the quote (and transport definition) would be "surface integral of electric flux" and "surface integral of magnetic flux", in which case "electric flux" would instead be defined as "electric field" and "magnetic flux" defined as "magnetic field". This implies that Maxwell conceived of these fields as flows/fluxes of some sort.
Given a flux according to the electromagnetism definition, the corresponding flux density, if that term is used, refers to its derivative along the surface that was integrated. By the Fundamental theorem of calculus, the corresponding flux density is a flux according to the transport definition. Given a current such as electric current—charge per time, current density would also be a flux according to the transport definition—charge per time per area. Due to the conflicting definitions of flux, and the interchangeability of flux, flow, and current in nontechnical English, all of the terms used in this paragraph are sometimes used interchangeably and ambiguously. Concrete fluxes in the rest of this article will be used in accordance to their broad acceptance in the literature, regardless of which definition of flux the term corresponds to.
Flux as flow rate per unit area
In transport phenomena (heat transfer, mass transfer and fluid dynamics), flux is defined as the rate of flow of a property per unit area, which has the dimensions [quantity]·[time]−1·[area]−1. The area is of the surface the property is flowing "through" or "across". For example, the amount of water that flows through a cross section of a river each second divided by the area of that cross section, or the amount of sunlight energy that lands on a patch of ground each second divided by the area of the patch, are kinds of flux.
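As an illustration of this definition (with made-up numbers, not figures from the article), the volumetric flux of a river is simply its volumetric flow rate divided by the cross-sectional area, which also equals the mean flow speed:
volume_per_second = 250.0       # m^3/s passing the cross-section (assumed value)
cross_section_area = 100.0      # m^2 (assumed value)
volumetric_flux = volume_per_second / cross_section_area
print(volumetric_flux, "m^3 per m^2 per s, i.e. 2.5 m/s mean flow speed")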
General mathematical definition (transport)
Here are 3 definitions in increasing order of complexity. Each is a special case of the following. In all cases the frequent symbol j, (or J) is used for flux, q for the physical quantity that flows, t for time, and A for area. These identifiers will be written in bold when and only when they are vectors.
First, flux as a (single) scalar:
j = q / (A Δt),
where q is the total amount of the quantity that flows through the surface of area A during the time interval Δt.
In this case the surface in which flux is being measured is fixed and has area A. The surface is assumed to be flat, and the flow is assumed to be everywhere constant with respect to position and perpendicular to the surface.
Second, flux as a scalar field defined along a surface, i.e. a function of points on the surface:
As before, the surface is assumed to be flat, and the flow is assumed to be everywhere perpendicular to it. However the flow need not be constant. q is now a function of p, a point on the surface, and A, an area. Rather than measure the total flow through the surface, q measures the flow through the disk with area A centered at p along the surface.
Finally, flux as a vector field:
In this case, there is no fixed surface we are measuring over. q is a function of a point, an area, and a direction (given by a unit vector n̂), and measures the flow through the disk of area A perpendicular to that unit vector. The flux is defined by picking the unit vector that maximizes the flow around the point, because the true flow is maximized across the disk that is perpendicular to it. The unit vector thus uniquely maximizes the function when it points in the "true direction" of the flow. (Strictly speaking, this is an abuse of notation because the "argmax" cannot directly compare vectors; we take the vector with the biggest norm instead.)
Properties
These direct definitions, especially the last, are rather unwieldy. For example, the argmax construction is artificial from the perspective of empirical measurements, when with a weathervane or similar one can easily deduce the direction of flux at a point. Rather than defining the vector flux directly, it is often more intuitive to state some properties about it. Furthermore, from these properties the flux can uniquely be determined anyway.
If the flux j passes through the area at an angle θ to the area normal n̂, then the dot product j · n̂ = j cos θ.
That is, the component of flux passing through the surface (i.e. normal to it) is j cos θ, while the component of flux passing tangential to the area is j sin θ, but there is no flux actually passing through the area in the tangential direction. The only component of flux passing normal to the area is the cosine component.
For vector flux, the surface integral of j over a surface S gives the proper flow per unit of time through the surface:
dq/dt = ∬S j · n̂ dA = ∬S j · dA,
where A (and its infinitesimal dA) is the vector area – the combination of the magnitude of the area A through which the property passes and a unit vector n̂ normal to the area.
Unlike in the second set of equations, the surface here need not be flat.
Finally, we can integrate again over the time duration t1 to t2, getting the total amount of the property flowing through the surface in that time (t2 − t1):
q = ∫ (from t1 to t2) ∬S j · dA dt.
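A minimal numerical sketch of these relations, with illustrative values only: a uniform flux of magnitude j crossing a flat surface at an angle θ to its normal transports j cos θ per unit area per unit time, and integrating over the area and a time interval gives the total amount transferred:
import math
j = 3.0                          # flux magnitude, units per m^2 per s (assumed)
A = 2.0                          # flat surface area, m^2 (assumed)
theta = math.radians(60)         # angle between j and the surface normal (assumed)
t1, t2 = 0.0, 10.0               # time interval, s (assumed)
rate = j * math.cos(theta) * A   # dq/dt: only the normal component passes through
q_total = rate * (t2 - t1)       # total quantity transferred in the interval
print(rate, q_total)             # 3.0 per second, 30.0 in total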
Transport fluxes
Eight of the most common forms of flux from the transport phenomena literature are defined as follows:
Momentum flux, the rate of transfer of momentum across a unit area (N·s·m−2·s−1). (Newton's law of viscosity)
Heat flux, the rate of heat flow across a unit area (J·m−2·s−1). (Fourier's law of conduction) (This definition of heat flux fits Maxwell's original definition.)
Diffusion flux, the rate of movement of molecules across a unit area (mol·m−2·s−1). (Fick's law of diffusion)
Volumetric flux, the rate of volume flow across a unit area (m3·m−2·s−1). (Darcy's law of groundwater flow)
Mass flux, the rate of mass flow across a unit area (kg·m−2·s−1). (Either an alternate form of Fick's law that includes the molecular mass, or an alternate form of Darcy's law that includes the density.)
Radiative flux, the amount of energy transferred in the form of photons at a certain distance from the source per unit area per second (J·m−2·s−1). Used in astronomy to determine the magnitude and spectral class of a star. Also acts as a generalization of heat flux, which is equal to the radiative flux when restricted to the electromagnetic spectrum.
Energy flux, the rate of transfer of energy through a unit area (J·m−2·s−1). The radiative flux and heat flux are specific cases of energy flux.
Particle flux, the rate of transfer of particles through a unit area ([number of particles] m−2·s−1)
These fluxes are vectors at each point in space, and have a definite magnitude and direction. Also, one can take the divergence of any of these fluxes to determine the accumulation rate of the quantity in a control volume around a given point in space. For incompressible flow, the divergence of the volume flux is zero.
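As a concrete illustration of one of the laws cited in the list above (Fourier's law of conduction, for heat flux), the sketch below evaluates the steady heat flux through a flat wall; the material and temperature values are assumptions chosen only for the example:
k = 0.8                          # thermal conductivity, W/(m K) – a brick-like value (assumed)
T_hot, T_cold = 293.0, 273.0     # face temperatures, K (assumed)
thickness = 0.2                  # wall thickness, m (assumed)
heat_flux = -k * (T_cold - T_hot) / thickness   # W/m^2, Fourier's law q = -k dT/dx
print(heat_flux)                 # 80.0 W/m^2, directed from the hot face toward the cold face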
Chemical diffusion
As mentioned above, chemical molar flux of a component A in an isothermal, isobaric system is defined in Fick's law of diffusion as:
JA = −DAB ∇cA,
where the nabla symbol ∇ denotes the gradient operator, DAB is the diffusion coefficient (m2·s−1) of component A diffusing through component B, cA is the concentration (mol/m3) of component A.
This flux has units of mol·m−2·s−1, and fits Maxwell's original definition of flux.
For dilute gases, kinetic molecular theory relates the diffusion coefficient D to the particle density n = N/V, the molecular mass m, the collision cross section σ, and the absolute temperature T by
D = (1/3) · (1/(√2 nσ)) · √(8kT/(πm)),
where the second factor is the mean free path and the square root (with the Boltzmann constant k) is the mean velocity of the particles.
In turbulent flows, the transport by eddy motion can be expressed as a grossly increased diffusion coefficient.
Quantum mechanics
In quantum mechanics, particles of mass m in the quantum state ψ(r, t) have a probability density defined as
ρ = ψ*ψ = |ψ(r, t)|2.
So the probability of finding a particle in a differential volume element d3r is
dP = |ψ(r, t)|2 d3r.
Then the number of particles passing perpendicularly through unit area of a cross-section per unit time is the probability flux:
J = (ħ/(2mi)) (ψ* ∇ψ − ψ ∇ψ*).
This is sometimes referred to as the probability current or current density, or probability flux density.
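As a hedged symbolic check (not from the article), the probability flux of a one-dimensional plane wave ψ = A exp(i(kx − ωt)) comes out as |A|² times the classical velocity ħk/m; a small SymPy sketch:
from sympy import symbols, I, exp, conjugate, diff, simplify
x, t, k, w, m, hbar, A = symbols('x t k omega m hbar A', positive=True)
psi = A * exp(I * (k * x - w * t))          # 1-D plane wave
j = (hbar / (2 * m * I)) * (conjugate(psi) * diff(psi, x) - psi * diff(conjugate(psi), x))
print(simplify(j))                          # A**2*hbar*k/m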
Flux as a surface integral
General mathematical definition (surface integral)
As a mathematical concept, flux is represented by the surface integral of a vector field,
ΦF = ∬A F · dA = ∬A F · n dA,
where F is a vector field, and dA is the vector area of the surface A, directed as the surface normal. For the second form, n is the outward pointed unit normal vector to the surface.
The surface has to be orientable, i.e. two sides can be distinguished: the surface does not fold back onto itself. Also, the surface has to be actually oriented, i.e. we use a convention as to which direction of flow through it is counted positive; flow in the opposite direction is then counted negative.
The surface normal is usually directed by the right-hand rule.
Conversely, one can consider the flux the more fundamental quantity and call the vector field the flux density.
Often a vector field is drawn by curves (field lines) following the "flow"; the magnitude of the vector field is then the line density, and the flux through a surface is the number of lines. Lines originate from areas of positive divergence (sources) and end at areas of negative divergence (sinks).
See also the image at right: the number of red arrows passing through a unit area is the flux density, the curve encircling the red arrows denotes the boundary of the surface, and the orientation of the arrows with respect to the surface denotes the sign of the inner product of the vector field with the surface normals.
If the surface encloses a 3D region, usually the surface is oriented such that the influx is counted positive; the opposite is the outflux.
The divergence theorem states that the net outflux through a closed surface, in other words the net outflux from a 3D region, is found by adding the local net outflow from each point in the region (which is expressed by the divergence).
If the surface is not closed, it has an oriented curve as boundary. Stokes' theorem states that the flux of the curl of a vector field is the line integral of the vector field over this boundary. This path integral is also called circulation, especially in fluid dynamics. Thus the curl is the circulation density.
We can apply the flux and these theorems to many disciplines in which we see currents, forces, etc., applied through areas.
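A small numerical sanity check of the divergence theorem (illustrative only): for the field F = (x, y, z), the divergence is 3, so the flux out of the unit sphere should equal 3 times the ball's volume, i.e. 4π; on the unit sphere F·n = 1, so a direct surface integration gives the same number:
import numpy as np
n = 400
theta = (np.arange(n) + 0.5) * np.pi / n            # polar-angle midpoints over [0, pi]
phi = (np.arange(2 * n) + 0.5) * np.pi / n          # azimuth midpoints over [0, 2*pi)
dOmega = (np.pi / n) * (np.pi / n)                  # angular cell size dtheta * dphi
T, P = np.meshgrid(theta, phi, indexing='ij')
# On the unit sphere the outward normal equals the position vector, so F . n = 1 everywhere,
# and the surface element is sin(theta) dtheta dphi.
flux = np.sum(np.sin(T)) * dOmega
print(flux, 4 * np.pi)                              # both about 12.566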
Electromagnetism
Electric flux
An electric "charge", such as a single proton in space, has a magnitude defined in coulombs. Such a charge has an electric field surrounding it. In pictorial form, the electric field from a positive point charge can be visualized as a dot radiating electric field lines (sometimes also called "lines of force"). Conceptually, electric flux can be thought of as "the number of field lines" passing through a given area. Mathematically, electric flux is the integral of the normal component of the electric field over a given area. Hence, units of electric flux are, in the MKS system, newtons per coulomb times meters squared, or N m2/C. (Electric flux density is the electric flux per unit area, and is a measure of strength of the normal component of the electric field averaged over the area of integration. Its units are N/C, the same as the electric field in MKS units.)
Two forms of electric flux are used, one for the E-field:
ΦE = ∬A E · dA
and one for the D-field (called the electric displacement):
ΦD = ∬A D · dA
This quantity arises in Gauss's law – which states that the flux of the electric field E out of a closed surface is proportional to the electric charge QA enclosed in the surface (independent of how that charge is distributed); the integral form is:
∯ E · dA = QA / ε0
where ε0 is the permittivity of free space.
If one considers the flux of the electric field vector, E, for a tube near a point charge in the field of the charge but not containing it with sides formed by lines tangent to the field, the flux for the sides is zero and there is an equal and opposite flux at both ends of the tube. This is a consequence of Gauss's Law applied to an inverse square field. The flux for any cross-sectional surface of the tube will be the same. The total flux for any surface surrounding a charge q is q/ε0.
In free space the electric displacement is given by the constitutive relation D = ε0 E, so for any bounding surface the D-field flux equals the charge QA within it. Here the expression "flux of" indicates a mathematical operation and, as can be seen, the result is not necessarily a "flow", since nothing actually flows along electric field lines.
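A worked example with an assumed charge (not a figure from the article): the total E-field flux through any closed surface around a 1 nC point charge is q/ε0, which can be cross-checked by integrating the inverse-square field over a sphere:
import math
epsilon_0 = 8.8541878128e-12        # vacuum permittivity, F/m
q = 1.0e-9                          # enclosed charge, C (assumed)
flux_total = q / epsilon_0          # N m^2 / C, independent of the surface shape
r = 0.5                             # radius of a test sphere, m (assumed)
E = q / (4 * math.pi * epsilon_0 * r**2)    # field magnitude on the sphere
print(flux_total, E * 4 * math.pi * r**2)   # both about 113 N m^2 / C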
Magnetic flux
The magnetic flux density (magnetic field) having the unit Wb/m2 (tesla) is denoted by B, and magnetic flux is defined analogously:
ΦB = ∬A B · dA
with the same notation above. The quantity arises in Faraday's law of induction, where the magnetic flux is time-dependent either because the boundary is time-dependent or the magnetic field is time-dependent. In integral form:
∮∂A E · dℓ = −dΦB/dt
where dℓ is an infinitesimal vector line element of the closed curve ∂A, with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve, with the sign determined by the integration direction.
The time-rate of change of the magnetic flux through a loop of wire is minus the electromotive force created in that wire. The direction is such that if current is allowed to pass through the wire, the electromotive force will cause a current which "opposes" the change in magnetic field by itself producing a magnetic field opposite to the change. This is the basis for inductors and many electric generators.
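A minimal numerical sketch of this relation, with assumed values: for a planar loop in a spatially uniform field that changes at a constant rate, the induced electromotive force is minus the loop area times dB/dt:
area = 0.01            # loop area, m^2 (assumed)
dB_dt = 0.5            # rate of change of the normal magnetic field, T/s (assumed)
d_flux_dt = dB_dt * area      # rate of change of magnetic flux, Wb/s
emf = -d_flux_dt              # induced EMF, V (about -5 mV here)
print(emf)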
Poynting flux
Using this definition, the flux of the Poynting vector S over a specified surface is the rate at which electromagnetic energy flows through that surface, defined like before:
ΦS = ∬A S · dA.
The flux of the Poynting vector through a surface is the electromagnetic power, or energy per unit time, passing through that surface. This is commonly used in analysis of electromagnetic radiation, but has application to other electromagnetic systems as well.
Confusingly, the Poynting vector is sometimes called the power flux, which is an example of the first usage of flux, above. It has units of watts per square metre (W/m2).
SI radiometry units
See also
AB magnitude
Explosively pumped flux compression generator
Eddy covariance flux (aka, eddy correlation, eddy flux)
Fast Flux Test Facility
Fluence (flux of the first sort for particle beams)
Fluid dynamics
Flux footprint
Flux pinning
Flux quantization
Gauss's law
Inverse-square law
Jansky (non SI unit of spectral flux density)
Latent heat flux
Luminous flux
Magnetic flux
Magnetic flux quantum
Neutron flux
Poynting flux
Poynting theorem
Radiant flux
Rapid single flux quantum
Sound energy flux
Volumetric flux (flux of the first sort for fluids)
Volumetric flow rate (flux of the second sort for fluids)
Notes
Further reading
External links
Physical quantities
Vector calculus
Rates | Flux | [
"Physics",
"Mathematics"
] | 3,596 | [
"Physical phenomena",
"Quantity",
"Physical properties",
"Physical quantities"
] |
43,592 | https://en.wikipedia.org/wiki/John%20Herschel | Sir John Frederick William Herschel, 1st Baronet (; 7 March 1792 – 11 May 1871) was an English polymath active as a mathematician, astronomer, chemist, inventor and experimental photographer who invented the blueprint and did botanical work.
Herschel originated the use of the Julian day system in astronomy. He named seven moons of Saturn and four moons of Uranus – the seventh planet, discovered by his father Sir William Herschel. He made many contributions to the science of photography, and investigated colour blindness and the chemical power of ultraviolet rays. His Preliminary Discourse (1831), which advocated an inductive approach to scientific experiment and theory-building, was an important contribution to the philosophy of science.
Early life and work on astronomy
Herschel was born in Slough, Buckinghamshire, the son of Mary Baldwin and astronomer Sir William Herschel. He was the nephew of astronomer Caroline Herschel. He studied briefly at Eton College and St John's College, Cambridge, graduating as Senior Wrangler in 1813. It was during his time as an undergraduate that he became friends with the mathematicians Charles Babbage and George Peacock. He left Cambridge in 1816 and started working with his father. He took up astronomy in 1816, building a reflecting telescope with a mirror about 18 inches (46 cm) in diameter and a focal length of about 20 feet (6 m). Between 1821 and 1823 he re-examined, with James South, the double stars catalogued by his father. He was one of the founders of the Royal Astronomical Society in 1820. For his work with his father, he was presented with the Gold Medal of the Royal Astronomical Society in 1826 (which he won again in 1836), and with the Lalande Medal of the French Academy of Sciences in 1825, while in 1821 the Royal Society bestowed upon him the Copley Medal for his mathematical contributions to their Transactions. Herschel was made a Knight of the Royal Guelphic Order in 1831. He also seemed to be aware of Indian thought and mathematics introduced to him by George Everest as claimed by Mary Boole:
He stated in his historical article Mathematics in Brewster's Cyclopedia:
Herschel served as president of the Royal Astronomical Society three times: 1827–1829, 1839–1841 and 1847–1849.
Herschel's A preliminary discourse on the study of natural philosophy, published early in 1831 as part of Dionysius Lardner's Cabinet cyclopædia, set out methods of scientific investigation with an orderly relationship between observation and theorising. He described nature as being governed by laws which were difficult to discern or to state mathematically, and the highest aim of natural philosophy was understanding these laws through inductive reasoning, finding a single unifying explanation for a phenomenon. This became an authoritative statement with wide influence on science, particularly at the University of Cambridge where it inspired the student Charles Darwin with "a burning zeal" to contribute to this work.
He was elected as a member to the American Philosophical Society in 1854.
Herschel published a catalogue of his astronomical observations in 1864, as the General Catalogue of Nebulae and Clusters, a compilation of his own work and that of his father's, expanding on the senior Herschel's Catalogue of Nebulae. A further complementary volume was published posthumously, as the General Catalogue of 10,300 Multiple and Double Stars.
Herschel correctly considered astigmatism to be due to irregularity of the cornea and theorised that vision could be improved by the application of some animal jelly contained in a capsule of glass against the cornea. His views were published in an article entitled Light in 1828 and the Encyclopædia Metropolitana in 1845.
Discoveries of Herschel include the galaxies NGC 7, NGC 10, NGC 25, and NGC 28.
Visit to South Africa
He declined an offer from the Duke of Sussex that they travel to South Africa on a Navy ship.
Herschel had his own inherited money and he paid £500 for passage on the S.S. Mountstuart Elphinstone. He, his wife, their three children and his 20 inch telescope departed from Portsmouth on 13 November 1833.
The voyage to South Africa was made to catalogue the stars, nebulae, and other objects of the southern skies. This was to be a completion as well as extension of the survey of the northern heavens undertaken initially by his father William Herschel. He arrived in Cape Town on 15 January 1834 and set up a private telescope at Feldhausen (site of present day Grove Primary School) at Claremont, a suburb of Cape Town. Amongst his other observations during this time was that of the return of Comet Halley. Herschel collaborated with Thomas Maclear, the Astronomer Royal at the Cape of Good Hope and the members of the two families became close friends. During this time, he also witnessed the Great Eruption of Eta Carinae (December 1837).
In addition to his astronomical work, however, this voyage to a far corner of the British empire also gave Herschel an escape from the pressures under which he found himself in London, where he was one of the most sought-after of all British men of science. While in southern Africa, he engaged in a broad variety of scientific pursuits free from a sense of strong obligations to a larger scientific community. It was, he later recalled, probably the happiest time in his life. A village in the contemporary province of Eastern Cape is named after him.
Herschel combined his talents with those of his wife, Margaret, and between 1834 and 1838 they produced 131 botanical illustrations of fine quality, showing the Cape flora. Herschel used a camera lucida to obtain accurate outlines of the specimens and left the details to his wife. Even though their portfolio had been intended as a personal record, and despite the lack of floral dissections in the paintings, their accurate rendition makes them more valuable than many contemporary collections. Some 112 of the 132 known flower studies were collected and published as Flora Herscheliana in 1996. The book also included work by Charles Davidson Bell and Thomas Bowler.
As their home during their stay in the Cape, the Herschels had selected 'Feldhausen' ("Field Houses"), an old estate on the south-eastern side of Table Mountain. Here John set up his reflector to begin his survey of the southern skies.
Herschel, at the same time, read widely. Intrigued by the ideas of gradual formation of landscapes set out in Charles Lyell's Principles of Geology, he wrote to Lyell on 20 February 1836 praising the book as a work that would bring "a complete revolution in [its] subject, by altering entirely the point of view in which it must thenceforward be contemplated" and opening a way for bold speculation on "that mystery of mysteries, the replacement of extinct species by others." Herschel himself thought catastrophic extinction and renewal "an inadequate conception of the Creator" and by analogy with other intermediate causes, "the origination of fresh species, could it ever come under our cognizance, would be found to be a natural in contradistinction to a miraculous process". He prefaced his words with the couplet:
Taking a gradualist view of development and referring to evolutionary descent from a proto-language, Herschel commented:
The document was circulated, and Charles Babbage incorporated extracts in his ninth and unofficial Bridgewater Treatise, which postulated laws set up by a divine programmer. When HMS Beagle called at Cape Town, Captain Robert FitzRoy and the young naturalist Charles Darwin visited Herschel on 3 June 1836. Later on, Darwin would be influenced by Herschel's writings in developing his theory advanced in The Origin of Species. In the opening lines of that work, Darwin writes that his intent is "to throw some light on the origin of species – that mystery of mysteries, as it has been called by one of our greatest philosophers," referring to Herschel. However, Herschel ultimately rejected the theory of natural selection.
Herschel returned to England in 1838, was created a baronet, of Slough in the County of Buckingham, and published Results of Astronomical Observations made at the Cape of Good Hope in 1847. In this publication he proposed the names still used today for the seven then-known satellites of Saturn: Mimas, Enceladus, Tethys, Dione, Rhea, Titan, and Iapetus. In the same year, Herschel received his second Copley Medal from the Royal Society for this work. A few years later, in 1852, he proposed the names still used today for the four then-known satellites of Uranus: Ariel, Umbriel, Titania, and Oberon. A stone obelisk, erected in 1842 and now in the grounds of The Grove Primary School, marks the site where his 20-ft reflector once stood.
Photography
Herschel made numerous important contributions to photography. He made improvements in photographic processes, particularly in inventing the cyanotype process, which became known as blueprints, and variations, such as the chrysotype. In 1839, he made a photograph on glass, which still exists, and experimented with some colour reproduction, noting that rays of different parts of the spectrum tended to impart their own colour to a photographic paper. Herschel made experiments using photosensitive emulsions of vegetable juices, called phytotypes, also known as anthotypes, and published his discoveries in the Philosophical Transactions of the Royal Society of London in 1842. He collaborated in the early 1840s with Henry Collen, portrait painter to Queen Victoria. Herschel originally discovered the platinum process on the basis of the light sensitivity of platinum salts, later developed by William Willis.
Herschel coined the term photography in 1839. Herschel was also the first to apply the terms negative and positive to photography.
Herschel discovered sodium thiosulfate to be a solvent of silver halides in 1819, and informed Talbot and Daguerre of his discovery that this "hyposulphite of soda" ("hypo") could be used as a photographic fixer, to "fix" pictures and make them permanent, after experimentally applying it thus in early 1839.
Herschel's ground-breaking research on the subject was read at the Royal Society in London in March 1839 and January 1840.
Other aspects of Herschel's career
Herschel wrote many papers and articles, including entries on meteorology, physical geography and the telescope for the eighth edition of the Encyclopædia Britannica. He also translated the Iliad of Homer.
In 1823, Herschel published his findings on the optical spectra of metal salts.
Herschel invented the actinometer in 1825 to measure the direct heating power of the Sun's rays, and his work with the instrument is of great importance in the early history of photochemistry.
Herschel proposed a correction to the Gregorian calendar, making years that are multiples of 4000 common years rather than leap years, thus reducing the average length of the calendar year from 365.2425 days to 365.24225. Although this is closer to the mean tropical year of 365.24219 days, his proposal has never been adopted because the Gregorian calendar is based on the mean time between vernal equinoxes (currently about 365.2424 days).
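A quick check of the arithmetic behind this proposal (a sketch in Python; the leap-year rules below are the standard Gregorian ones plus Herschel's extra 4000-year exception):

def is_leap(year, herschel=False):
    # Gregorian rule, optionally with Herschel's proposed 4000-year exception.
    if herschel and year % 4000 == 0:
        return False
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

def mean_year_length(herschel, cycle=4000):
    # Average calendar-year length over one full cycle of the rule.
    days = sum(366 if is_leap(y, herschel) else 365 for y in range(1, cycle + 1))
    return days / cycle

print(mean_year_length(False))   # 365.2425  (Gregorian)
print(mean_year_length(True))    # 365.24225 (with Herschel's correction)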
Herschel was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832, and in 1836, a foreign member of the Royal Swedish Academy of Sciences.
In 1835, the New York Sun newspaper wrote a series of satiric articles that came to be known as the Great Moon Hoax, with statements falsely attributed to Herschel about his supposed discoveries of animals living on the Moon, including batlike winged humanoids.
Several locations are named for him: the village of Herschel in western Saskatchewan, Canada, site of the discovery of Dolichorhynchops herschelensis, a type of plesiosaur; Mount Herschel in Antarctica; the crater J. Herschel on the Moon; and the settlement of Herschel, Eastern Cape and the Herschel Girls' School in Cape Town, South Africa.
While it is commonly accepted that Herschel Island, in the Arctic Ocean, part of the Yukon Territory, was named after him, the entries in the expedition journal of Sir John Franklin state that the latter wished to honour the Herschel family, of which John Herschel's father, Sir William Herschel, and his aunt, Caroline Herschel, are as notable as John.
Family
Herschel married Margaret Brodie Stewart (1810–1884) on 3 March 1829 at St Marylebone Church in London, and was father of the following children:
Caroline Emilia Mary Herschel (31 March 1830 – 29 January 1909), who married the soldier and politician Alexander Hamilton-Gordon
Isabella Herschel (5 June 1831 – 1893)
Sir William James Herschel, 2nd Bt. (9 January 1833 – 1917),
Margaret Louisa Herschel (1834–1861), an accomplished artist
Alexander Stewart Herschel (1836–1907), FRS, FRAS
Col. John Herschel FRS, FRAS, (1837–1921) surveyor
Maria Sophia Herschel (1839–1929)
Amelia Herschel (1841–1926) married Sir Thomas Francis Wade, diplomat and sinologist
Julia Herschel (1842–1933) married on 4 June 1878 to Captain (later Admiral) John Fiot Lee Pearse Maclear
Matilda Rose Herschel (1844–1914), a gifted artist, married William Waterfield (Indian Civil Service)
Francisca Herschel (1846–1932)
Constance Anne Herschel (1855–20 June 1939), mathematician and scientist who became lecturer in natural sciences at Girton College, Cambridge
Death
Herschel died on 11 May 1871 at age 79 at Collingwood, his home near Hawkhurst in Kent. On his death, he was given a national funeral and buried in Westminster Abbey.
His obituary by Henry W Field of London was read to the American Philosophical Society on 1 December 1871.
Arms
Bibliography
In chronological order
(The Encyclopædia Metropolitana was published in 30 vols. from 1817–1845)
In Popular Culture
Sir John Herschel served as the basis for the character of the same name in the radio-musical series Pulp Musicals. Played by Curt Mega, the series features a highly fictionalized version of Herschel.
References
Works cited
Further reading
On Herschel's relationship with Charles Babbage, William Whewell, and Richard Jones, see
External links
Biographical information
R. Derek Wood (2008), 'Fourteenth March 1839, Herschel's Key to Photography'
Herschel Museum of Astronomy
Science in the Making Herschel's papers in the Royal Society's archives
Wikisource copy of a notice from 1823 concerning the star catalogue, published in Astronomische Nachrichten
1792 births
1871 deaths
19th-century English astronomers
Photographers from Buckinghamshire
19th-century English photographers
Alumni of St John's College, Cambridge
Baronets in the Baronetage of the United Kingdom
Burials at Westminster Abbey
English Christians
English people of German descent
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Astronomical Society
Fellows of the Royal Society
Honorary members of the Saint Petersburg Academy of Sciences
Masters of the Mint
Members of the Royal Swedish Academy of Sciences
People educated at Eton College
People from Slough
Pioneers of photography
Presidents of the Royal Astronomical Society
Proto-evolutionary biologists
Recipients of the Copley Medal
Recipients of the Gold Medal of the Royal Astronomical Society
Recipients of the Pour le Mérite (civil class)
Rectors of the University of Aberdeen
Royal Medal winners
Senior Wranglers
Spectroscopists
John
Recipients of the Lalande Prize
Translators of Homer
Wynberg, Cape Town | John Herschel | [
"Physics",
"Chemistry",
"Biology"
] | 3,176 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Non-Darwinian evolution",
"Spectroscopy",
"Biology theories",
"Proto-evolutionary biologists"
] |
43,596 | https://en.wikipedia.org/wiki/United%20States%20Naval%20Observatory | The United States Naval Observatory (USNO) is a scientific and military facility that produces geopositioning, navigation and timekeeping data for the United States Navy and the United States Department of Defense. Established in 1830 as the Depot of Charts and Instruments, it is one of the oldest scientific agencies in the United States, and remains the country's leading facility for astronomical and timing data.
The observatory is located in Northwest Washington, D.C. at the northwestern end of Embassy Row. It is among the few pre-20th century astronomical observatories located in an urban area. In 1893, in an effort to escape light pollution, it was relocated from Foggy Bottom near the city's center, to its Northwest Washington, D.C. location.
The USNO has conducted significant scientific studies throughout its history, including measuring the speed of light, observing solar eclipses, and discovering the moons of Mars. Its achievements include providing data for the first radio time signals, constructing some of the earliest and most accurate telescopes of their kind, and helping develop universal time. The Naval Observatory performs radio VLBI-based positions of quasars for astrometry and geodesy with numerous global collaborators (IERS), in order to produce Earth orientation parameters and to realize the celestial reference system (ICRF).
Aside from its scientific mission, since the 1970s the Naval Observatory campus hosts the official residence of the vice president of the United States.
History
President John Quincy Adams, who in 1825 signed the bill for the creation of a national observatory just before leaving presidential office, had intended for it to be called the National Observatory.
The names "National Observatory" and "Naval Observatory" were both used for 10 years, until the Secretary of the Navy officially adopted the latter.
Adams had made protracted efforts to bring astronomy to a national level. He spent many nights at the observatory, watching and charting the stars, which had always been one of his interests.
Established by order of the United States Secretary of the Navy John Branch on 6 December 1830 as the Depot of Charts and Instruments, the Observatory rose from humble beginnings: Placed under the command of Lieutenant Louis M. Goldsborough, with an annual budget of $330; its primary function was the restoration, repair, and rating of navigational instruments.
Old Naval Observatory
It was established as a national observatory in 1842 by federal law and a Congressional appropriation of $25,000. Lt. J.M. Gilliss was put in charge of "obtaining the instruments needed and books." Lt. Gilliss visited the principal observatories of Europe with the mission to purchase telescopes and other scientific devices, and books.
The observatory's primary mission was to care for the United States Navy's marine chronometers, charts, and other navigational equipment. It calibrated ships' chronometers by timing the transit of stars across the meridian. It opened in 1844 in Foggy Bottom, north of the site of the Lincoln Memorial and west of the White House.
In 1893, the observatory moved to its current location in Northwest Washington, D.C., located on a 2,000-foot-diameter circle of land atop "Observatory Hill", overlooking Massachusetts Avenue.
In 2017, the facilities were listed on the National Register of Historic Places.
The time ball
The first superintendent was Navy Commander M.F. Maury. Maury had the world's first vulcanized time ball, created to his specifications by Charles Goodyear for the U.S. Observatory. Placed into service in 1845, it was the first time ball in the United States and the 12th in the world. Maury kept accurate time by the stars and planets.
The time ball was dropped every day except Sunday, precisely at the astronomically defined moment of mean solar noon; this enabled all ships and civilians within sight to know the exact time. By the end of the American Civil War, the Observatory's clocks were linked via telegraph to ring the alarm bells in all of the Washington, D.C. firehouses three times a day.
The USNO held a one-off time-ball re-enactment for the year-2000 celebration.
Nautical Almanac Office
In 1849, the Nautical Almanac Office (NAO) was established in Cambridge, Massachusetts as a separate organization. In 1866, it was moved to Washington, D.C., operating near Fort Myer. It relocated to the U.S. Naval Observatory grounds in 1893.
On 20 September 1894, the NAO became a "branch" of USNO; however, it remained autonomous for several years.
The site houses the largest astronomy library in the United States (and the largest astrophysical periodicals collection in the world). The library includes a large collection of rare physics and astronomy books from the past millennium.
Measuring the astronomical unit
An early scientific duty assigned to the Observatory was the U.S. contribution to the definition of the Astronomical Unit, or AU, which defines a standard mean distance between the Sun and the Earth. This was conducted under the auspices of the congressionally-funded U.S. Transit of Venus Commission. The astronomical measurements taken of the transit of Venus by a number of countries since 1639 resulted in a progressively more accurate definition of the AU.
Relying strongly on photographic methods, the naval observers returned 350 photographic plates in 1874, and 1,380 measurable plates in 1882. The results of the surveys conducted simultaneously from several locations around the world (for each of the two transits) produced a final value of the solar parallax, after adjustments, of 8.809″, with a probable error of 0.0059″, corresponding to a U.S.-determined Earth-Sun distance of roughly 149 million kilometres (about 93 million miles). The calculated distance was a significant improvement over several previous estimates.
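The solar parallax is the angle subtended by the Earth's equatorial radius at the distance of the Sun, so the distance follows directly from the measured angle. A rough back-of-the-envelope conversion (a sketch in Python; the modern value of the Earth's radius is assumed here purely for illustration):

import math

earth_radius_km = 6378.1            # Earth's equatorial radius, km (modern value, assumed)
parallax_arcsec = 8.809             # solar parallax from the transit campaigns

parallax_rad = math.radians(parallax_arcsec / 3600.0)
distance_km = earth_radius_km / math.sin(parallax_rad)

print(distance_km)                  # roughly 1.49e8 km
print(distance_km / 1.609344)       # roughly 9.3e7 miles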
The 26 inch and 40 inch refractors
The telescope used for the discovery of the Moons of Mars was the 26 inch (66 cm) refractor telescope, then located at Foggy Bottom, Washington, DC. In 1893 it was moved to its Northwest DC location.
In 1934, the largest optical telescope installed at USNO saw "first light". This 40 inch aperture instrument was also the second (and final) telescope made by famed optician, George Willis Ritchey. The Ritchey–Chrétien telescope design has since become the de facto optical design for nearly all major telescopes, including the famed Keck telescopes and the space-borne Hubble Space Telescope.
Because of light pollution in the Washington metropolitan area, USNO relocated the 40 inch telescope to Flagstaff, Arizona. A new Navy command, now called the USNO Flagstaff Station (NOFS), was established there. Those operations began in 1955. Within a decade, the Navy's largest telescope, the 61 inch "Kaj Strand Astrometric Reflector" was built; it saw light at Flagstaff in 1964.
USNO continues to maintain its dark-sky observatory, NOFS, near Flagstaff. This facility now oversees the Navy Precision Optical Interferometer.
History of the time service
By the early 1870s the USNO daily noon-time signal was distributed electrically, nationwide, via the Western Union Telegraph Company. Time was also "sold" to the railroads and was used in conjunction with railroad chronometers to schedule American rail transport. Early in the 20th century, the service was broadcast by radio, with Arlington time signal available to those with wireless receivers.
In November 1913 the Paris Observatory, using the Eiffel Tower as an antenna, exchanged sustained wireless (radio) signals with the U.S. Naval Observatory to determine the exact difference of longitude between the two institutions, via an antenna in Arlington, Virginia.
The U.S. Naval Observatory in Washington continues to be a major authority in the areas of Precise Time and Time Interval, Earth orientation, astrometry, and celestial observation. In collaboration with many national and international scientific establishments, it determines the timing and astronomical data required for accurate navigation, astrometry, and fundamental astronomy, and calculation methods — and distributes this information (such as star catalogs) on-line and in the annual publications The Astronomical Almanac and The Nautical Almanac.
Former USNO director Gernot M. R. Winkler initiated the "Master clock" service that the USNO still operates, and which provides precise time to the GPS satellite constellation run by the United States Space Force. The alternate Master Clock time service continues to operate at Schriever Space Force Base in Colorado.
Departments
In 1990 two departments were established: Orbital Mechanics and Astronomical Applications, with the Nautical Almanac Office a division in Astronomical Applications. The Orbital Mechanics Department operated under P. Kenneth Seidelmann until 1994, when the department was abolished and its functions transferred to a group within the Astronomical Applications Department.
In 2010, USNO's astronomical 'department' known as the Naval Observatory Flagstaff Station (NOFS) was officially made autonomous as an Echelon 5 command, separate from, but still reporting to the USNO in Washington. In the alpine woodlands above 7,000 feet altitude outside Flagstaff, Arizona, NOFS performs its national, Celestial Reference Frame (CRF) mission under dark skies in that region.
Official residence of the vice president of the United States
A house situated on the grounds of the observatory, at Number One Observatory Circle, has been the official residence of the vice president of the United States since 1974. It is protected by tight security control enforced by the Secret Service. The house is separated from the Naval Observatory.
Before serving as the vice president's residence, it was that of the observatory's superintendent, and later the chief of naval operations.
Time service
The U.S. Naval Observatory operates two “Master Clock” facilities, one in Washington, DC, and the other at Schriever SFB near Colorado Springs, CO.
The primary facility, in Washington, D.C. maintains 57 HP/Agilent/Symmetricom 5071A-001 high performance cesium atomic clocks and 24 hydrogen masers.
The alternate facility, at Schriever Space Force Base, maintains 12 cesium clocks and 3 masers.
The observatory also operates four rubidium atomic fountain clocks, which have a stability reaching 7 × 10⁻¹⁶. The observatory plans to build several more of this type for use at its two facilities.
The clocks used for the USNO timescale are kept in 19 environmental chambers, whose temperatures are kept constant to within 0.1°C. The relative humidities are kept constant to within 1% in all maser enclosures and in most cesium enclosures. Time-scale management only uses the clocks in Washington, DC, and of those, preferentially uses the clocks that currently conform reliably to the time reports of the majority. It is the combined ‘vote’ of the ensemble that constitutes the otherwise-fictitious “Master Clock”. The time-scale computations on 7 June 2007 weighted 70 of the clocks into the standard.
The U.S. Naval Observatory provides public time service via 26 NTP servers on the public Internet, and via telephone voice announcements:
+1 202 762-1401 (Washington, DC)
+1 202 762-1069 (Washington, DC)
+1 719 567-6742 (Colorado Springs, CO)
The voice of actor Fred Covington (1928–1993) has been announcing the USNO time since 1978.
The voice announcements always begin with the local time (daylight or standard), and include a background of 1-second ticks. Local time announcements are made on the minute, and 15, 30, and 45 seconds after the minute. Coordinated Universal Time (UTC) is announced 5 seconds after the local time. Upon connecting, only the second-marking ticks are heard for the few seconds before the next scheduled local time announcement.
The USNO also operates a modem time service, and provides time to the Global Positioning System.
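Network time from an NTP server can also be queried programmatically. The sketch below uses the third-party Python package ntplib; the server hostname is a placeholder to be replaced with an actual NTP server:

import datetime
import ntplib

NTP_SERVER = "ntp.example.org"      # placeholder hostname, replace with a real NTP server

client = ntplib.NTPClient()
response = client.request(NTP_SERVER, version=3)

utc = datetime.datetime.fromtimestamp(response.tx_time, tz=datetime.timezone.utc)
print("server time (UTC):", utc.isoformat())
print("clock offset (s) :", response.offset)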
Instrument shop
The United States Naval Observatory Instrument shop has been designing and manufacturing precise instrumentation since the early 1900s.
Publications
Astronomical Observations made at the U.S. Naval Observatory (USNOA) (v. 1–6: 1846–1867)
Astronomical and Meteorological Observations made at the U.S. Naval Observatory (USNOM) (v. 1–22: 1862–1880)
Observations made at the U.S. Naval Observatory (USNOO) (v. 1–7: 1887–1893)
Publications of the U.S. Naval Observatory, Second Series (PUSNO) (v. 1–16: 1900–1949)
U.S. Naval Observatory Circulars
The Astronomical Almanac
The Nautical Almanac
The Air Almanac
Astronomical Phenomena
See also
Astronomy and observatories
dark-sky movement
List of astronomical observatories
The Old Naval Observatory
USNO Flagstaff Station
Technology and technical resources
Coordinated Universal Time (UTC)
Naval Observatory Vector Astrometry Subroutines
railroad chronometer
time ball
Time signal
time service radio stations WWV, WWVH, & WWVB
USNO personnel
Rear Admiral Samuel P. Carter
Lieutenant James Melville Gilliss
Lieutenant Louis M. Goldsborough
Commander Matthew Fontaine Maury
astronomer P. Kenneth Seidelmann
director Gernot M. R. Winkler
Notes
References
Further reading
(British edition).
External links
Transcription: Lieut. Matthew Fontaine Maury’s 1847 Letter to President John Quincy Adams on the many details of the United States National Observatory that was later called the "Navy" Observatory
Old photographs at the Paris Observatory
Naval Observatory
Naval Observatory
Naval Observatory
Naval Observatory
Naval Observatory
Time balls
1830 establishments in Washington, D.C.
National Register of Historic Places in Washington, D.C.
Astrometry
Geodesy organizations | United States Naval Observatory | [
"Astronomy"
] | 2,778 | [
"Astrometry",
"Astronomical sub-disciplines"
] |
43,597 | https://en.wikipedia.org/wiki/Exciton | An exciton is a bound state of an electron and an electron hole which are attracted to each other by the electrostatic Coulomb force resulting from their opposite charges. It is an electrically neutral quasiparticle regarded as an elementary excitation primarily in condensed matter, such as insulators, semiconductors, some metals, and in some liquids. It transports energy without transporting net electric charge.
An exciton can form when an electron from the valence band of a crystal is promoted in energy to the conduction band e.g., when a material absorbs a photon. Promoting the electron to the conduction band leaves a positively charged hole in the valence band. Here 'hole' represents the unoccupied quantum mechanical electron state with a positive charge, an analogue in crystal of a positron. Because of the attractive coulomb force between the electron and the hole, a bound state is formed, akin to that of the electron and proton in a hydrogen atom or the electron and positron in positronium. Excitons are composite bosons since they are formed from two fermions which are the electron and the hole.
Excitons are often treated in the two limiting cases:
The small radius excitons, or Frenkel excitons, in which the electron-hole relative distance is restricted to one or only a few nearest-neighbour unit cells. Frenkel excitons typically occur in insulators and organic semiconductors with relatively narrow allowed energy bands and, accordingly, a rather heavy effective mass.
the large radius excitons are called Wannier-Mott excitons, for which the relative motion of electron and hole in the crystal covers many unit cells. Wannier-Mott excitons are considered as hydrogen-like quasiparticles. The wavefunction of the bound state then is said to be hydrogenic, resulting in a series of energy states in analogy to a hydrogen atom. Compared to a hydrogen atom, the exciton binding energy in a crystal is much smaller and the exciton's size (radius) is much larger. This is mainly because of two effects: (a) Coulomb forces are screened in a crystal, which is expressed as a relative permittivity εr significantly larger than 1, and (b) the effective masses of the electron and hole in a crystal are typically smaller than that of a free electron. Wannier-Mott excitons with binding energies ranging from a few to hundreds of meV, depending on the crystal, occur in many semiconductors including Cu2O, GaAs, other III-V and II-VI semiconductors, and transition metal dichalcogenides such as MoS2.
Excitons give rise to spectrally narrow lines in optical absorption, reflection, transmission and luminescence spectra with the energies below the free-particle band gap of an insulator or a semiconductor. Exciton binding energy and radius can be extracted from optical absorption measurements in applied magnetic fields.
The exciton as a quasiparticle is characterized by the momentum (or wavevector K) describing free propagation of the electron-hole pair as a composite particle in the crystalline lattice in agreement with the Bloch theorem. The exciton energy depends on K and is typically parabolic for the wavevectors much smaller than the reciprocal lattice vector of the host lattice. The exciton energy also depends on the respective orientation of the electron and hole spins, whether they are parallel or anti-parallel. The spins are coupled by the exchange interaction, giving rise to exciton energy fine structure.
In metals and highly doped semiconductors a concept of the Gerald Mahan exciton is invoked where the hole in a valence band is correlated with the Fermi sea of conduction electrons. In that case no bound state in a strict sense is formed, but the Coulomb interaction leads to a significant enhancement of absorption in the vicinity of the fundamental absorption edge also known as the Mahan or Fermi-edge singularity.
History
The concept of excitons was first proposed by Yakov Frenkel in 1931, when he described the excitation of an atomic lattice considering what is now called the tight-binding description of the band structure. In his model the electron and the hole bound by the coulomb interaction are located either on the same or on the nearest neighbouring sites of the lattice, but the exciton as a composite quasi-particle is able to travel through the lattice without any net transfer of charge, which led to many propositions for optoelectronic devices.
Types
Frenkel exciton
In materials with a relatively small dielectric constant, the Coulomb interaction between an electron and a hole may be strong and the excitons thus tend to be small, of the same order as the size of the unit cell. Molecular excitons may even be entirely located on the same molecule, as in fullerenes. This Frenkel exciton, named after Yakov Frenkel, has a typical binding energy on the order of 0.1 to 1 eV. Frenkel excitons are typically found in alkali halide crystals and in organic molecular crystals composed of aromatic molecules, such as anthracene and tetracene. Another example of Frenkel exciton includes on-site d-d excitations in transition metal compounds with partially filled d-shells. While d-d transitions are in principle forbidden by symmetry, they become weakly-allowed in a crystal when the symmetry is broken by structural relaxations or other effects. Absorption of a photon resonant with a d-d transition leads to the creation of an electron-hole pair on a single atomic site, which can be treated as a Frenkel exciton.
Wannier–Mott exciton
In semiconductors, the dielectric constant is generally large. Consequently, electric field screening tends to reduce the Coulomb interaction between electrons and holes. The result is a Wannier–Mott exciton, which has a radius larger than the lattice spacing. Small effective mass of electrons that is typical of semiconductors also favors large exciton radii. As a result, the effect of the lattice potential can be incorporated into the effective masses of the electron and hole. Likewise, because of the lower masses and the screened Coulomb interaction, the binding energy is usually much less than that of a hydrogen atom, typically on the order of . This type of exciton was named for Gregory Wannier and Nevill Francis Mott. Wannier–Mott excitons are typically found in semiconductor crystals with small energy gaps and high dielectric constants, but have also been identified in liquids, such as liquid xenon. They are also known as large excitons.
In single-wall carbon nanotubes, excitons have both Wannier–Mott and Frenkel character. This is due to the nature of the Coulomb interaction between electrons and holes in one-dimension. The dielectric function of the nanotube itself is large enough to allow for the spatial extent of the wave function to extend over a few to several nanometers along the tube axis, while poor screening in the vacuum or dielectric environment outside of the nanotube allows for large (0.4 to ) binding energies.
Often more than one band can be chosen as source for the electron and the hole, leading to different types of excitons in the same material. Even high-lying bands can be effective as femtosecond two-photon experiments have shown. At cryogenic temperatures, many higher excitonic levels can be observed approaching the edge of the band, forming a series of spectral absorption lines that are in principle similar to hydrogen spectral series.
3D semiconductors
In a bulk semiconductor, a Wannier exciton has an energy and radius associated with it, called exciton Rydberg energy and exciton Bohr radius respectively. For the energy, we have

E_X = Ry · (μ / m0) / εr²

where Ry is the Rydberg unit of energy (cf. Rydberg constant), εr is the (static) relative permittivity, μ = m_e* m_h* / (m_e* + m_h*) is the reduced mass of the electron and hole, and m0 is the electron mass. Concerning the radius, we have

a_X = a_B · εr · (m0 / μ)

where a_B is the Bohr radius.

For example, in GaAs, we have a relative permittivity of 12.8 and effective electron and hole masses of 0.067 m0 and 0.2 m0 respectively; that gives E_X ≈ 4.2 meV and a_X ≈ 13 nm.
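As a numerical check of these scaling relations, a short sketch (Python; it simply re-evaluates the formulas above with the quoted GaAs parameters):

Ry_eV = 13.605693           # Rydberg unit of energy, eV
a_B_nm = 0.0529177          # Bohr radius, nm

eps_r = 12.8                # static relative permittivity of GaAs
m_e, m_h = 0.067, 0.2       # effective masses, in units of the free-electron mass
mu = m_e * m_h / (m_e + m_h)        # reduced mass, in units of m0

E_X_meV = 1000.0 * Ry_eV * mu / eps_r**2   # exciton Rydberg energy, meV
a_X_nm = a_B_nm * eps_r / mu               # exciton Bohr radius, nm

print(f"binding energy ~ {E_X_meV:.1f} meV")   # about 4 meV
print(f"exciton radius ~ {a_X_nm:.0f} nm")     # about 13 nm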
2D semiconductors
In two-dimensional (2D) materials, the system is quantum confined in the direction perpendicular to the plane of the material. The reduced dimensionality of the system has an effect on the binding energies and radii of Wannier excitons. In fact, excitonic effects are enhanced in such systems.
For a simple screened Coulomb potential, the binding energies take the form of the 2D hydrogen atom:

E_n = Ry_X / (n − 1/2)²,  n = 1, 2, 3, …

In most 2D semiconductors, the Rytova–Keldysh form is a more accurate approximation to the exciton interaction; it replaces the bare Coulomb potential by a potential proportional to e²/(ε0 r0) times a combination of the Struve function H0 and the Bessel function of the second kind Y0 evaluated at the scaled separation κr/r0, where r0 is the so-called screening length, ε0 the vacuum permittivity, e the elementary charge, κ the average dielectric constant of the surrounding media, and r the electron-hole separation. For this potential, no general expression for the exciton energies may be found. One must instead turn to numerical procedures, and it is precisely this potential that gives rise to the nonhydrogenic Rydberg series of the energies in 2D semiconductors.
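Since no closed form exists for the level structure, the potential is usually handled numerically. The sketch below merely evaluates a Rytova–Keldysh-type potential on a grid using SciPy's Struve and Bessel functions; the screening length, dielectric constant, and the exact prefactor convention (factors of π and κ differ between references) are assumptions for illustration only:

import numpy as np
from scipy.special import struve, y0

e = 1.602176634e-19         # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
r0 = 4.0e-9                 # screening length, m (placeholder value)
kappa = 2.5                 # average dielectric constant of the surroundings (placeholder)

def V_rk(r):
    # Electron-hole interaction energy (J) at separation r (m);
    # prefactor convention assumed, see the note above.
    x = kappa * r / r0
    return -(e**2) / (8.0 * eps0 * r0) * (struve(0, x) - y0(x))

r = np.linspace(0.5e-9, 10e-9, 5)
print(V_rk(r) / e)          # interaction energy in eV at a few separations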
Example: excitons in transition metal dichalcogenides (TMDs)
Monolayers of a transition metal dichalcogenide (TMD) are a good and cutting-edge example where excitons play a major role. In particular, in these systems, they exhibit a bounding energy of the order of 0.5 eV with a Coulomb attraction between the hole and the electrons stronger than in other traditional quantum wells. As a result, optical excitonic peaks are present in these materials even at room temperatures.
0D semiconductors
In nanoparticles which exhibit quantum confinement effects and hence behave as quantum dots (also called 0-dimensional semiconductors), excitonic radii are given by

a_X = εr · (m0 / μ) · a_B

where εr is the relative permittivity, μ is the reduced mass of the electron-hole system, m0 is the electron mass, and a_B is the Bohr radius.
Hubbard exciton
In Hubbard excitons, the electron and hole are bound to each other not by the Coulomb interaction but by a magnetic interaction. They are named after the English physicist John Hubbard.

Hubbard excitons were observed for the first time in 2023 through terahertz time-domain spectroscopy. The particles were obtained by applying light to an antiferromagnetic Mott insulator.
Charge-transfer exciton
An intermediate case between Frenkel and Wannier excitons is the charge-transfer (CT) exciton. In molecular physics, CT excitons form when the electron and the hole occupy adjacent molecules. They occur primarily in organic and molecular crystals; in this case, unlike Frenkel and Wannier excitons, CT excitons display a static electric dipole moment. CT excitons can also occur in transition metal oxides, where they involve an electron in the transition metal 3d orbitals and a hole in the oxygen 2p orbitals. Notable examples include the lowest-energy excitons in correlated cuprates or the two-dimensional exciton of TiO2. Irrespective of the origin, the concept of CT exciton is always related to a transfer of charge from one atomic site to another, thus spreading the wave-function over a few lattice sites.
Surface exciton
At surfaces it is possible for so called image states to occur, where the hole is inside the solid and the electron is in the vacuum. These electron-hole pairs can only move along the surface.
Dark exciton
Dark excitons are those in which the electron has a momentum different from that of the hole to which it is bound; that is, they correspond to an optically forbidden transition, which prevents their creation by photon absorption, so phonon scattering is needed to reach their state. They can even outnumber normal bright excitons formed by absorption alone.
Atomic and molecular excitons
Alternatively, an exciton may be described as an excited state of an atom, ion, or molecule, if the excitation is wandering from one cell of the lattice to another.
When a molecule absorbs a quantum of energy that corresponds to a transition from one molecular orbital to another molecular orbital, the resulting electronic excited state is also properly described as an exciton. An electron is said to be found in the lowest unoccupied orbital and an electron hole in the highest occupied molecular orbital, and since they are found within the same molecular orbital manifold, the electron-hole state is said to be bound. Molecular excitons typically have characteristic lifetimes on the order of nanoseconds, after which the ground electronic state is restored and the molecule undergoes photon or phonon emission. Molecular excitons have several interesting properties, one of which is energy transfer (see Förster resonance energy transfer) whereby if a molecular exciton has proper energetic matching to a second molecule's spectral absorbance, then an exciton may transfer (hop) from one molecule to another. The process is strongly dependent on intermolecular distance between the species in solution, and so the process has found application in sensing and molecular rulers.
The hallmark of molecular excitons in organic molecular crystals are doublets and/or triplets of exciton absorption bands strongly polarized along crystallographic axes. In these crystals an elementary cell includes several molecules sitting in symmetrically identical positions, which results in the level degeneracy that is lifted by intermolecular interaction. As a result, absorption bands are polarized along the symmetry axes of the crystal. Such multiplets were discovered by Antonina Prikhot'ko and their genesis was proposed by Alexander Davydov. It is known as 'Davydov splitting'.
Giant oscillator strength of bound excitons
Excitons are lowest excited states of the electronic subsystem of pure crystals. Impurities can bind excitons, and when the bound state is shallow, the oscillator strength for producing bound excitons is so high that impurity absorption can compete with intrinsic exciton absorption even at rather low impurity concentrations. This phenomenon is generic and applicable both to the large radius (Wannier–Mott) excitons and molecular (Frenkel) excitons. Hence, excitons bound to impurities and defects possess giant oscillator strength.
Self-trapping of excitons
In crystals, excitons interact with phonons, the lattice vibrations. If this coupling is weak as in typical semiconductors such as GaAs or Si, excitons are scattered by phonons. However, when the coupling is strong, excitons can be self-trapped. Self-trapping results in dressing excitons with a dense cloud of virtual phonons which strongly suppresses the ability of excitons to move across the crystal. In simpler terms, this means a local deformation of the crystal lattice around the exciton. Self-trapping can be achieved only if the energy of this deformation can compete with the width of the exciton band. Hence, it should be of atomic scale, of about an electron volt.
Self-trapping of excitons is similar to forming strong-coupling polarons but with three essential differences. First, self-trapped exciton states are always of a small radius, of the order of the lattice constant, due to their electric neutrality. Second, there exists a self-trapping barrier separating free and self-trapped states, hence, free excitons are metastable. Third, this barrier enables coexistence of free and self-trapped states of excitons. This means that spectral lines of free excitons and wide bands of self-trapped excitons can be seen simultaneously in absorption and luminescence spectra. While the self-trapped states are of lattice-spacing scale, the barrier has typically a much larger scale. Indeed, its spatial scale is set by the effective mass of the exciton, the exciton-phonon coupling constant, and the characteristic frequency of optical phonons. Excitons are self-trapped when the effective mass and the coupling constant are large, and then the spatial size of the barrier is large compared with the lattice spacing. Transforming a free exciton state into a self-trapped one proceeds as a collective tunneling of the coupled exciton-lattice system (an instanton). Because the barrier is large compared with the lattice spacing, tunneling can be described by a continuum theory. Because both the effective mass and the coupling constant appear in the denominator of the barrier height, the barriers are basically low. Therefore, free excitons can be seen in crystals with strong exciton-phonon coupling only in pure samples and at low temperatures. Coexistence of free and self-trapped excitons was observed in rare-gas solids, alkali-halides, and in the molecular crystal pyrene.
Interaction
Excitons are the main mechanism for light emission in semiconductors at low temperature (when the characteristic thermal energy kT is less than the exciton binding energy), replacing the free electron-hole recombination at higher temperatures.
The existence of exciton states may be inferred from the absorption of light associated with their excitation. Typically, excitons are observed just below the band gap.
When excitons interact with photons a so-called polariton (or more specifically exciton-polariton) is formed. These excitons are sometimes referred to as dressed excitons.
Provided the interaction is attractive, an exciton can bind with other excitons to form a biexciton, analogous to a dihydrogen molecule. If a large density of excitons is created in a material, they can interact with one another to form an electron-hole liquid, a state observed in k-space indirect semiconductors.
Additionally, excitons are integer-spin particles obeying Bose statistics in the low-density limit. In some systems, where the interactions are repulsive, a Bose–Einstein condensed state, called excitonium, is predicted to be the ground state. Some evidence of excitonium has existed since the 1970s but has often been difficult to discern from a Peierls phase. Exciton condensates have allegedly been seen in double quantum well systems. In 2017 Kogar et al. found "compelling evidence" for observed excitons condensing in the three-dimensional semimetal 1T-TiSe2.
Spatially direct and indirect excitons
Normally, excitons in a semiconductor have a very short lifetime due to the close proximity of the electron and hole. However, by placing the electron and hole in spatially separated quantum wells with an insulating barrier layer in between so called 'spatially indirect' excitons can be created. In contrast to ordinary (spatially direct), these spatially indirect excitons can have large spatial separation between the electron and hole, and thus possess a much longer lifetime. This is often used to cool excitons to very low temperatures in order to study Bose–Einstein condensation (or rather its two-dimensional analog).
Fractional excitons
Fractional excitons are a class of quantum particles discovered in bilayer graphene systems under the fractional quantum Hall effect. These excitons form when electrons and holes bind in a two-dimensional material separated by an insulating layer of hexagonal boron nitride. When exposed to strong magnetic fields, these systems display fractionalized excitonic behavior with distinct quantum properties.
See also
Orbiton
Oscillator strength
Plasmon
Polariton superfluid
Trion
References
Quasiparticles
Bosons | Exciton | [
"Physics",
"Materials_science"
] | 4,141 | [
"Matter",
"Bosons",
"Condensed matter physics",
"Quasiparticles",
"Subatomic particles"
] |
43,601 | https://en.wikipedia.org/wiki/Gnuplot | gnuplot is a command-line and GUI program that can generate two- and three-dimensional plots of functions, data, and data fits. The program runs on all major computers and operating systems (Linux, Unix, Microsoft Windows, macOS, FreeDOS, and many others).
Originally released in 1986, its listed authors are Thomas Williams, Colin Kelley, Russell Lang, Dave Kotz, John Campbell, Gershon Elber, Alexander Woo "and many others." Despite its name, this software is not part of the GNU Project.
Features
gnuplot can produce output directly on screen, or in many formats of graphics files, including Portable Network Graphics (PNG), Encapsulated PostScript (EPS), Scalable Vector Graphics (SVG), JPEG and many others. It is also capable of producing LaTeX code that can be included directly in LaTeX documents, making use of LaTeX's fonts and powerful formula notation abilities. The program can be used both interactively and in batch mode using scripts.
gnuplot can read data in multiple formats, including ability to read data on the fly generated by other programs (piping), create multiple plots on one image, do 2D, 3D, contour plots, parametric equations, supports various linear and non-linear coordinate systems, projections, geographic and time data reading and presentation, box plots of various forms, histograms, labels, and other custom elements on the plot, including shapes, text and images, that can be set manually, computed by script or automatically from input data.
gnuplot also provides scripting capabilities, looping, functions, text processing, variables, macros, arbitrary pre-processing of input data (usually across columns), as well as the ability to perform non-linear multi-dimensional multi-set weighted data fitting (see Curve fitting and Levenberg–Marquardt algorithm).
The gnuplot core code is programmed in C. Modular subsystems for output via Qt, wxWidgets, and LaTeX/TikZ/ConTeXt are written in C++ and Lua.
The code below plots three simple functions (a parabola, a sine curve, and a hyperbola) over a fixed axis range:
set title "Some Math Functions"
set xrange [-10:10]
set yrange [-2:2]
set zeroaxis
plot (x/4)**2, sin(x), 1/x
The name of this program was originally chosen to avoid conflicts with a program called "newplot", and was originally a compromise between "llamaplot" and "nplot".
Support for epidemic daily and weekly time formats, added in version 5.4.2, was a result of the need to plot coronavirus pandemic data.
Distribution terms
Despite gnuplot's name, it is not named after, part of or related to the GNU Project, nor does it use the GNU General Public License. It was named as part of a compromise by the original authors, punning on gnu (the animal) and newplot (a planned name that was discarded due to already being used).
Official source code to gnuplot is freely redistributable, but modified versions thereof are not. The gnuplot license allows instead distribution of patches against official releases, optionally accompanied by officially released source code. Binaries may be distributed along with the unmodified source code and any patches applied thereto. Contact information must be supplied with derived works for technical support for the modified software.
Permission to modify the software is granted, but not the right to distribute the complete modified source code. Modifications are to be distributed as patches to the released version.
Despite this restriction, gnuplot is accepted and used by many GNU packages and is widely included in Linux distributions including the stricter ones such as Debian and Fedora. The OSI Open Source Definition and the Debian Free Software Guidelines specifically allow for restrictions on distribution of modified source code, given explicit permission to distribute both patches and source code.
Newer gnuplot modules (e.g. Qt, wxWidgets, and cairo drivers) have been contributed under dual-licensing terms, e.g. gnuplot + BSD or gnuplot + GPL.
GUIs and programs that use gnuplot
Several third-party programs have graphical user interfaces that can be used to generate graphs using gnuplot as the plotting engine. These include:
gretl, a statistics package for econometrics
JGNUPlot, a java-based GUI
Kayali a computer algebra system
xldlas, an old X11 statistics package
gnuplotxyz, an old Windows program
wxPinter, a graphical plot manager for gnuplot
Maxima is a text-based computer algebra system which itself has several third-party GUIs
REDUCE is a text-based computer algebra system; versions using CSL have a GUI and there are several third-party GUIs
Other programs that use gnuplot include:
GNU Octave, a mathematical programming language
statist, a terminal-based program
gplot.pl provides a simpler command-line interface.
feedgnuplot provides a plotting of stored and realtime data from a pipe
ElchemeaAnalytical, an impedance spectroscopy plotting and fitting program developed by DTU Energy
Gnuplot add-in for Microsoft Excel
Calc, the GNU Emacs calculator
Programming and application interfaces
gnuplot can be used from various programming languages to graph data, including C++ (via g3p), Perl (via PDL and other CPAN packages), Python (via gnuplotlib, Gnuplot-py and SageMath), R via (Rgnuplot), Julia (via Gaston.jl), Java (via JavaGnuplotHybrid and jgnuplot), Ruby (via Ruby Gnuplot), Ch (via Ch Gnuplot), Haskell (via Haskell gnuplot), Fortran 95, Smalltalk (Squeak and GNU Smalltalk) and Rust (via RustGnuplot).
gnuplot also supports piping, which is typical of scripts. For script-driven graphics, gnuplot is one of the most popular programs.
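A minimal illustration of driving gnuplot through a pipe (a sketch using Python's standard subprocess module; it assumes a gnuplot executable is on the PATH and that the build provides the png terminal):

import subprocess

# A small gnuplot script with inline data; the result is written to sine.png.
script = """
set terminal png size 640,480
set output 'sine.png'
set title 'Data piped from Python'
plot '-' using 1:2 with linespoints title 'samples'
0 0.00
1 0.84
2 0.91
3 0.14
e
"""

proc = subprocess.run(["gnuplot"], input=script, text=True, check=True)
print("gnuplot exited with code", proc.returncode)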
Gnuplot output formats
Gnuplot allows the user to display or store plots in several ways:
On the console, such as dumb, sixel.
In a desktop window, such as Qt, wxt, X11, aquaterm, win.
Embedded in a web page, such as SVG, HTML5, PNG, JPEG, animated GIF.
File formats designed for document processing, such as PostScript, PDF, cgm, emf, LaTeX variants.
See also
List of graphing software
References
Further reading and external links
Gnuplot 5: an interactive ebook about gnuplot v.5.
gnuplotting: a blog of gnuplot examples and tips
spplotters: a blog of gnuplot examples and tips
gnuplot surprising: a blog of gnuplot examples and tips
gnuplot online: WebAssembly compiled online gnuplot v.5.x
Visualize your data with gnuplot: an IBM tutorial
Articles containing video clips
Computer animation
Cross-platform free software
Data analysis software
Free 3D graphics software
Free educational software
Free mathematics software
Free plotting software
Free software programmed in C
Plotting software
Regression and curve fitting software
Software that uses wxWidgets
Software that uses Qt | Gnuplot | [
"Mathematics"
] | 1,539 | [
"Free mathematics software",
"Mathematical software"
] |
43,604 | https://en.wikipedia.org/wiki/Bud | In botany, a bud is an undeveloped or embryonic shoot and normally occurs in the axil of a leaf or at the tip of a stem. Once formed, a bud may remain for some time in a dormant condition, or it may form a shoot immediately. Buds may be specialized to develop flowers or short shoots or may have the potential for general shoot development. The term bud is also used in zoology, where it refers to an outgrowth from the body which can develop into a new individual.
Overview
The buds of many woody plants, especially in temperate or cold climates, are protected by a covering of modified leaves called scales which tightly enclose the more delicate parts of the bud. Many bud scales are covered by a gummy substance which serves as added protection. When the bud develops, the scales may enlarge somewhat but usually just drop off, leaving a series of horizontally-elongated scars on the surface of the growing stem. By means of these scars one can determine the age of any young branch, since each year's growth ends in the formation of a bud, the formation of which produces an additional group of bud scale scars. Continued growth of the branch causes these scars to be obliterated after a few years so that the total age of older branches cannot be determined by this means.
In many plants, scales do not form over the bud, and the bud is then called a naked bud. The minute underdeveloped leaves in such buds are often excessively hairy. Naked buds are found in some shrubs, like some species of the Sumac and Viburnums (Viburnum alnifolium and V. lantana) and in herbaceous plants. In many of the latter, buds are even more reduced, often consisting of undifferentiated masses of cells in the axils of leaves. A terminal bud occurs on the end of a stem and lateral buds are found on the side. A head of cabbage (see Brassica) is an exceptionally large terminal bud, while Brussels sprouts are large lateral buds.
Since buds are formed in the axils of leaves, their distribution on the stem is the same as that of leaves. There are alternate, opposite, and whorled buds, as well as the terminal bud at the tip of the stem. In many plants buds appear in unexpected places: these are known as adventitious buds.
Often it is possible to find a bud in a remarkable series of gradations of bud scales. In the buckeye, for example, one may see a complete gradation from the small brown outer scale through larger scales which on unfolding become somewhat green to the inner scales of the bud, which are remarkably leaf-like. Such a series suggests that the scales of the bud are in truth leaves, modified to protect the more delicate parts of the plant during unfavorable periods.
Types of buds
Buds are often useful in the identification of plants, especially for woody plants in winter when leaves have fallen. Buds may be classified and described according to different criteria: location, status, morphology, and function.
Botanists commonly use the following terms:
for location:
, when located at the tip of a stem (apical is equivalent but rather reserved for the one at the top of the plant);
axillary, when located in the axil of a leaf (lateral is the equivalent but some adventitious buds may be lateral too);
adventitious, when located elsewhere, for example on the trunk or roots (some adventitious buds may be former axillary ones that are reduced and hidden under the bark, while other adventitious buds are completely new formed ones).
for status:
accessory, for secondary buds formed besides a principal bud (axillary or terminal);
resting, for a bud that forms at the end of a growth season, and then lies dormant until the onset of the next growth season;
dormant or latent, for buds whose growth has been delayed for a rather long time. The term is usable as a synonym of resting, but is better employed for buds waiting undeveloped for years, for example epicormic buds;
pseudoterminal, for an axillary bud taking over the function of a terminal bud (characteristic of species whose growth is sympodial: terminal bud dies and is replaced by the closer axillary bud, for examples beech, persimmon, Platanus have sympodial growth).
for morphology:
scaly or covered (perulate), when scales, also referred to as a perule (lat. perula, perulaei) (which are in fact transformed and reduced leaves) cover and protect the embryonic parts;
naked, when not covered by scales;
hairy, when also protected by hairs (it may apply either to scaly or to naked buds).
for function:
vegetative, only containing vegetative structures: a leaf bud is an embryonic shoot containing leaves;
reproductive, only containing embryonic flower(s): a flower bud contains a single flower while an inflorescence bud contains an inflorescence;
mixed, containing both embryonic leaves and flower(s).
Image gallery
References
Plant physiology
Plant morphology | Bud | [
"Biology"
] | 1,056 | [
"Plant morphology",
"Plant physiology",
"Plants"
] |
43,607 | https://en.wikipedia.org/wiki/Six%20Degrees%20of%20Kevin%20Bacon | Six Degrees of Kevin Bacon or Bacon's Law is a parlor game where players challenge each other to arbitrarily choose an actor and then connect them to another actor via a film that both actors have appeared in together, repeating this process to try to find the shortest path that ultimately leads to prolific American actor Kevin Bacon. It rests on the assumption that anyone involved in the Hollywood film industry can be linked through their film roles to Bacon within six steps. The game's name is a reference to "six degrees of separation", a concept that posits that any two people on Earth are six or fewer acquaintance links apart.
In 2007, Bacon started a charitable organization called SixDegrees.org. In 2020, Bacon started a podcast called The Last Degree of Kevin Bacon.
History
In a January 1994 interview with Premiere magazine, Kevin Bacon mentioned while discussing the film The River Wild that "he had worked with everybody in Hollywood or someone who's worked with them." Following this, a lengthy newsgroup thread which was headed "Kevin Bacon is the Center of the Universe" appeared. In 1994, three Albright College students - Craig Fass, Brian Turtle and Mike Ginelli - invented the game that became known as "Six Degrees of Kevin Bacon" after seeing two movies on television that featured Bacon back to back, Footloose and The Air Up There. During the latter film they began to speculate on how many movies Bacon had been in and the number of people with whom he had worked.
They wrote a letter to talk show host Jon Stewart, telling him that "Kevin Bacon was the center of the entertainment universe" and explaining the game. They appeared on The Jon Stewart Show and The Howard Stern Show with Bacon to explain the game. Bacon admitted that he initially disliked the game because he believed it was ridiculing him, but he eventually came to enjoy it. The three inventors released a book, Six Degrees of Kevin Bacon (), with an introduction written by Bacon. A board game based on the concept was released by Endless Games.
In popular culture
In 1995 Cartoon Network referenced the concept in a commercial, having Velma (from Scooby-Doo) as the central figure in the 'Cartoon Network Universe'. The commercial cites connections as arbitrary as fake appearances, sharing of clothes, or physical resemblance.
The concept was also presented in an episode of the TV show Mad About You dated November 19, 1996, in which a character expressed the opinion that every actor is only three degrees of separation from Kevin Bacon. Bacon spoofed the concept himself in a cameo he performed for the independent film We Married Margo. Playing himself in a 2003 episode of Will and Grace, Bacon connects himself to Val Kilmer through Tom Cruise and jokes "Hey, that was a short one!". The headline of The Onion, a satirical newspaper, on October 30, 2002, was "Kevin Bacon Linked To Al-Qaeda". Bacon provides the voice-over commentary for the NY Skyride attraction at the Empire State Building in New York City. At several points throughout the commentary, Bacon alludes to his connections to Hollywood stars via other actors with whom he has worked.
In Scream 2, written by Kevin Williamson, a sorority sister played by Portia De Rossi refers to Six Degrees of Kevin Bacon. Bacon himself later starred in The Following, also created and written by Williamson, and broadcast on Fox between 2013 and 2015.
The annual 31 Days of Oscar event on the Turner Classic Movies television channel sometimes includes a "360 Degrees of Oscar" strand where each film shown shares an actor with the previous one. It has been used as recently as 2020.
In 2009, Bacon narrated a National Geographic Channel show The Human Family Tree – a program which describes the efforts of that organization's Genographic Project to establish the genetic interconnectedness of all humans. Bacon appeared in a commercial for the Visa check card that referenced the game. In the commercial, Bacon wants to write a check to buy a book, but the clerk asks for his ID, which he does not have. He leaves and returns with a group of people, then says to the clerk, "Okay, I was in a movie with an extra, Eunice, whose hairdresser, Wayne, attended Sunday school with Father O'Neill, who plays racquetball with Dr. Sanjay, who recently removed the appendix of Kim, who dumped you sophomore year. So you see, we're practically brothers."
In 2011, James Franco made reference to Six Degrees of Kevin Bacon while hosting the 83rd Academy Awards. EE began a UK television advertising campaign in November 2012, based on the Six Degrees concept, where Bacon illustrates his connections and draws attention to how the EE 4G network allows similar connectivity.
In "Weird Al" Yankovic's song "Lame Claim to Fame", one of the lines is, "I know a guy who knows a guy who knows a guy who knows a guy who knows a guy who knows Kevin Bacon." American rapper MC Zappa also makes reference to the game in his 2018 song "Level Up (The Ill Cypher)".
The most highly connected nodes of the Internet have been referred to as "the Kevin Bacons of the Web", inasmuch as they enable most users to navigate to most sites in 19 clicks or less.
Bacon numbers
The Bacon number of an actor is the number of degrees of separation they have from Kevin Bacon, as defined by the game. This is an application of the Erdős number concept to the Hollywood movie industry. The higher the Bacon number, the greater the separation from Kevin Bacon the actor is.
The computation of a Bacon number for actor X is a "shortest path" algorithm, applied to the co-stardom network (a minimal code sketch follows the rules below):
Kevin Bacon himself has a Bacon number of 0.
Actors who have worked directly with Kevin Bacon have a Bacon number of 1.
If the lowest Bacon number of any actor with whom X has appeared in any movie is N, X's Bacon number is N+1.
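A minimal sketch of this computation (not from the original article) is a breadth-first search over the co-stardom graph. The graph below is a toy, hand-written example built from the Elvis Presley chain given under Examples, not real filmography data.

```python
from collections import deque

def bacon_number(costars, start, target="Kevin Bacon"):
    """Breadth-first search over the co-stardom graph.

    costars: dict mapping an actor to the set of actors they appeared with.
    Returns the number of links from `start` to `target`, or None if unconnected.
    """
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        actor, dist = queue.popleft()
        for co in costars.get(actor, ()):
            if co == target:
                return dist + 1
            if co not in seen:
                seen.add(co)
                queue.append((co, dist + 1))
    return None  # no chain of shared films connects the two actors

# Illustrative graph only: Ed Asner links Elvis Presley to Kevin Bacon.
graph = {
    "Elvis Presley": {"Ed Asner"},
    "Ed Asner": {"Elvis Presley", "Kevin Bacon"},
    "Kevin Bacon": {"Ed Asner"},
}
print(bacon_number(graph, "Elvis Presley"))  # prints 2
```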
Examples
Elvis Presley was in Change of Habit (1969) with Ed Asner. Ed Asner was in JFK (1991) with Kevin Bacon. Therefore, Asner has a Bacon number of 1, and Presley (who never appeared in a film with Bacon) has a Bacon number of 2.
Ian McKellen was in X-Men: Days of Future Past (2014) with Michael Fassbender and James McAvoy. McAvoy and Fassbender were in X-Men: First Class (2011) with Kevin Bacon. Therefore, McAvoy and Fassbender have Bacon numbers of 1, and McKellen has a Bacon number of 2.
Because some people have both a finite Bacon and a finite Erdős number because of acting and publications, there are a rare few who have a finite Erdős–Bacon number, which is defined as the sum of a person's independent Erdős and Bacon numbers.
Photography book
Inspired by the game, the British photographer Andy Gotts tried to reach Kevin Bacon through photographic links instead of film links.
Gotts wrote to 300 actors asking to take their pictures and received permission only from Joss Ackland. Ackland then suggested that Gotts photograph Greta Scacchi, with whom he had appeared in the film White Mischief. Gotts proceeded from there, asking each actor to refer him to one or more friends or colleagues. Eventually, Christian Slater referred him to Bacon. Gotts' photograph of Bacon completed the project, eight years after it began. Gotts published the photos in a book, Degrees (), with text by Alan Bates, Pierce Brosnan, and Bacon.
See also
Small-world experiment
Morphy Number, connections via chess games to Paul Morphy
Shusaku number, equivalent in the Go world with Honinbo Shusaku
Erdős number, equivalent for Mathematicians with Paul Erdős
References
External links
The Oracle of Bacon computes the Bacon number of any actor or actress
Six Degrees of Lois Weisberg by Malcolm Gladwell
Endless Games games
Games of mental skill
Separation numbers
de:Bacon-Zahl
it:Kevin Bacon#Il numero di Bacon | Six Degrees of Kevin Bacon | [
"Mathematics"
] | 1,657 | [
"Separation numbers",
"Mathematical objects",
"Numbers"
] |
43,661 | https://en.wikipedia.org/wiki/Cadenza | In music, a cadenza, (from , meaning cadence; plural, cadenze ) is, generically, an improvised or written-out ornamental passage played or sung by a soloist(s), usually in a "free" rhythmic style, and often allowing virtuosic display. During this time the accompaniment will rest, or sustain a note or chord. Thus an improvised cadenza is indicated in written notation by a fermata in all parts. A cadenza will usually occur over either the final or penultimate note in a piece, the lead-in (), or the final or penultimate note in an important subsection of a piece. A cadenza can also be found before a final coda or ritornello.
Origin
Initially, cadenzas were simpler and more structured - a performer would add small embellishments such as trills to the end of cadences. These small embellishments of the early cadenza did not affect meter. However, as the improvised embellishments continued, they became longer and more thought out. This made way for the 'composed' cadenza, which ultimately progressed into the 'free' metered feel more commonly associated with cadenzas today. Performers are able to play without being tied to meter or a strict time, and accompanists in the orchestra await their entrance.
In concerti
The term cadenza often refers to a portion of a concerto in which the orchestra stops playing, leaving the soloist to play alone in free time (without a strict, regular pulse) and can be written or improvised, depending on what the composer specifies. Sometimes, a cadenza will include small parts for other instruments besides the soloist; an example is in Sergei Rachmaninoff's Piano Concerto No. 3, where a solo flute, clarinet and horn are used over rippling arpeggios in the piano. A cadenza normally occurs near the end of the first movement, though it can be at any point in a concerto. An example is Tchaikovsky's First Piano Concerto, where in the first five minutes a cadenza is used. The cadenza is usually the most elaborate and virtuosic part that the solo instrument plays during the whole piece. At the end of the cadenza, the orchestra re-enters, and generally finishes off the movement on their own, or, less often, with the solo instrument.
Cadential trill
Typically during the classical period, a solo cadenza in a concerto would end with a trill, usually on the supertonic, preceding the re-entry of the orchestra for the movement's coda. Extended cadential trills were frequent in Mozart's piano concerti; they may also be found in violin concerti and concerti for stringed instruments of the period up to the early 19th century (see illustration at head of this article).
As a vocal flourish
The cadenza was originally, and remains, a vocal flourish improvised by a performer to elaborate a cadence in an aria. It was later used in instrumental music, and soon became a standard part of the concerto. Cadenzas for voice and wind instruments were to be performed in one breath, and they should not use distant keys. Originally, it was improvised in this context as well, but during the 19th century, composers began to write cadenzas out in full. Third parties also wrote cadenzas for works in which it was intended by the composer to be improvised, so the soloist could have a well formed solo that they could practice in advance. Some of these have become so widely played and sung that they are effectively part of the standard repertoire, as is the case with Joseph Joachim's cadenza for Johannes Brahms' Violin Concerto, Beethoven's set of cadenzas for Mozart's Piano Concerto no. 20, and Estelle Liebling's edition of cadenzas for operas such as Donizetti's La fille du régiment and Lucia di Lammermoor.
In jazz
Perhaps the most notable deviations from this tendency towards written (or absent) cadenzas are to be found in jazz, most often at the end of a ballad, though cadenzas in this genre are usually brief. Saxophonist John Coltrane, however, usually improvised an extended cadenza when performing "I Want To Talk About You", in which he showcased his predilections for scalar improvisation and multiphonics. The recorded examples of "I Want To Talk About You" (Live at Birdland and Afro Blue Impressions) are approximately 8 minutes in length, with Coltrane's unaccompanied cadenza taking up approximately 3 minutes. More sardonically, jazz critic Martin Williams once described Coltrane's improvisations on "Africa/Brass" as "essentially extended cadenzas to pieces that never get played." Equally noteworthy is saxophonist Sonny Rollins' shorter improvised cadenza at the close of "Three Little Words" (Sonny Rollins on Impulse!).
Cadenzas are also found in instrumental solos with piano or other accompaniment, where they are placed near the beginning or near the end or sometimes in both places (e.g. the cornet solo "The Maid of the Mist" by Herbert L. Clarke, or the end of "Think of Me" in Andrew Lloyd Webber's The Phantom of the Opera, where Christine Daaé sings a short but involved cadenza).
Notable examples
Concertos are not the only pieces that feature cadenzas; Scena di Canta Gitano, the fourth movement of Nikolai Rimsky-Korsakov's Capriccio Espagnol, contains cadenzas for horns and trumpets, violin, flute, clarinet, and harp in its beginning section.
Johann Strauss II unusually wrote a cadenza-like solo for cello and flute for the final section of his Emperor Waltz, before the piece is brought to an end by a round of trumpets and then the whole orchestra.
The second movement of Bach's third Brandenburg Concerto consists of just two chords; it is generally taken to indicate a cadenza to be improvised around that cadence.
The first movement of Bach's fifth Brandenburg Concerto features an extensive written cadenza for harpsichord.
The coloratura arias of bel canto composers Gaetano Donizetti, Vincenzo Bellini, and Gioachino Rossini.
Mozart wrote the cadenzas for violin and viola duet in the first and second movements of the Sinfonia Concertante for Violin, Viola, and Orchestra, K. 364.
Mozart wrote a cadenza into the third and final movement of Piano Sonata in B-flat major, K. 333, which was an unusual (but not unique) choice at that time because the movement is otherwise in sonata-rondo form.
Beethoven's "Emperor" Concerto contains a notated cadenza. It begins with a cadenza that is partly accompanied by the orchestra. Later in the first movement, the composer specifies that the soloist should play the music that is written out in the score, and not add a cadenza on one's own.
Beethoven famously included a cadenza-like solo for oboe in the recapitulation section of the first movement of his Symphony No. 5.
Tchaikovsky's first piano concerto is notable not only for having a cadenza within the first few minutes of the first movement, but also for having a second – substantially longer – cadenza in a more conventional place, near the end of the movement.
Rachmaninoff's Piano Concerto No. 3, in which the first movement features a long and incredibly difficult toccata-like cadenza with an even longer alternative or ossia cadenza written in a heavier chordal style. Both cadenzas lead to an identical section with arpeggios in the piano and a solo flute accompanying, before the cadenza ends quietly.
Fritz Kreisler's cadenzas for the first and third movements of Beethoven's Violin Concerto.
Aaron Copland uses a cadenza in his Clarinet Concerto to connect the two movements.
Karlheinz Stockhausen composed five ensemble cadenzas in his wind quintet Zeitmaße (1955–1956), cadenzas for piccolo trumpet and piccolo in Luzifers Tanz (1983), and a cadenza for cor anglais in his trio Balance (2007)
Karol Szymanowski's two violin concertos both feature cadenzas written by the violinist who was intended to play them, Paweł Kochański.
In the third movement of Elgar's Violin Concerto, there is an unexpected cadenza in which the orchestra supports the solo with a pizzicato tremolando effect ("cadenza accompagnato").
Franz Liszt's Hungarian Rhapsody No. 2 for piano contains the instruction cadenza ad libitum before the final coda, meaning it is at the pianist's discretion that such a cadenza is added. Whilst most performers prefer to decline the invitation, some pianists such as Alfred Cortot, Sergei Rachmaninoff and Marc-André Hamelin have produced notable cadenzas for the work.
Pianists Chick Corea and Makoto Ozone incorporated jazz cadenzas into an otherwise traditional performance in Japan of the Mozart Double Piano Concerto.
Rimsky-Korsakov's Scheherazade features numerous cadenzas for violin.
Mozart wrote a cadenza in his own Horn Concerto No. 3, towards the end of the first of three movements.
Sergei Prokofiev's second piano concerto contains a taxing five-minute cadenza that closes out the first movement.
In Dmitri Shostakovich's first cello concerto the third movement on its own is a cadenza connecting the second and fourth movements.
Carlos Chávez's Violin Concerto has a seven-minute unaccompanied cadenza as the third of its five main sections, despite the fact that the soloist plays almost without a break throughout the rest of the 35-minute-long composition.
Composed cadenzas
Composers who have written cadenzas for other performers in works not their own include:
Carl Baermann's cadenza for the second movement of Mozart's Clarinet Concerto.
Ludwig van Beethoven wrote cadenzas for Mozart's Piano Concerto No. 20 in D minor first and third movements.
Joseph Joachim wrote a cadenza for Brahms's Violin Concerto.
Benjamin Britten wrote a cadenza for Haydn's Cello Concerto No. 1 in C for Mstislav Rostropovich.
David Johnstone wrote A Manual of Cadenzas and Cadences for Cello, pub. Creighton's Collection (2007).
Wilhelm Kempff wrote cadenzas for Beethoven's first four piano concertos.
Clara Schumann wrote a cadenza for Beethoven's Piano Concerto No. 3.
Karlheinz Stockhausen composed cadenzas for two Mozart concerti for wind instruments (flute and clarinet), for Kathinka Pasveer and Suzanne Stephens, respectively, and one cadenza each for the trumpet concertos by Leopold Mozart and Joseph Haydn, for his son Markus.
Richard Strauss wrote a vocal cadenza in 1919 for soprano Elisabeth Schumann to sing in Mozart's solo motet Exsultate, jubilate. This cadenza was sung by Kathleen Battle in her recording.
Friedrich Wührer composed and published cadenzas for Mozart's piano concerti in C major, K. 467; C minor, K. 491; and D major, K. 537.
Sergei Rachmaninoff wrote a cadenza for Liszt's Hungarian Rhapsody No. 2 and was recorded playing the piece with this cadenza in 1919.
Alfred Schnittke wrote two cadenzas for Beethoven's Violin Concerto, of which the first includes musical quotations from violin concertos of Berg, Brahms, Bartók (Concertos No. 1 and No. 2), Shostakovich (Concerto No. 1), as well as from Beethoven's 7th Symphony. Schnittke also wrote a cadenza for the first movement of Mozart's Piano Concerto No. 24 in 1975.
Fritz Kreisler composed a half polyphonic cadenza for Beethoven's Violin Concerto.
John Williams composed a 6-minute segment consisting of a cadenza, a series of variations, and a few more elaborations to go over the opening credits of the 1971 film Fiddler on the Roof, performed by violinist Isaac Stern.
Alma Deutscher composed a cadenza for Mozart's 8th Piano Concerto when she was ten.
David Popper composed a set of cadenzas for 5 different concertos (Haydn's Concerto No. 2 in D major, Op. 101; Saint Saëns' Cello Concerto No. 1 in A minor, Op. 33; Schumann's Cello Concerto in A minor, Op. 129; Volkmann's Cello Concerto in A minor, Op. 33; and Molique's Cello Concerto in D major, Op. 45).
Émile Sauret wrote a cadenza for Paganini's Violin Concerto No. 1, Op. 6.
References
Further reading
Badura-Skoda, Eva, et al. "Cadenza". Grove Music Online ed. L. Macy (subscription required). Accessed 2007-04-06.
Lawson, Colin (1999). The Historical Performance of Music: An Introduction, p. 75–76. .
External links
Cadences
Formal sections in music analysis
Italian opera terminology
Improvisation
Music performance
Ornamentation
Solo music | Cadenza | [
"Technology"
] | 2,747 | [
"Components",
"Formal sections in music analysis"
] |
43,705 | https://en.wikipedia.org/wiki/Cryptocrystalline | Cryptocrystalline is a rock texture made up of such minute crystals that its crystalline nature is only vaguely revealed even microscopically in thin section by transmitted polarized light. Among the sedimentary rocks, chert and flint are cryptocrystalline. Carbonado, a form of diamond, is also cryptocrystalline. Volcanic rocks, especially of the felsic type such as felsites and rhyolites, may have a cryptocrystalline groundmass as distinguished from pure obsidian (felsic) or tachylyte (mafic), which are natural rock glasses. Agate and onyx are examples of cryptocrystalline silica (chalcedony). The quartz crystals in chalcedony are so tiny that they cannot be distinguished with the naked eye.
See also
List of rock textures
Macrocrystalline
Microcrystalline
Nanocrystalline
Rock microstructure
References
Crystals
Lithics
Petrology | Cryptocrystalline | [
"Chemistry",
"Materials_science"
] | 204 | [
"Crystallography",
"Crystals"
] |
43,709 | https://en.wikipedia.org/wiki/Radiation%20pressure | Radiation pressure (also known as light pressure) is mechanical pressure exerted upon a surface due to the exchange of momentum between the object and the electromagnetic field. This includes the momentum of light or electromagnetic radiation of any wavelength that is absorbed, reflected, or otherwise emitted (e.g. black-body radiation) by matter on any scale (from macroscopic objects to dust particles to gas molecules). The associated force is called the radiation pressure force, or sometimes just the force of light.
The forces generated by radiation pressure are generally too small to be noticed under everyday circumstances; however, they are important in some physical processes and technologies. This particularly includes objects in outer space, where it is usually the main force acting on objects besides gravity, and where the net effect of a tiny force may have a large cumulative effect over long periods of time. For example, had the effects of the Sun's radiation pressure on the spacecraft of the Viking program been ignored, the spacecraft would have missed Mars orbit by about . Radiation pressure from starlight is crucial in a number of astrophysical processes as well. The significance of radiation pressure increases rapidly at extremely high temperatures and can sometimes dwarf the usual gas pressure, for instance, in stellar interiors and thermonuclear weapons. Furthermore, large lasers operating in space have been suggested as a means of propelling sail craft in beam-powered propulsion.
Radiation pressure forces are the bedrock of laser technology and the branches of science that rely heavily on lasers and other optical technologies. That includes, but is not limited to, biomicroscopy (where light is used to irradiate and observe microbes, cells, and molecules), quantum optics, and optomechanics (where light is used to probe and control objects like atoms, qubits and macroscopic quantum objects). Direct applications of the radiation pressure force in these fields are, for example, laser cooling (the subject of the 1997 Nobel Prize in Physics), quantum control of macroscopic objects and atoms (2012 Nobel Prize in Physics), interferometry (2017 Nobel Prize in Physics) and optical tweezers (2018 Nobel Prize in Physics).
Radiation pressure can equally well be accounted for by considering the momentum of a classical electromagnetic field or in terms of the momenta of photons, particles of light. The interaction of electromagnetic waves or photons with matter may involve an exchange of momentum. Due to the law of conservation of momentum, any change in the total momentum of the waves or photons must involve an equal and opposite change in the momentum of the matter it interacted with (Newton's third law of motion), as is illustrated in the accompanying figure for the case of light being perfectly reflected by a surface. This transfer of momentum is the general explanation for what we term radiation pressure.
Discovery
Johannes Kepler put forward the concept of radiation pressure in 1619 to explain the observation that a tail of a comet always points away from the Sun.
The assertion that light, as electromagnetic radiation, has the property of momentum and thus exerts a pressure upon any surface that is exposed to it was published by James Clerk Maxwell in 1862, and proven experimentally by Russian physicist Pyotr Lebedev in 1900 and by Ernest Fox Nichols and Gordon Ferrie Hull in 1901. The pressure is very small, but can be detected by allowing the radiation to fall upon a delicately poised vane of reflective metal in a Nichols radiometer (this should not be confused with the Crookes radiometer, whose characteristic motion is not caused by radiation pressure but by air flow caused by temperature differentials.)
Theory
Radiation pressure can be viewed as a consequence of the conservation of momentum given the momentum attributed to electromagnetic radiation. That momentum can be equally well calculated on the basis of electromagnetic theory or from the combined momenta of a stream of photons, giving identical results as is shown below.
Radiation pressure from momentum of an electromagnetic wave
According to Maxwell's theory of electromagnetism, an electromagnetic wave carries momentum. Momentum will be transferred to any surface it strikes that absorbs or reflects the radiation.
Consider the momentum transferred to a perfectly absorbing (black) surface. The energy flux (irradiance) of a plane wave is calculated using the Poynting vector S = E × H, the cross product of the electric field vector E and the magnetic field's auxiliary field vector (or magnetizing field) H. The magnitude of the Poynting vector, denoted by S, divided by the speed of light gives the flux of linear momentum per unit area, i.e. the pressure, of the electromagnetic field. Dimensionally, the Poynting vector is power per unit area (W/m2), which is the speed of light, c (in m/s), times pressure (in N/m2). That pressure is experienced as radiation pressure on the surface:
P = ⟨S⟩/c = I_f/c,
where P is pressure (usually in pascals), I_f is the incident irradiance (usually in W/m2) and c is the speed of light in vacuum. Here, I_f denotes the time-averaged magnitude ⟨S⟩ of the Poynting vector.
If the surface is planar at an angle α to the incident wave, the intensity across the surface will be geometrically reduced by the cosine of that angle and the component of the radiation force against the surface will also be reduced by the cosine of α, resulting in a pressure:
P = (I_f/c) cos²α.
The momentum from the incident wave is in the same direction of that wave. But only the component of that momentum normal to the surface contributes to the pressure on the surface, as given above. The component of that force tangent to the surface is not called pressure.
Radiation pressure from reflection
The above treatment for an incident wave accounts for the radiation pressure experienced by a black (totally absorbing) body. If the wave is specularly reflected, then the recoil due to the reflected wave will further contribute to the radiation pressure. In the case of a perfect reflector, this pressure will be identical to the pressure caused by the incident wave:
P_reflect = (I_f/c) cos²α,
thus doubling the net radiation pressure on the surface:
P_net = P_incident + P_reflect = 2 (I_f/c) cos²α.
For a partially reflective surface, the second term must be multiplied by the reflectivity (also known as reflection coefficient of intensity), so that the increase is less than double. For a diffusely reflective surface, the details of the reflection and geometry must be taken into account, again resulting in an increased net radiation pressure of less than double.
Radiation pressure by emission
Just as a wave reflected from a body contributes to the net radiation pressure experienced, a body that emits radiation of its own (rather than reflecting it) obtains a radiation pressure again given by the irradiance of that emission in the direction normal to the surface, I_e:
P_emit = I_e/c.
The emission can be from black-body radiation or any other radiative mechanism. Since all materials emit black-body radiation (unless they are totally reflective or at absolute zero), this source for radiation pressure is ubiquitous but usually tiny. However, because black-body radiation increases rapidly with temperature (as the fourth power of temperature, given by the Stefan–Boltzmann law), radiation pressure due to the temperature of a very hot object (or due to incoming black-body radiation from similarly hot surroundings) can become significant. This is important in stellar interiors.
Radiation pressure in terms of photons
Electromagnetic radiation can be viewed in terms of particles rather than waves; these particles are known as photons. Photons do not have a rest-mass; however, photons are never at rest (they move at the speed of light) and acquire a momentum nonetheless which is given by:
p = h/λ,
where p is momentum, h is the Planck constant, λ is wavelength, and c is the speed of light in vacuum. The energy of a single photon is given by:
E_p = hf = hc/λ.
The radiation pressure again can be seen as the transfer of each photon's momentum to the opaque surface, plus the momentum due to a (possible) recoil photon for a (partially) reflecting surface. Since an incident wave of irradiance I_f over an area A has a power of I_f A, this implies a flux of I_f/E_p photons per second per unit area striking the surface. Combining this with the above expression for the momentum of a single photon results in the same relationships between irradiance and radiation pressure described above using classical electromagnetics. And again, reflected or otherwise emitted photons will contribute to the net radiation pressure identically.
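As a small illustrative calculation (not part of the article), the sketch below evaluates the photon relations above — p = h/λ, E_p = hc/λ and the flux I_f/E_p — for a representative visible wavelength and the solar-constant irradiance; the resulting pressure reproduces I_f/c for an absorbing surface.

```python
# Physical constants (rounded CODATA values)
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light in vacuum, m/s

wavelength = 500e-9            # a representative visible wavelength, m
E_photon = h * c / wavelength  # energy per photon, J
p_photon = h / wavelength      # momentum per photon, kg*m/s

I_f = 1361.0                   # solar constant, W/m^2
flux = I_f / E_photon          # photons striking the surface per second per m^2
pressure = flux * p_photon     # radiation pressure on an absorbing surface, Pa

print(f"photon momentum: {p_photon:.3e} kg m/s")     # ~1.3e-27
print(f"photon flux:     {flux:.3e} per s per m^2")  # ~3.4e21
print(f"pressure:        {pressure:.3e} Pa")         # ~4.5e-6, i.e. I_f/c
```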
Compression in a uniform radiation field
In general, the pressure of electromagnetic waves can be obtained from the vanishing of the trace of the electromagnetic stress tensor: since this trace equals 3P − u, we get
P = u/3,
where u is the radiation energy per unit volume.
This can also be shown in the specific case of the pressure exerted on surfaces of a body in thermal equilibrium with its surroundings, at a temperature T: the body will be surrounded by a uniform radiation field described by the Planck black-body radiation law and will experience a compressive pressure due to that impinging radiation, its reflection, and its own black-body emission. From that it can be shown that the resulting pressure is equal to one third of the total radiant energy per unit volume in the surrounding space.
By using the Stefan–Boltzmann law, this can be expressed as
P = (4σ/(3c)) T⁴,
where σ is the Stefan–Boltzmann constant.
Solar radiation pressure
Solar radiation pressure is due to the Sun's radiation at closer distances, thus especially within the Solar System. While it acts on all objects, its net effect is generally greater on smaller bodies, since they have a larger ratio of surface area to mass. All spacecraft experience such a pressure, except when they are behind the shadow of a larger orbiting body.
Solar radiation pressure on objects near the Earth may be calculated using the Sun's irradiance at 1 AU, known as the solar constant, or GSC, whose value is set at 1361 W/m2 as of 2011.
All stars have a spectral energy distribution that depends on their surface temperature. The distribution is approximately that of black-body radiation. This distribution must be taken into account when calculating the radiation pressure or identifying reflector materials for optimizing a solar sail, for instance.
Solar pressure can briefly escalate, for minutes or hours at a time, during solar flares and coronal mass ejections, but the effects remain essentially immeasurable in relation to Earth's orbit. These pressures do persist over eons, however, and cumulatively they have produced a measurable movement of the Earth–Moon system's orbit.
Pressures of absorption and reflection
Solar radiation pressure at the Earth's distance from the Sun may be calculated by dividing the solar constant GSC (above) by the speed of light c. For an absorbing sheet facing the Sun, this is simply:
P = GSC/c ≈ 4.5 μPa.
This result is in pascals, equivalent to N/m2 (newtons per square meter). For a sheet at an angle α to the Sun, the effective area A of the sheet is reduced by a geometrical factor, resulting in a force in the direction of the sunlight of:
F = (GSC/c) A cos α.
To find the component of this force normal to the surface, another cosine factor must be applied, resulting in a pressure P on the surface of:
P = (GSC/c) cos²α.
Note, however, that in order to account for the net effect of solar radiation on a spacecraft for instance, one would need to consider the total force (in the direction away from the Sun) given by the preceding equation, rather than just the component normal to the surface that we identify as "pressure".
The solar constant is defined for the Sun's radiation at the distance to the Earth, also known as one astronomical unit (au). Consequently, at a distance of R astronomical units (R thus being dimensionless), applying the inverse-square law, we would find:
P = (GSC/(c R²)) cos²α.
Finally, considering not an absorbing but a perfectly reflecting surface, the pressure is doubled due to the reflected wave, resulting in:
P = 2 (GSC/(c R²)) cos²α.
Note that unlike the case of an absorbing material, the resulting force on a reflecting body is given exactly by this pressure acting normal to the surface, with the tangential forces from the incident and reflecting waves canceling each other. In practice, materials are neither totally reflecting nor totally absorbing, so the resulting force will be a weighted average of the forces calculated using these formulas.
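As a concrete illustration of the formulas above (not from the article), the sketch below evaluates the reflecting-sheet pressure for a hypothetical flat solar sail; the sail area, distance and tilt are arbitrary example values.

```python
import math

G_SC = 1361.0   # solar constant at 1 au, W/m^2
c = 2.998e8     # speed of light, m/s

def normal_force(area_m2, R_au=1.0, alpha_deg=0.0, reflecting=True):
    """Component of the radiation force normal to a flat sheet.

    For a perfect reflector this normal component is the total force;
    for an absorber there is an additional tangential component.
    """
    factor = 2.0 if reflecting else 1.0
    cos_a = math.cos(math.radians(alpha_deg))
    pressure = factor * (G_SC / (c * R_au**2)) * cos_a**2  # Pa
    return pressure * area_m2                              # N

# Example: a 100 m x 100 m perfectly reflecting sail facing the Sun at 1 au
print(f"{normal_force(100.0 * 100.0):.3f} N")  # ~0.091 N
```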
Radiation pressure perturbations
Solar radiation pressure is a source of orbital perturbations. It significantly affects the orbits and trajectories of small bodies including all spacecraft.
Solar radiation pressure affects bodies throughout much of the Solar System. Small bodies are more affected than large ones because of their lower mass relative to their surface area. Spacecraft are affected along with natural bodies (comets, asteroids, dust grains, gas molecules).
The radiation pressure results in forces and torques on the bodies that can change their translational and rotational motions. Translational changes affect the orbits of the bodies. Rotational rates may increase or decrease. Loosely aggregated bodies may break apart under high rotation rates. Dust grains can either leave the Solar System or spiral into the Sun.
A whole body is typically composed of numerous surfaces that have different orientations on the body. The facets may be flat or curved. They will have different areas. They may have optical properties differing from those of other facets.
At any particular time, some facets are exposed to the Sun, and some are in shadow. Each surface exposed to the Sun is reflecting, absorbing, and emitting radiation. Facets in shadow are emitting radiation. The summation of pressures across all of the facets defines the net force and torque on the body. These can be calculated using the equations in the preceding sections.
The Yarkovsky effect affects the translation of a small body. It results from a face leaving solar exposure being at a higher temperature than a face approaching solar exposure. The radiation emitted from the warmer face is more intense than that of the opposite face, resulting in a net force on the body that affects its motion.
The YORP effect is a collection of effects expanding upon the earlier concept of the Yarkovsky effect, but of a similar nature. It affects the spin properties of bodies.
The Poynting–Robertson effect applies to grain-size particles. From the perspective of a grain of dust circling the Sun, the Sun's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore, the absorption of this radiation leads to a force with a component against the direction of movement. (The angle of aberration is tiny, since the radiation is moving at the speed of light, while the dust grain is moving many orders of magnitude slower than that.) The result is a gradual spiral of dust grains into the Sun. Over long periods of time, this effect cleans out much of the dust in the Solar System.
While rather small in comparison to other forces, the radiation pressure force is inexorable. Over long periods of time, the net effect of the force is substantial. Such feeble pressures can produce marked effects upon minute particles like gas ions and electrons, and are essential in the theory of electron emission from the Sun, of cometary material, and so on.
Because the ratio of surface area to volume (and thus mass) increases with decreasing particle size, dusty (micrometre-size) particles are susceptible to radiation pressure even in the outer Solar System. For example, the evolution of the outer rings of Saturn is significantly influenced by radiation pressure.
As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction", which would oppose the movement of matter. He wrote: "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief."
Solar sails
Solar sailing, an experimental method of spacecraft propulsion, uses radiation pressure from the Sun as a motive force. The idea of interplanetary travel by light was mentioned by Jules Verne in his 1865 novel From the Earth to the Moon.
A sail reflects about 90% of the incident radiation. The 10% that is absorbed is radiated away from both surfaces, with the proportion emitted from the unlit surface depending on the thermal conductivity of the sail. A sail has curvature, surface irregularities, and other minor factors that affect its performance.
The Japan Aerospace Exploration Agency (JAXA) has successfully unfurled a solar sail in space with its IKAROS project, which has already succeeded in propelling its payload.
Cosmic effects of radiation pressure
Radiation pressure has had a major effect on the development of the cosmos, from the birth of the universe to ongoing formation of stars and shaping of clouds of dust and gasses on a wide range of scales.
Early universe
The photon epoch is a phase when the energy of the universe was dominated by photons, between 10 seconds and 380,000 years after the Big Bang.
Galaxy formation and evolution
The process of galaxy formation and evolution began early in the history of the cosmos. Observations of the early universe strongly suggest that objects grew from bottom-up (i.e., smaller objects merging to form larger ones). As stars are thereby formed and become sources of electromagnetic radiation, radiation pressure from the stars becomes a factor in the dynamics of remaining circumstellar material.
Clouds of dust and gases
The gravitational compression of clouds of dust and gases is strongly influenced by radiation pressure, especially when the condensations lead to star births. The larger young stars forming within the compressed clouds emit intense levels of radiation that shift the clouds, causing either dispersion or condensations in nearby regions, which influences birth rates in those nearby regions.
Clusters of stars
Stars predominantly form in regions of large clouds of dust and gases, giving rise to star clusters. Radiation pressure from the member stars eventually disperses the clouds, which can have a profound effect on the evolution of the cluster.
Many open clusters are inherently unstable, with a small enough mass that the escape velocity of the system is lower than the average velocity of the constituent stars. These clusters will rapidly disperse within a few million years. In many cases, the stripping away of the gas from which the cluster formed by the radiation pressure of the hot young stars reduces the cluster mass enough to allow rapid dispersal.
Star formation
Star formation is the process by which dense regions within molecular clouds in interstellar space collapse to form stars. As a branch of astronomy, star formation includes the study of the interstellar medium and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function.
Stellar planetary systems
Planetary systems are generally believed to form as part of the same process that results in star formation. A protoplanetary disk forms by gravitational collapse of a molecular cloud, called a solar nebula, and then evolves into a planetary system by collisions and gravitational capture. Radiation pressure can clear a region in the immediate vicinity of the star. As the formation process continues, radiation pressure continues to play a role in affecting the distribution of matter. In particular, dust and grains can spiral into the star or escape the stellar system under the action of radiation pressure.
Stellar interiors
In stellar interiors the temperatures are very high. Stellar models predict a temperature of 15 MK in the center of the Sun, and at the cores of supergiant stars the temperature may exceed 1 GK. As the radiation pressure scales as the fourth power of the temperature, it becomes important at these high temperatures. In the Sun, radiation pressure is still quite small when compared to the gas pressure. In the heaviest non-degenerate stars, radiation pressure is the dominant pressure component.
Comets
Solar radiation pressure strongly affects comet tails. Solar heating causes gases to be released from the comet nucleus, which also carry away dust grains. Radiation pressure and solar wind then drive the dust and gases away from the Sun's direction. The gases form a generally straight tail, while slower moving dust particles create a broader, curving tail.
Laser applications of radiation pressure
Optical tweezers
Lasers can be used as a source of monochromatic light with wavelength λ. With a set of lenses, one can focus the laser beam to a diffraction-limited spot whose diameter d is on the order of the wavelength.
The radiation pressure of a P = 30 mW laser with λ = 1064 nm can therefore be computed as follows.
Area: A = πd²/4, the area of the focal spot.
force: F = P/c = 30 mW / c ≈ 1.0 × 10⁻¹⁰ N = 100 pN (for a fully absorbed beam).
pressure: p = F/A; for a spot roughly 1 μm across this is on the order of 100 Pa.
This is used to trap or levitate particles in optical tweezers.
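A short numerical sketch of the estimate above follows; the 1 μm focal-spot diameter is an illustrative assumption, not a value from the article.

```python
import math

c = 2.998e8            # speed of light, m/s
power = 30e-3          # laser power, W
spot_diameter = 1e-6   # assumed focal-spot diameter, m (illustrative)

force = power / c                          # ~1e-10 N = 100 pN for a fully absorbed beam
area = math.pi * (spot_diameter / 2) ** 2  # focal-spot area, m^2
pressure = force / area                    # Pa

print(f"force:    {force:.2e} N")      # ~1.0e-10 N
print(f"pressure: {pressure:.2e} Pa")  # ~1.3e+02 Pa
```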
Light–matter interactions
The reflection of a laser pulse from the surface of an elastic solid can give rise to various types of elastic waves that propagate inside the solid or liquid. In other words, the light can excite and/or amplify motion of, and in, materials. This is the subject of study in the field of optomechanics. The weakest waves are generally those that are generated by the radiation pressure acting during the reflection of the light. Such light-pressure-induced elastic waves have, for example, been observed inside an ultrahigh-reflectivity dielectric mirror. These waves are the most basic fingerprint of a light–solid-matter interaction on the macroscopic scale. In the field of cavity optomechanics, light is trapped and resonantly enhanced in optical cavities, for example between mirrors. This serves the purpose of greatly enhancing the power of the light, and the radiation pressure it can exert on objects and materials. Optical control (that is, manipulation of the motion) of a plethora of objects has been realized: from kilometers-long beams (such as in the LIGO interferometer) to clouds of atoms, and from micro-engineered trampolines to superfluids.
In contrast to exciting or amplifying motion, light can also damp the motion of objects. Laser cooling is a method of cooling materials very close to absolute zero by converting some of the material's motional energy into light. Kinetic energy and thermal energy of the material are synonyms here, because they represent the energy associated with Brownian motion of the material. Atoms traveling towards a laser light source perceive light that is Doppler-shifted toward the absorption frequency of the target element. The radiation pressure on the atom slows movement in a particular direction until the Doppler effect moves out of the frequency range of the element, causing an overall cooling effect.
Another active research area of laser–matter interaction is the radiation pressure acceleration of ions or protons from thin-foil targets. High-energy ion beams can be generated for medical applications (for example in ion beam therapy) by the radiation pressure of short laser pulses on ultra-thin foils.
See also
Absorption (electromagnetic radiation)
Cavity optomechanics
Laser cooling
LIGO
Optical tweezers
Photon
Poynting vector
Poynting's theorem
Poynting–Robertson effect
Quantum optics
Solar constant
Solar sail
Sunlight
Wave–particle duality
Yarkovsky effect
Yarkovsky–O'Keefe–Radzievskii–Paddack effect
References
Further reading
Demir, Dilek, "A table-top demonstration of radiation pressure", 2011, Diplomathesis, E-Theses univie
Celestial mechanics
Radiation effects
Radiation | Radiation pressure | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,747 | [
"Transport phenomena",
"Physical phenomena",
"Radiation effects",
"Classical mechanics",
"Astrophysics",
"Materials science",
"Waves",
"Radiation",
"Condensed matter physics",
"Celestial mechanics"
] |
43,710 | https://en.wikipedia.org/wiki/Silicon%20dioxide | Silicon dioxide, also known as silica, is an oxide of silicon with the chemical formula , commonly found in nature as quartz. In many parts of the world, silica is the major constituent of sand. Silica is one of the most complex and abundant families of materials, existing as a compound of several minerals and as a synthetic product. Examples include fused quartz, fumed silica, opal, and aerogels. It is used in structural materials, microelectronics, and as components in the food and pharmaceutical industries. All forms are white or colorless, although impure samples can be colored.
Silicon dioxide is a common fundamental constituent of glass.
Structure
In the majority of silicon dioxides, the silicon atom shows tetrahedral coordination, with four oxygen atoms surrounding a central Si atom (see 3-D Unit Cell). Thus, SiO2 forms 3-dimensional network solids in which each silicon atom is covalently bonded in a tetrahedral manner to 4 oxygen atoms. In contrast, CO2 is a linear molecule. The starkly different structures of the dioxides of carbon and silicon are a manifestation of the double bond rule.
Based on the crystal structural differences, silicon dioxide can be divided into two categories: crystalline and non-crystalline (amorphous). In crystalline form, this substance can be found naturally occurring as quartz, tridymite (high-temperature form), cristobalite (high-temperature form), stishovite (high-pressure form), and coesite (high-pressure form). On the other hand, amorphous silica can be found in nature as opal and diatomaceous earth. Quartz glass is a form of intermediate state between these structures.
All of these distinct crystalline forms always have the same local structure around Si and O. In α-quartz the Si–O bond length is 161 pm, whereas in α-tridymite it is in the range 154–171 pm. The Si–O–Si angle also varies between a low value of 140° in α-tridymite, up to 180° in β-tridymite. In α-quartz, the Si–O–Si angle is 144°.
Polymorphism
Alpha quartz is the most stable form of solid SiO2 at room temperature. The high-temperature minerals, cristobalite and tridymite, have both lower densities and indices of refraction than quartz. The transformation from α-quartz to β-quartz takes place abruptly at 573 °C. Since the transformation is accompanied by a significant change in volume, it can easily induce fracturing of ceramics or rocks passing through this temperature limit. The high-pressure minerals, seifertite, stishovite, and coesite, though, have higher densities and indices of refraction than quartz. Stishovite has a rutile-like structure where silicon is 6-coordinate. The density of stishovite is 4.287 g/cm3, which compares to α-quartz, the densest of the low-pressure forms, which has a density of 2.648 g/cm3. The difference in density can be ascribed to the increase in coordination as the six shortest Si–O bond lengths in stishovite (four Si–O bond lengths of 176 pm and two others of 181 pm) are greater than the Si–O bond length (161 pm) in α-quartz.
The change in the coordination increases the ionicity of the Si–O bond.
Faujasite silica, another polymorph, is obtained by the dealumination of a low-sodium, ultra-stable Y zeolite with combined acid and thermal treatment. The resulting product contains over 99% silica, and has high crystallinity and specific surface area (over 800 m2/g). Faujasite-silica has very high thermal and acid stability. For example, it maintains a high degree of long-range molecular order or crystallinity even after boiling in concentrated hydrochloric acid.
Molten SiO2
Molten silica exhibits several peculiar physical characteristics that are similar to those observed in liquid water: negative thermal expansion, a density maximum at temperatures near 5000 °C, and a heat capacity minimum. Its density decreases from 2.08 g/cm3 at 1950 °C to 2.03 g/cm3 at 2200 °C.
Molecular SiO2
Molecular SiO2 has a linear structure like that of CO2. It has been produced by combining silicon monoxide (SiO) with oxygen in an argon matrix.
The dimeric silicon dioxide, (SiO2)2 has been obtained by reacting O2 with matrix isolated dimeric silicon monoxide, (Si2O2). In dimeric silicon dioxide there are two oxygen atoms bridging between the silicon atoms with an Si–O–Si angle of 94° and bond length of 164.6 pm and the terminal Si–O bond length is 150.2 pm. The Si–O bond length is 148.3 pm, which compares with the length of 161 pm in α-quartz. The bond energy is estimated at 621.7 kJ/mol.
Natural occurrence
Geology
is most commonly encountered in nature as quartz, which comprises more than 10% by mass of the Earth's crust. Quartz is the only polymorph of silica stable at the Earth's surface. Metastable occurrences of the high-pressure forms coesite and stishovite have been found around impact structures and associated with eclogites formed during ultra-high-pressure metamorphism. The high-temperature forms of tridymite and cristobalite are known from silica-rich volcanic rocks. In many parts of the world, silica is the major constituent of sand.
Biology
Even though it is poorly soluble, silica occurs in many plants such as rice. Plant materials with high silica phytolith content appear to be of importance to grazing animals, from chewing insects to ungulates. Silica accelerates tooth wear, and high levels of silica in plants frequently eaten by herbivores may have developed as a defense mechanism against predation.
Silica is also the primary component of rice husk ash, which is used, for example, in filtration and as supplementary cementitious material (SCM) in cement and concrete manufacturing.
Silicification in and by cells has been common in the biological world and it occurs in bacteria, protists, plants, and animals (invertebrates and vertebrates).
Prominent examples include:
Tests or frustules (i.e. shells) of diatoms, Radiolaria, and testate amoebae.
Silica phytoliths in the cells of many plants including Equisetaceae, many grasses, and a wide range of dicotyledons.
The spicules forming the skeleton of many sponges.
Uses
Structural use
About 95% of the commercial use of silicon dioxide (sand) is in the construction industry, e.g. in the production of concrete (Portland cement concrete).
Certain deposits of silica sand, with desirable particle size and shape and desirable clay and other mineral content, were important for sand casting of metallic products. The high melting point of silica enables it to be used in such applications such as iron casting; modern sand casting sometimes uses other minerals for other reasons.
Crystalline silica is used in hydraulic fracturing of formations which contain tight oil and shale gas.
Precursor to glass and silicon
Silica is the primary ingredient in the production of most glass. As other minerals are melted with silica, the principle of freezing point depression lowers the melting point of the mixture and increases fluidity. The glass transition temperature of pure SiO2 is about 1475 K. When molten silicon dioxide SiO2 is rapidly cooled, it does not crystallize, but solidifies as a glass. Because of this, most ceramic glazes have silica as the main ingredient.
The structural geometry of silicon and oxygen in glass is similar to that in quartz and most other crystalline forms of silicon and oxygen, with silicon surrounded by regular tetrahedra of oxygen centres. The difference between the glass and crystalline forms arises from the connectivity of the tetrahedral units: although there is no long-range periodicity in the glassy network, ordering remains at length scales well beyond the Si–O bond length. One example of this ordering is the preference to form rings of 6 tetrahedra.
The majority of optical fibers for telecommunications are also made from silica. It is a primary raw material for many ceramics such as earthenware, stoneware, and porcelain.
Silicon dioxide is used to produce elemental silicon. The process involves carbothermic reduction in an electric arc furnace:
SiO2 + 2 C -> Si + 2 CO
Fumed silica
Fumed silica, also known as pyrogenic silica, is prepared by burning SiCl4 in an oxygen-rich hydrogen flame to produce a "smoke" of SiO2.
SiCl4 + 2 H2 + O2 -> SiO2 + 4 HCl
It can also be produced by vaporizing quartz sand in a 3000 °C electric arc. Both processes result in microscopic droplets of amorphous silica fused into branched, chainlike, three-dimensional secondary particles which then agglomerate into tertiary particles, a white powder with extremely low bulk density (0.03-0.15 g/cm3) and thus high surface area. The particles act as a thixotropic thickening agent, or as an anti-caking agent, and can be treated to make them hydrophilic or hydrophobic for either water or organic liquid applications.
Silica fume is an ultrafine powder collected as a by-product of the silicon and ferrosilicon alloy production. It consists of amorphous (non-crystalline) spherical particles with an average particle diameter of 150 nm, without the branching of the pyrogenic product. The main use is as pozzolanic material for high performance concrete. Fumed silica nanoparticles can be successfully used as an anti-aging agent in asphalt binders.
Food, cosmetic, and pharmaceutical applications
Silica, either colloidal, precipitated, or pyrogenic fumed, is a common additive in food production. It is used primarily as a flow or anti-caking agent in powdered foods such as spices and non-dairy coffee creamer, or powders to be formed into pharmaceutical tablets. It can adsorb water in hygroscopic applications. Colloidal silica is used as a fining agent for wine, beer, and juice, with the E number reference E551.
In cosmetics, silica is useful for its light-diffusing properties and natural absorbency.
Diatomaceous earth, a mined product, has been used in food and cosmetics for centuries. It consists of the silica shells of microscopic diatoms; in a less processed form it was sold as "tooth powder". Manufactured or mined hydrated silica is used as the hard abrasive in toothpaste.
Semiconductors
Silicon dioxide is widely used in semiconductor technology:
for the primary passivation (directly on the semiconductor surface),
as the original gate dielectric in MOS technology; today, with scaling (the gate length of the MOS transistor) having progressed below 10 nm, silicon dioxide has been replaced as the gate dielectric by materials such as hafnium oxide, which have a higher dielectric constant,
as a dielectric layer between metal (wiring) layers (sometimes up to 8–10) connecting elements and
as a second passivation layer (protecting semiconductor elements and the metallization layers), today typically layered with other dielectrics such as silicon nitride.
Because silicon dioxide is a native oxide of silicon it is more widely used compared to other semiconductors like gallium arsenide or indium phosphide.
Silicon dioxide can be grown on a silicon semiconductor surface. Silicon oxide layers can protect silicon surfaces during diffusion processes, and can be used for diffusion masking.
Surface passivation is the process by which a semiconductor surface is rendered inert, so that it does not change semiconductor properties as a result of interaction with air or other materials in contact with the surface or edge of the crystal. The formation of a thermally grown silicon dioxide layer greatly reduces the concentration of electronic states at the silicon surface. SiO2 films preserve the electrical characteristics of p–n junctions and prevent these characteristics from deteriorating in a gaseous ambient environment. Silicon oxide layers can be used to electrically stabilize silicon surfaces. The surface passivation process is an important method of semiconductor device fabrication that involves coating a silicon wafer with an insulating layer of silicon oxide so that electricity can reliably penetrate to the conducting silicon below. Growing a layer of silicon dioxide on top of a silicon wafer enables it to overcome the surface states that otherwise prevent electricity from reaching the semiconducting layer.
The process of silicon surface passivation by thermal oxidation (silicon dioxide) is critical to the semiconductor industry. It is commonly used to manufacture metal–oxide–semiconductor field-effect transistors (MOSFETs) and silicon integrated circuit chips (with the planar process).
Other
Hydrophobic silica is used as a defoamer component.
In its capacity as a refractory, it is useful in fiber form as a high-temperature thermal protection fabric.
Silica is used in the extraction of DNA and RNA due to its ability to bind to the nucleic acids under the presence of chaotropes.
Silica aerogel was used in the Stardust spacecraft to collect extraterrestrial particles.
Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fibre for fibreglass.
Production
Silicon dioxide is mostly obtained by mining, including sand mining and purification of quartz.
Quartz is suitable for many purposes, while chemical processing is required to make a purer or otherwise more suitable (e.g. more reactive or fine-grained) product.
Precipitated silica
Precipitated silica or amorphous silica is produced by the acidification of solutions of sodium silicate. The gelatinous precipitate, or silica gel, is first washed and then dehydrated to produce colorless microporous silica. The idealized equation involving a trisilicate and sulfuric acid is:
Na2Si3O7 + H2SO4 -> 3 SiO2 + Na2SO4 + H2O
Approximately one billion kilograms per year (1999) of silica were produced in this manner, mainly for use in polymer composites – tires and shoe soles.
On microchips
Thin films of silica grow spontaneously on silicon wafers via thermal oxidation, producing a very shallow layer of about 1 nm or 10 Å of so-called native oxide.
Higher temperatures and alternative environments are used to grow well-controlled layers of silicon dioxide on silicon, for example at temperatures between 600 and 1200 °C, using so-called dry oxidation with O2:
Si + O2 -> SiO2
or wet oxidation with H2O.
Si + 2 H2O -> SiO2 + 2 H2
The native oxide layer is beneficial in microelectronics, where it acts as electric insulator with high chemical stability. It can protect the silicon, store charge, block current, and even act as a controlled pathway to limit current flow.
Laboratory or special methods
From organosilicon compounds
Many routes to silicon dioxide start with an organosilicon compound, e.g., HMDSO, TEOS. Synthesis of silica is illustrated below using tetraethyl orthosilicate (TEOS). Simply heating TEOS at 680–730 °C results in the oxide:
Si(OC2H5)4 -> SiO2 + 2 O(C2H5)2
Similarly TEOS combusts around 400 °C:
Si(OC2H5)4 + 12 O2 -> SiO2 + 10 H2O + 8 CO2
TEOS undergoes hydrolysis via the so-called sol-gel process. The course of the reaction and nature of the product are affected by catalysts, but the idealized equation is:
Si(OC2H5)4 + 2 H2O -> SiO2 + 4 HOCH2CH3
Other methods
Being highly stable, silicon dioxide arises from many methods. Conceptually simple, but of little practical value, combustion of silane gives silicon dioxide. This reaction is analogous to the combustion of methane:
SiH4 + 2 O2 -> SiO2 + 2 H2O
However, chemical vapor deposition of silicon dioxide onto a crystal surface from silane has been carried out using nitrogen as a carrier gas at 200–500 °C.
Chemical reactions
Silicon dioxide is a relatively inert material (hence its widespread occurrence as a mineral). Silica is often used to make inert containers for chemical reactions. At high temperatures, it is converted to silicon by reduction with carbon.
Fluorine reacts with silicon dioxide to form SiF4 and O2 whereas the other halogen gases (Cl2, Br2, I2) are unreactive.
Most forms of silicon dioxide are attacked ("etched") by hydrofluoric acid (HF) to produce hexafluorosilicic acid:
SiO2 + 6 HF -> H2SiF6 + 2 H2O
Stishovite does not react with HF to any significant degree.
HF is used to remove or pattern silicon dioxide in the semiconductor industry.
Silicon dioxide acts as a Lux–Flood acid, being able to react with bases under certain conditions. As it does not contain any hydrogen, non-hydrated silica cannot directly act as a Brønsted–Lowry acid. While silicon dioxide is only poorly soluble in water at low or neutral pH (typically, 2 × 10−4 M for quartz up to 10−3 M for cryptocrystalline chalcedony), strong bases react with glass and easily dissolve it. Therefore, strong bases have to be stored in plastic bottles to avoid jamming the bottle cap, to preserve the integrity of the container, and to avoid undesirable contamination by silicate anions.
Silicon dioxide dissolves in hot concentrated alkali or fused hydroxide, as described in this idealized equation:
SiO2 + 2 NaOH -> Na2SiO3 + H2O
Silicon dioxide will neutralise basic metal oxides (e.g. sodium oxide, potassium oxide, lead(II) oxide, zinc oxide, or mixtures of oxides), forming silicates and glasses as the Si–O–Si bonds in silica are broken successively. As an example, the reaction of sodium oxide and SiO2 can produce sodium orthosilicate, sodium silicate, and glasses, depending on the proportions of reactants:
2 Na2O + SiO2 -> Na4SiO4;
Na2O + SiO2 -> Na2SiO3;
Na2O + SiO2 -> glass.
Examples of such glasses have commercial significance, e.g. soda–lime glass, borosilicate glass, lead glass. In these glasses, silica is termed the network former or lattice former. The reaction is also used in blast furnaces to remove sand impurities in the ore by neutralisation with calcium oxide, forming calcium silicate slag.
Silicon dioxide reacts in heated reflux under dinitrogen with ethylene glycol and an alkali metal base to produce highly reactive, pentacoordinate silicates, which provide access to a wide variety of new silicon compounds. The silicates are essentially insoluble in all polar solvents except methanol.
Silicon dioxide reacts with elemental silicon at high temperatures to produce SiO:
SiO2 + Si -> 2 SiO
Water solubility
The solubility of silicon dioxide in water strongly depends on its crystalline form and is three to four times higher for amorphous silica than quartz; as a function of temperature, it peaks around . This property is used to grow single crystals of quartz in a hydrothermal process where natural quartz is dissolved in superheated water in a pressure vessel that is cooler at the top. Crystals of 0.5–1 kg can be grown for 1–2 months. These crystals are a source of very pure quartz for use in electronic applications. Above the critical temperature of water and a pressure of or higher, water is a supercritical fluid and solubility is once again higher than at lower temperatures.
Health effects
Silica ingested orally is essentially nontoxic, with an LD50 of 5000 mg/kg (5 g/kg). A 2008 study following subjects for 15 years found that higher levels of silica in water appeared to decrease the risk of dementia: an increase of 10 mg/day of silica in drinking water was associated with an 11% reduced risk of dementia.
Inhaling finely divided crystalline silica dust can lead to silicosis, bronchitis, or lung cancer, as the dust becomes lodged in the lungs and continuously irritates the tissue, reducing lung capacities. When fine silica particles are inhaled in large enough quantities (such as through occupational exposure), it increases the risk of systemic autoimmune diseases such as lupus and rheumatoid arthritis compared to expected rates in the general population.
Occupational hazard
Silica is an occupational hazard for people who do sandblasting or work with powdered crystalline silica products. Amorphous silica, such as fumed silica, may cause irreversible lung damage in some cases but is not associated with the development of silicosis. Children, asthmatics of any age, those with allergies, and the elderly (all of whom have reduced lung capacity) can be affected in less time.
Crystalline silica is an occupational hazard for those working with stone countertops because the process of cutting and installing the countertops creates large amounts of airborne silica. Crystalline silica used in hydraulic fracturing presents a health hazard to workers.
Pathophysiology
In the body, crystalline silica particles do not dissolve over clinically relevant periods. Silica crystals inside the lungs can activate the NLRP3 inflammasome inside macrophages and dendritic cells and thereby result in production of interleukin-1β, a highly pro-inflammatory cytokine of the immune system.
Regulation
Regulations restricting silica exposure 'with respect to the silicosis hazard' specify that they are concerned only with silica, which is both crystalline and dust-forming.
In 2013, the U.S. Occupational Safety and Health Administration reduced the exposure limit to 50 μg/m3 of air. Prior to 2013, it had allowed 100 μg/m3 and in construction workers even 250 μg/m3.
In 2013, OSHA also required the "green completion" of fracked wells to reduce exposure to crystalline silica, in addition to restricting the exposure limit.
Crystalline forms
SiO2, more so than almost any material, exists in many crystalline forms. These forms are called polymorphs.
Safety
Inhaling finely divided crystalline silica can lead to severe inflammation of the lung tissue, silicosis, bronchitis, lung cancer, and systemic autoimmune diseases, such as lupus and rheumatoid arthritis. Inhalation of amorphous silicon dioxide, in high doses, leads to non-permanent short-term inflammation, where all effects heal.
See also
Mesoporous silica
Orthosilicic acid
Silicon carbide
References
External links
Tridymite,
Quartz,
Cristobalite,
Amorphous, NIOSH Pocket Guide to Chemical Hazards
Crystalline, as respirable dust, NIOSH Pocket Guide to Chemical Hazards
Formation of silicon oxide layers in the semiconductor industry. LPCVD and PECVD method in comparison. Stress prevention.
Quartz (SiO2) piezoelectric properties
Silica (SiO2) and water
Epidemiological evidence on the carcinogenicity of silica: factors in scientific judgement by C. Soutar and others. Institute of Occupational Medicine Research Report TM/97/09
Scientific opinion on the health effects of airborne silica by A Pilkington and others. Institute of Occupational Medicine Research Report TM/95/08
The toxic effects of silica by A. Seaton and others. Institute of Occupational Medicine Research Report TM/87/13
Structure of precipitated silica
Ceramic materials
Refractory materials
IARC Group 1 carcinogens
Excipients
E-number additives
Oxides
Occupational safety and health | Silicon dioxide | [
"Physics",
"Chemistry",
"Engineering"
] | 5,136 | [
"Refractory materials",
"Oxides",
"Salts",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
43,711 | https://en.wikipedia.org/wiki/Nichols%20radiometer | A Nichols radiometer was the apparatus used by Ernest Fox Nichols and Gordon Ferrie Hull in 1901 for the measurement of radiation pressure.
It consisted of a pair of small silvered glass mirrors suspended in the manner of a torsion balance by a fine quartz fibre within an enclosure in which the air pressure could be regulated. The torsion head to which the fiber was attached could be turned from the outside using a magnet. A beam of light was directed first on one mirror and then on the other, and the opposite deflections observed with mirror and scale. By turning the mirror system around to receive the light on the unsilvered side, the influence of the air in the enclosure could be ascertained. This influence was found to be of almost negligible value at an air pressure of about . The radiant energy of the incident beam was deduced from its heating effect upon a small blackened silver disk, which was found to be more reliable than the bolometer when it was first used. With this apparatus, the experimenters were able to obtain an agreement between observed and computed radiation pressures within about 0.6%.
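For a sense of the magnitudes involved (an illustrative figure, not one reported from the experiment): a beam of irradiance I striking a perfectly reflecting mirror at normal incidence exerts a pressure p = 2I/c, so an irradiance of 1 kW/m2 — roughly that of bright sunlight — corresponds to only about 7 × 10^-6 Pa. This is why a delicate torsion balance and the careful exclusion of residual-gas effects were required.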
The original apparatus is at the Smithsonian Institution.
This apparatus is sometimes confused with the Crookes radiometer of 1873.
The original papers, with their historical context, have been re-printed in a chapter of the book Quantum Photonics: Pioneering Advances and Emerging Applications.
See also
Solar sail
References
E.F. Nichols and G.F. Hull, The Pressure due to Radiation, The Astrophysical Journal, Vol.17 No.5, p. 315-351 (1903)
Measuring the Pressure of Light: Pure Science at Dartmouth – Dartmouth Undergraduate Journal of Science
Electromagnetic radiation meters | Nichols radiometer | [
"Physics",
"Technology",
"Engineering"
] | 343 | [
"Measuring instruments",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Electromagnetic radiation meters"
] |
43,717 | https://en.wikipedia.org/wiki/Prisoner%27s%20dilemma | The prisoner's dilemma is a game theory thought experiment involving two rational agents, each of whom can either cooperate for mutual benefit or betray their partner ("defect") for individual gain. The dilemma arises from the fact that while defecting is rational for each agent, cooperation yields a higher payoff for each. The puzzle was designed by Merrill Flood and Melvin Dresher in 1950 during their work at the RAND Corporation. They invited economist Armen Alchian and mathematician John Williams to play a hundred rounds of the game, observing that Alchian and Williams often chose to cooperate. When asked about the results, John Nash remarked that rational behavior in the iterated version of the game can differ from that in a single-round version. This insight anticipated a key result in game theory: cooperation can emerge in repeated interactions, even in situations where it is not rational in a one-off interaction.
Albert W. Tucker later named the game the "prisoner's dilemma" by framing the rewards in terms of prison sentences. The prisoner's dilemma models many real-world situations involving strategic behavior. In casual usage, the label "prisoner's dilemma" is applied to any situation in which two entities can gain important benefits by cooperating or suffer by failing to do so, but find it difficult or expensive to coordinate their choices.
Premise
William Poundstone described this "typical contemporary version" of the game in his 1993 book Prisoner's Dilemma:
Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail. The prisoners are given a little time to think this over, but in no case may either learn what the other has decided until he has irrevocably made his decision. Each is informed that the other prisoner is being offered the very same deal. Each prisoner is concerned only with his own welfare—with minimizing his own prison sentence.
This leads to three different possible outcomes for prisoners A and B:
If A and B both remain silent, they will each serve one year in prison.
If one testifies against the other but the other doesn’t, the one testifying will be set free while the other serves three years in prison.
If A and B testify against each other, they will each serve two years.
Strategy for the prisoner's dilemma
Two prisoners are separated into individual rooms and cannot communicate with each other. It is assumed that both prisoners understand the nature of the game, have no loyalty to each other, and will have no opportunity for retribution or reward outside of the game. The normal game is shown below (each cell lists the years in prison for prisoner A and prisoner B, in that order):

                        B stays silent     B testifies
    A stays silent      1, 1               3, 0
    A testifies         0, 3               2, 2
Regardless of what the other decides, each prisoner gets a higher reward by betraying the other ("defecting"). The reasoning involves analyzing both players' best responses: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So, either way, A should defect since defecting is A's best response regardless of B's strategy. Parallel reasoning will show that B should defect.
Defection always results in a better payoff than cooperation, so it is a strictly dominant strategy for both players. Mutual defection is the only strong Nash equilibrium in the game. Since the collectively ideal result of mutual cooperation is irrational from a self-interested standpoint, this Nash equilibrium is not Pareto efficient.
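This best-response reasoning can be checked mechanically. The following minimal Python sketch (illustrative only) encodes the prison sentences from the premise above and confirms that testifying ("defecting") is each prisoner's best response to either choice by the other, and that mutual defection is the only Nash equilibrium:

    from itertools import product

    # Years in prison for (A, B), indexed by their choices:
    # "C" = stay silent (cooperate with the partner), "D" = testify (defect).
    years = {("C", "C"): (1, 1), ("C", "D"): (3, 0),
             ("D", "C"): (0, 3), ("D", "D"): (2, 2)}

    def best_response_A(b):
        return min("CD", key=lambda a: years[(a, b)][0])   # A prefers fewer years

    def best_response_B(a):
        return min("CD", key=lambda b: years[(a, b)][1])   # so does B

    for b in "CD":
        print("If B plays", b, "then A's best response is", best_response_A(b))

    # Nash equilibria: profiles in which both players are best-responding.
    equilibria = [(a, b) for a, b in product("CD", repeat=2)
                  if a == best_response_A(b) and b == best_response_B(a)]
    print("Nash equilibria:", equilibria)   # prints [('D', 'D')]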
Generalized form
The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue and that each player chooses to either "cooperate" or "defect".
If both players cooperate, they both receive the reward R for cooperating. If both players defect, they both receive the punishment payoff P. If Blue defects while Red cooperates, then Blue receives the temptation payoff T, while Red receives the "sucker's" payoff, S. Similarly, if Blue cooperates while Red defects, then Blue receives the sucker's payoff S, while Red receives the temptation payoff T.
This can be expressed in normal form (each cell lists Blue's payoff first, then Red's):

                       Red cooperates     Red defects
    Blue cooperates    R, R               S, T
    Blue defects       T, S               P, P
and to be a prisoner's dilemma game in the strong sense, the following condition must hold for the payoffs: T > R > P > S.
The payoff relationship R > P implies that mutual cooperation is superior to mutual defection, while the payoff relationships T > R and P > S imply that defection is the dominant strategy for both agents.
The iterated prisoner's dilemma
If two players play the prisoner's dilemma more than once in succession, remember their opponent's previous actions, and are allowed to change their strategy accordingly, the game is called the iterated prisoner's dilemma.
In addition to the general form above, the iterative version also requires that 2R > T + S, to prevent alternating cooperation and defection giving a greater reward than mutual cooperation.
The iterated prisoner's dilemma is fundamental to some theories of human cooperation and trust. Assuming that the game effectively models transactions between two people that require trust, cooperative behavior in populations can be modeled by a multi-player iterated version of the game. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma is also called the "peace-war game".
General strategy
If the iterated prisoner's dilemma is played a finite number of times and both players know this, then the dominant strategy and Nash equilibrium is to defect in all rounds. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to later retaliate. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper limit.
For cooperation to emerge between rational players, the number of rounds must be unknown or infinite. In that case, "always defect" may no longer be a dominant strategy. As shown by Robert Aumann in a 1959 paper, rational players repeatedly interacting for indefinitely long games can sustain cooperation. Specifically, a player may be less willing to cooperate if their counterpart did not cooperate many times, which causes disappointment. Conversely, as time elapses, the likelihood of cooperation tends to rise, owing to the establishment of a "tacit agreement" among participating players. In experimental situations, cooperation can occur even when both participants know how many iterations will be played.
According to a 2019 experimental study in the American Economic Review that tested what strategies real-life subjects used in iterated prisoner's dilemma situations with perfect monitoring, the majority of chosen strategies were always to defect, tit-for-tat, and grim trigger. Which strategy the subjects chose depended on the parameters of the game.
Axelrod's tournament and successful strategy conditions
Interest in the iterated prisoner's dilemma was kindled by Robert Axelrod in his 1984 book The Evolution of Cooperation, in which he reports on a tournament that he organized of the N-step prisoner's dilemma (with N fixed) in which participants have to choose their strategy repeatedly and remember their previous encounters. Axelrod invited academic colleagues from around the world to devise computer strategies to compete in an iterated prisoner's dilemma tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth.
Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more altruistic strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behavior from mechanisms that are initially purely selfish, by natural selection.
The winning deterministic strategy was tit for tat, developed and entered into the tournament by Anatol Rapoport. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness": when the opponent defects, on the next move, the player sometimes cooperates anyway, with a small probability (around 1–5%, depending on the lineup of opponents). This allows for occasional recovery from getting trapped in a cycle of defections.
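The effect can be illustrated with a minimal round-robin simulation. The sketch below is a simplified illustration, not a reconstruction of any tournament entry, and it assumes the conventional payoff values T=5, R=3, P=1, S=0:

    # Conventional payoffs (an assumption): T=5, R=3, P=1, S=0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_defect(own, other):
        return "D"

    def always_cooperate(own, other):
        return "C"

    def tit_for_tat(own, other):
        return other[-1] if other else "C"        # cooperate first, then mirror the opponent

    def grim_trigger(own, other):
        return "D" if "D" in other else "C"       # cooperate until the opponent defects once

    def play(strat_a, strat_b, rounds=200):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    strategies = [always_defect, always_cooperate, tit_for_tat, grim_trigger]
    totals = {s.__name__: 0 for s in strategies}
    for s in strategies:                 # every strategy meets every strategy, itself included
        for t in strategies:
            totals[s.__name__] += play(s, t)[0]
    print(totals)

With 200 rounds, tit_for_tat and grim_trigger share the highest total, always_cooperate comes next, and always_defect comes last, even though always_defect never loses an individual pairing — a small-scale illustration of why a successful strategy need not "win" its encounters.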
After analyzing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to succeed:
Nice: The strategy will not be the first to defect (this is sometimes referred to as an "optimistic" algorithm), i.e., it will not "cheat" on its opponent for purely self-interested reasons first. Almost all the top-scoring strategies were nice.
Retaliating: The strategy must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate, a very bad choice that will frequently be exploited by "nasty" strategies.
Forgiving: Successful strategies must be forgiving. Though players will retaliate, they will cooperate again if the opponent does not continue to defect. This can stop long runs of revenge and counter-revenge, maximizing points.
Non-envious: The strategy must not strive to score more than the opponent.
In contrast to the one-time prisoner's dilemma game, the optimal strategy in the iterated prisoner's dilemma depends upon the strategies of likely opponents, and how they will react to defections and cooperation. For example, if a population consists entirely of players who always defect, except for one who follows the tit-for-tat strategy, that person is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy is to defect every time. More generally, given a population with a certain percentage of always-defectors with the rest being tit-for-tat players, the optimal strategy depends on the percentage and number of iterations played.
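To make the dependence on the mix and the match length concrete, a small illustrative calculation (again assuming the conventional payoffs T=5, R=3, P=1, S=0, which this paragraph does not specify) compares expected scores in such a population:

    # Fraction x of the population always defects; the rest play tit-for-tat.
    # Expected total score over an n-round match against a randomly drawn opponent:
    def expected_tft(x, n):
        return x * (0 + 1 * (n - 1)) + (1 - x) * (3 * n)

    def expected_alld(x, n):
        return x * (1 * n) + (1 - x) * (5 + 1 * (n - 1))

    # Tit-for-tat earns more exactly when x < (2n - 4) / (2n - 3):
    for n in (2, 5, 200):
        threshold = (2 * n - 4) / (2 * n - 3)
        print(f"{n} rounds: tit-for-tat is favoured when the defector share is below {threshold:.3f}")

For very short matches always-defect is never worse, while for long matches tit-for-tat earns more unless defectors make up nearly the whole population.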
Other strategies
Deriving the optimal strategy is generally done in two ways:
Bayesian Nash equilibrium: If the statistical distribution of opposing strategies can be determined an optimal counter-strategy can be derived analytically.
Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy). The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce tit-for-tat players, but no analytic proof exists that this will always occur.
In the strategy called win-stay, lose-switch, faced with a failure to cooperate, the player switches strategy the next turn. In certain circumstances, Pavlov beats all other strategies by giving preferential treatment to co-players using a similar strategy.
Although tit-for-tat is considered the most robust basic strategy, a team from Southampton University in England introduced a more successful strategy at the 20th-anniversary iterated prisoner's dilemma competition. It relied on collusion between programs to achieve the highest number of points for a single program. The university submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the competing program's score. As a result, the 2004 Prisoners' Dilemma Tournament results show University of Southampton's strategies in the first three places (and a number of positions towards the bottom), despite having fewer wins and many more losses than the GRIM strategy. The Southampton strategy takes advantage of the fact that multiple entries were allowed in this particular competition and that a team's performance was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of minmaxing).
Because of this new rule, this competition also has little theoretical significance when analyzing single-agent strategies as compared to Axelrod's seminal tournament. But it provided a basis for analyzing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise.
Long before this new-rules tournament was played, Dawkins, in his book The Selfish Gene, pointed out the possibility of such strategies winning if multiple entries were allowed, but remarked that Axelrod would most likely not have allowed them if they had been submitted. It also relies on circumventing the rule that no communication is allowed between players, which the Southampton programs arguably did with their preprogrammed "ten-move dance" to recognize one another, reinforcing how valuable communication can be in shifting the balance of the game.
Even without implicit collusion between software strategies, tit-for-tat is not always the absolute winner of any given tournament; more precisely, its long-run results over a series of tournaments outperform its rivals, but this does not mean it is the most successful in the short term. The same applies to tit-for-tat with forgiveness and other optimal strategies.
This can also be illustrated using the Darwinian ESS simulation. In such a simulation, tit-for-tat will almost always come to dominate, though nasty strategies will drift in and out of the population because a tit-for-tat population is penetrable by non-retaliating nice strategies, which in turn are easy prey for the nasty strategies. Dawkins showed that here, no static mix of strategies forms a stable equilibrium, and the system will always oscillate between bounds.
Stochastic iterated prisoner's dilemma
In a stochastic iterated prisoner's dilemma game, strategies are specified in terms of "cooperation probabilities". In an encounter between player X and player Y, X's strategy is specified by a set of probabilities P of cooperating with Y. P is a function of the outcomes of their previous encounters or some subset thereof. If P is a function of only their most recent n encounters, it is called a "memory-n" strategy. A memory-1 strategy is then specified by four cooperation probabilities: P = {Pcc, Pcd, Pdc, Pdd}, where Pcd is the probability that X will cooperate in the present encounter given that the previous encounter was characterized by X cooperating and Y defecting. If each of the probabilities are either 1 or 0, the strategy is called deterministic. An example of a deterministic strategy is the tit-for-tat strategy written as P = {1, 0, 1, 0}, in which X responds as Y did in the previous encounter. Another is the win-stay, lose switch strategy written as P = {1, 0, 0, 1}. It has been shown that for any memory-n strategy there is a corresponding memory-1 strategy that gives the same statistical results, so that only memory-1 strategies need be considered.
If P = {Pcc, Pcd, Pdc, Pdd} is defined as the above 4-element strategy vector of X and Q = {Qcc, Qcd, Qdc, Qdd} as the 4-element strategy vector of Y (where the indices are from Y's point of view), a transition matrix M may be defined for X whose ij-th entry is the probability that the outcome of a particular encounter between X and Y will be j given that the previous encounter was i, where i and j are one of the four outcome indices: cc, cd, dc, or dd. For example, from X's point of view, the probability that the outcome of the present encounter is cd given that the previous encounter was cd is equal to Pcd(1 − Qdc). Under these definitions, the iterated prisoner's dilemma qualifies as a stochastic process and M is a stochastic matrix, allowing all of the theory of stochastic processes to be applied.
One result of stochastic theory is that there exists a stationary vector v for the matrix M such that v·M = v. Without loss of generality, it may be specified that v is normalized so that the sum of its four components is unity. The ij-th entry in M^n will give the probability that the outcome of an encounter between X and Y will be j given that the encounter n steps previous is i. In the limit as n approaches infinity, M^n will converge to a matrix with fixed values, giving the long-term probabilities of an encounter producing j independent of i. In other words, the rows of M^n will become identical, giving the long-term equilibrium result probabilities of the iterated prisoner's dilemma without the need to explicitly evaluate a large number of interactions. It can be seen that v is a stationary vector for M^n and particularly for the limiting matrix, so that each row of the limiting matrix will be equal to v. Thus, the stationary vector specifies the equilibrium outcome probabilities for X. Defining Sx = {R, S, T, P} and Sy = {R, T, S, P} as the short-term payoff vectors for the {cc,cd,dc,dd} outcomes (from X's point of view), the equilibrium payoffs for X and Y can now be specified as sx = v·Sx and sy = v·Sy, allowing the two strategies P and Q to be compared for their long-term payoffs.
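The construction can be made concrete with a short numerical sketch (illustrative only; the conventional payoff values R=3, S=0, T=5, P=1 are assumed, and noisy variants of the strategies are used so that the stationary vector is unique):

    import numpy as np

    # Conventional payoff values (an assumption; they play the roles of R, S, T, P above):
    S_x = np.array([3.0, 0.0, 5.0, 1.0])   # X's payoffs for outcomes (cc, cd, dc, dd)
    S_y = np.array([3.0, 5.0, 0.0, 1.0])   # Y's payoffs for the same outcomes

    def transition_matrix(P, Q):
        # P and Q are memory-1 strategies {Pcc, Pcd, Pdc, Pdd}, each from that player's
        # own point of view; outcome cd for X is dc for Y, so Q is re-indexed.
        q = Q[[0, 2, 1, 3]]
        return np.array([[px * py, px * (1 - py), (1 - px) * py, (1 - px) * (1 - py)]
                         for px, py in zip(P, q)])

    def long_run_payoffs(P, Q, n=1000):
        M = transition_matrix(P, Q)
        v = np.linalg.matrix_power(M, n)[0]   # any row of M^n approximates the stationary vector v
        return v @ S_x, v @ S_y

    # "Noisy" tit-for-tat and win-stay/lose-shift, with probabilities kept strictly
    # between 0 and 1 so that the chain has a unique stationary distribution.
    noisy_tft  = np.array([0.99, 0.01, 0.99, 0.01])
    noisy_wsls = np.array([0.99, 0.01, 0.01, 0.99])
    print(long_run_payoffs(noisy_tft, noisy_wsls))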
Zero-determinant strategies
In 2012, William H. Press and Freeman Dyson published a new class of strategies for the stochastic iterated prisoner's dilemma called "zero-determinant" (ZD) strategies. The long-term payoffs for encounters between X and Y can be expressed in terms of a determinant that is a function of the two strategies and the short-term payoff vectors: sx = D(P, Q, Sx)/D(P, Q, U) and sy = D(P, Q, Sy)/D(P, Q, U), where U = {1, 1, 1, 1}; these expressions do not involve the stationary vector v. Since the determinant function D(P, Q, f) is linear in f, it follows that αsx + βsy + γ = D(P, Q, αSx + βSy + γU)/D(P, Q, U). Any strategies for which D(P, Q, αSx + βSy + γU) = 0 are by definition ZD strategies, and the long-term payoffs then obey the relation αsx + βsy + γ = 0.
Tit-for-tat is a ZD strategy which is "fair", in the sense of not gaining advantage over the other player. But the ZD space also contains strategies that, in the case of two players, can allow one player to unilaterally set the other player's score or alternatively force an evolutionary player to achieve a payoff some percentage lower than his own. The extorted player could defect, but would thereby hurt himself by getting a lower payoff. Thus, extortion solutions turn the iterated prisoner's dilemma into a sort of ultimatum game. Specifically, X is able to choose a strategy for which D(P, Q, βSy + γU) = 0, unilaterally setting sy to a specific value within a particular range of values, independent of Y's strategy, offering an opportunity for X to "extort" player Y (and vice versa). But if X tries to set sx to a particular value, the range of possibilities is much smaller, consisting only of complete cooperation or complete defection.
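The algebra can be checked numerically. The sketch below is illustrative (it again assumes the conventional payoffs and is not code from Press and Dyson); it builds an extortionate strategy following Press and Dyson's construction, in which X's vector (P1 − 1, P2 − 1, P3, P4) is taken proportional to (Sx − P·U) − χ(Sy − P·U), and checks that X's surplus over the punishment payoff stays a fixed multiple of Y's whatever memory-1 strategy Y plays:

    import numpy as np

    S_x = np.array([3.0, 0.0, 5.0, 1.0])    # X's payoffs for (cc, cd, dc, dd) — conventional values
    S_y = np.array([3.0, 5.0, 0.0, 1.0])    # Y's payoffs for the same outcomes
    P_pun, U = 1.0, np.ones(4)               # punishment payoff and the all-ones vector

    def long_run_payoffs(P, Q, n=2000):
        q = Q[[0, 2, 1, 3]]                  # Y's strategy re-indexed to X's outcome order
        M = np.array([[px * py, px * (1 - py), (1 - px) * py, (1 - px) * (1 - py)]
                      for px, py in zip(P, q)])
        v = np.linalg.matrix_power(M, n)[0]
        return v @ S_x, v @ S_y

    # Extortionate ZD strategy (assumed construction): this choice should enforce
    # s_x - P_pun = chi * (s_y - P_pun) against every opponent.
    chi, phi = 3.0, 1.0 / 26.0
    p_tilde = phi * ((S_x - P_pun * U) - chi * (S_y - P_pun * U))
    P_extort = p_tilde + np.array([1.0, 1.0, 0.0, 0.0])
    print("X's strategy:", P_extort)          # [11/13, 1/2, 7/26, 0]

    rng = np.random.default_rng(0)
    for _ in range(3):
        Q = rng.uniform(0.05, 0.95, size=4)   # an arbitrary (ergodic) opponent
        s_x, s_y = long_run_payoffs(P_extort, Q)
        print(f"s_x - P = {s_x - P_pun:.4f},  chi*(s_y - P) = {chi * (s_y - P_pun):.4f}")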
An extension of the iterated prisoner's dilemma is an evolutionary stochastic iterated prisoner's dilemma, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly because they reduce each other's surplus).
Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is larger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.
While extortionary ZD strategies are not stable in large populations, another ZD class called "generous" strategies is both stable and robust. When the population is not too small, these strategies can supplant any other ZD strategy and even perform well against a broad array of generic strategies for iterated prisoner's dilemma, including win–stay, lose–switch. This was proven specifically for the donation game by Alexander Stewart and Joshua Plotkin in 2013. Generous strategies will cooperate with other cooperative players, and in the face of defection, the generous player loses more utility than its rival. Generous strategies are the intersection of ZD strategies and so-called "good" strategies, which were defined by Ethan Akin to be those for which the player responds to past mutual cooperation with future cooperation and splits expected payoffs equally if he receives at least the cooperative expected payoff. Among good strategies, the generous (ZD) subset performs well when the population is not too small. If the population is very small, defection strategies tend to dominate.
Continuous iterated prisoner's dilemma
Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd found that in such situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. In a continuous prisoner's dilemma, if a population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from assorting with one another. By contrast, in a discrete prisoner's dilemma, tit-for-tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since nature arguably offers more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of tit-for-tat-like cooperation are extremely rare even though tit-for-tat seems robust in theoretical models.
Real-life examples
Many instances of human interaction and natural processes have payoff matrices like the prisoner's dilemma's. It is therefore of interest to the social sciences, such as economics, politics, and sociology, as well as to the biological sciences, such as ethology and evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of prisoner's dilemma.
Environmental studies
In environmental studies, the dilemma is evident in crises such as global climate change. It is argued that all countries will benefit from a stable climate, but any single country is often hesitant to curb its emissions. The immediate benefit to any one country from maintaining current behavior is perceived to be greater than the purported eventual benefit to that country if all countries' behavior was changed, therefore explaining the impasse concerning climate change as of 2007.
An important difference between climate-change politics and the prisoner's dilemma is uncertainty; the extent and pace at which pollution can change climate is not known. The dilemma faced by governments is therefore different from the prisoner's dilemma in that the payoffs of cooperation are unknown. This difference suggests that states will cooperate much less than in a real iterated prisoner's dilemma, so that the probability of avoiding a possible climate catastrophe is much smaller than that suggested by a game-theoretical analysis of the situation using a real iterated prisoner's dilemma.
Thomas Osang and Arundhati Nandy provide a theoretical explanation with proofs for a regulation-driven win-win situation along the lines of Michael Porter's hypothesis, in which government regulation of competing firms is substantial.
Animals
Cooperative behavior of many animals can be understood as an example of the iterated prisoner's dilemma. Often animals engage in long-term partnerships; for example, guppies inspect predators cooperatively in groups, and they are thought to punish non-cooperative inspectors.
Vampire bats are social animals that engage in reciprocal food exchange. Applying the payoffs from the prisoner's dilemma can help explain this behavior.
Psychology
In addiction research and behavioral economics, George Ainslie points out that addiction can be cast as an intertemporal prisoner's dilemma problem between the present and future selves of the addict. In this case, "defecting" means relapsing, and not relapsing both today and in the future is by far the best outcome. The case where one abstains today but relapses in the future is the worst outcome: in some sense, the discipline and self-sacrifice involved in abstaining today have been "wasted" because the future relapse means that the addict is right back where they started and will have to start over. Relapsing today and tomorrow is a slightly "better" outcome, because while the addict is still addicted, they haven't put the effort into trying to stop. The final case, where one engages in the addictive behavior today while abstaining tomorrow, has the problem that (as in other prisoner's dilemmas) there is an obvious benefit to defecting "today", but tomorrow one will face the same prisoner's dilemma, and the same obvious benefit will be present then, ultimately leading to an endless string of defections.
In The Science of Trust, John Gottman defines good relationships as those where partners know not to enter into mutual defection behavior, or at least not to get dynamically stuck there in a loop. In cognitive neuroscience, fast brain signaling associated with processing different rounds may indicate choices at the next round. Mutual cooperation outcomes entail brain activity changes predictive of how quickly a person will cooperate in kind at the next opportunity; this activity may be linked to basic homeostatic and motivational processes, possibly increasing the likelihood of short-cutting into mutual cooperation.
Economics
The prisoner's dilemma has been called the E. coli of social psychology, and it has been used widely to research various topics such as oligopolistic competition and collective action to produce a collective good.
Advertising is sometimes cited as a real example of the prisoner's dilemma. When cigarette advertising was legal in the United States, competing cigarette manufacturers had to decide how much money to spend on advertising. The effectiveness of Firm A's advertising was partially determined by the advertising conducted by Firm B. Likewise, the profit derived from advertising for Firm B is affected by the advertising conducted by Firm A. If both Firm A and Firm B chose to advertise during a given period, then the advertisement from each firm negates the other's, receipts remain constant, and expenses increase due to the cost of advertising. Both firms would benefit from a reduction in advertising. However, should Firm B choose not to advertise, Firm A could benefit greatly by advertising. Nevertheless, the optimal amount of advertising by one firm depends on how much advertising the other undertakes. As the best strategy is dependent on what the other firm chooses there is no dominant strategy, which makes it slightly different from a prisoner's dilemma. The outcome is similar, though, in that both firms would be better off were they to advertise less than in the equilibrium.
Sometimes cooperative behaviors do emerge in business situations. For instance, cigarette manufacturers endorsed the making of laws banning cigarette advertising, understanding that this would reduce costs and increase profits across the industry.
Without enforceable agreements, members of a cartel are also involved in a (multi-player) prisoner's dilemma. "Cooperating" typically means agreeing to a price floor, while "defecting" means selling under this minimum level, instantly taking business from other cartel members. Anti-trust authorities want potential cartel members to mutually defect, ensuring the lowest possible prices for consumers.
Sport
Doping in sport has been cited as an example of a prisoner's dilemma. Two competing athletes have the option to use an illegal and/or dangerous drug to boost their performance. If neither athlete takes the drug, then neither gains an advantage. If only one does, then that athlete gains a significant advantage over the competitor, reduced by the legal and/or medical dangers of having taken the drug. But if both athletes take the drug, the benefits cancel out and only the dangers remain, putting them both in a worse position than if neither had doped.
International politics
In international relations theory, the prisoner's dilemma is often used to demonstrate why cooperation fails in situations when cooperation between states is collectively optimal but individually suboptimal. A classic example is the security dilemma, whereby an increase in one state's security (such as increasing its military strength) leads other states to fear for their own security out of fear of offensive action. Consequently, security-increasing measures can lead to tensions, escalation or conflict with one or more other parties, producing an outcome which no party truly desires. The security dilemma is particularly intense in situations when it is hard to distinguish offensive weapons from defensive weapons, and offense has the advantage in any conflict over defense.
The prisoner's dilemma has frequently been used by realist international relations theorists to demonstrate why all states (regardless of their internal policies or professed ideology) under international anarchy will struggle to cooperate with one another even when all benefit from such cooperation.
Critics of realism argue that iteration and extending the shadow of the future are solutions to the prisoner's dilemma. When actors play the prisoner's dilemma once, they have incentives to defect, but when they expect to play it repeatedly, they have greater incentives to cooperate.
Multiplayer dilemmas
Many real-life dilemmas involve multiple players. Although metaphorical, Garrett Hardin's tragedy of the commons may be viewed as an example of a multi-player generalization of the prisoner's dilemma: each villager makes a choice for personal gain or restraint. The collective reward for unanimous or frequent defection is very low payoffs and the destruction of the commons.
The commons are not always exploited: William Poundstone, in a book about the prisoner's dilemma, describes a situation in New Zealand where newspaper boxes are left unlocked. It is possible for people to take a paper without paying (defecting), but very few do, feeling that if they do not pay then neither will others, destroying the system. Subsequent research by Elinor Ostrom, winner of the 2009 Nobel Memorial Prize in Economic Sciences, hypothesized that the tragedy of the commons is oversimplified, with the negative outcome influenced by outside influences. Without complicating pressures, groups communicate and manage the commons among themselves for their mutual benefit, enforcing social norms to preserve the resource and achieve the maximum good for the group, an example of effecting the best-case outcome for prisoner's dilemma.
Academic settings
The prisoner's dilemma has been used in various academic settings to illustrate the complexities of cooperation and competition. One notable example is the classroom experiment conducted by sociology professor Dan Chambliss at Hamilton College in the 1980s. Starting in 1981, Chambliss proposed that if no student took the final exam, everyone would receive an A, but if even one student took it, those who didn't would receive a zero. In 1988, John Werner, a first-year student, successfully organized his classmates to boycott the exam, demonstrating a practical application of game theory and the prisoner's dilemma concept.
Nearly 25 years later, a similar incident occurred at Johns Hopkins University in 2013. Professor Peter Fröhlich's grading policy scaled final exams according to the highest score, meaning that if everyone received the same score, they would all get an A. Students in Fröhlich's classes organized a boycott of the final exam, ensuring that no one took it. As a result, every student received an A, successfully solving the prisoner's dilemma in a mutually optimal way without iteration. These examples highlight how the prisoner's dilemma can be used to explore cooperative behavior and strategic decision-making in educational contexts.
Related games
Closed-bag exchange
Douglas Hofstadter suggested that people often find problems such as the prisoner's dilemma easier to understand when they are illustrated in the form of a simple game or trade-off. One of several examples he used was the "closed bag exchange": two people meet and exchange closed bags, with the understanding that one of them contains money and the other contains a purchase. Either player can choose to honor the deal by putting into their bag what they agreed, or can defect by handing over an empty bag.
Friend or Foe?
Friend or Foe? is a game show that aired from 2002 to 2003 on the Game Show Network in the US. On the game show, three pairs of people compete. When a pair is eliminated, they play a game similar to the prisoner's dilemma to determine how the winnings are split. If they both cooperate (Friend), they share the winnings 50–50. If one cooperates and the other defects (Foe), the defector gets all the winnings, and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the reward matrix is slightly different from the standard one given above, as the rewards for the "both defect" and the "cooperate while the opponent defects" cases are identical. This makes the "both defect" case a weak equilibrium, compared with being a strict equilibrium in the standard prisoner's dilemma. If a contestant knows that their opponent is going to vote "Foe", then their own choice does not affect their own winnings. In a specific sense, Friend or Foe has a rewards model between prisoner's dilemma and the game of Chicken.
This is the rewards matrix (shares of the jackpot, with the row player's share listed first):

                 Opponent plays "Friend"    Opponent plays "Foe"
    "Friend"     1/2, 1/2                   0, 1
    "Foe"        1, 0                       0, 0
This payoff matrix has also been used on the British television programs Trust Me, Shafted, The Bank Job and Golden Balls, and on the American game show Take It All, as well as for the winning couple on the reality shows Bachelor Pad and Love Island. Game data from the Golden Balls series has been analyzed by a team of economists, who found that cooperation was "surprisingly high" for amounts of money that would seem consequential in the real world but were comparatively low in the context of the game.
Iterated snowdrift
Researchers from the University of Lausanne and the University of Edinburgh have suggested that the "Iterated Snowdrift Game" may more closely reflect real-world social situations, although this model is actually a chicken game. In this model, the risk of being exploited through defection is lower, and individuals always gain from taking the cooperative choice. The snowdrift game imagines two drivers who are stuck on opposite sides of a snowdrift, each of whom is given the option of shoveling snow to clear a path or remaining in their car. A player's highest payoff comes from leaving the opponent to clear all the snow by themselves, but the opponent is still nominally rewarded for their work.
This may better reflect real-world scenarios, the researchers giving the example of two scientists collaborating on a report, both of whom would benefit if the other worked harder. "But when your collaborator doesn't do any work, it's probably better for you to do all the work yourself. You'll still end up with a completed project."
Coordination games
In coordination games, players must coordinate their strategies for a good outcome. An example is two cars that abruptly meet in a blizzard; each must choose whether to swerve left or right. If both swerve left, or both right, the cars do not collide. The local left- and right-hand traffic convention helps to co-ordinate their actions.
Symmetrical co-ordination games include Stag hunt and Bach or Stravinsky.
Asymmetric prisoner's dilemmas
A more general set of games is asymmetric. As in the prisoner's dilemma, the best outcome is cooperation, and there are motives for defection. Unlike the symmetric prisoner's dilemma, though, one player has more to lose and/or more to gain than the other. Some such games have been described as a prisoner's dilemma in which one prisoner has an alibi, hence the term "alibi game".
In experiments, players getting unequal payoffs in repeated games may seek to maximize profits, but only under the condition that both players receive equal payoffs; this may lead to a stable equilibrium strategy in which the disadvantaged player defects every X game, while the other always co-operates. Such behavior may depend on the experiment's social norms around fairness.
Software
Several software packages have been created to run simulations and tournaments of the prisoner's dilemma, some of which have their source code available:
The source code for the second tournament run by Robert Axelrod (written by Axelrod and many contributors in Fortran)
Prison, a library written in Java, last updated in 1998
Axelrod-Python, written in Python
Evoplex, a fast agent-based modeling program released in 2018 by Marcos Cardinot
In fiction
Hannu Rajaniemi set the opening scene of his The Quantum Thief trilogy in a "dilemma prison". The main theme of the series has been described as the "inadequacy of a binary universe" and the ultimate antagonist is a character called the All-Defector. The first book in the series was published in 2010, with the two sequels, The Fractal Prince and The Causal Angel, published in 2012 and 2014, respectively.
A game modeled after the iterated prisoner's dilemma is a central focus of the 2012 video game Zero Escape: Virtue's Last Reward and a minor part in its 2016 sequel Zero Escape: Zero Time Dilemma.
In The Mysterious Benedict Society and the Prisoner's Dilemma by Trenton Lee Stewart, the main characters start by playing a version of the game and escaping from the "prison" altogether. Later, they become actual prisoners and escape once again.
In The Adventure Zone: Balance during The Suffering Game subarc, the player characters are twice presented with the prisoner's dilemma during their time in two liches' domain, once cooperating and once defecting.
In the eighth novel from the author James S. A. Corey, Tiamat's Wrath, Winston Duarte explains the prisoner's dilemma to his 14-year-old daughter, Teresa, to train her in strategic thinking.
The 2008 film The Dark Knight includes a scene loosely based on the problem in which the Joker rigs two ferries, one containing prisoners and the other containing civilians, arming both groups with the means to detonate the bomb on each other's ferries, threatening to detonate them both if they hesitate.
In moral philosophy
The prisoner's dilemma is commonly used as a thinking tool in moral philosophy as an illustration of the potential tension between the benefit of the individual and the benefit of the community.
Both the one-shot and the iterated prisoner's dilemma have applications in moral philosophy. Indeed, many morally significant situations, such as genocide, occur only once and cannot be repeated. Moreover, in many situations the outcomes of previous rounds are unknown to the players, since the parties involved are not necessarily the same from one interaction to the next (e.g. successive interactions with panhandlers on the street).
The philosopher David Gauthier uses the prisoner's dilemma to show how morality and rationality can conflict.
Some game theorists have criticized the use of the prisoner's dilemma as a thinking tool in moral philosophy. Kenneth Binmore argued that the prisoner's dilemma does not accurately describe the game played by humanity, which he argues is closer to a coordination game. Brian Skyrms shares this perspective.
Steven Kuhn suggests that these views may be reconciled by considering that moral behavior can modify the payoff matrix of a game, transforming it from a prisoner's dilemma into other games.
Pure and impure prisoner's dilemma
A prisoner's dilemma is considered "impure" if a mixed strategy may give better expected payoffs than a pure strategy. This creates the interesting possibility that the moral action from a utilitarian perspective (i.e., aiming at maximizing the good of an action) may require randomization of one's strategy, such as cooperating with 80% chance and defecting with 20% chance.
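To see concretely how randomization can matter from a utilitarian standpoint, the short sketch below uses an assumed payoff matrix with T = 10, R = 5, P = 2, S = 1 (so that T + S > 2R, an "impure" case) and scans over a common cooperation probability p used independently by both players, looking for the value of p that maximizes the expected sum of the two payoffs. The numbers are hypothetical and chosen only to satisfy the impurity condition.

```python
# Expected total (utilitarian) payoff when each player independently cooperates
# with probability p; payoffs are illustrative and satisfy T + S > 2R.
T, R, P, S = 10, 5, 2, 1

def expected_total(p):
    # p^2: both cooperate; 2p(1-p): one cooperates, one defects; (1-p)^2: both defect.
    return p * p * (2 * R) + 2 * p * (1 - p) * (T + S) + (1 - p) ** 2 * (2 * P)

best_p = max((i / 1000 for i in range(1001)), key=expected_total)
print(best_p, expected_total(best_p), expected_total(1.0))
# With these numbers the welfare-maximizing mix is p = 0.875, giving an expected
# total of 10.125, which exceeds the 10 obtained from certain mutual cooperation.
```

Under these assumed payoffs, a utilitarian would therefore prefer an 87.5% cooperation rate to guaranteed cooperation, in the spirit of the 80%/20% mix mentioned above.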
See also
Abilene paradox
Centipede game
Collective action problem
Externality
Folk theorem (game theory)
Free-rider problem
Gift-exchange game
Hobbesian trap
Innocent prisoner's dilemma
Liar Game
Metagame
Optional prisoner's dilemma
Prisoner's dilemma and cooperation
Public goods game
Reciprocal altruism
Rent-seeking
Social preferences
Superrationality
Swift trust theory
Tragedy of the commons
Traveler's dilemma
Unscrupulous diner's dilemma
Notes
References
Bibliography
Further reading
Amadae, S. (2016). "Prisoner's Dilemma", Prisoners of Reason. Cambridge University Press, NY, pp. 24–61.
Bicchieri, Cristina (1993). Rationality and Coordination. Cambridge University Press.
Dresher, M. (1961). The Mathematics of Games of Strategy: Theory and Applications Prentice-Hall, Englewood Cliffs, NJ.
Greif, A. (2006). Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Cambridge University Press, Cambridge, UK.
Rapoport, Anatol and Albert M. Chammah (1965). Prisoner's Dilemma. University of Michigan Press.
External links
The Bowerbird's Dilemma The Prisoner's Dilemma in ornithology – mathematical cartoon by Larry Gonick.
Dawkins: Nice Guys Finish First
Axelrod Iterated Prisoner's Dilemma Python library
Play Prisoner's Dilemma on oTree (N/A 11-5-17)
Nicky Case's Evolution of Trust, an example of the donation game
Iterated Prisoner's Dilemma online game by Wayne Davis
What The Prisoner's Dilemma Reveals About Life, The Universe, and Everything by Veritasium
Dilemmas
Environmental studies
Inefficiency in game theory
Moral psychology
Non-cooperative games
Social psychology
Social science experiments
Thought experiments | Prisoner's dilemma | [
"Mathematics"
] | 8,731 | [
"Game theory",
"Non-cooperative games",
"Inefficiency in game theory"
] |
43,730 | https://en.wikipedia.org/wiki/Linear%20programming | Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization).
More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polytope. A linear programming algorithm finds a point in the polytope where this function has the largest (or smallest) value if such a point exists.
Linear programs are problems that can be expressed in standard form as: find a vector x that maximizes cTx subject to Ax ≤ b and x ≥ 0.
Here the components of x are the variables to be determined, c and b are given vectors, and A is a given matrix. The function whose value is to be maximized (cTx in this case) is called the objective function. The constraints Ax ≤ b and x ≥ 0 specify a convex polytope over which the objective function is to be optimized.
Linear programming can be applied to various fields of study. It is widely used in mathematics and, to a lesser extent, in business, economics, and some engineering problems. There is a close connection between linear programs, eigenequations, John von Neumann's general equilibrium model, and structural equilibrium models (see dual linear program for details).
Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proven useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design.
History
The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in 1827 published a method for solving them, and after whom the method of Fourier–Motzkin elimination is named.
In the late 1930s, Soviet mathematician Leonid Kantorovich and American economist Wassily Leontief independently delved into the practical applications of linear programming. Kantorovich focused on manufacturing schedules, while Leontief explored economic applications. Their groundbreaking work was largely overlooked for decades.
The turning point came during World War II when linear programming emerged as a vital tool. It found extensive use in addressing complex wartime challenges, including transportation logistics, scheduling, and resource allocation. Linear programming proved invaluable in optimizing these processes while considering critical constraints such as costs and resource availability.
Despite its initial obscurity, the wartime successes propelled linear programming into the spotlight. Post-WWII, the method gained widespread recognition and became a cornerstone in various fields, from operations research to economics. The overlooked contributions of Kantorovich and Leontief in the late 1930s eventually became foundational to the broader acceptance and utilization of linear programming in optimizing decision-making processes.
Kantorovich's work was initially neglected in the USSR. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs. Kantorovich and Koopmans later shared the 1975 Nobel Memorial Prize in Economic Sciences. In 1941, Frank Lauren Hitchcock also formulated transportation problems as linear programs and gave a solution very similar to the later simplex method. Hitchcock had died in 1957, and the Nobel Memorial Prize is not awarded posthumously.
From 1946 to 1947 George B. Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig also invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases. When Dantzig arranged a meeting with John von Neumann to discuss his simplex method, von Neumann immediately conjectured the theory of duality by realizing that the problem he had been working on in game theory was equivalent. Dantzig provided formal proof in an unpublished report "A Theorem on Linear Inequalities" on January 5, 1948. Dantzig's work was made available to the public in 1951. In the post-war years, many industries applied it in their daily planning.
Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm. The theory behind linear programming drastically reduces the number of possible solutions that must be checked.
The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems.
Uses
Linear programming is a widely used field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have much research on specialized algorithms. A number of algorithms for other types of optimization problems work by solving linear programming problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Likewise, linear programming was heavily used in the early formation of microeconomics, and it is currently utilized in company management, such as planning, production, transportation, and technology. Although the modern management issues are ever-changing, most companies would like to maximize profits and minimize costs with limited resources. Google also uses linear programming to stabilize YouTube videos.
Standard form
Standard form is the usual and most intuitive form of describing a linear programming problem. It consists of the following three parts:
A linear (or affine) function to be maximized
e.g. f(x1, x2) = c1x1 + c2x2
Problem constraints of the following form
e.g. a11x1 + a12x2 ≤ b1, a21x1 + a22x2 ≤ b2, a31x1 + a32x2 ≤ b3
Non-negative variables
e.g. x1 ≥ 0, x2 ≥ 0
The problem is usually expressed in matrix form, and then becomes: maximize cTx subject to Ax ≤ b and x ≥ 0.
Other forms, such as minimization problems, problems with constraints on alternative forms, and problems involving negative variables can always be rewritten into an equivalent problem in standard form.
Example
Suppose that a farmer has a piece of farm land, say L hectares, to be planted with either wheat or barley or some combination of the two. The farmer has F kilograms of fertilizer and P kilograms of pesticide. Every hectare of wheat requires F1 kilograms of fertilizer and P1 kilograms of pesticide, while every hectare of barley requires F2 kilograms of fertilizer and P2 kilograms of pesticide. Let S1 be the selling price of wheat and S2 be the selling price of barley, per hectare. If we denote the area of land planted with wheat and barley by x1 and x2 respectively, then profit can be maximized by choosing optimal values for x1 and x2. This problem can be expressed with the following linear programming problem in the standard form:
maximize S1x1 + S2x2 (maximize the revenue: this is the objective function)
subject to x1 + x2 ≤ L (limit on total area)
F1x1 + F2x2 ≤ F (limit on fertilizer)
P1x1 + P2x2 ≤ P (limit on pesticide)
x1 ≥ 0, x2 ≥ 0 (cannot plant a negative area)
In matrix form this becomes:
maximize [S1 S2][x1; x2]
subject to [1 1; F1 F2; P1 P2][x1; x2] ≤ [L; F; P], [x1; x2] ≥ [0; 0]
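As a concrete illustration of how such a problem is handed to a solver, here is a short sketch using SciPy's linprog. The resource amounts and prices used below are made-up numbers, not values from the example above, and linprog minimizes by convention, so the objective coefficients are negated.

```python
# Solving the farmer example numerically (illustrative numbers, not from the article).
from scipy.optimize import linprog

L_land, F_fert, P_pest = 100.0, 1200.0, 600.0   # available land, fertilizer, pesticide
F1, P1, S1 = 15.0, 5.0, 500.0                   # wheat: fertilizer, pesticide, revenue per hectare
F2, P2, S2 = 10.0, 8.0, 400.0                   # barley: fertilizer, pesticide, revenue per hectare

c = [-S1, -S2]                  # linprog minimizes, so negate to maximize S1*x1 + S2*x2
A_ub = [[1.0, 1.0],             # x1 + x2       <= L  (land)
        [F1,  F2 ],             # F1*x1 + F2*x2 <= F  (fertilizer)
        [P1,  P2 ]]             # P1*x1 + P2*x2 <= P  (pesticide)
b_ub = [L_land, F_fert, P_pest]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)    # optimal planted areas (x1, x2) and the maximized revenue
```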
Augmented form (slack form)
Linear programming problems can be converted into an augmented form in order to apply the common form of the simplex algorithm. This form introduces non-negative slack variables to replace inequalities with equalities in the constraints. The problems can then be written in the following block matrix form:
Maximize z in:
[1 -cT 0; 0 A I] [z; x; s] = [0; b], x ≥ 0, s ≥ 0
where s denotes the vector of newly introduced slack variables, x denotes the vector of decision variables, and z is the variable to be maximized.
Example
The example above is converted into the following augmented form:
Maximize: z = S1x1 + S2x2 (objective function)
subject to: x1 + x2 + x3 = L (augmented constraint)
F1x1 + F2x2 + x4 = F (augmented constraint)
P1x1 + P2x2 + x5 = P (augmented constraint)
x1, x2, x3, x4, x5 ≥ 0
where x3, x4, x5 are (non-negative) slack variables, representing in this example the unused area, the amount of unused fertilizer, and the amount of unused pesticide.
In matrix form this becomes:
Maximize z in:
[1 -S1 -S2 0 0 0; 0 1 1 1 0 0; 0 F1 F2 0 1 0; 0 P1 P2 0 0 1] [z; x1; x2; x3; x4; x5] = [0; L; F; P], with x1, ..., x5 ≥ 0
Duality
Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the primal problem as:
Maximize cTx subject to Ax ≤ b, x ≥ 0;
with the corresponding symmetric dual problem,
Minimize bTy subject to ATy ≥ c, y ≥ 0.
An alternative primal formulation is:
Maximize cTx subject to Ax ≤ b;
with the corresponding asymmetric dual problem,
Minimize bTy subject to ATy = c, y ≥ 0.
There are two ideas fundamental to duality theory. One is the fact that (for the symmetric dual) the dual of a dual linear program is the original primal linear program. Additionally, every feasible solution for a linear program gives a bound on the optimal value of the objective function of its dual. The weak duality theorem states that the objective function value of the dual at any feasible solution is always greater than or equal to the objective function value of the primal at any feasible solution. The strong duality theorem states that if the primal has an optimal solution, x*, then the dual also has an optimal solution, y*, and cTx*=bTy*.
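As a small numerical illustration of the strong duality theorem, the sketch below solves an assumed two-variable primal problem and its symmetric dual with SciPy and compares the optimal objective values; the data are arbitrary illustrative numbers, not part of the theory above.

```python
# Numerically checking strong duality on a small example (illustrative data).
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 5.0])

# Primal: maximize c^T x  s.t.  A x <= b, x >= 0  (negate c because linprog minimizes)
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Symmetric dual: minimize b^T y  s.t.  A^T y >= c, y >= 0
# (rewrite A^T y >= c as -A^T y <= -c so it fits linprog's <= convention)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print(-primal.fun, dual.fun)   # strong duality: both optimal values are 19.0 for this data
```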
A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual and the primal to be infeasible. See dual linear program for details and several more examples.
Variations
Covering/packing dualities
A covering LP is a linear program of the form:
Minimize: bTy,
subject to: ATy ≥ c, y ≥ 0,
such that the matrix A and the vectors b and c are non-negative.
The dual of a covering LP is a packing LP, a linear program of the form:
Maximize: cTx,
subject to: Ax ≤ b, x ≥ 0,
such that the matrix A and the vectors b and c are non-negative.
Examples
Covering and packing LPs commonly arise as a linear programming relaxation of a combinatorial problem and are important in the study of approximation algorithms. For example, the LP relaxations of the set packing problem, the independent set problem, and the matching problem are packing LPs. The LP relaxations of the set cover problem, the vertex cover problem, and the dominating set problem are also covering LPs.
Finding a fractional coloring of a graph is another example of a covering LP. In this case, there is one constraint for each vertex of the graph and one variable for each independent set of the graph.
Complementary slackness
It is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known using the complementary slackness theorem. The theorem states:
Suppose that x = (x1, x2, ... , xn) is primal feasible and that y = (y1, y2, ... , ym) is dual feasible. Let (w1, w2, ..., wm) denote the corresponding primal slack variables, and let (z1, z2, ... , zn) denote the corresponding dual slack variables. Then x and y are optimal for their respective problems if and only if
xj zj = 0, for j = 1, 2, ... , n, and
wi yi = 0, for i = 1, 2, ... , m.
So if the i-th slack variable of the primal is not zero, then the i-th variable of the dual is equal to zero. Likewise, if the j-th slack variable of the dual is not zero, then the j-th variable of the primal is equal to zero.
This necessary condition for optimality conveys a fairly simple economic principle. In standard form (when maximizing), if there is slack in a constrained primal resource (i.e., there are "leftovers"), then additional quantities of that resource must have no value. Likewise, if there is slack in the dual (shadow) price non-negativity constraint requirement, i.e., the price is not zero, then there must be scarce supplies (no "leftovers").
Theory
Existence of optimal solutions
Geometrically, the linear constraints define the feasible region, which is a convex polytope. A linear function is a convex function, which implies that every local minimum is a global minimum; similarly, a linear function is a concave function, which implies that every local maximum is a global maximum.
An optimal solution need not exist, for two reasons. First, if the constraints are inconsistent, then no feasible solution exists: For instance, the constraints x ≥ 2 and x ≤ 1 cannot be satisfied jointly; in this case, we say that the LP is infeasible. Second, when the polytope is unbounded in the direction of the gradient of the objective function (where the gradient of the objective function is the vector of the coefficients of the objective function), then no optimal value is attained because it is always possible to do better than any finite value of the objective function.
Optimal vertices (and rays) of polyhedra
Otherwise, if a feasible solution exists and if the constraint set is bounded, then the optimum value is always attained on the boundary of the constraint set, by the maximum principle for convex functions (alternatively, by the minimum principle for concave functions) since linear functions are both convex and concave. However, some problems have distinct optimal solutions; for example, the problem of finding a feasible solution to a system of linear inequalities is a linear programming problem in which the objective function is the zero function (i.e., the constant function taking the value zero everywhere). For this feasibility problem with the zero-function for its objective-function, if there are two distinct solutions, then every convex combination of the solutions is a solution.
The vertices of the polytope are also called basic feasible solutions. The reason for this choice of name is as follows. Let d denote the number of variables. Then the fundamental theorem of linear inequalities implies (for feasible problems) that for every vertex x* of the LP feasible region, there exists a set of d (or fewer) inequality constraints from the LP such that, when we treat those d constraints as equalities, the unique solution is x*. Thereby we can study these vertices by means of looking at certain subsets of the set of all constraints (a discrete set), rather than the continuum of LP solutions. This principle underlies the simplex algorithm for solving linear programs.
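The following sketch makes the vertex principle concrete for a tiny two-variable problem (the same illustrative data as in the duality sketch above): every pair of constraints, including the non-negativity constraints, is treated as a pair of equalities, the resulting 2-by-2 systems are solved, infeasible intersection points are discarded, and the objective is evaluated at the surviving basic feasible solutions. Brute-force enumeration like this is only sensible for toy problems; it is meant purely to illustrate why one may restrict attention to vertices.

```python
# Enumerating basic feasible solutions of a small LP by intersecting constraint pairs.
import numpy as np
from itertools import combinations

# maximize 3*x1 + 5*x2  subject to  2*x1 + x2 <= 8,  x1 + 3*x2 <= 9,  x1 >= 0,  x2 >= 0
# Every constraint is written as a_row . x <= b_val (non-negativity as -x_i <= 0).
A = np.array([[ 2.0,  1.0],
              [ 1.0,  3.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([8.0, 9.0, 0.0, 0.0])
c = np.array([3.0, 5.0])

best_point, best_value = None, -np.inf
for i, j in combinations(range(len(b)), 2):
    sub_A, sub_b = A[[i, j]], b[[i, j]]
    if abs(np.linalg.det(sub_A)) < 1e-12:
        continue                              # the two chosen constraints are parallel
    x = np.linalg.solve(sub_A, sub_b)         # treat the pair as equalities
    if np.all(A @ x <= b + 1e-9):             # keep only feasible intersection points
        value = c @ x
        if value > best_value:
            best_point, best_value = x, value

print(best_point, best_value)                  # the optimum lies at the vertex (3, 2) with value 19
```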
Algorithms
Basis exchange algorithms
Simplex algorithm of Dantzig
The simplex algorithm, developed by George Dantzig in 1947, solves LP problems by constructing a feasible solution at a vertex of the polytope and then walking along a path on the edges of the polytope to vertices with non-decreasing values of the objective function until an optimum is reached for sure. In many practical problems, "stalling" occurs: many pivots are made with no increase in the objective function. In rare practical problems, the usual versions of the simplex algorithm may actually "cycle". To avoid cycles, researchers developed new pivoting rules.
In practice, the simplex algorithm is quite efficient and can be guaranteed to find the global optimum if certain precautions against cycling are taken. The simplex algorithm has been proved to solve "random" problems efficiently, i.e. in a cubic number of steps, which is similar to its behavior on practical problems.
However, the simplex algorithm has poor worst-case behavior: Klee and Minty constructed a family of linear programming problems for which the simplex method takes a number of steps exponential in the problem size. In fact, for some time it was not known whether the linear programming problem was solvable in polynomial time, i.e. of complexity class P.
Criss-cross algorithm
Like the simplex algorithm of Dantzig, the criss-cross algorithm is a basis-exchange algorithm that pivots between bases. However, the criss-cross algorithm need not maintain feasibility, but can pivot rather from a feasible basis to an infeasible basis. The criss-cross algorithm does not have polynomial time-complexity for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee–Minty cube, in the worst case.
Interior point
In contrast to the simplex algorithm, which finds an optimal solution by traversing the edges between vertices on a polyhedral set, interior-point methods move through the interior of the feasible region.
Ellipsoid algorithm, following Khachiyan
This is the first worst-case polynomial-time algorithm ever found for linear programming. To solve a problem which has n variables and can be encoded in L input bits, this algorithm runs in O(n^6 L) time. Leonid Khachiyan solved this long-standing complexity issue in 1979 with the introduction of the ellipsoid method. The convergence analysis has (real-number) predecessors, notably the iterative methods developed by Naum Z. Shor and the approximation algorithms by Arkadi Nemirovski and D. Yudin.
Projective algorithm of Karmarkar
Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm was not a computational break-through, as the simplex method is more efficient for all but specially constructed families of linear programs.
However, Khachiyan's algorithm inspired new lines of research in linear programming. In 1984, N. Karmarkar proposed a projective method for linear programming. Karmarkar's algorithm improved on Khachiyan's worst-case polynomial bound (giving O(n^3.5 L)). Karmarkar claimed that his algorithm was much faster in practical LP than the simplex method, a claim that created great interest in interior-point methods. Since Karmarkar's discovery, many interior-point methods have been proposed and analyzed.
Vaidya's 87 algorithm
In 1987, Vaidya proposed an algorithm that runs in time.
Vaidya's 89 algorithm
In 1989, Vaidya developed an algorithm that runs in time. Formally speaking, the algorithm takes arithmetic operations in the worst case, where is the number of constraints, is the number of variables, and is the number of bits.
Input sparsity time algorithms
In 2015, Lee and Sidford showed that linear programming can be solved in time, where denotes the soft O notation, and represents the number of non-zero elements, and it remains taking in the worst case.
Current matrix multiplication time algorithm
In 2019, Cohen, Lee and Song improved the running time to time, is the exponent of matrix multiplication and is the dual exponent of matrix multiplication. is (roughly) defined to be the largest number such that one can multiply an matrix by a matrix in time. In a followup work by Lee, Song and Zhang, they reproduce the same result via a different method. These two algorithms remain when and . The result due to Jiang, Song, Weinstein and Zhang improved to .
Comparison of interior-point methods and simplex algorithms
The current opinion is that the efficiencies of good implementations of simplex-based methods and interior point methods are similar for routine applications of linear programming. However, for specific types of LP problems, it may be that one type of solver is better than another (sometimes much better), and that the structure of the solutions generated by interior point methods versus simplex-based methods is significantly different, with the support set of active variables typically being smaller for the latter.
Open problems and recent work
There are several open problems in the theory of linear programming, the solution of which would represent fundamental breakthroughs in mathematics and potentially major advances in our ability to solve large-scale linear programs.
Does LP admit a strongly polynomial-time algorithm?
Does LP admit a strongly polynomial-time algorithm to find a strictly complementary solution?
Does LP admit a polynomial-time algorithm in the real number (unit cost) model of computation?
This closely related set of problems has been cited by Stephen Smale as among the 18 greatest unsolved problems of the 21st century. In Smale's words, the third version of the problem "is the main unsolved problem of linear programming theory." While algorithms exist to solve linear programming in weakly polynomial time, such as the ellipsoid methods and interior-point techniques, no algorithms have yet been found that allow strongly polynomial-time performance in the number of constraints and the number of variables. The development of such algorithms would be of great theoretical interest, and perhaps allow practical gains in solving large LPs as well.
Although the Hirsch conjecture was recently disproved for higher dimensions, it still leaves the following questions open.
Are there pivot rules which lead to polynomial-time simplex variants?
Do all polytopal graphs have polynomially bounded diameter?
These questions relate to the performance analysis and development of simplex-like methods. The immense efficiency of the simplex algorithm in practice despite its exponential-time theoretical performance hints that there may be variations of simplex that run in polynomial or even strongly polynomial time. It would be of great practical and theoretical significance to know whether any such variants exist, particularly as an approach to deciding if LP can be solved in strongly polynomial time.
The simplex algorithm and its variants fall in the family of edge-following algorithms, so named because they solve linear programming problems by moving from vertex to vertex along edges of a polytope. This means that their theoretical performance is limited by the maximum number of edges between any two vertices on the LP polytope. As a result, we are interested in knowing the maximum graph-theoretical diameter of polytopal graphs. It has been proved that all polytopes have subexponential diameter. The recent disproof of the Hirsch conjecture is the first step to prove whether any polytope has superpolynomial diameter. If any such polytopes exist, then no edge-following variant can run in polynomial time. Questions about polytope diameter are of independent mathematical interest.
Simplex pivot methods preserve primal (or dual) feasibility. On the other hand, criss-cross pivot methods do not preserve (primal or dual) feasibility; they may visit primal feasible, dual feasible or primal-and-dual infeasible bases in any order. Pivot methods of this type have been studied since the 1970s. Essentially, these methods attempt to find the shortest pivot path on the arrangement polytope under the linear programming problem. In contrast to polytopal graphs, graphs of arrangement polytopes are known to have small diameter, allowing the possibility of a strongly polynomial-time criss-cross pivot algorithm without resolving questions about the diameter of general polytopes.
Integer unknowns
If all of the unknown variables are required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are in many practical situations (those with bounded variables) NP-hard. 0–1 integer programming or binary integer programming (BIP) is the special case of integer programming where variables are required to be 0 or 1 (rather than arbitrary integers). This problem is also classified as NP-hard, and in fact the decision version was one of Karp's 21 NP-complete problems.
If only some of the unknown variables are required to be integers, then the problem is called a mixed integer (linear) programming (MIP or MILP) problem. These are generally also NP-hard because they are even more general than ILP programs.
There are however some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integers or – more general – where the system has the total dual integrality (TDI) property.
Advanced algorithms for solving integer linear programs include the following (a simplified branch-and-bound sketch appears below):
cutting-plane method
Branch and bound
Branch and cut
Branch and price
if the problem has some extra structure, it may be possible to apply delayed column generation.
Such integer-programming algorithms are discussed by Padberg and in Beasley.
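To give a feel for the branch and bound idea named above, here is a deliberately simplified sketch that maximizes cTx over non-negative integer x subject to A_ub·x ≤ b_ub, repeatedly solving LP relaxations with SciPy and branching on a fractional variable. Real solvers add presolve, cutting planes, heuristics and far better branching rules; the example data at the bottom are arbitrary illustrative numbers.

```python
# A deliberately simplified branch-and-bound sketch: maximize c^T x over integer
# x >= 0 subject to A_ub x <= b_ub, using SciPy's LP solver for the relaxations.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    best_value, best_x = -math.inf, None
    stack = [bounds]                                   # each entry: per-variable bounds of a subproblem
    while stack:
        current_bounds = stack.pop()
        relax = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, bounds=current_bounds)
        if not relax.success:
            continue                                   # relaxation not solved (e.g. infeasible): prune
        value = -relax.fun
        if value <= best_value:
            continue                                   # bound: this branch cannot beat the incumbent
        frac_index = next((i for i, xi in enumerate(relax.x)
                           if abs(xi - round(xi)) > 1e-6), None)
        if frac_index is None:                         # relaxation is already integral: new incumbent
            best_value, best_x = value, [int(round(xi)) for xi in relax.x]
            continue
        xi = relax.x[frac_index]
        lo, hi = current_bounds[frac_index]
        lower, upper = list(current_bounds), list(current_bounds)
        lower[frac_index] = (lo, math.floor(xi))       # branch: x_i <= floor(xi)
        upper[frac_index] = (math.ceil(xi), hi)        # branch: x_i >= ceil(xi)
        stack.extend([lower, upper])
    return best_x, best_value

# maximize 5*x1 + 4*x2 subject to 6*x1 + 4*x2 <= 24, x1 + 2*x2 <= 6, x1, x2 integer >= 0
print(branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6], [(0, None), (0, None)]))
# expected output for this data: ([4, 0], 20.0)
```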
Integral linear programs
A linear program in real variables is said to be integral if it has at least one optimal solution which is integral, i.e., made of only integer values. Likewise, a polyhedron is said to be integral if for all bounded feasible objective functions c, the linear program has an optimum with integer coordinates. As observed by Edmonds and Giles in 1977, one can equivalently say that the polyhedron is integral if for every bounded feasible integral objective function c, the optimal value of the linear program is an integer.
Integral linear programs are of central importance in the polyhedral aspect of combinatorial optimization since they provide an alternate characterization of a problem. Specifically, for any problem, the convex hull of the solutions is an integral polyhedron; if this polyhedron has a nice/compact description, then we can efficiently find the optimal feasible solution under any linear objective. Conversely, if we can prove that a linear programming relaxation is integral, then it is the desired description of the convex hull of feasible (integral) solutions.
Terminology is not consistent throughout the literature, so one should be careful to distinguish the following two concepts,
in an integer linear program, described in the previous section, variables are forcibly constrained to be integers, and this problem is NP-hard in general,
in an integral linear program, described in this section, variables are not constrained to be integers but rather one has proven somehow that the continuous problem always has an integral optimal value (assuming c is integral), and this optimal value may be found efficiently since all polynomial-size linear programs can be solved in polynomial time.
One common way of proving that a polyhedron is integral is to show that it is totally unimodular. There are other general methods including the integer decomposition property and total dual integrality. Other specific well-known integral LPs include the matching polytope, lattice polyhedra, submodular flow polyhedra, and the intersection of two generalized polymatroids/g-polymatroids – e.g. see Schrijver 2003.
Solvers and scripting (programming) languages
Permissive licenses:
Copyleft (reciprocal) licenses:
MINTO (Mixed Integer Optimizer, an integer programming solver which uses branch and bound algorithm) has publicly available source code but is not open source.
Proprietary licenses:
See also
Convex programming
Dynamic programming
Input–output model
Job shop scheduling
Least absolute deviations
Least-squares spectral analysis
Linear algebra
Linear production game
Linear-fractional programming (LFP)
LP-type problem
Mathematical programming
Nonlinear programming
Odds algorithm used to solve optimal stopping problems
Oriented matroid
Quadratic programming, a superset of linear programming
Semidefinite programming
Shadow price
Simplex algorithm, used to solve LP problems
Notes
References
F. L. Hitchcock: The distribution of a product from several sources to numerous localities, Journal of Mathematics and Physics, 20, 1941, 224–230.
G.B Dantzig: Maximization of a linear function of variables subject to linear inequalities, 1947. Published pp. 339–347 in T.C. Koopmans (ed.):Activity Analysis of Production and Allocation, New York-London 1951 (Wiley & Chapman-Hall)
J. E. Beasley, editor. Advances in Linear and Integer Programming. Oxford Science, 1996. (Collection of surveys)
(Average behavior on random problems)
Richard W. Cottle, ed. The Basic George B. Dantzig. Stanford Business Books, Stanford University Press, Stanford, California, 2003. (Selected papers by George B. Dantzig)
George B. Dantzig and Mukund N. Thapa. 1997. Linear programming 1: Introduction. Springer-Verlag.
(Comprehensive, covering e.g. pivoting and interior-point algorithms, large-scale problems, decomposition following Dantzig–Wolfe and Benders, and introducing stochastic programming.)
Evar D. Nering and Albert W. Tucker, 1993, Linear Programs and Related Problems, Academic Press. (elementary)
(carefully written account of primal and dual simplex algorithms and projective algorithms, with an introduction to integer linear programming – featuring the traveling salesman problem for Odysseus.)
(computer science)
(Invited survey, from the International Symposium on Mathematical Programming.)
(Computer science)
Further reading
Dmitris Alevras and Manfred W. Padberg, Linear Optimization and Extensions: Problems and Solutions, Universitext, Springer-Verlag, 2001. (Problems from Padberg with solutions.)
Chapter 4: Linear Programming: pp. 63–94. Describes a randomized half-plane intersection algorithm for linear programming.
A6: MP1: INTEGER PROGRAMMING, pg.245. (computer science, complexity theory)
(elementary introduction for mathematicians and computer scientists)
Cornelis Roos, Tamás Terlaky, Jean-Philippe Vial, Interior Point Methods for Linear Optimization, Second Edition, Springer-Verlag, 2006. (Graduate level)
Alexander Schrijver, Theory of Linear and Integer Programming. John Wiley & sons, 1998, (mathematical)
; with online solver: https://online-optimizer.appspot.com/
(linear optimization modeling)
H. P. Williams, Model Building in Mathematical Programming, Fifth Edition, 2013. (Modeling)
Stephen J. Wright, 1997, Primal-Dual Interior-Point Methods, SIAM. (Graduate level)
Yinyu Ye, 1997, Interior Point Algorithms: Theory and Analysis, Wiley. (Advanced graduate-level)
Ziegler, Günter M., Chapters 1–3 and 6–7 in Lectures on Polytopes, Springer-Verlag, New York, 1994. (Geometry)
External links
Guidance On Formulating LP Problems
Mathematical Programming Glossary
The Linear Programming FAQ
Benchmarks For Optimisation Software
Convex optimization
Geometric algorithms
P-complete problems | Linear programming | [
"Mathematics"
] | 6,261 | [
"Mathematical problems",
"Computational problems",
"P-complete problems"
] |
43,734 | https://en.wikipedia.org/wiki/Network%20packet | In telecommunications and computer networking, a network packet is a formatted unit of data carried by a packet-switched network. A packet consists of control information and user data; the latter is also known as the payload. Control information provides data for delivering the payload (e.g., source and destination network addresses, error detection codes, or sequencing information). Typically, control information is found in packet headers and trailers.
In packet switching, the bandwidth of the transmission medium is shared between multiple communication sessions, in contrast to circuit switching, in which circuits are preallocated for the duration of one session and data is typically transmitted as a continuous bit stream.
Terminology
In the seven-layer OSI model of computer networking, packet strictly refers to a protocol data unit at layer 3, the network layer. A data unit at layer 2, the data link layer, is a frame. In layer 4, the transport layer, the data units are segments and datagrams. Thus, in the example of TCP/IP communication over Ethernet, a TCP segment is carried in one or more IP packets, which are each carried in one or more Ethernet frames.
Architecture
The basis of the packet concept is the postal letter: the header is like the envelope, the payload is the content inside the envelope, and the footer is like the signature at the end of the letter.
Network design can achieve two major results by using packets: error detection and multiple host addressing.
Framing
Communications protocols use various conventions for distinguishing the elements of a packet and for formatting the user data. For example, in Point-to-Point Protocol, the packet is formatted in 8-bit bytes, and special characters are used to delimit elements. Other protocols, like Ethernet, establish the start of the header and data elements by their location relative to the start of the packet. Some protocols format the information at a bit level instead of a byte level.
Contents
A packet may contain any of the following components:
Addresses
The routing of network packets requires two network addresses, the source address of the sending host, and the destination address of the receiving host.
Error detection and correction
Error detection and correction is performed at various layers in the protocol stack. Network packets may contain a checksum, parity bits or cyclic redundancy checks to detect errors that occur during transmission.
At the transmitter, the calculation is performed before the packet is sent. When received at the destination, the checksum is recalculated, and compared with the one in the packet. If discrepancies are found, the packet may be corrected or discarded. Any packet loss due to these discards is dealt with by the network protocol.
In some cases, modifications of the network packet may be necessary while routing, in which cases checksums are recalculated.
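As a concrete example of such a check, IPv4 uses the Internet checksum of RFC 1071: the ones' complement of the ones' complement sum of the data taken as 16-bit words. The following minimal Python sketch computes it for an arbitrary byte string (padding odd-length input with a zero byte); it illustrates the arithmetic and is not an excerpt from any particular protocol stack.

```python
# Internet checksum (RFC 1071): ones' complement of the 16-bit ones' complement sum.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                               # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]       # next 16-bit word (big-endian)
        total = (total & 0xFFFF) + (total >> 16)    # fold any carry back into the low 16 bits
    return ~total & 0xFFFF

# The sender stores internet_checksum(header with the checksum field zeroed) in the header;
# recomputing the checksum over the header as received then yields 0 if no error is detected.
```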
Hop limit
Under fault conditions, packets can end up traversing a closed circuit. If nothing was done, eventually the number of packets circulating would build up until the network was congested to the point of failure. Time to live is a field that is decreased by one each time a packet goes through a network hop. If the field reaches zero, routing has failed, and the packet is discarded.
Ethernet packets have no time-to-live field and so are subject to broadcast storms in the presence of a switching loop.
Length
There may be a field to identify the overall packet length. However, in some types of networks, the length is implied by the duration of the transmission.
Protocol identifier
It is often desirable to carry multiple communication protocols on a network. A protocol identifier field specifies a packet's protocol and allows the protocol stack to process many types of packets.
Priority
Some networks implement quality of service which can prioritize some types of packets above others. This field indicates which packet queue should be used; a high-priority queue is emptied more quickly than lower-priority queues at points in the network where congestion is occurring.
Payload
In general, the payload is the data that is carried on behalf of an application. It is usually of variable length, up to a maximum that is set by the network protocol and sometimes the equipment on the route. When necessary, some networks can break a larger packet into smaller packets.
Examples
Internet protocol
IP packets are composed of a header and payload. The header consists of fixed and optional fields. The payload appears immediately after the header. An IP packet has no trailer. However, an IP packet is often carried as the payload inside an Ethernet frame, which has its own header and trailer.
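To make the header/payload split concrete, the sketch below unpacks the fixed 20-byte portion of an IPv4 header with Python's struct module, following the RFC 791 field layout, and slices out the payload. The field names and the dictionary layout are choices made here for illustration, not part of any standard API.

```python
# Unpacking the fixed 20-byte portion of an IPv4 header (RFC 791 field layout).
import struct
from socket import inet_ntoa

def parse_ipv4_header(packet: bytes) -> dict:
    (version_ihl, tos, total_length, identification, flags_fragment,
     ttl, protocol, header_checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    version = version_ihl >> 4            # high nibble: IP version (4)
    ihl = (version_ihl & 0x0F) * 4        # header length in bytes (> 20 if options are present)
    return {
        "version": version,
        "header_length": ihl,
        "total_length": total_length,
        "ttl": ttl,
        "protocol": protocol,             # e.g. 6 = TCP, 17 = UDP
        "source": inet_ntoa(src),
        "destination": inet_ntoa(dst),
        "payload": packet[ihl:total_length],
    }
```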
Per the end-to-end principle, IP networks do not provide guarantees of delivery, non-duplication, or in-order delivery of packets. However, it is common practice to layer a reliable transport protocol such as Transmission Control Protocol on top of the packet service to provide such protection.
NASA Deep Space Network
The Consultative Committee for Space Data Systems (CCSDS) packet telemetry standard defines the protocol used for the transmission of spacecraft instrument data over the deep-space channel. Under this standard, an image or other data sent from a spacecraft instrument is transmitted using one or more packets.
MPEG packetized stream
Packetized elementary stream (PES) is a specification associated with the MPEG-2 standard that allows an elementary stream to be divided into packets. The elementary stream is packetized by encapsulating sequential data bytes from the elementary stream between PES packet headers.
A typical method of transmitting elementary stream data from a video or audio encoder is to first create PES packets from the elementary stream data and then to encapsulate these PES packets inside an MPEG transport stream (TS) packets or an MPEG program stream (PS). The TS packets can then be transmitted using broadcasting techniques, such as those used in an ATSC and DVB.
NICAM
In order to provide mono compatibility, the NICAM signal is transmitted on a subcarrier alongside the sound carrier. This means that the FM or AM regular mono sound carrier is left alone for reception by monaural receivers. The NICAM packet (except for the header) is scrambled with a nine-bit pseudo-random bit-generator before transmission. Making the NICAM bitstream look more like white noise is important because this reduces signal patterning on adjacent TV channels.
See also
Anti-replay
Fast packet switching
Mangled packet
Packet analyzer
Packet generation model
Statistical time-division multiplexing
Tail drop
References
Units of information | Network packet | [
"Mathematics"
] | 1,305 | [
"Units of information",
"Quantity",
"Units of measurement"
] |
43,810 | https://en.wikipedia.org/wiki/Charles%20Yerkes | Charles Tyson Yerkes Jr. ( ; June 25, 1837 – December 29, 1905) was an American financier. He played a part in developing mass-transit systems in Chicago and London.
Philadelphia
Yerkes was born into a Quaker family in the Northern Liberties, a district adjacent to Philadelphia, on June 25, 1837. His mother, Elizabeth Link Yerkes, died of puerperal fever when he was five years old, and soon thereafter his father Charles Tyson Yerkes Sr. remarried a non-Quaker and was therefore expelled from the Society of Friends. After finishing a two-year course at Philadelphia's Central High School, Yerkes began his business career at the age of 17 as a clerk for a local grain brokerage. In 1859, aged 22, he began his own brokerage business and registered with the Philadelphia Stock Exchange.
By 1865, he had begun banking and specialized in selling municipal, state, and government bonds. Relying on his bank president father's associations, his political acquaintances, and his own acumen, Yerkes became well-known as a businessman. While serving as a financial agent for the City of Philadelphia's treasurer, Joseph Marcer, Yerkes risked public money in a large-scale stock speculation. This speculation ended calamitously when the Great Chicago Fire started a financial panic. Left insolvent and unable to make payment to the City of Philadelphia, Yerkes was convicted of larceny and sentenced to thirty-three months in Eastern State Penitentiary.
In an attempt to remain out of prison, he attempted to blackmail two influential Pennsylvania politicians. The blackmail plan initially failed; the damaging information concerning the politicians was eventually made public and politicians, including then-President Ulysses S. Grant, feared that the revelations might harm their prospects during the upcoming elections. Yerkes was promised a pardon if he would deny the accusations he had made. He agreed to these terms and was released after serving seven months in prison.
Chicago
In 1881 Yerkes traveled to Fargo in the Dakota Territory to obtain a divorce from his wife. Later that year, he remarried and relocated to Chicago. There, he opened a stock and grain brokerage but soon became involved with planning the city's public transportation system. In 1886, Yerkes and his business partners used a complex financial deal to acquire control of the North Chicago Street Railway and then followed this with a series of further takeovers until he controlled a majority of Chicago's street railway systems on the north and west sides. Yerkes was not averse to using bribery and blackmail to obtain his objectives.
In an effort to improve his public reputation, Yerkes decided in 1892 to fund the world's largest telescope after being lobbied by the astronomer George Ellery Hale and University of Chicago president William Rainey Harper. He had intended initially to finance only a telescope but agreed eventually to fund an entire observatory. He contributed more than $500,000 to the University of Chicago to establish what would become known as Yerkes Observatory, located in Williams Bay, Wisconsin.
In 1895, Yerkes purchased the Republican partisan newspaper, the Chicago Inter Ocean, using the publication to publicize his political agenda.
Yerkes began a campaign for longer streetcar franchises in 1895, but Illinois governor John Peter Altgeld vetoed the franchise bills. Yerkes renewed the campaign in 1897, and, after a hard-fought struggle, secured from the Illinois Legislature a bill granting city councils the right to approve extended franchises. The so-called franchise war then shifted to the Chicago City Council — a venue in which Yerkes ordinarily thrived. A partially reformed council and Mayor Carter Harrison Jr., however, ultimately defeated Yerkes, with the swing votes coming from aldermen "Hinky Dink" Kenna and "Bathhouse" John Coughlin.
In 1899, Yerkes sold the majority of his Chicago transport stocks and relocated to New York.
Art collection
While living in Chicago, Yerkes became an art collector, relying on Sarah Tyson Hallowell (1846–1924) to advise him for his purchases. After the Chicago World's Fair in 1893, she tried to interest him in the works of Auguste Rodin, which were part of the loan exhibition of French art. Because the subject matter was controversial, Yerkes initially refused the works, but he soon changed his mind and acquired two Rodin marbles, Cupid and Psyche and Orpheus, for his Chicago mansion, the first two of Rodin's works known to have been sold to an American collector. Yerkes' art collection also included paintings by Frans Hals, works by the French academic painters, such as Pygmalion and Galatea by Jean-Léon Gérôme and works by William-Adolphe Bouguereau and members of the Barbizon School. In 1904, he published a two volume catalog of his collection, which by that time was in New York:
Catalogue of paintings and sculpture in the collection of Charles T. Yerkes, esq., New York, 1904
London
In August 1900, Yerkes became involved with the development of the London underground railway system after riding along the route of one proposed line and surveying the city of London from the summit of Hampstead Heath. He established the Underground Electric Railways Company of London to take control of the District Railway and the partly built Baker Street and Waterloo Railway, Charing Cross, Euston and Hampstead Railway, and Great Northern, Piccadilly and Brompton Railway. Yerkes employed complex financial arrangements similar to those that he had used in the United States to raise the funds necessary to construct the new lines and electrify the District Railway (known presently as the District line). In one of his last great triumphs, Yerkes managed to thwart an attempt by J. P. Morgan to become involved with the London underground railway. Yerkes did not live to see his London tube lines in operation. The now Bakerloo and Piccadilly lines opened in 1906, a few months after his death, and the Charing Cross line (now part of the Northern line) the next summer.
Death and legacy
Yerkes died at the Waldorf-Astoria Hotel in New York on December 29, 1905, of kidney disease. The events of Yerkes's life served as a model for Theodore Dreiser's novels The Financier, The Titan, and The Stoic, in which Yerkes was fictionalized as Frank Cowperwood.
The crater Yerkes on the Moon is named in his honor.
Pictures of Yerkes and his second wife Mary were painted by his favorite artist Jan van Beers (National Portrait Gallery, Washington, D.C.). His wife, the daughter of Thomas Moore of Philadelphia, was also painted in 1892 by the Swiss-born American artist Adolfo Müller-Ury (1862–1947). In 1893 Müller-Ury painted from miniatures portraits of Yerkes's Quaker grandparents, Mr. and Mrs. Silas Yerkes. In 1906, his widow Mary Adelaide married playwright and raconteur Wilson Mizner; they were divorced the next year.
References
Sources
External links
Chicago "L".org
Detailed history of Charles Yerkes' involvement in the Chicago elevated railways and street car system
Photograph of Charles Yerkes
University of Chicago - Biography of Yerkes
Wall Street Journal Review
London Transport Museum Photographic Archive
1837 births
1905 deaths
American people of Dutch descent
Central High School (Philadelphia) alumni
Businesspeople from Philadelphia
Businesspeople from Chicago
People associated with astronomy
American railway entrepreneurs
People associated with transport in London
History of the London Underground
Deaths from kidney disease
American businesspeople convicted of crimes
19th-century American businesspeople | Charles Yerkes | [
"Astronomy"
] | 1,547 | [
"People associated with astronomy"
] |
43,818 | https://en.wikipedia.org/wiki/Java%20XML | In computing, Java XML APIs were developed by Sun Microsystems, consisting separate computer programming application programming interfaces (APIs).
Application programming interfaces
Java API for XML Processing (JAXP)
Java API for XML Messaging (JAXM)
Jakarta XML RPC (JAX-RPC) — formerly Java API for XML Based RPC, deprecated in favor of the Java API for XML Web Services
Jakarta XML Registries (JAXR) — formerly Java API for XML Registries
Jakarta XML Web Services (JAX-WS) — formerly Java API for XML Web Services
Jakarta RESTful Web Services (JAX-RS) — formerly Java API for RESTful Web Services
Java API for XQuery (XQJ)
Jakarta XML Binding (JAXB) — formerly Java Architecture for XML Binding (this was its official Sun name, even though it is an API)
StAX (Streaming XML processing) — compatible with JDK 1.4 and above, included in JDK 1.6
Only the Java API for XML Processing (JAXP) is a required API in Enterprise Java Beans Specification 1.3.
A number of different open-source software packages implement these APIs:
Apache Xerces — One of the original and most popular SAX and DOM parsers
Apache Xalan — XSLT/XPath implementation, included in JDK 1.4 and above as the default transformer (XSLT 1.0)
Saxon XSLT — alternative highly specification-compliant XSLT/XPath/XQuery processor (supports both XSLT 1.0 and 2.0)
Woodstox — An open-source StAX and SAX (as of version 3.2) implementation
References
External links
StelsXML JDBC driver - JDBC driver for XML files.
Woodstox - Woodstox home page.
How To Schema Check Xml Via JAXB - Rob Austin
Java EE and web framework tutorials - Learning xml in java.
XML | Java XML | [
"Technology"
] | 399 | [
"Computing platforms",
"Java platform"
] |
43,851 | https://en.wikipedia.org/wiki/Social%20norm | A social norm is a shared standard of acceptable behavior by a group. Social norms can both be informal understandings that govern the behavior of members of a society, as well as be codified into rules and laws. Social normative influences or social norms, are deemed to be powerful drivers of human behavioural changes and well organized and incorporated by major theories which explain human behaviour. Institutions are composed of multiple norms. Norms are shared social beliefs about behavior; thus, they are distinct from "ideas", "attitudes", and "values", which can be held privately, and which do not necessarily concern behavior. Norms are contingent on context, social group, and historical circumstances.
Scholars distinguish between regulative norms (which constrain behavior), constitutive norms (which shape interests), and prescriptive norms (which prescribe what actors ought to do). The effects of norms can be determined by a logic of appropriateness and logic of consequences; the former entails that actors follow norms because it is socially appropriate, and the latter entails that actors follow norms because of cost-benefit calculations.
Three stages have been identified in the life cycle of a norm: (1) Norm emergence – norm entrepreneurs seek to persuade others of the desirability and appropriateness of certain behaviors; (2) Norm cascade – when a norm obtains broad acceptance; and (3) Norm internalization – when a norm acquires a "taken-for-granted" quality. Norms are robust to various degrees: some norms are often violated whereas other norms are so deeply internalized that norm violations are infrequent. Evidence for the existence of norms can be detected in the patterns of behavior within groups, as well as the articulation of norms in group discourse.
In some societies, individuals often limit their potential due to social norms, while others engage in social movements to challenge and resist these constraints.
Definition
There are varied definitions of social norms, but there is agreement among scholars that norms are:
social and shared among members of a group,
related to behaviors and shape decision-making,
proscriptive or prescriptive
a socially acceptable way of living shared by a group of people in a society.
In 1965, Jack P. Gibbs identified three basic normative dimensions that all concepts of norms could be subsumed under:
"a collective evaluation of behavior in terms of what it ought to be"
"a collective expectation as to what behavior will be"
"particular reactions to behavior" (including attempts sanction or induce certain conduct)
According to Ronald Jepperson, Peter Katzenstein and Alexander Wendt, "norms are collective expectations about proper behavior for a given identity." Wayne Sandholtz argues against this definition, as he writes that shared expectations are an effect of norms, not an intrinsic quality of norms. Sandholtz, Martha Finnemore and Kathryn Sikkink define norms instead as "standards of appropriate behavior for actors with a given identity." In this definition, norms have an "oughtness" quality to them.
Michael Hechter and Karl-Dieter Opp define norms as "cultural phenomena that prescribe and proscribe behavior in specific circumstances." Sociologists Christine Horne and Stefanie Mollborn define norms as "group-level evaluations of behavior." This entails that norms are widespread expectations of social approval or disapproval of behavior. Scholars debate whether social norms are individual constructs or collective constructs.
Economist and game theorist Peyton Young defines norms as "patterns of behavior that are self-enforcing within a group." He emphasizes that norms are driven by shared expectations: "Everyone conforms, everyone is expected to conform, and everyone wants to conform when they expect everyone else to conform." He characterizes norms as devices that "coordinate people's expectations in interactions that possess multiple equilibria."
Concepts such as "conventions", "customs", "morals", "mores", "rules", and "laws" have been characterized as equivalent to norms. Institutions can be considered collections or clusters of multiple norms. Rules and norms are not necessarily distinct phenomena: both are standards of conduct that can have varying levels of specificity and formality. Laws are a highly formal version of norms. Laws, rules and norms may be at odds; for example, a law may prohibit something but norms still allow it. Norms are not the equivalent of an aggregation of individual attitudes. Ideas, attitudes and values are not necessarily norms, as these concepts do not necessarily concern behavior and may be held privately. "Prevalent behaviors" and behavioral regularities are not necessarily norms. Instinctual or biological reactions, personal tastes, and personal habits are not necessarily norms.
Emergence and transmission
Groups may adopt norms in a variety of ways.
Some stable and self-reinforcing norms may emerge spontaneously without conscious human design. Peyton Young goes as far as to say that "norms typically evolve without top-down direction... through interactions of individuals rather than by design." Norms may develop informally, emerging gradually as a result of repeated use of discretionary stimuli to control behavior. Not necessarily laws set in writing, informal norms represent generally accepted and widely sanctioned routines that people follow in everyday life. These informal norms, if broken, may not invite formal legal punishments or sanctions, but instead encourage reprimands, warnings, or othering; incest, for example, is generally thought of as wrong in society, but many jurisdictions do not legally prohibit it.
Norms may also be created and advanced through conscious human design by norm entrepreneurs. Norms can arise formally, where groups explicitly outline and implement behavioral expectations. Legal norms typically arise from design. We follow a large number of these norms 'naturally', such as driving on the right side of the road in the US and on the left side in the UK, or not speeding in order to avoid a ticket.
Martha Finnemore and Kathryn Sikkink identify three stages in the life cycle of a norm:
Norm emergence: Norm entrepreneurs seek to persuade others to adopt their ideas about what is desirable and appropriate.
Norm cascade: When a norm has broad acceptance and reaches a tipping point, with norm leaders pressuring others to adopt and adhere to the norm.
Norm internalization: When the norm has acquired a "taken-for-granted" quality where compliance with the norm is nearly automatic.
They argue that several factors may raise the influence of certain norms:
Legitimation: Actors that feel insecure about their status and reputation may be more likely to embrace norms.
Prominence: Norms that are held by actors seen as desirable and successful are more likely to diffuse to others.
Intrinsic qualities of the norm: Norms that are specific, long-lasting, and universal are more likely to become prominent.
Path dependency: Norms that are related to preexisting norms are more likely to be widely accepted.
World time-context: Systemic shocks (such as wars, revolutions and economic crises) may motivate a search for new norms.
Christine Horne and Stefanie Mollborn have identified two broad categories of arguments for the emergence of norms:
Consequentialism: norms are created when an individual's behavior has consequences and externalities for other members of the group.
Relationalism: norms are created because people want to attract positive social reactions.
Per consequentialism, norms contribute to the collective good. However, per relationalism, norms do not necessarily contribute to the collective good; norms may even be harmful to the collective.
Some scholars have characterized norms as essentially unstable, thus creating possibilities for norm change. According to Wayne Sandholtz, actors are more likely to persuade others to modify existing norms if they possess power, can reference existing foundational meta-norms, and can reference precedents. Social closeness between actors has been characterized as a key component in sustaining social norms.
Transfer of norms between groups
Individuals may also import norms from a previous organization to their new group, which can get adopted over time. Without a clear indication of how to act, people typically rely on their history to determine the best course forward; what was successful before may serve them well again. In a group, individuals may all import different histories or scripts about appropriate behaviors; common experience over time will lead the group to define as a whole its take on the right action, usually with the integration of several members' schemas. Under the importation paradigm, norm formation occurs subtly and swiftly, whereas formal or informal development of norms may take longer.
Groups internalize norms by accepting them as reasonable and proper standards for behavior within the group. Once firmly established, a norm becomes a part of the group's operational structure and hence more difficult to change. While possible for newcomers to a group to change its norms, it is much more likely that the new individual will adopt the group's norms, values, and perspectives, rather than the other way around.
Deviance from social norms
Deviance is defined as "nonconformity to a set of norms that are accepted by a significant number of people in a community or society." More simply put, if group members do not follow a norm, they become tagged as deviants. In the sociological literature, this can often lead to them being considered outcasts of society. Deviant behavior amongst children is somewhat expected; yet when deviance manifests as a criminal action, the social tolerance extended to the child in this example is quickly withdrawn from the criminal. Crime is considered one of the most extreme forms of deviancy according to scholar Clifford R. Shaw.
What is considered "normal" is relative to the location of the culture in which the social interaction is taking place. In psychology, an individual who routinely disobeys group norms runs the risk of turning into the "institutionalized deviant." Similar to the sociological definition, institutionalized deviants may be judged by other group members for their failure to adhere to norms. At first, group members may increase pressure on a non-conformist, attempting to engage the individual in conversation or explicate why he or she should follow their behavioral expectations. The role in which one decides on whether or not to behave is largely determined on how their actions will affect others. Especially with new members who perhaps do not know any better, groups may use discretionary stimuli to bring an individual's behavior back into line. Over time, however, if members continue to disobey, the group will give-up on them as a lost cause; while the group may not necessarily revoke their membership, they may give them only superficial consideration. If a worker is late to a meeting, for example, violating the office norm of punctuality, a supervisor or other co-worker may wait for the individual to arrive and pull him aside later to ask what happened. If the behavior continues, eventually the group may begin meetings without him since the individual "is always late." The group generalizes the individual's disobedience and promptly dismisses it, thereby reducing the member's influence and footing in future group disagreements.
Group tolerance for deviation varies across membership; not all group members receive the same treatment for norm violations. Individuals may build up a "reserve" of good behavior through conformity, which they can borrow against later. These idiosyncrasy credits provide a theoretical currency for understanding variations in group behavioral expectations. A teacher, for example, may more easily forgive a straight-A student, who has past "good credit" saved up, for misbehaving than a repeatedly disruptive student. While past performance can help build idiosyncrasy credits, some group members have a higher balance to start with. Individuals can import idiosyncrasy credits from another group; childhood movie stars, for example, who enroll in college may experience more leeway in adopting school norms than other incoming freshmen. Finally, leaders or individuals in other high-status positions may begin with more credits and appear to be "above the rules" at times. Even their idiosyncrasy credits are not bottomless, however; while held to a more lenient standard than the average member, leaders may still face group rejection if their disobedience becomes too extreme.
Deviance also gives rise to a range of emotions when one goes against a norm. One of the emotions most widely attributed to deviance is guilt. Guilt is connected to the ethics of duty, which in turn becomes a primary object of moral obligation. Guilt follows an action that is questioned after it is done. It can be described as something negative toward the self as well as a negative state of feeling. In both senses it is an unpleasant feeling and a form of self-punishment. Using the metaphor of "dirty hands", it is the staining or tainting of oneself and therefore the need to cleanse away the filth. It is a form of reparation that confronts oneself as well as submitting to the possibility of anger and punishment from others. Guilt, as both action and feeling, acts as a stimulus for further "honorable" actions.
A 2023 study found that non-industrial societies varied in their punishments of norm violations. Punishment varied based on the types of norm violations and the socio-economic system of the society. The study "found evidence that reputational punishment was associated with egalitarianism and the absence of food storage; material punishment was associated with the presence of food storage; physical punishment was moderately associated with greater dependence on hunting; and execution punishment was moderately associated with social stratification."
Behavior
Whereas ideas in general do not necessarily have behavioral implications, Martha Finnemore notes that "norms by definition concern behavior. One could say that they are collectively held ideas about behavior."
Norms running counter to the behaviors of the overarching society or culture may be transmitted and maintained within small subgroups of society. For example, Crandall (1988) noted that certain groups (e.g., cheerleading squads, dance troupes, sports teams, sororities) have a rate of bulimia, a publicly recognized life-threatening disease, that is much higher than society as a whole. Social norms have a way of maintaining order and organizing groups.
In the field of social psychology, the roles of norms are emphasized—which can guide behavior in a certain situation or environment as "mental representations of appropriate behavior". It has been shown that normative messages can promote pro-social behavior, including decreasing alcohol use, increasing voter turnout, and reducing energy use. According to the psychological definition of social norms' behavioral component, norms have two dimensions: how much a behavior is exhibited, and how much the group approves of that behavior.
Social control
Although not considered formal laws within society, norms still work to promote a great deal of social control. They are statements that regulate conduct. The cultural phenomenon that is the norm prescribes acceptable behavior in specific instances. Though they vary with culture, race, religion, and geographical location, norms are the foundation of widely shared expectations: not injuring others, the golden rule, and keeping promises that have been pledged. Without them, there would be a world without consensus, common ground, or restrictions. Even though the law and a state's legislation are not intended to control social norms, society and the law are inherently linked and one dictates the other. This is why it has been said that the language used in some legislation controls and dictates what should or should not be accepted. For example, the criminalization of familial sexual relations is said to protect those that are vulnerable; however, even consenting adults cannot have sexual relationships with their relatives. The language surrounding these laws conveys the message that such acts are supposedly immoral and should be condemned, even though there is no actual victim in these consenting relationships.
Social norms can be enforced formally (e.g., through sanctions) or informally (e.g., through body language and non-verbal communication cues). Because individuals often derive physical or psychological resources from group membership, groups are said to control discretionary stimuli; groups can withhold or give out more resources in response to members' adherence to group norms, effectively controlling member behavior through rewards and operant conditioning. Social psychology research has found the more an individual values group-controlled resources or the more an individual sees group membership as central to his definition of self, the more likely he is to conform. Social norms also allow an individual to assess what behaviors the group deems important to its existence or survival, since they represent a codification of belief; groups generally do not punish members or create norms over actions which they care little about. Norms in every culture create conformity that allows for people to become socialized to the culture in which they live.
As social beings, individuals learn when and where it is appropriate to say certain things, to use certain words, to discuss certain topics or wear certain clothes, and when it is not. Thus, knowledge about cultural norms is important for impression management, which is an individual's regulation of their nonverbal behavior. One also comes to know through experience what types of people he/she can and cannot discuss certain topics with or wear certain types of dress around. Typically, this knowledge is derived through experience (i.e. social norms are learned through social interaction). Wearing a suit to a job interview in order to give a great first impression represents a common example of a social norm in the white-collar workforce.
In his work "Order without Law: How Neighbors Settle Disputes", Robert Ellickson studies various interactions between members of neighbourhoods and communities to show how societal norms create order within a small group of people. He argues that, in a small community or neighborhood, many rules and disputes can be settled without a central governing body simply by the interactions within these communities.
Sociology
In sociology, norms are seen as rules that bind an individual's actions to a specific sanction in one of two forms: a punishment or a reward. Through regulation of behavior, social norms create unique patterns that allow for distinguishing characteristics to be made between social systems. This creates a boundary that allows for a differentiation between those that belong in a specific social setting and those that do not.
For Talcott Parsons of the functionalist school, norms dictate the interactions of people in all social encounters. On the other hand, Karl Marx believed that norms are used to promote the creation of roles in society, which allows people at different levels of the social class structure to function properly. Marx claims that this power dynamic creates social order. The sociologist James Coleman used both micro and macro conditions for his theory. For Coleman, norms start out as goal-oriented actions by actors on the micro level. If the benefits do not outweigh the costs of the action for the actors, then a social norm would emerge. The norm's effectiveness is then determined by its ability to enforce its sanctions against those who would not contribute to the "optimal social order."
Heinrich Popitz is convinced that the establishment of social norms, which make the future actions of alter foreseeable for ego, solves the problem of contingency (Niklas Luhmann). In this way, ego can count on those actions as if they had already been performed and does not have to wait for their actual execution; social interaction is thus accelerated. Important factors in the standardization of behavior are sanctions and social roles.
Operant conditioning
The probability of these behaviours occurring again is discussed in the theories of B. F. Skinner, who states that operant conditioning plays a role in the process of social norm development. Operant conditioning is the process by which behaviours are changed as a function of their consequences. The probability that a behaviour will occur can be increased or decreased depending on the consequences of said behaviour.
In the case of social deviance, an individual who has gone against a norm will contact the negative contingencies associated with deviance; this may take the form of formal or informal rebuke, social isolation or censure, or more concrete punishments such as fines or imprisonment. If one reduces the deviant behavior after receiving a negative consequence, then they have learned via punishment. If they have engaged in a behavior consistent with a social norm after having an aversive stimulus reduced, then they have learned via negative reinforcement. Reinforcement increases behavior, while punishment decreases behavior.
As an example of this, consider a child who has painted on the walls of her house. If she has never done this before, she may immediately seek a reaction from her mother or father. The form of reaction taken by the mother or father will affect whether the behaviour is likely to occur again in the future. If her parent is positive and approving of the behaviour, it will likely reoccur (reinforcement); however, if the parent offers an aversive consequence (physical punishment, time-out, anger, etc.) then the child is less likely to repeat the behaviour in the future (punishment).
Skinner also states that humans are conditioned from a very young age on how to behave and how to act with those around us, given the outside influences of the society and location one is in. Because we are conditioned to blend into the ambiance and attitude around us, deviance is treated as a frowned-upon action.
Focus theory of normative conduct
Cialdini, Reno, and Kallgren developed the focus theory of normative conduct to describe how individuals implicitly juggle multiple behavioral expectations at once. Expanding on conflicting prior beliefs about whether cultural, situational or personal norms motivate action, the researchers suggested the focus of an individual's attention will dictate what behavioral expectation they follow.
Types
There is no clear consensus on how the term norm should be used.
Martha Finnemore and Kathryn Sikkink distinguish between three types of norms:
Regulative norms: they "order and constrain behavior"
Constitutive norms: they "create new actors, interests, or categories of action"
Evaluative and prescriptive norms: they have an "oughtness" quality to them
Finnemore, Sikkink, Jeffrey W. Legro and others have argued that the robustness (or effectiveness) of norms can be measured by factors such as:
The specificity of the norm: norms that are clear and specific are more likely to be effective
The longevity of the norm: norms with a history are more likely to be effective
The universality of the norm: norms that make general claims (rather than localized and particularistic claims) are more likely to be effective
The prominence of the norm: norms that are widely accepted among powerful actors are more likely to be effective
Christine Horne argues that the robustness of a norm is shaped by the degree of support for the actors who sanction deviant behaviors; she refers to norms regulating how to enforce norms as "metanorms." According to Beth G. Simmons and Hyeran Jo, diversity of support for a norm can be a strong indicator of robustness. They add that institutionalization of a norm raises its robustness. It has also been posited that norms that exist within broader clusters of distinct but mutually reinforcing norms may be more robust.
Jeffrey Checkel argues that there are two common types of explanations for the efficacy of norms:
Rationalism: actors comply with norms due to coercion, cost-benefit calculations, and material incentives
Constructivism: actors comply with norms due to social learning and socialization
According to Peyton Young, mechanisms that support normative behavior include:
Coordination
Social pressure
Signaling
Focal points
Descriptive versus injunctive
Descriptive norms depict what happens, while injunctive norms describe what should happen. Cialdini, Reno, and Kallgren (1990) define a descriptive norm as people's perceptions of what is commonly done in specific situations; it signifies what most people do, without assigning judgment. The absence of trash on the ground in a parking lot, for example, transmits the descriptive norm that most people there do not litter. An injunctive norm, on the other hand, transmits group approval about a particular behavior; it dictates how an individual should behave. Watching another person pick up trash off the ground and throw it out, a group member may pick up on the injunctive norm that he ought not to litter.
Prescriptive and proscriptive norms
Prescriptive norms are unwritten rules that are understood and followed by society and indicate what we should do. Expressing gratitude or writing a Thank You card when someone gives you a gift represents a prescriptive norm in American culture. Proscriptive norms, in contrast, comprise the other end of the same spectrum; they are similarly society's unwritten rules about what one should not do. These norms can vary between cultures; while kissing someone you just met on the cheek is an acceptable greeting in some European countries, this is not acceptable, and thus represents a proscriptive norm in the United States.
Subjective norms
Subjective norms are determined by beliefs about the extent to which important others want a person to perform a behavior. When combined with attitude toward behavior, subjective norms shape an individual's intentions. Social influences are conceptualized in terms of the pressure that people perceive from important others to perform, or not to perform, a behavior. Social psychologist Icek Ajzen theorized that subjective norms are determined by the strength of a given normative belief, weighted by the significance of a social referent, as represented in the following equation: SN ∝ Σ nᵢmᵢ, where nᵢ is a normative belief and mᵢ is the motivation to comply with that belief.
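Written out as an explicit sum, the expectancy-value form of this relationship reads as follows; the index i over salient referents and the symbol names are a conventional reading rather than a quotation of Ajzen's own notation:

```latex
% Subjective norm (SN) as a sum over salient social referents i
SN \propto \sum_{i} n_i \, m_i
% n_i : strength of the normative belief that referent i thinks the behavior
%       should (or should not) be performed
% m_i : motivation to comply with referent i
```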
Mathematical representations
Over the last few decades, several theorists have attempted to explain social norms from a more theoretical point of view. By quantifying behavioral expectations graphically or attempting to plot the logic behind adherence, theorists hoped to be able to predict whether or not individuals would conform. The return potential model and game theory provide a slightly more economic conceptualization of norms, suggesting individuals can calculate the cost or benefit behind possible behavioral outcomes. Under these theoretical frameworks, choosing to obey or violate norms becomes a more deliberate, quantifiable decision.
Return potential model
Developed in the 1960s, the return potential model provides a method for plotting and visualizing group norms. In the regular coordinate plane, the amount of behavior exhibited is plotted on the X-axis (label a in Figure 1) while the amount of group acceptance or approval gets plotted on the Y-axis (b in Figure 1). The graph represents the potential return or positive outcome to an individual for a given behavioral norm. Theoretically, one could plot, for each increment of behavior, how much the group likes or dislikes that action. For example, it may be the case that among first-year graduate students, strong social norms exist around how many daily cups of coffee a student drinks. If the return curve in Figure 1 correctly displays the example social norm, we can see that if someone drinks 0 cups of coffee a day, the group strongly disapproves. The group disapproves of the behavior of any member who drinks fewer than four cups of coffee a day, and likewise disapproves of drinking more than seven cups, shown by the approval curve dipping back below zero. As seen in this example, the return potential model displays how much group approval one can expect for each increment of behavior.
Point of maximum return. The point with the greatest y-coordinate is called the point of maximum return, as it represents the amount of behavior the group likes the best. While c in Figure 1 labels the return curve in general, the highlighted point just above it, at X = 6, represents the point of maximum return. Extending our above example, the point of maximum return for first-year graduate students would be 6 cups of coffee; they receive the most social approval for drinking exactly that many cups. Any more or any fewer cups would decrease the approval.
Range of tolerable behavior. Label d represents the range of tolerable behavior, or the amount of action the group finds acceptable. It encompasses all the positive area under the curve. In Figure 1, the range of tolerable behavior is 3 units wide, as the group approves of all behavior from 4 to 7 cups and 7 − 4 = 3. Carrying over our coffee example again, we can see that first-years only approve of having a limited number of cups of coffee (between 4 and 7); more than 7 cups or fewer than 4 would fall outside the range of tolerable behavior. Norms can have a narrower or wider range of tolerable behavior. Typically, a narrower range of behavior indicates a behavior with greater consequences to the group.
Intensity. The intensity of the norm tells how much the group cares about the norm, or how much group affect is at stake to be won or lost. It is represented in the return potential model by the total amount of area subsumed by the curve, regardless of whether the area is positive or negative. A norm with low intensity would not vary far from the x-axis; the amount of approval or disapproval for given behaviors would be closer to zero. A high-intensity norm, however, would have more extreme approval ratings. In Figure 1, the intensity of the norm appears high, as few behaviors invoke a rating of indifference.
Crystallization. Finally, norm crystallization refers to how much variance exists within the curve; translated from the theoretical back to the actual norm, it shows how much agreement exists between group members about the approval for a given amount of behavior. It may be that some members believe the norm more central to group functioning than others. A group norm like how many cups of coffee first years should drink would probably have low crystallization since a lot of individuals have varying beliefs about the appropriate amount of caffeine to imbibe; in contrast, the norm of not plagiarizing another student's work would likely have high crystallization, as people uniformly agree on the behavior's unacceptability. Showing the overall group norm, the return potential model in Figure 1 does not indicate the crystallization. However, a return potential model that plotted individual data points alongside the cumulative norm could demonstrate the variance and allow us to deduce crystallization.
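As a rough illustration of how these quantities could be computed, the sketch below works through a hypothetical approval curve resembling the coffee example; the numerical approval values are invented for illustration and are not taken from the original model.

```python
# Minimal sketch of return potential model quantities for a hypothetical norm.
# Keys are cups of coffee per day; values are group approval (positive) or
# disapproval (negative). The numbers are illustrative assumptions only.
approval = {0: -3.0, 1: -2.5, 2: -2.0, 3: -1.0, 4: 0.5, 5: 1.5,
            6: 2.0, 7: 0.5, 8: -1.0, 9: -2.0, 10: -3.0}

# Point of maximum return: the behavior the group likes best.
point_of_max_return = max(approval, key=approval.get)

# Range of tolerable behavior: all behaviors with positive approval.
tolerable = [x for x, a in approval.items() if a > 0]
range_width = max(tolerable) - min(tolerable)

# Intensity: total area subsumed by the curve, regardless of sign.
intensity = sum(abs(a) for a in approval.values())

print(f"Point of maximum return: {point_of_max_return} cups")
print(f"Range of tolerable behavior: {min(tolerable)}-{max(tolerable)} cups "
      f"(width {range_width})")
print(f"Intensity: {intensity}")
```

Crystallization would additionally require approval ratings from individual members, so that the variance across members at each increment of behavior could be measured.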
Game theory
Another general formal framework that can be used to represent the essential elements of the social situation surrounding a norm is the repeated game of game theory. Rational choice theory, a branch of game theory, deals with the relations and actions of rational agents in social settings. A norm gives a person a rule of thumb for how they should behave. However, a rational person acts according to the rule only if it is beneficial for them. The situation can be described as follows. A norm gives an expectation of how other people act in a given situation (macro). A person acts optimally given the expectation (micro). For a norm to be stable, people's actions must reconstitute the expectation without change (micro-macro feedback loop). A set of such correct stable expectations is known as a Nash equilibrium. Thus, a stable norm must constitute a Nash equilibrium. In a Nash equilibrium, no actor has any positive incentive to deviate individually from a certain action. In most game-theoretic approaches, a social norm is implemented when the actions it prescribes are supported as a Nash equilibrium.
From a game-theoretical point of view, there are two explanations for the vast variety of norms that exist throughout the world. One is the difference in games. Different parts of the world may give different environmental contexts and different people may have different values, which may result in a difference in games. The other is equilibrium selection not explicable by the game itself. Equilibrium selection is closely related to coordination. For a simple example, driving is common throughout the world, but in some countries people drive on the right and in other countries people drive on the left (see coordination game). A framework called comparative institutional analysis is proposed to deal with the game theoretical structural understanding of the variety of social norms.
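As a minimal sketch of the driving convention as a coordination game, the payoff values below are illustrative assumptions; the point is only that both "everyone drives left" and "everyone drives right" are Nash equilibria, so the game itself does not determine which norm a society adopts.

```python
# Minimal sketch: two drivers choose a side of the road. Coordinating on the
# same side gives each a payoff of 1; mismatching (a crash) gives 0.
strategies = ["Left", "Right"]
payoff = {
    ("Left", "Left"): (1, 1), ("Left", "Right"): (0, 0),
    ("Right", "Left"): (0, 0), ("Right", "Right"): (1, 1),
}

def is_nash(a, b):
    """True if neither driver gains by unilaterally switching sides."""
    ua, ub = payoff[(a, b)]
    a_best = all(payoff[(alt, b)][0] <= ua for alt in strategies)
    b_best = all(payoff[(a, alt)][1] <= ub for alt in strategies)
    return a_best and b_best

equilibria = [(a, b) for a in strategies for b in strategies if is_nash(a, b)]
print(equilibria)  # [('Left', 'Left'), ('Right', 'Right')]
```

Which of the two equilibria becomes the actual norm is a matter of equilibrium selection, consistent with the comparative institutional analysis framework mentioned above.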
See also
References
Further reading
Appelbaum, R. P., Carr, D., Duneir, M., Giddens, A. (2009). Conformity, Deviance, and Crime. Introduction to Sociology, New York, NY: W. W. Norton & Company, Inc., p. 173.
Bicchieri, C. (2006). The Grammar of Society: The Nature and Dynamics of Social Norms, New York: Cambridge University Press.
Boyd, R. & Richerson, P.J. (1985). Culture and the Evolutionary Process, Chicago: University of Chicago Press.
Durkheim, E. (1915). The Elementary Forms of the Religious Life, New York: Free Press.
Fine, G.A. (2001). Social Norms, ed. by Michael Hechter and Karl-Dieter Opp, New York, NY: Russell Sage Foundation.
Hechter, M. & Karl-Dieter Opp, eds. (2001). Social Norms, New York: Russell Sage Foundation.
Heiss, J. (1981). "Social Roles", In Social Psychology: Sociological Perspectives, Rosenburg, M. & Turner, R.H. (eds.), New York: Basic Books.
Hochschild, A. (1989). "The Economy of Gratitude", In D.D. Franks & E.D. McCarthy (Eds.), The Sociology of Emotions: Original Essays and Research Papers, Greenwich, CT: JAI Press.
Horne, C. (2001). "Social Norms". In M. Hechter & K. Opp (Eds.), New York, NY: Russell Sage Foundation.
Kohn, M.L. (1977). Class and Conformity: A Study in Values, 2nd ed., Chicago, IL: University of Chicago Press.
Posner, E. (2000). Law and Social Norms. Cambridge MA: Harvard University Press
Scott, J.F. (1971). Internalization of Norms: A Sociological Theory of Moral Commitment, Englewoods Cliffs, N.J.: Prentice–Hall.
Ullmann-Margalit, E. (1977). The Emergence of Norms. Oxford: Oxford University Press.
Young, H.P. (2008). "Social norms". The New Palgrave Dictionary of Economics, 2nd Edition.
External links
Conformity
Consensus reality
Social concepts
Sociological terminology
Social agreement
Social psychology
Folklore | Social norm | [
"Biology"
] | 7,053 | [
"Behavior",
"Conformity",
"Human behavior"
] |
43,854 | https://en.wikipedia.org/wiki/Reality | Reality is the sum or aggregate of all that is real or existent within the universe, as opposed to that which is only imaginary, nonexistent or nonactual. The term is also used to refer to the ontological status of things, indicating their existence. In physical terms, reality is the totality of a system, known and unknown.
Philosophical questions about the nature of reality or existence or being are considered under the rubric of ontology, which is a major branch of metaphysics in the Western philosophical tradition. Ontological questions also feature in diverse branches of philosophy, including the philosophy of science, of religion, of mathematics, and philosophical logic. These include questions about whether only physical objects are real (i.e., physicalism), whether reality is fundamentally immaterial (e.g. idealism), whether hypothetical unobservable entities posited by scientific theories exist, whether a god or gods exist, whether numbers and other abstract objects exist, and whether possible worlds exist. Epistemology is concerned with what can be known or inferred as likely and how, whereby in the modern world emphasis is put on reason, empirical evidence and science as sources and methods to determine or investigate reality.
World views
World views and theories
A common colloquial usage would have reality mean "perceptions, beliefs, and attitudes toward reality", as in "My reality is not your reality." This is often used just as a colloquialism indicating that the parties to a conversation agree, or should agree, not to quibble over deeply different conceptions of what is real. For example, in a religious discussion between friends, one might say (attempting humor), "You might disagree, but in my reality, everyone goes to heaven."
Reality can be defined in a way that links it to worldviews or parts of them (conceptual frameworks): Reality is the totality of all things, structures (actual and conceptual), events (past and present) and phenomena, whether observable or not. It is what a world view (whether it be based on individual or shared human experience) ultimately attempts to describe or map.
Certain ideas from physics, philosophy, sociology, literary criticism, and other fields shape various theories of reality. One such theory is that there simply and literally is no reality beyond the perceptions or beliefs we each have about reality. Such attitudes are summarized in popular statements, such as "Perception is reality" or "Life is how you perceive reality" or "reality is what you can get away with" (Robert Anton Wilson), and they indicate anti-realism – that is, the view that there is no objective reality, whether acknowledged explicitly or not.
Many of the concepts of science and philosophy are often defined culturally and socially. This idea was elaborated by Thomas Kuhn in his book The Structure of Scientific Revolutions (1962). The Social Construction of Reality, a book about the sociology of knowledge written by Peter L. Berger and Thomas Luckmann, was published in 1966. It explained how knowledge is acquired and used for the comprehension of reality. Out of all the realities, the reality of everyday life is the most important one since our consciousness requires us to be completely aware and attentive to the experience of everyday life.
Related concepts
A priori and a posteriori
Potentiality and actuality
Belief
Belief studies
Western philosophy
Philosophy addresses two different aspects of the topic of reality: the nature of reality itself, and the relationship between the mind (as well as language and culture) and reality.
On the one hand, ontology is the study of being, and the central topic of the field is couched, variously, in terms of being, existence, "what is", and reality. The task in ontology is to describe the most general categories of reality and how they are interrelated. If a philosopher wanted to proffer a positive definition of the concept "reality", it would be done under this heading. As explained above, some philosophers draw a distinction between reality and existence. In fact, many analytic philosophers today tend to avoid the term "real" and "reality" in discussing ontological issues. But for those who would treat "is real" the same way they treat "exists", one of the leading questions of analytic philosophy has been whether existence (or reality) is a property of objects. It has been widely held by analytic philosophers that it is not a property at all, though this view has lost some ground in recent decades.
On the other hand, particularly in discussions of objectivity that have feet in both metaphysics and epistemology, philosophical discussions of "reality" often concern the ways in which reality is, or is not, in some way dependent upon (or, to use fashionable jargon, "constructed" out of) mental and cultural factors such as perceptions, beliefs, and other mental states, as well as cultural artifacts, such as religions and political movements, on up to the vague notion of a common cultural world view, or Weltanschauung.
Realism
The view that there is a reality independent of any beliefs, perceptions, etc., is called realism. More specifically, philosophers are given to speaking about "realism about" this and that, such as realism about universals or realism about the external world. Generally, where one can identify any class of object, the existence or essential characteristics of which is said not to depend on perceptions, beliefs, language, or any other human artifact, one can speak of "realism about" that object.
A correspondence theory of knowledge about what exists claims that "true" knowledge of reality represents accurate correspondence of statements about and images of reality with the actual reality that the statements or images are attempting to represent. For example, the scientific method can verify that a statement is true based on the observable evidence that a thing exists. Many humans can point to the Rocky Mountains and say that this mountain range exists, and continues to exist even if no one is observing it or making statements about it.
Anti-realism
One can also speak of anti-realism about the same objects. Anti-realism is the latest in a long series of terms for views opposed to realism. Perhaps the first was idealism, so called because reality was said to be in the mind, or a product of our ideas. Berkeleyan idealism is the view, propounded by the Irish empiricist George Berkeley, that the objects of perception are actually ideas in the mind. In this view, one might be tempted to say that reality is a "mental construct"; this is not quite accurate, however, since, in Berkeley's view, perceptual ideas are created and coordinated by God. By the 20th century, views similar to Berkeley's were called phenomenalism. Phenomenalism differs from Berkeleyan idealism primarily in that Berkeley believed that minds, or souls, are not merely ideas nor made up of ideas, whereas varieties of phenomenalism, such as that advocated by Russell, tended to go farther to say that the mind itself is merely a collection of perceptions, memories, etc., and that there is no mind or soul over and above such mental events. Finally, anti-realism became a fashionable term for any view which held that the existence of some object depends upon the mind or cultural artifacts. The view that the so-called external world is really merely a social, or cultural, artifact, called social constructionism, is one variety of anti-realism. Cultural relativism is the view that social issues such as morality are not absolute, but are at least partially cultural artifacts.
Being
The nature of being is a perennial topic in metaphysics. For instance, Parmenides taught that reality was a single unchanging Being, whereas Heraclitus wrote that all things flow. The 20th-century philosopher Heidegger thought previous philosophers have lost sight of the question of Being (qua Being) in favour of the questions of beings (existing things), so he believed that a return to the Parmenidean approach was needed. An ontological catalogue is an attempt to list the fundamental constituents of reality. The question of whether or not existence is a predicate has been discussed since the Early Modern period, not least in relation to the ontological argument for the existence of God. Existence, that something is, has been contrasted with essence, the question of what something is.
Since existence without essence seems blank, it has been associated with nothingness by philosophers such as Hegel. Nihilism represents an extremely negative view of being; the absolute, a positive one.
Explanations for the existence of something rather than nothing
Perception
The question of direct or "naïve" realism, as opposed to indirect or "representational" realism, arises in the philosophy of perception and of mind out of the debate over the nature of conscious experience; the epistemological question of whether the world we see around us is the real world itself or merely an internal perceptual copy of that world generated by neural processes in our brain. Naïve realism is known as direct realism when developed to counter indirect or representative realism, also known as epistemological dualism, the philosophical position that our conscious experience is not of the real world itself but of an internal representation, a miniature virtual-reality replica of the world.
Timothy Leary coined the influential term Reality Tunnel, by which he means a kind of representative realism. The theory states that, with a subconscious set of mental filters formed from their beliefs and experiences, every individual interprets the same world differently, hence "Truth is in the eye of the beholder". His ideas influenced the work of his friend Robert Anton Wilson.
Abstract objects and mathematics
The status of abstract entities, particularly numbers, is a topic of discussion in mathematics.
In the philosophy of mathematics, the best known form of realism about numbers is Platonic realism, which grants them abstract, immaterial existence. Other forms of realism identify mathematics with the concrete physical universe.
Anti-realist stances include formalism and fictionalism.
Some approaches are selectively realistic about some mathematical objects but not others. Finitism rejects infinite quantities. Ultra-finitism accepts finite quantities up to a certain amount. Constructivism and intuitionism are realistic about objects that can be explicitly constructed, but reject the use of the principle of the excluded middle to prove existence by reductio ad absurdum.
The traditional debate has focused on whether an abstract (immaterial, intelligible) realm of numbers has existed in addition to the physical (sensible, concrete) world. A recent development is the mathematical universe hypothesis, the theory that only a mathematical world exists, with the finite, physical world being an illusion within it.
An extreme form of realism about mathematics is the mathematical multiverse hypothesis advanced by Max Tegmark. Tegmark's sole postulate is: All structures that exist mathematically also exist physically. That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world". The hypothesis suggests that worlds corresponding to different sets of initial conditions, physical constants, or altogether different equations should be considered real. The theory can be considered a form of Platonism in that it posits the existence of mathematical entities, but can also be considered a mathematical monism in that it denies that anything exists except mathematical objects.
Properties
The problem of universals is an ancient problem in metaphysics about whether universals exist. Universals are general or abstract qualities, characteristics, properties, kinds or relations, such as being male/female, solid/liquid/gas or a certain colour, that can be predicated of individuals or particulars or that individuals or particulars can be regarded as sharing or participating in. For example, Scott, Pat, and Chris have in common the universal quality of being human or humanity.
The realist school claims that universals are real – they exist and are distinct from the particulars that instantiate them. There are various forms of realism. Two major forms are Platonic realism and Aristotelian realism. Platonic realism is the view that universals are real entities and they exist independent of particulars. Aristotelian realism, on the other hand, is the view that universals are real entities, but their existence is dependent on the particulars that exemplify them.
Nominalism and conceptualism are the main forms of anti-realism about universals.
Time and space
A traditional realist position in ontology is that time and space have existence apart from the human mind. Idealists deny or doubt the existence of objects independent of the mind. Some anti-realists whose ontological position is that objects outside the mind do exist, nevertheless doubt the independent existence of time and space.
Kant, in the Critique of Pure Reason, described time as an a priori notion that, together with other a priori notions such as space, allows us to comprehend sense experience. Kant denies that either space or time are substance, entities in themselves, or learned by experience; he holds rather that both are elements of a systematic framework we use to structure our experience. Spatial measurements are used to quantify how far apart objects are, and temporal measurements are used to quantitatively compare the interval between (or duration of) events. Although space and time are held to be transcendentally ideal in this sense, they are also empirically real, i.e. not mere illusions.
Idealist writers such as J. M. E. McTaggart in The Unreality of Time have argued that time is an illusion.
As well as differing about the reality of time as a whole, metaphysical theories of time can differ in their ascriptions of reality to the past, present and future separately.
Presentism holds that the past and future are unreal, and only an ever-changing present is real.
The block universe theory, also known as Eternalism, holds that past, present and future are all real, but the passage of time is an illusion. It is often said to have a scientific basis in relativity.
The growing block universe theory holds that past and present are real, but the future is not.
Time, and the related concepts of process and evolution are central to the system-building metaphysics of A. N. Whitehead and Charles Hartshorne.
Possible worlds
The term "possible world" goes back to Leibniz's theory of possible worlds, used to analyse necessity, possibility, and similar modal notions. Modal realism is the view, notably propounded by David Kellogg Lewis, that all possible worlds are as real as the actual world. In short: the actual world is regarded as merely one among an infinite set of logically possible worlds, some "nearer" to the actual world and some more remote. Other theorists may use the Possible World framework to express and explore problems without committing to it ontologically. Possible world theory is related to alethic logic: a proposition is necessary if it is true in all possible worlds, and possible if it is true in at least one. The many worlds interpretation of quantum mechanics is a similar idea in science.
Theories of everything (TOE) and philosophy
The philosophical implications of a physical TOE are frequently debated. For example, if philosophical physicalism is true, a physical TOE will coincide with a philosophical theory of everything.
The "system building" style of metaphysics attempts to answer all the important questions in a coherent way, providing a complete picture of the world. Plato and Aristotle could be said to be early examples of comprehensive systems. In the early modern period (17th and 18th centuries), the system-building scope of philosophy is often linked to the rationalist method of philosophy, that is the technique of deducing the nature of the world by pure a priori reason. Examples from the early modern period include the Leibniz's Monadology, Descartes's Dualism, Spinoza's Monism. Hegel's Absolute idealism and Whitehead's Process philosophy were later systems.
Other philosophers do not believe its techniques can aim so high. Some scientists think a more mathematical approach than philosophy is needed for a TOE; for instance, Stephen Hawking wrote in A Brief History of Time that even if we had a TOE, it would necessarily be a set of equations. He wrote, "What is it that breathes fire into the equations and makes a universe for them to describe?"
Phenomenology
On a much broader and more subjective level, private experiences, curiosity, inquiry, and the selectivity involved in personal interpretation of events shape reality as seen by one and only one person, and hence that reality is called phenomenological. While this form of reality might be common to others as well, it could at times also be so unique to oneself as to never be experienced or agreed upon by anyone else. Much of the kind of experience deemed spiritual occurs on this level of reality.
Phenomenology is a philosophical method developed in the early years of the twentieth century by Edmund Husserl (1859–1938) and a circle of followers at the universities of Göttingen and Munich in Germany. Subsequently, phenomenological themes were taken up by philosophers in France, the United States, and elsewhere, often in contexts far removed from Husserl's work.
The word phenomenology comes from the Greek phainómenon, meaning "that which appears", and lógos, meaning "study". In Husserl's conception, phenomenology is primarily concerned with making the structures of consciousness, and the phenomena which appear in acts of consciousness, objects of systematic reflection and analysis. Such reflection was to take place from a highly modified "first person" viewpoint, studying phenomena not as they appear to "my" consciousness, but to any consciousness whatsoever. Husserl believed that phenomenology could thus provide a firm basis for all human knowledge, including scientific knowledge, and could establish philosophy as a "rigorous science".
Husserl's conception of phenomenology has been criticised and developed by his student and assistant Martin Heidegger (1889–1976), by existentialists like Maurice Merleau-Ponty (1908–1961) and Jean-Paul Sartre (1905–1980), and by other philosophers, such as Paul Ricoeur (1913–2005), Emmanuel Levinas (1906–1995), and Dietrich von Hildebrand (1889–1977).
Skeptical hypotheses
Skeptical hypotheses in philosophy suggest that reality could be very different from what we think it is; or at least that we cannot prove it is not. Examples include:
The "Brain in a vat" hypothesis is cast in scientific terms. It supposes that one might be a disembodied brain kept alive in a vat, and fed false sensory signals. This hypothesis is related to the Matrix hypothesis below.
The "Dream argument" of Descartes and Zhuangzi supposes reality to be indistinguishable from a dream.
Descartes' Evil demon is a being "as clever and deceitful as he is powerful, who has directed his entire effort to misleading me."
The five minute hypothesis (or omphalos hypothesis or Last Thursdayism) suggests that the world was created recently together with records and traces indicating a greater age.
Diminished reality refers to artificially diminished reality, not due to limitations of sensory systems but via artificial filters.
The Matrix hypothesis or Simulated reality hypothesis suggest that we might be inside a computer simulation or virtual reality. Related hypotheses may also involve simulations with signals that allow the inhabitant species in virtual or simulated reality to perceive the external reality.
Non-western ancient philosophy and religion
Jain philosophy
Jain philosophy postulates that seven tattva (truths or fundamental principles) constitute reality. These seven tattva are:
Jīva – The soul which is characterized by consciousness.
Ajīva – The non-soul.
Asrava – Influx of karma.
Bandha – The bondage of karma.
Samvara – Obstruction of the inflow of karmic matter into the soul.
Nirjara – Shedding of karmas.
Moksha – Liberation or Salvation, i.e. the complete annihilation of all karmic matter (bound with any particular soul).
Physical sciences
Scientific realism
Scientific realism is, at the most general level, the view that the world (the universe) described by science (perhaps ideal science) is the real world, as it is, independent of what we might take it to be. Within philosophy of science, it is often framed as an answer to the question "how is the success of science to be explained?" The debate over what the success of science involves centers primarily on the status of entities that are not directly observable but are discussed by scientific theories. Generally, those who are scientific realists state that one can make reliable claims about these entities (viz., that they have the same ontological status) as directly observable entities, as opposed to instrumentalism. On this view, the most used and studied scientific theories today are more or less true descriptions of reality.
Realism and locality in physics
Realism in the sense used by physicists does not equate to realism in metaphysics. The latter is the claim that the world is mind-independent: that even if the results of a measurement do not pre-exist the act of measurement, that does not require that they are the creation of the observer. Furthermore, a mind-independent property does not have to be the value of some physical variable such as position or momentum. A property can be dispositional (or potential), i.e. it can be a tendency: in the way that glass objects tend to break, or are disposed to break, even if they do not actually break. Likewise, the mind-independent properties of quantum systems could consist of a tendency to respond to particular measurements with particular values with ascertainable probability. Such an ontology would be metaphysically realistic, without being realistic in the physicist's sense of "local realism" (which would require that a single value be produced with certainty).
A closely related term is counterfactual definiteness (CFD), used to refer to the claim that one can meaningfully speak of the definiteness of results of measurements that have not been performed (i.e. the ability to assume the existence of objects, and properties of objects, even when they have not been measured).
Local realism is a significant feature of classical mechanics, of general relativity, and of classical electrodynamics; but not of quantum mechanics. In a work now called the EPR paradox, Einstein relied on local realism to suggest that hidden variables were missing from quantum mechanics. However, John S. Bell subsequently showed that the predictions of quantum mechanics are inconsistent with local hidden variable theories, a result known as Bell's theorem. The predictions of quantum mechanics have been verified: Bell's inequalities are violated, meaning that either local realism or counterfactual definiteness must be incorrect. Different interpretations of quantum mechanics violate different parts of local realism and/or counterfactual definiteness.
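To make the conflict concrete, the CHSH form of Bell's inequality is often cited (a standard textbook statement, included here as an illustration rather than quoted from this article): local hidden-variable theories bound a particular combination of measured correlations, while quantum mechanics predicts, and experiments confirm, larger values.

```latex
% CHSH combination of correlations E at detector settings a, a' and b, b'
S = E(a, b) - E(a, b') + E(a', b) + E(a', b')
% Local realism (local hidden variables) requires
|S| \le 2
% whereas quantum mechanics allows values up to Tsirelson's bound
|S| \le 2\sqrt{2}
```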
The transition from "possible" to "actual" is a major topic of quantum physics, with related theories including quantum darwinism.
Role of "observation" in quantum mechanics
The quantum mind–body problem refers to the philosophical discussions of the mind–body problem in the context of quantum mechanics. Since quantum mechanics involves quantum superpositions, which are not perceived by observers, some interpretations of quantum mechanics place conscious observers in a special position.
The founders of quantum mechanics debated the role of the observer, and of them, Wolfgang Pauli and Werner Heisenberg believed that quantum mechanics expressed the observer's knowledge, and that when an experiment was completed the additional knowledge should be incorporated into the wave function, an effect that came to be called state reduction or collapse. This point of view, which was never fully endorsed by Niels Bohr, was denounced as mystical and anti-scientific by Albert Einstein. Pauli accepted the term, and described quantum mechanics as lucid mysticism.
Heisenberg and Bohr always described quantum mechanics in logical positivist terms. Bohr also took an active interest in the philosophical implications of quantum theories such as his complementarity, for example. He believed quantum theory offers a complete description of nature, albeit one that is simply ill-suited for everyday experiences – which are better described by classical mechanics and probability. Bohr never specified a demarcation line above which objects cease to be quantum and become classical. He believed that it was not a question of physics, but one of philosophy.
Eugene Wigner reformulated the "Schrödinger's cat" thought experiment as "Wigner's friend" and proposed that the consciousness of an observer is the demarcation line which precipitates collapse of the wave function, independent of any realist interpretation. Commonly known as "consciousness causes collapse", this controversial interpretation of quantum mechanics states that observation by a conscious observer is what makes the wave function collapse. However, this is a minority view among philosophers of quantum mechanics, most of whom consider it a misunderstanding. There are other possible solutions to the "Wigner's friend" thought experiment which do not require consciousness to be different from other physical processes. Moreover, Wigner himself shifted toward such interpretations in his later years.
Multiverse
The multiverse is the hypothetical set of multiple possible universes (including the historical universe we consistently experience) that together comprise everything that exists: the entirety of space, time, matter, and energy as well as the physical laws and constants that describe them. The term was coined in 1895 by the American philosopher and psychologist William James. In the many-worlds interpretation (MWI), one of the mainstream interpretations of quantum mechanics, there are an infinite number of universes and every possible quantum outcome occurs in at least one universe, albeit there is a debate as to how real the (other) worlds are.
The structure of the multiverse, the nature of each universe within it and the relationship between the various constituent universes, depend on the specific multiverse hypothesis considered. Multiverses have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology and fiction, particularly in science fiction and fantasy. In these contexts, parallel universes are also called "alternative universes", "quantum universes", "interpenetrating dimensions", "parallel dimensions", "parallel worlds", "alternative realities", "alternative timelines", and "dimensional planes", among others.
Anthropic principle
Personal and collective reality
Each individual has a different view of reality, with different memories and personal history, knowledge, personality traits and experience. This system, mostly referring to the human brain, affects cognition and behavior, and into this complex new knowledge, memories, information, thoughts and experiences are continuously integrated. The connectome – the neural networks/wirings in brains – is thought to be a key factor in human variability in terms of cognition, or the way we perceive the world (as a context), and related features or processes. Sensemaking is the process by which people give meaning to their experiences and make sense of the world they live in. Personal identity relates to questions such as how a unique individual persists through time.
Sensemaking and determination of reality also occur collectively, which is investigated in social epistemology and related approaches. From the collective intelligence perspective, the intelligence of the individual human (and potentially of AI entities) is substantially limited, and advanced intelligence emerges when multiple entities collaborate over time. Collective memory is an important component of the social construction of reality, and communication and communication-related systems, such as media systems, may also be major components.
Philosophy of perception raises questions based on the evolutionary history of humans' perceptual apparatuses, particularly individuals' physiological senses, described as "[w]e don't see reality—we only see what was useful to see in the past", partly suggesting that "[o]ur species has been so successful not in spite of our inability to see reality but because of it".
Scientific theories of everything
A theory of everything (TOE) is a putative theory of theoretical physics that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle. The theory of everything is also called the final theory. Many candidate theories of everything have been proposed by theoretical physicists during the twentieth century, but none have been confirmed experimentally. The primary problem in producing a TOE is that general relativity and quantum mechanics are hard to unify. This is one of the unsolved problems in physics.
Initially, the term "theory of everything" was used with an ironic connotation to refer to various overgeneralized theories. For example, a great-grandfather of Ijon Tichy, a character from a cycle of Stanisław Lem's science fiction stories of the 1960s, was known to work on the "General Theory of Everything". Physicist John Ellis claims to have introduced the term into the technical literature in an article in Nature in 1986. Over time, the term stuck in popularizations of quantum physics to describe a theory that would unify or explain through a single model the theories of all fundamental interactions and of all particles of nature: general relativity for gravitation, and the standard model of elementary particle physics – which includes quantum mechanics – for electromagnetism, the two nuclear interactions, and the known elementary particles.
Current candidates for a theory of everything include string theory, M theory, and loop quantum gravity.
Technology
Media
Media – such as news media, social media, websites including Wikipedia, and fiction – shape individuals' and society's perception of reality (including as part of belief and attitude formation) and are partly used intentionally as means to learn about reality. Various technologies have changed society's relationship with reality such as the advent of radio and TV technologies.
Research investigates interrelations and effects, for example aspects of the social construction of reality. A major component of this shaping and representation of perceived reality is agenda, selection and prioritization – not only (or primarily) the quality, tone and types of content – which influences, for instance, the public agenda. Disproportionate news attention for low-probability incidents – such as high-consequence accidents – can distort audiences' risk perceptions with harmful consequences. Various biases such as false balance, public-attention-dependent reactions like sensationalism and domination by "current events", as well as various interest-driven uses of media such as marketing, can also have major impacts on the perception of reality. Time-use studies found, for example, that in 2018 the average American "spent around eleven hours every day looking at screens".
Filter bubbles and echo chambers
Virtual reality and cyberspace
Virtual reality (VR) is a computer-simulated environment that can simulate physical presence in places in the real world, as well as in imaginary worlds.
The virtuality continuum is a continuous scale ranging between the completely virtual, a virtuality, and the completely real: reality. The reality–virtuality continuum therefore encompasses all possible variations and compositions of real and virtual objects. It has been described as a concept in new media and computer science, but in fact it could be considered a matter of anthropology. The concept was first introduced by Paul Milgram.
The area between the two extremes, where both the real and the virtual are mixed, is the so-called mixed reality. This in turn is said to consist of both augmented reality, where the virtual augments the real, and augmented virtuality, where the real augments the virtual.
Cyberspace, the world's computer systems considered as an interconnected whole, can be thought of as a virtual reality; for instance, it is portrayed as such in the cyberpunk fiction of William Gibson and others. Second Life and MMORPGs such as World of Warcraft are examples of artificial environments or virtual worlds (falling some way short of full virtual reality) in cyberspace.
"RL" in internet culture
On the Internet, "real life" refers to life in the real world. It generally references life or consensus reality, in contrast to an environment seen as fiction or fantasy, such as virtual reality, lifelike experience, dreams, novels, or movies. Online, the acronym "IRL" stands for "in real life", with the meaning "not on the Internet". Sociologists engaged in the study of the Internet have determined that someday, a distinction between online and real-life worlds may seem "quaint", noting that certain types of online activity, such as sexual intrigues, have already made a full transition to complete legitimacy and "reality". The abbreviation "RL" stands for "real life". For example, one can speak of "meeting in RL" someone whom one has met in a chat or on an Internet forum. It may also be used to express an inability to use the Internet for a time due to "RL problems".
See also
Alternate history
Counterfactual history
Derealization
Consciousness
Extended modal realism
Hyperreality
Modal realism
Alfred Korzybski
Notes
References
Further reading
George Musser, "Virtual Reality: How close can physics bring us to a truly fundamental understanding of the world?", Scientific American, vol. 321, no. 3 (September 2019), pp. 30–35.
"Physics is ... the bedrock of the broader search for truth.... Yet [physicists] sometimes seem to be struck by a collective impostor syndrome.... Truth can be elusive even in the best-established theories. Quantum mechanics is as well tested a theory as can be, yet its interpretation remains inscrutable. [p. 30.] The deeper physicists dive into reality, the more reality seems to evaporate." [p. 34.]
External links
C.D. Broad on Reality
Phenomenology Online: Materials discussing and exemplifying phenomenological research
The Matrix as Metaphysics by David Chalmers
Concepts in metaphysics
Concepts in epistemology
Concepts in logic
Concepts in metaphilosophy
Concepts in the philosophy of language
Concepts in the philosophy of science
Ontology
Philosophy of mathematics
Philosophy of religion
Philosophy of technology
Concepts in the philosophy of mind
Concepts in social philosophy
Realism
Quantum measurement | Reality | [
"Physics",
"Mathematics",
"Technology"
] | 7,014 | [
"Philosophy of technology",
"Science and technology studies",
"Quantum mechanics",
"Quantum measurement",
"nan"
] |
43,871 | https://en.wikipedia.org/wiki/Fatal%20insomnia | Fatal insomnia is an extremely rare neurodegenerative prion disease that results in trouble sleeping as its hallmark symptom. The majority of cases are familial (fatal familial insomnia [FFI]), stemming from a mutation in the PRNP gene, with the remainder of cases occurring sporadically (sporadic fatal insomnia [sFI]). The problems with sleeping typically start out gradually and worsen over time. Eventually, the patient will succumb to total insomnia (agrypnia excitata), most often leading to other symptoms such as speech problems, coordination problems, and dementia. It results in death within a few months to a few years, and there is no known disease-modifying treatment.
Signs and symptoms
The disease has four stages:
Characterized by worsening insomnia, resulting in panic attacks, paranoia, and phobias. This stage lasts for about four months.
Hallucinations and panic attacks become noticeable, continuing for about five months.
Complete inability to sleep is followed by rapid loss of weight. This lasts for about three months.
Dementia, during which the person becomes unresponsive or mute over the course of six months, is the final stage of the disease, after which death follows.
Clinically, FFI manifests with a disordered sleep-wake cycle, dysautonomia, motor disturbances, and neuropsychiatric disorders.
Other symptoms include profuse sweating, miosis (pinpoint pupils), sudden entrance into menopause or impotence, neck stiffness, and elevation of blood pressure and heart rate. The sporadic form of the disease often presents with double vision. Prolonged constipation is common as well. As the disease progresses, the person becomes stuck in a state of pre-sleep limbo, or hypnagogia, which is the state just before sleep in healthy individuals. During these stages, people commonly and repeatedly move their limbs as if they were dreaming.
The age of onset is variable, ranging from 13 to 60 years, with an average of 50. The disease can be detected prior to onset by genetic testing. Death usually occurs between 6–36 months from onset. The presentation of the disease varies considerably from person to person, even among people within the same family; in the sporadic form, for example, sleep problems are not commonly reported and early symptoms are ataxia, cognitive impairment, and double vision.
Cause
Fatal familial insomnia is a rare hereditary prion disease that is associated with a mutation in PRNP. The gene, which provides instructions for making the prion protein PrPC, is located on the short arm of chromosome 20 at position p13. Individuals with FFI or familial Creutzfeldt–Jakob disease (fCJD) both carry a mutation at codon 178 of the prion protein gene. FFI is also invariably linked to the presence of the methionine codon at position 129 of the mutant allele, whereas fCJD is linked to the presence of the valine codon at that position. The disease occurs when there is a change of amino acid at position 178, in which asparagine is found instead of the normal aspartic acid. This has to be accompanied by a methionine at position 129.
FFI is an autosomal dominant disease caused by a missense GAC-to-AAC mutation at codon 178 of the PRNP prion protein gene located on chromosome 20, along with the presence of the methionine polymorphism at position 129 of the mutant allele. Pathologically, FFI is characterized predominantly by thalamic degeneration, especially in the mediodorsal and anteroventral nuclei. Phenotypic variability is a perplexing feature of FFI.
Pathophysiology
Given its striking clinical and neuropathologic similarities with fatal familial insomnia (FFI), a genetic prion disease linked to a point mutation at codon 178 (D178N) in the PRNP coupled with methionine at codon 129, the MM2T subtype is also known as sporadic FI (sFI). Transmission studies using susceptible transgenic mice have consistently demonstrated that the same prion strain is associated with both sFI and FFI. In contrast to what has been the rule for the most common neurodegenerative disorders, sFI is rarer than its genetic counterpart. Whereas the recognized patients with FFI are numerous and belong to >50 families worldwide, only about 30 cases of CJD MM2T and a few cases with mixed MM2T and MM2C features (MM2T+C) have been recorded to date.
The presence of prions itself causes reduced glucose use by the thalamus and a mild hypometabolism of the cingulate cortex. The extent of this symptom varies between two variants of the disease, namely methionine homozygotes at codon 129 and methionine/valine heterozygotes, with some evidence that hypometabolism is more severe in the latter. Given the thalamus's involvement in regulating sleep and alertness, a causal relationship can be drawn, and this is often mentioned as the cause of the insomnia.
Diagnosis
Diagnosis is based on symptoms and can be supported by a sleep study, a PET scan and genetic testing if the patient's family has a history of the disease. As with other prion diseases, the diagnosis can be confirmed only by a brain autopsy post-mortem.
The real-time quaking-induced conversion (RT-QuIC), a highly sensitive assay that detects minute amounts of PrPSc in the cerebrospinal fluid (CSF), has been reported to have a sensitivity of 50% in FFI and sFI.[Cracco et al. Handb Clin Neurol 2018][Mock et al. Sci Rep. 2021] However, this low sensitivity may change since the examination was based on a low number of cases, and the RT-QuIC technology is continuously evolving.
A test that measures the cerebral metabolic rate of glucose by positron emission tomography (PET), referred to as [18F]-FDG-PET, has demonstrated severe hypometabolism of the thalamus bilaterally in FFI and sFI, also in the earliest stages of the disease. This hypometabolism then spreads, eventually impacting most cortical regions.[Cortelli et al. Brain 2006] The complexity and cost of this test currently impede its use in routine diagnosis.
Differential diagnosis
Other diseases involving the mammalian prion protein are known. Some are transmissible (TSEs, including FFI) such as kuru, bovine spongiform encephalopathy (BSE, also known as mad cow disease) in cattle and chronic wasting disease in American deer and American elk in some areas of the United States and Canada, as well as Creutzfeldt–Jakob disease (CJD). Until recently prion diseases were thought to be transmissible only by direct contact with infected tissue, such as from eating infected tissue, transfusion or transplantation; research suggests that prions can be transmitted by aerosols but that the general public is not at risk of airborne infection.
Treatments
Treatment involves palliative care. There is conflicting evidence over the use of sleeping pills, including barbiturates, as a treatment for the disease. Symptoms of fatal familial insomnia may be treated with medications.
Clonazepam may be prescribed to treat muscle spasms, and eszopiclone or zolpidem may be prescribed to help treat insomnia. However, these drugs do not work in the long term.
Prognosis
Like all prion diseases, the disease is invariably fatal. Life expectancy ranges from seven months to six years, with an average of 18 months.
Epidemiology and history
Fatal insomnia was first described by Elio Lugaresi et al. in 1986.
In 1998, 40 families were known to carry the gene for FFI globally: eight German, five Italian, four American, two French, two Australian, two British, one Japanese and one Austrian. In the Basque Country of Spain, 16 family cases of the 178N mutation were seen between 1993 and 2005, related to two families with a common ancestor in the 18th century. In 2011, another family was added to the list when researchers found the first man in the Netherlands to be diagnosed with FFI. Whilst he had lived in the Netherlands for 19 years, he was of Egyptian descent. Other prion diseases are similar to FFI and may be related, but are missing the D178N gene mutation.
At least 37 cases of sporadic fatal insomnia have been diagnosed. Unlike in FFI, those with sFI do not have the D178N mutation in the PRNP prion gene; they all have a different mutation in the same gene causing methionine homozygosity at codon 129.
Nonetheless, the presence of methionine in place of valine (Val129) is what causes the sporadic form of the disease. Targeting this mutation has been suggested as a strategy for treatment, or possibly as a cure for the disease.
Silvano, 1983, Bologna, Italy
In late 1983 Italian neurologist/sleep expert Dr Ignazio Roiter received a patient at the University of Bologna hospital's sleep institute. The man, known only as Silvano, decided in a rare moment of consciousness to be recorded for future studies and to donate his brain for research in hopes of finding a cure for future victims.
In 1986, Lugaresi and colleagues first named and described in detail the clinical and histopathological features of fatal familial insomnia (FFI) [Lugaresi et al. NEJM]. This report was mostly based on a patient referred to as Silvano, who was diagnosed with sleep impairment in 1983 by Dr. Ignazio Roiter. Dr. Roiter referred the case to Prof. Elio Lugaresi, a well-known sleep expert, who, along with his colleagues, carried out advanced sleep analyses. As Silvano's condition quickly deteriorated, Lugaresi arranged for a postmortem neuropathological examination of the brain to be carried out by Dr. Gambetti, Lugaresi's former trainee. The collaboration of these two groups led to the 1986 publication [27]. At the time, a prion disease was not suspected due to a lack of prion-related histopathology and frozen brain tissue for advanced analysis. However, due to the devotion of Dr. Roiter and Silvano's family, more cases were obtained, resulting in the classification of FFI as a familial prion disease tied to the 178Asn genetic mutation. [Medori et al. NEJM, 1992]
Unnamed American patient, 2001
In an article published in 2006, Schenkein and Montagna wrote of a 52-year-old American man who was able to exceed the average survival time by nearly one year with various strategies that included vitamin therapy and meditation, different stimulants and hypnotics and even complete sensory deprivation in an attempt to induce sleep at night and increase alertness during the day. He managed to write a book and drive hundreds of miles in this time, but nonetheless, over the course of his trials, the man succumbed to the classic four-stage progression of the illness.
Egyptian man, 2011, Netherlands
In 2011, the first reported case in the Netherlands was of a 57-year-old man of Egyptian descent. The man came in with symptoms of double vision and progressive memory loss, and his family also noted he had recently become disoriented, paranoid and confused. Whilst he tended to fall asleep at random during daily activities, he experienced vivid dreams and random muscular jerks during normal slow-wave sleep. After four months of these symptoms, he began to have convulsions in his hands, trunk and lower limbs while awake. The person died at age 58, seven months after the onset of symptoms. An autopsy revealed mild atrophy of the frontal cortex and moderate atrophy of the thalamus. The latter is one of the most common signs of FFI.
Research
Although their benefit in humans remains unclear, a number of treatments have had tentative success in slowing disease progression in animal models, including pentosan polysulfate, mepacrine, and amphotericin B. A study investigating doxycycline is also being carried out.
In 2009, a mouse model was made for FFI. These mice expressed a humanized version of the PrP protein that also contains the D178N FFI mutation. These mice appear to have progressively fewer and shorter periods of uninterrupted sleep, damage in the thalamus, and early deaths, similar to humans with FFI.
The Prion Alliance was established by husband and wife duo Eric Minikel and Sonia Vallabh after Vallabh's mother was diagnosed with the fatal disease. They conduct research at the Broad Institute to develop therapeutics for human prion diseases. Other research interests involve identifying biomarkers to track the progression of prion disease in living people.
References
External links
Neurodegenerative disorders
Transmissible spongiform encephalopathies
Unsolved problems in neuroscience
Sleep disorders
Rare diseases
Sleeplessness and sleep deprivation | Fatal insomnia | [
"Biology"
] | 2,823 | [
"Sleep disorders",
"Behavior",
"Sleep",
"Sleeplessness and sleep deprivation"
] |
43,932 | https://en.wikipedia.org/wiki/Obligate%20aerobe | An obligate aerobe is an organism that requires oxygen to grow. Through cellular respiration, these organisms use oxygen to metabolise substances, like sugars or fats, to obtain energy. In this type of respiration, oxygen serves as the terminal electron acceptor for the electron transport chain. Aerobic respiration has the advantage of yielding more energy (adenosine triphosphate or ATP) than fermentation or anaerobic respiration, but obligate aerobes are subject to high levels of oxidative stress.
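As a rough quantitative illustration of this energy advantage (figures commonly cited for glucose metabolism; the exact ATP yield varies with the organism and conditions): the overall reaction of aerobic respiration is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O, typically reported as yielding on the order of 30–38 ATP per molecule of glucose, whereas fermentation of one glucose molecule yields only about 2 ATP.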
Examples
Among organisms, almost all animals, most fungi, and several bacteria are obligate aerobes. Examples of obligately aerobic bacteria include Mycobacterium tuberculosis (acid-fast), Bacillus (Gram-positive), and Nocardia asteroides (Gram-positive). With the exception of the yeasts, most fungi are obligate aerobes. Also, almost all algae are obligate aerobes.
A unique obligate aerobe is Streptomyces coelicolor which is gram-positive, soil-dwelling, and belongs to the phylum Actinomycetota. It is unique because the genome of this obligate aerobe encodes numerous enzymes with functions that are usually attributed to anaerobic metabolism in facultatively and strictly anaerobic bacteria.
Survival strategies
When obligate aerobes are in a temporarily oxygen-deprived environment, they need survival strategies to avoid death. Under these conditions, Mycobacterium smegmatis can quickly switch between fermentative hydrogen production and hydrogen oxidation with either oxygen or fumarate reduction depending on the availability of electron acceptor. This example is the first time that hydrogen production has been seen in an obligate aerobe. It also confirms the fermentation in a mycobacterium and is evidence that hydrogen plays a role in survival as well as growth.
Problems can also arise in oxygen-rich environments, most commonly attributed to oxidative stress. This occurrence is when there is an imbalance of free radicals and antioxidants in the cells of the organism, largely due to pollution and radiation in the environment. Obligate aerobes survive this phenomenon by using the organism's immune system to correct the imbalance.
See also
Aerobic respiration
Anaerobic respiration
Fermentation
Obligate anaerobe
Facultative anaerobe
Microaerophile
References
Microbiology | Obligate aerobe | [
"Chemistry",
"Biology"
] | 514 | [
"Microbiology",
"Microscopy"
] |
43,937 | https://en.wikipedia.org/wiki/Parasitism | Parasitism is a close relationship between species, where one organism, the parasite, lives on or inside another organism, the host, causing it some harm, and is adapted structurally to this way of life. The entomologist E. O. Wilson characterised parasites as "predators that eat prey in units of less than one". Parasites include single-celled protozoans such as the agents of malaria, sleeping sickness, and amoebic dysentery; animals such as hookworms, lice, mosquitoes, and vampire bats; fungi such as honey fungus and the agents of ringworm; and plants such as mistletoe, dodder, and the broomrapes.
There are six major parasitic strategies of exploitation of animal hosts, namely parasitic castration, directly transmitted parasitism (by contact), trophically-transmitted parasitism (by being eaten), vector-transmitted parasitism, parasitoidism, and micropredation. One major axis of classification concerns invasiveness: an endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface.
Like predation, parasitism is a type of consumer–resource interaction, but unlike predators, parasites, with the exception of parasitoids, are much smaller than their hosts, do not kill them, and often live in or on their hosts for an extended period. Parasites of animals are highly specialised, each parasite species living on one given animal species, and reproduce at a faster rate than their hosts. Classic examples include interactions between vertebrate hosts and tapeworms, flukes, the malaria-causing Plasmodium species, and fleas.
Parasites reduce host fitness by general or specialised pathology, that ranges from parasitic castration to modification of host behaviour. Parasites increase their own fitness by exploiting hosts for resources necessary for their survival, in particular by feeding on them and by using intermediate (secondary) hosts to assist in their transmission from one definitive (primary) host to another. Although parasitism is often unambiguous, it is part of a spectrum of interactions between species, grading via parasitoidism into predation, through evolution into mutualism, and in some fungi, shading into being saprophytic.
Human knowledge of parasites such as roundworms and tapeworms dates back to ancient Egypt, Greece, and Rome. In early modern times, Antonie van Leeuwenhoek observed Giardia lamblia with his microscope in 1681, while Francesco Redi described internal and external parasites including sheep liver fluke and ticks. Modern parasitology developed in the 19th century. In human culture, parasitism has negative connotations. These were exploited to satirical effect in Jonathan Swift's 1733 poem "On Poetry: A Rhapsody", comparing poets to hyperparasitical "vermin". In fiction, Bram Stoker's 1897 Gothic horror novel Dracula and its many later adaptations featured a blood-drinking parasite. Ridley Scott's 1979 film Alien was one of many works of science fiction to feature a parasitic alien species.
Etymology
First used in English in 1539, the word parasite comes from the Medieval French parasite, from the Latin parasitus, the latinisation of the Greek παράσιτος (parasitos), "one who eats at the table of another". The related term parasitism appears in English from 1611.
Evolutionary strategies
Basic concepts
Parasitism is a kind of symbiosis, a close and persistent long-term biological interaction between a parasite and its host. Unlike saprotrophs, parasites feed on living hosts, though some parasitic fungi, for instance, may continue to feed on hosts they have killed. Unlike commensalism and mutualism, the parasitic relationship harms the host, either feeding on it or, as in the case of intestinal parasites, consuming some of its food. Because parasites interact with other species, they can readily act as vectors of pathogens, causing disease. Predation is by definition not a symbiosis, as the interaction is brief, but the entomologist E. O. Wilson has characterised parasites as "predators that eat prey in units of less than one".
Within that scope are many possible strategies. Taxonomists classify parasites in a variety of overlapping schemes, based on their interactions with their hosts and on their life cycles, which can be complex. An obligate parasite depends completely on the host to complete its life cycle, while a facultative parasite does not. Parasite life cycles involving only one host are called "direct"; those with a definitive host (where the parasite reproduces sexually) and at least one intermediate host are called "indirect". An endoparasite lives inside the host's body; an ectoparasite lives outside, on the host's surface. Mesoparasites—like some copepods, for example—enter an opening in the host's body and remain partly embedded there. Some parasites can be generalists, feeding on a wide range of hosts, but many parasites, and the majority of protozoans and helminths that parasitise animals, are specialists and extremely host-specific. An early basic, functional division of parasites distinguished microparasites and macroparasites. These each had a mathematical model assigned in order to analyse the population movements of the host–parasite groupings. The microorganisms and viruses that can reproduce and complete their life cycle within the host are known as microparasites. Macroparasites are the multicellular organisms that reproduce and complete their life cycle outside of the host or on the host's body.
Much of the thinking on types of parasitism has focused on terrestrial animal parasites of animals, such as helminths. Those in other environments and with other hosts often have analogous strategies. For example, the snubnosed eel is probably a facultative endoparasite (i.e., it is semiparasitic) that opportunistically burrows into and eats sick and dying fish. Plant-eating insects such as scale insects, aphids, and caterpillars closely resemble ectoparasites, attacking much larger plants; they serve as vectors of bacteria, fungi and viruses which cause plant diseases. As female scale insects cannot move, they are obligate parasites, permanently attached to their hosts.
The sensory inputs that a parasite employs to identify and approach a potential host are known as "host cues". Such cues can include, for example, vibration, exhaled carbon dioxide, skin odours, visual and heat signatures, and moisture. Parasitic plants can use, for example, light, host physiochemistry, and volatiles to recognize potential hosts.
Major strategies
There are six major parasitic strategies, namely parasitic castration; directly transmitted parasitism; trophically-transmitted parasitism; vector-transmitted parasitism; parasitoidism; and micropredation. These apply to parasites whose hosts are plants as well as animals. These strategies represent adaptive peaks; intermediate strategies are possible, but organisms in many different groups have consistently converged on these six, which are evolutionarily stable.
A perspective on the evolutionary options can be gained by considering four key questions: the effect on the fitness of a parasite's hosts; the number of hosts they have per life stage; whether the host is prevented from reproducing; and whether the effect depends on intensity (number of parasites per host). From this analysis, the major evolutionary strategies of parasitism emerge, alongside predation.
Parasitic castrators
Parasitic castrators partly or completely destroy their host's ability to reproduce, diverting the energy that would have gone into reproduction into host and parasite growth, sometimes causing gigantism in the host. The host's other systems remain intact, allowing it to survive and to sustain the parasite. Parasitic crustaceans such as those in the specialised barnacle genus Sacculina specifically cause damage to the gonads of their many species of host crabs. In the case of Sacculina, the testes of over two-thirds of their crab hosts degenerate sufficiently for these male crabs to develop female secondary sex characteristics such as broader abdomens, smaller claws and egg-grasping appendages. Various species of helminth castrate their hosts (such as insects and snails). This may happen directly, whether mechanically by feeding on their gonads, or by secreting a chemical that destroys reproductive cells; or indirectly, whether by secreting a hormone or by diverting nutrients. For example, the trematode Zoogonus lasius, whose sporocysts lack mouths, castrates the intertidal marine snail Tritia obsoleta chemically, developing in its gonad and killing its reproductive cells.
Directly transmitted
Directly transmitted parasites, not requiring a vector to reach their hosts, include such parasites of terrestrial vertebrates as lice and mites; marine parasites such as copepods and cyamid amphipods; monogeneans; and many species of nematodes, fungi, protozoans, bacteria, and viruses. Whether endoparasites or ectoparasites, each has a single host-species. Within that species, most individuals are free or almost free of parasites, while a minority carry a large number of parasites; this is known as an aggregated distribution.
Trophically transmitted
Trophically-transmitted parasites are transmitted by being eaten by a host. They include trematodes (all except schistosomes), cestodes, acanthocephalans, pentastomids, many roundworms, and many protozoa such as Toxoplasma. They have complex life cycles involving hosts of two or more species. In their juvenile stages they infect and often encyst in the intermediate host. When the intermediate-host animal is eaten by a predator, the definitive host, the parasite survives the digestion process and matures into an adult; some live as intestinal parasites. Many trophically transmitted parasites modify the behaviour of their intermediate hosts, increasing their chances of being eaten by a predator. As with directly transmitted parasites, the distribution of trophically transmitted parasites among host individuals is aggregated. Coinfection by multiple parasites is common. Autoinfection, where (by exception) the whole of the parasite's life cycle takes place in a single primary host, can sometimes occur in helminths such as Strongyloides stercoralis.
Vector-transmitted
Vector-transmitted parasites rely on a third party, an intermediate host, where the parasite does not reproduce sexually, to carry them from one definitive host to another. These parasites are microorganisms, namely protozoa, bacteria, or viruses, often intracellular pathogens (disease-causers). Their vectors are mostly hematophagic arthropods such as fleas, lice, ticks, and mosquitoes. For example, the deer tick Ixodes scapularis acts as a vector for diseases including Lyme disease, babesiosis, and anaplasmosis. Protozoan endoparasites, such as the malarial parasites in the genus Plasmodium and sleeping-sickness parasites in the genus Trypanosoma, have infective stages in the host's blood which are transported to new hosts by biting insects.
Parasitoids
Parasitoids are insects which sooner or later kill their hosts, placing their relationship close to predation. Most parasitoids are parasitoid wasps or other hymenopterans; others include dipterans such as phorid flies. They can be divided into two groups, idiobionts and koinobionts, differing in their treatment of their hosts.
Idiobiont parasitoids sting their often-large prey on capture, either killing them outright or paralysing them immediately. The immobilised prey is then carried to a nest, sometimes alongside other prey if it is not large enough to support a parasitoid throughout its development. An egg is laid on top of the prey and the nest is then sealed. The parasitoid develops rapidly through its larval and pupal stages, feeding on the provisions left for it.
Koinobiont parasitoids, which include flies as well as wasps, lay their eggs inside young hosts, usually larvae. These are allowed to go on growing, so the host and parasitoid develop together for an extended period, ending when the parasitoids emerge as adults, leaving the prey dead, eaten from inside. Some koinobionts regulate their host's development, for example preventing it from pupating or making it moult whenever the parasitoid is ready to moult. They may do this by producing hormones that mimic the host's moulting hormones (ecdysteroids), or by regulating the host's endocrine system.
Micropredators
A micropredator attacks more than one host, reducing each host's fitness by at least a small amount, and is only in contact with any one host intermittently. This behavior makes micropredators suitable as vectors, as they can pass smaller parasites from one host to another. Most micropredators are hematophagic, feeding on blood. They include annelids such as leeches, crustaceans such as branchiurans and gnathiid isopods, various dipterans such as mosquitoes and tsetse flies, other arthropods such as fleas and ticks, vertebrates such as lampreys, and mammals such as vampire bats.
Transmission strategies
Parasites use a variety of methods to infect animal hosts, including physical contact, the fecal–oral route, free-living infectious stages, and vectors, suiting their differing hosts, life cycles, and ecological contexts. Examples to illustrate some of the many possible combinations are given in the table.
Variations
Among the many variations on parasitic strategies are hyperparasitism, social parasitism, brood parasitism, kleptoparasitism, sexual parasitism, and adelphoparasitism.
Hyperparasitism
Hyperparasites feed on another parasite, as exemplified by protozoa living in helminth parasites, or facultative or obligate parasitoids whose hosts are either conventional parasites or parasitoids. Levels of parasitism beyond secondary also occur, especially among facultative parasitoids. In oak gall systems, there can be up to four levels of parasitism.
Hyperparasites can control their hosts' populations, and are used for this purpose in agriculture and to some extent in medicine. The controlling effects can be seen in the way that the CHV1 virus helps to control the damage that chestnut blight, Cryphonectria parasitica, does to American chestnut trees, and in the way that bacteriophages can limit bacterial infections. It is likely, though little researched, that most pathogenic microparasites have hyperparasites which may prove widely useful in both agriculture and medicine.
Social parasitism
Social parasites take advantage of interspecific interactions between members of eusocial animals such as ants, termites, and bumblebees. Examples include the large blue butterfly, Phengaris arion, its larvae employing ant mimicry to parasitise certain ants, Bombus bohemicus, a bumblebee which invades the hives of other bees and takes over reproduction while their young are raised by host workers, and Melipona scutellaris, a eusocial bee whose virgin queens escape killer workers and invade another colony without a queen. An extreme example of interspecific social parasitism is found in the ant Tetramorium inquilinum, an obligate parasite which lives exclusively on the backs of other Tetramorium ants. A mechanism for the evolution of social parasitism was first proposed by Carlo Emery in 1909. Now known as "Emery's rule", it states that social parasites tend to be closely related to their hosts, often being in the same genus.
Intraspecific social parasitism occurs in parasitic nursing, where some individual young take milk from unrelated females. In wedge-capped capuchins, higher ranking females sometimes take milk from low ranking females without any reciprocation.
Brood parasitism
In brood parasitism, the hosts suffer increased parental investment and energy expenditure to feed parasitic young, which are commonly larger than host young. The growth rate of host nestlings is slowed, reducing the host's fitness. Brood parasites include birds in different families such as cowbirds, whydahs, cuckoos, and black-headed ducks. These do not build nests of their own, but leave their eggs in nests of other species. In the family Cuculidae, over 40% of cuckoo species are obligate brood parasites, while others are either facultative brood parasites or provide parental care. The eggs of some brood parasites mimic those of their hosts, while some cowbird eggs have tough shells, making them hard for the hosts to kill by piercing, both mechanisms implying selection by the hosts against parasitic eggs. The adult female European cuckoo further mimics a predator, the European sparrowhawk, giving her time to lay her eggs in the host's nest unobserved. Host species often combat parasitic egg mimicry through egg polymorphism, having two or more egg phenotypes within a single population of a species. Multiple phenotypes in host eggs decrease the probability of a parasitic species accurately "matching" their eggs to host eggs.
Kleptoparasitism
In kleptoparasitism (from Greek κλέπτης (kleptēs), "thief"), parasites steal food gathered by the host. The parasitism is often on close relatives, whether within the same species or between species in the same genus or family. For instance, the many lineages of cuckoo bees lay their eggs in the nest cells of other bees in the same family. Kleptoparasitism is uncommon generally but conspicuous in birds; some such as skuas are specialised in pirating food from other seabirds, relentlessly chasing them down until they disgorge their catch.
Sexual parasitism
A unique approach is seen in some species of anglerfish, such as Ceratias holboelli, where the males are reduced to tiny sexual parasites, wholly dependent on females of their own species for survival, permanently attached below the female's body, and unable to fend for themselves. The female nourishes the male and protects him from predators, while the male gives nothing back except the sperm that the female needs to produce the next generation.
Adelphoparasitism
Adelphoparasitism (from Greek ἀδελφός (adelphós), "brother"), also known as sibling-parasitism, occurs where the host species is closely related to the parasite, often in the same family or genus. In the citrus blackfly parasitoid, Encarsia perplexa, unmated females may lay haploid eggs in the fully developed larvae of their own species, producing male offspring, while the marine worm Bonellia viridis has a similar reproductive strategy, although the larvae are planktonic.
Taxonomic range
Parasitism has an extremely wide taxonomic range, including animals, plants, fungi, protozoans, bacteria, and viruses.
Animals
Parasitism is widespread in the animal kingdom, and has evolved independently from free-living forms hundreds of times. Many types of helminth including flukes and cestodes have complete life cycles involving two or more hosts. By far the largest group is the parasitoid wasps in the Hymenoptera. The phyla and classes with the largest numbers of parasitic species are listed in the table. Numbers are conservative minimum estimates. The columns for Endo- and Ecto-parasitism refer to the definitive host, as documented in the Vertebrate and Invertebrate columns.
Plants
A hemiparasite or partial parasite such as mistletoe derives some of its nutrients from another living plant, whereas a holoparasite such as Cuscuta derives all of its nutrients from another plant. Parasitic plants make up about one per cent of angiosperms and are in almost every biome in the world. All these plants have modified roots, haustoria, which penetrate the host plants, connecting them to the conductive system—either the xylem, the phloem, or both. This provides them with the ability to extract water and nutrients from the host. A parasitic plant is classified depending on where it latches onto the host, either the stem or the root, and the amount of nutrients it requires. Since holoparasites have no chlorophyll and therefore cannot make food for themselves by photosynthesis, they are always obligate parasites, deriving all their food from their hosts. Some parasitic plants can locate their host plants by detecting chemicals in the air or soil given off by host shoots or roots, respectively. About 4,500 species of parasitic plant in approximately 20 families of flowering plants are known.
Species within the Orobanchaceae (broomrapes) are among the most economically destructive of all plants. Species of Striga (witchweeds) are estimated to cost billions of dollars a year in crop yield loss, infesting over 50 million hectares of cultivated land within Sub-Saharan Africa alone. Striga infects both grasses and grains, including corn, rice, and sorghum, which are among the world's most important food crops. Orobanche also threatens a wide range of other important crops, including peas, chickpeas, tomatoes, carrots, and varieties of cabbage. Yield loss from Orobanche can be total; despite extensive research, no method of control has been entirely successful.
Many plants and fungi exchange carbon and nutrients in mutualistic mycorrhizal relationships. Some 400 species of myco-heterotrophic plants, mostly in the tropics, however, effectively cheat by taking carbon from a fungus rather than exchanging it for minerals. They have much reduced roots, as they do not need to absorb water from the soil; their stems are slender with few vascular bundles, and their leaves are reduced to small scales, as they do not photosynthesize. Their seeds are small and numerous, so they appear to rely on being infected by a suitable fungus soon after germinating.
Fungi
Parasitic fungi derive some or all of their nutritional requirements from plants, other fungi, or animals.
Plant pathogenic fungi are classified into three categories depending on their mode of nutrition: biotrophs, hemibiotrophs and necrotrophs. Biotrophic fungi derive nutrients from living plant cells, and during the course of infection they colonise their plant host in such a way as to keep it alive for as long as possible. One well-known example of a biotrophic pathogen is Ustilago maydis, the causative agent of corn smut disease. Necrotrophic pathogens, on the other hand, kill host cells and feed saprophytically, an example being the root-colonising honey fungi in the genus Armillaria. Hemibiotrophic pathogens begin colonising their hosts as biotrophs and subsequently kill off host cells and feed as necrotrophs, a phenomenon termed the biotrophy–necrotrophy switch.
Pathogenic fungi are well-known causative agents of diseases in animals as well as humans. Fungal infections (mycoses) are estimated to kill 1.6 million people each year. One example of potent fungal animal pathogens is the Microsporidia – obligate intracellular parasitic fungi that largely affect insects, but may also affect vertebrates including humans, causing the intestinal infection microsporidiosis.
Protozoa
Protozoa such as Plasmodium, Trypanosoma, and Entamoeba are endoparasitic. They cause serious diseases in vertebrates including humans—in these examples, malaria, sleeping sickness, and amoebic dysentery—and have complex life cycles.
Bacteria
Many bacteria are parasitic, though they are more generally thought of as pathogens causing disease. Parasitic bacteria are extremely diverse, and infect their hosts by a variety of routes. To give a few examples, Bacillus anthracis, the cause of anthrax, is spread by contact with infected domestic animals; its spores, which can survive for years outside the body, can enter a host through an abrasion or may be inhaled. Borrelia, the cause of Lyme disease and relapsing fever, is transmitted by vectors, ticks of the genus Ixodes, from the diseases' reservoirs in animals such as deer. Campylobacter jejuni, a cause of gastroenteritis, is spread by the fecal–oral route from animals, or by eating insufficiently cooked poultry, or by contaminated water. Haemophilus influenzae, an agent of bacterial meningitis and respiratory tract infections such as pneumonia and bronchitis, is transmitted by droplet contact. Treponema pallidum, the cause of syphilis, is spread by sexual activity.
Viruses
Viruses are obligate intracellular parasites, characterised by extremely limited biological function, to the point where, while they are evidently able to infect all other organisms from bacteria and archaea to animals, plants and fungi, it is unclear whether they can themselves be described as living. They can be either RNA or DNA viruses consisting of a single or double strand of genetic material (RNA or DNA, respectively), covered in a protein coat and sometimes a lipid envelope. They thus lack all the usual machinery of the cell such as enzymes, relying entirely on the host cell's ability to replicate DNA and synthesise proteins. Most viruses are bacteriophages, infecting bacteria.
Evolutionary ecology
Parasitism is a major aspect of evolutionary ecology; for example, almost all free-living animals are host to at least one species of parasite. Vertebrates, the best-studied group, are hosts to between 75,000 and 300,000 species of helminths and an uncounted number of parasitic microorganisms. On average, a mammal species hosts four species of nematode, two of trematodes, and two of cestodes. Humans have 342 species of helminth parasites, and 70 species of protozoan parasites. Some three-quarters of the links in food webs include a parasite, important in regulating host numbers. Perhaps 40 per cent of described species are parasitic.
Fossil record
Parasitism is hard to demonstrate from the fossil record, but holes in the mandibles of several specimens of Tyrannosaurus may have been caused by Trichomonas-like parasites. Saurophthirus, an Early Cretaceous flea, parasitized pterosaurs. Eggs belonging to nematode worms, and probably protozoan cysts, were found in a Late Triassic coprolite of a phytosaur. This rare find in Thailand reveals more about the ecology of prehistoric parasites.
Coevolution
As hosts and parasites evolve together, their relationships often change. When a parasite is in a sole relationship with a host, selection drives the relationship to become more benign, even mutualistic, as the parasite can reproduce for longer if its host lives longer. But where parasites are competing, selection favours the parasite that reproduces fastest, leading to increased virulence. There are thus varied possibilities in host–parasite coevolution.
Evolutionary epidemiology analyses how parasites spread and evolve, whereas Darwinian medicine applies similar evolutionary thinking to non-parasitic diseases like cancer and autoimmune conditions.
Long-term partnerships favouring mutualism
Long-term partnerships can lead to a relatively stable relationship tending to commensalism or mutualism, as, all else being equal, it is in the evolutionary interest of the parasite that its host thrives. A parasite may evolve to become less harmful for its host or a host may evolve to cope with the unavoidable presence of a parasite—to the point that the parasite's absence causes the host harm. For example, although animals parasitised by worms are often clearly harmed, such infections may also reduce the prevalence and effects of autoimmune disorders in animal hosts, including humans. In a more extreme example, some nematode worms cannot reproduce, or even survive, without infection by Wolbachia bacteria.
Lynn Margulis and others have argued, following Peter Kropotkin's 1902 Mutual Aid: A Factor of Evolution, that natural selection drives relationships from parasitism to mutualism when resources are limited. This process may have been involved in the symbiogenesis which formed the eukaryotes from an intracellular relationship between archaea and bacteria, though the sequence of events remains largely undefined.
Competition favouring virulence
Competition between parasites can be expected to favour faster reproducing and therefore more virulent parasites, by natural selection.
Among competing parasitic insect-killing bacteria of the genera Photorhabdus and Xenorhabdus, virulence depended on the relative potency of the antimicrobial toxins (bacteriocins) produced by the two strains involved. When only one bacterium could kill the other, the other strain was excluded by the competition. But when caterpillars were infected with bacteria both of which had toxins able to kill the other strain, neither strain was excluded, and their virulence was less than when the insect was infected by a single strain.
Cospeciation
A parasite sometimes undergoes cospeciation with its host, resulting in the pattern described in Fahrenholz's rule, that the phylogenies of the host and parasite come to mirror each other.
An example is between the simian foamy virus (SFV) and its primate hosts. The phylogenies of SFV polymerase and the mitochondrial cytochrome c oxidase subunit II from African and Asian primates were found to be closely congruent in branching order and divergence times, implying that the simian foamy viruses cospeciated with Old World primates for at least 30 million years.
The presumption of a shared evolutionary history between parasites and hosts can help elucidate how host taxa are related. For instance, there has been a dispute about whether flamingos are more closely related to storks or ducks. The fact that flamingos share parasites with ducks and geese was initially taken as evidence that these groups were more closely related to each other than either is to storks. However, evolutionary events such as the duplication, or the extinction of parasite species (without similar events on the host phylogeny) often erode similarities between host and parasite phylogenies. In the case of flamingos, they have similar lice to those of grebes. Flamingos and grebes do have a common ancestor, implying cospeciation of birds and lice in these groups. Flamingo lice then switched hosts to ducks, creating the situation which had confused biologists.
Parasites infect sympatric hosts (those within their same geographical area) more effectively, as has been shown with digenetic trematodes infecting lake snails. This is in line with the Red Queen hypothesis, which states that interactions between species lead to constant natural selection for coadaptation. Parasites track the locally common hosts' phenotypes, so the parasites are less infective to allopatric hosts, those from different geographical regions.
Modifying host behaviour
Some parasites modify host behaviour in order to increase their transmission between hosts, often in relation to predator and prey (parasite-increased trophic transmission). For example, in the California coastal salt marsh, the fluke Euhaplorchis californiensis reduces the ability of its killifish host to avoid predators. This parasite matures in egrets, which are more likely to feed on infected killifish than on uninfected fish. Another example is the protozoan Toxoplasma gondii, a parasite that matures in cats but can be carried by many other mammals. Uninfected rats avoid cat odours, but rats infected with T. gondii are drawn to this scent, which may increase transmission to feline hosts. The malaria parasite modifies the skin odour of its human hosts, increasing their attractiveness to mosquitoes and hence improving the chance for the parasite to be transmitted. Spiders of the species Cyclosa argenteoalba often have parasitoid wasp larvae attached to them which alter their web-building behaviour. Instead of producing their normal sticky spiral-shaped webs, parasitised spiders made simplified webs; this manipulated behaviour lasted longer and was more prominent the longer the parasites were left on the spiders.
Trait loss
Parasites can exploit their hosts to carry out a number of functions that they would otherwise have to carry out for themselves. Parasites which lose those functions then have a selective advantage, as they can divert resources to reproduction. Many insect ectoparasites including bedbugs, batbugs, lice and fleas have lost their ability to fly, relying instead on their hosts for transport. Trait loss is widespread among parasites more generally. An extreme example is the myxosporean Henneguya zschokkei, a parasite of salmonid fish and the only animal known to have lost the ability to respire aerobically: its cells lack mitochondrial genomes.
Host defences
Hosts have evolved a variety of defensive measures against their parasites, including physical barriers like the skin of vertebrates, the immune system of mammals, insects actively removing parasites, and defensive chemicals in plants.
The evolutionary biologist W. D. Hamilton suggested that sexual reproduction could have evolved to help to defeat multiple parasites by enabling genetic recombination, the shuffling of genes to create varied combinations. Hamilton showed by mathematical modelling that sexual reproduction would be evolutionarily stable in different situations, and that the theory's predictions matched the actual ecology of sexual reproduction. However, there may be a trade-off between immunocompetence and the secondary sex characteristics of breeding male vertebrates, such as the plumage of peacocks and the manes of lions. This is because the male hormone testosterone encourages the growth of secondary sex characteristics, favouring such males in sexual selection, at the price of reducing their immune defences.
Vertebrates
The physical barrier of the tough and often dry and waterproof skin of reptiles, birds and mammals keeps invading microorganisms from entering the body. Human skin also secretes sebum, which is toxic to most microorganisms. On the other hand, larger parasites such as trematodes detect chemicals produced by the skin to locate their hosts when they enter the water. Vertebrate saliva and tears contain lysozyme, an enzyme that breaks down the cell walls of invading bacteria. Should the organism pass the mouth, the stomach with its hydrochloric acid, toxic to most microorganisms, is the next line of defence. Some intestinal parasites have a thick, tough outer coating which is digested slowly or not at all, allowing the parasite to pass through the stomach alive, at which point they enter the intestine and begin the next stage of their life. Once inside the body, parasites must overcome the immune system's serum proteins and pattern recognition receptors, intracellular and cellular, that trigger the adaptive immune system's lymphocytes such as T cells and antibody-producing B cells. These have receptors that recognise parasites.
Insects
Insects often adapt their nests to reduce parasitism. For example, one of the key reasons why the wasp Polistes canadensis nests across multiple combs, rather than building a single comb like much of the rest of its genus, is to avoid infestation by tineid moths. The tineid moth lays its eggs within the wasps' nests and then these eggs hatch into larvae that can burrow from cell to cell and prey on wasp pupae. Adult wasps attempt to remove and kill moth eggs and larvae by chewing down the edges of cells, coating the cells with an oral secretion that gives the nest a dark brownish appearance.
Plants
Plants respond to parasite attack with a series of chemical defences, such as polyphenol oxidase, under the control of the jasmonic acid (JA) and salicylic acid (SA) signalling pathways. The different biochemical pathways are activated by different attacks, and the two pathways can interact positively or negatively. In general, plants can either initiate a specific or a non-specific response. Specific responses involve recognition of a parasite by the plant's cellular receptors, leading to a strong but localised response: defensive chemicals are produced around the area where the parasite was detected, blocking its spread and avoiding wasting defensive production where it is not needed. Non-specific defensive responses are systemic, meaning that the responses are not confined to an area of the plant, but spread throughout the plant, making them costly in energy. These are effective against a wide range of parasites. When damaged, such as by lepidopteran caterpillars, leaves of plants including maize and cotton release increased amounts of volatile chemicals such as terpenes that signal they are being attacked; one effect of this is to attract parasitoid wasps, which in turn attack the caterpillars.
Biology and conservation
Ecology and parasitology
Parasitism and parasite evolution were until the twenty-first century studied by parasitologists, in a science dominated by medicine, rather than by ecologists or evolutionary biologists. Even though parasite-host interactions were plainly ecological and important in evolution, the history of parasitology caused what the evolutionary ecologist Robert Poulin called a "takeover of parasitism by parasitologists", leading ecologists to ignore the area. This was in his opinion "unfortunate", as parasites are "omnipresent agents of natural selection" and significant forces in evolution and ecology. In his view, the long-standing split between the sciences limited the exchange of ideas, with separate conferences and separate journals. The technical languages of ecology and parasitology sometimes involved different meanings for the same words. There were philosophical differences, too: Poulin notes that, influenced by medicine, "many parasitologists accepted that evolution led to a decrease in parasite virulence, whereas modern evolutionary theory would have predicted a greater range of outcomes".
Their complex relationships make parasites difficult to place in food webs: a trematode with multiple hosts for its various life cycle stages would occupy many positions in a food web simultaneously, and would set up loops of energy flow, confusing the analysis. Further, since nearly every animal has (multiple) parasites, parasites would occupy the top levels of every food web.
Parasites can play a role in the proliferation of non-native species. For example, invasive green crabs are minimally affected by native trematodes on the Atlantic coast of North America. This helps them outcompete native crabs such as the Atlantic rock crab and the Jonah crab.
Ecological parasitology can be important to attempts at control, as during the campaign to eradicate the Guinea worm. Even though the parasite was eradicated in all but four countries, the worm began using frogs as an intermediate host before infecting dogs, making control more difficult than it would have been had these relationships been better understood.
Rationale for conservation
Although parasites are widely considered to be harmful, the eradication of all parasites would not be beneficial. Parasites account for at least half of life's diversity; they perform important ecological roles; and without parasites, organisms might tend to asexual reproduction, diminishing the diversity of traits brought about by sexual reproduction. Parasites provide an opportunity for the transfer of genetic material between species, facilitating evolutionary change. Many parasites require multiple hosts of different species to complete their life cycles and rely on predator-prey or other stable ecological interactions to get from one host to another. The presence of parasites thus indicates that an ecosystem is healthy.
An ectoparasite, the California condor louse, Colpocephalum californici, became a well-known conservation issue. A large and costly captive breeding program was run in the United States to rescue the California condor. It was host to a louse, which lived only on it. Any lice found were "deliberately killed" during the program, to keep the condors in the best possible health. The result was that one species, the condor, was saved and returned to the wild, while another species, the parasite, became extinct.
Although parasites are often omitted in depictions of food webs, they usually occupy the top position. Parasites can function like keystone species, reducing the dominance of superior competitors and allowing competing species to co-exist.
Quantitative ecology
A single parasite species usually has an aggregated distribution across host animals, which means that most hosts carry few parasites, while a few hosts carry the vast majority of parasite individuals. This poses considerable problems for students of parasite ecology, as it renders the parametric statistics commonly used by biologists invalid. Log-transformation of data before the application of parametric tests, or the use of non-parametric statistics, is recommended by several authors, but this can give rise to further problems, so quantitative parasitology is based on more advanced biostatistical methods.
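In quantitative parasitology such aggregation is conventionally modelled with a negative binomial distribution, whose dispersion parameter k shrinks as aggregation strengthens. The short sketch below is illustrative only: the mean burden and k are invented values, not data from any study, and it simply shows why a variance far exceeding the mean defeats normality-based parametric tests.

```python
# Illustrative sketch: aggregated parasite burdens via a negative binomial.
# The parameter values (mean_burden, k) are hypothetical, chosen only to
# demonstrate the pattern described in the text.
import numpy as np

rng = np.random.default_rng(seed=42)

mean_burden = 10.0  # hypothetical mean number of parasites per host
k = 0.5             # small dispersion parameter => strong aggregation

# NumPy parameterises the negative binomial by (n, p); converting from
# (mean, k) gives n = k and p = k / (k + mean).
counts = rng.negative_binomial(n=k, p=k / (k + mean_burden), size=10_000)

print(f"mean burden : {counts.mean():.1f}")
print(f"variance    : {counts.var():.1f}   # far exceeds the mean")
print(f"hosts with <= 2 parasites        : {(counts <= 2).mean():.0%}")
heaviest = np.sort(counts)[-500:]  # the 5% most heavily infected hosts
print(f"parasites carried by top 5% hosts: {heaviest.sum() / counts.sum():.0%}")
```

Because the distribution is so right-skewed, the sample mean is dominated by a handful of heavily infected hosts, which is the practical reason for preferring transformed data, non-parametric tests, or purpose-built biostatistical methods.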
History
Ancient
Human parasites including roundworms, the Guinea worm, threadworms and tapeworms are mentioned in Egyptian papyrus records from 3000 BC onwards; the Ebers Papyrus describes hookworm. In ancient Greece, parasites including the bladder worm are described in the Hippocratic Corpus, while the comic playwright Aristophanes called tapeworms "hailstones". The Roman physicians Celsus and Galen documented the roundworms Ascaris lumbricoides and Enterobius vermicularis.
Medieval
In his Canon of Medicine, completed in 1025, the Persian physician Avicenna recorded human and animal parasites including roundworms, threadworms, the Guinea worm and tapeworms.
In his 1397 book Traité de l'état, science et pratique de l'art de la Bergerie (Account of the state, science and practice of the art of shepherding), Jehan de Brie wrote the first description of a trematode endoparasite, the sheep liver fluke Fasciola hepatica.
Early modern
In the early modern period, Francesco Redi's 1668 book Esperienze Intorno alla Generazione degl'Insetti (Experiences of the Generation of Insects), explicitly described ecto- and endoparasites, illustrating ticks, the larvae of nasal flies of deer, and sheep liver fluke. Redi noted that parasites develop from eggs, contradicting the theory of spontaneous generation. In his 1684 book Osservazioni intorno agli animali viventi che si trovano negli animali viventi (Observations on Living Animals found in Living Animals), Redi described and illustrated over 100 parasites including the large roundworm in humans that causes ascariasis. Redi was the first to name the cysts of Echinococcus granulosus seen in dogs and sheep as parasitic; a century later, in 1760, Peter Simon Pallas correctly suggested that these were the larvae of tapeworms.
In 1681, Antonie van Leeuwenhoek observed and illustrated the protozoan parasite Giardia lamblia, and linked it to "his own loose stools". This was the first protozoan parasite of humans to be seen under a microscope. A few years later, in 1687, the Italian biologists Giovanni Cosimo Bonomo and Diacinto Cestoni described scabies as caused by the parasitic mite Sarcoptes scabiei, marking it as the first disease of humans with a known microscopic causative agent.
Parasitology
Modern parasitology developed in the 19th century with accurate observations and experiments by many researchers and clinicians; the term was first used in 1870. In 1828, James Annersley described amoebiasis, protozoal infections of the intestines and the liver, though the pathogen, Entamoeba histolytica, was not discovered until 1873 by Friedrich Lösch. James Paget discovered the intestinal nematode Trichinella spiralis in humans in 1835. James McConnell described the human liver fluke, Clonorchis sinensis, in 1875. Algernon Thomas and Rudolf Leuckart independently made the first discovery of the life cycle of a trematode, the sheep liver fluke, by experiment in 1881–1883. In 1877 Patrick Manson discovered the life cycle of the mosquito-transmitted filarial worms that cause elephantiasis. Manson further predicted that the malaria parasite, Plasmodium, had a mosquito vector, and persuaded Ronald Ross to investigate. Ross confirmed that the prediction was correct in 1897–1898. At the same time, Giovanni Battista Grassi and others described the malaria parasite's life cycle stages in Anopheles mosquitoes. Ross was controversially awarded the 1902 Nobel prize for his work, while Grassi was not. In 1903, David Bruce identified the protozoan parasite and the tsetse fly vector of African trypanosomiasis.
Vaccine
Given the importance of malaria, with some 220 million people infected annually, many attempts have been made to interrupt its transmission. Various methods of malaria prophylaxis have been tried including the use of antimalarial drugs to kill off the parasites in the blood, the eradication of its mosquito vectors with organochlorine and other insecticides, and the development of a malaria vaccine. All of these have proven problematic, with drug resistance, insecticide resistance among mosquitoes, and repeated failure of vaccines as the parasite mutates. The first and as of 2015 the only licensed vaccine for any parasitic disease of humans is RTS,S for Plasmodium falciparum malaria.
Biological control
Several groups of parasites, including microbial pathogens and parasitoid wasps, have been used as biological control agents in agriculture and horticulture.
Resistance
Poulin observes that the widespread prophylactic use of anthelmintic drugs in domestic sheep and cattle constitutes a worldwide uncontrolled experiment in the life-history evolution of their parasites. The outcomes depend on whether the drugs decrease the chance of a helminth larva reaching adulthood. If so, natural selection can be expected to favour the production of eggs at an earlier age. If on the other hand the drugs mainly affect adult parasitic worms, selection could cause delayed maturity and increased virulence. Such changes appear to be underway: the nematode Teladorsagia circumcincta is changing its adult size and reproductive rate in response to drugs.
Cultural significance
Classical times
In the classical era, the concept of the parasite was not strictly pejorative: the parasitus was an accepted role in Roman society, in which a person could live off the hospitality of others, in return for "flattery, simple services, and a willingness to endure humiliation".
Society
Parasitism has a derogatory sense in popular usage, a point made by the immunologist John Playfair.
The satirical cleric Jonathan Swift alludes to hyperparasitism in his 1733 poem "On Poetry: A Rhapsody", comparing poets to "vermin" who "teaze and pinch their foes".
A 2022 study examined the naming of some 3000 parasite species discovered in the previous two decades. Of those named after scientists, over 80% were named for men, whereas about a third of authors of papers on parasites were women. The study found that the percentage of parasite species named for relatives or friends of the author has risen sharply in the same period.
Fiction
In Bram Stoker's 1897 Gothic horror novel Dracula, and its many film adaptations, the eponymous Count Dracula is a blood-drinking parasite (a vampire). The critic Laura Otis argues that as a "thief, seducer, creator, and mimic, Dracula is the ultimate parasite. The whole point of vampirism is sucking other people's blood—living at other people's expense."
Disgusting and terrifying parasitic alien species are widespread in science fiction, as for instance in Ridley Scott's 1979 film Alien. In one scene, a Xenomorph bursts out of the chest of a dead man, with blood squirting out under high pressure assisted by explosive squibs. Animal organs were used to reinforce the shock effect. The scene was filmed in a single take, and the startled reaction of the actors was genuine.
The entomopathogenic fungus Cordyceps is represented culturally as a deadly threat to the human race. The video game series The Last of Us (2013–present) and its television adaptation present Cordyceps as a parasite of humans, causing a zombie apocalypse. Its human hosts initially become violent "infected" beings, before turning into blind zombie "clickers", complete with fruiting bodies growing out from their faces.
See also
Antiparasitic
Carcinogenic parasite
Effects of parasitic worms on the immune system
List of parasites of humans
Notes
References
Sources
Further reading
External links
Aberystwyth University: Parasitology—class outline with links to full text articles on parasitism and parasitology.
Division of Parasitic Diseases, Centers for Disease Control and Prevention
KSU: Parasitology Research—parasitology articles and links
Parasitology Resources on the World Wide Web: A Powerful Tool for Infectious Disease Practitioners (Oxford University Press)
Disease ecology
Ecology
Parasitology | Parasitism | [
"Biology"
] | 10,302 | [
"Parasitism",
"Symbiosis",
"Ecology"
] |
43,938 | https://en.wikipedia.org/wiki/Bacterial%20lawn | Bacterial lawn is a term used by microbiologists to describe the appearance of bacterial colonies when all the individual colonies on a Petri dish or agar plate merge to form a field or mat of bacteria. Bacterial lawns find use in screens for antibiotic resistance and bacteriophage titering.
Bacterial lawns (often of Serratia marcescens) are also used extensively as an assay method when bacteriophages are employed as tracers in studies of groundwater flow.
Although occasionally used as a synonym for biofilm, the term primarily applies to the simple, clonal, unstructured mats of organisms that typically only form on laboratory growth media. Biofilms—the aggregated form of microorganisms most commonly found in nature— are generally more complex and diverse and marked by larger quantities of extracellular structural matrix relative to the cellular biomass.
Techniques
Bacterial lawns can be produced manually by evenly spreading a high amount of bacteria onto an agar plate using a sterile cotton swab or a Drigalski spatula. Alternatively an automated machine can be used such as a spiral plater where the plate is rotated and the sample is spread evenly using an automated dispenser.
They may also be produced using the "pour plate" technique in which a concentrated inoculum of the appropriate bacteria are mixed with melted agar and spread evenly over the surface of a Petri dish.
See also
Antibiotic resistance
Miles-Misra method
Bacterial culture
Antibiotic sensitivity
Etest
References
Bacteria
Microbiology terms | Bacterial lawn | [
"Biology"
] | 309 | [
"Microbiology terms",
"Prokaryotes",
"Microorganisms",
"Bacteria"
] |
43,942 | https://en.wikipedia.org/wiki/Petri%20dish | A Petri dish (alternatively known as a Petri plate or cell-culture dish) is a shallow transparent lidded dish that biologists use to hold growth medium in which cells can be cultured, originally, cells of bacteria, fungi and small mosses. The container is named after its inventor, German bacteriologist Julius Richard Petri. It is the most common type of culture plate. The Petri dish is one of the most common items in biology laboratories and has entered popular culture. The term is sometimes written in lower case, especially in non-technical literature.
What was later called the Petri dish was originally developed in 1881 by the German physician Robert Koch, working in his private laboratory, as a precursor method. Petri, then an assistant to Koch at Berlin University, made the final modifications in 1887, producing the design used today.
Penicillin, the first antibiotic, was discovered in 1929 when Alexander Fleming noticed that penicillium mold contaminating a bacterial culture in a Petri dish had killed the bacteria around it.
History
The Petri dish was developed by German physician Julius Richard Petri (after whom the name is given) while working as an assistant to Robert Koch at Berlin University. Petri did not invent the culture dish himself; rather, it was a modified version of Koch's invention which used an agar medium that was developed by Walther Hesse. Koch had published a precursor dish in a booklet in 1881 titled "Zur Untersuchung von pathogenen Organismen" (Methods for the Study of Pathogenic Organisms), which has been known as the "Bible of Bacteriology". He described a new bacterial culture method that used a glass slide with agar and a container (basically a Petri dish, a circular glass dish of 20 × 5 cm with matching lid) which he called a feuchte Kammer ("moist chamber"). A bacterial culture was spread on the glass slide, then placed in the moist chamber with a small wet paper. Bacterial growth was easily visible.
Koch publicly demonstrated his plating method at the Seventh International Medical Congress in London in August 1881. There, Louis Pasteur exclaimed, "What a great progress, Sir!" It was using this method that Koch discovered important pathogens of tuberculosis (Mycobacterium tuberculosis), anthrax (Bacillus anthracis), and cholera (Vibrio cholerae). For his research on tuberculosis, he was awarded the Nobel Prize in Physiology or Medicine in 1905. His students also made important discoveries. Friedrich Loeffler discovered the bacteria of glanders (Burkholderia mallei) in 1882 and diphtheria (Corynebacterium diphtheriae) in 1884; and Georg Theodor August Gaffky, the bacterium of typhoid (Salmonella enterica) in 1884.
Petri made changes in how the circular dish was used. It is often asserted that Petri developed a new culture plate, but this is incorrect. Instead of using a separate glass slide or plate on which culture media were placed, Petri directly placed media into the glass dish, eliminating unnecessary steps such as transferring the culture media and using the wet paper, and reducing the chance of contamination. He published the improved method in 1887 as "A minor modification of the plating technique of Koch". Although it could have been named the "Koch dish", the final method was given the eponymous name Petri dish.
Features and variants
Petri dishes are usually cylindrical and are made in a range of diameters, with a height-to-diameter ratio ranging from 1:10 to 1:4. Squarish versions are also available.
Petri dishes were traditionally reusable and made of glass; often of heat-resistant borosilicate glass for proper sterilization at 120–160 °C.
Since the 1960s, plastic dishes, usually disposable, are also common.
The dishes are often covered with a shallow transparent lid, resembling a slightly wider version of the dish itself. The lids of glass dishes are usually loose-fitting. Plastic dishes may have close-fitting covers that delay the drying of the contents. Alternatively, some glass or plastic versions may have small holes around the rim, or ribs on the underside of the cover, to allow for air flow over the culture and prevent water condensation.
Some Petri dishes, especially plastic ones, usually feature rings and/or slots on their lids and bases so that they are less prone to sliding off one another when stacked or sticking to a smooth surface by suction.
Small dishes may have a protruding base that can be secured on a microscope stage for direct examination.
Some versions may have grids printed on the bottom to help in measuring the density of cultures.
A microplate is a single container with an array of flat-bottomed cavities, each being essentially a small Petri dish. It makes it possible to inoculate and grow dozens or hundreds of independent cultures of dozens of samples at the same time. Besides being much cheaper and convenient than separate dishes, the microplate is also more amenable to automated handling and inspection.
Some plates are separated into different mediums known as biplates, triplates, and quadplates.
Uses
Petri dishes are widely used in biology to cultivate microorganisms such as bacteria, yeasts, and molds. They are most suited for organisms that thrive on a solid or semisolid surface. The culture medium is often an agar plate, a layer of agar or agarose gel a few millimetres thick containing whatever nutrients the organism requires (such as blood, salts, carbohydrates, amino acids) and other desired ingredients (such as dyes, indicators, and medicinal drugs). The agar and other ingredients are dissolved in warm water and poured into the dish and left to cool down. Once the medium solidifies, a sample of the organism is inoculated ("plated"). The dishes are then left undisturbed for hours or days while the organism grows, possibly in an incubator. They are usually covered, or placed upside-down, to lessen the risk of contamination from airborne spores. Virus or phage cultures require that a population of bacteria be grown in the dish first, which then becomes the culture medium for the viral inoculum.
While Petri dishes are widespread in microbiological research, smaller dishes tend to be used for large-scale studies in which growing cells in Petri dishes can be relatively expensive and labor-intensive.
Petri dishes can be used to visualize the location of contamination on surfaces, such as kitchen counters and utensils, clothing, food preparation equipment, or animal and human skin. For this application, the Petri dishes may be filled so that the culture medium protrudes slightly above the edges of the dish to make it easier to take samples on hard objects. Shallow Petri dishes prepared in this way are called Replicate Organism Detection And Counting (RODAC) plates and are available commercially.
Petri dishes are also used for cell cultivation of isolated cells from eukaryotic organisms, such as in immunodiffusion studies, on solid agar or in a liquid medium.
Petri dishes may be used to observe the early stages of plant germination, and to grow plants asexually from isolated cells.
Petri dishes may be convenient enclosures to study the behavior of insects and other small animals.
Due to their large open surface, Petri dishes are effective containers to evaporate solvents and dry out precipitates, either at room temperature or in ovens and desiccators.
Petri dishes also make convenient temporary storage for samples, especially liquid, granular, or powdered ones, and small objects such as insects or seeds. Their transparency and flat profile allows the contents to be inspected with the naked eye, magnifying glass, or low-power microscope without removing the lid.
In popular culture
The Petri dish is one of a small number of laboratory equipment items whose name entered popular culture. It is often used metaphorically, e.g. for a contained community that is being studied as if they were microorganisms in a biology experiment, or an environment where original ideas and enterprises may flourish.
Unicode has a Petri dish emoji, "🧫", which has the code point U+1F9EB (HTML entity "🧫" or "🧫", UTF-8 "0xF0 0x9F 0xA7 0xAB").
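These encoding details can be checked mechanically. The following Python snippet is an illustrative verification added here, not part of the original article; it prints the character, its code point, its UTF-8 bytes, and both HTML entity forms:

```python
# Verify the Unicode data quoted above for the Petri dish emoji.
petri = "\U0001F9EB"  # code point U+1F9EB

print(petri)                           # 🧫
print(f"U+{ord(petri):X}")             # U+1F9EB
print(petri.encode("utf-8").hex(" "))  # f0 9f a7 ab
print(f"&#x{ord(petri):X};")           # hex HTML entity  &#x1F9EB;
print(f"&#{ord(petri)};")              # decimal HTML entity  &#129515;
```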
See also
References
External links
Laboratory glassware
Microbiology equipment
German inventions
1887 in science
1887 in Germany | Petri dish | [
"Biology"
] | 1,764 | [
"Microbiology equipment"
] |
43,946 | https://en.wikipedia.org/wiki/Biofilm | A biofilm is a syntrophic community of microorganisms in which cells stick to each other and often also to a surface. These adherent cells become embedded within a slimy extracellular matrix that is composed of extracellular polymeric substances (EPSs). The cells within the biofilm produce the EPS components, which are typically a polymeric combination of extracellular polysaccharides, proteins, lipids and DNA. Because they have a three-dimensional structure and represent a community lifestyle for microorganisms, they have been metaphorically described as "cities for microbes".
Biofilms may form on living (biotic) or non-living (abiotic) surfaces and can be common in natural, industrial, and hospital settings. They may constitute a microbiome or be a portion of it. The microbial cells growing in a biofilm are physiologically distinct from planktonic cells of the same organism, which, by contrast, are single cells that may float or swim in a liquid medium. Biofilms can form on the teeth of most animals as dental plaque, where they may cause tooth decay and gum disease.
Microbes form a biofilm in response to a number of different factors, which may include cellular recognition of specific or non-specific attachment sites on a surface, nutritional cues, or in some cases, by exposure of planktonic cells to sub-inhibitory concentrations of antibiotics. A cell that switches to the biofilm mode of growth undergoes a phenotypic shift in behavior in which large suites of genes are differentially regulated.
A biofilm may also be considered a hydrogel, which is a complex polymer that contains many times its dry weight in water. Biofilms are not just bacterial slime layers but biological systems; the bacteria organize themselves into a coordinated functional community. Biofilms can attach to a surface such as a tooth or rock, and may include a single species or a diverse group of microorganisms. Subpopulations of cells within the biofilm differentiate to perform various activities for motility, matrix production, and sporulation, supporting the overall success of the biofilm. The biofilm bacteria can share nutrients and are sheltered from harmful factors in the environment, such as desiccation, antibiotics, and a host body's immune system. A biofilm usually begins to form when a free-swimming, planktonic bacterium attaches to a surface.
Origin and formation
Origin of biofilms
Biofilms are thought to have arisen during primitive Earth as a defense mechanism for prokaryotes, as the conditions at that time were too harsh for their survival. They can be found very early in Earth's fossil records (about 3.25 billion years ago) as both Archaea and Bacteria, and commonly protect prokaryotic cells by providing them with homeostasis, encouraging the development of complex interactions between the cells in the biofilm.
Formation of biofilms
The formation of a biofilm begins with the attachment of free-floating microorganisms to a surface. The first colonist bacteria of a biofilm may adhere to the surface initially by the weak van der Waals forces and hydrophobic effects. If the colonists are not immediately separated from the surface, they can anchor themselves more permanently using cell adhesion structures such as pili. A unique group of Archaea that inhabit anoxic groundwater have similar structures called hami. Each hamus is a long tube with three hook attachments that are used to attach to each other or to a surface, enabling a community to develop. The hyperthermophilic archaeon Pyrobaculum calidifontis produces bundling pili which are homologous to the bacterial TasA filaments, a major component of the extracellular matrix in bacterial biofilms, which contribute to biofilm stability. TasA homologs are encoded by many other archaea, suggesting mechanistic similarities and an evolutionary connection between bacterial and archaeal biofilms.
Hydrophobicity can also affect the ability of bacteria to form biofilms. Bacteria with increased hydrophobicity have reduced repulsion between the substratum and the bacterium. Some bacteria species are not able to attach to a surface on their own successfully due to their limited motility but are instead able to anchor themselves to the matrix or directly to other, earlier bacteria colonists. Non-motile bacteria cannot recognize surfaces or aggregate together as easily as motile bacteria.
During surface colonization bacteria cells are able to communicate using quorum sensing (QS) products such as N-acyl homoserine lactone (AHL). Once colonization has begun, the biofilm grows by a combination of cell division and recruitment. Polysaccharide matrices typically enclose bacterial biofilms. The matrix exopolysaccharides can trap QS autoinducers within the biofilm to prevent predator detection and ensure bacterial survival. In addition to the polysaccharides, these matrices may also contain material from the surrounding environment, including but not limited to minerals, soil particles, and blood components, such as erythrocytes and fibrin. The final stage of biofilm formation is known as development, and is the stage in which the biofilm is established and may only change in shape and size.
The development of a biofilm may allow for an aggregate cell colony to be increasingly tolerant or resistant to antibiotics. Cell-cell communication or quorum sensing has been shown to be involved in the formation of biofilm in several bacterial species.
Development
Biofilms are the product of a microbial developmental process. The process is commonly summarized as five major stages of biofilm development: initial (reversible) attachment, irreversible attachment, two stages of maturation, and dispersion.
Dispersal
Dispersal of cells from the biofilm colony is an essential stage of the biofilm life cycle. Dispersal enables biofilms to spread and colonize new surfaces. Enzymes that degrade the biofilm extracellular matrix, such as dispersin B and deoxyribonuclease, may contribute to biofilm dispersal and may therefore be useful as anti-biofilm agents. Evidence has shown that a fatty acid messenger, cis-2-decenoic acid, is capable of inducing dispersion and inhibiting growth of biofilm colonies. Secreted by Pseudomonas aeruginosa, this compound induces cyclo heteromorphic cells in several species of bacteria and the yeast Candida albicans.
Nitric oxide has also been shown to trigger the dispersal of biofilms of several bacteria species at sub-toxic concentrations. Nitric oxide has potential as a treatment for patients that have chronic infections caused by biofilms.
It was generally assumed that cells dispersed from biofilms immediately go into the planktonic growth phase. However, studies have shown that the physiology of dispersed cells from Pseudomonas aeruginosa biofilms is highly different from that of planktonic and biofilm cells. Hence, the dispersal process is a unique stage during the transition from biofilm to planktonic lifestyle in bacteria. Dispersed cells are found to be highly virulent against macrophages and Caenorhabditis elegans, but highly sensitive towards iron stress, as compared with planktonic cells.
Furthermore, Pseudomonas aeruginosa biofilms undergo distinct spatiotemporal dynamics during biofilm dispersal or disassembly, with contrasting consequences in recolonization and disease dissemination. Biofilm dispersal induced bacteria to activate dispersal genes to actively depart from biofilms as single cells at consistent velocities but could not recolonize fresh surfaces. In contrast, biofilm disassembly by degradation of a biofilm exopolysaccharide released immotile aggregates at high initial velocities, enabling the bacteria to recolonize fresh surfaces and cause infections in the hosts efficiently. Hence, biofilm dispersal is more complex than previously thought, where bacterial populations adopting distinct behavior after biofilm departure may be the key to survival of bacterial species and dissemination of diseases.
Properties
Biofilms are usually found on solid substrates submerged in or exposed to an aqueous solution, although they can form as floating mats on liquid surfaces and also on the surface of leaves, particularly in high humidity climates. Given sufficient resources for growth, a biofilm will quickly grow to be macroscopic (visible to the naked eye). Biofilms can contain many different types of microorganism, e.g. bacteria, archaea, protozoa, fungi and algae; each group performs specialized metabolic functions. However, some organisms will form single-species films under certain conditions. The social structure (cooperation/competition) within a biofilm depends highly on the different species present.
Extracellular matrix
The EPS matrix consists of exopolysaccharides, proteins and nucleic acids. A large proportion of the EPS is more or less strongly hydrated, however, hydrophobic EPS also occur; one example is cellulose which is produced by a range of microorganisms. This matrix encases the cells within it and facilitates communication among them through biochemical signals as well as gene exchange. The EPS matrix also traps extracellular enzymes and keeps them in close proximity to the cells. Thus, the matrix represents an external digestion system and allows for stable synergistic microconsortia of different species. Some biofilms have been found to contain water channels that help distribute nutrients and signalling molecules. This matrix is strong enough that under certain conditions, biofilms can become fossilized (stromatolites).
Bacteria living in a biofilm usually have significantly different properties from free-floating bacteria of the same species, as the dense and protected environment of the film allows them to cooperate and interact in various ways. One benefit of this environment is increased resistance to detergents and antibiotics, as the dense extracellular matrix and the outer layer of cells protect the interior of the community. In some cases antibiotic resistance can be increased up to 5,000 times. Lateral gene transfer is often facilitated within bacterial and archaeal biofilms and can lead to a more stable biofilm structure. Extracellular DNA is a major structural component of many different microbial biofilms. Enzymatic degradation of extracellular DNA can weaken the biofilm structure and release microbial cells from the surface.
However, biofilms are not always less susceptible to antibiotics. For instance, the biofilm form of Pseudomonas aeruginosa has no greater resistance to antimicrobials than do stationary-phase planktonic cells, although when the biofilm is compared to logarithmic-phase planktonic cells, the biofilm does have greater resistance to antimicrobials. This resistance to antibiotics in both stationary-phase cells and biofilms may be due to the presence of persister cells.
Habitats
Biofilms are ubiquitous in organic life. Nearly every microorganism has mechanisms by which it can adhere to surfaces and to other microorganisms. Biofilms will form on virtually every non-shedding surface in non-sterile aqueous or humid environments. Biofilms can grow in the most extreme environments: from, for example, the extremely hot, briny waters of hot springs ranging from very acidic to very alkaline, to frozen glaciers.
Biofilms can be found on rocks and pebbles at the bottoms of most streams or rivers and often form on the surfaces of stagnant pools of water. Biofilms are important components of food chains in rivers and streams and are grazed by the aquatic invertebrates upon which many fish feed. Biofilms are found on the surface of and inside plants. They can either contribute to crop disease or, as in the case of nitrogen-fixing rhizobia on root nodules, exist symbiotically with the plant. Examples of crop diseases related to biofilms include citrus canker, Pierce's disease of grapes, and bacterial spot of plants such as peppers and tomatoes.
Percolating filters
Percolating filters in sewage treatment works are highly effective removers of pollutants from settled sewage liquor. They work by trickling the liquid over a bed of hard material which is designed to have a very large surface area. A complex biofilm develops on the surface of the medium which absorbs, adsorbs and metabolises the pollutants. The biofilm grows rapidly and when it becomes too thick to retain its grip on the media it washes off and is replaced by newly grown film. The washed off ("sloughed" off) film is settled out of the liquid stream to leave a highly purified effluent.
Slow sand filter
Slow sand filters are used in water purification for treating raw water to produce a potable product. They work through the formation of a biofilm called the hypogeal layer or Schmutzdecke in the top few millimetres of the fine sand layer. The Schmutzdecke is formed in the first 10–20 days of operation and consists of bacteria, fungi, protozoa, rotifera and a range of aquatic insect larvae. As an epigeal biofilm ages, more algae tend to develop and larger aquatic organisms may be present including some bryozoa, snails and annelid worms. The surface biofilm is the layer that provides the effective purification in potable water treatment, the underlying sand providing the support medium for this biological treatment layer. As water passes through the hypogeal layer, particles of foreign matter are trapped in the mucilaginous matrix and soluble organic material is adsorbed. The contaminants are metabolised by the bacteria, fungi and protozoa. The water produced from an exemplary slow sand filter is of excellent quality with 90–99% bacterial cell count reduction.
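As a brief worked example (added for illustration; only the 90–99% range comes from the text above), percentage removals translate into the log-reduction values (LRV) conventionally used to compare water-treatment steps:

```python
# Convert the quoted 90-99% bacterial cell count reductions into
# log-reduction values (LRV), the usual water-treatment metric.
import math

def log_reduction(fraction_removed: float) -> float:
    """LRV = log10(initial count / remaining count)."""
    return -math.log10(1.0 - fraction_removed)

for removed in (0.90, 0.99):
    print(f"{removed:.0%} removal -> {log_reduction(removed):.0f}-log reduction")
# 90% removal -> 1-log reduction
# 99% removal -> 2-log reduction
```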
Rhizosphere
Plant-beneficial microbes can be categorized as plant growth-promoting rhizobacteria. These plant growth-promoters colonize the roots of plants, and provide a wide range of beneficial functions for their host including nitrogen fixation, pathogen suppression, anti-fungal properties, and the breakdown of organic materials. One of these functions is the defense against pathogenic, soil-borne bacteria and fungi by way of induced systemic resistance (ISR) or induced systemic responses triggered by pathogenic microbes (pathogen-induced systemic acquired resistance). Plant exudates act as chemical signals for host-specific bacteria to colonize. Rhizobacteria colonization steps include attraction, recognition, adherence, colonization, and growth. Bacteria that have been shown to be beneficial and form biofilms include Bacillus, Pseudomonas, and Azospirillum. Biofilms in the rhizosphere often result in pathogen- or plant-induced systemic resistances. Molecular properties on the surface of the bacterium cause an immune response in the plant host. These microbe-associated molecules interact with receptors on the surface of plant cells, and activate a biochemical response that is thought to include several different genes at a number of loci. Several other signaling molecules have been linked to both induced systemic responses and pathogen-induced systemic responses, such as jasmonic acid and ethylene. Cell envelope components such as bacterial flagella and lipopolysaccharides are recognized by plant cells as components of pathogens. Certain iron metabolites produced by Pseudomonas have also been shown to create an induced systemic response. This function of the biofilm helps plants build stronger resistance to pathogens.
Plants that have been colonized by PGPR forming a biofilm have gained systemic resistances and are primed for defense against pathogens. This means that the genes necessary for the production of proteins that work towards defending the plant against pathogens have been expressed, and the plant has a "stockpile" of compounds to release to fight off pathogens. A primed defense system is much faster in responding to pathogen induced infection, and may be able to deflect pathogens before they are able to establish themselves. Plants increase the production of lignin, reinforcing cell walls and making it difficult for pathogens to penetrate into the cell, while also cutting off nutrients to already infected cells, effectively halting the invasion. They produce antimicrobial compounds such as phytoalexins, chitinases, and proteinase inhibitors, which prevent the growth of pathogens. These functions of disease suppression and pathogen resistance ultimately lead to an increase in agricultural production and a decrease in the use of chemical pesticides, herbicides, and fungicides because there is a reduced amount of crop loss due to disease. Induced systemic resistance and pathogen-induced systemic acquired resistance are both potential functions of biofilms in the rhizosphere, and should be taken into consideration when applied to new age agricultural practices because of their effect on disease suppression without the use of dangerous chemicals.
Mammalian gut
Studies in 2003 discovered that the immune system supports biofilm development in the large intestine. This was supported mainly by the fact that the two molecules most abundantly produced by the immune system also support biofilm production and are associated with the biofilms developed in the gut. This is especially important because the appendix holds a large amount of these bacterial biofilms. This discovery helps to distinguish the possible function of the appendix and supports the idea that the appendix can help reinoculate the gut with good gut flora. However, modified or disrupted states of biofilms in the gut have been connected to diseases such as inflammatory bowel disease and colorectal cancer.
Human environment
In the human environment, biofilms can grow in showers very easily since they provide a moist and warm environment for them to thrive. Mold biofilms on ceilings may form due to roof leaks. They can form inside water and sewage pipes and cause clogging and corrosion. On floors and counters, they can make sanitation difficult in food preparation areas. In soil, they can cause bioclogging. In cooling- or heating-water systems, they are known to reduce heat transfer. Biofilms in marine engineering systems, such as pipelines of the offshore oil and gas industry, can lead to substantial corrosion problems. Corrosion is mainly due to abiotic factors; however, at least 20% of corrosion is caused by microorganisms that are attached to the metal subsurface (i.e., microbially influenced corrosion).
Ship fouling
Bacterial adhesion to boat hulls serves as the foundation for biofouling of seagoing vessels. Once a film of bacteria forms, it is easier for other marine organisms such as barnacles to attach. Such fouling can reduce maximum vessel speed by up to 20%, prolonging voyages and consuming fuel. Time in dry dock for refitting and repainting reduces the productivity of shipping assets, and the useful life of ships is also reduced due to corrosion and mechanical removal (scraping) of marine organisms from ships' hulls.
Stromatolites
Stromatolites are layered accretionary structures formed in shallow water by the trapping, binding and cementation of sedimentary grains by microbial biofilms, especially of cyanobacteria. Stromatolites include some of the most ancient records of life on Earth, and are still forming today.
Dental plaque
Within the human body, biofilms are present on the teeth as dental plaque, where they may cause tooth decay and gum disease. These biofilms can either be in an uncalcified state that can be removed by dental instruments, or a calcified state which is more difficult to remove. Removal techniques can also include antimicrobials.
Dental plaque is an oral biofilm that adheres to the teeth and consists of many species of both bacteria and fungi (such as Streptococcus mutans and Candida albicans), embedded in salivary polymers and microbial extracellular products. The accumulation of microorganisms subjects the teeth and gingival tissues to high concentrations of bacterial metabolites which results in dental disease. Biofilm on the surface of teeth is frequently subject to oxidative stress and acid stress. Dietary carbohydrates can cause a dramatic decrease in pH in oral biofilms to values of 4 and below (acid stress). A pH of 4 at body temperature of 37 °C causes depurination of DNA, leaving apurinic (AP) sites in DNA, especially loss of guanine.
Dental plaque biofilm can result in dental caries if it is allowed to develop over time. An ecologic shift away from balanced populations within the dental biofilm is driven by certain (cariogenic) microbiological populations beginning to dominate when the environment favors them. The shift to an acidogenic, aciduric, and cariogenic microbiological population develops and is maintained by frequent consumption of fermentable dietary carbohydrate. The resulting activity shift in the biofilm (and resulting acid production within the biofilm, at the tooth surface) is associated with an imbalance of demineralization over remineralization, leading to net mineral loss within dental hard tissues (enamel and then dentin), the symptom being a carious lesion, or cavity. By preventing the dental plaque biofilm from maturing or by returning it back to a non-cariogenic state, dental caries can be prevented and arrested. This can be achieved through the behavioral step of reducing the supply of fermentable carbohydrates (i.e. sugar intake) and frequent removal of the biofilm (i.e., toothbrushing).
Intercellular communication
A peptide pheromone quorum sensing signaling system in S. mutans includes the competence stimulating peptide (CSP) that controls genetic competence. Genetic competence is the ability of a cell to take up DNA released by another cell. Competence can lead to genetic transformation, a form of sexual interaction, favored under conditions of high cell density and/or stress where there is maximal opportunity for interaction between the competent cell and the DNA released from nearby donor cells. This system is optimally expressed when S. mutans cells reside in an actively growing biofilm. Biofilm grown S. mutans cells are genetically transformed at a rate 10- to 600-fold higher than S. mutans growing as free-floating planktonic cells suspended in liquid.
When the biofilm, containing S. mutans and related oral streptococci, is subjected to acid stress, the competence regulon is induced, leading to resistance to being killed by acid. As pointed out by Michod et al., transformation in bacterial pathogens likely provides for effective and efficient recombinational repair of DNA damages. It appears that S. mutans can survive the frequent acid stress in oral biofilms, in part, through the recombinational repair provided by competence and transformation.
Predator-prey interactions
Predator-prey interactions between biofilms and bacterivores, such as the soil-dwelling nematode Caenorhabditis elegans, have been extensively studied. Via the production of sticky matrix and formation of aggregates, Yersinia pestis biofilms can prevent feeding by obstructing the mouth of C. elegans. Moreover, Pseudomonas aeruginosa biofilms can impede the slithering motility of C. elegans, termed the 'quagmire phenotype', resulting in trapping of C. elegans within the biofilms and preventing the nematodes from exploring and feeding on susceptible biofilms. This significantly reduces the predator's ability to feed and reproduce, thereby promoting the survival of the biofilms. Pseudomonas aeruginosa biofilms can also mask their chemical signatures: they reduce the diffusion of quorum sensing molecules into the environment and so prevent detection by C. elegans.
Taxonomic diversity
Many different bacteria form biofilms, including gram-positive (e.g. Bacillus spp, Listeria monocytogenes, Staphylococcus spp, and lactic acid bacteria, including Lactobacillus plantarum and Lactococcus lactis) and gram-negative species (e.g. Escherichia coli, or Pseudomonas aeruginosa). Cyanobacteria also form biofilms in aquatic environments.
Biofilms are formed by bacteria that colonize plants, e.g. Pseudomonas putida, Pseudomonas fluorescens, and related pseudomonads which are common plant-associated bacteria found on leaves, roots, and in the soil, and the majority of their natural isolates form biofilms. Several nitrogen-fixing symbionts of legumes such as Rhizobium leguminosarum and Sinorhizobium meliloti form biofilms on legume roots and other inert surfaces.
Along with bacteria, biofilms are also generated by archaea and by a range of eukaryotic organisms, including fungi e.g. Cryptococcus laurentii and microalgae. Among microalgae, one of the main progenitors of biofilms are diatoms, which colonise both fresh and marine environments worldwide.
For other species in disease-associated biofilms and biofilms arising from eukaryotes, see below.
Infectious diseases
Biofilms have been found to be involved in a wide variety of microbial infections in the body, by one estimate 80% of all infections. Infectious processes in which biofilms have been implicated include common problems such as bacterial vaginosis, urinary tract infections, catheter infections, middle-ear infections, formation of dental plaque, gingivitis, coating contact lenses, and less common but more lethal processes such as endocarditis, infections in cystic fibrosis, and infections of permanent indwelling devices such as joint prostheses, heart valves, and intervertebral disc. The first visual evidence of a biofilm was recorded after spine surgery. It was found that in the absence of clinical presentation of infection, impregnated bacteria could form a biofilm around an implant, and this biofilm can remain undetected via contemporary diagnostic methods, including swabbing. Implant biofilm is frequently present in "aseptic" pseudarthrosis cases. Furthermore, it has been noted that bacterial biofilms may impair cutaneous wound healing and reduce topical antibacterial efficiency in healing or treating infected skin wounds. The diversity of P. aeruginosa cells within a biofilm is thought to make it harder to treat the infected lungs of people with cystic fibrosis. Early detection of biofilms in wounds is crucial to successful chronic wound management. Although many techniques have developed to identify planktonic bacteria in viable wounds, few have been able to quickly and accurately identify bacterial biofilms. Future studies are needed to find means of identifying and monitoring biofilm colonization at the bedside to permit timely initiation of treatment.
It has been shown that biofilms are present on the removed tissue of 80% of patients undergoing surgery for chronic sinusitis. The patients with biofilms were shown to have been denuded of cilia and goblet cells, unlike the controls without biofilms who had normal cilia and goblet cell morphology. Biofilms were also found on samples from two of the 10 healthy controls. The species of bacteria from intraoperative cultures did not correspond to the bacterial species in the biofilm on the respective patient's tissue. In other words, the cultures were negative though the bacteria were present. New staining techniques are being developed to differentiate bacterial cells growing in living animals, e.g. from tissues with allergy-inflammations.
Research has shown that sub-therapeutic levels of β-lactam antibiotics induce biofilm formation in Staphylococcus aureus. This sub-therapeutic level of antibiotic may result from the use of antibiotics as growth promoters in agriculture, or from the normal course of antibiotic therapy. The biofilm formation induced by low-level methicillin was inhibited by DNase, suggesting that the sub-therapeutic levels of antibiotic also induce extracellular DNA release. Moreover, from an evolutionary point of view, engineering a tragedy of the commons in pathogenic microbes may provide advanced therapeutic approaches for chronic biofilm infections: genetically engineered invasive 'cheater' strains could invade wild-type 'cooperator' populations of pathogenic bacteria until the cooperator population, or the overall population of cooperators and cheaters, goes extinct.
Pseudomonas aeruginosa
P. aeruginosa represents a commonly used biofilm model organism since it is involved in different types of biofilm-associated chronic infections. Examples of such infections include chronic wounds, chronic otitis media, chronic prostatitis and chronic lung infections in cystic fibrosis (CF) patients. About 80% of CF patients have chronic lung infection, caused mainly by P. aeruginosa growing in non-surface-attached biofilms surrounded by polymorphonuclear leukocytes (PMNs). The infection remains present despite aggressive antibiotic therapy and is a common cause of death in CF patients due to constant inflammatory damage to the lungs. In patients with CF, one therapy for treating early biofilm development is to employ DNase to structurally weaken the biofilm.
Biofilm formation of P. aeruginosa, along with other bacteria, is found in 90% of chronic wound infections, which leads to poor healing and high cost of treatment estimated at more than US$25 billion every year in the United States. In order to minimize the P. aeruginosa infection, host epithelial cells secrete antimicrobial peptides, such as lactoferrin, to prevent the formation of the biofilms.
Streptococcus pneumoniae
Streptococcus pneumoniae is the main cause of community-acquired pneumonia and meningitis in children and the elderly, and of sepsis in HIV-infected persons. When S. pneumoniae grows in biofilms, genes are specifically expressed that respond to oxidative stress and induce competence. Formation of a biofilm depends on competence stimulating peptide (CSP). CSP also functions as a quorum-sensing peptide. It not only induces biofilm formation, but also increases virulence in pneumonia and meningitis.
It has been proposed that competence development and biofilm formation is an adaptation of S. pneumoniae to survive the defenses of the host. In particular, the host's polymorphonuclear leukocytes produce an oxidative burst to defend against the invading bacteria, and this response can kill bacteria by damaging their DNA. Competent S. pneumoniae in a biofilm have the survival advantage that they can more easily take up transforming DNA from nearby cells in the biofilm to use for recombinational repair of oxidative damages in their DNA. Competent S. pneumoniae can also secrete an enzyme (murein hydrolase) that destroys non-competent cells (fratricide) causing DNA to be released into the surrounding medium for potential use by the competent cells.
The insect antimicrobial peptide cecropin A can destroy planktonic and sessile biofilm-forming uropathogenic E. coli cells, either alone or when combined with the antibiotic nalidixic acid, synergistically clearing infection in vivo (in the insect host Galleria mellonella) without off-target cytotoxicity. The multi-target mechanism of action involves outer membrane permeabilization followed by biofilm disruption triggered by the inhibition of efflux pump activity and interactions with extracellular and intracellular nucleic acids.
Escherichia coli
Escherichia coli biofilms are responsible for many intestinal infectious diseases. Extraintestinal pathogenic E. coli (ExPEC) are the dominant bacterial group attacking the urinary system, leading to urinary tract infections. The biofilms formed by these pathogenic E. coli are hard to eradicate due to the complexity of their aggregation structure, and they contribute significantly to aggressive medical complications, increased hospitalization rates, and higher treatment costs. The development of E. coli biofilms is a common leading cause of urinary tract infections (UTIs) in hospitals through its contribution to medical device-associated infections. Catheter-associated urinary tract infections (CAUTI) represent the most common hospital-acquired infection, owing to the formation of pathogenic E. coli biofilms inside catheters.
Staphylococcus aureus
The pathogen Staphylococcus aureus can attack the skin and lungs, leading to skin infections and pneumonia. Moreover, the biofilm infection network of S. aureus plays a critical role in preventing immune cells, such as macrophages, from eliminating and destroying bacterial cells. Furthermore, biofilm formation by bacteria such as S. aureus not only develops resistance against antibiotic medication but also develops internal resistance toward antimicrobial peptides (AMPs), preventing inhibition of the pathogen and maintaining its survival.
Serratia marcescens
Serratia marcescens is a fairly common opportunistic pathogen that can form biofilms on various surfaces, including medical devices such as catheters and implants, as well as natural environments like soil and water. The formation of biofilms by S. marcescens is a serious concern because of its ability to adhere to and colonize surfaces, protecting itself from host immune responses and antimicrobial agents. This resilience makes infections caused by S. marcescens challenging to treat, particularly in hospitals, where the bacterium can cause severe infections.
Research suggests that biofilm formation by S. marcescens is a process controlled by both nutrient cues and the quorum-sensing system. Quorum sensing influences the bacterium's ability to adhere to surfaces and establish mature biofilms, whereas the availability of specific nutrients can enhance or inhibit biofilm development.
S. marcescens creates biofilms that ultimately develop into a highly porous, thread-like structure composed of chains of cells, filaments, and cell clusters. Research has shown that S. marcescens biofilms exhibit complex structural organization, including the formation of microcolonies and channels that facilitate nutrient and waste exchange. The production of extracellular polymeric substances (EPS) is a key factor in biofilm development, contributing to the bacterium's adhesion and resistance to antimicrobial agents. In addition to its role in healthcare-associated infections, S. marcescens biofilms have been implicated in the deterioration of industrial equipment and processes. For example, biofilm growth in cooling towers can lead to biofouling and reduced efficiency.
Efforts to control and prevent biofilm formation by S. marcescens involve the use of antimicrobial coatings on medical devices, the development of targeted biofilm disruptors, and improved sterilization protocols. Further research into the molecular mechanisms governing S. marcescens biofilm formation and persistence is crucial for developing effective strategies to combat its associated risks. Indole compounds have also been studied as potential agents for protection against biofilm formation.
Uses and impact
In medicine
It is suggested that around two-thirds of bacterial infections in humans involve biofilms. Infections associated with biofilm growth are usually challenging to eradicate. This is mostly because mature biofilms display antimicrobial tolerance and immune response evasion. Biofilms often form on the inert surfaces of implanted devices such as catheters, prosthetic cardiac valves and intrauterine devices. Some of the most difficult infections to treat are those associated with the use of medical devices.
The rapidly expanding worldwide industry for biomedical devices and tissue engineering related products is already at $180 billion per year, yet this industry continues to suffer from microbial colonization. No matter the sophistication, microbial infections can develop on all medical devices and tissue engineering constructs. 60–70% of hospital-acquired infections are associated with the implantation of a biomedical device. This leads to 2 million cases annually in the U.S., costing the healthcare system over $5 billion in additional healthcare expenses.
The level of antibiotic resistance in a biofilm is much greater than that of non-biofilm bacteria, and can be as much as 5,000 times greater. The extracellular matrix of biofilm is considered one of the leading factors that can reduce the penetration of antibiotics into a biofilm structure and contributes to antibiotic resistance. Further, it has been demonstrated that the evolution of resistance to antibiotics may be affected by the biofilm lifestyle. Bacteriophage therapy can disperse the biofilm generated by antibiotic-resistant bacteria.
It has been shown that the introduction of a small current of electricity to the liquid surrounding a biofilm, together with small amounts of antibiotic can reduce the level of antibiotic resistance to levels of non-biofilm bacteria. This is termed the bioelectric effect. The application of a small DC current on its own can cause a biofilm to detach from its surface. A study showed that the type of current used made no difference to the bioelectric effect.
In industry
Biofilms can also be harnessed for constructive purposes. For example, many sewage treatment plants include a secondary treatment stage in which waste water passes over biofilms grown on filters, which extract and digest organic compounds. In such biofilms, bacteria are mainly responsible for removal of organic matter (BOD), while protozoa and rotifers are mainly responsible for removal of suspended solids (SS), including pathogens and other microorganisms. Slow sand filters rely on biofilm development in the same way to filter surface water from lake, spring or river sources for drinking purposes. What is regarded as clean water is effectively a waste material to these microcellular organisms. Biofilms can help eliminate petroleum oil from contaminated oceans or marine systems. The oil is eliminated by the hydrocarbon-degrading activities of communities of hydrocarbonoclastic bacteria (HCB).
Biofilms are used in microbial fuel cells (MFCs) to generate electricity from a variety of starting materials, including complex organic waste and renewable biomass.
Biofilms are also relevant for the improvement of metal dissolution in bioleaching industry, and aggregation of microplastics pollutants for convenient removal from the environment.
Food industry
Biofilms have become problematic in several food industries due to the ability to form on plants and during industrial processes. Bacteria can survive long periods of time in water, animal manure, and soil, causing biofilm formation on plants or in processing equipment. The buildup of biofilms can affect the heat flow across a surface and increase surface corrosion and frictional resistance of fluids. These can lead to a loss of energy in a system and overall loss of products. Along with economic problems, biofilm formation on food poses a health risk to consumers due to the ability to make the food more resistant to disinfectants. From 1996 to 2010, the Centers for Disease Control and Prevention estimated 48 million foodborne illnesses per year. Biofilms have been connected to about 80% of bacterial infections in the United States.
In produce, microorganisms attach to the surfaces and biofilms develop internally. During the washing process, biofilms resist sanitization and allow bacteria to spread across the produce, especially via kitchen utensils. This problem is also found in ready-to-eat foods, because the foods go through limited cleaning procedures before consumption. Due to the perishability of dairy products and limitations in cleaning procedures resulting in the buildup of bacteria, dairy is susceptible to biofilm formation and contamination. The bacteria can spoil the products more readily, and contaminated products pose a health risk to consumers. One species of bacteria that can be found in various industries and is a major cause of foodborne disease is Salmonella. Large amounts of Salmonella contamination can be found in the poultry processing industry, as about 50% of Salmonella strains can produce biofilms on poultry farms. Salmonella increases the risk of foodborne illnesses when the poultry products are not cleaned and cooked correctly. Salmonella is also found in the seafood industry, where biofilms form from seafood-borne pathogens on the seafood itself as well as in water. Shrimp products are commonly affected by Salmonella because of unhygienic processing and handling techniques. The preparation practices of shrimp and other seafood products can allow for bacteria buildup on the products.
New forms of cleaning procedures are being tested to reduce biofilm formation in these processes, which will lead to safer and more productive food processing industries. These new cleaning procedures can also have a profound effect on the environment, often releasing toxic gases into groundwater reservoirs. As a response to the aggressive methods employed in controlling biofilm formation, a number of novel technologies and chemicals are under investigation that can prevent either the proliferation or the adhesion of biofilm-secreting microbes. Recently proposed biomolecules with marked anti-biofilm activity include a range of metabolites such as bacterial rhamnolipids and even plant- and animal-derived alkaloids.
In aquaculture
In shellfish and algal aquaculture, biofouling microbial species tend to block nets and cages and ultimately outcompete the farmed species for space and food. Bacterial biofilms start the colonization process by creating microenvironments that are more favorable for biofouling species. In the marine environment, biofilms can reduce the hydrodynamic efficiency of ships and propellers, lead to pipeline blockage and sensor malfunction, and increase the weight of appliances deployed in seawater. Numerous studies have shown that biofilm can be a reservoir for potentially pathogenic bacteria in freshwater aquaculture. Moreover, biofilms are important in establishing infections on fish. As mentioned previously, biofilms can be difficult to eliminate even when antibiotics or chemicals are used in high doses. The role that biofilms play as reservoirs of bacterial fish pathogens has not been explored in detail, but it certainly deserves study.
Eukaryotic
Along with bacteria, biofilms are often initiated and produced by eukaryotic microbes. The biofilms produced by eukaryotes are usually occupied by bacteria and other eukaryotes alike; however, the surface is cultivated and EPS is secreted initially by the eukaryote. Both fungi and microalgae are known to form biofilms in such a way. Biofilms of fungal origin are important aspects of human infection and fungal pathogenicity, as fungal infections are more resistant to antifungals.
In the environment, fungal biofilms are an area of ongoing research. One key area of research is fungal biofilms on plants. For example, in the soil, plant associated fungi including mycorrhiza have been shown to decompose organic matter and protect plants from bacterial pathogens.
Biofilms in aquatic environments are often founded by diatoms. The exact purpose of these biofilms is unknown; however, there is evidence that the EPS produced by diatoms helps them withstand both cold and salinity stress. These eukaryotes interact with a diverse range of other organisms within a region known as the phycosphere; most important among these are the bacteria associated with diatoms, as it has been shown that although diatoms excrete EPS, they only do so when interacting with certain bacterial species.
Horizontal gene transfer
Horizontal gene transfer is the lateral transfer of genetic material between cellular organisms. It happens frequently in prokaryotes, and less frequently in eukaryotes. In bacteria, horizontal gene transfer can occur through transformation (uptake of free-floating DNA in the environment), transduction (virus-mediated DNA uptake), or conjugation (transfer of DNA between pili structures of two adjacent bacteria). Recent studies have also uncovered other mechanisms, such as membrane vesicle transmission or gene transfer agents. Biofilms promote horizontal gene transfer in a variety of ways. Bacterial conjugation has been shown to accelerate biofilm formation in difficult environments due to the robust connections established by the conjugative pili. These connections can often foster cross-species transfer events due to the diverse heterogeneity of many biofilms. Additionally, biofilms are structurally confined by a polysaccharide matrix, providing the close spatial requirements for conjugation. Transformation is also frequently observed in biofilms. Bacterial autolysis is a key mechanism in biofilm structural regulation, providing an abundant source of competent DNA primed for transformative uptake. In some instances, inter-biofilm quorum sensing can enhance the competence of free-floating eDNA, further promoting transformation. Stx gene transfer through bacteriophage carriers has been witnessed within biofilms, which suggests that biofilms are also a suitable environment for transduction. Membrane vesicle HGT occurs when released membrane vesicles (containing genetic information) fuse with a recipient bacterium and release genetic material into the bacterium's cytoplasm. Recent research has revealed that membrane vesicle HGT can promote single-strain biofilm formation, yet the role membrane vesicle HGT plays in the formation of multistrain biofilms is still unknown. GTAs, or gene transfer agents, are phage-like particles produced by the host bacteria that contain random DNA fragments from the host bacterial genome. HGT within biofilms can confer antibiotic resistance or increased pathogenicity across the biofilm's population, promoting biofilm homeostasis.
Examples
Conjugative plasmids may encode biofilm-associated proteins, such as PrgA, PrgB, or PrgC, which promote cell adhesion (required for early biofilm formation). Genes encoding type III fimbriae are found in pOLA52 (a Klebsiella pneumoniae plasmid), which promote conjugative-pilus-dependent biofilm formation.
Transformation commonly occurs within biofilms. A phenomenon called fratricide can be seen among streptococcal species in which cell-wall degrading enzymes are released, lysing neighboring bacteria and releasing their DNA. This DNA can then be taken up by the surviving bacteria (transformation). Competence stimulating peptides may play an important role in biofilm formation among S. pneumoniae and S. mutans as well. Among V. cholerae, the competence pilus itself promotes cell aggregation through pilus-pilus interactions at the beginning of biofilm formation.
Phage invasion may play a role in biofilm life cycles, lysing bacteria and releasing their eDNA, which strengthens biofilm structures and can be taken up by neighboring bacteria in transformation. Biofilm destruction caused by the E. coli phage Rac and the P. aeruginosa prophage Pf4 causes detachment of cells from the biofilm. Detachment is a biofilm phenomenon which requires more study, but is hypothesized to help propagate the bacterial species that comprise the biofilm.
Membrane vesicle HGT has been witnessed occurring in marine environments, among Neisseria gonorrhoeae, Pseudomonas aeruginosa, Helicobacter pylori, and among many other bacterial species. Even though membrane vesicle HGT has been shown as a contributing factor in biofilm formation, research is still required to prove that membrane vesicle mediated HGT occurs within biofilms. Membrane vesicle HGT has also been shown to modulate phage-bacteria interactions in Bacillus subtilis SPP1 phage-resistant cells (lacking the SPP1 receptor protein). Upon exposure to vesicles containing receptors, transduction of pBT163 (a cat-encoding plasmid) occurs, resulting in the expression of the SPP1 receptor protein, opening the receptive bacteria to future phage infection.
Recent research has shown that the archaeal species H. volcanii has some biofilm phenotypes similar to bacterial biofilms such as differentiation and HGT, which required cell-cell contact and involved formation of cytosolic bridges and cellular fusion events.
Cultivation devices
There is a wide variety of biofilm cultivation devices to mimic natural or industrial environments. It is important to consider that the particular experimental platform for biofilm research determines what kind of biofilm is cultivated and the data that can be extracted. These devices can be grouped into the following:
microtiter plate (MTP) systems and MBEC Assay® [formerly the Calgary Biofilm Device (CBD)]
BioFilm Ring Test (BRT) or clinical Biofilm Ring Test (cBRT)
Robbins Device or modified Robbins Device (such as the MPMR-10PMMA or the Bio-inLine Biofilm Reactor)
Drip Flow Biofilm Reactor®
rotary devices (such as the CDC Biofilm Reactor®, the Rotating Disk Reactor, the Biofilm Annular Reactor, the Industrial Surfaces Biofilm Reactor, or the Constant Depth Film Fermenter)
flow chambers or flow cells (such as the Coupon Evaluation Flow Cell, Transmission Flow Cell, and Capillary Flow Cell from BioSurface Technologies)
microfluidic approaches, such as 3D-bacterial "biofilm-dispersal-then-recolonization" (BDR) microfluidic model
See also
References
Further reading
External links
A TED-ED animation on basic biofilm biology: The microbial jungles all over the place (and you) by Scott Chimileski and Roberto Kolter
Thickness analysis, organic and mineral proportion of biofilms in order to decide a treatment strategy
Biofilm Archive of Biofilm Research & News
"Why Am I Still Sick?" – The Movie, 2012: Documentary on Biofilms: The Silent Role of Biofilms in Chronic Disease
HD Video Interviews on biofilms, antibiotics, etc. with experts, youtube.com: ADRSupport/biofilm
Bacteriology
Biological matter
Environmental microbiology
Environmental soil science
Membrane biology
Microbiology terms | Biofilm | [
"Chemistry",
"Biology",
"Environmental_science"
] | 10,349 | [
"Membrane biology",
"Environmental soil science",
"Microbiology terms",
"Molecular biology",
"Environmental microbiology"
] |
43,948 | https://en.wikipedia.org/wiki/Star%20formation | Star formation is the process by which dense regions within molecular clouds in interstellar space, sometimes referred to as "stellar nurseries" or "star-forming regions", collapse and form stars. As a branch of astronomy, star formation includes the study of the interstellar medium (ISM) and giant molecular clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar objects as its immediate products. It is closely related to planet formation, another branch of astronomy. Star formation theory, as well as accounting for the formation of a single star, must also account for the statistics of binary stars and the initial mass function. Most stars do not form in isolation but as part of a group of stars referred as star clusters or stellar associations.
History
The first stars are believed to have formed approximately 12–13 billion years ago, following the Big Bang. Over time, successive generations of stars have fused hydrogen and helium into a series of heavier chemical elements.
Stellar nurseries
Interstellar clouds
Spiral galaxies like the Milky Way contain stars, stellar remnants, and a diffuse interstellar medium (ISM) of gas and dust. The interstellar medium consists of 10⁴ to 10⁶ particles per cm³, and is typically composed of roughly 70% hydrogen, 28% helium, and 1.5% heavier elements by mass. The trace amounts of heavier elements were and are produced within stars via stellar nucleosynthesis and ejected as the stars pass beyond the end of their main sequence lifetime. Higher density regions of the interstellar medium form clouds, or diffuse nebulae, where star formation takes place. In contrast to spiral galaxies, elliptical galaxies lose the cold component of their interstellar medium within roughly a billion years, which hinders the galaxy from forming diffuse nebulae except through mergers with other galaxies.
In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H2) form, so these nebulae are called molecular clouds. The Herschel Space Observatory has revealed that filaments, or elongated dense gas structures, are truly ubiquitous in molecular clouds and central to the star formation process. They fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed manner in which the filaments are fragmented. Observations of supercritical filaments have revealed quasi-periodic chains of dense cores with spacing comparable to the filament inner width, and embedded protostars with outflows.
Observations indicate that the coldest clouds tend to form low-mass stars, which are first observed via the infrared light they emit inside the clouds, and then as visible light when the clouds dissipate. Giant molecular clouds, which are generally warmer, produce stars of all masses. These giant molecular clouds have typical densities of 100 particles per cm³, diameters of , masses of up to 6 million solar masses, or six million times the mass of Earth's sun. The average interior temperature is .
About half the total mass of the Milky Way's galactic ISM is found in molecular clouds and the galaxy includes an estimated 6,000 molecular clouds, each with more than . The nebula nearest to the Sun where massive stars are being formed is the Orion Nebula, away. However, lower mass star formation is occurring about 400–450 light-years distant in the ρ Ophiuchi cloud complex.
A more compact site of star formation is the opaque clouds of dense gas and dust known as Bok globules, so named after the astronomer Bart Bok. These can form in association with collapsing molecular clouds or possibly independently. The Bok globules are typically up to a light-year across and contain a few solar masses. They can be observed as dark clouds silhouetted against bright emission nebulae or background stars. Over half the known Bok globules have been found to contain newly forming stars.
Cloud collapse
An interstellar cloud of gas will remain in hydrostatic equilibrium as long as the kinetic energy of the gas pressure is in balance with the potential energy of the internal gravitational force. Mathematically this is expressed using the virial theorem, which states that, to maintain equilibrium, the gravitational potential energy must equal twice the internal thermal energy. If a cloud is massive enough that the gas pressure is insufficient to support it, the cloud will undergo gravitational collapse. The mass above which a cloud will undergo such collapse is called the Jeans mass. The Jeans mass depends on the temperature and density of the cloud, but is typically thousands to tens of thousands of solar masses. During cloud collapse dozens to tens of thousands of stars form more or less simultaneously which is observable in so-called embedded clusters. The end product of a core collapse is an open cluster of stars.
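For readers who want the criterion in explicit form, the virial condition and the resulting Jeans mass can be sketched as follows (a standard textbook form; the exact numerical prefactor varies slightly between derivations):

\[ 2K + U = 0 \qquad\Longrightarrow\qquad M_J \simeq \left(\frac{5 k_B T}{G \mu m_H}\right)^{3/2} \left(\frac{3}{4\pi\rho}\right)^{1/2} \]

where \(T\) and \(\rho\) are the cloud's temperature and mean density, \(\mu\) the mean molecular weight, \(m_H\) the mass of a hydrogen atom, \(k_B\) Boltzmann's constant, and \(G\) the gravitational constant. A cloud with mass above \(M_J\) cannot be supported by its gas pressure and collapses; the formula makes explicit why cold, dense clouds are the ones that fragment into stars.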
In triggered star formation, one of several events might occur to compress a molecular cloud and initiate its gravitational collapse. Molecular clouds may collide with each other, or a nearby supernova explosion can be a trigger, sending shocked matter into the cloud at very high speeds. (The resulting new stars may themselves soon produce supernovae, producing self-propagating star formation.) Alternatively, galactic collisions can trigger massive starbursts of star formation as the gas clouds in each galaxy are compressed and agitated by tidal forces. The latter mechanism may be responsible for the formation of globular clusters.
A supermassive black hole at the core of a galaxy may serve to regulate the rate of star formation in a galactic nucleus. A black hole that is accreting infalling matter can become active, emitting a strong wind through a collimated relativistic jet. This can limit further star formation. Massive black holes ejecting radio-frequency-emitting particles at near-light speed can also block the formation of new stars in aging galaxies. However, the radio emissions around the jets may also trigger star formation. Likewise, a weaker jet may trigger star formation when it collides with a cloud.
As it collapses, a molecular cloud breaks into smaller and smaller pieces in a hierarchical manner, until the fragments reach stellar mass. In each of these fragments, the collapsing gas radiates away the energy gained by the release of gravitational potential energy. As the density increases, the fragments become opaque and are thus less efficient at radiating away their energy. This raises the temperature of the cloud and inhibits further fragmentation. The fragments now condense into rotating spheres of gas that serve as stellar embryos.
Complicating this picture of a collapsing cloud are the effects of turbulence, macroscopic flows, rotation, magnetic fields and the cloud geometry. Both rotation and magnetic fields can hinder the collapse of a cloud. Turbulence is instrumental in causing fragmentation of the cloud, and on the smallest scales it promotes collapse.
Protostar
A protostellar cloud will continue to collapse as long as the gravitational binding energy can be eliminated. This excess energy is primarily lost through radiation. However, the collapsing cloud will eventually become opaque to its own radiation, and the energy must be removed through some other means. The dust within the cloud becomes heated to temperatures of , and these particles radiate at wavelengths in the far infrared where the cloud is transparent. Thus the dust mediates the further collapse of the cloud.
During the collapse, the density of the cloud increases towards the center and thus the middle region becomes optically opaque first. This occurs when the density is about . A core region, called the first hydrostatic core, forms where the collapse is essentially halted. It continues to increase in temperature as determined by the virial theorem. The gas falling toward this opaque region collides with it and creates shock waves that further heat the core.
When the core temperature reaches about , the thermal energy dissociates the H2 molecules. This is followed by the ionization of the hydrogen and helium atoms. These processes absorb the energy of the contraction, allowing it to continue on timescales comparable to the period of collapse at free fall velocities. After the density of infalling material has reached about 10⁻⁸ g/cm³, that material is sufficiently transparent to allow energy radiated by the protostar to escape. The combination of convection within the protostar and radiation from its exterior allow the star to contract further. This continues until the gas is hot enough for the internal pressure to support the protostar against further gravitational collapse—a state called hydrostatic equilibrium. When this accretion phase is nearly complete, the resulting object is known as a protostar.
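The "period of collapse at free fall velocities" referred to above is set by the free-fall timescale of a pressureless uniform sphere, a standard result:

\[ t_{\mathrm{ff}} = \sqrt{\frac{3\pi}{32\, G\, \rho}} \]

where \(\rho\) is the initial mean density. For typical dense-core densities this evaluates to roughly \(10^5\) years, which is why protostellar collapse is brief compared with the star's subsequent evolution.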
Accretion of material onto the protostar continues partially from the newly formed circumstellar disc. When the density and temperature are high enough, deuterium fusion begins, and the outward pressure of the resultant radiation slows (but does not stop) the collapse. Material comprising the cloud continues to "rain" onto the protostar. In this stage bipolar jets are produced called Herbig–Haro objects. This is probably the means by which excess angular momentum of the infalling material is expelled, allowing the star to continue to form.
When the surrounding gas and dust envelope disperses and the accretion process stops, the star is considered a pre-main-sequence star (PMS star). The energy source of these objects is the Kelvin–Helmholtz mechanism (gravitational contraction), as opposed to hydrogen burning in main sequence stars. The PMS star follows a Hayashi track on the Hertzsprung–Russell (H–R) diagram. The contraction will proceed until the Hayashi limit is reached, and thereafter contraction will continue on a Kelvin–Helmholtz timescale with the temperature remaining stable. Stars with less than thereafter join the main sequence. For more massive PMS stars, at the end of the Hayashi track they will slowly collapse in near hydrostatic equilibrium, following the Henyey track.
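The Kelvin–Helmholtz timescale mentioned above, over which gravitational contraction can power a pre-main-sequence star, is commonly estimated as:

\[ t_{\mathrm{KH}} \approx \frac{G M^2}{R L} \]

where \(M\), \(R\), and \(L\) are the star's mass, radius, and luminosity. For solar values this gives roughly \(3\times 10^{7}\) years, far shorter than the Sun's roughly \(10^{10}\)-year main-sequence lifetime, which illustrates why contraction alone cannot power a star for long.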
Finally, hydrogen begins to fuse in the core of the star, and the rest of the enveloping material is cleared away. This ends the protostellar phase and begins the star's main sequence phase on the H–R diagram.
The stages of the process are well defined in stars with masses around or less. In high mass stars, the length of the star formation process is comparable to the other timescales of their evolution, much shorter, and the process is not so well defined. The later evolution of stars is studied in stellar evolution.
Observations
Key elements of star formation are only available by observing in wavelengths other than the optical. The protostellar stage of stellar existence is almost invariably hidden away deep inside dense clouds of gas and dust left over from the GMC. Often, these star-forming cocoons known as Bok globules, can be seen in silhouette against bright emission from surrounding gas. Early stages of a star's life can be seen in infrared light, which penetrates the dust more easily than visible light.
Observations from the Wide-field Infrared Survey Explorer (WISE) have thus been especially important for unveiling numerous galactic protostars and their parent star clusters. Examples of such embedded star clusters are FSR 1184, FSR 1190, Camargo 14, Camargo 74, Majaess 64, and Majaess 98.
The structure of the molecular cloud and the effects of the protostar can be observed in near-IR extinction maps (where the number of stars is counted per unit area and compared to a nearby zero-extinction area of sky), continuum dust emission, and rotational transitions of CO and other molecules; these last two are observed in the millimeter and submillimeter range. The radiation from the protostar and early star has to be observed in infrared astronomy wavelengths, as the extinction caused by the rest of the cloud in which the star is forming is usually too great to allow the star to be observed in the visual part of the spectrum. This presents considerable difficulties as the Earth's atmosphere is almost entirely opaque from 20μm to 850μm, with narrow windows at 200μm and 450μm. Even outside this range, atmospheric subtraction techniques must be used.
X-ray observations have proven useful for studying young stars, since X-ray emission from these objects is about 100–100,000 times stronger than X-ray emission from main-sequence stars. The earliest detections of X-rays from T Tauri stars were made by the Einstein X-ray Observatory. For low-mass stars X-rays are generated by the heating of the stellar corona through magnetic reconnection, while for high-mass O and early B-type stars X-rays are generated through supersonic shocks in the stellar winds. Photons in the soft X-ray energy range covered by the Chandra X-ray Observatory and XMM-Newton may penetrate the interstellar medium with only moderate absorption due to gas, making the X-ray a useful wavelength for seeing the stellar populations within molecular clouds. X-ray emission as evidence of stellar youth makes this band particularly useful for performing censuses of stars in star-forming regions, given that not all young stars have infrared excesses. X-ray observations have provided near-complete censuses of all stellar-mass objects in the Orion Nebula Cluster and Taurus Molecular Cloud.
The formation of individual stars can only be directly observed in the Milky Way Galaxy, but in distant galaxies star formation has been detected through its unique spectral signature.
Initial research indicates star-forming clumps start as giant, dense areas in turbulent gas-rich matter in young galaxies, live about 500 million years, and may migrate to the center of a galaxy, creating its central bulge.
On February 21, 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
In February 2018, astronomers reported, for the first time, a signal of the reionization epoch, an indirect detection of light from the earliest stars formed - about 180 million years after the Big Bang.
An article published on October 22, 2019, reported on the detection of 3MM-1, a massive star-forming galaxy about 12.5 billion light-years away that is obscured by clouds of dust. At a mass of about 10^10.8 solar masses, it showed a star formation rate about 100 times as high as in the Milky Way.
Notable pathfinder objects
MWC 349 was discovered in 1978, and is estimated to be only 1,000 years old.
VLA 1623 – The first exemplar Class 0 protostar, a type of embedded protostar that has yet to accrete the majority of its mass. Found in 1993, it is possibly younger than 10,000 years.
L1014 – An extremely faint embedded object representative of a new class of sources that are only now being detected with the newest telescopes. Their status is still undetermined, they could be the youngest low-mass Class 0 protostars yet seen or even very low-mass evolved objects (like brown dwarfs or even rogue planets).
GCIRS 8* – The youngest known main sequence star in the Galactic Center region, discovered in August 2006. It is estimated to be 3.5 million years old.
Low mass and high mass star formation
Stars of different masses are thought to form by slightly different mechanisms. The theory of low-mass star formation, which is well-supported by observation, suggests that low-mass stars form by the gravitational collapse of rotating density enhancements within molecular clouds. As described above, the collapse of a rotating cloud of gas and dust leads to the formation of an accretion disk through which matter is channeled onto a central protostar. For stars with masses higher than about , however, the mechanism of star formation is not well understood.
Massive stars emit copious quantities of radiation which pushes against infalling material. In the past, it was thought that this radiation pressure might be substantial enough to halt accretion onto the massive protostar and prevent the formation of stars with masses more than a few tens of solar masses. Recent theoretical work has shown that the production of a jet and outflow clears a cavity through which much of the radiation from a massive protostar can escape without hindering accretion through the disk and onto the protostar. Present thinking is that massive stars may therefore be able to form by a mechanism similar to that by which low mass stars form.
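The competition between radiation pressure and gravity described above can be made quantitative with the Eddington luminosity, the luminosity at which radiation force on ionized gas balances gravity (quoted here as an order-of-magnitude sketch; dust opacity, which dominates in accreting protostellar envelopes, lowers the effective limit considerably):

\[ L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T} \approx 3.2\times 10^{4} \left(\frac{M}{M_\odot}\right) L_\odot \]

where \(m_p\) is the proton mass and \(\sigma_T\) the Thomson scattering cross-section. A massive protostar whose accretion luminosity approaches this limit can in principle halt spherical infall, which is why disk accretion and outflow cavities are invoked to let massive stars keep growing.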
There is mounting evidence that at least some massive protostars are indeed surrounded by accretion disks. Disk accretion in high-mass protostars, similar to their low-mass counterparts, is expected to exhibit bursts of episodic accretion as a result of gravitational instability leading to clumpy and discontinuous accretion rates. Recent evidence of accretion bursts in high-mass protostars has indeed been confirmed observationally. Several other theories of massive star formation remain to be tested observationally. Of these, perhaps the most prominent is the theory of competitive accretion, which suggests that massive protostars are "seeded" by low-mass protostars which compete with other protostars to draw in matter from the entire parent molecular cloud, instead of simply from a small local region.
Another theory of massive star formation suggests that massive stars may form by the coalescence of two or more stars of lower mass.
Filamentary nature of star formation
Recent studies have emphasized the role of filamentary structures in molecular clouds as the initial conditions for star formation. Findings from the Herschel Space Observatory highlight the ubiquitous nature of these filaments in the cold interstellar medium (ISM). The spatial relationship between cores and filaments indicates that the majority of prestellar cores are located within 0.1 pc of supercritical filaments. This supports the hypothesis that filamentary structures act as pathways for the accumulation of gas and dust, leading to core formation.
Both the core mass function (CMF) and filament line mass function (FLMF) observed in the California GMC follow power-law distributions at the high-mass end, consistent with the Salpeter initial mass function (IMF). Current results strongly support the existence of a connection between the FLMF and the CMF/IMF, demonstrating that this connection holds at the level of an individual cloud, specifically the California GMC. The FLMF presented is a distribution of local line masses for a complete, homogeneous sample of filaments within the same cloud. It is the local line mass of a filament that defines its ability to fragment at a particular location along its spine, not the average line mass of the filament. This connection is more direct and provides tighter constraints on the origin of the CMF/IMF.
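For reference, the Salpeter initial mass function cited here is the classic power law (valid roughly above one solar mass):

\[ \frac{dN}{dM} \propto M^{-\alpha}, \qquad \alpha \approx 2.35 \]

so the observation that the filament line mass function follows a similar high-mass slope is what links filament fragmentation to the CMF/IMF.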
See also
References
Stellar astronomy
Concepts in astronomy
Concepts in stellar astronomy | Star formation | [
"Physics",
"Astronomy"
] | 3,899 | [
"Concepts in astrophysics",
"Concepts in astronomy",
"Concepts in stellar astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
43,950 | https://en.wikipedia.org/wiki/Interstate%20Highway%20System | The Dwight D. Eisenhower National System of Interstate and Defense Highways, commonly known as the Interstate Highway System, or the Eisenhower Interstate System, is a network of controlled-access highways that forms part of the National Highway System in the United States. The system extends throughout the contiguous United States and has routes in Hawaii, Alaska, and Puerto Rico.
In the 20th century, the United States Congress began funding roadways through the Federal Aid Road Act of 1916, and started an effort to construct a national road grid with the passage of the Federal Aid Highway Act of 1921. In 1926, the United States Numbered Highway System was established, creating the first national road numbering system for cross-country travel. The roads were funded and maintained by U.S. states, and there were few national standards for road design. United States Numbered Highways ranged from two-lane country roads to multi-lane freeways. After Dwight D. Eisenhower became president in 1953, his administration developed a proposal for an interstate highway system, eventually resulting in the enactment of the Federal-Aid Highway Act of 1956.
Unlike the earlier United States Numbered Highway System, the interstates were designed to be all freeways, with nationally unified standards for construction and signage. While some older freeways were adopted into the system, most of the routes were completely new. In dense urban areas, the choice of routing destroyed many well-established neighborhoods, often intentionally as part of a program of "urban renewal". In the two decades following the 1956 Highway Act, the construction of the freeways displaced one million people, and as a result of the many freeway revolts during this era, several planned Interstates were abandoned or re-routed to avoid urban cores.
Construction of the original Interstate Highway System was proclaimed complete in 1992, despite deviations from the original 1956 plan and several stretches that did not fully conform with federal standards. The construction of the Interstate Highway System cost approximately $114 billion. The system has continued to expand and grow as additional federal funding has provided for new routes to be added, and many future Interstate Highways are currently either being planned or under construction.
Though heavily funded by the federal government, Interstate Highways are owned by the state in which they were built. With few exceptions, all Interstates must meet specific standards, such as having controlled access, physical barriers or median strips between lanes of oncoming traffic, breakdown lanes, avoiding at-grade intersections, no traffic lights, and complying with federal traffic sign specifications. Interstate Highways use a numbering scheme in which primary Interstates are assigned one- or two-digit numbers, and shorter routes which branch off from longer ones are assigned three-digit numbers where the last two digits match the parent route. The Interstate Highway System is partially financed through the Highway Trust Fund, which itself is funded by a combination of a federal fuel tax and transfers from the Treasury's general fund. Though federal legislation initially banned the collection of tolls, some Interstate routes are toll roads, either because they were grandfathered into the system or because subsequent legislation has allowed for tolling of Interstates in some cases.
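The numbering convention described above lends itself to a short illustration. The following Python sketch is purely illustrative of the conventional rule (the function name and messages are hypothetical, not part of any official tooling, and real signed routes have historical exceptions):

```python
def classify_interstate(number: int) -> str:
    """Classify an Interstate route number under the conventional scheme.

    Primary routes carry one- or two-digit numbers (1-99); auxiliary
    routes carry three-digit numbers whose last two digits name the
    parent primary route.
    """
    if 1 <= number <= 99:
        return f"I-{number} is a primary route"
    if 100 <= number <= 999:
        parent = number % 100  # last two digits identify the parent route
        if parent == 0:
            raise ValueError("the last two digits must name a primary route")
        return f"I-{number} is an auxiliary route of I-{parent}"
    raise ValueError("Interstate numbers have one to three digits")

# Example: I-476 branches from primary route I-76.
print(classify_interstate(476))  # I-476 is an auxiliary route of I-76
print(classify_interstate(80))   # I-80 is a primary route
```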
About one quarter of all vehicle miles driven in the country use the Interstate Highway System, which has a total length of . In 2022 and 2023, the number of fatalities on the Interstate Highway System amounted to more than 5,000 people annually, with nearly 5,600 fatalities in 2022.
History
Planning
The United States government's efforts to construct a national network of highways began on an ad hoc basis with the passage of the Federal Aid Road Act of 1916, which provided $75 million over a five-year period for matching funds to the states for the construction and improvement of highways. The nation's revenue needs associated with World War I prevented any significant implementation of this policy, which expired in 1921.
In December 1918, E. J. Mehren, a civil engineer and the editor of Engineering News-Record, presented his "A Suggested National Highway Policy and Plan" during a gathering of the State Highway Officials and Highway Industries Association at the Congress Hotel in Chicago. In the plan, Mehren proposed a system, consisting of five east–west routes and 10 north–south routes. The system would include two percent of all roads and would pass through every state at a cost of , providing commercial as well as military transport benefits.
In 1919, the US Army sent an expedition across the US to determine the difficulties that military vehicles would have on a cross-country trip. Leaving from the Ellipse near the White House on July 7, the Motor Transport Corps convoy needed 62 days to drive on the Lincoln Highway to the Presidio of San Francisco along the Golden Gate. The convoy suffered many setbacks and problems on the route, such as poor-quality bridges, broken crankshafts, and engines clogged with desert sand.
Dwight Eisenhower, then a 28-year-old brevet lieutenant colonel, accompanied the trip "through darkest America with truck and tank," as he later described it. Some roads in the West were a "succession of dust, ruts, pits, and holes."
As the landmark 1916 law expired, new legislation was passed—the Federal Aid Highway Act of 1921 (Phipps Act). This new road construction initiative once again provided for federal matching funds for road construction and improvement, $75 million allocated annually. Moreover, this new legislation for the first time sought to target these funds to the construction of a national road grid of interconnected "primary highways", setting up cooperation among the various state highway planning boards.
The Bureau of Public Roads asked the Army to provide a list of roads that it considered necessary for national defense. In 1922, General John J. Pershing, former head of the American Expeditionary Force in Europe during the war, complied by submitting a detailed network of interconnected primary highways—the so-called Pershing Map.
A boom in road construction followed throughout the decade of the 1920s, with such projects as the New York parkway system constructed as part of a new national highway system. As automobile traffic increased, planners saw a need for such an interconnected national system to supplement the existing, largely non-freeway, United States Numbered Highways system. By the late 1930s, planning had expanded to a system of new superhighways.
In 1938, President Franklin D. Roosevelt gave Thomas MacDonald, chief at the Bureau of Public Roads, a hand-drawn map of the United States marked with eight superhighway corridors for study. In 1939, Bureau of Public Roads Division of Information chief Herbert S. Fairbank wrote a report called Toll Roads and Free Roads, "the first formal description of what became the Interstate Highway System" and, in 1944, the similarly themed Interregional Highways.
Federal Aid Highway Act of 1956
The Interstate Highway System gained a champion in President Dwight D. Eisenhower, who was influenced by his experiences as a young Army officer crossing the country in the 1919 Motor Transport Corps convoy that drove in part on the Lincoln Highway, the first road across America. He recalled that, "The old convoy had started me thinking about good two-lane highways... the wisdom of broader ribbons across our land." Eisenhower also gained an appreciation of the Reichsautobahn system, the first "national" implementation of modern Germany's Autobahn network, as a necessary component of a national defense system while he was serving as Supreme Commander of Allied Forces in Europe during World War II. In 1954, Eisenhower appointed General Lucius D. Clay to head a committee charged with proposing an interstate highway system plan. Summing up motivations for the construction of such a system, Clay stated,
Clay's committee proposed a 10-year, $100 billion program , which would build of divided highways linking all American cities with a population of greater than 50,000. Eisenhower initially preferred a system consisting of toll roads, but Clay convinced Eisenhower that toll roads were not feasible outside of the highly populated coastal regions. In February 1955, Eisenhower forwarded Clay's proposal to Congress. The bill quickly won approval in the Senate, but House Democrats objected to the use of public bonds as the means to finance construction. Eisenhower and the House Democrats agreed to instead finance the system through the Highway Trust Fund, which itself would be funded by a gasoline tax. In June 1956, Eisenhower signed the Federal Aid Highway Act of 1956 into law. Under the act, the federal government would pay for 90 percent of the cost of construction of Interstate Highways. Each Interstate Highway was required to be a freeway with at least four lanes and no at-grade crossings.
The publication in 1955 of the General Location of National System of Interstate Highways, informally known as the Yellow Book, mapped out what became the Interstate Highway System. Assisting in the planning was Charles Erwin Wilson, who was still head of General Motors when President Eisenhower selected him as Secretary of Defense in January 1953.
Construction
Some sections of highways that became part of the Interstate Highway System actually began construction earlier.
Three states have claimed the title of first Interstate Highway. Missouri claims that the first three contracts under the new program were signed in Missouri on August 2, 1956. The first contract signed was for upgrading a section of US Route 66 to what is now designated Interstate 44. On August 13, 1956, work began on US 40 (now I-70) in St. Charles County.
Kansas claims that it was the first to start paving after the act was signed. Preliminary construction had taken place before the act was signed, and paving started September 26, 1956. The state marked its portion of I-70 as the first project in the United States completed under the provisions of the new Federal-Aid Highway Act of 1956.
The Pennsylvania Turnpike could also be considered one of the first Interstate Highways, and is nicknamed "Grandfather of the Interstate System". On October 1, 1940, of the highway now designated I‑70 and I‑76 opened between Irwin and Carlisle. The Commonwealth of Pennsylvania refers to the turnpike as the Granddaddy of the Pikes, a reference to turnpikes.
Milestones in the construction of the Interstate Highway System include:
October 17, 1974: Nebraska becomes the first state to complete all of its mainline Interstate Highways with the dedication of its final piece of I-80.
October 12, 1979: The final section of the Canada to Mexico freeway Interstate 5 is dedicated near Stockton, California. Representatives of the two neighboring nations attended the dedication to commemorate the first contiguous freeway connecting the North American countries.
August 22, 1986: The final section of the coast-to-coast I-80 (San Francisco, California, to Teaneck, New Jersey) is dedicated on the western edge of Salt Lake City, Utah, making I-80 the world's first contiguous freeway to span from the Atlantic to Pacific Ocean and, at the time, the longest contiguous freeway in the world. The section spanned from Redwood Road to just west of the Salt Lake City International Airport. At the dedication it was noted that coincidentally this was only from Promontory Summit, where a similar feat was accomplished nearly 120 years prior, the driving of the golden spike of the United States' First transcontinental railroad.
August 10, 1990: The final section of coast-to-coast I-10 (Santa Monica, California, to Jacksonville, Florida) is dedicated, the Papago Freeway Tunnel under downtown Phoenix, Arizona. Completion of this section was delayed due to a freeway revolt that forced the cancellation of an originally planned elevated routing.
September 12, 1991: I-90 becomes the final coast-to-coast Interstate Highway (Seattle, Washington to Boston, Massachusetts) to be completed with the dedication of an elevated viaduct bypassing Wallace, Idaho, which opened a week earlier. This section was delayed after residents forced the cancellation of the originally planned at-grade alignment that would have demolished much of downtown Wallace. The residents accomplished this feat by arranging for most of the downtown area to be declared a historic district and listed on the National Register of Historic Places; this succeeded in blocking the path of the original alignment. Two days after the dedication residents held a mock funeral celebrating the removal of the last stoplight on a transcontinental Interstate Highway.
October 14, 1992: The original Interstate Highway System is proclaimed to be complete with the opening of I-70 through Glenwood Canyon in Colorado. This section is considered an engineering marvel with a span featuring 40 bridges and numerous tunnels and is one of the most expensive rural highways per mile built in the United States.
The initial cost estimate for the system was $25 billion over 12 years; it ended up costing $114 billion (equivalent to $425 billion in 2006) and took 35 years.
1992–present
Discontinuities
The system was proclaimed complete in 1992, but two of the original Interstates—I-95 and I-70—were not continuous: both of these discontinuities were due to local opposition, which blocked efforts to build the necessary connections to fully complete the system. I-95 was made a continuous freeway in 2018, and thus I-70 remains the only original Interstate with a discontinuity.
I-95 was discontinuous in New Jersey because of the cancellation of the Somerset Freeway. This situation was remedied when construction of the Pennsylvania Turnpike/Interstate 95 Interchange Project started in 2010 and partially opened on September 22, 2018, which was sufficient to fill the gap.
However, I-70 remains discontinuous in Pennsylvania because of the lack of a direct interchange with the Pennsylvania Turnpike at the eastern end of the concurrency near Breezewood. Traveling in either direction, I-70 traffic must exit the freeway and use a short stretch of US 30 (which includes a number of roadside services) to rejoin I-70. The interchange was not originally built because of a legacy federal funding rule, since relaxed, which restricted the use of federal funds to improve roads financed with tolls. Solutions have been proposed to eliminate the discontinuity, but they have been blocked by local opposition fearing a loss of business.
Expansions and removals
The Interstate Highway System has been expanded numerous times. The expansions have both created new designations and extended existing designations. For example, I-49, added to the system in the 1980s as a freeway in Louisiana, was designated as an expansion corridor, and FHWA approved the expanded route north from Lafayette, Louisiana, to Kansas City, Missouri. The freeway exists today as separate completed segments, with segments under construction or in the planning phase between them.
In 1966, the FHWA designated the entire Interstate Highway System as part of the larger Pan-American Highway System, and at least two proposed Interstate expansions were initiated to help trade with Canada and Mexico spurred by the North American Free Trade Agreement (NAFTA). Long-term plans for I-69, which currently exists in several separate completed segments (the largest of which are in Indiana and Texas), is to have the highway route extend from Tamaulipas, Mexico to Ontario, Canada. The planned I-11 will then bridge the Interstate gap between Phoenix, Arizona and Las Vegas, Nevada, and thus form part of the CANAMEX Corridor (along with I-19, and portions of I-10 and I-15) between Sonora, Mexico and Alberta, Canada.
Opposition, cancellations, and removals
Political opposition from residents canceled many freeway projects around the United States, including:
I-40 in Memphis, Tennessee was rerouted and part of the original I-40 is still in use as the eastern half of Sam Cooper Boulevard.
I-66 in the District of Columbia was abandoned in 1977.
I-69 was to continue past its terminus at Interstate 465 to intersect with Interstate 70 and Interstate 65 at the north split, northeast of downtown Indianapolis. Though local opposition led to the cancellation of this project in 1981, bridges and ramps for the connection into the "north split" remained until it was rebuilt in 2023.
I-70 in Baltimore was supposed to run from the Baltimore Beltway (Interstate 695), which surrounds the city, to terminate at I-95, the East Coast thoroughfare that runs through Maryland and Baltimore on a diagonal course, northeast to southwest; the connection was cancelled in the mid-1970s due to its routing through Gwynns Falls-Leakin Park, a wilderness urban park reserve following the Gwynns Falls stream through West Baltimore. This included the cancellation of I-170, partially built and in use as US 40, and nicknamed the Highway to Nowhere. The freeway stub of I-70 inside the Beltway was renumbered MD 570 in 2014, but continues to bear I-70 signs.
I-78 in New York City was canceled along with portions of I-278, I-478, and I-878. I-878 was supposed to be part of I-78, and I-478 and I-278 were to be spur routes.
I-80 in San Francisco was originally planned to travel past the city's Civic Center along the Panhandle Freeway into Golden Gate Park and terminate at the original alignment of I-280/SR 1. The city canceled this and several other freeways in 1958. Similarly, more than 20 years later, Sacramento canceled plans to upgrade I-80 to Interstate standards and rerouted the freeway onto what was then I-880, which traveled north of Downtown Sacramento.
I-83, southern extension of the Jones Falls Expressway (southern I-83) in Baltimore was supposed to run along the waterfront of the Patapsco River / Baltimore Harbor to connect to I-95, bisecting historic neighborhoods of Fells Point and Canton, but the connection was never built.
I-84 in Connecticut was once planned to fork east of Hartford, into an I-86 to Sturbridge, Massachusetts, and I-84 to Providence, Rhode Island. The plan was cancelled, primarily because of anticipated impact on a major Rhode Island reservoir. The I-84 designation was restored to the highway to Sturbridge, and other numbering was used for completed eastern sections of what had been planned as part of I-84.
I-95 through the District of Columbia into Maryland was abandoned in 1977. Instead it was rerouted to I-495 (Capital Beltway). The completed section is now I-395.
I-95 was originally planned to run up the Southwest Expressway and meet I-93, where the two highways would travel along the Central Artery through downtown Boston, but was rerouted onto the Route 128 beltway due to widespread opposition. This revolt also included the cancellation of the Inner Belt, connecting I-93 to I-90 and a cancelled section of the Northwest Expressway which would have carried US 3 inside the Route 128 beltway, meeting with Route 2 in Cambridge.
In addition to cancellations, removals of freeways are planned:
I-81 in Syracuse, New York, which bisects the city's 15th Ward neighborhood, is planned to be torn down and replaced with a boulevard that accommodates pedestrians. Freeway traffic would be rerouted along I-481.
Standards
The American Association of State Highway and Transportation Officials (AASHTO) has defined a set of standards that all new Interstates must meet unless a waiver from the Federal Highway Administration (FHWA) is obtained. One almost absolute standard is the controlled access nature of the roads. With few exceptions, traffic lights (and cross traffic in general) are limited to toll booths and ramp meters (metered flow control for lane merging during rush hour).
Speed limits
Being freeways, Interstate Highways usually have the highest speed limits in a given area. Speed limits are determined by individual states. From 1975 to 1986, the maximum speed limit on any highway in the United States was , in accordance with federal law.
Typically, lower limits are established in Northeastern and coastal states, while higher speed limits are established in inland states west of the Mississippi River. For example, the maximum speed limit is in northern Maine, varies between from southern Maine to New Jersey, and is in New York City and the District of Columbia. Currently, rural speed limits elsewhere generally range from . Several portions of various highways such as I-10 and I-20 in rural western Texas, I-80 in Nevada between Fernley and Winnemucca (except around Lovelock) and portions of I-15, I-70, I-80, and I-84 in Utah have a speed limit of . Other Interstates in Idaho, Montana, Oklahoma, South Dakota and Wyoming also have the same high speed limits.
In some areas, speed limits on Interstates can be significantly lower in areas where they traverse significantly hazardous areas. The maximum speed limit on I-90 is in downtown Cleveland because of two sharp curves with a suggested limit of in a heavily congested area; I-70 through Wheeling, West Virginia, has a maximum speed limit of through the Wheeling Tunnel and most of downtown Wheeling; and I-68 has a maximum speed limit of through Cumberland, Maryland, because of multiple hazards including sharp curves and narrow lanes through the city. In some locations, low speed limits are the result of lawsuits and resident demands; after holding up the completion of I-35E in St. Paul, Minnesota, for nearly 30 years in the courts, residents along the stretch of the freeway from the southern city limit to downtown successfully lobbied for a speed limit in addition to a prohibition on any vehicle weighing more than gross vehicle weight. I-93 in Franconia Notch State Park in northern New Hampshire has a speed limit of because it is a parkway that consists of only one lane per side of the highway. On the other hand, Interstates 15, 80, 84, and 215 in Utah have speed limits as high as within the Wasatch Front, Cedar City, and St. George areas, and I-25 in New Mexico within the Santa Fe and Las Vegas areas along with I-20 in Texas along Odessa and Midland and I-29 in North Dakota along the Grand Forks area have higher speed limits of .
Other uses
As one of the components of the National Highway System, Interstate Highways improve the mobility of military troops to and from airports, seaports, rail terminals, and other military bases. Interstate Highways also connect to other roads that are a part of the Strategic Highway Network, a system of roads identified as critical to the US Department of Defense.
The system has also been used to facilitate evacuations in the face of hurricanes and other natural disasters. An option for maximizing traffic throughput on a highway is to reverse the flow of traffic on one side of a divider so that all lanes become outbound lanes. This procedure, known as contraflow lane reversal, has been employed several times for hurricane evacuations. After public outcry regarding the inefficiency of evacuating from southern Louisiana prior to Hurricane Georges' landfall in September 1998, government officials looked towards contraflow to improve evacuation times. In Savannah, Georgia, and Charleston, South Carolina, in 1999, lanes of I-16 and I-26 were used in a contraflow configuration in anticipation of Hurricane Floyd with mixed results.
In 2004, contraflow was employed ahead of Hurricane Charley in the Tampa, Florida area and on the Gulf Coast before the landfall of Hurricane Ivan; however, evacuation times there were no better than previous evacuation operations. Engineers began to apply lessons learned from the analysis of prior contraflow operations, including limiting exits, removing troopers (to keep traffic flowing instead of having drivers stop for directions), and improving the dissemination of public information. As a result, the 2005 evacuation of New Orleans, Louisiana, prior to Hurricane Katrina ran much more smoothly.
According to urban legend, early regulations required that one out of every five miles of the Interstate Highway System must be built straight and flat, so as to be usable by aircraft during times of war. There is no evidence of this rule being included in any Interstate legislation. It is also commonly believed the Interstate Highway System was built for the sole purpose of evacuating cities in the event of nuclear warfare. While military motivations were present, the primary motivations were civilian.
Numbering system
Primary (one- and two-digit) Interstates
The numbering scheme for the Interstate Highway System was developed in 1957 by the American Association of State Highway and Transportation Officials (AASHTO). The association's present numbering policy dates back to August 10, 1973. Within the contiguous United States, primary Interstates—also called main line Interstates or two-digit Interstates—are assigned numbers less than 100.
While numerous exceptions do exist, there is a general scheme for numbering Interstates. Primary Interstates are assigned one- or two-digit numbers, while shorter routes (such as spurs, loops, and short connecting roads) are assigned three-digit numbers where the last two digits match the parent route (thus, I-294 is a loop that connects at both ends to I-94, while I-787 is a short spur route attached to I-87). In the numbering scheme for the primary routes, east–west highways are assigned even numbers and north–south highways are assigned odd numbers. Odd route numbers increase from west to east, and even-numbered routes increase from south to north (to avoid confusion with the US Highways, which increase from east to west and north to south). This numbering system usually holds true even if the local direction of the route does not match the compass directions. Numbers divisible by five are intended to be major arteries among the primary routes, carrying traffic long distances. Primary north–south Interstates increase in number from I-5 between Canada and Mexico along the West Coast to I‑95 between Canada and Miami, Florida along the East Coast. Major west–east arterial Interstates increase in number from I-10 between Santa Monica, California, and Jacksonville, Florida, to I-90 between Seattle, Washington, and Boston, Massachusetts, with two exceptions. There are no I-50 and I-60, as routes with those numbers would likely pass through states that currently have US Highways with the same numbers, which is generally disallowed under highway administration guidelines.
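As a rough illustration of the parity and divisible-by-five conventions described above, the sketch below classifies a primary route number. It is a simplification for illustration only and ignores the exceptions noted in this section (for example, the unused I-50 and I-60 numbers).

```python
def describe_primary_interstate(number: int) -> str:
    """Illustrative classification of a one- or two-digit Interstate number
    using the general numbering conventions (real routes include exceptions)."""
    if not 1 <= number <= 99:
        raise ValueError("Primary Interstates use numbers below 100")
    orientation = "east-west" if number % 2 == 0 else "north-south"
    major = " major arterial" if number % 5 == 0 else ""
    return f"I-{number}: {orientation}{major} route"

# I-90 is an even, divisible-by-five east-west major artery; I-81 is an odd north-south route.
print(describe_primary_interstate(90))   # I-90: east-west major arterial route
print(describe_primary_interstate(81))   # I-81: north-south route
```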
Several two-digit numbers are shared between unconnected road segments at opposite ends of the country for various reasons. Some such highways are incomplete Interstates (such as I-69 and I-74) and some just happen to share route designations (such as I-76, I-84, I‑86, I-87, and I-88). Some of these were due to a change in the numbering system as a result of a new policy adopted in 1973. Previously, letter-suffixed numbers were used for long spurs off primary routes; for example, western I‑84 was I‑80N, as it went north from I‑80. The new policy stated, "No new divided numbers (such as I-35W and I-35E, etc.) shall be adopted." The new policy also recommended that existing divided numbers be eliminated as quickly as possible; however, an I-35W and I-35E still exist in the Dallas–Fort Worth metroplex in Texas, and an I-35W and I-35E that run through Minneapolis and Saint Paul, Minnesota, still exist. Additionally, due to Congressional requirements, three sections of I-69 in southern Texas will be divided into I-69W, I-69E, and I-69C (for Central).
AASHTO policy allows dual numbering to provide continuity between major control points. This is referred to as a concurrency or overlap. For example, I‑75 and I‑85 share the same roadway in Atlanta; this section, called the Downtown Connector, is labeled both I‑75 and I‑85. Concurrencies between Interstate and US Highway numbers are also allowed in accordance with AASHTO policy, as long as the length of the concurrency is reasonable. In rare instances, two highway designations sharing the same roadway are signed as traveling in opposite directions; one such wrong-way concurrency is found between Wytheville and Fort Chiswell, Virginia, where I‑81 north and I‑77 south are equivalent (with that section of road traveling almost due east), as are I‑81 south and I‑77 north.
Auxiliary (three-digit) Interstates
Auxiliary Interstate Highways are circumferential, radial, or spur highways that principally serve urban areas. These types of Interstate Highways are given three-digit route numbers, which consist of a single digit prefixed to the two-digit number of its parent Interstate Highway. Spur routes deviate from their parent and do not return; these are given an odd first digit. Circumferential and radial loop routes return to the parent, and are given an even first digit. Unlike primary Interstates, three-digit Interstates are signed as either east–west or north–south, depending on the general orientation of the route, without regard to the route number. For instance, I-190 in Massachusetts is labeled north–south, while I-195 in New Jersey is labeled east–west. Some looped Interstate routes use inner–outer directions instead of compass directions, when the use of compass directions would create ambiguity. Due to the large number of these routes, auxiliary route numbers may be repeated in different states along the mainline. Some auxiliary highways do not follow these guidelines, however.
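The three-digit convention can be sketched the same way. The function below is illustrative only: it applies the odd-spur/even-loop rule and recovers the parent route from the last two digits, and it does not account for the routes noted above that deviate from the guidelines.

```python
def describe_auxiliary_interstate(number: int) -> str:
    """Sketch of the general three-digit numbering convention; some real
    auxiliary routes do not follow these guidelines."""
    if not 100 <= number <= 999:
        raise ValueError("Auxiliary Interstates use three-digit numbers")
    first_digit = number // 100
    parent = number % 100
    kind = "spur" if first_digit % 2 == 1 else "loop/circumferential"
    return f"I-{number}: {kind} route of parent I-{parent}"

print(describe_auxiliary_interstate(294))  # I-294: loop/circumferential route of parent I-94
print(describe_auxiliary_interstate(787))  # I-787: spur route of parent I-87
```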
Alaska, Hawaii, and Puerto Rico
The Interstate Highway System also extends to Alaska, Hawaii, and Puerto Rico, even though they have no direct land connections to any other states or territories. However, their residents still pay federal fuel and tire taxes.
The Interstates in Hawaii, all located on the most populous island of Oahu, carry the prefix H. There are three one-digit routes in the state (H-1, H-2, and H-3) and one auxiliary route (H-201). These Interstates connect several military and naval bases together, as well as the important communities spread across Oahu, and especially within the urban core of Honolulu.
Both Alaska and Puerto Rico also have public highways that receive 90 percent of their funding from the Interstate Highway program. The Interstates of Alaska and Puerto Rico are numbered sequentially in order of funding without regard to the rules on odd and even numbers. They also carry the prefixes A and PR, respectively. However, these highways are signed according to their local designations, not their Interstate Highway numbers. Furthermore, these routes were neither planned according to nor constructed to the official Interstate Highway standards.
Mile markers and exit numbers
On one- or two-digit Interstates, the mile marker numbering almost always begins at the southern or western state line. If an Interstate originates within a state, the numbering begins from the location where the road begins in the south or west. As with all guidelines for Interstate routes, however, numerous exceptions exist.
Three-digit Interstates with an even first number that form a complete circumferential (circle) bypass around a city feature mile markers that are numbered in a clockwise direction, beginning just west of an Interstate that bisects the circumferential route near a south polar location. In other words, mile marker 1 on I-465, a route around Indianapolis, is just west of its junction with I-65 on the south side of Indianapolis (on the south leg of I-465), and mile marker 53 is just east of this same junction. An exception is I-495 in the Washington metropolitan area, with mileposts increasing counterclockwise because part of that road is also part of I-95.
Most Interstate Highways use distance-based exit numbers so that the exit number is the same as the nearest mile marker. If multiple exits occur within the same mile, letter suffixes may be appended to the numbers in alphabetical order starting with A. A small number of Interstate Highways (mostly in the Northeastern United States) use sequential-based exit numbering schemes (where each exit is numbered in order starting with 1, without regard for the mile markers on the road). One Interstate Highway, I-19 in Arizona, is signed with kilometer-based exit numbers. In the state of New York, most Interstate Highways use sequential exit numbering, with some exceptions.
Business routes
AASHTO defines a category of special routes separate from primary and auxiliary Interstate designations. These routes do not have to comply to Interstate construction or limited-access standards but are routes that may be identified and approved by the association. The same route marking policy applies to both US Numbered Highways and Interstate Highways; however, business route designations are sometimes used for Interstate Highways. Known as Business Loops and Business Spurs, these routes principally travel through the corporate limits of a city, passing through the central business district when the regular route is directed around the city. They also use a green shield instead of the red and blue shield. An example would be Business Loop Interstate 75 at Pontiac, Michigan, which follows surface roads into and through downtown. Sections of BL I-75's routing had been part of US 10 and M-24, predecessors of I-75 in the area.
Financing
Interstate Highways and their rights-of-way are owned by the state in which they were built. The last federally owned portion of the Interstate System was the Woodrow Wilson Bridge on the Washington Capital Beltway. The new bridge was completed in 2009 and is collectively owned by Virginia and Maryland. Maintenance is generally the responsibility of the state department of transportation. However, there are some segments of Interstate owned and maintained by local authorities.
Taxes and user fees
About 70 percent of the construction and maintenance costs of Interstate Highways in the United States have been paid through user fees, primarily the fuel taxes collected by the federal, state, and local governments. To a much lesser extent they have been paid for by tolls collected on toll highways and bridges. The federal gasoline tax was first imposed in 1932 at one cent per gallon; during the Eisenhower administration, the Highway Trust Fund, established by the Highway Revenue Act in 1956, prescribed a three-cent-per-gallon fuel tax, soon increased to 4.5 cents per gallon. Since 1993 the tax has remained at 18.4 cents per gallon. Other excise taxes related to highway travel also accumulated in the Highway Trust Fund. Initially, that fund was sufficient for the federal portion of building the Interstate system, built in the early years with "10 cent dollars", from the perspective of the states, as the federal government paid 90% of the costs while the state paid 10%. The system's costs grew more rapidly than the revenue from taxes on fuel and other aspects of driving (e.g., the excise tax on tires).
The rest of the costs of these highways are borne by general fund receipts, bond issues, designated property taxes, and other taxes. The federal contribution is funded primarily through fuel taxes and through transfers from the Treasury's general fund. Local government contributions are overwhelmingly from sources besides user fees. As decades passed in the 20th century and into the 21st century, the portion of the user fees spent on highways themselves covers about 57 percent of their costs, with about one-sixth of the user fees being sent to other programs, including the mass transit systems in large cities. Some large sections of Interstate Highways that were planned or constructed before 1956 are still operated as toll roads, for example the Massachusetts Turnpike (I-90), the New York State Thruway (I-87 and I-90), and Kansas Turnpike (I-35, I-335, I-470, I-70). Others have had their construction bonds paid off and they have become toll-free, such as the Connecticut Turnpike (I‑95, I-395), the Richmond-Petersburg Turnpike in Virginia (also I‑95), and the Kentucky Turnpike (I‑65).
As American suburbs have expanded, the costs incurred in maintaining freeway infrastructure have also grown, leaving little in the way of funds for new Interstate construction. This has led to the proliferation of toll roads (turnpikes) as the new method of building limited-access highways in suburban areas. Some Interstates are privately maintained (for example, the VMS company maintains I‑35 in Texas) to meet rising costs of maintenance and allow state departments of transportation to focus on serving the fastest-growing regions in their states.
Parts of the Interstate System might have to be tolled in the future to meet maintenance and expansion demands, as has been done with adding toll HOV/HOT lanes in cities such as Atlanta, Dallas, and Los Angeles. Although part of the tolling is an effect of the SAFETEA‑LU act, which has put an emphasis on toll roads as a means to reduce congestion, present federal law does not allow for a state to change a freeway section to a tolled section for all traffic.
Tolls
About of toll roads are included in the Interstate Highway System. While federal legislation initially banned the collection of tolls on Interstates, many of the toll roads on the system were either completed or under construction when the Interstate Highway System was established. Since these highways provided logical connections to other parts of the system, they were designated as Interstate highways. Congress also decided that it was too costly to either build toll-free Interstates parallel to these toll roads, or directly repay all the bondholders who financed these facilities and remove the tolls. Thus, these toll roads were grandfathered into the Interstate Highway System.
Toll roads designated as Interstates (such as the Massachusetts Turnpike) were typically allowed to continue collecting tolls, but are generally ineligible to receive federal funds for maintenance and improvements. Some toll roads that did receive federal funds to finance emergency repairs (notably the Connecticut Turnpike (I-95) following the Mianus River Bridge collapse) were required to remove tolls as soon as the highway's construction bonds were paid off. In addition, these toll facilities were grandfathered from Interstate Highway standards. A notable example is the western approach to the Benjamin Franklin Bridge in Philadelphia, where I-676 has a surface street section through a historic area.
Policies on toll facilities and Interstate Highways have since changed. The Federal Highway Administration has allowed some states to collect tolls on existing Interstate Highways, while a recent extension of I-376 included a section of Pennsylvania Route 60 that was tolled by the Pennsylvania Turnpike Commission before receiving Interstate designation. Also, newer toll facilities (like the tolled section of I-376, which was built in the early 1990s) must conform to Interstate standards. A new edition of the Manual on Uniform Traffic Control Devices in 2009 requires a black-on-yellow "Toll" sign to be placed above the Interstate trailblazer on Interstate Highways that collect tolls.
Legislation passed in 2005, known as SAFETEA-LU, encouraged states to construct new Interstate Highways through "innovative financing" methods. SAFETEA-LU made it easier for states to pursue innovative financing by easing the restrictions on building interstates as toll roads, either through state agencies or through public–private partnerships. However, SAFETEA-LU left in place a prohibition on installing tolls on existing toll-free Interstates, and states wishing to toll such routes to finance upgrades and repairs must first seek approval from Congress. Many states have started using high-occupancy toll (HOT) lanes and other partial tolling methods, whereby certain lanes of highly congested freeways are tolled, while others are left free, allowing people to pay a fee to travel in less congested lanes. Examples of recent projects to add HOT lanes to existing freeways include the Virginia HOT lanes on the Virginia portions of the Capital Beltway and other related interstate highways (I-95, I-495, I-395) and the addition of express toll lanes to Interstate 77 in North Carolina in the Charlotte metropolitan area.
Chargeable and non-chargeable Interstate routes
Interstate Highways financed with federal funds are known as "chargeable" Interstate routes, and are considered part of the network of highways. Federal laws also allow "non-chargeable" Interstate routes (highways funded similarly to state and US Highways) to be signed as Interstates, if they both meet the Interstate Highway standards and are logical additions or connections to the system. These additions fall under two categories: routes that already meet Interstate standards, and routes not yet upgraded to Interstate standards. Only routes that meet Interstate standards may be signed as Interstates once their proposed number is approved.
Signage
Interstate shield
Interstate Highways are signed by a number placed on a red, white, and blue sign. The shield design itself is a registered trademark of the American Association of State Highway and Transportation Officials. The colors red, white, and blue were chosen because they are the colors of the American flag. In the original design, the name of the state was displayed above the highway number, but in many states, this area is now left blank, allowing for the printing of larger and more-legible digits. Signs with the shield alone are placed periodically throughout each Interstate as reassurance markers. These signs usually measure high, and are wide for two-digit Interstates or for three-digit Interstates.
Interstate business loops and spurs use a special shield in which the red and blue are replaced with green, the word "BUSINESS" appears instead of "INTERSTATE", and the word "SPUR" or "LOOP" usually appears above the number. The green shield is employed to mark the main route through a city's central business district, which intersects the associated Interstate at one (spur) or both (loop) ends of the business route. The route usually traverses the main thoroughfare(s) of the city's downtown area or other major business district. A city may have more than one Interstate-derived business route, depending on the number of Interstates passing through a city and the number of significant business districts therein.
Over time, the design of the Interstate shield has changed. In 1957 the Interstate shield designed by Texas Highway Department employee Richard Oliver was introduced, the winner of a contest that included 100 entries; at the time, the shield color was a dark navy blue and only wide. The Manual on Uniform Traffic Control Devices (MUTCD) standards revised the shield in the 1961, 1971, and 1978 editions.
Exit numbering
The majority of Interstates have exit numbers. Like other highways, Interstates feature guide signs that list control cities to help direct drivers through interchanges and exits toward their desired destination. All traffic signs and lane markings on the Interstates are supposed to be designed in compliance with the Manual on Uniform Traffic Control Devices (MUTCD). There are, however, many local and regional variations in signage.
For many years, California was the only state that did not use an exit numbering system. It was granted an exemption in the 1950s due to having an already largely completed and signed highway system; placing exit number signage across the state was deemed too expensive. To control costs, California began to incorporate exit numbers on its freeways in 2002—Interstate, US, and state routes alike. Caltrans commonly installs exit number signage only when a freeway or interchange is built, reconstructed, retrofitted, or repaired, and it is usually tacked onto the top-right corner of an already existing sign. Newer signs along the freeways follow this practice as well. Most exits along California's Interstates now have exit number signage, particularly in rural areas. California, however, still does not use mileposts, although a few exist for experiments or for special purposes.
In 2010–2011, the Illinois State Toll Highway Authority posted all new mile markers to be uniform with the rest of the state on I‑90 (Jane Addams Memorial/Northwest Tollway) and the I‑94 section of the Tri‑State Tollway, which previously had matched the I‑294 section starting in the south at I‑80/I‑94/IL Route 394. This also applied to the tolled portion of the Ronald Reagan Tollway (I-88). The tollway also added exit number tabs to the exits.
Exit numbers correspond to Interstate mileage markers in most states. On I‑19 in Arizona, however, length is measured in kilometers instead of miles because, at the time of construction, a push for the United States to change to a metric system of measurement had gained enough traction that it was mistakenly assumed that all highway measurements would eventually be changed to metric (and some distance signs retain metric distances); proximity to metric-using Mexico may also have been a factor, as I‑19 indirectly connects I‑10 to the Mexican Federal Highway system via surface streets in Nogales. Mileage count increases from west to east on most even-numbered Interstates; on odd-numbered Interstates mileage count increases from south to north.
Some highways, including the New York State Thruway, use sequential exit-numbering schemes. Exits on the New York State Thruway count up from Yonkers traveling north, and then west from Albany. I‑87 in New York State is numbered in three sections. The first section makes up the Major Deegan Expressway in the Bronx, with interchanges numbered sequentially from 1 to 14. The second section of I‑87 is a part of the New York State Thruway that starts in Yonkers (exit 1) and continues north to Albany (exit 24); at Albany, the Thruway turns west and becomes I‑90 for exits 25 to 61. From Albany north to the Canadian border, the exits on I‑87 are numbered sequentially from 1 to 44 along the Adirondack Northway. This often leads to confusion as there is more than one exit on I‑87 with the same number. For example, exit 4 on Thruway section of I‑87 connects with the Cross County Parkway in Yonkers, but exit 4 on the Northway is the exit for the Albany airport. These two exits share a number but are located apart.
Many northeastern states label exit numbers sequentially, regardless of how many miles have passed between exits. States in which Interstate exits are still numbered sequentially are Connecticut, Delaware, New Hampshire, New York, and Vermont; as such, three of the main Interstate Highways that remain completely within these states (87, 88, 89) have interchanges numbered sequentially along their entire routes. Maine, Massachusetts, Pennsylvania, Virginia, Georgia, and Florida followed this system for a number of years, but have since converted to mileage-based exit numbers. Georgia renumbered in 2000, while Maine did so in 2004. Massachusetts converted its exit numbers in 2021, and most recently Rhode Island in 2022. The Pennsylvania Turnpike uses both mile marker numbers and sequential numbers. Mile marker numbers are used for signage, while sequential numbers are used for numbering interchanges internally. The New Jersey Turnpike, including the portions that are signed as I‑95 and I‑78, also has sequential numbering, but other Interstates within New Jersey use mile markers.
Sign locations
There are four common signage methods on Interstates:
Locating a sign on the ground to the side of the highway, mostly the right, and is used to denote exits, as well as rest areas, motorist services such as gas and lodging, recreational sites, and freeway names
Attaching the sign to an overpass
Mounting on full gantries that bridge the entire width of the highway and often show two or more signs
Mounting on half-gantries that are located on one side of the highway, like a ground-mounted sign
Statistics
Volume
Most heavily traveled: 379,000 vehicles per day: I-405 in Los Angeles, California (2011 estimate).
Elevation
Highest: : I-70 in the Eisenhower Tunnel at the Continental Divide in the Colorado Rocky Mountains.
Lowest (land): : I-8 at the New River near Seeley, California.
Lowest (underwater): : I-95 in the Fort McHenry Tunnel under the Baltimore Inner Harbor.
Length
Longest (east–west): : I-90 from Boston, Massachusetts, to Seattle, Washington.
Longest (north–south): : I-95 from the Canadian border near Houlton, Maine, to Miami, Florida.
Shortest (two-digit): : I-69W in Laredo, Texas.
Shortest (auxiliary): : I-878 in Queens, New York, New York.
Longest segment between state lines: : I-10 in Texas from the New Mexico state line near El Paso to the Louisiana state line near Orange, Texas.
Shortest segment between state lines: : I-95/I-495 (Capital Beltway) on the Woodrow Wilson Bridge across the Potomac River where they briefly cross the southernmost tip of the District of Columbia between its borders with Maryland and Virginia.
Longest concurrency: : I-80 and I-90; Gary, Indiana, to Elyria, Ohio.
States
Most states served by an Interstate: 15 states plus the District of Columbia: I-95 through Florida, Georgia, South Carolina, North Carolina, Virginia, DC, Maryland, Delaware, Pennsylvania, New Jersey, New York, Connecticut, Rhode Island, Massachusetts, New Hampshire, and Maine.
Most Interstates in a state: 32 routes: New York, totaling
Most primary Interstates in a state: 13 routes: Illinois
Most Interstate mileage in a state: : Texas, in 17 different routes.
Fewest Interstates in a state: 3 routes: Delaware, New Mexico, North Dakota, and Rhode Island. Puerto Rico also has 3 routes.
Fewest primary Interstates in a state: 1 route: Delaware, Maine, and Rhode Island (I-95 in each case).
Least Interstate mileage in a state: : Delaware, in 3 different routes.
Impact and reception
Following the passage of the Federal Aid Highway Act of 1956, passenger rail declined sharply as did freight rail for a short time, but the trucking industry expanded dramatically and the cost of shipping and travel fell sharply. Suburbanization became possible, with the rapid growth of larger, sprawling, and more car-dependent housing than was available in central cities, enabling racial segregation by white flight. A sense of isolationism developed in suburbs, with suburbanites wanting to keep urban areas disconnected from the suburbs. Tourism dramatically expanded, creating a demand for more service stations, motels, restaurants and visitor attractions. The Interstate System was the basis for urban expansion in the Sun Belt, and many urban areas in the region are thus very car-dependent. The highways may have contributed to increased economic productivity in, and thereby increased migration to, the Sun Belt. In rural areas, towns and small cities off the grid lost out as shoppers followed the interstate and new factories were located near them.
The system had a profound effect on interstate shipping. The Interstate Highway System was being constructed at the same time as the intermodal shipping container made its debut. These containers could be placed on trailers behind trucks and shipped across the country with ease. A new road network and shipping containers that could be easily moved from ship to train to truck meant that overseas manufacturers and domestic startups could get their products to market quicker than ever, allowing for accelerated economic growth. Forty years after its construction, the Interstate Highway System had returned $6 for every $1 spent on the project. According to research by the FHWA, "from 1950 to 1989, approximately one-quarter of the nation's productivity increase is attributable to increased investment in the highway system."
The system had a particularly strong effect in Southern states, where major highways were inadequate. The new system facilitated the relocation of heavy manufacturing to the South and spurred the development of Southern-based corporations like Walmart (in Arkansas) and FedEx (in Tennessee).
The Interstate Highway System also dramatically affected American culture, contributing to cars becoming more central to the American identity. Before, driving was considered an excursion that required some amount of skill and could have some chance of unpredictability. With the standardization of signs, road widths and rules, certain unpredictabilities lessened. Justin Fox wrote, "By making road more reliable and by making Americans more reliant on them, they took away most of the adventure and romance associated with driving."
The Interstate Highway System has been criticized for contributing to the decline of some cities that were divided by Interstates, and for displacing minority neighborhoods in urban centers. Between 1957 and 1977, the Interstate System alone displaced over 475,000 households and one million people across the country. Highways have also been criticized for increasing racial segregation by creating physical barriers between neighborhoods, and for overall reductions in available housing and population in neighborhoods affected by highway construction. Other critics have blamed the Interstate Highway System for the decline of public transportation in the United States since the 1950s, which minorities and low-income residents are three to six times more likely to use. Previous highways, such as US 66, were also bypassed by the new Interstate system, turning countless rural communities along the way into ghost towns. The Interstate System has also contributed to continued resistance against new public transportation.
The Interstate Highway System had a negative impact on minority groups, especially in urban areas. Even though the government used eminent domain to obtain land for the Interstates, it was still economical to build where land was cheapest. This cheap land was often located in predominately minority areas. Not only were minority neighborhoods destroyed, but in some cities the Interstates were used to divide white and minority neighborhoods. These practices were common in cities both in the North and South, including Nashville, Miami, Chicago, Detroit, and many other cities. The division and destruction of neighborhoods led to the limitation of employment and other opportunities, which deteriorated the economic fabric of neighborhoods. Neighborhoods bordering Interstates have a much higher level of particulate air pollution and are more likely to be chosen for polluting industrial facilities.
See also
Highway systems by country
List of controlled-access highway systems
Non-motorized access on freeways
Notes
References
Further reading
External links
Dwight D. Eisenhower National System of Interstate and Defense Highways, Federal Highway Administration (FHWA)
Route Log and Finder List, FHWA
Turner-Fairbank Highway Research Center, FHWA
Interstate Highway System, Dwight D. Eisenhower Presidential Library and Museum
"Keep on Trucking?: Would you pay more in taxes to fix roads and rail?", NOW on PBS
1956 establishments in the United States
Presidency of Dwight D. Eisenhower
Transport systems
Types of roads | Interstate Highway System | [
"Physics",
"Technology"
] | 11,038 | [
"Physical systems",
"Transport",
"Transport systems"
] |
43,956 | https://en.wikipedia.org/wiki/Erlenmeyer%20flask | An Erlenmeyer flask, also known as a conical flask (British English) or a titration flask, is a type of laboratory flask with a flat bottom, a conical body, and a cylindrical neck. It is named after the German chemist Emil Erlenmeyer (1825–1909), who invented it in 1860.
Erlenmeyer flasks have wide bases and narrow necks. They may be graduated, and often have spots of ground glass or enamel where they can be labeled with a pencil. The Erlenmeyer flask differs from the beaker in its tapered body and narrow neck. Depending on the application, they may be constructed from glass or plastic, in a wide range of volumes.
The mouth of the Erlenmeyer flask may have a beaded lip that can be stoppered or covered. Alternatively, the neck may be fitted with ground glass or other connector for use with more specialized stoppers or attachment to other apparatus. A Büchner flask is a common design modification for filtration under vacuum.
Uses
In chemistry
The slanted sides and narrow neck of this flask allow the contents to be mixed by swirling without risk of spillage, making it suitable for titrations: the flask is placed under the buret, and the solvent and the indicator are added to it. Such features similarly make the flask suitable for boiling liquids. Hot vapour condenses on the upper section of the Erlenmeyer flask, reducing solvent loss. Erlenmeyer flasks' narrow necks can also support filter funnels.
The final two attributes of Erlenmeyer flasks make them especially appropriate for recrystallization. The sample to be purified is heated to a boil, and sufficient solvent is added for complete dissolution. The receiving flask is filled with a small amount of solvent, and heated to a boil. The hot solution is filtered through a fluted filter paper into the receiving flask. Hot vapors from the boiling solvent keep the filter funnel warm, avoiding premature crystallization.
Like beakers, Erlenmeyer flasks are not normally suitable for accurate volumetric measurements. Their stamped volumes are accurate only to within about 5%.
In biology
Erlenmeyer flasks are also used in microbiology for the preparation of microbial cultures. Erlenmeyer flasks used in cell culture are sterilized and may feature vented closures to enhance gas exchange during incubation and shaking. The use of minimal liquid volumes, typically no more than one fifth of the total flask volume, and baffles molded into the flask's internal surface both serve to maximize gas transfer and promote chaotic mixing when the flasks are orbitally shaken. The oxygen transfer rate in Erlenmeyer flasks depends on the agitation speed, the liquid volume, and the shake-flask design. The shaking frequency has the most significant impact on oxygen transfer.
Oxygenation and mixing of liquid cultures further depend on rotation of the liquid "in-phase", meaning the synchronous movement of the liquid with the shaker table. Under certain conditions the shaking process leads to a breakdown of liquid motion – called "out-of-phase phenomenon". This phenomenon has been intensively characterized for shake flask bioreactors. Out-of-phase conditions are associated with a strong decrease in mixing performance, oxygen transfer, and power input. The main factor in out-of-phase operation is the viscosity of the culture medium, but a large vessel diameter, low filling levels, and a high number of baffles also contribute.
Legal restriction
To impede illicit drug manufacturers, the state of Texas previously restricted the sale of Erlenmeyer flasks to those who have the requisite permits. On September 1, 2019, SB 616 amended the law so that permits are no longer required, but accurate inventory of this and certain other pieces of lab equipment must still be maintained, loss or theft must still be reported, and the owner must still allow audits of their records and equipment to be made.
Additional images
See also
Chemex
Fernbach flask
Fleaker
Florence flask
References
External links
1860 in science
1860 in the German Confederation
German inventions
Laboratory glassware
Volumetric instruments | Erlenmeyer flask | [
"Technology",
"Engineering"
] | 865 | [
"Volumetric instruments",
"Measuring instruments"
] |
43,970 | https://en.wikipedia.org/wiki/Calorimeter | A calorimeter is a device used for calorimetry, or the process of measuring the heat of chemical reactions or physical changes as well as heat capacity. Differential scanning calorimeters, isothermal micro calorimeters, titration calorimeters and accelerated rate calorimeters are among the most common types. A simple calorimeter just consists of a thermometer attached to a metal container full of water suspended above a combustion chamber. It is one of the measurement devices used in the study of thermodynamics, chemistry, and biochemistry.
To find the enthalpy change per mole of a substance A in a reaction between two substances A and B, the substances are separately added to a calorimeter and the initial and final temperatures (before the reaction has started and after it has finished) are noted. Multiplying the temperature change by the mass and specific heat capacities of the substances gives a value for the energy given off or absorbed during the reaction. Dividing the energy change by how many moles of A were present gives its enthalpy change of reaction.
The basic relation is q = C ΔT, where q is the amount of heat according to the change in temperature, measured in joules, and C is the heat capacity of the calorimeter, a value associated with each individual apparatus, in units of energy per temperature (joules/kelvin).
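As a rough numeric illustration of the calculation described above, the following sketch uses invented values for the calorimeter constant, temperature change, and amount of substance; the sign convention (negative ΔH for an exothermic reaction) is an assumption of the example.

```python
# Hypothetical worked example of the calculation described above.
# All numbers are illustrative, not measured values.

heat_capacity_calorimeter = 1500.0   # J/K, calibration constant of the apparatus
delta_T = 2.4                        # K, measured temperature rise
moles_of_A = 0.050                   # mol of reactant A added

q = heat_capacity_calorimeter * delta_T   # heat absorbed by the calorimeter, J
delta_H_per_mole = -q / moles_of_A        # exothermic reaction heats the calorimeter, so ΔH is negative

print(f"q = {q:.0f} J, ΔH ≈ {delta_H_per_mole/1000:.1f} kJ/mol")
# q = 3600 J, ΔH ≈ -72.0 kJ/mol
```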
History
In 1761 Joseph Black introduced the idea of latent heat which led to the creation of the first ice calorimeters. In 1780, Antoine Lavoisier used the heat released by the respiration of a guinea pig to melt snow surrounding his apparatus, showing that respiratory gas exchange is a form of combustion, similar to the burning of a candle. Lavoisier named this apparatus 'calorimeter', based on both Greek and Latin roots. One of the first ice calorimeters was used in the winter of 1782–83 by Lavoisier and Pierre-Simon Laplace. It relied on the heat required for the melting of ice to measure the heat released in various chemical reactions.
Adiabatic calorimeters
An adiabatic calorimeter is a calorimeter used to examine a runaway reaction. Since the calorimeter runs in an adiabatic environment, any heat generated by the material sample under test causes the sample to increase in temperature, thus fueling the reaction.
No adiabatic calorimeter is fully adiabatic - some heat will be lost by the sample to the sample holder. A mathematical correction factor, known as the phi-factor, can be used to adjust the calorimetric result to account for these heat losses. The phi-factor is the ratio of the thermal mass of the sample and sample holder to the thermal mass of the sample alone.
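A minimal sketch of the phi-factor, using invented thermal masses; applying the factor as a simple multiplier on the observed temperature rise is one common way the correction is used, shown here only for illustration.

```python
# Illustrative phi-factor correction for an adiabatic calorimeter run.
# All values are invented for the example.

m_sample, cp_sample = 0.010, 2000.0   # kg and J/(kg*K) for the sample
m_holder, cp_holder = 0.030, 500.0    # kg and J/(kg*K) for the sample holder

phi = (m_sample * cp_sample + m_holder * cp_holder) / (m_sample * cp_sample)
observed_rise = 40.0                  # K, measured self-heating of sample plus holder
corrected_rise = phi * observed_rise  # one common way the phi-factor is applied

print(f"phi = {phi:.2f}, corrected adiabatic temperature rise ≈ {corrected_rise:.0f} K")
# phi = 1.75, corrected adiabatic temperature rise ≈ 70 K
```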
Reaction calorimeters
A reaction calorimeter is a calorimeter in which a chemical reaction is initiated within a closed insulated container. Reaction heats are measured and the total heat is obtained by integrating heat flow versus time. This is the standard used in industry to measure heats since industrial processes are engineered to run at constant temperatures. Reaction calorimetry can also be used to determine maximum heat release rate for chemical process engineering and for tracking the global kinetics of reactions. There are four main methods for measuring the heat in a reaction calorimeter:
Heat flow calorimeter
The cooling/heating jacket controls either the temperature of the process or the temperature of the jacket. Heat is measured by monitoring the temperature difference between the heat transfer fluid and the process fluid. In addition, the fill volume (i.e., the wetted area), specific heat, and heat transfer coefficient have to be determined to arrive at a correct value. It is possible with this type of calorimeter to run reactions at reflux, although doing so is much less accurate.
Heat balance calorimeter
The cooling/heating jacket controls the temperature of the process. Heat is measured by monitoring the heat gained or lost by the heat transfer fluid.
Power compensation
Power compensation uses a heater placed within the vessel to maintain a constant temperature. The energy supplied to this heater can be varied as reactions require and the calorimetry signal is purely derived from this electrical power.
Constant flux
Constant flux calorimetry (or COFLUX as it is often termed) is derived from heat balance calorimetry and uses specialized control mechanisms to maintain a constant heat flow (or flux) across the vessel wall.
Bomb calorimeters
A bomb calorimeter is a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Bomb calorimeters have to withstand the large pressure within the calorimeter as the reaction is being measured. Electrical energy is used to ignite the fuel; as the fuel is burning, it will heat up the surrounding air, which expands and escapes through a tube that leads the air out of the calorimeter. When the air is escaping through the copper tube it will also heat up the water outside the tube. The change in temperature of the water allows for calculating calorie content of the fuel.
In more recent calorimeter designs, the whole bomb, pressurized with excess pure oxygen (typically at ) and containing a weighed mass of a sample (typically 1–1.5 g) and a small fixed amount of water (to saturate the internal atmosphere, thus ensuring that all water produced is liquid, and removing the need to include enthalpy of vaporization in calculations), is submerged under a known volume of water (ca. 2000 ml) before the charge is electrically ignited. The bomb, with the known mass of the sample and oxygen, form a closed system — no gases escape during the reaction. The weighed reactant put inside the steel container is then ignited. Energy is released by the combustion and heat flow from this crosses the stainless steel wall, thus raising the temperature of the steel bomb, its contents, and the surrounding water jacket. The temperature change in the water is then accurately measured with a thermometer. This reading, along with a bomb factor (which is dependent on the heat capacity of the metal bomb parts), is used to calculate the energy given out by the sample burn. A small correction is made to account for the electrical energy input, the burning fuse, and acid production (by titration of the residual liquid). After the temperature rise has been measured, the excess pressure in the bomb is released.
At its core, a bomb calorimeter consists of a small cup to contain the sample, oxygen, a stainless steel bomb, water, a stirrer, a thermometer, the dewar or insulating container (to prevent heat flow from the calorimeter to its surroundings) and an ignition circuit connected to the bomb. By using stainless steel for the bomb, the reaction will occur with no volume change observed.
Since there is no heat exchange between the calorimeter and surroundings (Q = 0) (adiabatic), no work is performed (W = 0)
Thus, the total internal energy change
ΔE(total) = Q + W = 0
Also, the total internal energy change
ΔE(total) = ΔE(system) + ΔE(surroundings) = 0
so that ΔE(system) = -ΔE(surroundings) = -Cv ΔT (constant volume, ΔV = 0)
where Cv is the heat capacity of the bomb.
Before the bomb can be used to determine the heat of combustion of any compound, it must be calibrated. The value of Cv can be estimated from the masses and specific heats of the bomb's steel and the surrounding water, which can be measured; in the laboratory, Cv is determined instead by running a compound with a known heat of combustion value:
Cv = Hc m / ΔT
Common calibration compounds are benzoic acid or p-methyl benzoic acid.
Temperature (T) is recorded every minute, and ΔT = T(final) - T(initial).
A small factor that contributes to the correction of the total heat of combustion is the fuse wire. Nickel fuse wire is often used and has a heat of combustion of 981.2 cal/g.
In order to calibrate the bomb, a small amount (~1 g) of benzoic acid or p-methyl benzoic acid is weighed.
A length of nickel fuse wire (~10 cm) is weighed both before and after the combustion process; the mass of fuse wire burned is Δm(wire) = m(before) - m(after).
The combustion of the sample (benzoic acid) inside the bomb then gives
Cv ΔT = Hc(benzoic acid) m(benzoic acid) + Hc(wire) Δm(wire)
Once the Cv value of the bomb is determined, the bomb is ready to use to calculate the heat of combustion of any compound from
Hc(sample) = (Cv ΔT - Hc(wire) Δm(wire)) / m(sample)
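A hypothetical worked example of the two-step procedure (calibration with benzoic acid, then measurement of an unknown sample) is sketched below. The heat of combustion assumed for benzoic acid, and all masses and temperature rises, are invented for illustration; only the fuse-wire value is taken from the text above.

```python
# Hypothetical bomb-calorimeter workflow: calibrate Cv with benzoic acid,
# then use it to estimate the heat of combustion of an unknown sample.
# All numeric inputs are invented for illustration.

HC_WIRE = 981.2          # cal/g, heat of combustion of nickel fuse wire (from text)
HC_BENZOIC = 6318.0      # cal/g, assumed literature value used only for this sketch

def calibrate_cv(m_benzoic, m_wire_burned, delta_T):
    """Return bomb heat capacity Cv in cal/K from a calibration run."""
    q_total = HC_BENZOIC * m_benzoic + HC_WIRE * m_wire_burned
    return q_total / delta_T

def heat_of_combustion(cv, m_sample, m_wire_burned, delta_T):
    """Return heat of combustion of an unknown sample in cal/g."""
    return (cv * delta_T - HC_WIRE * m_wire_burned) / m_sample

cv = calibrate_cv(m_benzoic=1.000, m_wire_burned=0.010, delta_T=2.50)
print(f"Cv ≈ {cv:.0f} cal/K")                    # ≈ 2531 cal/K
hc = heat_of_combustion(cv, m_sample=0.800, m_wire_burned=0.010, delta_T=1.90)
print(f"Hc(sample) ≈ {hc:.0f} cal/g")            # ≈ 5999 cal/g (illustrative)
```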
Combustion of non-flammables
The higher pressure and concentration of in the bomb system can render combustible some compounds that are not normally flammable. Some substances do not combust completely, making the calculations harder as the remaining mass has to be taken into consideration, making the possible error considerably larger and compromising the data.
When working with compounds that are not as flammable (and might not combust completely), one solution is to mix the compound with a flammable compound of known heat of combustion and press a pellet from the mixture. Once the Cv of the bomb is known, along with the heat of combustion of the flammable compound (CFC), of the wire (CW), the masses (mFC and mW), and the temperature change (ΔT), the heat of combustion of the less flammable compound (CLFC) can be calculated with:
CLFC = Cv ΔT − CFC mFC − CW mW
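A short numeric sketch of this mixture formula follows; the masses and the heat of combustion of the flammable carrier are invented, and CLFC is treated as the total heat released by the less flammable compound, so dividing by its mass gives a per-gram value.

```python
# Sketch of the pelletized-mixture approach for a poorly flammable sample,
# following CLFC = Cv*dT - CFC*mFC - CW*mW above. Numbers are invented.

cv = 2531.0                    # cal/K, bomb heat capacity from a prior calibration
delta_T = 2.10                 # K, observed temperature rise
hc_fc, m_fc = 6318.0, 0.500    # cal/g and g of the flammable carrier (assumed values)
hc_w, m_w = 981.2, 0.010       # cal/g and g of fuse wire burned
m_lfc = 0.400                  # g of the less flammable compound in the pellet

q_lfc = cv * delta_T - hc_fc * m_fc - hc_w * m_w
print(f"heat from sample ≈ {q_lfc:.0f} cal, ≈ {q_lfc/m_lfc:.0f} cal/g")
# heat from sample ≈ 2146 cal, ≈ 5366 cal/g
```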
Calvet-type calorimeters
The detection is based on a three-dimensional fluxmeter sensor. The fluxmeter element consists of a ring of several thermocouples in series. The corresponding thermopile of high thermal conductivity surrounds the experimental space within the calorimetric block. The radial arrangement of the thermopiles guarantees an almost complete integration of the heat. This is verified by the calculation of the efficiency ratio, which indicates that an average value of 94% ± 1% of heat is transmitted through the sensor over the full temperature range of the Calvet-type calorimeter. In this setup, the sensitivity of the calorimeter is not affected by the crucible, the type of purge gas, or the flow rate. The main advantage of the setup is the increase of the experimental vessel's size and consequently the size of the sample, without affecting the accuracy of the calorimetric measurement.
The calibration of the calorimetric detectors is a key step and has to be performed very carefully. For Calvet-type calorimeters, a specific calibration, the so-called Joule effect or electrical calibration, has been developed to overcome all the problems encountered by a calibration done with standard materials. The main advantages of this type of calibration are as follows:
It is an absolute calibration.
The use of standard materials for calibration is not necessary. The calibration can be performed at a constant temperature, in the heating mode and in the cooling mode.
It can be applied to any experimental vessel volume.
It is a very accurate calibration.
An example of Calvet-type calorimeter is the C80 Calorimeter (reaction, isothermal and scanning calorimeter).
Adiabatic and Isoperibol calorimeters
Sometimes referred to as constant-pressure calorimeters, adiabatic calorimeters measure the change in enthalpy of a reaction occurring in solution, during which no heat exchange with the surroundings is allowed (adiabatic) and the atmospheric pressure remains constant.
An example is a coffee-cup calorimeter, which is constructed from two nested Styrofoam cups, providing insulation from the surroundings, and a lid with two holes, allowing insertion of a thermometer and a stirring rod. The inner cup holds a known amount of a solvent, usually water, that absorbs the heat from the reaction. When the reaction occurs, the outer cup provides insulation. Then
where
Cp, specific heat at constant pressure
ΔH, enthalpy of solution
ΔT, change in temperature
m, mass of solvent
M, molecular mass of solvent
The measurement of heat using a simple calorimeter, like the coffee cup calorimeter, is an example of constant-pressure calorimetry, since the pressure (atmospheric pressure) remains constant during the process. Constant-pressure calorimetry is used in determining the changes in enthalpy occurring in solution. Under these conditions the change in enthalpy equals the heat.
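The following sketch illustrates a typical coffee-cup calculation with invented numbers; the division by the moles of dissolved solute to obtain a molar enthalpy is an assumption of the example rather than the exact expression elided above.

```python
# Illustrative coffee-cup calorimeter estimate; all inputs are invented.
# q is the heat gained by the solvent at constant pressure, and dividing by
# the moles of dissolved solute (an assumption of this sketch) gives an
# approximate molar enthalpy of solution.

m_solvent = 100.0      # g of water
cp_water = 4.184       # J/(g*K), specific heat of water
delta_T = -1.5         # K, temperature drop observed (endothermic dissolution)
n_solute = 0.050       # mol of solute dissolved

q_solution = m_solvent * cp_water * delta_T   # heat gained by the water, J
delta_H = -q_solution / n_solute              # J/mol, enthalpy change of dissolution

print(f"q = {q_solution:.0f} J, ΔH ≈ {delta_H/1000:+.1f} kJ/mol")
# q = -628 J, ΔH ≈ +12.6 kJ/mol (endothermic)
```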
Commercial calorimeters operate in a similar way. The semi-adiabatic (isoperibol) calorimeters measure temperature changes up to 10°C and account for heat loss through the walls of the reaction vessel to the environment, hence, semi-adiabatic. The reaction vessel is a dewar flask which is immersed in a constant temperature bath. This provides a constant heat leak rate that can be corrected through the software. The heat capacity of the reactants (and the vessel) is measured by introducing a known amount of heat using a heater element (voltage and current) and measuring the temperature change.
Adiabatic calorimeters are most commonly used in materials science research to study reactions that occur at a constant pressure and volume. They are particularly useful for determining the heat capacity of substances, measuring the enthalpy changes of chemical reactions, and studying the thermodynamic properties of materials.
Differential scanning calorimeter
In a differential scanning calorimeter (DSC), heat flow into a sample—usually contained in a small aluminium capsule or 'pan'—is measured differentially, i.e., by comparing it to the flow into an empty reference pan.
In a heat flux DSC, both pans sit on a small slab of material with a known (calibrated) heat resistance K. The temperature of the calorimeter is raised linearly with time (scanned), i.e., the heating rate
dT/dt = β
is kept constant. This time linearity requires good design and good (computerized) temperature control. Of course, controlled cooling and isothermal experiments are also possible.
Heat flows into the two pans by conduction. The flow of heat into the sample is larger because of its heat capacity Cp. The difference in flow dq/dt induces a small temperature difference ΔT across the slab. This temperature difference is measured using a thermocouple. The heat capacity can in principle be determined from this signal:

dq/dt = ΔT / K, so that Cp = (dq/dt) / β = ΔT / (K β)

Note that this formula (equivalent to Newton's law of heat flow) is analogous to, and much older than, Ohm's law of electric flow:

I = ΔV / R.
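A small numeric sketch of the heat-flux relation above; the thermal resistance, heating rate, and temperature difference are invented, and the units are chosen only for illustration.

```python
# Illustrative heat-flux DSC estimate of Cp from the measured signal,
# using Cp ≈ ΔT / (K * β) as sketched above. All numbers are invented.

K = 0.05          # K/mW, calibrated thermal resistance of the slab (assumed units)
beta = 10.0 / 60  # K/s, heating rate of 10 K per minute
delta_T = 0.02    # K, temperature difference between sample and reference pans

heat_flow = delta_T / K   # mW flowing into the sample in excess of the reference
cp = heat_flow / beta     # mJ/K, since mW divided by (K/s) gives mJ/K

print(f"excess heat flow ≈ {heat_flow:.2f} mW, Cp ≈ {cp:.1f} mJ/K")
# excess heat flow ≈ 0.40 mW, Cp ≈ 2.4 mJ/K
```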
When suddenly heat is absorbed by the sample (e.g., when the sample melts), the signal will respond and exhibit a peak.
From the integral of this peak the enthalpy of melting can be determined, and from its onset the melting temperature.
Differential scanning calorimetry is a workhorse technique in many fields, particularly in polymer characterization.
A modulated temperature differential scanning calorimeter (MTDSC) is a type of DSC in which a small oscillation is imposed upon the otherwise linear heating rate.
This has a number of advantages. It facilitates the direct measurement of the heat capacity in one measurement, even in (quasi-)isothermal conditions. It permits the simultaneous measurement of heat effects that respond to a changing heating rate (reversing) and that don't respond to the changing heating rate (non-reversing). It allows for the optimization of both sensitivity and resolution in a single test by allowing for a slow average heating rate (optimizing resolution) and a fast changing heating rate (optimizing sensitivity).
A DSC may also be used as an initial safety screening tool. In this mode the sample will be housed in a non-reactive crucible (often gold, or gold-plated steel) that is able to withstand pressure (typically up to 100 bar). The presence of an exothermic event can then be used to assess the stability of a substance to heat. However, due to a combination of relatively poor sensitivity, slower than normal scan rates (typically 2–3 °C per min) due to the much heavier crucible, and unknown activation energy, it is necessary to deduct about 75–100 °C from the initial start of the observed exotherm to suggest a maximum temperature for the material. A much more accurate data set can be obtained from an adiabatic calorimeter, but such a test may take 2–3 days from ambient temperature at a rate of a 3 °C increment per half hour.
Isothermal titration calorimeter
In an isothermal titration calorimeter, the heat of reaction is used to follow a titration experiment. This permits determination of the midpoint (stoichiometry) (N) of a reaction as well as its enthalpy (ΔH), entropy (ΔS) and, of primary concern, the binding affinity (Ka).
The technique is gaining in importance particularly in the field of biochemistry, because it facilitates determination of substrate binding to enzymes. The technique is commonly used in the pharmaceutical industry to characterize potential drug candidates.
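The quantities obtained from an ITC experiment are commonly combined through the standard relations ΔG = -RT ln Ka and ΔG = ΔH - TΔS. The sketch below, with invented numbers, shows how the entropy term is typically back-calculated; it is not tied to any particular instrument's software.

```python
import math

# Back-calculating ΔG and ΔS from an ITC-style measurement of Ka and ΔH.
# Numbers are invented for illustration.

R = 8.314          # J/(mol*K)
T = 298.15         # K
Ka = 1.0e6         # 1/M, binding constant measured by ITC
dH = -40_000.0     # J/mol, binding enthalpy measured by ITC

dG = -R * T * math.log(Ka)   # standard relation ΔG° = -RT ln Ka
dS = (dH - dG) / T           # from ΔG = ΔH - TΔS

print(f"ΔG ≈ {dG/1000:.1f} kJ/mol, ΔS ≈ {dS:.1f} J/(mol*K)")
# ΔG ≈ -34.2 kJ/mol, ΔS ≈ -19.3 J/(mol*K)
```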
Continuous Reaction Calorimeter
The Continuous Reaction Calorimeter is especially suitable to obtain thermodynamic information for a scale-up of continuous processes in tubular reactors. This is useful because the released heat can strongly depend on the reaction control, especially for non-selective reactions. With the Continuous Reaction Calorimeter an axial temperature profile along the tube reactor can be recorded and the specific heat of reaction can be determined by means of heat balances and segmental dynamic parameters. The system must consist of a tubular reactor, dosing systems, preheaters, temperature sensors and flow meters.
In traditional heat flow calorimeters, one reactant is added continuously in small amounts, similar to a semi-batch process, in order to obtain a complete conversion of the reaction. In contrast to the tubular reactor, this leads to longer residence times, different substance concentrations and flatter temperature profiles. Thus, the selectivity of not well-defined reactions can be affected. This can lead to the formation of by-products or consecutive products which alter the measured heat of reaction, since other bonds are formed. The amount of by-product or secondary product can be found by calculating the yield of the desired product.
If the heat of reaction measured in the HFC (heat flow calorimetry) and PFR calorimeter differ, most probably some side reactions have occurred. They could, for example, be caused by different temperatures and residence times. The total measured Qr is composed of partially overlapping reaction enthalpies (ΔHr) of main and side reactions, depending on their degrees of conversion (U).
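One simplified way to read the last sentence is as a conversion-weighted sum of reaction enthalpies. The two-reaction sketch below uses invented values and is only meant to illustrate the bookkeeping, not any particular calorimeter's evaluation method.

```python
# Minimal sketch: the measured heat Qr as a sum of reaction enthalpies
# weighted by the conversion of each reaction. Invented two-reaction case.

n_feed = 1.0                                    # mol of limiting reagent fed
reactions = [
    {"dHr": -120_000.0, "conversion": 0.85},    # main reaction, J/mol
    {"dHr": -45_000.0,  "conversion": 0.10},    # side reaction, J/mol
]

Qr = sum(-r["dHr"] * r["conversion"] * n_feed for r in reactions)  # heat released, J
print(f"Qr ≈ {Qr/1000:.1f} kJ released")
# Qr ≈ 106.5 kJ released
```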
Calorimetry in Geothermal Reactors
Calorimeters can be used to measure the efficiency of geothermal energy conversion processes. Through measuring the heat input and output of the process, engineers can determine how effective the plant is at converting geothermal energy into usable electricity or other forms of energy.
Calorimeters can also monitor the quality of the steam extracted from the geothermal resource. By analyzing the heat content of the steam, engineers can ensure that the resource meets the required specifications for efficient energy production.
See also
Enthalpy
Heat
Calorie
Heat of combustion
Calorimeter constant
Reaction calorimeter
Calorimeter (particle physics)
References
External links
Isothermal Battery Calorimeters - National Renewable Energy Laboratory
Fact Sheet: Isothermal Battery Calorimeters, National Renewable Energy Laboratory, March 2015
Fluitec Contiplant Continuous Reactors
Continuous milli‑scale reaction calorimeter for direct scale‑up of flow chemistry Journal of Flow Chemistry https://doi.org/10.1007/s41981-021-00204-y
Reaction Calorimetry in continuous flow mode. A new approach for the thermal characterization of high energetic and fast reactions https://doi.org/10.1021/acs.oprd.0c00117
Measuring instruments
Laboratory equipment
Calorimetry | Calorimeter | [
"Technology",
"Engineering"
] | 4,024 | [
"Measuring instruments"
] |
43,972 | https://en.wikipedia.org/wiki/Partial%20pressure | In a mixture of gases, each constituent gas has a partial pressure which is the notional pressure of that constituent gas as if it alone occupied the entire volume of the original mixture at the same temperature. The total pressure of an ideal gas mixture is the sum of the partial pressures of the gases in the mixture (Dalton's Law).
The partial pressure of a gas is a measure of thermodynamic activity of the gas's molecules. Gases dissolve, diffuse, and react according to their partial pressures rather than according to their concentrations in gas mixtures or liquids. This general property of gases is also true in chemical reactions of gases in biology. For example, the necessary amount of oxygen for human respiration, and the amount that is toxic, is set by the partial pressure of oxygen alone. This is true across a very wide range of different concentrations of oxygen present in various inhaled breathing gases or dissolved in blood; consequently, mixture ratios, like that of breathable 20% oxygen and 80% nitrogen, are determined by volume instead of by weight or mass. Furthermore, the partial pressures of oxygen and carbon dioxide are important parameters in tests of arterial blood gases. That said, these pressures can also be measured in, for example, cerebrospinal fluid.
Symbol
The symbol for pressure is usually P or p, which may use a subscript to identify the pressure, and gas species are also referred to by subscript. When combined, these subscripts are applied recursively.
Examples:
P1 or p1 = pressure at time 1
PH2 or pH2 = partial pressure of hydrogen
PaO2 or paO2 = arterial partial pressure of oxygen
PvO2 or pvO2 = venous partial pressure of oxygen
Dalton's law of partial pressures
Dalton's law expresses the fact that the total pressure of a mixture of ideal gases is equal to the sum of the partial pressures of the individual gases in the mixture. This equality arises from the fact that in an ideal gas, the molecules are so far apart that they do not interact with each other. Most actual real-world gases come very close to this ideal. For example, given an ideal gas mixture of nitrogen (N2), hydrogen (H2) and ammonia (NH3):
p = pN2 + pH2 + pNH3
where:
p = total pressure of the gas mixture
pN2 = partial pressure of nitrogen (N2)
pH2 = partial pressure of hydrogen (H2)
pNH3 = partial pressure of ammonia (NH3)
Ideal gas mixtures
Ideally the ratio of partial pressures equals the ratio of the number of molecules. That is, the mole fraction xi of an individual gas component in an ideal gas mixture can be expressed in terms of the component's partial pressure or the moles of the component:
xi = pi / ptot = ni / ntot
and the partial pressure of an individual gas component in an ideal gas can be obtained using this expression:
pi = xi × ptot
The mole fraction of a gas component in a gas mixture is equal to the volumetric fraction of that component in a gas mixture.
The ratio of partial pressures relies on the following isotherm relation:
VX / Vtot = pX / ptot = nX / ntot
where:
VX is the partial volume of any individual gas component (X)
Vtot is the total volume of the gas mixture
pX is the partial pressure of gas X
ptot is the total pressure of the gas mixture
nX is the amount of substance of gas (X)
ntot is the total amount of substance in gas mixture
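As a small numerical illustration of these relations (the values are made up, not from any reference source), the Python sketch below derives mole fractions and partial pressures from amounts of substance and checks Dalton's law.

```python
n = {"N2": 0.79, "O2": 0.21}                               # mol of each component (illustrative)
p_tot = 101_325                                            # Pa, total pressure of the mixture
n_tot = sum(n.values())
x = {gas: n_i / n_tot for gas, n_i in n.items()}           # mole fractions
p = {gas: x_i * p_tot for gas, x_i in x.items()}           # partial pressures
assert abs(sum(p.values()) - p_tot) < 1e-6                 # Dalton's law holds
print(p)   # {'N2': 80046.75, 'O2': 21278.25}
```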
Partial volume (Amagat's law of additive volume)
The partial volume of a particular gas in a mixture is the volume of one component of the gas mixture. It is useful in gas mixtures, e.g. air, to focus on one particular gas component, e.g. oxygen.
It can be approximated both from the partial pressure and from the molar fraction:
VX = Vtot × pX / ptot = Vtot × nX / ntot
where:
VX is the partial volume of an individual gas component X in the mixture
Vtot is the total volume of the gas mixture
pX is the partial pressure of gas X
ptot is the total pressure of the gas mixture
nX is the amount of substance of gas X
ntot is the total amount of substance in the gas mixture
Vapor pressure
Vapor pressure is the pressure of a vapor in equilibrium with its non-vapor phases (i.e., liquid or solid). Most often the term is used to describe a liquid's tendency to evaporate. It is a measure of the tendency of molecules and atoms to escape from a liquid or a solid. A liquid's atmospheric pressure boiling point corresponds to the temperature at which its vapor pressure is equal to the surrounding atmospheric pressure and it is often called the normal boiling point.
The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point of the liquid.
A vapor pressure chart plotting vapor pressure versus temperature for a variety of liquids shows that the liquids with the highest vapor pressures have the lowest normal boiling points.
For example, at any given temperature, methyl chloride has the highest vapor pressure of the liquids in such a chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. At higher altitudes, the atmospheric pressure is less than that at sea level, so boiling points of liquids are reduced. At the top of Mount Everest, the atmospheric pressure is approximately 0.333 atm, so the boiling point of diethyl ether there would be approximately 7.5 °C, versus 34.6 °C at sea level (1 atm).
Equilibrium constants of reactions involving gas mixtures
It is possible to work out the equilibrium constant for a chemical reaction involving a mixture of gases given the partial pressure of each gas and the overall reaction formula. For a reversible reaction involving gas reactants and gas products, such as:
a A + b B ⇌ c C + d D
the equilibrium constant of the reaction would be:
Kp = (pC^c × pD^d) / (pA^a × pB^b)
For reversible reactions, changes in the total pressure, temperature or reactant concentrations will shift the equilibrium so as to favor either the right or left side of the reaction in accordance with Le Chatelier's Principle. However, the reaction kinetics may either oppose or enhance the equilibrium shift. In some cases, the reaction kinetics may be the overriding factor to consider.
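For illustration only, the sketch below evaluates the pressure-based equilibrium constant for the generic reaction above from assumed partial pressures; the numbers and stoichiometric coefficients are arbitrary examples, not data for any real reaction.

```python
def kp(partial_pressures, exponents):
    """Kp = product of p_species ** nu; nu positive for products, negative for reactants."""
    result = 1.0
    for species, nu in exponents.items():
        result *= partial_pressures[species] ** nu
    return result

# A + 2 B <=> 2 C + D, with assumed partial pressures in bar
print(kp({"A": 0.5, "B": 0.25, "C": 1.2, "D": 0.8},
         {"A": -1, "B": -2, "C": 2, "D": 1}))
```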
Henry's law and the solubility of gases
Gases will dissolve in liquids to an extent that is determined by the equilibrium between the undissolved gas and the gas that has dissolved in the liquid (called the solvent). The equilibrium constant for that equilibrium is:
k = px / Cx        (1)
where:
k = the equilibrium constant for the solvation process
px = partial pressure of gas x in equilibrium with a solution containing some of the gas
Cx = the concentration of gas x in the liquid solution
The form of the equilibrium constant shows that the concentration of a solute gas in a solution is directly proportional to the partial pressure of that gas above the solution. This statement is known as Henry's law and the equilibrium constant is quite often referred to as the Henry's law constant.
Henry's law is sometimes written as:
k′ = Cx / px        (2)
where k′ is also referred to as the Henry's law constant. As can be seen by comparing equations (1) and (2) above, k′ is the reciprocal of k. Since both may be referred to as the Henry's law constant, readers of the technical literature must be quite careful to note which version of the Henry's law equation is being used.
Henry's law is an approximation that only applies for dilute, ideal solutions and for solutions where the liquid solvent does not react chemically with the gas being dissolved.
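A minimal sketch of the first form of Henry's law is shown below. The constant is a hypothetical placeholder value for some gas and solvent pair, not reference data.

```python
k_henry = 770.0            # L·atm/mol, hypothetical Henry's law constant (k = p_x / C_x)
p_gas = 0.21               # atm, partial pressure of the gas above the solution
c_dissolved = p_gas / k_henry          # mol/L, proportional to the partial pressure
print(f"{c_dissolved:.2e} mol/L")
```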
In diving breathing gases
In underwater diving the physiological effects of individual component gases of breathing gases are a function of partial pressure.
Using diving terms, partial pressure is calculated as:
partial pressure = (total absolute pressure) × (volume fraction of gas component)
For the component gas "i":
pi = P × Fi
For example, at about 50 metres underwater, the total absolute pressure is 6 bar (i.e., 1 bar of atmospheric pressure + 5 bar of water pressure) and the partial pressures of the main components of air, oxygen 21% by volume and nitrogen approximately 79% by volume, are:
pN2 = 6 bar × 0.79 = 4.7 bar absolute
pO2 = 6 bar × 0.21 = 1.3 bar absolute
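The diving calculation above can be reproduced with a few lines of Python. The sketch assumes the usual rule of thumb of roughly 1 bar of water pressure per 10 m of seawater.

```python
def partial_pressures(depth_m, volume_fractions):
    p_abs = 1.0 + depth_m / 10.0                  # bar absolute: atmosphere + water column
    return {gas: round(p_abs * f, 2) for gas, f in volume_fractions.items()}

print(partial_pressures(50, {"O2": 0.21, "N2": 0.79}))
# {'O2': 1.26, 'N2': 4.74} -> about 1.3 and 4.7 bar, as in the text
```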
The minimum safe lower limit for the partial pressure of oxygen in a breathing gas mixture for diving is 0.16 bar absolute. Hypoxia and sudden unconsciousness can become a problem with an oxygen partial pressure of less than 0.16 bar absolute. Oxygen toxicity, involving convulsions, becomes a problem when the oxygen partial pressure is too high. The NOAA Diving Manual recommends a maximum single exposure of 45 minutes at 1.6 bar absolute, of 120 minutes at 1.5 bar absolute, of 150 minutes at 1.4 bar absolute, of 180 minutes at 1.3 bar absolute and of 210 minutes at 1.2 bar absolute. Oxygen toxicity becomes a risk when these oxygen partial pressures and exposures are exceeded. The partial pressure of oxygen also determines the maximum operating depth of a gas mixture.
Narcosis is a problem when breathing gases at high pressure. Typically, the maximum total partial pressure of narcotic gases used when planning for technical diving may be around 4.5 bar absolute, based on an equivalent narcotic depth limit.
The effect of a toxic contaminant such as carbon monoxide in breathing gas is also related to the partial pressure when breathed. A mixture which may be relatively safe at the surface could be dangerously toxic at the maximum depth of a dive, or a tolerable level of carbon dioxide in the breathing loop of a diving rebreather may become intolerable within seconds during descent when the partial pressure rapidly increases, and could lead to panic or incapacitation of the diver.
In medicine
The partial pressures of oxygen (pO2) and carbon dioxide (pCO2) in particular are important parameters in tests of arterial blood gases, but can also be measured in, for example, cerebrospinal fluid.
See also
References
Engineering thermodynamics
Equilibrium chemistry
Gas laws
Gases
Physical chemistry
Pressure
Underwater diving physics
Distillation | Partial pressure | [
"Physics",
"Chemistry",
"Engineering"
] | 2,077 | [
"Physical quantities",
"Engineering thermodynamics",
"Phases of matter",
"Pressure",
"Thermodynamics",
"Statistical mechanics",
"Physical chemistry",
"Gases",
"Mechanical quantities",
"Equilibrium chemistry",
"Distillation",
"Wikipedia categories named after physical quantities",
"Scalar phy... |
43,982 | https://en.wikipedia.org/wiki/Plumbing | Plumbing is any system that conveys fluids for a wide range of applications. Plumbing uses pipes, valves, plumbing fixtures, tanks, and other apparatuses to convey fluids. Heating and cooling (HVAC), waste removal, and potable water delivery are among the most common uses for plumbing, but it is not limited to these applications. The word derives from the Latin for lead, plumbum, as the first effective pipes used in the Roman era were lead pipes.
In the developed world, plumbing infrastructure is critical to public health and sanitation.
Boilermakers and pipefitters are not plumbers although they work with piping as part of their trade and their work can include some plumbing.
History
Plumbing originated during ancient civilizations, as they developed public baths and needed to provide potable water and wastewater removal for larger numbers of people.
The Mesopotamians introduced the world to clay sewer pipes around 4000 BCE, with the earliest examples found in the Temple of Bel at Nippur and at Eshnunna, used to remove wastewater from sites and to capture rainwater in wells. The city of Uruk contains the oldest known examples of brick-constructed latrines, built atop interconnecting fired clay sewer pipes. Clay pipes were later used in the Hittite city of Hattusa. They had easily detachable and replaceable segments, and allowed for cleaning.
Standardized earthen plumbing pipes with broad flanges making use of asphalt for preventing leakages appeared in the urban settlements of the Indus Valley civilization by 2700 BC.
Copper piping appeared in Egypt by 2400 BCE, with the Pyramid of Sahure and adjoining temple complex at Abusir, found to be connected by a copper waste pipe.
The word "plumber" dates from the Roman Empire. The Latin for lead is . Roman roofs used lead in conduits and drain pipes and some were also covered with lead. Lead was also used for piping and for making baths.
Plumbing reached its early apex in ancient Rome, which saw the introduction of expansive systems of aqueducts, tile wastewater removal, and widespread use of lead pipes. The Romans used lead pipe inscriptions to prevent water theft. With the Fall of Rome both water supply and sanitation stagnated—or regressed—for well over 1,000 years. Improvement was very slow, with little effective progress made until the growth of modern densely populated cities in the 1800s. During this period, public health authorities began pressing for better waste disposal systems to be installed, to prevent or control epidemics of disease. Earlier, the waste disposal system had consisted of collecting waste and dumping it on the ground or into a river. Eventually the development of separate, underground water and sewage systems eliminated open sewage ditches and cesspools.
In post-classical Kilwa the wealthy enjoyed indoor plumbing in their stone homes.
Most large cities today pipe solid wastes to sewage treatment plants in order to separate and partially purify the water, before emptying into streams or other bodies of water. For potable water use, galvanized iron piping was commonplace in the United States from the late 1800s until around 1960. After that period, copper piping took over, first soft copper with flared fittings, then with rigid copper tubing using soldered fittings.
The use of lead for potable water declined sharply after World War II because of increased awareness of the dangers of lead poisoning. At this time, copper piping was introduced as a better and safer alternative to lead pipes.
Systems
The major categories of plumbing systems or subsystems are:
potable cold and hot tap water supply
plumbing drainage venting
sewage systems and septic systems with or without hot water heat recycling and graywater recovery and treatment systems
Rainwater, surface, and subsurface water drainage
fuel gas piping
hydronics, i.e. heating and cooling systems using water to transport thermal energy, as in district heating systems, like for example the New York City steam system.
Water pipes
A water pipe is a pipe or tube, frequently made of plastic or metal, that carries pressurized and treated fresh water to a building (as part of a municipal water system), as well as inside the building.
History
Lead was the favoured material for water pipes for many centuries because its malleability made it practical to work into the desired shape. Such use was so common that the word "plumbing" derives from plumbum, the Latin word for lead. This was a source of lead-related health problems in the years before the health hazards of ingesting lead were fully understood; among these were stillbirths and high rates of infant mortality. Lead water pipes were still widely used in the early 20th century and remain in many households. Lead-tin alloy solder was commonly used to join copper pipes, but modern practice uses tin-antimony alloy solder instead in order to eliminate lead hazards.
Despite the Romans' common use of lead pipes, their aqueducts rarely poisoned people. Unlike other parts of the world where lead pipes cause poisoning, the Roman water had so much calcium in it that a layer of plaque prevented the water contacting the lead itself. What often causes confusion is the large amount of evidence of widespread lead poisoning, particularly amongst those who would have had easy access to piped water, an unfortunate result of lead being used in cookware and as an additive to processed food and drink (for example as a preservative in wine). Roman lead pipe inscriptions provided information on the owner to prevent water theft.
Wooden pipes were used in London and elsewhere during the 16th and 17th centuries. The pipes were hollowed-out logs which were tapered at the end with a small hole in which the water would pass through. The multiple pipes were then sealed together with hot animal fat. Wooden pipes were used in Philadelphia, Boston, and Montreal in the 1800s. Built-up wooden tubes were widely used in the US during the 20th century. These pipes (used in place of corrugated iron or reinforced concrete pipes) were made of sections cut from short lengths of wood. Locking of adjacent rings with hardwood dowel pins produced a flexible structure. About 100,000 feet of these wooden pipes were installed during WW2 in drainage culverts, storm sewers and conduits, under highways and at army camps, naval stations, airfields and ordnance plants.
Cast iron and ductile iron pipe was long a lower-cost alternative to copper before the advent of durable plastic materials but special non-conductive fittings must be used where transitions are to be made to other metallic pipes (except for terminal fittings) in order to avoid corrosion owing to electrochemical reactions between dissimilar metals (see galvanic cell).
Bronze fittings and short pipe segments are commonly used in combination with various materials.
Difference between pipes and tubes
The difference between pipes and tubes is a matter of sizing. For instance, PVC pipe for plumbing applications and galvanized steel pipe are measured in iron pipe size (IPS). Copper tube, CPVC, PeX and other tubing is measured nominally, basically an average diameter. These sizing schemes allow for universal adaptation of transitional fittings. For instance, 1/2" PeX tubing is the same size as 1/2" copper tubing. 1/2" PVC on the other hand is not the same size as 1/2" tubing, and therefore requires either a threaded male or female adapter to connect them. When used in agricultural irrigation, the singular form "pipe" is often used as a plural.
Pipe is available in rigid joints, which come in various lengths depending on the material. Tubing, in particular copper, comes in rigid hard tempered joints or soft tempered (annealed) rolls. PeX and CPVC tubing also comes in rigid joints or flexible rolls. The temper of the copper, whether it is a rigid joint or flexible roll, does not affect the sizing.
The thicknesses of the water pipe and tube walls can vary. Because piping and tubing are commodities, having a greater wall thickness implies higher initial cost. Thicker walled pipe generally implies greater durability and higher pressure tolerances. Pipe wall thickness is denoted by various schedules or for large bore polyethylene pipe in the UK by the Standard Dimension Ratio (SDR), defined as the ratio of the pipe diameter to its wall thickness. Pipe wall thickness increases with schedule, and is available in schedules 20, 40, 80, and higher in special cases. The schedule is largely determined by the operating pressure of the system, with higher pressures commanding greater thickness. Copper tubing is available in four wall thicknesses: type DWV (thinnest wall; only allowed as drain pipe per UPC), type 'M' (thin; typically only allowed as drain pipe by IPC code), type 'L' (thicker, standard duty for water lines and water service), and type 'K' (thickest, typically used underground between the main and the meter).
Wall thickness does not affect pipe or tubing size. 1/2" L copper has the same outer diameter as 1/2" K or M copper. The same applies to pipe schedules. As a result, a slight increase in pressure losses is realized due to a decrease in flowpath as wall thickness is increased. In other words, 1 foot of 1/2" L copper has slightly less volume than 1 foot of 1/2 M copper.
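As a small illustration of the Standard Dimension Ratio defined above (the example dimensions are invented, not taken from any pipe standard):

```python
outer_diameter_mm = 110.0
wall_thickness_mm = 6.6
sdr = outer_diameter_mm / wall_thickness_mm
print(round(sdr, 1))   # 16.7 -- a thicker wall gives a lower SDR
```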
Materials
Water systems of ancient times relied on gravity for the supply of water, using pipes or channels usually made of clay, lead, bamboo, wood, or stone. Hollowed wooden logs wrapped in steel banding were used for plumbing pipes, particularly water mains. Logs were used for water distribution in England close to 500 years ago. US cities began using hollowed logs in the late 1700s through the 1800s. Today, most plumbing supply pipe is made out of steel, copper, and plastic; most waste (also known as "soil") out of steel, copper, plastic, and cast iron.
The straight sections of plumbing systems are called "pipes" or "tubes". A pipe is typically formed via casting or welding, whereas a tube is made through extrusion. Pipe normally has thicker walls and may be threaded or welded, while tubing is thinner-walled and requires special joining techniques such as brazing, compression fitting, crimping, or for plastics, solvent welding. These joining techniques are discussed in more detail in the piping and plumbing fittings article.
Steel
Galvanized steel potable water supply and distribution pipes are commonly found with nominal pipe sizes from to . It is rarely used today for new construction residential plumbing. Steel pipe has National Pipe Thread (NPT) standard tapered male threads, which connect with female tapered threads on elbows, tees, couplers, valves, and other fittings. Galvanized steel (often known simply as "galv" or "iron" in the plumbing trade) is relatively expensive, and difficult to work with due to weight and requirement of a pipe threader. It remains in common use for repair of existing "galv" systems and to satisfy building code non-combustibility requirements typically found in hotels, apartment buildings and other commercial applications. It is also extremely durable and resistant to mechanical abuse. Black lacquered steel pipe is the most widely used pipe material for fire sprinklers and natural gas.
Most typical single family home systems will not require supply piping larger than due to expense as well as steel piping's tendency to become obstructed from internal rusting and mineral deposits forming on the inside of the pipe over time once the internal galvanizing zinc coating has degraded. In potable water distribution service, galvanized steel pipe has a service life of about 30 to 50 years, although it is not uncommon for it to be less in geographic areas with corrosive water contaminants.
Copper
Copper pipe and tubing was widely used for domestic water systems in the latter half of the twentieth century. Demand for copper products has fallen due to the dramatic increase in the price of copper, resulting in increased demand for alternative products including PEX and stainless steel.
Plastic
Plastic pipe is in wide use for domestic water supply and drain-waste-vent (DWV) pipe. Principal types include:
Polyvinyl chloride (PVC) was produced experimentally in the 19th century but did not become practical to manufacture until 1926, when Waldo Semon of BF Goodrich Co. developed a method to plasticize PVC, making it easier to process. PVC pipe began to be manufactured in the 1940s and was in wide use for Drain-Waste-Vent piping during the reconstruction of Germany and Japan following WWII. In the 1950s, plastics manufacturers in Western Europe and Japan began producing acrylonitrile butadiene styrene (ABS) pipe. The method for producing cross-linked polyethylene (PEX) was also developed in the 1950s. Plastic supply pipes have become increasingly common, with a variety of materials and fittings employed.
PVC/CPVC – rigid plastic pipes similar to PVC drain pipes but with thicker walls to deal with municipal water pressure, introduced around 1970. PVC stands for polyvinyl chloride, and it has become a common replacement for metal piping. PVC should be used only for cold water, or for venting. CPVC can be used for hot and cold potable water supply. Connections are made with primers and solvent cements as required by code.
PP – The material is used primarily in housewares, food packaging, and clinical equipment, but since the early 1970s has seen increasing use worldwide for both domestic hot and cold water. PP pipes are heat fused, being unsuitable for the use of glues, solvents, or mechanical fittings. PP pipe is often used in green building projects.
PBT – flexible (usually gray or black) plastic pipe which is attached to barbed fittings and secured in place with a copper crimp ring. The primary manufacturer of PBT tubing and fittings was driven into bankruptcy by a class-action lawsuit over failures of this system. However, PB and PBT tubing has since returned to the market and codes, typically first for "exposed locations" such as risers.
PEX – cross-linked polyethylene system with mechanically joined fittings employing barbs, and crimped steel or copper rings.
Polytanks – plastic polyethylene cisterns, underground water tanks, above ground water tanks, are usually made of linear polyethylene suitable as a potable water storage tank, provided in white, black or green.
Aqua – known as PEX-Al-PEX, for its PEX/aluminum sandwich, consisting of aluminum pipe sandwiched between layers of PEX, and connected with modified brass compression fittings. In 2005, many of these fittings were recalled.
Present-day water-supply systems use a network of high-pressure pumps, and pipes in buildings are now made of copper, brass, plastic (particularly cross-linked polyethylene called PEX, which is estimated to be used in 60% of single-family homes), or other nontoxic material. Due to its toxicity, most cities moved away from lead water-supply piping by the 1920s in the United States, although lead pipes were approved by national plumbing codes into the 1980s, and lead was used in plumbing solder for drinking water until it was banned in 1986. Drain and vent lines are made of plastic, steel, cast iron, or lead.
Gallery
Components
In addition to lengths of pipe or tubing, pipe fittings such as valves, elbows, tees, and unions are used in plumbing systems. Pipe and fittings are held in place with pipe hangers and strapping.
Plumbing fixtures are exchangeable devices that use water and can be connected to a building's plumbing system. They are considered to be "fixtures", in that they are semi-permanent parts of buildings, not usually owned or maintained separately. Plumbing fixtures are seen by and designed for the end-users. Some examples of fixtures include water closets (also known as toilets), urinals, bidets, showers, bathtubs, utility and kitchen sinks, drinking fountains, ice makers, humidifiers, air washers, fountains, and eye wash stations.
Sealants
Threaded pipe joints are sealed with thread seal tape or pipe dope. Many plumbing fixtures are sealed to their mounting surfaces with plumber's putty.
Equipment and tools
Plumbing equipment includes devices often behind walls or in utility spaces which are not seen by the general public. It includes water meters, pumps, expansion tanks, back flow preventers, water filters, UV sterilization lights, water softeners, water heaters, heat exchangers, gauges, and control systems.
There are many tools a plumber needs to do a good plumbing job. While many simple plumbing tasks can be completed with a few common hand held tools, other more complex jobs require specialised tools, designed specifically to make the job easier.
Specialized plumbing tools include pipe wrenches, flaring pliers, pipe vise, pipe bending machine, pipe cutter, dies, and joining tools such as soldering torches and crimp tools. New tools have been developed to help plumbers fix problems more efficiently. For example, plumbers use video cameras for inspections of hidden leaks or other problems; they also use hydro jets, and high pressure hydraulic pumps connected to steel cables for trench-less sewer line replacement.
Flooding from excessive rain or clogged sewers may require specialized equipment, such as a heavy duty pumper truck designed to vacuum raw sewage.
Problems
Bacteria have been shown to live in "premises plumbing systems". The latter refers to the "pipes and fixtures within a building that transport water to taps after it is delivered by the utility". Community water systems have been known for centuries to spread waterborne diseases like typhoid and cholera. However, "opportunistic premises plumbing pathogens" have been recognized only more recently: Legionella pneumophila, discovered in 1976, Mycobacterium avium, and Pseudomonas aeruginosa are the most commonly tracked bacteria, which people with depressed immunity can inhale or ingest and may become infected with.
Some of the locations where these opportunistic pathogens can grow include faucets, shower heads, water heaters and along pipe walls. Reasons that favor their growth are "high surface-to-volume ratio, intermittent stagnation, low disinfectant residual, and warming cycles". A high surface-to-volume ratio, i.e. a relatively large surface area allows the bacteria to form a biofilm, which protects them from disinfection.
Regulation
Much of the plumbing work in populated areas is regulated by government or quasi-government agencies due to the direct impact on the public's health, safety, and welfare. Plumbing installation and repair work on residences and other buildings generally must be done according to plumbing and building codes to protect the inhabitants of the buildings and to ensure safe, quality construction to future buyers. If permits are required for work, plumbing contractors typically secure them from the authorities on behalf of home or building owners.
Australia
In Australia, the national governing body for plumbing regulation is the Australian Building Codes Board. They are responsible for the creation of the National Construction Code (NCC), Volume 3 of which, the Plumbing Regulations 2008 and the Plumbing Code of Australia, pertains to plumbing.
Each state government has its own authority and regulations in place for licensing plumbers. They are also responsible for the interpretation, administration and enforcement of the regulations outlined in the NCC. These authorities are usually established for the sole purpose of regulating plumbing activities in their respective states/territories. However, several state-level regulation acts are quite outdated, with some still operating on local policies introduced more than a decade ago. This has led to an increase in plumbing regulatory issues not covered under current policy, and as such, many policies are currently being updated to cover these more modern issues. The updates include changes to the minimum experience and training requirements for licensing, additional work standards for new and more specific kinds of plumbing, as well as adopting the Plumbing Code of Australia into state regulations in an effort to standardise plumbing regulations across the country.
Norway
In Norway, new domestic plumbing installed since 1997 has had to satisfy the requirement that it should be easily accessible for replacement after installation. This has led to the development of the pipe-in-pipe system as a de facto requirement for domestic plumbing.
United Kingdom
In the United Kingdom the professional body is the Chartered Institute of Plumbing and Heating Engineering (an educational charity), yet the trade remains virtually ungoverned: there are no systems in place to monitor or control the activities of unqualified plumbers or those home owners who choose to undertake installation and maintenance works themselves, despite the health and safety issues which arise from such works when they are undertaken incorrectly; see Health Aspects of Plumbing (HAP), published jointly by the World Health Organization (WHO) and the World Plumbing Council (WPC). The WPC has subsequently appointed a representative to the World Health Organization to take forward various projects related to Health Aspects of Plumbing.
United States
In the United States, plumbing codes and licensing are generally controlled by state and local governments. At the national level, the Environmental Protection Agency has set guidelines about what constitutes lead-free plumbing fittings and pipes, in order to comply with the Safe Drinking Water Act.
Some widely used Standards in the United States are:
ASME A112.6.3 – Floor and Trench Drains
ASME A112.6.4 – Roof, Deck, and Balcony Drains
ASME A112.18.1/CSA B125.1 – Plumbing Supply Fittings
ASME A112.19.1/CSA B45.2 – Enameled Cast Iron and Enameled Steel Plumbing Fixtures
ASME A112.19.2/CSA B45.1 – Ceramic Plumbing Fixtures
Canada
In Canada, plumbing is a regulated trade requiring specific technical training and certification. Standards and regulations for plumbing are overseen at the provincial and territorial level, each having its distinct governing body:
Governing Bodies: Each province or territory possesses its regulatory authority overseeing the licensing and regulation of plumbers. For instance, in Ontario, the Ontario College of Trades handles the certification and regulation of tradespeople, whereas in British Columbia, the Industry Training Authority (ITA) undertakes this function.
Certification: To achieve certified plumber status in Canada, individuals typically complete an apprenticeship program encompassing both classroom instruction and hands-on experience. Upon completion, candidates undergo an examination for their certification.
Building Codes: Plumbing installations and repairs must adhere to building codes specified by individual provinces or territories. The National Building Code of Canada acts as a model code, with provinces and territories having the discretion to adopt or modify to their specific needs.
Safety and Health: Given its direct correlation with health and sanitation, plumbing work is of paramount importance in Canada. Regulations ensure uncontaminated drinking water and proper wastewater treatment, underscoring the vital role of certified plumbers for public health.
Environmental Considerations: Reflecting Canada's commitment to environmental conservation, there is an increasing emphasis on sustainable plumbing practices. Regulations advocate water conservation and the deployment of eco-friendly materials.
Standards: The Canadian Standards Association (CSA) determines standards for diverse plumbing products, ensuring their safety, quality, and efficiency. Items such as faucets and toilets frequently come with a CSA certification, indicating adherence to required standards.
See also
Active fire protection
Copper pipe
Domestic water system
Double-walled pipe
EPA Lead and Copper Rule
Fire hose
Flange
Garden hose
HDPE pipe
Heat pipe
Hose
MS Pipe, MS Tube
Passive fire protection
Pipe
Pipe and tube bender
Pipefitter
Pipe network analysis
Pipeline transport
Piping and plumbing fittings
Pipe support
Plastic pipework
Plastic pressure pipe systems
Plumber
Plumbing & Drainage Institute
Plumbosolvency
Sanitation in ancient Rome
Tube
Victaulic
Water supply network
References
Notes
Further reading
External links
ATSDR Case Studies in Environmental Medicine: Lead Toxicity U.S. Department of Health and Human Services
Lead Water Pipes and Infant Mortality in Turn-of-the-Century Massachusetts
Case Studies in Environmental Medicine - Lead Toxicity
ToxFAQs: Lead
Building engineering
Bathrooms | Plumbing | [
"Engineering"
] | 4,983 | [
"Construction",
"Plumbing"
] |
44,027 | https://en.wikipedia.org/wiki/Permutation | In mathematics, a permutation of a set can mean one of two different things:
an arrangement of its members in a sequence or linear order, or
the act or process of changing the linear order of an ordered set.
An example of the first meaning is the six permutations (orderings) of the set {1, 2, 3}: written as tuples, they are (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), and (3, 2, 1). Anagrams of a word whose letters are all different are also permutations: the letters are already ordered in the original word, and the anagram reorders them. The study of permutations of finite sets is an important topic in combinatorics and group theory.
Permutations are used in almost every branch of mathematics and in many other fields of science. In computer science, they are used for analyzing sorting algorithms; in quantum physics, for describing states of particles; and in biology, for describing RNA sequences.
The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of all positive integers less than or equal to n.
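The factorial count can be checked directly in a few lines of Python, both from the formula and by explicit enumeration:

```python
import math
from itertools import permutations

n = 5
print(math.factorial(n))                               # 120
print(sum(1 for _ in permutations(range(1, n + 1))))   # 120 orderings of {1,...,5}
```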
According to the second meaning, a permutation of a set S is defined as a bijection from S to itself. That is, it is a function from S to S for which every element occurs exactly once as an image value. Such a function σ is equivalent to the rearrangement of the elements of S in which each element i is replaced by the corresponding σ(i). For example, the permutation (3, 1, 2) is described by the function σ defined as
σ(1) = 3, σ(2) = 1, σ(3) = 2.
The collection of all permutations of a set forms a group called the symmetric group of the set. The group operation is the composition of functions (performing one rearrangement after the other), which results in another function (rearrangement). The properties of permutations do not depend on the nature of the elements being permuted, only on their number, so one often considers the standard set {1, 2, ..., n}.
In elementary combinatorics, the k-permutations, or partial permutations, are the ordered arrangements of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations in the previous sense.
History
Permutation-like objects called hexagrams were used in China in the I Ching (Pinyin: Yi Jing) as early as 1000 BC.
In Greece, Plutarch wrote that Xenocrates of Chalcedon (396–314 BC) discovered the number of different syllables possible in the Greek language. This would have been the first attempt on record to solve a difficult problem in permutations and combinations.
Al-Khalil (717–786), an Arab mathematician and cryptographer, wrote the Book of Cryptographic Messages. It contains the first use of permutations and combinations, to list all possible Arabic words with and without vowels.
The rule to determine the number of permutations of n objects was known in Indian culture around 1150 AD. The Lilavati by the Indian mathematician Bhāskara II contains a passage that translates as follows:
The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures.
In 1677, Fabian Stedman described factorials when explaining the number of permutations of bells in change ringing. Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 1 2 and 2 1. He then explains that with three bells there are "three times two figures to be produced out of three" which again is illustrated. His explanation involves "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". He then moves on to four bells and repeats the casting away argument showing that there will be four different sets of three. Effectively, this is a recursive process. He continues with five bells using the "casting away" method and tabulates the resulting 120 combinations. At this point he gives up and remarks:
Now the nature of these methods is such, that the changes on one number comprehends the changes on all lesser numbers, ... insomuch that a compleat Peal of changes on one number seemeth to be formed by uniting of the compleat Peals on all lesser numbers into one entire body;
Stedman widens the consideration of permutations; he goes on to consider the number of permutations of the letters of the alphabet and of horses from a stable of 20.
A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibilities to solve it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics, there are many similar situations in which understanding a problem requires studying certain permutations related to it.
The study of permutations as substitutions on n elements led to the notion of group as algebraic structure, through the works of Cauchy (1815 memoir).
Permutations played an important role in the cryptanalysis of the Enigma machine, a cipher device used by Nazi Germany during World War II. In particular, one important property of permutations, namely, that two permutations are conjugate exactly when they have the same cycle type, was used by cryptologist Marian Rejewski to break the German Enigma cipher in 1932–1933.
Definition
In mathematics texts it is customary to denote permutations using lowercase Greek letters. Commonly, σ or π is used.
A permutation can be defined as a bijection (an invertible mapping, a one-to-one and onto function) from a set S to itself. The identity permutation is defined by σ(x) = x for all elements x, and can be denoted by the number 1, by id, or by a single 1-cycle (x).
The set of all permutations of a set with n elements forms the symmetric group Sn, where the group operation is composition of functions. Thus for two permutations σ and π in the group Sn, their product σπ is defined by (σπ)(x) = σ(π(x)). Composition is usually written without a dot or other sign. In general, composition of two permutations is not commutative: σπ need not equal πσ.
As a bijection from a set to itself, a permutation is a function that performs a rearrangement of a set, termed an active permutation or substitution. An older viewpoint sees a permutation as an ordered arrangement or list of all the elements of S, called a passive permutation. According to this older definition, permutations given as such arrangements are passive. This meaning is subtly distinct from how passive (i.e. alias) is used in Active and passive transformation and elsewhere, which would consider all permutations open to passive interpretation (regardless of whether they are in one-line notation, two-line notation, etc.).
A permutation can be decomposed into one or more disjoint cycles, which are the orbits of the cyclic group generated by σ acting on the set S. A cycle is found by repeatedly applying the permutation to an element: (x, σ(x), σ^2(x), ..., σ^(k−1)(x)), where we assume σ^k(x) = x. A cycle consisting of k elements is called a k-cycle. (See below.)
A fixed point of a permutation σ is an element x which is taken to itself, that is σ(x) = x, forming a 1-cycle (x). A permutation with no fixed points is called a derangement. A permutation exchanging two elements (a single 2-cycle) and leaving the others fixed is called a transposition.
Notations
Several notations are widely used to represent permutations conveniently. Cycle notation is a popular choice, as it is compact and shows the permutation's structure clearly. This article will use cycle notation unless otherwise specified.
Two-line notation
Cauchy's two-line notation lists the elements of S in the first row, and the image of each element below it in the second row. For example, the permutation of S = {1, 2, 3, 4, 5, 6} given by the function can be written as
The elements of S may appear in any order in the first row, so this permutation could also be written:
One-line notation
If there is a "natural" order for the elements of S, say , then one uses this for the first row of the two-line notation:
Under this assumption, one may omit the first row and write the permutation in one-line notation as
,
that is, as an ordered arrangement of the elements of S. Care must be taken to distinguish one-line notation from the cycle notation described below: a common usage is to omit parentheses or other enclosing marks for one-line notation, while using parentheses for cycle notation. The one-line notation is also called the word representation.
The example above would then be written accordingly. (It is typical to use commas to separate these entries only if some have two or more digits.)
This compact form is common in elementary combinatorics and computer science. It is especially useful in applications where the permutations are to be compared as larger or smaller using lexicographic order.
Cycle notation
Cycle notation describes the effect of repeatedly applying the permutation on the elements of the set S, with an orbit being called a cycle. The permutation is written as a list of cycles; since distinct cycles involve disjoint sets of elements, this is referred to as "decomposition into disjoint cycles".
To write down the permutation in cycle notation, one proceeds as follows:
Write an opening bracket followed by an arbitrary element x of S:
Trace the orbit of x, writing down the values under successive applications of σ:
Repeat until the value returns to x, and close the parenthesis without repeating x:
Continue with an element y of S which was not yet written, and repeat the above process:
Repeat until all elements of S are written in cycles.
Also, it is common to omit 1-cycles, since these can be inferred: for any element x in S not appearing in any cycle, one implicitly assumes σ(x) = x.
Following the convention of omitting 1-cycles, one may interpret an individual cycle as a permutation which fixes all the elements not in the cycle (a cyclic permutation having only one cycle of length greater than 1). Then the list of disjoint cycles can be seen as the composition of these cyclic permutations. For example, the one-line permutation 2 4 1 3 5 can be written in cycle notation as (1 2 4 3)(5); this may be seen as the composition of the cyclic permutations (1 2 4 3) and (5). While permutations in general do not commute, disjoint cycles do; for example, (1 2 4 3)(5) = (5)(1 2 4 3). Also, each cycle can be rewritten from a different starting point; for example, (1 2 4 3) = (2 4 3 1). Thus one may write the disjoint cycles of a given permutation in many different ways.
A convenient feature of cycle notation is that inverting the permutation is given by reversing the order of the elements in each cycle. For example, the inverse of (1 2 4 3) is (3 4 2 1).
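The cycle decomposition and the reverse-each-cycle rule for inverses can be sketched in Python as follows (a permutation in one-line notation is given as the tuple of images of 1, 2, ..., n):

```python
# Decompose a one-line permutation into disjoint cycles, and invert it by reversing each cycle.
def cycles(one_line):
    sigma = {i + 1: v for i, v in enumerate(one_line)}
    seen, result = set(), []
    for start in sigma:
        if start not in seen:
            cycle, x = [], start
            while x not in seen:
                seen.add(x)
                cycle.append(x)
                x = sigma[x]
            result.append(tuple(cycle))
    return result

print(cycles((2, 4, 1, 3, 5)))                                   # [(1, 2, 4, 3), (5,)]
print([tuple(reversed(c)) for c in cycles((2, 4, 1, 3, 5))])     # cycles of the inverse
```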
Canonical cycle notation
In some combinatorial contexts it is useful to fix a certain order for the elements in the cycles and of the (disjoint) cycles themselves. Miklós Bóna calls the following ordering choices the canonical cycle notation:
in each cycle the largest element is listed first
the cycles are sorted in increasing order of their first element, not omitting 1-cycles
For example, (3 2 1)(5 4) is a permutation of {1, 2, 3, 4, 5} in canonical cycle notation.
Richard Stanley calls this the "standard representation" of a permutation, and Martin Aigner uses "standard form". Sergey Kitaev also uses the "standard form" terminology, but reverses both choices; that is, each cycle lists its minimal element first, and the cycles are sorted in decreasing order of their minimal elements.
Composition of permutations
There are two ways to denote the composition of two permutations. In the most common notation, σπ is the function that maps any element x to σ(π(x)). The rightmost permutation is applied to the argument first,
because the argument is written to the right of the function.
A different rule for multiplying permutations comes from writing the argument to the left of the function, so that the leftmost permutation acts first.
In this notation, the permutation is often written as an exponent, so σ acting on x is written xσ; then the product is defined by x^(σπ) = (x^σ)^π. This article uses the first definition, where the rightmost permutation is applied first.
The function composition operation satisfies the axioms of a group. It is associative, meaning (ρσ)π = ρ(σπ), and products of more than two permutations are usually written without parentheses. The composition operation also has an identity element (the identity permutation id), and each permutation σ has an inverse (its inverse function σ−1) with σσ−1 = σ−1σ = id.
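A short Python sketch of composition under the convention used here, with the rightmost permutation applied first, also shows that composition is not commutative:

```python
def compose(sigma, pi):
    """(sigma pi)(x) = sigma(pi(x)); permutations of {1,...,n} as tuples in one-line notation."""
    return tuple(sigma[pi[x - 1] - 1] for x in range(1, len(sigma) + 1))

sigma = (2, 1, 3)           # the transposition (1 2)
pi    = (1, 3, 2)           # the transposition (2 3)
print(compose(sigma, pi))   # (2, 3, 1)
print(compose(pi, sigma))   # (3, 1, 2)  -> different, so composition is not commutative
```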
Other uses of the term permutation
The concept of a permutation as an ordered arrangement admits several generalizations that have been called permutations, especially in older literature.
k-permutations of n
In older literature and elementary textbooks, a k-permutation of n (sometimes called a partial permutation, sequence without repetition, variation, or arrangement) means an ordered arrangement (list) of a k-element subset of an n-set. The number of such k-permutations (k-arrangements) of n is denoted variously by such symbols as P(n, k), nPk, or (n)k, computed by the formula:
P(n, k) = n × (n − 1) × (n − 2) × ... × (n − k + 1),
which is 0 when k > n, and otherwise is equal to
n! / (n − k)!.
The product is well defined without the assumption that n is a non-negative integer, and is of importance outside combinatorics as well; it is known as the Pochhammer symbol, or as the k-th falling factorial power of n. This usage of the term permutation is closely associated with the term combination to mean a subset. A k-combination of a set S is a k-element subset of S: the elements of a combination are not ordered. Ordering the k-combinations of S in all possible ways produces the k-permutations of S. The number of k-combinations of an n-set, C(n,k), is therefore related to the number of k-permutations of n by:
C(n, k) = P(n, k) / k! = n! / ((n − k)! k!).
These numbers are also known as binomial coefficients and are usually denoted C(n, k) or "n choose k".
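Python's standard library exposes both counts directly, which makes the relation easy to check numerically:

```python
import math

n, k = 10, 3
p_nk = math.perm(n, k)                                        # 720 ordered selections
c_nk = math.comb(n, k)                                        # 120 unordered selections
assert p_nk == math.factorial(n) // math.factorial(n - k)     # P(n, k) = n!/(n-k)!
assert c_nk == p_nk // math.factorial(k)                      # C(n, k) = P(n, k)/k!
print(p_nk, c_nk)
```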
Permutations with repetition
Ordered arrangements of k elements of a set S, where repetition is allowed, are called k-tuples. They have sometimes been referred to as permutations with repetition, although they are not permutations in the usual sense. They are also called words or strings over the alphabet S. If the set S has n elements, the number of k-tuples over S is n^k.
Permutations of multisets
If M is a finite multiset, then a multiset permutation is an ordered arrangement of elements of M in which each element appears a number of times equal exactly to its multiplicity in M. An anagram of a word having some repeated letters is an example of a multiset permutation. If the multiplicities of the elements of M (taken in some order) are m1, m2, ..., ml and their sum (that is, the size of M) is n, then the number of multiset permutations of M is given by the multinomial coefficient,
n! / (m1! m2! ... ml!).
For example, the number of distinct anagrams of the word MISSISSIPPI is:
11! / (1! 4! 4! 2!) = 34650.
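The multinomial count for MISSISSIPPI can be verified with a few lines of Python:

```python
import math
from collections import Counter

word = "MISSISSIPPI"
count = math.factorial(len(word))
for multiplicity in Counter(word).values():     # M: 1, I: 4, S: 4, P: 2
    count //= math.factorial(multiplicity)
print(count)                                    # 34650
```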
A k-permutation of a multiset M is a sequence of k elements of M in which each element appears a number of times less than or equal to its multiplicity in M (an element's repetition number).
Circular permutations
Permutations, when considered as arrangements, are sometimes referred to as linearly ordered arrangements. If, however, the objects are arranged in a circular manner this distinguished ordering is weakened: there is no "first element" in the arrangement, as any element can be considered as the start. An arrangement of distinct objects in a circular manner is called a circular permutation. These can be formally defined as equivalence classes of ordinary permutations of these objects, for the equivalence relation generated by moving the final element of the linear arrangement to its front.
Two circular permutations are equivalent if one can be rotated into the other. The following four circular permutations on four letters are considered to be the same.
[Diagram: four circular arrangements of the numbers 1, 2, 3, 4, each a rotation of the others.]
The circular arrangements are to be read counter-clockwise, so the following two are not equivalent since no rotation can bring one to the other.
[Diagram: two circular arrangements of the numbers 1, 2, 3, 4 that cannot be rotated into one another.]
There are (n – 1)! circular permutations of a set with n elements.
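The (n − 1)! count can be checked by grouping linear orderings that are rotations of one another, as in the sketch below:

```python
import math
from itertools import permutations

def circular_count(n):
    seen, classes = set(), 0
    for p in permutations(range(n)):
        if p not in seen:
            classes += 1
            for k in range(n):                      # all rotations belong to the same class
                seen.add(p[k:] + p[:k])
    return classes

print(circular_count(4), math.factorial(3))         # 6 6
```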
Properties
The number of permutations of n distinct objects is n!.
The number of n-permutations with k disjoint cycles is the signless Stirling number of the first kind, denoted c(n, k).
Cycle type
The cycles (including the fixed points) of a permutation σ of a set with n elements partition that set; so the lengths of these cycles form an integer partition of n, which is called the cycle type (or sometimes cycle structure or cycle shape) of σ. There is a "1" in the cycle type for every fixed point of σ, a "2" for every transposition, and so on. For example, a permutation of 8 elements with one fixed point, one transposition and one 5-cycle has cycle type 1 + 2 + 5.
This may also be written in a more compact form as 1^1 2^1 5^1.
More precisely, the general form is 1^a1 2^a2 ... n^an, where a1, ..., an are the numbers of cycles of respective lengths 1, ..., n. The number of permutations of a given cycle type is
n! / (1^a1 a1! 2^a2 a2! ... n^an an!).
The number of cycle types of a set with elements equals the value of the partition function .
Polya's cycle index polynomial is a generating function which counts permutations by their cycle type.
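The counting formula for a given cycle type can be checked against brute-force enumeration; the sketch below does this for permutations of 5 elements with cycle type 1 + 2 + 2.

```python
import math
from collections import Counter
from itertools import permutations

def cycle_type(one_line):
    sigma = {i + 1: v for i, v in enumerate(one_line)}
    seen, lengths = set(), []
    for start in sigma:
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = sigma[x]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

n, target = 5, (1, 2, 2)                      # one fixed point and two transpositions
brute = sum(1 for p in permutations(range(1, n + 1)) if cycle_type(p) == target)

counts = Counter(target)                      # a_k = number of cycles of length k
formula = math.factorial(n)
for k, a_k in counts.items():
    formula //= k ** a_k * math.factorial(a_k)

print(brute, formula)                         # 15 15
```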
Conjugating permutations
In general, composing permutations written in cycle notation follows no easily described pattern – the cycles of the composition can be different from those being composed. However the cycle type is preserved in the special case of conjugating a permutation σ by another permutation π, which means forming the product πσπ−1. Here, πσπ−1 is the conjugate of σ by π, and its cycle notation can be obtained by taking the cycle notation for σ and applying π to all the entries in it. It follows that two permutations are conjugate exactly when they have the same cycle type.
Order of a permutation
The order of a permutation σ is the smallest positive integer m so that σ^m is the identity. It is the least common multiple of the lengths of its cycles. For example, the order of a permutation with cycles of lengths 3 and 2, such as (1 4 3)(2 5), is lcm(3, 2) = 6.
Parity of a permutation
Every permutation of a finite set can be expressed as the product of transpositions.
Although many such expressions for a given permutation may exist, either they all contain an even number of transpositions or they all contain an odd number of transpositions. Thus all permutations can be classified as even or odd depending on this number.
This result can be extended so as to assign a sign, written sgn(σ), to each permutation: sgn(σ) = +1 if σ is even and sgn(σ) = −1 if σ is odd. Then for two permutations σ and π,
sgn(σπ) = sgn(σ) sgn(π).
It follows that sgn(σσ−1) = +1.
The sign of a permutation is equal to the determinant of its permutation matrix (below).
Matrix representation
A permutation matrix is an n × n matrix that has exactly one entry 1 in each column and in each row, and all other entries are 0. There are several ways to assign a permutation matrix to a permutation σ of {1, 2, ..., n}. One natural approach is to define Lσ to be the linear transformation of R^n which permutes the standard basis vectors e1, ..., en by Lσ(ej) = eσ(j), and to define Mσ to be its matrix. That is, Mσ has its jth column equal to the n × 1 column vector eσ(j): its (i, j) entry is equal to 1 if i = σ(j), and 0 otherwise. Since composition of linear mappings is described by matrix multiplication, it follows that this construction is compatible with composition of permutations: Mσπ = Mσ Mπ.
It is also common in the literature to find the inverse convention, where a permutation σ is associated to the matrix whose (i, j) entry is 1 if j = σ(i) and is 0 otherwise. In this convention, permutation matrices multiply in the opposite order from permutations, that is, Mσπ = Mπ Mσ. In this correspondence, permutation matrices act on the right side of the standard 1 × n row vectors ei: ei Mσ = eσ(i).
A Cayley table can be used to display these matrices for the permutations of 3 elements.
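The column convention described above is easy to verify numerically; the sketch below builds permutation matrices and checks that they multiply compatibly with composition.

```python
import numpy as np

def perm_matrix(one_line):
    n = len(one_line)
    m = np.zeros((n, n), dtype=int)
    for j, image in enumerate(one_line):     # column j carries a 1 in row sigma(j)
        m[image - 1, j] = 1
    return m

sigma, pi = (2, 3, 1), (1, 3, 2)
sigma_pi = tuple(sigma[pi[x] - 1] for x in range(len(sigma)))    # rightmost applied first
assert (perm_matrix(sigma_pi) == perm_matrix(sigma) @ perm_matrix(pi)).all()
print(perm_matrix(sigma))
```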
Permutations of totally ordered sets
In some applications, the elements of the set being permuted will be compared with each other. This requires that the set S has a total order so that any two elements can be compared. The set {1, 2, ..., n} with the usual ≤ relation is the most frequently used set in these applications.
A number of properties of a permutation are directly related to the total ordering of S, considering the permutation written in one-line notation as a sequence .
Ascents, descents, runs, exceedances, records
An ascent of a permutation σ of n is any position i < n where the following value is bigger than the current one. That is, i is an ascent if σ(i) < σ(i + 1). For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, and 6.
Similarly, a descent is a position i < n with σ(i) > σ(i + 1), so every i with 1 ≤ i < n is either an ascent or a descent.
An ascending run of a permutation is a nonempty increasing contiguous subsequence that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast an increasing subsequence of a permutation is not necessarily contiguous: it is an increasing sequence obtained by omitting some of the values of the one-line notation.
For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367.
If a permutation has k − 1 descents, then it must be the union of k ascending runs.
The number of permutations of n with k ascents is (by definition) the Eulerian number A(n, k); this is also the number of permutations of n with k descents. Some authors however define the Eulerian number as the number of permutations with k ascending runs, which corresponds to k − 1 descents.
An exceedance of a permutation σ1σ2...σn is an index j such that σj > j. If the inequality is not strict (that is, σj ≥ j), then j is called a weak exceedance. The number of n-permutations with k exceedances coincides with the number of n-permutations with k descents.
A record or left-to-right maximum of a permutation σ is an element i such that σ(j) < σ(i) for all j < i.
Foata's transition lemma
Foata's fundamental bijection transforms a permutation σ with a given canonical cycle form into the permutation f(σ) whose one-line notation has the same sequence of elements with parentheses removed. Here the first element in each canonical cycle of σ becomes a record (left-to-right maximum) of f(σ). Given f(σ), one may find its records and insert parentheses to construct the inverse transformation; underlining the records allows the reconstruction of the cycles of σ.
The following table shows σ and f(σ) for the six permutations of S = {1, 2, 3}, with the bold text on each side showing the notation used in the bijection: one-line notation for f(σ) and canonical cycle notation for σ.
As a first corollary, the number of n-permutations with exactly k records is equal to the number of n-permutations with exactly k cycles: this last number is the signless Stirling number of the first kind, c(n, k). Furthermore, Foata's mapping takes an n-permutation with k weak exceedances to an n-permutation with k − 1 ascents. For example, (2)(31) = 321 has k = 2 weak exceedances (at index 1 and 2), whereas f(321) = 231 has 1 ascent (at index 1; that is, from 2 to 3).
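A possible Python rendering of the bijection and its inverse is sketched below (not part of the original text); it assumes one-line notation as a list of the values 1..n and the convention that each cycle is written with its largest element first, with cycles ordered by increasing first element, consistent with the records described above. The function names are illustrative.

def foata(sigma):
    # sigma: one-line notation, sigma[i] is the image of i + 1 (values 1..n)
    n = len(sigma)
    seen, cycles = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = sigma[x - 1]
        k = cycle.index(max(cycle))          # rotate so the largest element comes first
        cycles.append(cycle[k:] + cycle[:k])
    cycles.sort(key=lambda c: c[0])          # order cycles by their leading (largest) element
    return [v for c in cycles for v in c]    # drop the parentheses

def foata_inverse(word):
    # cut the word just before each left-to-right maximum to recover the cycles
    cycles, current, best = [], [], 0
    for v in word:
        if v > best and current:
            cycles.append(current)
            current = []
        best = max(best, v)
        current.append(v)
    cycles.append(current)
    sigma = [0] * len(word)                  # rebuild sigma from its cycles
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            sigma[a - 1] = b
    return sigma

print(foata([3, 2, 1]))          # [2, 3, 1], i.e. (2)(31) maps to 231
print(foata_inverse([2, 3, 1]))  # [3, 2, 1]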
Inversions
An inversion of a permutation σ is a pair (i, j) of positions where the entries of a permutation are in the opposite order: i < j and σ(i) > σ(j). Thus a descent is an inversion at two adjacent positions. For example, σ = 23154 has inversions (i, j) = (1, 3), (2, 3), and (4, 5), where (σ(i), σ(j)) = (2, 1), (3, 1), and (5, 4).
Sometimes an inversion is defined as the pair of values (σ(i), σ(j)); this makes no difference for the number of inversions, and the reverse pair (σ(j), σ(i)) is an inversion in the above sense for the inverse permutation σ−1.
The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same for σ and for σ−1. To bring a permutation with k inversions into order (that is, transform it into the identity permutation), by successively applying (right-multiplication by) adjacent transpositions, is always possible and requires a sequence of k such operations. Moreover, any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition of i and i + 1 where i is a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; as long as this number is not zero, the permutation is not the identity, so it has at least one descent. Bubble sort and insertion sort can be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutation σ can be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transforms σ into the identity. In fact, by enumerating all sequences of adjacent transpositions that would transform σ into the identity, one obtains (after reversal) a complete list of all expressions of minimal length writing σ as a product of adjacent transpositions.
The number of permutations of n with k inversions is expressed by a Mahonian number. This is the coefficient of q^k in the expansion of the product (1 + q)(1 + q + q^2) ⋯ (1 + q + q^2 + ⋯ + q^(n−1)).
The notation [n]_q! denotes this product, the q-factorial. This expansion commonly appears in the study of necklaces.
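For illustration (not part of the original text), a small Python sketch expands this product to obtain the Mahonian numbers and counts the inversions of one permutation by brute force; the function names are illustrative.

def mahonian(n):
    # coefficients of (1 + q)(1 + q + q^2)...(1 + q + ... + q^(n-1));
    # entry k is the number of permutations of n with exactly k inversions
    coeffs = [1]
    for i in range(1, n + 1):
        new = [0] * (len(coeffs) + i - 1)
        for k, c in enumerate(coeffs):
            for j in range(i):              # multiply by 1 + q + ... + q^(i-1)
                new[k + j] += c
        coeffs = new
    return coeffs

def inversions(sigma):
    # brute-force count of pairs i < j with sigma[i] > sigma[j]
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n) if sigma[i] > sigma[j])

print(mahonian(4))                 # [1, 3, 5, 6, 5, 3, 1], summing to 4! = 24
print(inversions([2, 3, 1, 5, 4])) # 3, matching the example above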
Let such that and .
In this case, say the weight of the inversion is .
Kobayashi (2011) proved the enumeration formula
where denotes Bruhat order in the symmetric groups. This graded partial order often appears in the context of Coxeter groups.
Permutations in computing
Numbering permutations
One way to represent permutations of n things is by an integer N with 0 ≤ N < n!, provided convenient methods are given to convert between the number and the representation of a permutation as an ordered arrangement (sequence). This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive when n is small enough that N can be held in a machine word; for 32-bit words this means n ≤ 12, and for 64-bit words this means n ≤ 20. The conversion can be done via the intermediate form of a sequence of numbers dn, dn−1, ..., d2, d1, where di is a non-negative integer less than i (one may omit d1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is to simply express N in the factorial number system, which is just a particular mixed radix representation, where, for numbers less than n!, the bases (place values or multiplication factors) for successive digits are , , ..., 2!, 1!. The second step interprets this sequence as a Lehmer code or (almost equivalently) as an inversion table.
In the Lehmer code for a permutation σ, the number dn represents the choice made for the first term σ1, the number dn−1 represents the choice made for the second term
σ2 among the remaining elements of the set, and so forth. More precisely, each dn+1−i gives the number of remaining elements strictly less than the term σi. Since those remaining elements are bound to turn up as some later term σj, the digit dn+1−i counts the inversions (i,j) involving i as smaller index (the number of values j for which i < j and σi > σj). The inversion table for σ is quite similar, but here dn+1−k counts the number of inversions (i,j) where k = σj occurs as the smaller of the two values appearing in inverted order.
Both encodings can be visualized by an n by n Rothe diagram (named after Heinrich August Rothe) in which dots at (i,σi) mark the entries of the permutation, and a cross at (i,σj) marks the inversion (i,j); by the definition of inversions a cross appears in any square that comes both before the dot (j,σj) in its column, and before the dot (i,σi) in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa.
To effectively convert a Lehmer code dn, dn−1, ..., d2, d1 into a permutation of an ordered set S, one can start with a list of the elements of S in increasing order, and for i increasing from 1 to n set σi to the element in the list that is preceded by dn+1−i other ones, and remove that element from the list. To convert an inversion table dn, dn−1, ..., d2, d1 into the corresponding permutation, one can traverse the numbers from d1 to dn while inserting the elements of S from largest to smallest into an initially empty sequence; at the step using the number d from the inversion table, the element from S is inserted into the sequence at the point where it is preceded by d elements already present. Alternatively one could process the numbers from the inversion table and the elements of S both in the opposite order, starting with a row of n empty slots, and at each step place the element from S into the empty slot that is preceded by d other empty slots.
Converting successive natural numbers to the factorial number system produces those sequences in lexicographic order (as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by the place of their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives the signature of the permutation. Moreover, the positions of the zeroes in the inversion table give the values of left-to-right maxima of the permutation (in the example 6, 8, 9) while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example, positions 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer code dn, dn−1, ..., d2, d1 has an ascent n − i if and only if di ≥ di+1.
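A minimal Python sketch of the integer-to-permutation conversion described above follows (not part of the original text); it assumes 0 ≤ N < n! and uses the Lehmer code interpretation, so successive N produce permutations in lexicographic order. The name nth_permutation is illustrative.

def nth_permutation(N, items):
    # write N in the factorial number system (digits d1, d2, ..., dn with 0 <= di < i),
    # then read the reversed digit sequence as a Lehmer code
    items = sorted(items)
    digits = []
    for base in range(1, len(items) + 1):
        N, d = divmod(N, base)
        digits.append(d)
    digits.reverse()                    # now dn, ..., d1, consumed from the left
    # pick, at each step, the remaining element preceded by d smaller remaining ones
    return [items.pop(d) for d in digits]

print(nth_permutation(0, [1, 2, 3]))   # [1, 2, 3], the lexicographically first permutation
print(nth_permutation(5, [1, 2, 3]))   # [3, 2, 1], the last of the 3! = 6 permutations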
Algorithms to generate permutations
In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence.
An obvious way to generate permutations of n is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to n!), and convert those into the corresponding permutations. However, the latter step, while straightforward, is hard to implement efficiently, because it requires n operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about n2/4 operations to perform the conversion. With n likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in O(n log n) time.
Random generation of permutations
For generating random permutations of a given sequence of n values, it makes no difference whether one applies a randomly selected permutation of n to the sequence, or chooses a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations of n that result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes unfeasible for large n due to the growth of the number n!, there is no reason to assume that n will be small for random generation.
The basic idea to generate a random permutation is to generate at random one of the n! sequences of integers d1,d2,...,dn satisfying 0 ≤ di < i (since d1 is always zero it may be omitted) and to convert it to a permutation through a bijective correspondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 by Ronald Fisher and Frank Yates.
While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after using di to select an element among i remaining elements of the sequence (for decreasing values of i), rather than removing the element and compacting the sequence by shifting down further elements one place, one swaps the element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequence of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediate induction. When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated.
The resulting algorithm for generating a random permutation of a[0], a[1], ..., a[n − 1] can be described as follows in pseudocode:
for i from n downto 2 do
di ← random element of { 0, ..., i − 1 }
swap a[di] and a[i − 1]
This can be combined with the initialization of the array a[i] = i as follows
for i from 0 to n−1 do
di+1 ← random element of { 0, ..., i }
a[i] ← a[di+1]
a[di+1] ← i
If di+1 = i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct value i.
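For reference (not part of the original text), the combined pseudocode above can be rendered in Python roughly as follows; the function name is illustrative and the result is a uniformly random permutation of 0, ..., n − 1.

import random

def random_permutation(n):
    # "inside-out" Fisher-Yates shuffle combined with the initialization a[i] = i
    a = [0] * n
    for i in range(n):
        d = random.randrange(i + 1)  # random element of {0, ..., i}
        a[i] = a[d]
        a[d] = i
    return a

print(random_permutation(10))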
However, Fisher-Yates is not the fastest algorithm for generating a permutation, because Fisher-Yates is essentially a sequential algorithm and "divide and conquer" procedures can achieve the same result in parallel.
Generation in lexicographic order
There are many ways to systematically generate all permutations of a given sequence.
One classic, simple, and flexible algorithm is based upon finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, for which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. It begins by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th century India, and has been rediscovered frequently.
The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.
Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
Find the largest index l greater than k such that a[k] < a[l].
Swap the value of a[k] with that of a[l].
Reverse the sequence from a[k + 1] up to and including the final element a[n].
For example, given the sequence [1, 2, 3, 4] (which is in increasing order), and given that the index is zero-based, the steps are as follows:
Index k = 2, because a[2] = 3 is the entry at the largest index k satisfying a[k] < a[k + 1] (here a[k + 1] = a[3] = 4).
Index l = 3, because 4 is the only value in the sequence that is greater than 3, so it is the value at the largest index l satisfying a[k] < a[l].
The values of a[2] and a[3] are swapped to form the new sequence [1, 2, 4, 3].
The sequence after index k, from a[3] to the final element, is reversed. Because only one value lies after this index (the 3), the sequence remains unchanged in this instance. Thus the lexicographic successor of the initial state is [1, 2, 4, 3].
Following this algorithm, the next lexicographic permutation will be [1, 3, 2, 4], and the 24th permutation will be [4, 3, 2, 1] at which point a[k] < a[k + 1] does not exist, indicating that this is the last permutation.
This method uses about 3 comparisons and 1.5 swaps per permutation, amortized over the whole sequence, not counting the initial sort.
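A possible in-place Python rendering of the four steps above is sketched here (not part of the original text); the function name is illustrative, and the non-strict comparisons make it generate each distinct multiset permutation once, as described.

def next_permutation(a):
    # advance the list a to its lexicographic successor in place;
    # return False if a was already the last permutation
    k = len(a) - 2
    while k >= 0 and a[k] >= a[k + 1]:
        k -= 1                          # step 1: largest k with a[k] < a[k + 1]
    if k < 0:
        return False
    l = len(a) - 1
    while a[l] <= a[k]:
        l -= 1                          # step 2: largest l > k with a[k] < a[l]
    a[k], a[l] = a[l], a[k]             # step 3: swap a[k] and a[l]
    a[k + 1:] = reversed(a[k + 1:])     # step 4: reverse the suffix
    return True

a = [1, 2, 3, 4]
while True:                             # prints all 24 permutations in lexicographic order
    print(a)
    if not next_permutation(a):
        break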
Generation with minimal changes
An alternative to the above algorithm, the Steinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation.
An alternative to Steinhaus–Johnson–Trotter is Heap's algorithm, said by Robert Sedgewick in 1977 to be the fastest algorithm of generating permutations in applications.
The following figure shows the output of all three aforementioned algorithms for generating all permutations of length , and of six additional algorithms described in the literature.
Lexicographic ordering;
Steinhaus–Johnson–Trotter algorithm;
Heap's algorithm;
Ehrlich's star-transposition algorithm: in each step, the first entry of the permutation is exchanged with a later entry;
Zaks' prefix reversal algorithm: in each step, a prefix of the current permutation is reversed to obtain the next permutation;
Sawada-Williams' algorithm: each permutation differs from the previous one either by a cyclic left-shift by one position, or an exchange of the first two entries;
Corbett's algorithm: each permutation differs from the previous one by a cyclic left-shift of some prefix by one position;
Single-track ordering: each column is a cyclic shift of the other columns;
Single-track Gray code: each column is a cyclic shift of the other columns, plus any two consecutive permutations differ only in one or two transpositions.
Nested-swaps generating algorithm, proceeding in steps connected to the chain of nested subgroups S1 ⊂ S2 ⊂ ⋯ ⊂ Sn. Each permutation is obtained from the previous one by multiplying a transposition on the left. The algorithm is connected to the factorial number system of the index.
Generation of permutations in nested swap steps
Explicit sequence of swaps (transpositions, 2-cycles ), is described here, each swap applied (on the left) to the previous chain providing a new permutation, such that all the permutations can be retrieved, each only once. This counting/generating procedure has an additional structure (call it nested), as it is given in steps: after completely retrieving , continue retrieving by cosets of in , by appropriately choosing the coset representatives to be described below. Since each is sequentially generated, there is a last element . So, after generating by swaps, the next permutation in has to be for some . Then all swaps that generated are repeated, generating the whole coset , reaching the last permutation in that coset ; the next swap has to move the permutation to representative of another coset .
Continuing the same way, one gets coset representatives for the cosets of in ; the ordered set () is called the set of coset beginnings. Two of these representatives are in the same coset if and only if , that is, . Concluding, permutations are all representatives of distinct cosets if and only if for any , (no repeat condition). In particular, for all generated permutations to be distinct it is not necessary for the values to be distinct. In the process, one gets that and this provides the recursion procedure.
EXAMPLES: obviously, for one has ; to build there are only two possibilities for the coset beginnings satisfying the no repeat condition; the choice leads to . To continue generating one needs appropriate coset beginnings (satisfying the no repeat condition): there is a convenient choice: , leading to . Then, to build a convenient choice for the coset beginnings (satisfying the no repeat condition) is , leading to .
From examples above one can inductively go to higher in a similar way, choosing coset beginnings of in , as follows: for even choosing all coset beginnings equal to 1 and for odd choosing coset beginnings equal to . With such choices the "last" permutation is for odd and for even (). Using these explicit formulae one can easily compute the permutation of a certain index in the counting/generation steps with minimum computation. For this, writing the index in factorial base is useful. For example, the permutation for index is: , yielding finally, .
Because multiplying by a swap permutation takes little computing time and every newly generated permutation requires only one such swap multiplication, this generation procedure is quite efficient. Moreover, as there is a simple formula, having the last permutation in each step can save even more time: one can go directly to a permutation with a certain index in fewer steps than expected, as it can be done in blocks of subgroups rather than swap by swap.
Applications
Permutations are used in the interleaver component of the error detection and correction algorithms, such as turbo codes, for example 3GPP Long Term Evolution mobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212).
Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based on the permutation polynomials. Also as a base for optimal hashing in Unique Permutation Hashing.
See also
Alternating permutation
Convolution
Cyclic order
Even and odd permutations
Josephus permutation
Levi-Civita symbol
List of permutation topics
Major index
Permutation category
Permutation group
Permutation pattern
Permutation representation (symmetric group)
Probability
Rencontres numbers
Sorting network
Substitution cipher
Superpattern
Superpermutation
Twelvefold way
Weak order of permutations
Notes
References
Bibliography
This book mentions the Lehmer code (without using that name) as a variant C1,...,Cn of inversion tables in exercise 5.1.1–7 (p. 19), together with two other variants.
Fascicle 2, first printing.
The publisher is given as "W.S." who may have been William Smith, possibly acting as agent for the Society of College Youths, to which society the "Dedicatory" is addressed. In quotations the original long "S" has been replaced by a modern short "s".
Further reading
. The link is to a freely available retyped (LaTeX'ed) and revised version of the text originally published by Springer-Verlag.
. Section 5.1: Combinatorial Properties of Permutations, pp. 11–72.
External links
Arab inventions | Permutation | [
"Mathematics"
] | 9,842 | [
"Functions and mappings",
"Factorial and binomial topics",
"Permutations",
"Mathematical objects",
"Combinatorics",
"Mathematical relations"
] |
44,031 | https://en.wikipedia.org/wiki/Perfect%20matching | In graph theory, a perfect matching in a graph is a matching that covers every vertex of the graph. More formally, given a graph G = (V, E), a perfect matching in G is a subset M of the edge set E, such that every vertex in the vertex set V is adjacent to exactly one edge in M.
A perfect matching is also called a 1-factor; see Graph factorization for an explanation of this term. In some literature, the term complete matching is used.
Every perfect matching is a maximum-cardinality matching, but the opposite is not true. For example, consider the following graphs:
In graph (b) there is a perfect matching (of size 3) since all 6 vertices are matched; in graphs (a) and (c) there is a maximum-cardinality matching (of size 2) which is not perfect, since some vertices are unmatched.
A perfect matching is also a minimum-size edge cover. If there is a perfect matching, then both the matching number and the edge cover number equal |V| / 2.
A perfect matching can only occur when the graph has an even number of vertices. A near-perfect matching is one in which exactly one vertex is unmatched. This can only occur when the graph has an odd number of vertices, and such a matching must be maximum. In the above figure, part (c) shows a near-perfect matching. If, for every vertex in a graph, there is a near-perfect matching that omits only that vertex, the graph is also called factor-critical.
Characterizations
Hall's marriage theorem provides a characterization of bipartite graphs which have a perfect matching.
The Tutte theorem provides a characterization for arbitrary graphs.
A perfect matching is a spanning 1-regular subgraph, a.k.a. a 1-factor. In general, a spanning k-regular subgraph is a k-factor.
A spectral characterization for a graph to have a perfect matching is given by Hassani Monfared and Mallik as follows: Let G be a graph on an even number n of vertices and λ1, λ2, ..., λn/2 be distinct nonzero purely imaginary numbers. Then G has a perfect matching if and only if there is a real skew-symmetric matrix A with graph G and eigenvalues ±λ1, ±λ2, ..., ±λn/2. Note that the (simple) graph of a real symmetric or skew-symmetric matrix A of order n has n vertices and edges given by the nonzero off-diagonal entries of A.
Computation
Deciding whether a graph admits a perfect matching can be done in polynomial time, using any algorithm for finding a maximum cardinality matching.
However, counting the number of perfect matchings, even in bipartite graphs, is #P-complete. This is because computing the permanent of an arbitrary 0–1 matrix (another #P-complete problem) is the same as computing the number of perfect matchings in the bipartite graph having the given matrix as its biadjacency matrix.
A remarkable theorem of Kasteleyn states that the number of perfect matchings in a planar graph can be computed exactly in polynomial time via the FKT algorithm.
The number of perfect matchings in a complete graph Kn (with n even) is given by the double factorial: (n − 1)!! = (n − 1) × (n − 3) × ⋯ × 3 × 1.
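As an illustration (not part of the original text), the following Python sketch checks this double-factorial count against brute-force enumeration for small complete graphs; the function names are illustrative.

from itertools import permutations

def double_factorial(m):
    # m!! = m * (m - 2) * (m - 4) * ... down to 1 or 2
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

def count_perfect_matchings_complete(n):
    # brute force: pair up vertices 0..n-1 in every order and
    # count each unordered set of pairs only once
    matchings = set()
    for p in permutations(range(n)):
        pairing = frozenset(frozenset(p[i:i + 2]) for i in range(0, n, 2))
        matchings.add(pairing)
    return len(matchings)

for n in (2, 4, 6):
    print(n, count_perfect_matchings_complete(n), double_factorial(n - 1))
# the two counts agree: 1, 3 and 15 perfect matchings respectively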
Connection to Graph Coloring
An edge-colored graph can induce a number of (not necessarily proper) vertex colorings equal to the number of perfect matchings, as every vertex is covered exactly once in each matching. This property has been investigated in quantum physics and computational complexity theory.
Perfect matching polytope
The perfect matching polytope of a graph is a polytope in R^|E| in which each corner is an incidence vector of a perfect matching.
See also
Envy-free matching
Maximum-cardinality matching
Perfect matching in high-degree hypergraphs
Hall-type theorems for hypergraphs
The unique perfect matching problem
References
Matching (graph theory) | Perfect matching | [
"Mathematics"
] | 770 | [
"Matching (graph theory)",
"Mathematical relations",
"Graph theory"
] |
44,041 | https://en.wikipedia.org/wiki/Solvation | Solvation describes the interaction of a solvent with dissolved molecules. Both ionized and uncharged molecules interact strongly with a solvent, and the strength and nature of this interaction influence many properties of the solute, including solubility, reactivity, and color, as well as influencing the properties of the solvent such as its viscosity and density. If the attractive forces between the solvent and solute particles are greater than the attractive forces holding the solute particles together, the solvent particles pull the solute particles apart and surround them. The surrounded solute particles then move away from the solid solute and out into the solution. Ions are surrounded by a concentric shell of solvent. Solvation is the process of reorganizing solvent and solute molecules into solvation complexes and involves bond formation, hydrogen bonding, and van der Waals forces. Solvation of a solute by water is called hydration.
Solubility of solid compounds depends on a competition between lattice energy and solvation, including entropy effects related to changes in the solvent structure.
Distinction from solubility
By an IUPAC definition, solvation is an interaction of a solute with the solvent, which leads to stabilization of the solute species in the solution. In the solvated state, an ion or molecule in a solution is surrounded or complexed by solvent molecules. Solvated species can often be described by coordination number, and the complex stability constants. The concept of the solvation interaction can also be applied to an insoluble material, for example, solvation of functional groups on a surface of ion-exchange resin.
Solvation is, in concept, distinct from solubility. Solvation or dissolution is a kinetic process and is quantified by its rate. Solubility quantifies the dynamic equilibrium state achieved when the rate of dissolution equals the rate of precipitation. The consideration of the units makes the distinction clearer. The typical unit for dissolution rate is mol/s. The units for solubility express a concentration: mass per volume (mg/mL), molarity (mol/L), etc.
Solvents and intermolecular interactions
Solvation involves different types of intermolecular interactions:
Hydrogen bonding
Ion–dipole interactions
The van der Waals forces, which consist of dipole–dipole, dipole–induced dipole, and induced dipole–induced dipole interactions.
Which of these forces are at play depends on the molecular structure and properties of the solvent and solute. The similarity or complementary character of these properties between solvent and solute determines how well a solute can be solvated by a particular solvent.
Solvent polarity is the most important factor in determining how well it solvates a particular solute. Polar solvents have molecular dipoles, meaning that part of the solvent molecule has more electron density than another part of the molecule. The part with more electron density will experience a partial negative charge while the part with less electron density will experience a partial positive charge. Polar solvent molecules can solvate polar solutes and ions because they can orient the appropriate partially charged portion of the molecule towards the solute through electrostatic attraction. This stabilizes the system and creates a solvation shell (or hydration shell in the case of water) around each particle of solute. The solvent molecules in the immediate vicinity of a solute particle often have a much different ordering than the rest of the solvent, and this area of differently ordered solvent molecules is called the cybotactic region. Water is the most common and well-studied polar solvent, but others exist, such as ethanol, methanol, acetone, acetonitrile, and dimethyl sulfoxide. Polar solvents are often found to have a high dielectric constant, although other solvent scales are also used to classify solvent polarity. Polar solvents can be used to dissolve inorganic or ionic compounds such as salts. The conductivity of a solution depends on the solvation of its ions. Nonpolar solvents cannot solvate ions, and ions will be found as ion pairs.
Hydrogen bonding among solvent and solute molecules depends on the ability of each to accept H-bonds, donate H-bonds, or both. Solvents that can donate H-bonds are referred to as protic, while solvents that do not contain a polarized bond to a hydrogen atom and cannot donate a hydrogen bond are called aprotic. H-bond donor ability is classified on a scale (α). Protic solvents can solvate solutes that can accept hydrogen bonds. Similarly, solvents that can accept a hydrogen bond can solvate H-bond-donating solutes. The hydrogen bond acceptor ability of a solvent is classified on a scale (β). Solvents such as water can both donate and accept hydrogen bonds, making them excellent at solvating solutes that can donate or accept (or both) H-bonds.
Some chemical compounds experience solvatochromism, which is a change in color due to solvent polarity. This phenomenon illustrates how different solvents interact differently with the same solute. Other solvent effects include conformational or isomeric preferences and changes in the acidity of a solute.
Solvation energy and thermodynamic considerations
The solvation process will be thermodynamically favored only if the overall Gibbs energy of the solution is decreased, compared to the Gibbs energy of the separated solvent and solid (or gas or liquid). This means that the change in enthalpy minus the change in entropy (multiplied by the absolute temperature) is a negative value, or that the Gibbs energy of the system decreases. A negative Gibbs energy indicates a spontaneous process but does not provide information about the rate of dissolution.
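Written as an equation in standard thermodynamic notation (the subscript "sol" is only a label for the solution process), this criterion for a thermodynamically favored solvation reads:

\Delta G_{\mathrm{sol}} = \Delta H_{\mathrm{sol}} - T\,\Delta S_{\mathrm{sol}} < 0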
Solvation involves multiple steps with different energy consequences. First, a cavity must form in the solvent to make space for a solute. This is both entropically and enthalpically unfavorable, as solvent ordering increases and solvent-solvent interactions decrease. Stronger interactions among solvent molecules leads to a greater enthalpic penalty for cavity formation. Next, a particle of solute must separate from the bulk. This is enthalpically unfavorable since solute-solute interactions decrease, but when the solute particle enters the cavity, the resulting solvent-solute interactions are enthalpically favorable. Finally, as solute mixes into solvent, there is an entropy gain.
The enthalpy of solution is the solution enthalpy minus the enthalpy of the separate systems, whereas the entropy of solution is the corresponding difference in entropy. The solvation energy (change in Gibbs free energy) is the change in enthalpy minus the product of temperature (in Kelvin) times the change in entropy. Gases have a negative entropy of solution, due to the decrease in gaseous volume as gas dissolves. Since their enthalpy of solution does not decrease too much with temperature, and their entropy of solution is negative and does not vary appreciably with temperature, most gases are less soluble at higher temperatures.
Enthalpy of solvation can help explain why solvation occurs with some ionic lattices but not with others. The difference in energy between that which is necessary to release an ion from its lattice and the energy given off when it combines with a solvent molecule is called the enthalpy change of solution. A negative value for the enthalpy change of solution corresponds to an ion that is likely to dissolve, whereas a high positive value means that solvation will not occur. It is possible that an ion will dissolve even if it has a positive enthalpy value. The extra energy required comes from the increase in entropy that results when the ion dissolves. The introduction of entropy makes it harder to determine by calculation alone whether a substance will dissolve or not. A quantitative measure for solvation power of solvents is given by donor numbers.
Although early thinking was that a higher ratio of a cation's ion charge to ionic radius, or the charge density, resulted in more solvation, this does not stand up to scrutiny for ions like iron(III) or lanthanides and actinides, which are readily hydrolyzed to form insoluble (hydrous) oxides. As these are solids, it is apparent that they are not solvated.
Strong solvent–solute interactions make the process of solvation more favorable. One way to compare how favorable the dissolution of a solute is in different solvents is to consider the free energy of transfer. The free energy of transfer quantifies the free energy difference between dilute solutions of a solute in two different solvents. This value essentially allows for comparison of solvation energies without including solute-solute interactions.
In general, thermodynamic analysis of solutions is done by modeling them as reactions. For example, when sodium chloride is added to water, the salt dissociates into the ions sodium(+aq) and chloride(-aq). The equilibrium constant for this dissociation can be predicted by the change in Gibbs energy of this reaction.
The Born equation is used to estimate Gibbs free energy of solvation of a gaseous ion.
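For reference (not part of the original text), one common textbook form of the Born equation for an ion of charge number z and effective radius r0 in a solvent of relative permittivity εr is given below; NA is the Avogadro constant, e the elementary charge and ε0 the vacuum permittivity.

\Delta G_{\mathrm{solv}} = -\frac{N_A z^2 e^2}{8\pi\varepsilon_0 r_0}\left(1 - \frac{1}{\varepsilon_r}\right)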
Recent simulation studies have shown that the variation in solvation energy between the ions and the surrounding water molecules underlies the mechanism of the Hofmeister series.
Macromolecules and assemblies
Solvation (specifically, hydration) is important for many biological structures and processes. For instance, solvation of ions and/or of charged macromolecules, like DNA and proteins, in aqueous solutions influences the formation of heterogeneous assemblies, which may be responsible for biological function. As another example, protein folding occurs spontaneously, in part because of a favorable change in the interactions between the protein and the surrounding water molecules. Folded proteins are stabilized by 5-10 kcal/mol relative to the unfolded state due to a combination of solvation and the stronger intramolecular interactions in the folded protein structure, including hydrogen bonding. Minimizing the number of hydrophobic side chains exposed to water by burying them in the center of a folded protein is a driving force related to solvation.
Solvation also affects host–guest complexation. Many host molecules have a hydrophobic pore that readily encapsulates a hydrophobic guest. These interactions can be used in applications such as drug delivery, such that a hydrophobic drug molecule can be delivered in a biological system without needing to covalently modify the drug in order to solubilize it. Binding constants for host–guest complexes depend on the polarity of the solvent.
Hydration affects electronic and vibrational properties of biomolecules.
Importance of solvation in computer simulations
Due to the importance of the effects of solvation on the structure of macromolecules, early computer simulations which attempted to model their behaviors without including the effects of solvent (in vacuo) could yield poor results when compared with experimental data obtained in solution. Small molecules may also adopt more compact conformations when simulated in vacuo; this is due to favorable van der Waals interactions and intramolecular electrostatic interactions which would be dampened in the presence of a solvent.
As computer power increased, it became possible to try and incorporate the effects of solvation within a simulation and the simplest way to do this is to surround the molecule being simulated with a "skin" of solvent molecules, akin to simulating the molecule within a drop of solvent if the skin is sufficiently deep.
See also
Born equation
Saturated solution
Solubility equilibrium
Solvent models
Supersaturation
Water model
References
Further reading
(part A), (part B), (Chemistry).
One example of a solvated MOF, where partial dissolution is described.
External links
Solutions
Chemical processes | Solvation | [
"Chemistry"
] | 2,426 | [
"Homogeneous chemical mixtures",
"Chemical processes",
"nan",
"Chemical process engineering",
"Solutions"
] |
44,044 | https://en.wikipedia.org/wiki/Oceanography | Oceanography (), also known as oceanology, sea science, ocean science, and marine science, is the scientific study of the ocean, including its physics, chemistry, biology, and geology.
It is an Earth science, which covers a wide range of topics, including ocean currents, waves, and geophysical fluid dynamics; fluxes of various chemical substances and physical properties within the ocean and across its boundaries; ecosystem dynamics; and plate tectonics and seabed geology.
Oceanographers draw upon a wide range of disciplines to deepen their understanding of the world’s oceans, incorporating insights from astronomy, biology, chemistry, geography, geology, hydrology, meteorology and physics.
History
Early history
Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle and Strabo in 384–322 BC. Early exploration of the oceans was primarily for cartography and mainly limited to its surfaces and of the animals that fishermen brought up in nets, though depth soundings by lead line were taken.
The Portuguese campaign of Atlantic navigation is the earliest example of a systematic scientific large project, sustained over many decades, studying the currents and winds of the Atlantic.
The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the shortest course between two points on the surface of a sphere represented onto a two-dimensional map. When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour:
"nam se fezeram indo a acertar: mas partiam os nossos mareantes muy ensinados e prouidos de estromentos e regras de astrologia e geometria que sam as cousas que os cosmographos ham dadar apercebidas (...) e leuaua cartas muy particularmente rumadas e na ja as de que os antigos vsauam" (were not done by chance: but our seafarers departed well taught and provided with instruments and rules of astrology (astronomy) and geometry which were matters the cosmographers would provide (...) and they took charts with exact routes and no longer those used by the ancient).
His credibility rests on being personally involved in the instruction of pilots and senior seafarers from 1527 onwards by Royal appointment, along with his recognized competence as mathematician and astronomer.
The main problem in navigating back from the south of the Canary Islands (or south of Boujdour) by sail alone, is due to the change in the regime of winds and currents: the North Atlantic gyre and the Equatorial counter current will push south along the northwest bulge of Africa, while the uncertain winds where the Northeast trades meet the Southeast trades (the doldrums) leave a sailing ship to the mercy of the currents. Together, prevalent current and wind make northwards progress very difficult or impossible. It was to overcome this problem and clear the passage to India around Africa as a viable maritime trade route, that a systematic plan of exploration was devised by the Portuguese. The return route from regions south of the Canaries became the 'volta do largo' or 'volta do mar'. The 'rediscovery' of the Azores islands in 1427 is merely a reflection of the heightened strategic importance of the islands, now sitting on the return route from the western coast of Africa (sequentially called 'volta de Guiné' and 'volta da Mina'); and the references to the Sargasso Sea (also called at the time 'Mar da Baga'), to the west of the Azores, in 1436, reveals the western extent of the return route. This is necessary, under sail, to make use of the southeasterly and northeasterly winds away from the western coast of Africa, up to the northern latitudes where the westerly winds will bring the seafarers towards the western coasts of Europe.
The secrecy involving the Portuguese navigations, with the death penalty for the leaking of maps and routes, concentrated all sensitive records in the Royal Archives, completely destroyed by the Lisbon earthquake of 1775. However, the systematic nature of the Portuguese campaign, mapping the currents and winds of the Atlantic, is demonstrated by the understanding of the seasonal variations, with expeditions setting sail at different times of the year taking different routes to take account of seasonal predominate winds. This happens from as early as late 15th century and early 16th: Bartolomeu Dias followed the African coast on his way south in August 1487, while Vasco da Gama would take an open sea route from the latitude of Sierra Leone, spending three months in the open sea of the South Atlantic to profit from the southwards deflection of the southwesterly on the Brazilian side (and the Brazilian current going southward - Gama departed in July 1497); and Pedro Álvares Cabral (departing March 1500) took an even larger arch to the west, from the latitude of Cape Verde, thus avoiding the summer monsoon (which would have blocked the route taken by Gama at the time he set sail). Furthermore, there were systematic expeditions pushing into the western Northern Atlantic (Teive, 1454; Vogado, 1462; Teles, 1474; Ulmo, 1486).
The documents relating to the supplying of ships, and the ordering of sun declination tables for the southern Atlantic for as early as 1493–1496, all suggest a well-planned and systematic activity happening during the decade long period between Bartolomeu Dias finding the southern tip of Africa, and Gama's departure; additionally, there are indications of further travels by Bartolomeu Dias in the area. The most significant consequence of this systematic knowledge was the negotiation of the Treaty of Tordesillas in 1494, moving the line of demarcation 270 leagues to the west (from 100 to 370 leagues west of the Azores), bringing what is now Brazil into the Portuguese area of domination. The knowledge gathered from open sea exploration allowed for the well-documented extended periods of sail without sight of land, not by accident but as pre-determined planned route; for example, 30 days for Bartolomeu Dias culminating on Mossel Bay, the three months Gama spent in the South Atlantic to use the Brazil current (southward), or the 29 days Cabral took from Cape Verde up to landing in Monte Pascoal, Brazil.
The Danish expedition to Arabia 1761–67 can be said to be the world's first oceanographic expedition, as the ship Grønland had on board a group of scientists, including naturalist Peter Forsskål, who was assigned an explicit task by the king, Frederik V, to study and describe the marine life in the open sea, including finding the cause of mareel, or milky seas. For this purpose, the expedition was equipped with nets and scrapers, specifically designed to collect samples from the open waters and the bottom at great depth.
Although Juan Ponce de León in 1513 first identified the Gulf Stream, and the current was well known to mariners, Benjamin Franklin made the first scientific study of it and gave it its name. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770.
Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville. James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic and Indian oceans. During a voyage around the Cape of Good Hope in 1777, he mapped "the banks and currents at the Lagullas". He was also the first to understand the nature of the intermittent current near the Isles of Scilly (now known as Rennell's Current). The tides and currents of the ocean are distinct. Tides are the rise and fall of sea levels created by the combination of the gravitational forces of the Moon along with the Sun (the Sun to a much lesser extent) and are also caused by the Earth and Moon orbiting each other. An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect, breaking waves, cabbeling, and temperature and salinity differences.
Sir James Clark Ross took the first modern sounding in the deep sea in 1840, and Charles Darwin published a paper on reefs and the formation of atolls as a result of the second voyage of HMS Beagle in 1831–1836. Robert FitzRoy published a four-volume report of Beagle's three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology.
The first superintendent of the United States Naval Observatory (1842–1861), Matthew Fontaine Maury devoted his time to the study of marine meteorology, navigation, and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide.
Modern oceanography
Knowledge of the oceans remained confined to the topmost few fathoms of the water and a small amount of the bottom, mainly in shallow areas. Almost nothing was known of the ocean depths. The British Royal Navy's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep, although little more was known. As exploration ignited both popular and scientific interest in the polar regions and Africa, so too did the mysteries of the unexplored oceans.
The seminal event in the founding of the modern science of oceanography was the 1872–1876 Challenger expedition. As the first true oceanographic cruise, this expedition laid the groundwork for an entire academic and research discipline. In response to a recommendation from the Royal Society, the British Government announced in 1871 an expedition to explore the world's oceans and conduct appropriate scientific investigation. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition. HMS Challenger, leased from the Royal Navy, was modified for scientific work and equipped with separate laboratories for natural history and chemistry. Under the scientific supervision of Thomson, Challenger travelled nearly 70,000 nautical miles (130,000 km) surveying and exploring. On her journey circumnavigating the globe, 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations were taken. Around 4,700 new species of marine life were discovered. The result was the Report Of The Scientific Results of the Exploring Voyage of H.M.S. Challenger during the years 1873–76. Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries". He went on to found the academic discipline of oceanography at the University of Edinburgh, which remained the centre for oceanographic research well into the 20th century. Murray was the first to study marine trenches and in particular the Mid-Atlantic Ridge, and map the sedimentary deposits in the oceans. He tried to map out the world's ocean currents based on salinity and temperature observations, and was the first to correctly understand the nature of coral reef development.
In the late 19th century, other Western nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, Albatross, was built in 1882. In 1893, Fridtjof Nansen allowed his ship, Fram, to be frozen in the Arctic ice. This enabled him to obtain oceanographic, meteorological and astronomical data at a stationary spot over an extended period.
In 1881 the geographer John Francon Williams published a seminal book, Geography of the Oceans. Between 1907 and 1911 Otto Krümmel published the Handbuch der Ozeanographie, which became influential in awakening public interest in oceanography. The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious research oceanographic and marine zoological project ever mounted until then, and led to the classic 1912 book The Depths of the Ocean.
The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge.
In 1934, Easter Ellen Cupp, the first woman to have earned a PhD (at Scripps) in the United States, completed a major work on diatoms that remained the standard taxonomy in the field until well after her death in 1999. In 1940, Cupp was let go from her position at Scripps. Sverdrup specifically commended Cupp as a conscientious and industrious worker and commented that his decision was no reflection on her ability as a scientist. Sverdrup used the instructor billet vacated by Cupp to employ Marston Sargent, a biologist studying marine algae, which was not a new research program at Scripps. Financial pressures did not prevent Sverdrup from retaining the services of two other young post-doctoral students, Walter Munk and Roger Revelle. Cupp's partner, Dorothy Rosenbury, found her a position teaching high school, where she remained for the rest of her career. (Russell, 2000)
Sverdrup, Johnson and Fleming published The Oceans in 1942, which was a major landmark. The Sea (in three volumes, covering physical oceanography, seawater and geology) edited by M.N. Hill was published in 1962, while Rhodes Fairbridge's Encyclopedia of Oceanography was published in 1966.
The Great Global Rift, running along the Mid Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953 and mapped by Heezen and Marie Tharp using bathymetric data; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess. The Ocean Drilling Program started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible DSV Alvin.
In the 1950s, Auguste Piccard invented the bathyscaphe and used it to investigate the ocean's depths. The United States nuclear submarine USS Nautilus made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a spar buoy, was first deployed.
In 1968, Tanya Atwater led the first all-woman oceanographic expedition. Until that time, gender policies restricted women oceanographers from participating in voyages to a significant extent.
From the 1970s, there has been much emphasis on the application of large scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction. Early techniques included analog computers (such as the Ishiguro Storm Surge Computer) generally now replaced by numerical methods (e.g. SLOSH.) An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events.
1990 saw the start of the World Ocean Circulation Experiment (WOCE) which continued until 2002. Geosat seafloor mapping data became available in 1995.
Study of the oceans is critical to understanding shifts in Earth's energy balance along with related global and regional changes in climate, the biosphere and biogeochemistry. The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation). Recent studies have advanced knowledge on ocean acidification, ocean heat content, ocean currents, sea level rise, the oceanic carbon cycle, the water cycle, Arctic sea ice decline, coral bleaching, marine heatwaves, extreme weather, coastal erosion and many other phenomena in regards to ongoing climate change and climate feedbacks.
In general, understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of Earth's resources. The Intergovernmental Oceanographic Commission reports that 1.7% of the total national research expenditure of its members is focused on ocean science.
Branches
The study of oceanography is divided into these five branches:
Biological oceanography
Biological oceanography investigates the ecology and biology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment.
Chemical oceanography
Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles. The following is a central topic investigated by chemical oceanography.
Ocean acidification
Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide () emissions into the atmosphere. Seawater is slightly alkaline and had a preindustrial pH of about 8.2. More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1) through ocean acidification. The pH is expected to reach 7.7 by the year 2100.
An important element for the skeletons of marine animals is calcium, but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth. Calcium carbonate becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods, coccolithophorids and foraminifera, all important in the food chain. In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, in turn adversely impacting other reef dwellers.
The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher ocean temperatures and lower oxygen levels will impact the seas.
Geological oceanography
Geological oceanography is the study of the geology of the ocean floor including plate tectonics and paleoceanography.
Physical oceanography
Physical oceanography studies the ocean's physical attributes including temperature-salinity structure, mixing, surface waves, internal waves, surface tides, internal tides, and currents. The following are central topics investigated by physical oceanography.
Seismic oceanography
Ocean currents
Since the early ocean expeditions in oceanography, a major interest was the study of ocean currents and temperature measurements. The tides, the Coriolis effect, changes in direction and strength of wind, salinity, and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) (thermo- referring to temperature and -haline referring to salt content) connects the ocean basins and is primarily dependent on the density of sea water. It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity.
Examples of sustained currents are the Gulf Stream and the Kuroshio Current which are wind-driven western boundary currents.
Ocean heat content
Oceanic heat content (OHC) refers to the extra heat stored in the ocean from changes in Earth's energy balance. The increase in ocean heat content plays an important role in sea level rise because of thermal expansion. Ocean warming accounts for 90% of the energy accumulation associated with global warming since 1971.
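As a rough, purely illustrative estimate of the thermosteric effect (the layer depth, warming, and expansion coefficient below are assumed round numbers, not measurements from this article): if a layer of thickness h warms uniformly by ΔT, the resulting sea-level rise is approximately

Δh ≈ α · h · ΔT,

where α is the thermal expansion coefficient of seawater, taken here as roughly 2 × 10⁻⁴ K⁻¹ (a typical near-surface value; it is smaller in cold deep water). For h = 700 m and ΔT = 0.1 K this gives Δh ≈ 2 × 10⁻⁴ × 700 m × 0.1 ≈ 0.014 m, i.e. about 1.4 cm.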
Paleoceanography
Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environment models and different proxies enable the scientific community to assess the role of the oceanic processes in the global climate by the reconstruction of past climate at various intervals. Paleoceanographic research is also intimately tied to palaeoclimatology.
Oceanographic institutions
The earliest international organizations of oceanography were founded at the turn of the 20th century, starting with the International Council for the Exploration of the Sea created in 1902, followed in 1919 by the Mediterranean Science Commission. Marine research institutes were already in existence, starting with the Stazione Zoologica Anton Dohrn in Naples, Italy (1872), the Biological Station of Roscoff, France (1876), the Arago Laboratory in Banyuls-sur-mer, France (1882), the Laboratory of the Marine Biological Association in Plymouth, UK (1884), the Norwegian Institute for Marine Research in Bergen, Norway (1900), and the Laboratory für internationale Meeresforschung, Kiel, Germany (1902). On the other side of the Atlantic, the Scripps Institution of Oceanography was founded in 1903, followed by the Woods Hole Oceanographic Institution in 1930, the Virginia Institute of Marine Science in 1938, the Lamont–Doherty Earth Observatory at Columbia University in 1949, and later the School of Oceanography at University of Washington. In Australia, the Australian Institute of Marine Science (AIMS), established in 1972, soon became a key player in marine tropical research.
In 1921 the International Hydrographic Bureau, called since 1970 the International Hydrographic Organization, was established to develop hydrographic and nautical charting standards.
Related disciplines
See also
List of seas
Ocean optics
Ocean color
Ocean chemistry
References
Sources and further reading
Boling Guo, Daiwen Huang. Infinite-Dimensional Dynamical Systems in Atmospheric and Oceanic Science, 2014, World Scientific Publishing, . Sample Chapter
Hamblin, Jacob Darwin (2005) Oceanographers and the Cold War: Disciples of Marine Science. University of Washington Press.
Lang, Michael A., Ian G. Macintyre, and Klaus Rützler, eds. Proceedings of the Smithsonian Marine Science Symposium. Smithsonian Contributions to the Marine Sciences, no. 38. Washington, D.C.: Smithsonian Institution Scholarly Press (2009)
Roorda, Eric Paul, ed. The Ocean Reader: History, Culture, Politics (Duke University Press, 2020) 523 pp. online review
Steele, J., K. Turekian and S. Thorpe. (2001). Encyclopedia of Ocean Sciences. San Diego: Academic Press. (6 vols.)
Sverdrup, Keith A., Duxbury, Alyn C., Duxbury, Alison B. (2006). Fundamentals of Oceanography, McGraw-Hill,
Russell, Joellen Louise. Easter Ellen Cupp, 2000, Regents of the University of California.
External links
NASA Jet Propulsion Laboratory – Physical Oceanography Distributed Active Archive Center (PO.DAAC). A data centre responsible for archiving and distributing data about the physical state of the ocean.
Scripps Institution of Oceanography. One of the world's oldest, largest, and most important centres for ocean and Earth science research, education, and public service.
Woods Hole Oceanographic Institution (WHOI). One of the world's largest private, non-profit ocean research, engineering and education organizations.
British Oceanographic Data Centre. A source of oceanographic data and information.
NOAA Ocean and Weather Data Navigator. Plot and download ocean data.
Freeview Video 'Voyage to the Bottom of the Deep Deep Sea' Oceanography Programme by the Vega Science Trust and the BBC/Open University.
Atlas of Spanish Oceanography by InvestigAdHoc.
Glossary of Physical Oceanography and Related Disciplines by Steven K. Baum, Department of Oceanography, Texas A&M University
Barcelona-Ocean.com . Inspiring Education in Marine Sciences
CFOO: Sea Atlas. A source of oceanographic live data (buoy monitoring) and education for South African coasts.
Memorial website for USNS Bowditch, USNS Dutton, USNS Michelson and USNS H. H. Hess
Applied and interdisciplinary physics
Earth sciences
Hydrology
Physical geography
Articles containing video clips | Oceanography | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 5,004 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"Environmental engineering"
] |
44,055 | https://en.wikipedia.org/wiki/Measurable%20function | In mathematics, and in particular measure theory, a measurable function is a function between the underlying sets of two measurable spaces that preserves the structure of the spaces: the preimage of any measurable set is measurable. This is in direct analogy to the definition that a continuous function between topological spaces preserves the topological structure: the preimage of any open set is open. In real analysis, measurable functions are used in the definition of the Lebesgue integral. In probability theory, a measurable function on a probability space is known as a random variable.
Formal definition
Let (X, Σ) and (Y, T) be measurable spaces, meaning that X and Y are sets equipped with respective σ-algebras Σ and T. A function f : X → Y is said to be measurable if for every E ∈ T the pre-image of E under f is in Σ; that is, for all E ∈ T,

f⁻¹(E) := {x ∈ X : f(x) ∈ E} ∈ Σ.

That is, σ(f) ⊆ Σ, where σ(f) is the σ-algebra generated by f. If f is a measurable function, one writes

f : (X, Σ) → (Y, T)

to emphasize the dependency on the σ-algebras Σ and T.
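To illustrate the definition with the simplest possible case: a constant function f : X → Y, with f(x) = y₀ for all x ∈ X, is measurable for any choice of σ-algebras Σ and T, since for every E ∈ T

f⁻¹(E) = X if y₀ ∈ E, and f⁻¹(E) = ∅ otherwise,

and both ∅ and X belong to every σ-algebra.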
Term usage variations
The choice of σ-algebras in the definition above is sometimes implicit and left up to the context. For example, for the real numbers, the complex numbers, or other topological spaces, the Borel algebra (generated by all the open sets) is a common choice. Some authors define measurable functions as exclusively real-valued ones with respect to the Borel algebra.
If the values of the function lie in an infinite-dimensional vector space, other non-equivalent definitions of measurability, such as weak measurability and Bochner measurability, exist.
Notable classes of measurable functions
Random variables are by definition measurable functions defined on probability spaces.
If X and Y are Borel spaces, a measurable function f : X → Y is also called a Borel function. Continuous functions are Borel functions but not all Borel functions are continuous. However, a measurable function is nearly a continuous function; see Luzin's theorem. If a Borel function happens to be a section of a map, it is called a Borel section.
A Lebesgue measurable function is a measurable function f : (ℝ, L) → (ℂ, B), where L is the σ-algebra of Lebesgue measurable sets and B is the Borel algebra on the complex numbers. Lebesgue measurable functions are of interest in mathematical analysis because they can be integrated. In the case of a real-valued function f : X → ℝ, f is Lebesgue measurable if and only if {x : f(x) > α} is measurable for all α. This is also equivalent to any of {f ≥ α}, {f < α}, or {f ≤ α} being measurable for all α, or to the preimage of any open set being measurable. Continuous functions, monotone functions, step functions, semicontinuous functions, Riemann-integrable functions, and functions of bounded variation are all Lebesgue measurable. A complex-valued function is measurable if and only if its real and imaginary parts are measurable.
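As a concrete instance of the criterion above (a routine verification included here as an example): if f : ℝ → ℝ is monotone increasing, then for every α the set {x : f(x) > α} is either empty, all of ℝ, or a half-line of the form (a, ∞) or [a, ∞). Every such set is Borel and hence Lebesgue measurable, so f is Lebesgue measurable, which is one way to see why monotone functions appear in the list above.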
Properties of measurable functions
The sum and product of two complex-valued measurable functions are measurable. So is the quotient, so long as there is no division by zero.
If f : X → Y and g : Y → Z are measurable functions, then so is their composition g ∘ f : X → Z, since (g ∘ f)⁻¹(E) = f⁻¹(g⁻¹(E)) for every measurable E.
If f and g are merely Lebesgue-measurable functions, however, their composition g ∘ f need not be Lebesgue-measurable, although it is when the outer function g is Borel measurable. Indeed, two Lebesgue-measurable functions may be constructed in such a way as to make their composition non-Lebesgue-measurable.
The (pointwise) supremum, infimum, limit superior, and limit inferior of a sequence (viz., countably many) of real-valued measurable functions are all measurable as well.
The pointwise limit of a sequence of measurable functions f_n : X → Y is measurable, where Y is a metric space (endowed with the Borel algebra). This is not true in general if Y is non-metrizable. The corresponding statement for continuous functions requires stronger conditions than pointwise convergence, such as uniform convergence.
Non-measurable functions
Real-valued functions encountered in applications tend to be measurable; however, it is not difficult to prove the existence of non-measurable functions. Such proofs rely on the axiom of choice in an essential way, in the sense that Zermelo–Fraenkel set theory without the axiom of choice does not prove the existence of such functions.
In any measure space (X, Σ) with a non-measurable set A ⊂ X (that is, A ∉ Σ), one can construct a non-measurable indicator function:

1_A : (X, Σ) → ℝ, with 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 otherwise,

where ℝ is equipped with the usual Borel algebra. This is a non-measurable function since the preimage of the measurable set {1} is the non-measurable set A.
As another example, any non-constant function X → ℝ is non-measurable with respect to the trivial σ-algebra Σ = {∅, X}, since the preimage of any point in the range is some proper, nonempty subset of X, which is not an element of the trivial σ-algebra.
See also
Lp space – Vector spaces of measurable functions: the Lp spaces
Notes
External links
Measurable function at Encyclopedia of Mathematics
Borel function at Encyclopedia of Mathematics
Measure theory
Types of functions | Measurable function | [
"Mathematics"
] | 971 | [
"Mathematical objects",
"Functions and mappings",
"Types of functions",
"Mathematical relations"
] |
44,057 | https://en.wikipedia.org/wiki/Galactic%20astronomy | Galactic astronomy is the study of the Milky Way galaxy and all its contents. This is in contrast to extragalactic astronomy, which is the study of everything outside our galaxy, including all other galaxies.
Galactic astronomy should not be confused with galaxy formation and evolution, which is the general study of galaxies, their formation, structure, components, dynamics, interactions, and the range of forms they take.
The Milky Way galaxy, where the Solar System is located, is in many ways the best-studied galaxy, although important parts of it are obscured from view in visible wavelengths by regions of cosmic dust. The development of radio astronomy, infrared astronomy and submillimetre astronomy in the 20th century allowed the gas and dust of the Milky Way to be mapped for the first time.
Subcategories
A standard set of subcategories is used by astronomical journals to split up the subject of Galactic Astronomy:
abundances – the study of the location of elements heavier than helium
bulge – the study of the bulge around the center of the Milky Way
center – the study of the central region of the Milky Way
disk – the study of the Milky Way disk (the plane upon which most galactic objects are aligned)
evolution – the evolution of the Milky Way
formation – the formation of the Milky Way
fundamental parameters – the fundamental parameters of the Milky Way (mass, size etc.)
globular cluster – globular clusters within the Milky Way
halo – the large halo around the Milky Way
kinematics, and dynamics – the motions of stars and clusters
nucleus – the region around the black hole at the center of the Milky Way (Sagittarius A*)
open clusters and associations – open clusters and associations of stars
Solar neighborhood – nearby stars
stellar content – numbers and types of stars in the Milky Way
structure – the structure (spiral arms etc.)
Stellar populations
Star clusters
Globular clusters
Open clusters
Interstellar medium
Interplanetary space - Interplanetary medium - interplanetary dust
Interstellar space - Interstellar medium - interstellar dust
Intergalactic space - Intergalactic medium - Intergalactic dust
See also
Galaxy
Milky Way
Extragalactic astronomy
References
External links
Mapping the hydrogen gas in the Milky Way
Astronomical sub-disciplines | Galactic astronomy | [
"Astronomy"
] | 446 | [
"Galactic astronomy",
"Astronomical sub-disciplines"
] |
44,058 | https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis | In physical cosmology, Big Bang nucleosynthesis (also known as primordial nucleosynthesis, and abbreviated as BBN) is the production of nuclei other than those of the lightest isotope of hydrogen (hydrogen-1, 1H, having a single proton as a nucleus) during the early phases of the universe. This type of nucleosynthesis is thought by most cosmologists to have occurred from 10 seconds to 20 minutes after the Big Bang. It is thought to be responsible for the formation of most of the universe's helium (as isotope helium-4 (4He)), along with small fractions of the hydrogen isotope deuterium (2H or D), the helium isotope helium-3 (3He), and a very small fraction of the lithium isotope lithium-7 (7Li). In addition to these stable nuclei, two unstable or radioactive isotopes were produced: the heavy hydrogen isotope tritium (3H or T) and the beryllium isotope beryllium-7 (7Be). These unstable isotopes later decayed into 3He and 7Li, respectively, as above.
Elements heavier than lithium are thought to have been created later in the life of the Universe by stellar nucleosynthesis, through the formation, evolution and death of stars.
Characteristics
There are several important characteristics of Big Bang nucleosynthesis (BBN):
The initial conditions (neutron–proton ratio) were set in the first second after the Big Bang.
The universe was very close to homogeneous at this time, and strongly radiation-dominated.
The fusion of nuclei occurred between roughly 10 seconds to 20 minutes after the Big Bang; this corresponds to the temperature range when the universe was cool enough for deuterium to survive, but hot and dense enough for fusion reactions to occur at a significant rate.
It was widespread, encompassing the entire observable universe.
The key parameter which allows one to calculate the effects of Big Bang nucleosynthesis is the baryon/photon number ratio, which is a small number of order 6 × 10−10. This parameter corresponds to the baryon density and controls the rate at which nucleons collide and react; from this it is possible to calculate element abundances after nucleosynthesis ends. Although the baryon per photon ratio is important in determining element abundances, the precise value makes little difference to the overall picture. Without major changes to the Big Bang theory itself, BBN will result in mass abundances of about 75% of hydrogen-1, about 25% helium-4, about 0.01% of deuterium and helium-3, trace amounts (on the order of 10−10) of lithium, and negligible heavier elements. That the observed abundances in the universe are generally consistent with these abundance numbers is considered strong evidence for the Big Bang theory.
In this field, for historical reasons it is customary to quote the helium-4 fraction by mass, symbol Y, so that 25% helium-4 means that helium-4 atoms account for 25% of the mass, but less than 8% of the nuclei would be helium-4 nuclei. Other (trace) nuclei are usually expressed as number ratios to hydrogen. The first detailed calculations of the primordial isotopic abundances came in 1966 and have been refined over the years using updated estimates of the input nuclear reaction rates. The first systematic Monte Carlo study of how nuclear reaction rate uncertainties impact isotope predictions, over the relevant temperature range, was carried out in 1993.
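The relation between the two conventions is simple arithmetic (shown here purely as an illustration, using the quoted mass fractions). With a helium-4 mass fraction Y ≈ 0.25 and hydrogen making up essentially all of the remaining mass, the relative numbers of nuclei per unit mass are proportional to 0.25/4 for helium-4 and 0.75/1 for hydrogen, so the helium-4 number fraction is

0.0625 / (0.0625 + 0.75) ≈ 0.077,

that is, a little under 8% of nuclei, consistent with the figure given above.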
Important parameters
The creation of light elements during BBN was dependent on a number of parameters; among those was the neutron–proton ratio (calculable from Standard Model physics) and the baryon-photon ratio.
Neutron–proton ratio
The neutron–proton ratio was set by Standard Model physics before the nucleosynthesis era,
essentially within the first second after the Big Bang.
Neutrons can react with positrons or electron neutrinos to create protons and other products in one of the following reactions:
n + e+ ⇌ ν̄e + p
n + νe ⇌ p + e−
At times much earlier than 1 sec, these reactions were fast and maintained the n/p ratio close to 1:1. As the temperature dropped, the equilibrium shifted in favour of protons due to their slightly lower mass, and the n/p ratio smoothly decreased.
These reactions continued until the decreasing temperature and density caused the reactions to become too slow, which occurred at about T = 0.7 MeV (time around 1 second) and is called the freeze out temperature. At freeze out, the neutron–proton ratio was about 1/6. However, free neutrons are unstable with a mean life of 880 sec; some neutrons decayed in the next few minutes before fusing into any nucleus, so the ratio of total neutrons to protons after nucleosynthesis ends is about 1/7. Almost all neutrons that fused instead of decaying ended up combined into helium-4, due to the fact that helium-4 has the highest binding energy per nucleon among light elements. This predicts that about 8% of all atoms should be helium-4, leading to a mass fraction of helium-4 of about 25%, which is in line with observations. Small traces of deuterium and helium-3 remained as there was insufficient time and density for them to react and form helium-4.
Baryon–photon ratio
The baryon–photon ratio, η, is the key parameter determining the abundances of light elements after nucleosynthesis ends. Baryons and light elements can fuse in the following main reactions:
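The principal steps commonly listed for this network (written in the same isotope notation used elsewhere in this article; the list is representative rather than exhaustive) are:

p + n → 2H + γ
2H + p → 3He + γ
2H + 2H → 3He + n and 2H + 2H → 3H + p
3H + 2H → 4He + n
3He + 2H → 4He + p
3He + n → 3H + p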
along with some other low-probability reactions leading to 7Li or 7Be.
(An important feature is that there are no stable nuclei with mass 5 or 8, which implies that reactions adding one baryon to 4He, or fusing two 4He, do not occur).
Most fusion chains during BBN ultimately terminate in 4He (helium-4), while "incomplete" reaction chains lead to small amounts of left-over 2H or 3He; the amount of these decreases with increasing baryon-photon ratio. That is, the larger the baryon-photon ratio the more reactions there will be and the more efficiently deuterium will be eventually transformed into helium-4. This result makes deuterium a very useful tool in measuring the baryon-to-photon ratio.
Sequence
Big Bang nucleosynthesis began roughly 20 seconds after the Big Bang, when the universe had cooled sufficiently to allow deuterium nuclei to survive disruption by high-energy photons. (Note that the neutron–proton freeze-out time was earlier.) This time is essentially independent of dark matter content, since the universe was highly radiation dominated until much later, and this dominant component controls the temperature/time relation. At this time there were about six protons for every neutron, but a small fraction of the neutrons decay before fusing in the next few hundred seconds, so at the end of nucleosynthesis there are about seven protons to every neutron, and almost all the neutrons are in helium-4 nuclei.
One feature of BBN is that the physical laws and constants that govern the behavior of matter at these energies are very well understood, and hence BBN lacks some of the speculative uncertainties that characterize earlier periods in the life of the universe. Another feature is that the process of nucleosynthesis is determined by conditions at the start of this phase of the life of the universe, and proceeds independently of what happened before.
As the universe expands, it cools. Free neutrons are less stable than helium nuclei, and the protons and neutrons have a strong tendency to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Before nucleosynthesis began, the temperature was high enough for many photons to have energy greater than the binding energy of deuterium; therefore any deuterium that was formed was immediately destroyed (a situation known as the "deuterium bottleneck"). Hence, the formation of helium-4 was delayed until the universe became cool enough for deuterium to survive (at about T = 0.1 MeV); after which there was a sudden burst of element formation. However, very shortly thereafter, around twenty minutes after the Big Bang, the temperature and density became too low for any significant fusion to occur. At this point, the elemental abundances were nearly fixed, and the only changes were the result of the radioactive decay of the two major unstable products of BBN, tritium and beryllium-7.
History of theory
The history of Big Bang nucleosynthesis began with the calculations of Ralph Alpher in the 1940s. Alpher published the Alpher–Bethe–Gamow paper that outlined the theory of light-element production in the early universe.
Heavy elements
Big Bang nucleosynthesis produced very few nuclei of elements heavier than lithium due to a bottleneck: the absence of a stable nucleus with 8 or 5 nucleons. This deficit of larger atoms also limited the amounts of lithium-7 produced during BBN. In stars, the bottleneck is passed by triple collisions of helium-4 nuclei, producing carbon (the triple-alpha process). However, this process is very slow and requires much higher densities, taking tens of thousands of years to convert a significant amount of helium to carbon in stars, and therefore it made a negligible contribution in the minutes following the Big Bang.
The predicted abundance of CNO isotopes produced in Big Bang nucleosynthesis is expected to be on the order of 10−15 that of H, making them essentially undetectable and negligible. Indeed, none of these primordial isotopes of the elements from beryllium to oxygen have yet been detected, although those of beryllium and boron may be able to be detected in the future. So far, the only stable nuclides known experimentally to have been made during Big Bang nucleosynthesis are protium, deuterium, helium-3, helium-4, and lithium-7.
Helium-4
Big Bang nucleosynthesis predicts a primordial abundance of about 25% helium-4 by mass, irrespective of the initial conditions of the universe. As long as the universe was hot enough for protons and neutrons to transform into each other easily, their ratio, determined solely by their relative masses, was about 1 neutron to 7 protons (allowing for some decay of neutrons into protons). Once it was cool enough, the neutrons quickly bound with an equal number of protons to form first deuterium, then helium-4. Helium-4 is very stable and is nearly the end of this chain if it runs for only a short time, since helium neither decays nor combines easily to form heavier nuclei (since there are no stable nuclei with mass numbers of 5 or 8, helium does not combine easily with either protons or with itself). Once temperatures are lowered, out of every 16 nucleons (2 neutrons and 14 protons), 4 of these (25% of the total particles and total mass) combine quickly into one helium-4 nucleus. This produces one helium for every 12 hydrogens, resulting in a universe in which helium-4 makes up a little under 8% of all nuclei, and 25% of the mass.
One analogy is to think of helium-4 as ash, and the amount of ash that one forms when one completely burns a piece of wood is insensitive to how one burns it. The resort to the BBN theory of the helium-4 abundance is necessary as there is far more helium-4 in the universe than can be explained by stellar nucleosynthesis. In addition, it provides an important test for the Big Bang theory. If the observed helium abundance is significantly different from 25%, then this would pose a serious challenge to the theory. This would particularly be the case if the early helium-4 abundance was much smaller than 25% because it is hard to destroy helium-4. For a few years during the mid-1990s, observations suggested that this might be the case, causing astrophysicists to talk about a Big Bang nucleosynthetic crisis, but further observations were consistent with the Big Bang theory.
Deuterium
Deuterium is in some ways the opposite of helium-4, in that while helium-4 is very stable and difficult to destroy, deuterium is only marginally stable and easy to destroy. The temperatures, time, and densities were sufficient to combine a substantial fraction of the deuterium nuclei to form helium-4 but insufficient to carry the process further using helium-4 in the next fusion step. BBN did not convert all of the deuterium in the universe to helium-4 due to the expansion that cooled the universe and reduced the density, and so cut that conversion short before it could proceed any further. One consequence of this is that, unlike helium-4, the amount of deuterium is very sensitive to initial conditions. The denser the initial universe was, the more deuterium would be converted to helium-4 before time ran out, and the less deuterium would remain.
There are no known post-Big Bang processes which can produce significant amounts of deuterium. Hence observations about deuterium abundance suggest that the universe is not infinitely old, which is in accordance with the Big Bang theory.
During the 1970s, there were major efforts to find processes that could produce deuterium, but those revealed ways of producing isotopes other than deuterium. The problem was that while the concentration of deuterium in the universe is consistent with the Big Bang model as a whole, it is too high to be consistent with a model that presumes that most of the universe is composed of protons and neutrons. If one assumes that all of the universe consists of protons and neutrons, the density of the universe is such that much of the currently observed deuterium would have been burned into helium-4. The standard explanation now used for the abundance of deuterium is that the universe does not consist mostly of baryons, but that non-baryonic matter (also known as dark matter) makes up most of the mass of the universe. This explanation is also consistent with calculations that show that a universe made mostly of protons and neutrons would be far more clumpy than is observed.
It is very hard to come up with another process that would produce deuterium other than by nuclear fusion. Such a process would require that the temperature be hot enough to produce deuterium, but not hot enough to produce helium-4, and that this process should immediately cool to non-nuclear temperatures after no more than a few minutes. It would also be necessary for the deuterium to be swept away before it reoccurs.
Producing deuterium by fission is also difficult. The problem here again is that deuterium is very unlikely due to nuclear processes, and that collisions between atomic nuclei are likely to result either in the fusion of the nuclei, or in the release of free neutrons or alpha particles. During the 1970s, cosmic ray spallation was proposed as a source of deuterium. That theory failed to account for the abundance of deuterium, but led to explanations of the source of other light elements.
Lithium
Lithium-7 and lithium-6 produced in the Big Bang are on the order of: lithium-7 to be 10−9 of all primordial nuclides; and lithium-6 around 10−13.
Measurements and status of theory
The theory of BBN gives a detailed mathematical description of the production of the light "elements" deuterium, helium-3, helium-4, and lithium-7. Specifically, the theory yields precise quantitative predictions for the mixture of these elements, that is, the primordial abundances at the end of the big-bang.
In order to test these predictions, it is necessary to reconstruct the primordial abundances as faithfully as possible, for instance by observing astronomical objects in which very little stellar nucleosynthesis has taken place (such as certain dwarf galaxies) or by observing objects that are very far away, and thus can be seen in a very early stage of their evolution (such as distant quasars).
As noted above, in the standard picture of BBN, all of the light element abundances depend on the amount of ordinary matter (baryons) relative to radiation (photons). Since the universe is presumed to be homogeneous, it has one unique value of the baryon-to-photon ratio. For a long time, this meant that to test BBN theory against observations one had to ask: can all of the light element observations be explained with a single value of the baryon-to-photon ratio? Or more precisely, allowing for the finite precision of both the predictions and the observations, one asks: is there some range of baryon-to-photon values which can account for all of the observations?
More recently, the question has changed: Precision observations of the cosmic microwave background radiation with the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck give an independent value for the baryon-to-photon ratio. Using this value, are the BBN predictions for the abundances of light elements in agreement with the observations?
The present measurement of helium-4 indicates good agreement, and yet better agreement for helium-3. But for lithium-7, there is a significant discrepancy between BBN and WMAP/Planck, and the abundance derived from Population II stars. The discrepancy is a factor of 2.4–4.3 below the theoretically predicted value. This discrepancy, called the "cosmological lithium problem", is considered a problem for the original models; it has resulted in revised calculations of the standard BBN based on new nuclear data, and in various reevaluation proposals for primordial proton–proton nuclear reactions, especially the reactions that determine the beryllium-7 and lithium-7 abundances.
Non-standard scenarios
In addition to the standard BBN scenario there are numerous non-standard BBN scenarios. These should not be confused with non-standard cosmology: a non-standard BBN scenario assumes that the Big Bang occurred, but inserts additional physics in order to see how this affects elemental abundances. These pieces of additional physics include relaxing or removing the assumption of homogeneity, or inserting new particles such as massive neutrinos.
There have been, and continue to be, various reasons for researching non-standard BBN. The first, which is largely of historical interest, is to resolve inconsistencies between BBN predictions and observations. This has proved to be of limited usefulness in that the inconsistencies were resolved by better observations, and in most cases trying to change BBN resulted in abundances that were more inconsistent with observations rather than less. The second reason for researching non-standard BBN, and largely the focus of non-standard BBN in the early 21st century, is to use BBN to place limits on unknown or speculative physics. For example, standard BBN assumes that no exotic hypothetical particles were involved in BBN. One can insert a hypothetical particle (such as a massive neutrino) and see what has to happen before BBN predicts abundances that are very different from observations. This has been done to put limits on the mass of a stable tau neutrino.
See also
Big Bang
Chronology of the universe
Nucleosynthesis
Relic abundance
Stellar nucleosynthesis
Ultimate fate of the universe
References
External links
For a general audience
White, Martin: Overview of BBN
Wright, Ned: BBN (cosmology tutorial)
Big Bang nucleosynthesis on arxiv.org
Academic articles
Report-no: FERMILAB-Pub-00-239-A
Jedamzik, Karsten, "Non-Standard Big Bang Nucleosynthesis Scenarios". Max-Planck-Institut für Astrophysik, Garching.
Steigman, Gary, Primordial Nucleosynthesis: Successes And Challenges ; Forensic Cosmology: Probing Baryons and Neutrinos With BBN and the CBR ; and Big Bang Nucleosynthesis: Probing the First 20 Minutes
R. A. Alpher, H. A. Bethe, G. Gamow, The Origin of Chemical Elements , Physical Review 73 (1948), 803. The so-called αβγ paper, in which Alpher and Gamow suggested that the light elements were created by hydrogen ions capturing neutrons in the hot, dense early universe. Bethe's name was added for symmetry
These two 1948 papers of Gamow laid the foundation for our present understanding of big-bang nucleosynthesis
R. A. Alpher and R. Herman, "On the Relative Abundance of the Elements," Physical Review 74 (1948), 1577. This paper contains the first estimate of the present temperature of the universe
Java Big Bang element abundance calculator
C. Pitrou, A. Coc, J.-P. Uzan, E. Vangioni, Precision big bang nucleosynthesis with improved Helium-4 predictions ;
Nucleosynthesis
Physical cosmological concepts
Big Bang | Big Bang nucleosynthesis | [
"Physics",
"Chemistry",
"Astronomy"
] | 4,459 | [
"Physical cosmological concepts",
"Nuclear fission",
"Cosmogony",
"Concepts in astrophysics",
"Big Bang",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion"
] |
44,059 | https://en.wikipedia.org/wiki/Harrison%20Ford | Harrison Ford (born July 13, 1942) is an American actor. Regarded as a cinematic cultural icon, he has starred in several notable films over seven decades. His films have grossed more than $5.4billion in North America and more than $9.3billion worldwide. Ford is the recipient of various accolades, including the AFI Life Achievement Award, the Cecil B. DeMille Award, an Honorary César, and an Honorary Palme d'Or. He has also received nominations for an Academy Award, a BAFTA, two Screen Actors Guild awards, and five Golden Globe awards.
Ford made his screen acting debut in an uncredited appearance in the film Dead Heat on a Merry-Go-Round (1966) and went on to play bit parts for several years. After breakthrough supporting roles in American Graffiti (1973) and The Conversation (1974), he gained worldwide fame for his role as Han Solo in the space opera film Star Wars (1977), a part he reprised in four sequels over the next four decades. Ford is also known for his portrayal of the titular character in the Indiana Jones franchise, beginning with Raiders of the Lost Ark (1981). He also starred as Rick Deckard in the science fiction film Blade Runner (1982) and its sequel Blade Runner 2049 (2017), and portrayed Jack Ryan in the action thriller films Patriot Games (1992) and Clear and Present Danger (1994).
Ford received a nomination for the Academy Award for Best Actor for his role as a detective who envelops himself in the Amish community in the thriller Witness (1985). His other films include The Mosquito Coast (1986), Working Girl (1988), Presumed Innocent (1990), The Fugitive (1993), Sabrina (1995), The Devil's Own (1997), Air Force One (1997), Six Days, Seven Nights (1998), What Lies Beneath (2000), K-19: The Widowmaker (2002), Cowboys & Aliens (2011), 42 (2013), and The Age of Adaline (2015). Ford has since starred in the Paramount+ western series 1923 (2022–present) and the Apple TV+ comedy series Shrinking (2023–present).
Outside of acting, Ford is a licensed pilot; he has often assisted the emergency services in rescue missions near his home in Wyoming, and he chaired an aviation education program for youth from 2004 to 2009. Ford is also an environmental activist, having served as the inaugural vice chair of Conservation International since 1991.
Early life
Harrison Ford was born at the Swedish Covenant Hospital in Chicago, Illinois, on July 13, 1942, to former radio actress Dorothy (née Nidelman) and advertising executive and former actor John William "Christopher" Ford.
His younger brother, Terence, was born in 1945. Their father was a Catholic of Irish descent, while their mother was an Ashkenazi Jew whose parents were emigrants from Minsk, Belarus, then in the Russian Empire. When asked in which religion he and his brother were raised, Ford jokingly responded "Democrat" and more seriously stated that they were raised to be "liberals of every stripe". When asked about what influence his Jewish and Irish Catholic ancestry may have had on him, he quipped, "As a man I've always felt Irish, as an actor I've always felt Jewish."
Ford was a Boy Scout, achieving the second-highest rank of Life Scout. He worked at Napowan Adventure Base Scout Camp as a counselor for the Reptile Study merit badge. Because of this, he and director Steven Spielberg later decided to depict the young Indiana Jones as a Life Scout in Indiana Jones and the Last Crusade (1989). Ford graduated in 1960 from Maine East High School in Park Ridge, Illinois. His voice was the first student voice broadcast on his high school's new radio station, WMTH, and he was its first sportscaster during his senior year. He attended Ripon College in Ripon, Wisconsin, where he was a philosophy major and a member of the Sigma Nu fraternity. A self-described "late bloomer", Ford took a drama class in the final quarter of his senior year to get over his shyness and became fascinated with acting. Ford was expelled from college for plagiarism four days before graduation.
Career
1964–1976
In 1964, after a season of summer stock with the Belfry Players in Wisconsin, Ford traveled to Los Angeles and eventually signed a contract with Columbia Pictures' new talent program. His first known role was an uncredited one as a bellhop in Dead Heat on a Merry-Go-Round (1966). There is little record of his non-speaking (or "extra") roles in film. Ford was at the bottom of the hiring list, having offended producer Jerry Tokofsky. According to one anecdote, Tokofsky told Ford that when actor Tony Curtis delivered a bag of groceries, he could tell that Curtis was a movie star whereas Ford wasn't; Ford immediately retorted that if Curtis was truly a talented actor, he would've delivered them like a bellhop. Ford was apparently fired soon after.
His speaking roles continued next with Luv (1967), though he was still uncredited. He was finally credited as "Harrison J. Ford" in the 1967 Western film A Time for Killing, starring Glenn Ford, George Hamilton and Inger Stevens, but the "J" did not stand for anything since he has no middle name. It was added to avoid confusion with a silent film actor named Harrison Ford, who appeared in more than 80 films between 1915 and 1932 and died in 1957. Ford later said that he was unaware of the existence of the earlier actor until he came upon a star with his own name on the Hollywood Walk of Fame. Ford soon dropped the "J" and worked for Universal Studios, playing minor roles in many television series throughout the late 1960s and early 1970s, including Gunsmoke, Ironside, The Virginian, The F.B.I., Love, American Style and Kung Fu. He appeared in the western Journey to Shiloh (1968) and had an uncredited, non-speaking role in Michelangelo Antonioni's 1970 film Zabriskie Point as an arrested student protester. In 1968, he also worked as a camera operator for one of the Doors' tours. French filmmaker Jacques Demy chose Ford for the lead role of his first American film, Model Shop (1969), but the head of Columbia Pictures thought Ford had "no future" in the film business and told Demy to hire a more experienced actor. The part eventually went to Gary Lockwood. Ford later commented that the experience had been nevertheless a positive one because Demy was the first to show such faith in him.
Not happy with the roles offered to him, Ford became a self-taught professional carpenter to support his then-wife and two young sons. Clients at this time included the writers Joan Didion and John Gregory Dunne, who lived on the beach at Malibu. Ford appears in the documentary Joan Didion: The Center Will Not Hold. He and his wife became friends of the writers. Casting director and fledgling producer Fred Roos championed the young Ford and secured him an audition with George Lucas for the role of Bob Falfa, which Ford went on to play in American Graffiti (1973). Ford's relationship with Lucas profoundly affected his career later. After director Francis Ford Coppola's film The Godfather was a success, he hired Ford to expand his office and gave him small roles in his next two films, The Conversation (1974) and Apocalypse Now (1979); in the latter film, Ford played an army colonel named "G. Lucas".
1977–1997
Ford's work in American Graffiti eventually landed him his first starring film role, when Lucas hired him to read lines for actors auditioning for roles in Lucas's upcoming epic space-opera film Star Wars (1977). Lucas was eventually won over by Ford's performance during these line reads and cast him as Han Solo. Star Wars became one of the most successful and groundbreaking films of all time, and brought Ford, and his co-stars Mark Hamill and Carrie Fisher, widespread recognition. Ford began to be cast in bigger roles in films throughout the late 1970s, including Heroes (1977), Force 10 from Navarone (1978) and Hanover Street (1979). He also co-starred alongside Gene Wilder in the buddy-comedy western The Frisco Kid (1979), playing a bank robber with a heart of gold. Ford returned to star in the successful Star Wars sequels The Empire Strikes Back (1980) and Return of the Jedi (1983), as well as the Star Wars Holiday Special (1978). Ford wanted Lucas to kill off Han Solo at the end of Return of the Jedi, saying, "That would have given the whole film a bottom," but Lucas refused.
Ford's status as a leading actor was solidified with Raiders of the Lost Ark (1981), an action-adventure collaboration between Lucas and Steven Spielberg that gave Ford his second franchise role as the heroic, globe-trotting archaeologist Indiana Jones. Like Star Wars, the film was massively successful; it became the highest-grossing film of the year. Spielberg was interested in casting Ford from the beginning, but Lucas was not, having already worked with him in American Graffiti and Star Wars. Lucas relented after Tom Selleck was unable to accept. Ford went on to reprise the role throughout the rest of the decade in the prequel Indiana Jones and the Temple of Doom (1984), and the sequel Indiana Jones and the Last Crusade (1989). During the June 1983 filming of Temple of Doom in London, Ford herniated a disc in his back. The 40-year-old actor was forced to fly back to Los Angeles for surgery and returned six weeks later.
Following his leading-man success as Indiana Jones, Ford played Rick Deckard in Ridley Scott's dystopian science-fiction film Blade Runner (1982). Compared to his experiences on the Star Wars and Indiana Jones films, Ford had a difficult time with the production. He recalled to Vanity Fair, "It was a long slog. I didn't really find it that physically difficult—I thought it was mentally difficult." Ford and Scott also had differing views on the nature of his character, Deckard, that persist decades later. While not initially a success, Blade Runner became a cult classic and one of Ford's most highly regarded films. Ford proved his versatility throughout the 1980s with dramatic parts in films such as Witness (1985), The Mosquito Coast (1986), and Frantic (1988), as well as the romantic male lead opposite Melanie Griffith and Sigourney Weaver in the comedy-drama Working Girl (1988). Witness and The Mosquito Coast allowed Ford to explore his potential as a dramatic actor, and both performances were widely acclaimed. Ford later recalled that working with director Peter Weir on Witness and The Mosquito Coast were two of the best experiences of his career.
In late 1991, Ford was slated to portray company lawyer A. Philip Randolph in an action-historical film entitled Night Ride Down, which would have been set around a labor union strike in the 1930s. Paramount Pictures shelved the project, after Ford quit the film over script changes he disagreed with. In the years that followed, Ford became the second actor to portray Jack Ryan in two films of the film series based on the literary character created by Tom Clancy: Patriot Games (1992) and Clear and Present Danger (1994), both co-starring Anne Archer and James Earl Jones. Ford took over the role from Alec Baldwin, who had played Ryan in The Hunt for Red October (1990). This led to long-lasting resentment from Baldwin, who said that he had wanted to reprise the role but Ford had negotiated with Paramount behind his back. Ford played leading roles in other action-based thrillers throughout the decade, such as The Fugitive (1993), The Devil's Own (1997), and Air Force One (1997). For his performance in The Fugitive, which co-starred Tommy Lee Jones, Ford received some of the best reviews of his career, including from Roger Ebert, who concluded that, "Ford is once again the great modern movie everyman. As an actor, nothing he does seems merely for show, and in the face of this melodramatic material he deliberately plays down, lays low, gets on with business instead of trying to exploit the drama in meaningless acting flourishes."
Ford played more straight dramatic roles in Presumed Innocent (1990) and Regarding Henry (1991), and another romantic lead role in Sabrina (1995), a remake of the classic 1954 film of the same name. Ford established working relationships with many well-regarded directors during this time, including Weir, Alan J. Pakula, Mike Nichols, Phillip Noyce, and Sydney Pollack, collaborating twice with each of them. This was the most lucrative period of Ford's career. From 1977 to 1997, he appeared in 14 films that reached the top 15 in the yearly domestic box-office rankings, 12 of which reached the top ten. Six of the films he appeared in during this time were nominated for the Academy Award for Best Picture, among other awards: Star Wars, Apocalypse Now, Raiders of the Lost Ark, Witness, Working Girl, and The Fugitive.
1998–2014
In the late 1990s, Ford started appearing in several critically derided and commercially disappointing films that failed to match his earlier successes, including Six Days, Seven Nights (1998), Random Hearts (1999), K-19: The Widowmaker (2002), Hollywood Homicide (2003), Firewall (2006) and Extraordinary Measures (2010). One exception was What Lies Beneath (2000), which grossed over $155 million in the United States and $291 million worldwide. Ford served as an executive producer on K-19: The Widowmaker and Extraordinary Measures, both of which were based on true events.
In 2004, Ford declined a chance to star in the thriller Syriana, later commenting that "I didn't feel strongly enough about the truth of the material and I think I made a mistake." The role went to George Clooney, who won an Oscar and a Golden Globe for his work. Before that, Ford had passed on a role in another Stephen Gaghan-written film, that of Robert Wakefield in Traffic, which went to Michael Douglas.
In 2008, Ford enjoyed success with the release of Indiana Jones and the Kingdom of the Crystal Skull, the first Indiana Jones film in 19 years and another collaboration with Lucas and Spielberg. The film received generally positive reviews and was the second-highest-grossing film worldwide in 2008. Ford later said he would like to star in another sequel "if it didn't take another 20 years to digest."
Other 2008 work included Crossing Over, directed by Wayne Kramer. In the film, Ford plays an ICE/Homeland Security Investigations Special Agent, working alongside Ashley Judd and Ray Liotta. He also narrated a feature documentary film about the Dalai Lama, Dalai Lama Renaissance. Ford filmed the medical drama Extraordinary Measures in 2009 in Portland, Oregon. Released on January 22, 2010, the film also starred Brendan Fraser and Alan Ruck. Also in 2010, he co-starred in the film Morning Glory, along with Rachel McAdams, Diane Keaton and Patrick Wilson. Although the film was a disappointment at the box office, Ford's performance was well received by critics, some of whom thought it was his best role in years. In July 2011, Ford starred alongside Daniel Craig and Olivia Wilde in the science-fiction/western hybrid film Cowboys & Aliens. To promote the film, he appeared at San Diego Comic-Con and, apparently surprised by the warm welcome, told the audience, "I just wanted to make a living as an actor. I didn't know about this." Also in 2011, Ford starred in Japanese commercials advertising the video game Uncharted 3: Drake's Deception for the PlayStation 3.
2013 began a trend that saw Ford accepting more diverse supporting roles. That year, he co-starred in the corporate espionage thriller Paranoia with Liam Hemsworth and Gary Oldman, whom he had previously worked with in Air Force One, and also appeared in Ender's Game, 42 and Anchorman 2: The Legend Continues. His performance as Branch Rickey in the film 42 was praised by many critics and garnered Ford a nomination as best supporting actor for the Satellite Awards. In 2014, he appeared in The Expendables 3, and the following year, co-starred with Blake Lively in the romantic drama The Age of Adaline to positive reviews.
2015–present
Ford reprised the role of Han Solo in the long-awaited Star Wars sequel Star Wars: The Force Awakens (2015), which was highly successful, like its predecessors. During filming on June 11, 2014, Ford suffered what was said to be a fractured ankle when a hydraulic door fell on him. He was airlifted to John Radcliffe Hospital in Oxford, England, for treatment. Ford's son Ben released details on his father's injury, saying that his ankle would likely need a plate and screws, and that filming could be altered slightly, with the crew needing to shoot Ford from the waist up for a short time until he recovered. Ford made his return to filming in mid-August, after a two-month layoff as he recovered from his injury. Ford's character was killed off in The Force Awakens, but it was subsequently announced, via a casting call, that Ford would return in some capacity as Solo in Episode VIII. In February 2016, when the cast for Episode VIII was confirmed, it was indicated that Ford would not reprise his role in the film after all. When Ford was asked whether Solo could come back in "some form", he replied, "Anything is possible in space." He eventually made an uncredited appearance as a vision in Star Wars: The Rise of Skywalker (2019).
On February 26, 2015, Alcon Entertainment announced Ford would reprise his role as Rick Deckard in Denis Villeneuve's science fiction sequel film Blade Runner 2049. The film, and Ford's performance, was very well received by critics upon its release in October 2017. Scott Collura of IGN called it a "deep, rich, smart film that's visually awesome and full of great sci-fi concepts" and Ford's role "a quiet, sort of gut-wrenching interpretation to Deckard and what he must've gone through in the past three decades." The film grossed $259.3 million worldwide, short of the estimated $400 million that it needed to break even. In 2019, Ford had his first voice role in an animated film, as a dog named Rooster in The Secret Life of Pets 2. With filming of a fifth Indiana Jones film delayed by a year, Ford headlined a big-budget adaptation of Jack London's The Call of the Wild, playing prospector John Thornton. The film was released in February 2020 to a mixed critical reception and its theatrical release was shortened due to the impact of the COVID-19 pandemic on the film industry.
In 2022, Ford was cast to star alongside Helen Mirren in the Paramount+ western drama series 1923. The two had previously starred together 36 years earlier in The Mosquito Coast. The series premiered in December 2022 to positive reviews, and it is set to run for a total of two seasons. That same year, it was announced that Ford would star in the Apple TV+ comedy drama series Shrinking. The series premiered in January 2023 to positive reviews, with Ford receiving praise for his performance. In a 2023 interview with The Hollywood Reporter, it was revealed that he accepted the roles in both 1923 and Shrinking despite there not being a script at the time.
Ford reprised the role of Indiana Jones in Indiana Jones and the Dial of Destiny (2023), which he stated was his last appearance as the character. The film received generally positive reviews, with many critics highlighting Ford's performance. In October 2022, Ford was cast as Thaddeus "Thunderbolt" Ross in the 2025 superhero films Captain America: Brave New World and Thunderbolts*, set in the Marvel Cinematic Universe, replacing William Hurt, who played the character in previous MCU films from 2008 to 2021 before his death.
Personal life
Ford has been married three times and has four biological children and one adopted child. He was first married to Mary Marquardt from 1964 until their divorce in 1979. They had two sons, born in 1966 and 1969. The older son co-owns Ford's Filling Station, a gastropub located at Terminal 5 in Los Angeles International Airport. The younger son is owner of the Ludwig Clothing company and previously owned Strong Sports Gym and the Kim Sing Theater.
Ford's second marriage was to screenwriter Melissa Mathison from March 1983 until their separation in 2000; they divorced in 2004. They had a son, born in 1987, and a daughter, born in 1990. Mathison died in 2015.
Ford began dating actress Calista Flockhart after they met at the 2002 Golden Globe Awards. He proposed to Flockhart over Valentine's Day weekend in 2009. They married on June 15, 2010, in Santa Fe, New Mexico, where Ford was filming Cowboys & Aliens. They are the parents of a son, born in 2001, whom Flockhart had adopted before meeting Ford. Ford and Flockhart live on a ranch in Jackson, Wyoming, where he has lived since the 1980s and approximately half of which he has donated as a nature reserve. They retain a base in the Brentwood neighborhood of Los Angeles. Ford is one of Hollywood's most private actors, guarding much of his personal life.
Ford commented on his parenting choices in 2023: "I can tell you this: If I’d been less successful, I’d probably be a better parent."
In her 2016 autobiography The Princess Diarist, his co-star Carrie Fisher wrote that she and Ford had a three-month affair in 1976 during the filming of Star Wars.
Aviation
Ford is a licensed pilot of both fixed-wing aircraft and helicopters. On several occasions, he has personally provided emergency helicopter services at the request of local authorities in Wyoming, in one instance rescuing a hiker overcome by dehydration.
Ford began flight training in the 1960s at Wild Rose Idlewild Airport in Wild Rose, Wisconsin, flying in a Piper PA-22 Tri-Pacer, but at $15 an hour he could not afford to continue the training. In the mid-1990s, he bought a used Gulfstream II and asked one of his pilots, Terry Bender, to give him flying lessons. They started flying a Cessna 182 out of Jackson, Wyoming, later switching to Teterboro Airport in Teterboro, New Jersey, flying a Cessna 206, the aircraft in which he made his first solo flight.
Ford's aircraft are kept at Santa Monica Airport. The Bell 407 helicopter is often kept and flown in Jackson and has been used by Ford in two mountain rescues during his assigned duty time with Teton County Search and Rescue. During one of the rescues, Ford recovered a hiker who had become lost and disoriented. She boarded his helicopter and promptly vomited into one of the rescuers' caps, unaware of who the pilot was until much later; "I can't believe I barfed in Harrison Ford's helicopter!" she said later.
Ford flies his de Havilland Canada DHC-2 Beaver (N28S) more than any of his other aircraft, and has repeatedly said that he likes this aircraft and the sound of its Pratt & Whitney R-985 radial engine. According to Ford, it had been flown in the CIA's Air America operations and was riddled with bullet holes that had to be patched up.
In March 2004, Ford officially became chairman of the Experimental Aircraft Association (EAA)'s Young Eagles program, founded by then-EAA president Tom Poberezny and fellow actor-pilot Cliff Robertson. Ford was asked to take the position by Greg Anderson, Senior Vice President of the EAA at the time, to replace General Chuck Yeager, who was vacating the post that he had held for many years. Ford at first was hesitant, but later accepted the offer and has made appearances with the Young Eagles at the EAA AirVenture Oshkosh gathering at Oshkosh, Wisconsin, for two years. In July 2005, at the gathering in Oshkosh, Ford agreed to accept the position for another two years. He has flown over 280 children as part of the Young Eagles program, usually in his DHC-2 Beaver, which can seat the actor and five children. Ford stepped down as program chairman in 2009 and was replaced by Captain Chesley Sullenberger and First Officer Jeff Skiles. He is involved with the EAA chapter in Driggs, Idaho, just over the Teton Range from Jackson, Wyoming. On July 28, 2016, Ford flew the two millionth Young Eagle at the EAA AirVenture convention, making it the most successful aviation-youth introduction program in history.
As of 2009, Ford appears in Internet advertisements for General Aviation Serves America, a campaign by the advocacy group Aircraft Owners and Pilots Association (AOPA). He has also appeared in several independent aviation documentaries, including Wings Over the Rockies (2009), Flying the Feathered Edge: The Bob Hoover Project (2014), and Living in the Age of Airplanes (2015).
Ford is an honorary board member of the humanitarian aviation organization Wings of Hope, and is known for having made several trips to Washington, D.C., to fight for pilots' rights. He has also donated substantial funds to aerobatic champion Sean Tucker's charitable program, The Bob Hoover Academy (named after legendary aviator Bob Hoover), which educates at-risk teens in central California and teaches them how to fly.
Incidents
On August 22, 1987, Ford was traveling as a passenger with Clint Eastwood and Sondra Locke aboard a Gulfstream III when the jet developed an engine fire and stuck landing gear during a Paris-to-L.A. flight and was forced to land in Bangor, Maine. The charter company owning the G-3 sent another jet and mechanics to Bangor, and the group flew out on that plane the next day.
On October 23, 1999, Ford was involved in the crash of a Bell 206L4 LongRanger helicopter. The NTSB accident report states that Ford was piloting the aircraft over the Lake Piru riverbed near Santa Clarita, California, on a routine training flight. While making his second attempt at an autorotation with powered recovery, the aircraft was unable to recover power after the sudden drop in altitude. It landed hard and began skidding forward in the loose gravel before flipping onto its side. Neither Ford nor the instructor pilot suffered any injuries, though the helicopter was seriously damaged.
On March 5, 2015, Ford's plane, believed to be a Ryan PT-22 Recruit, made an emergency landing on the Penmar Golf Course in Venice, California after it lost engine power. He was taken to Ronald Reagan UCLA Medical Center, where he was reported to be in fair to moderate condition. Ford suffered a broken pelvis and broken ankle during the accident, as well as other injuries.
On February 13, 2017, Ford landed an Aviat Husky at John Wayne Airport in Orange County, California, on the taxiway left of runway 20L. A Boeing 737 was holding short of the runway on the taxiway when Ford overflew them.
On April 24, 2020, at the Los Angeles Hawthorne Airport while piloting his Husky, Ford crossed a runway where another aircraft was landing. According to the FAA, the two planes were about 3,600 feet from each other and there was no danger of a crash. A representative of Ford later said that he "misheard" an instruction given to him by air traffic control.
Activism
Environmental work
Ford is vice-chair of Conservation International, an American nonprofit environmental organization headquartered in Arlington, Virginia. The organization's intent is to protect nature. Since 1992, Ford has lent his voice to a series of public service messages promoting environmental involvement for EarthShare, an American federation of environmental and conservation charities. He has acted as a spokesperson for Restore Hetch Hetchy, a non-profit organization dedicated to restoring Yosemite National Park's Hetch Hetchy Valley to its original condition. Ford also appears in the documentary series Years of Living Dangerously, which reports on people affected by and seeking solutions to climate change.
In 1993, the arachnologist Norman Platnick named a new species of spider Calponia harrisonfordi, and in 2002 the entomologist Edward O. Wilson named a new ant species Pheidole harrisonfordi (in recognition of Ford's work as Vice Chairman of Conservation International). The Peruvian snake species Tachymenoides harrisonfordi was named for Ford in 2023.
In September 2013, Ford, while filming an environmental documentary in Indonesia, interviewed the Indonesian Forestry Minister, Zulkifli Hasan. After the interview, Ford and his crew were accused of "harassing state institutions" and publicly threatened with deportation. Questions within the interview concerned the Tesso Nilo National Park, Sumatra. It was alleged that the Minister of Forestry was given no prior warning of the questions nor a chance to explain the challenges of catching illegal loggers. Ford was granted an audience with the Indonesian President, Susilo Bambang Yudhoyono, during which he expressed concerns regarding Indonesia's environmental degradation and the government's efforts to address climate change. In response, the President explained Indonesia's commitment to preserving its oceans and forests.
In 2019, on behalf of Conservation International, Ford gave an impassioned speech during the United Nations' Climate Action Summit in New York on the destruction of the Amazon rainforest and its effect on climate change for the rest of the world. Ford urged his audience to listen to 'angry young people' trying to make a difference in the situation, emphasizing, "The most important thing we can do for them is to get the hell out of their way."
Political views
Like his parents, Ford is a lifelong Democrat.
On September 7, 1995, Ford testified before the U.S. Senate Foreign Relations Committee in support of the Dalai Lama and an independent Tibet. In 2007, he narrated the documentary Dalai Lama Renaissance.
In 2000, Ford donated $1000 to the presidential campaigns of Bill Bradley, Al Gore, and John McCain.
In 2003, he publicly condemned the Iraq War and called for "regime change" in the United States. He also criticized Hollywood for making movies which were "more akin to video games than stories about human life and relationships", and he called for more gun control in the United States.
In 2009, Ford signed a petition calling for the release of film director Roman Polanski, who had been arrested in Switzerland in relation to his 1977 charge for drugging and raping a 13-year-old girl.
After Republican presidential candidate Donald Trump said his favorite role of Ford's was Air Force One because he "stood up for America", Ford responded that it was just a film and made critical statements against Trump's presidential bid.
Ford endorsed Joe Biden's 2020 presidential campaign against Trump. He said that he wanted to "encourage people to support candidates that will support the environment" and felt that under Trump, the U.S. had "lost some of our credibility in the world". Along with Mark Hamill, Ford worked with the anti-Trump Republican group The Lincoln Project to produce and narrate a 2020 election ad attacking Trump's disparaging of Anthony Fauci.
On November 2, 2024, he endorsed Kamala Harris's 2024 presidential campaign.
Archaeology
Following on his success portraying the archaeologist Indiana Jones, Ford also plays a part in supporting the work of professional archaeologists. He serves as a General Trustee on the Governing Board of the Archaeological Institute of America (AIA), North America's oldest and largest organization devoted to the world of archaeology. Ford assists them in their mission of increasing public awareness of archaeology and preventing looting and the illegal antiquities trade.
Filmography
Awards and nominations
Throughout his career, Ford has received significant recognition for his work in the entertainment industry. In 1986, he was nominated for Best Actor at the 58th Academy Awards for his performance in Witness, a role for which he also received BAFTA and Golden Globe nominations in the same category. Three additional Golden Globe nominations went to Ford in 1987, 1994 and 1996 for his performances in The Mosquito Coast, The Fugitive and Sabrina. In 2000, he was the recipient of the AFI Life Achievement Award from the American Film Institute for his body of work, presented to him by two of his closest collaborators and fellow industry giants, George Lucas and Steven Spielberg. In 2002, he was given the Cecil B. DeMille Award, another career achievement honor, from the Hollywood Foreign Press Association at the 59th Golden Globe Awards ceremony. On May 30, 2003, Ford received a star on the Hollywood Walk of Fame.
In 2006, he received the Jules Verne Award, given to an actor who has "encouraged the spirit of adventure and imagination" throughout their career. He was presented with the first-ever Hero Award at the 2007 Scream Awards for his many iconic roles, including Indiana Jones and Han Solo (roles that together earned him three Saturn Awards for Best Actor, in 1982, 2016 and 2024), and in 2008 he received Spike TV's Guy's Choice Award for "Brass Balls". In 2015, Ford received the Albert R. Broccoli Britannia Award for Worldwide Contribution to Entertainment from BAFTA Los Angeles. In 2018, Ford was honored by the SAG-AFTRA Foundation with the Artists Inspiration Award for both his acting and philanthropic work alongside fellow honoree Lady Gaga. SAG-AFTRA Foundation Board President JoBeth Williams in the press release said, "Harrison Ford is an acting legend in every known galaxy, but what many do not know are the decades of philanthropic service and leadership he has given to Conservation International to help protect our planet."
Other prestigious film honors for Ford include an Honorary César, an Honorary Palme d'Or from the Cannes Film Festival, the Career Achievement Award from the Hollywood Film Awards, the Kirk Douglas Award for Excellence in Film from the Santa Barbara International Film Festival, the Box Office Star of the Century Award from the National Association of Theatre Owners and the Lifetime Achievement Award from both the Locarno Film Festival and the Zurich Film Festival.
Ford has also been honored multiple times for his involvement in general aviation, receiving the Living Legends of Aviation Award and the Experimental Aircraft Association's Freedom of Flight Award in 2009, the Wright Brothers Memorial Trophy in 2010, and the Al Ueltschi Humanitarian Award in 2013. Flying magazine ranked him number 48 on their 2013 list of the 51 Heroes of Aviation. In 2024, Ford was a recipient of the Disney Legends Award for his outstanding film contributions to The Walt Disney Company.
References
External links
Harrison Ford interview on KVUE about The Mosquito Coast in 1986 from Texas Archive of the Moving Image
Harrison Ford at Hollywood Walk of Fame
20th-century American male actors
21st-century American male actors
1942 births
Activists from California
Actors from Park Ridge, Illinois
Entertainment Community Fund
AFI Life Achievement Award recipients
American carpenters
American conservationists
American male film actors
American male television actors
American male video game actors
American male voice actors
American people of Belarusian-Jewish descent
American people of German descent
American people of Irish descent
Aviators from Illinois
California Democrats
Cecil B. DeMille Award Golden Globe winners
César Honorary Award recipients
Film producers from Illinois
Living people
Male actors from Chicago
Ripon College (Wisconsin) alumni
Survivors of aviation accidents or incidents
American gun control activists
Experimental Aircraft Association
Actors from Illinois
Jewish film people | Harrison Ford | [
"Engineering"
] | 7,342 | [
"Experimental Aircraft Association",
"Aerospace engineering organizations"
] |
44,062 | https://en.wikipedia.org/wiki/X-ray%20astronomy | X-ray astronomy is an observational branch of astronomy which deals with the study of X-ray observation and detection from astronomical objects. X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray astronomy uses a type of space telescope that can detect X-rays, which standard ground-based optical telescopes, such as those at the Mauna Kea Observatories, cannot.
X-ray emission is expected from astronomical objects that contain extremely hot gases at temperatures from about a million kelvin (K) to hundreds of millions of kelvin (MK). Moreover, the maintenance of the E-layer of ionized gas high in the Earth's thermosphere also suggested a strong extraterrestrial source of X-rays. Although theory predicted that the Sun and the stars would be prominent X-ray sources, there was no way to verify this because Earth's atmosphere blocks most extraterrestrial X-rays. It was not until ways of sending instrument packages to high altitudes were developed that these X-ray sources could be studied.
The existence of solar X-rays was confirmed in the mid-twentieth century by V-2s converted to sounding rockets, and the detection of extra-terrestrial X-rays has been the primary or secondary mission of multiple satellites since 1958. The first cosmic (beyond the Solar System) X-ray source was discovered by a sounding rocket in 1962. Called Scorpius X-1 (Sco X-1) (the first X-ray source found in the constellation Scorpius), its X-ray emission is 10,000 times greater than its visual emission, whereas that of the Sun is about a million times less. In addition, Scorpius X-1's energy output in X-rays is 100,000 times greater than the total emission of the Sun at all wavelengths.
Many thousands of X-ray sources have since been discovered. In addition, the intergalactic space in galaxy clusters is filled with a hot, but very dilute gas at a temperature between 100 and 1000 megakelvins (MK). The total amount of hot gas is five to ten times the total mass in the visible galaxies.
History of X-ray astronomy
In 1927, E.O. Hulburt of the US Naval Research Laboratory and associates Gregory Breit and Merle A. Tuve of the Carnegie Institution of Washington explored the possibility of equipping Robert H. Goddard's rockets to explore the upper atmosphere. "Two years later, he proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere, including detection of ultraviolet radiation and X-rays at high altitudes".
In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species; in the mid-1940s, radio observations revealed a radio corona around the Sun.
The search for X-ray sources from above the Earth's atmosphere began on August 5, 1948, at 12:07 GMT, when a US Army (formerly German) V-2 rocket was launched from White Sands Proving Grounds as part of Project Hermes. The first solar X-rays were recorded by T. Burnight.
The sensitivity of detectors increased greatly over the following 60 years of X-ray astronomy, through the 1960s, 70s, 80s, and 90s. In addition, the ability to focus X-rays has developed enormously, allowing the production of high-quality images of many fascinating celestial objects.
Sounding rocket flights
The first sounding rocket flights for X-ray research were accomplished at the White Sands Missile Range in New Mexico with a V-2 rocket on January 28, 1949. A detector was placed in the nose cone section and the rocket was launched in a suborbital flight to an altitude just above the atmosphere. X-rays from the Sun were detected by the U.S. Naval Research Laboratory Blossom experiment on board.
An Aerobee 150 rocket launched on June 19, 1962 (UTC) detected the first X-rays emitted from a source outside our solar system (Scorpius X-1). It is now known that such X-ray sources as Sco X-1 are compact stars, such as neutron stars or black holes. Material falling into a black hole may emit X-rays, but the black hole itself does not. The energy source for the X-ray emission is gravity. Infalling gas and dust is heated by the strong gravitational fields of these and other celestial objects. Based on discoveries in this new field of X-ray astronomy, starting with Scorpius X-1, Riccardo Giacconi received the Nobel Prize in Physics in 2002.
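The role of gravity as the energy source can be made quantitative with a standard order-of-magnitude estimate (a sketch using round illustrative numbers, not values derived from the rocket observations themselves). Matter falling at a rate \dot{M} onto a compact object of mass M and radius R liberates an accretion luminosity of roughly

    L_{\rm acc} \approx \frac{G M \dot{M}}{R},

and for a neutron star with M \approx 1.4\,M_\odot and R \approx 10 km the efficiency GM/(Rc^{2}) is about 0.2 of the rest-mass energy of the infalling gas, which is why even modest accretion rates produce X-ray luminosities far exceeding the Sun's total output.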
The largest drawback to rocket flights is their very short duration (just a few minutes above the atmosphere before the rocket falls back to Earth) and their limited field of view. A rocket launched from the United States will not be able to see sources in the southern sky; a rocket launched from Australia will not be able to see sources in the northern sky.
X-ray Quantum Calorimeter (XQC) project
In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.
Of interest is the hot ionized medium (HIM), consisting of coronal gas ejected from star surfaces at 10⁶–10⁷ K, which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble.
To measure the spectrum of the diffuse X-ray emission from the interstellar medium over the energy range 0.07 to 1 keV, NASA launched a Black Brant 9 from White Sands Missile Range, New Mexico on May 1, 2008. The Principal Investigator for the mission is Dr. Dan McCammon of the University of Wisconsin–Madison.
Balloons
Balloon flights can carry instruments to altitudes of up to 40 km above sea level, where they are above as much as 99.997% of the Earth's atmosphere. Unlike a rocket where data are collected during a brief few minutes, balloons are able to stay aloft for much longer. However, even at such altitudes, much of the X-ray spectrum is still absorbed. X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons. On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15–60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas, United States. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source.
High-energy focusing telescope
The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. Its maiden flight took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is c. 1.5'. Rather than relying on a conventional grazing-angle X-ray telescope, HEFT makes use of novel tungsten-silicon multilayer coatings to extend the reflectivity of its nested grazing-incidence mirrors beyond 10 keV. HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. HEFT was launched for a 25-hour balloon flight in May 2005. The instrument performed within specification and observed Tau X-1 (the Crab Nebula).
High-resolution gamma-ray and hard X-ray spectrometer (HIREGS)
A balloon-borne experiment called the High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) observed X-ray and gamma-ray emission from the Sun and other astronomical objects. It was launched from McMurdo Station, Antarctica, in December 1991 and again in December 1992. Steady winds carried the balloon on a circumpolar flight lasting about two weeks each time.
Rockoons
The rockoon, a blend of rocket and balloon, was a solid fuel rocket that, rather than being immediately lit while on the ground, was first carried into the upper atmosphere by a gas-filled balloon. Then, once separated from the balloon at its maximum height, the rocket was automatically ignited. This achieved a higher altitude, since the rocket did not have to move through the lower thicker air layers that would have required much more chemical fuel.
The original concept of "rockoons" was developed by Cmdr. Lee Lewis, Cmdr. G. Halvorson, S. F. Singer, and James A. Van Allen during an Aerobee rocket firing cruise on March 1, 1949.
From July 17 to July 27, 1956, the Naval Research Laboratory (NRL) launched eight Deacon rockoons from shipboard for solar ultraviolet and X-ray observations at about 30° N, 121.6° W, southwest of San Clemente Island, with an apogee of 120 km.
X-ray telescopes and mirrors
Satellites are needed because X-rays are absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray telescopes (XRTs) have varying directionality or imaging ability based on glancing angle reflection rather than refraction or large deviation reflection.
This limits them to much narrower fields of view than visible or UV telescopes. The mirrors can be made of ceramic or metal foil.
The first X-ray telescope in astronomy was used to observe the Sun. On April 19, 1960, the very first X-ray image of the Sun was taken using a pinhole camera on an Aerobee-Hi rocket; the first X-ray picture of the Sun taken with a grazing-incidence telescope followed in 1963, from a rocket-borne instrument.
The utilization of X-ray mirrors for extrasolar X-ray astronomy simultaneously requires:
the ability to determine the arrival location of an X-ray photon in two dimensions, and
a reasonable detection efficiency.
X-ray astronomy detectors
X-ray astronomy detectors have been designed and configured primarily for energy and occasionally for wavelength detection using a variety of techniques usually limited to the technology of the time.
X-ray detectors collect individual X-rays (photons of X-ray electromagnetic radiation) and count the number of photons collected (intensity), the energy (0.12 to 120 keV) of the photons collected, wavelength (c. 0.008–8 nm), or how fast the photons are detected (counts per hour), to tell us about the object that is emitting them.
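The quoted energy and wavelength ranges describe the same photons and are related by the standard photon-energy conversion (a sketch of the arithmetic, not a property of any particular detector):

    \lambda\,[\mathrm{nm}] \approx \frac{1.24}{E\,[\mathrm{keV}]},

so a 1 keV photon has a wavelength of about 1.24 nm, and the 0.12–120 keV band corresponds to roughly 10 nm down to 0.01 nm, of the same order as the approximate wavelength range quoted above.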
Astrophysical sources of X-rays
Several types of astrophysical objects emit, fluoresce, or reflect X-rays, from galaxy clusters, through black holes in active galactic nuclei (AGN) to galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), neutron star or black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, black-body radiation, synchrotron radiation, or what is called inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions.
An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate mass star.
Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Herculis), probably due to Roche lobe overflow. Her X-1 is the prototype for the massive X-ray binaries, although it falls on the borderline between high- and low-mass X-ray binaries.
In July 2020, astronomers reported the observation of a "hard tidal disruption event candidate" associated with ASASSN-20hx, located near the nucleus of galaxy NGC 6297, and noted that the observation represented one of the "very few tidal disruption events with hard powerlaw X-ray spectra".
Celestial X-ray sources
The celestial sphere has been divided into 88 constellations. The International Astronomical Union (IAU) constellations are areas of the sky. Each of these contains remarkable X-ray sources. Some of them have been identified from astrophysical modeling to be galaxies or black holes at the centers of galaxies. Some are pulsars. As with sources already successfully modeled by X-ray astrophysics, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth. Constellations are an astronomical device for handling observation and precision independent of current physical theory or interpretation. Astronomy has been around for a long time. Physical theory changes with time. With respect to celestial X-ray sources, X-ray astrophysics tends to focus on the physical reason for X-ray brightness, whereas X-ray astronomy tends to focus on their classification, order of discovery, variability, resolvability, and their relationship with nearby sources in other constellations.
Within the constellations Orion and Eridanus and stretching across them is a soft X-ray "hot spot" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα emitting filaments. Soft X-rays are emitted by hot gas (T ~ 2–3 MK) in the interior of the superbubble. This bright object forms the background for the "shadow" of a filament of gas and dust. The filament is traced by its 100 micrometre emission from dust at a temperature of about 30 K, as measured by IRAS. Here the filament absorbs soft X-rays between 100 and 300 eV, indicating that the hot gas is located behind the filament. This filament may be part of a shell of neutral gas that surrounds the hot bubble. Its interior is energized by ultraviolet (UV) light and stellar winds from hot stars in the Orion OB1 association. These stars energize a superbubble about 1,200 light-years across which is observed in the visual (Hα) and X-ray portions of the spectrum.
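The link between the quoted gas temperature and the soft X-ray band follows from the characteristic thermal photon energy (an order-of-magnitude sketch):

    E \sim k_{B} T \approx (8.6\times10^{-5}\,\mathrm{eV\,K^{-1}})\times(2.5\times10^{6}\,\mathrm{K}) \approx 0.2\ \mathrm{keV},

so a plasma at 2–3 MK radiates mainly at a few hundred eV, the same soft X-ray band in which the filament's absorption is observed.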
Explorational X-ray astronomy
Observational astronomy is usually considered to take place at Earth's surface (or beneath it, in neutrino astronomy), and observation from Earth orbit is usually grouped with it. As soon as the observer leaves the confines of Earth, the observer becomes a deep space explorer. Apart from Explorer 1 and Explorer 3 and the earlier satellites in the series, a probe that is going to be a deep space explorer usually leaves the Earth or an orbit around the Earth.
To qualify as a deep space X-ray astronomer/explorer, or "astronobot", a satellite or space probe needs only to carry an XRT or X-ray detector aboard and to leave Earth's orbit.
Ulysses was launched October 6, 1990, and reached Jupiter for its "gravitational slingshot" in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. The solar X-ray and cosmic gamma-ray burst experiment (GRB) had three main objectives: to study and monitor solar flares, to detect and localize cosmic gamma-ray bursts, and to detect Jovian aurorae in situ. Ulysses was the first satellite carrying a gamma burst detector which went outside the orbit of Mars. The hard X-ray detectors operated in the range 15–150 keV. The detectors consisted of 23-mm thick × 51-mm diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector changed its operating mode depending on (1) measured count rate, (2) ground command, or (3) change in spacecraft telemetry mode. The trigger level was generally set for 8-sigma above background and the sensitivity was about 10⁻⁶ erg/cm² (1 nJ/m²). When a burst trigger was recorded, the instrument switched to recording high-resolution data in a 32-kbit memory for a slow telemetry readout. Burst data consisted of either 16 s of 8-ms resolution count rates or 64 s of 32-ms count rates from the sum of the two detectors. There were also 16-channel energy spectra from the sum of the two detectors (taken either in 1, 2, 4, 16, or 32 second integrations). During 'wait' mode, the data were taken either in 0.25 or 0.5 s integrations and 4 energy channels (with shortest integration time being 8 s). Again, the outputs of the two detectors were summed.
The Ulysses soft X-ray detectors consisted of 2.5-mm thick × 0.5 cm² area Si surface-barrier detectors. A 100 mg/cm² beryllium foil front window rejected the low-energy X-rays and defined a conical FOV of 75° (half-angle). These detectors were passively cooled and operated in the temperature range −35 to −55 °C. The detector had 6 energy channels, covering the range 5–20 keV.
Theoretical X-ray astronomy
Theoretical X-ray astronomy is a branch of theoretical astronomy that deals with the theoretical astrophysics and theoretical astrochemistry of X-ray generation, emission, and detection as applied to astronomical objects.
Like theoretical astrophysics, theoretical X-ray astronomy uses a wide variety of tools which include analytical models to approximate the behavior of a possible X-ray source and computational numerical simulations to approximate the observational data. Once potential observational consequences are available they can be compared with experimental observations. Observers can look for data that refutes a model or helps in choosing between several alternate or conflicting models.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Most of the topics in astrophysics, astrochemistry, astrometry, and other fields that are branches of astronomy studied by theoreticians involve X-rays and X-ray sources. Many of the beginnings for a theory can be found in an Earth-based laboratory where an X-ray source is built and studied.
Dynamos
Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field. This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. If some of the stellar magnetic fields are really induced by dynamos, then field strength might be associated with rotation rate.
Astronomical models
From the observed X-ray spectrum, combined with spectral emission results for other wavelength ranges, an astronomical model addressing the likely source of X-ray emission can be constructed. For example, with Scorpius X-1 the X-ray spectrum steeply drops off as X-ray energy increases up to 20 keV, which is likely for a thermal-plasma mechanism. In addition, there is no radio emission, and the visible continuum is roughly what would be expected from a hot plasma fitting the observed X-ray flux. The plasma could be a coronal cloud of a central object or a transient plasma, where the energy source is unknown, but could be related to the idea of a close binary.
In the Crab Nebula X-ray spectrum there are three features that differ greatly from Scorpius X-1: its spectrum is much harder, its source diameter is measured in light-years (ly), not astronomical units (AU), and its radio and optical synchrotron emission are strong. Its overall X-ray luminosity rivals the optical emission and could be that of a nonthermal plasma. However, the Crab Nebula appears as an X-ray source that is a central freely expanding ball of dilute plasma, where the energy content is 100 times the total energy content of the large visible and radio portion, obtained from the unknown source.
The "Dividing Line" as giant stars evolve to become red giants also coincides with the Wind and Coronal Dividing Lines. To explain the drop in X-ray emission across these dividing lines, a number of models have been proposed:
low transition region densities, leading to low emission in coronae,
high-density wind extinction of coronal emission,
only cool coronal loops become stable,
changes in the magnetic field structure to an open topology, leading to a decrease of magnetically confined plasma, or
changes in the magnetic dynamo character, leading to the disappearance of stellar fields leaving only small-scale, turbulence-generated fields among red giants.
Analytical X-ray astronomy
High-mass X-ray binaries (HMXBs) are composed of OB supergiant companion stars and compact objects, usually neutron stars (NS) or black holes (BH). Supergiant X-ray binaries (SGXBs) are HMXBs in which the compact objects orbit massive companions with orbital periods of a few days (3–15 d) and in circular (or slightly eccentric) orbits. SGXBs show the hard X-ray spectra typical of accreting pulsars, and most show strong absorption as obscured HMXBs. X-ray luminosity (Lx) increases up to 10³⁶ erg·s⁻¹ (10²⁹ W).
The mechanism triggering the different temporal behavior observed between the classical SGXBs and the recently discovered supergiant fast X-ray transients (SFXTs) is still debated.
Stellar X-ray astronomy
The first detection of stellar X-rays occurred on April 5, 1974, when X-rays from Capella were detected. A rocket flight on that date briefly calibrated its attitude control system when a star sensor pointed the payload axis at Capella (α Aur). During this period, X-rays in the range 0.2–1.6 keV were detected by an X-ray reflector system co-aligned with the star sensor. The X-ray luminosity of Lx = 10³¹ erg·s⁻¹ (10²⁴ W) is four orders of magnitude above the Sun's X-ray luminosity.
Stellar coronae
Coronal stars, or stars within a coronal cloud, are ubiquitous among the stars in the cool half of the Hertzsprung-Russell diagram. Experiments with instruments aboard Skylab and Copernicus have been used to search for soft X-ray emission in the energy range ~0.14–0.284 keV from stellar coronae. The experiments aboard ANS succeeded in finding X-ray signals from Capella and Sirius (α CMa). X-ray emission from an enhanced solar-like corona was proposed for the first time. The high temperature of Capella's corona as obtained from the first coronal X-ray spectrum of Capella using HEAO 1 required magnetic confinement unless it was a free-flowing coronal wind.
In 1977, Proxima Centauri was discovered to be emitting high-energy radiation in the XUV. In 1978, α Cen was identified as a low-activity coronal source. With the operation of the Einstein observatory, X-ray emission was recognized as a characteristic feature common to a wide range of stars covering essentially the whole Hertzsprung-Russell diagram. The initial Einstein survey led to significant insights:
X-ray sources abound among all types of stars, across the Hertzsprung-Russell diagram and across most stages of evolution,
the X-ray luminosities and their distribution along the main sequence were not in agreement with the long-favored acoustic heating theories, but were now interpreted as the effect of magnetic coronal heating, and
stars that are otherwise similar reveal large differences in their X-ray output if their rotation period is different.
To fit the medium-resolution spectrum of UX Arietis, subsolar abundances were required.
Stellar X-ray astronomy is contributing toward a deeper understanding of
magnetic fields in magnetohydrodynamic dynamos,
the release of energy in tenuous astrophysical plasmas through various plasma-physical processes, and
the interactions of high-energy radiation with the stellar environment.
Current wisdom has it that the most massive coronal main-sequence stars are late-A or early-F stars, a conjecture that is supported both by observation and by theory.
Young, low-mass stars
Newly formed stars are known as pre-main-sequence stars during the stage of stellar evolution before they reach the main sequence. Stars in this stage (ages <10 million years) produce X-rays in their stellar coronae. However, their X-ray emission is 10³ to 10⁵ times stronger than that of main-sequence stars of similar masses.
X-ray emission for pre–main-sequence stars was discovered by the Einstein Observatory. This X-ray emission is primarily produced by magnetic reconnection flares in the stellar coronae, with many small flares contributing to the "quiescent" X-ray emission from these stars. Pre–main sequence stars have large convection zones, which in turn drive strong dynamos, producing strong surface magnetic fields. This leads to the high X-ray emission from these stars, which lie in the saturated X-ray regime, unlike main-sequence stars that show rotational modulation of X-ray emission. Other sources of X-ray emission include accretion hotspots and collimated outflows.
X-ray emission as an indicator of stellar youth is important for studies of star-forming regions. Most star-forming regions in the Milky Way Galaxy are projected on Galactic-Plane fields with numerous unrelated field stars. It is often impossible to distinguish members of a young stellar cluster from field-star contaminants using optical and infrared images alone. X-ray emission can easily penetrate moderate absorption from molecular clouds, and can be used to identify candidate cluster members.
Unstable winds
Given the lack of a significant outer convection zone, theory predicts the absence of a magnetic dynamo in earlier A stars. In early stars of spectral type O and B, shocks developing in unstable winds are the likely source of X-rays.
Coolest M dwarfs
Beyond spectral type M5, the classical αω dynamo can no longer operate as the internal structure of dwarf stars changes significantly: they become fully convective. As a distributed (or α²) dynamo may become relevant, both the magnetic flux on the surface and the topology of the magnetic fields in the corona should systematically change across this transition, perhaps resulting in some discontinuities in the X-ray characteristics around spectral class dM5. However, observations do not seem to support this picture: the lowest-mass star with a long-term X-ray detection, VB 8 (M7e V), has shown steady emission at levels of X-ray luminosity (LX) ≈ 10²⁶ erg·s⁻¹ (10¹⁹ W) and flares up to an order of magnitude higher. Comparison with other late M dwarfs shows a rather continuous trend.
Strong X-ray emission from Herbig Ae/Be stars
Herbig Ae/Be stars are pre-main sequence stars. As to their X-ray emission properties, some are
reminiscent of hot stars,
others point to coronal activity as in cool stars, in particular the presence of flares and very high temperatures.
The nature of these strong emissions has remained controversial with models including
unstable stellar winds,
colliding winds,
magnetic coronae,
disk coronae,
wind-fed magnetospheres,
accretion shocks,
the operation of a shear dynamo,
the presence of unknown late-type companions.
K giants
The FK Com stars are giants of spectral type K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous (LX ≥ 10³² erg·s⁻¹ or 10²⁵ W) and the hottest known with dominant temperatures up to 40 MK. However, the current popular hypothesis involves a merger of a close binary system in which the orbital angular momentum of the companion is transferred to the primary.
Pollux is the brightest star in the constellation Gemini, despite its Beta designation, and the 17th brightest in the sky. Pollux is a giant orange K star that makes an interesting color contrast with its white "twin", Castor. Evidence has been found for a hot, outer, magnetically supported corona around Pollux, and the star is known to be an X-ray emitter.
Eta Carinae
New X-ray observations by the Chandra X-ray Observatory show three distinct structures: an outer, horseshoe-shaped ring about 2 light years in diameter, a hot inner core about 3 light-months in diameter, and a hot central source less than 1 light-month in diameter which may contain the superstar that drives the whole show. The outer ring provides evidence of another large explosion that occurred over 1,000 years ago. These three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. "The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays," says Prof. Kris Davidson of the University of Minnesota. Davidson is principal investigator for the Eta Carina observations by the Hubble Space Telescope. "In the most popular theory, X-rays are made by colliding gas streams from two stars so close together that they'd look like a point source to us. But what happens to gas streams that escape to farther distances? The extended hot stuff in the middle of the new image gives demanding new conditions for any theory to meet."
Amateur X-ray astronomy
Collectively, amateur astronomers observe a variety of celestial objects and phenomena, sometimes with equipment that they build themselves. The United States Air Force Academy (USAFA) is the home of the US's only undergraduate satellite program, and has developed and continues to develop the FalconLaunch sounding rockets. In addition to any direct amateur efforts to put X-ray astronomy payloads into space, there are opportunities that allow student-developed experimental payloads to be put on board commercial sounding rockets as a free-of-charge ride.
There are major limitations to amateurs observing and reporting experiments in X-ray astronomy: the cost of building an amateur rocket or balloon to place a detector high enough and the cost of appropriate parts to build a suitable X-ray detector.
Major questions in X-ray astronomy
As X-ray astronomy uses a major spectral probe to peer into the source, it is a valuable tool in efforts to understand many puzzles.
Stellar magnetic fields
Magnetic fields are ubiquitous among stars, yet we do not understand precisely why, nor have we fully understood the bewildering variety of plasma physical mechanisms that act in stellar environments. Some stars, for example, seem to have fossil magnetic fields left over from their period of formation, while others seem to generate the field anew frequently.
Extrasolar X-ray source astrometry
With the initial detection of an extrasolar X-ray source, the first question usually asked is "What is the source?" An extensive search is often made in other wavelengths such as visible or radio for possible coincident objects. Many of the verified X-ray locations still do not have readily discernible sources. X-ray astrometry becomes a serious concern that results in ever greater demands for finer angular resolution and spectral radiance.
There are inherent difficulties in making X-ray/optical, X-ray/radio, and X-ray/X-ray identifications based solely on positional coincidence, especially with handicaps in making identifications, such as the large uncertainties in positional determinations made from balloons and rockets, poor source separation in the crowded region toward the galactic center, source variability, and the multiplicity of source nomenclature.
X‐ray source counterparts to stars can be identified by calculating the angular separation between source centroids and the position of the star. The maximum allowable separation is a compromise between a larger value to identify as many real matches as possible and a smaller value to minimize the probability of spurious matches. "An adopted matching criterion of 40" finds nearly all possible X‐ray source matches while keeping the probability of any spurious matches in the sample to 3%."
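As an illustration of this matching step, the angular separation between an X-ray source centroid and a candidate stellar counterpart can be computed from their equatorial coordinates and compared against the 40″ criterion. The sketch below is a minimal example in Python using the Vincenty form of the spherical separation formula; the coordinates are hypothetical and the 40-arcsecond threshold is the one quoted above.

    import math

    def angular_separation_arcsec(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
        """Angular separation between two sky positions, in arcseconds.

        Uses the Vincenty formula on a sphere, which remains accurate for
        both very small and very large separations.
        """
        ra1, dec1, ra2, dec2 = map(math.radians, (ra1_deg, dec1_deg, ra2_deg, dec2_deg))
        dra = ra2 - ra1
        num = math.hypot(
            math.cos(dec2) * math.sin(dra),
            math.cos(dec1) * math.sin(dec2) - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
        )
        den = math.sin(dec1) * math.sin(dec2) + math.cos(dec1) * math.cos(dec2) * math.cos(dra)
        return math.degrees(math.atan2(num, den)) * 3600.0

    # Hypothetical X-ray source centroid and candidate star (RA, Dec in degrees).
    xray_source = (83.6331, 22.0145)
    candidate_star = (83.6412, 22.0198)

    separation = angular_separation_arcsec(*xray_source, *candidate_star)
    print(f"separation = {separation:.1f} arcsec -> match: {separation <= 40.0}")

Run on these made-up positions, the separation comes out to roughly 33 arcseconds, so the pair would count as a match under the 40″ criterion.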
Solar X-ray astronomy
All of the detected X-ray sources at, around, or near the Sun appear to be associated with processes in the corona, which is its outer atmosphere.
Coronal heating problem
In the area of solar X-ray astronomy, there is the coronal heating problem. The photosphere of the Sun has an effective temperature of 5,570 K yet its corona has an average temperature of 1–2 × 10⁶ K. However, the hottest regions are 8–20 × 10⁶ K. The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere.
It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events—nanoflares.
Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms.
Coronal mass ejection
A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entrained closed coronal magnetic field regions. Evolution of these closed magnetic structures in response to various photospheric motions over different time scales (convection, differential rotation, meridional circulation) somehow leads to the CME. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs.
The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production. "Relating the sigmoids at X-ray (and other) wavelengths to magnetic structures and current systems in the solar atmosphere is the key to understanding their relationship to CMEs."
The first detection of a Coronal mass ejection (CME) as such was made on December 1, 1971, by R. Tousey of the US Naval Research Laboratory using OSO 7. Earlier observations of coronal transients or even phenomena observed visually during solar eclipses are now understood as essentially the same thing.
The largest geomagnetic perturbation, resulting presumably from a "prehistoric" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens. The same instrument recorded a crotchet, an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays (by Roentgen) and the recognition of the ionosphere (by Kennelly and Heaviside).
Exotic X-ray sources
A microquasar is a smaller cousin of a quasar that is a radio emitting X-ray binary, with an often resolvable pair of radio jets.
LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source, CG135+01.
Observations are revealing a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs).
Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays.
X-ray dark stars
During the solar cycle, the Sun is at times almost X-ray dark, making it almost an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. Hardly any X-rays are emitted by red giants. There is a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F. Altair is spectral type A7V and Vega is A0V. Altair's total X-ray luminosity is at least an order of magnitude larger than the X-ray luminosity for Vega. The outer convection zone of early F stars is expected to be very shallow and absent in A-type dwarfs, yet the acoustic flux from the interior reaches a maximum for late A and early F stars, provoking investigations of magnetic activity in A-type stars along three principal lines. Chemically peculiar stars of spectral type Bp or Ap are appreciable magnetic radio sources, most Bp/Ap stars remain undetected, and of those reported early on as producing X-rays only a few can be identified as probably single stars.
X-ray dark planets and comets
X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area."
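The geometric reason such methods favor low-mass stars can be sketched with illustrative radii (assumed round values, not measurements of any particular system): the fraction of the stellar disk, and hence roughly of a compact corona, blocked during transit scales as the squared radius ratio,

    f \approx \left(\frac{R_{p}}{R_{*}}\right)^{2} \approx \left(\frac{0.10\,R_\odot}{0.2\,R_\odot}\right)^{2} \approx 0.25,

so a Jupiter-sized planet transiting a mid-M dwarf could block on the order of a quarter of the disk, compared with only about 1% for the same planet transiting a Sun-like star.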
As X-ray detectors have become more sensitive, they have observed that some planets and other normally X-ray non-luminescent celestial objects under certain conditions emit, fluoresce, or reflect X-rays.
Comet Lulin
NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to within 63 Gm of Earth. For the first time, astronomers could see simultaneous UV and X-ray images of a comet. "The solar wind—a fast-moving stream of particles from the sun—interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet.
See also
Balloons for X-ray astronomy
Crab (unit)
Gamma-ray astronomy
History of X-ray astronomy
IRAS 13224-3809
List of X-ray space telescopes
Solar X-ray astronomy
Stellar X-ray astronomy
Ultraviolet astronomy
X-ray telescope
References
Sources
The content of this article was adapted and expanded from http://imagine.gsfc.nasa.gov/ (Public Domain)
External links
How Many Known X-Ray (and Other) Sources Are There?
Is My Favorite Object an X-ray, Gamma-Ray, or EUV Source?
X-ray all-sky survey on WIKISKY
Audio – Cain/Gay (2009) Astronomy Cast – X-Ray Astronomy
Space plasmas
Astronomical imaging
Astronomical X-ray sources
Observational astronomy
Astronomical sub-disciplines | X-ray astronomy | [
"Physics",
"Astronomy"
] | 8,725 | [
"Space plasmas",
"Observational astronomy",
"Astrophysics",
"X-ray astronomy",
"Astronomical X-ray sources",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
44,063 | https://en.wikipedia.org/wiki/Extragalactic%20astronomy | Extragalactic astronomy is the branch of astronomy concerned with objects outside the Milky Way galaxy. In other words, it is the study of all astronomical objects which are not covered by galactic astronomy.
The closest objects in extragalactic astronomy include the galaxies of the Local Group, which are close enough to allow very detailed analyses of their contents (e.g. supernova remnants, stellar associations). As instrumentation has improved, distant objects can now be examined in more detail and so extragalactic astronomy includes objects at nearly the edge of the observable universe. Research into distant galaxies (outside of our Local Group) is valuable for studying aspects of the universe such as galaxy evolution and Active Galactic Nuclei (AGN), which give insight into physical phenomena (e.g. supermassive black hole accretion and the presence of dark matter). It is through extragalactic astronomy that astronomers and physicists are able to study the effects of general relativity, such as gravitational lensing and gravitational waves, which are otherwise impossible (or nearly impossible) to study on a galactic scale.
A key interest in extragalactic astronomy is the study of how galaxies behave and interact throughout the universe. Astronomers' methodologies range from theoretical modeling to observation-based methods.
Galaxies form in various ways. In most cosmological N-body simulations, the earliest galaxies in the cosmos formed in the first few hundred million years.
These primordial galaxies formed as the enormous reservoirs of gas and dust in the early universe collapsed in on themselves, giving birth to the first stars, now known as Population III stars. These stars had enormous masses, in the range of 300 to perhaps 3 million solar masses. Because of their large masses, these stars had extremely short lifespans.
Famous examples
Hubble Deep Field
LIGO's detection of gravitational waves
Chandra Deep Field South
Topics
Active Galactic Nuclei (AGN), Quasars
Dark Matter
Galaxy clusters, Superclusters
Intergalactic stars
Intergalactic dust
the observable universe
Radio galaxies
Supernovae
Extragalactic planet
See also
Andromeda–Milky Way collision
Galaxy color–magnitude diagram
Galaxy formation and evolution
Observational cosmology
References
Astronomical sub-disciplines
Physical cosmology
Edwin Hubble | Extragalactic astronomy | [
"Physics",
"Astronomy"
] | 449 | [
"Theoretical physics",
"Astrophysics",
"Physical cosmology",
"Extragalactic astronomy",
"Astronomical sub-disciplines"
] |
44,069 | https://en.wikipedia.org/wiki/Vulcan%20%28hypothetical%20planet%29 | Vulcan was a proposed planet that some pre-20th century astronomers thought existed in an orbit between Mercury and the Sun. Speculation about, and even purported observations of, intermercurial bodies or planets date back to the beginning of the 17th century. The case for their probable existence was bolstered by the support of the French mathematician Urbain Le Verrier, who had predicted the existence of Neptune using disturbances in the orbit of Uranus. By 1859 he had confirmed unexplained peculiarities in Mercury's orbit and predicted that they had to be the result of the gravitational influence of another unknown nearby planet or series of asteroids. A French amateur astronomer's report that he had observed an object passing in front of the Sun that same year led Le Verrier to announce that the long sought after planet, which he gave the name Vulcan, had been discovered at last.
Many searches were conducted for Vulcan over the following decades, but despite several claimed observations, its existence could not be confirmed. The need for the planet as an explanation for Mercury's orbital peculiarities was later rendered unnecessary when Einstein's 1915 theory of general relativity showed that Mercury's departure from an orbit predicted by Newtonian physics was explained by effects arising from the curvature of spacetime caused by the Sun's mass.
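The general-relativistic result can be stated compactly (a standard textbook sketch using modern orbital values rather than Le Verrier's own figures). The extra perihelion advance per orbit is

    \Delta\varphi = \frac{6\pi G M_\odot}{c^{2} a (1 - e^{2})},

which for Mercury's semi-major axis a \approx 5.79\times10^{10} m and eccentricity e \approx 0.206 gives about 5\times10^{-7} radians per orbit; over the roughly 415 orbits Mercury completes per century this amounts to about 43 arcseconds, matching the anomaly Le Verrier had identified.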
Hypotheses and observations
Celestial bodies interior to the orbit of Mercury had been hypothesized, searched for, and even claimed as having been observed, for centuries.
Claims of actually seeing objects passing in front of the Sun included those made by the German astronomer Christoph Scheiner in 1611 (which turned out to be the discovery of sunspots), British lawyer, writer and amateur astronomer Capel Lofft's observations of 'an opaque body traversing the sun's disc' on 6 January 1818, and Bavarian physician and astronomer Franz von Paula Gruithuisen's 26 June 1819 report of seeing "two small spots...on the Sun, round, black and unequal in size". A German astronomer reported many observations, also claiming to have seen two spots, with the first observation on 23 October 1822 and subsequent observations in 1823, 1834, 1836, and 1837; in 1834 the larger spot was recorded as 3 arcseconds across, and the smaller 1.25 arcseconds.
Proposals that there could be planets orbiting inside Mercury's orbit were put forward by British scientist Thomas Dick in 1838 and by French physicist, mathematician, and astronomer Jacques Babinet in 1846 who suggested there may be "incandescent clouds of a planetary kind, circling the Sun" and proposed the name "Vulcan" (after the god Vulcan from Roman mythology) for a planet close to the Sun.
As a planet near the Sun would be lost in its glare, several observers mounted systematic searches to try to catch it during "transit", i.e. when it passes in front of the Sun's disc. German amateur astronomer Heinrich Schwabe searched unsuccessfully on every clear day from 1826 to 1843 and Yale scientist Edward Claudius Herrick conducted observations twice daily starting in 1847, hoping to catch a planet in transit. French physician and amateur astronomer Edmond Modeste Lescarbault began searching the Sun's disk in 1853, and more systematically after 1858, with a 3.75 inch (95 mm) refractor in an observatory he set up outside his surgery.
Le Verrier's prediction
In 1840, François Arago, the director of the Paris Observatory, suggested to mathematician Urbain Le Verrier that he work on the topic of Mercury's orbit around the Sun. The goal of this study was to construct a model based on Sir Isaac Newton's laws of motion and gravitation. By 1843, Le Verrier published his provisional theory regarding Mercury's motion, with a detailed presentation published in 1845, which would be tested during a transit of Mercury across the face of the Sun in 1848. Predictions from Le Verrier's theory failed to match the observations.
Despite this, Le Verrier continued his work and, in 1859, published a more thorough study of Mercury's motion. This was based on a series of meridian observations of the planet and 14 transits. This study's rigor meant that any differences between the motion predicted and what was observed would point to the influence of an unknown factor. Indeed, some discrepancies remained. During Mercury's orbit, its perihelion advances by a small amount, something called perihelion precession. The observed value exceeds the classical mechanics prediction by the small amount of 43 arcseconds per century.
Le Verrier postulated that the excess precession could be explained by the presence of some unidentified object or objects inside the orbit of Mercury. He calculated that it was either another Mercury size planet or, since it was unlikely that astronomers were failing to see such a large object, an unknown asteroid belt near the Sun.
The fact that Le Verrier had predicted the existence of the planet Neptune in 1846 using the same techniques lent veracity to his claim.
Claimed discovery
On 22 December 1859, Le Verrier received a letter from Lescarbault, saying that he had seen a transit of the hypothetical planet on March 26 of that year. Le Verrier took the train to the village of Orgères-en-Beauce, some southwest of Paris, to Lescarbault's homemade observatory. Le Verrier arrived unannounced and proceeded to interrogate the man.
Lescarbault described in detail how, on 26 March 1859, he observed a small black dot on the face of the Sun. After some time had passed, he realized that it was moving. He thought it looked similar to the transit of Mercury which he had observed in 1845. He estimated the distance it had already traveled, made some measurements of its position and direction of motion and, using an old clock and a pendulum with which he took his patients' pulses, estimated the total duration of the transit (coming up with 1 hour, 17 minutes, and 9 seconds).
Le Verrier was not happy about Lescarbault's crude equipment but was satisfied the physician had seen the transit of a previously unknown planet. On 2 January 1860 he announced the discovery of the new planet with the proposed name from mythology, "Vulcan", at the meeting of the Académie des Sciences in Paris. Lescarbault, for his part, was awarded the Légion d'honneur and invited to appear before numerous learned societies.
Not everyone accepted the veracity of Lescarbault's "discovery", however. An eminent French astronomer, Emmanuel Liais, who was working for the Brazilian government in Rio de Janeiro in 1859, claimed to have been studying the surface of the Sun with a telescope twice as powerful as Lescarbault's at the very moment that Lescarbault said he observed his mysterious transit. Liais, therefore, was "in a condition to deny, in the most positive manner, the passage of a planet over the sun at the time indicated".
Based on Lescarbault's "transit", Le Verrier computed Vulcan's orbit: it supposedly revolved about the Sun in a nearly circular orbit at a distance of . The period of revolution was 19 days and 17 hours, and the orbit was inclined to the ecliptic by 12 degrees and 10 minutes (an incredible degree of precision). As seen from the Earth, Vulcan's greatest elongation from the Sun was 8 degrees.
Attempts to confirm the discovery
Numerous reports reached Le Verrier from other amateurs who claimed to have seen unexplained transits. Some of these reports referred to observations made many years earlier, and many were not dated, let alone accurately timed. Nevertheless, Le Verrier continued to tinker with Vulcan's orbital parameters as each newly reported sighting reached him. He frequently announced dates of future Vulcan transits. When these failed to materialize, he tinkered with the parameters some more.
Shortly after 08:00 on 29 January 1860, F.A.R. Russell and three other people in London saw an alleged transit of an intra-Mercurial planet. An American observer, Richard Covington, many years later claimed to have seen a well-defined black spot progress across the Sun's disk around 1860 when he was stationed in Washington Territory.
No observations of Vulcan were made in 1861. Then, on the morning of 20 March 1862, between 08:00 and 09:00 Greenwich Time, another amateur astronomer, a Mr. Lummis of Manchester, England, saw a transit. His colleague, whom he alerted, also saw the event. Based on these two men's reports, two French astronomers, Benjamin Valz and Rodolphe Radau, independently calculated the object's supposed orbital period, with Valz deriving a figure of 17 days and 13 hours and Radau a figure of 19 days and 22 hours.
On 8 May 1865 another French astronomer, Aristide Coumbary, observed an unexpected transit from Istanbul, Turkey.
Between 1866 and 1878, no reliable observations of the hypothetical planet were made. Then, during the total solar eclipse of July 29, 1878, two experienced astronomers, Professor James Craig Watson, the director of the Ann Arbor Observatory in Michigan, and Lewis Swift, from Rochester, New York, both claimed to have seen a Vulcan-type planet close to the Sun. Watson, observing from Separation Point, Wyoming, placed the planet about 2.5 degrees southwest of the Sun and estimated its magnitude at 4.5. Swift, observing the eclipse from a location near Denver, Colorado, saw what he took to be an intra-mercurial planet about 3 degrees southwest of the Sun. He estimated its brightness to be the same as that of Theta Cancri, a fifth-magnitude star which was also visible during totality, about six or seven minutes from the "planet". Theta Cancri and the planet were nearly in line with the Sun's centre.
Watson and Swift had reputations as excellent observers. Watson had already discovered more than twenty asteroids, while Swift had several comets named after him. Both described the colour of their hypothetical intra-mercurial planet as "red". Watson reported that it had a definite disk—unlike stars, which appear in telescopes as mere points of light—and that its phase indicated that it was on the far side of the Sun approaching superior conjunction.
Both Watson and Swift had observed two objects they believed were not known stars, but after Swift corrected an error in his coordinates, none of the coordinates matched each other, nor known stars. The idea that four objects were observed during the eclipse generated controversy in scientific journals and mockery from Watson's rival C. H. F. Peters. Peters noted that the margin of error in the pencil and cardboard recording device Watson had used was large enough to plausibly include a bright known star. A skeptic of the Vulcan hypothesis, Peters dismissed all the observations as mistaking known stars as planets.
Astronomers continued searching for Vulcan during total solar eclipses in 1883, 1887, 1889, 1900, 1901, 1905, and 1908. Finally, in 1908, William Wallace Campbell, Director, and Charles Dillon Perrine, Astronomer, of the Lick Observatory, after comprehensive photographic observations at three solar eclipse expeditions in 1901, 1905, and 1908, stated: "In our opinion, the work of the three Crocker Expeditions ... brings the observational side of the intermercurial planet problem, famous for half a century, definitely to a close."
Hypothesis disproved
In 1915 Einstein's theory of relativity, an approach to understanding gravity entirely different from that of classical mechanics, removed the need for Le Verrier's hypothetical planet. It showed that the peculiarities in Mercury's orbit were the results of the curvature of spacetime caused by the mass of the Sun. This added a predicted 0.1 arc-second advance of Mercury's perihelion each orbital revolution, or 43 arc-seconds per century, exactly the observed amount (without any recourse to the existence of a hypothetical Vulcan). The new theory modified the predicted orbits of all planets, but the magnitude of the differences from Newtonian theory diminishes rapidly as one gets farther from the Sun. Also, Mercury's fairly eccentric orbit makes it much easier to detect the perihelion shift than is the case for the nearly circular orbits of Venus and Earth. Einstein's theory was empirically verified in the Eddington experiment during the solar eclipse of May 29, 1919 when photographs showed the curvature of spacetime was bending starlight around the Sun. Astronomers generally quickly accepted that a large planet inside the orbit of Mercury could not exist, given the corrected equation of gravity.
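The figure of 43 arc-seconds per century can be checked against the standard first-order formula for the relativistic perihelion advance, $\Delta\phi = 6\pi GM / (c^2 a (1 - e^2))$ radians per orbit. The following sketch (illustrative only; the constants are rounded published values for the Sun and Mercury) reproduces the observed excess:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the Sun, kg
C = 2.998e8            # speed of light, m/s
A = 5.791e10           # Mercury's semi-major axis, m
E = 0.2056             # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969   # Mercury's orbital period, days

# First-order general-relativistic perihelion advance per orbit, in radians.
per_orbit = 6 * math.pi * G * M_SUN / (C**2 * A * (1 - E**2))

orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = per_orbit * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec_per_century, 1))  # ~43.0, matching the unexplained excess
```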
Today, the International Astronomical Union has reserved the name "Vulcan" for the hypothetical planet, even though it has been ruled out, and also for the Vulcanoids, a hypothetical population of asteroids that may exist inside the orbit of the planet Mercury. Thus far, however, earth- and space-based telescopes and the NASA Parker Solar Probe have detected no such asteroids. While three Atira asteroids have perihelion points within the orbit of Mercury, their aphelia are outside Mercury's orbit. Therefore, they cannot be defined as Vulcanoids, which would require wholly intra-Mercurian circular orbital trajectories, which none of them possess.
See also
Atira asteroid
Fictional planets of the Solar System
Hypothetical moon of Mercury
Nemesis (hypothetical star)
Planet Nine
Planets beyond Neptune
John H. Tice, weather forecaster who based predictions on supposed movements of Vulcan
Tyche (hypothetical planet)
Vulcan (Star Trek)
Vulcanoid
, an Atira asteroid with an intra-Mercurian perihelion, the smallest semi-major axis and the shortest orbital period of all asteroids
, an Atira asteroid with an intra-Mercurian perihelion
, an Atira asteroid with an intra-Mercurian perihelion
References
Further reading
Originally published as The Hunt for Vulcan: ... And How Albert Einstein Destroyed a Planet, Discovered Relativity, and Deciphered the Universe.
The subject was also featured on an episode of Arthur C. Clarke's Mysterious World entitled "Strange Skies", originally broadcast on November 18, 1980.
External links
Asimov, Isaac (1975). "The Planet That Wasn't", The Magazine of Fantasy and Science Fiction
Schlyter, Paul (2006). "Vulcan, the intra-Mercurial planet, 1860–1916, 1971", The Nine Planets: A Multimedia Tour of the Solar System (Appendix 7: "Hypothetical Planets") converted to HTML by Bill Arnett.
"The Planet Vulcan", Scientific American, 31 August 1878, p. 128, columns 2–3
Astronomical hypotheses
General relativity
Hypothetical bodies of the Solar System
Hypothetical planets
Mercury (planet)
Obsolete theories in physics
Solar System dynamic theories
Tests of general relativity
Vulcan (mythology)
Solar System | Vulcan (hypothetical planet) | [
"Physics",
"Astronomy"
] | 3,060 | [
"Astronomical hypotheses",
"Outer space",
"Theoretical physics",
"Astronomical myths",
"General relativity",
"Hypothetical astronomical objects",
"Astronomical controversies",
"Theory of relativity",
"Astronomical objects",
"Solar System",
"Obsolete theories in physics"
] |
44,127 | https://en.wikipedia.org/wiki/Military%20incompetence | Military incompetence refers to incompetencies and failures of military organisations, whether through incompetent individuals or through a flawed institutional culture.
The effects of isolated cases of personal incompetence can be disproportionately significant in military organisations. Strict hierarchies of command provide the opportunity for a single decision to direct the work of thousands, whilst an institutional culture devoted to following orders without debate can help ensure that a bad or miscommunicated decision is implemented without being challenged or corrected.
However, the most common cases of "military incompetence" can be attributable to a flawed organisational culture. Perhaps the most marked of these is a conservative and traditionalist attitude, where innovative ideas or new technology are discarded or left untested. A tendency to believe that a problem can be solved by applying an earlier (failed) solution "better", be that with more men, more firepower, or simply more zeal, is common. A strict hierarchical system often discourages the devolution of power to junior commanders, and can encourage micromanagement by senior officers.
The nature of warfare provides several factors which exacerbate these effects; the fog of war means that information about the enemy forces is often limited or inaccurate, making it easy for the intelligence process to interpret the information to agree with existing assumptions, or to fit it to their own preconceptions and expectations. Communications tend to deteriorate in battlefield situations, with the flow of information between commanders and combat units being disrupted, making it difficult to react to changes in the situation as they develop.
After operations have ceased, military organisations often fail to learn effectively from experience. In victory, whatever methods have been used, no matter how inefficient, appear to have been vindicated (see victory disease), whilst in defeat there is a tendency to select scapegoats and to avoid looking in detail at the broader reasons for failure.
See also
List of military disasters
On the Psychology of Military Incompetence
Further reading
(also Pimlico, 1994 )
Military operations
Military theory
Military organization
Incompetence | Military incompetence | [
"Biology"
] | 436 | [
"Incompetence",
"Behavior",
"Human behavior"
] |
44,145 | https://en.wikipedia.org/wiki/Interquartile%20mean | The interquartile mean (IQM) (or midmean) is a statistical measure of central tendency based on the truncated mean of the interquartile range. The IQM is very similar to the scoring method used in sports that are evaluated by a panel of judges: discard the lowest and the highest scores; calculate the mean value of the remaining scores.
Calculation
In calculation of the IQM, only the data between the first and third quartiles is used, and the lowest 25% and the highest 25% of the data are discarded.
The IQM is then the arithmetic mean of the remaining central half of the data: $x_\mathrm{IQM} = \frac{2}{n} \sum_{i=\frac{n}{4}+1}^{\frac{3n}{4}} x_i$, assuming the values have been ordered.
Examples
Dataset size divisible by four
The method is best explained with an example. Consider the following dataset:
5, 8, 4, 38, 8, 6, 9, 7, 7, 3, 1, 6
First sort the list from lowest-to-highest:
1, 3, 4, 5, 6, 6, 7, 7, 8, 8, 9, 38
There are 12 observations (datapoints) in the dataset, thus we have 4 quartiles of 3 numbers. Discard the lowest and the highest 3 values:
5, 6, 6, 7, 7, 8 (after discarding the lowest three values, 1, 3, 4, and the highest three, 8, 9, 38)
We now have 6 of the 12 observations remaining; next, we calculate the arithmetic mean of these numbers:
xIQM = (5 + 6 + 6 + 7 + 7 + 8) / 6 = 6.5
This is the interquartile mean.
For comparison, the arithmetic mean of the original dataset is
(5 + 8 + 4 + 38 + 8 + 6 + 9 + 7 + 7 + 3 + 1 + 6) / 12 = 8.5
due to the strong influence of the outlier, 38.
Dataset size not divisible by four
The above example consisted of 12 observations in the dataset, which made the determination of the quartiles very easy. Of course, not all datasets have a number of observations that is divisible by 4. We can adjust the method of calculating the IQM to accommodate this. So ideally we want to have the IQM equal to the mean for symmetric distributions, e.g.:
1, 2, 3, 4, 5
has a mean value xmean = 3, and since it is a symmetric distribution, xIQM = 3 would be desired.
We can solve this by using a weighted average of the quartiles and the interquartile dataset:
Consider the following dataset of 9 observations:
1, 3, 5, 7, 9, 11, 13, 15, 17
There are 9/4 = 2.25 observations in each quartile, and 4.5 observations in the interquartile range. Truncate the fractional quartile size, and remove this number from the 1st and 4th quartiles (2.25 observations in each quartile, thus the lowest 2 and the highest 2 are removed).
1, 3, (5), 7, 9, 11, (13), 15, 17
Thus, there are 3 full observations in the interquartile range with a weight of 1 for each full observation, and 2 fractional observations with each observation having a weight of 0.75 (1-0.25 = 0.75). Thus we have a total of 4.5 observations in the interquartile range, (3×1 + 2×0.75 = 4.5 observations).
The IQM is now calculated as follows:
xIQM = {(7 + 9 + 11) + 0.75 × (5 + 13)} / 4.5 = 9
In the above example, the mean has a value xmean = 9, the same as the IQM, as expected. The method of calculating the IQM for any number of observations is analogous; the fractional contributions to the IQM can be either 0, 0.25, 0.50, or 0.75.
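The weighting scheme above translates directly into a short routine. The following is a minimal sketch (the function name and structure are illustrative, not a standard library API):

```python
def interquartile_mean(values):
    """Interquartile mean with fractional weights when n is not divisible by 4."""
    xs = sorted(values)
    n = len(xs)
    q = n / 4.0                  # observations per quartile (possibly fractional)
    whole = int(q)               # whole observations dropped from each end
    frac = 1.0 - (q - whole)     # weight of the two partially included values

    inner = xs[whole + 1 : n - whole - 1]           # fully weighted observations
    total = sum(inner) + frac * (xs[whole] + xs[n - whole - 1])
    return total / (n / 2.0)     # the interquartile range holds n/2 observations

# The worked examples from the text:
print(interquartile_mean([5, 8, 4, 38, 8, 6, 9, 7, 7, 3, 1, 6]))  # 6.5
print(interquartile_mean([1, 3, 5, 7, 9, 11, 13, 15, 17]))        # 9.0
```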
Comparison with mean and median
The interquartile mean shares some properties of both the mean and the median:
Like the median, the IQM is insensitive to outliers; in the example given, the highest value (38) was an obvious outlier of the dataset, but its value is not used in the calculation of the IQM. On the other hand, the common average (the arithmetic mean) is sensitive to these outliers: xmean = 8.5.
Like the mean, the IQM is a distinct parameter, based on a large number of observations from the dataset. The median is always equal to one of the observations in the dataset (assuming an odd number of observations). The mean can be equal to any value between the lowest and highest observation, depending on the value of all the other observations. The IQM can be equal to any value between the first and third quartiles, depending on all the observations in the interquartile range.
See also
Related statistics
Interquartile range
Mid-hinge
Trimean
Applications
The London Interbank Offered Rate estimated a reference interest rate as the interquartile mean of the rates offered by several banks. (SOFR, Libor's primary US replacement, uses a volume-weighted average price, which is not robust.)
Everything2 uses the interquartile mean of the reputations of a user's writeups to determine the quality of the user's contribution.
References
Means
Robust statistics | Interquartile mean | [
"Physics",
"Mathematics"
] | 1,144 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
44,158 | https://en.wikipedia.org/wiki/Conservative%20force | In physics, a conservative force is a force with the property that the total work done by the force in moving a particle between two points is independent of the path taken. Equivalently, if a particle travels in a closed loop, the total work done (the sum of the force acting along the path multiplied by the displacement) by a conservative force is zero.
A conservative force depends only on the position of the object. If a force is conservative, it is possible to assign a numerical value for the potential at any point and conversely, when an object moves from one location to another, the force changes the potential energy of the object by an amount that does not depend on the path taken, contributing to the mechanical energy and the overall conservation of energy. If the force is not conservative, then defining a scalar potential is not possible, because taking different paths would lead to conflicting potential differences between the start and end points.
Gravitational force is an example of a conservative force, while frictional force is an example of a non-conservative force.
Other examples of conservative forces are: force in elastic spring, electrostatic force between two electric charges, and magnetic force between two magnetic poles. The last two forces are called central forces as they act along the line joining the centres of two charged/magnetized bodies. A central force is conservative if and only if it is spherically symmetric.
For conservative forces,
$\vec{F} = -\nabla U(\vec{r}),$
where $\vec{F}$ is the conservative force, $U$ is the potential energy, and $\vec{r}$ is the position.
Informal definition
Informally, a conservative force can be thought of as a force that conserves mechanical energy. Suppose a particle starts at point A, and there is a force F acting on it. Then the particle is moved around by other forces, and eventually ends up at A again. Though the particle may still be moving, at that instant when it passes point A again, it has traveled a closed path. If the net work done by F at this point is 0, then F passes the closed path test. Any force that passes the closed path test for all possible closed paths is classified as a conservative force.
The gravitational force, spring force, magnetic force (according to some definitions, see below) and electric force (at least in a time-independent magnetic field, see Faraday's law of induction for details) are examples of conservative forces, while friction and air drag are classical examples of non-conservative forces.
For non-conservative forces, the mechanical energy that is lost (not conserved) has to go somewhere else, by conservation of energy. Usually the energy is turned into heat, for example the heat generated by friction. In addition to heat, friction also often produces some sound energy. The water drag on a moving boat converts the boat's mechanical energy into not only heat and sound energy, but also wave energy at the edges of its wake. These and other energy losses are irreversible because of the second law of thermodynamics.
Path independence
A direct consequence of the closed path test is that the work done by a conservative force on a particle moving between any two points does not depend on the path taken by the particle.
This is illustrated in the figure to the right: The work done by the gravitational force on an object depends only on its change in height because the gravitational force is conservative. The work done by a conservative force is equal to the negative of change in potential energy during that process. For a proof, imagine two paths 1 and 2, both going from point A to point B. The variation of energy for the particle, taking path 1 from A to B and then path 2 backwards from B to A, is 0; thus, the work is the same in path 1 and 2, i.e., the work is independent of the path followed, as long as it goes from A to B.
For example, if a child slides down a frictionless slide, the work done by the gravitational force on the child from the start of the slide to the end is independent of the shape of the slide; it only depends on the vertical displacement of the child.
Mathematical description
A force field F, defined everywhere in space (or within a simply-connected volume of space), is called a conservative force or conservative vector field if it meets any of these three equivalent conditions:
The curl of F is the zero vector: $\nabla \times \vec{F} = \vec{0}$, where in two dimensions this reduces to: $\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} = 0$
There is zero net work (W) done by the force when moving a particle through a trajectory that starts and ends in the same place: $W = \oint_C \vec{F} \cdot \mathrm{d}\vec{r} = 0$
The force can be written as the negative gradient of a potential, $\Phi$: $\vec{F} = -\nabla \Phi$
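To make the equivalence of these conditions concrete, the following symbolic sketch (an illustration, using an inverse-square central force with its constant set to 1) builds a force from a potential, so condition 3 holds by construction, and verifies that its curl vanishes:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An inverse-square (gravity-like) central force via its potential U = -1/r.
r = sp.sqrt(x**2 + y**2 + z**2)
U = -1 / r
F = [-sp.diff(U, v) for v in (x, y, z)]   # condition 3: F = -grad(U) by construction

# Condition 1: the curl of F is the zero vector.
curl = [
    sp.simplify(sp.diff(F[2], y) - sp.diff(F[1], z)),
    sp.simplify(sp.diff(F[0], z) - sp.diff(F[2], x)),
    sp.simplify(sp.diff(F[1], x) - sp.diff(F[0], y)),
]
print(curl)  # [0, 0, 0]
```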
The term conservative force comes from the fact that when a conservative force exists, it conserves mechanical energy. The most familiar conservative forces are gravity, the electric force (in a time-independent magnetic field, see Faraday's law), and spring force.
Many forces (particularly those that depend on velocity) are not force fields. In these cases, the above three conditions are not mathematically equivalent. For example, the magnetic force satisfies condition 2 (since the work done by a magnetic field on a charged particle is always zero), but does not satisfy condition 3, and condition 1 is not even defined (the force is not a vector field, so one cannot evaluate its curl). Accordingly, some authors classify the magnetic force as conservative, while others do not. The magnetic force is an unusual case; most velocity-dependent forces, such as friction, do not satisfy any of the three conditions, and therefore are unambiguously nonconservative.
Non-conservative force
Despite conservation of total energy, non-conservative forces can arise in classical physics due to neglected degrees of freedom or from time-dependent potentials. Many non-conservative forces may be perceived as macroscopic effects of small-scale conservative forces. For instance, friction may be treated without violating conservation of energy by considering the motion of individual molecules; however, that means every molecule's motion must be considered rather than handling it through statistical methods. For macroscopic systems the non-conservative approximation is far easier to deal with than millions of degrees of freedom.
Examples of non-conservative forces are friction and non-elastic material stress. Friction has the effect of transferring some of the energy from the large-scale motion of the bodies to small-scale movements in their interior, and therefore appears non-conservative on a large scale. General relativity is non-conservative, as seen in the anomalous precession of Mercury's orbit. However, general relativity does conserve a stress–energy–momentum pseudotensor.
See also
Conservative vector field
Conservative system
References
Force | Conservative force | [
"Physics",
"Mathematics"
] | 1,339 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
44,179 | https://en.wikipedia.org/wiki/Satellite%20temperature%20measurement | Satellite temperature measurements are inferences of the temperature of the atmosphere at various altitudes as well as sea and land surface temperatures obtained from radiometric measurements by satellites. These measurements can be used to locate weather fronts, monitor the El Niño-Southern Oscillation, determine the strength of tropical cyclones, study urban heat islands and monitor the global climate. Wildfires, volcanos, and industrial hot spots can also be found via thermal imaging from weather satellites.
Weather satellites do not measure temperature directly. They measure radiances in various wavelength bands. Since 1978 microwave sounding units (MSUs) on National Oceanic and Atmospheric Administration polar orbiting satellites have measured the intensity of upwelling microwave radiation from atmospheric oxygen, which is related to the temperature of broad vertical layers of the atmosphere. Measurements of infrared radiation pertaining to sea surface temperature have been collected since 1967.
Satellite datasets show that over the past four decades the troposphere has warmed and the stratosphere has cooled. Both of these trends are consistent with the influence of increasing atmospheric concentrations of greenhouse gases.
Principles
Satellites measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data have produced differing temperature datasets.
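As a simplified illustration of such an inversion (a single-channel sketch, not any group's operational retrieval; the example radiance value is hypothetical), a measured radiance can be converted to an equivalent blackbody "brightness temperature" by inverting the Planck function:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def brightness_temperature(radiance, wavelength):
    """Invert the Planck function: spectral radiance (W m^-2 sr^-1 m^-1)
    at a given wavelength (m) -> equivalent blackbody temperature (K)."""
    a = 2 * H * C**2 / (wavelength**5 * radiance)
    return H * C / (wavelength * KB * math.log(a + 1))

# Hypothetical radiance for an 11-micrometre infrared "window" channel:
print(round(brightness_temperature(8.4e6, 11e-6)))  # roughly 291 K
```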
The satellite time series is not homogeneous. It is constructed from a series of satellites with similar but not identical sensors. The sensors also deteriorate over time, and corrections are necessary for orbital drift and decay. Particularly large differences between reconstructed temperature series occur at the few times when there is little temporal overlap between successive satellites, making intercalibration difficult.
Infrared measurements
Surface measurements
Infrared radiation can be used to measure both the temperature of the surface (using "window" wavelengths to which the atmosphere is transparent), and the temperature of the atmosphere (using wavelengths for which the atmosphere is not transparent, or measuring cloud top temperatures in infrared windows).
Satellites used to retrieve surface temperatures via measurement of thermal infrared in general require cloud-free conditions. Some of the instruments include the Advanced Very High Resolution Radiometer (AVHRR), the Along Track Scanning Radiometers (AATSR), the Visible Infrared Imaging Radiometer Suite (VIIRS), the Atmospheric Infrared Sounder (AIRS), and the ACE Fourier Transform Spectrometer (ACE‐FTS) on the Canadian SCISAT-1 satellite.
Weather satellites have been available to infer sea surface temperature (SST) information since 1967, with the first global composites occurring during 1970. Since 1982, satellites have been increasingly utilized to measure SST and have allowed its spatial and temporal variation to be viewed more fully. For example, changes in SST monitored via satellite have been used to document the progression of the El Niño-Southern Oscillation since the 1970s.
Over land the retrieval of temperature from radiances is harder, because of inhomogeneities in the surface. Studies have been conducted on the urban heat island effect via satellite imagery; using the fractal technique, Weng, Q. et al. characterized the spatial pattern of the urban heat island. Advanced very high resolution infrared satellite imagery can be used, in the absence of cloudiness, to detect density discontinuities (weather fronts) such as cold fronts at ground level. Using the Dvorak technique, infrared satellite imagery can be used to determine the temperature difference between the eye and the cloud top temperature of the central dense overcast of mature tropical cyclones to estimate their maximum sustained winds and their minimum central pressures.
Along Track Scanning Radiometers aboard weather satellites are able to detect wildfires, which show up at night as pixels with a greater temperature than . The Moderate-Resolution Imaging Spectroradiometer aboard the Terra satellite can detect thermal hot spots associated with wildfires, volcanoes, and industrial hot spots.
The Atmospheric Infrared Sounder on the Aqua satellite, launched in 2002, uses infrared detection to measure near-surface temperature.
Stratosphere measurements
Stratospheric temperature measurements are made from the Stratospheric Sounding Unit (SSU) instruments, which are three-channel infrared (IR) radiometers. Since this measures infrared emission from carbon dioxide, the atmospheric opacity is higher and hence the temperature is measured at a higher altitude (stratosphere) than microwave measurements.
Since 1979 the Stratospheric sounding units (SSUs) on the NOAA operational satellites have provided near global stratospheric temperature data above the lower stratosphere.
The SSU is a far-infrared spectrometer employing a pressure modulation technique to make measurements in three channels in the 15 μm carbon dioxide absorption band. The three channels use the same frequency but different carbon dioxide cell pressures; the corresponding weighting functions peak at 29 km for channel 1, 37 km for channel 2, and 45 km for channel 3.
The process of deriving trends from SSU measurements has proved particularly difficult because of satellite drift, inter-calibration between different satellites with scant overlap, and gas leaks in the instruments' carbon dioxide pressure cells. Furthermore, since the radiances measured by SSUs are due to emission by carbon dioxide, the weighting functions move to higher altitudes as the carbon dioxide concentration in the stratosphere increases.
Mid-to-upper stratospheric temperatures show a strong negative trend, interspersed by transient volcanic warming after the explosive eruptions of El Chichón and Mount Pinatubo; little temperature trend has been observed since 1995.
The greatest cooling occurred in the tropical stratosphere, consistent with an enhanced Brewer-Dobson circulation as greenhouse gas concentrations increase.
Lower stratospheric cooling is mainly caused by the effects of ozone depletion, with a possible contribution from increased stratospheric water vapor and greenhouse gas increases. There has been a decline in stratospheric temperatures, interspersed by warmings related to volcanic eruptions. Global warming theory suggests that the stratosphere should cool while the troposphere warms.
The long-term cooling in the lower stratosphere occurred in two downward steps in temperature, each following the transient warming related to the explosive volcanic eruptions of El Chichón and Mount Pinatubo; this behavior of the global stratospheric temperature has been attributed to variation in global ozone concentration in the two years following volcanic eruptions.
Since 1996 the trend has been slightly positive, due to ozone recovery juxtaposed with a cooling trend of 0.1 K/decade that is consistent with the predicted impact of increased greenhouse gases.
The table below shows the stratospheric temperature trend from the SSU measurements in the three different bands, where a negative trend indicates cooling.
Microwave (tropospheric and stratospheric) measurements
Microwave Sounding Unit (MSU) measurements
From 1979 to 2005 the microwave sounding units (MSUs) and since 1998 the Advanced Microwave Sounding Units on NOAA polar orbiting weather satellites have measured the intensity of upwelling microwave radiation from atmospheric oxygen. The intensity is proportional to the temperature of broad vertical layers of the atmosphere. Upwelling radiance is measured at different frequencies; these different frequency bands sample a different weighted range of the atmosphere.
Different wavelength reconstructions from the satellite measurements sample different atmospheric levels, with TLS, TTS, and TTT representing three different wavelength channels.
Other microwave measurements
A different technique is used by the Microwave Limb Sounder on the Aura spacecraft, which measures microwave emission horizontally, rather than aiming at the nadir.
Temperature measurements are also made by GPS radio occultation. This technique measures the refraction of the radio waves transmitted by GPS satellites as they propagate in the Earth's atmosphere, thus allowing vertical temperature and moisture profiles to be measured.
Temperature measurements on other planets
Planetary science missions also make temperature measurements on other planets and moons of the solar system, using both infrared techniques (typical of orbiter and flyby missions of planets with solid surfaces) and microwave techniques (more often used for planets with atmospheres). Infrared temperature measurement instruments used in planetary missions include surface temperature measurements taken by the Thermal Emission Spectrometer (TES) instrument on Mars Global Surveyor and the Diviner instrument on the Lunar Reconnaissance Orbiter; and atmospheric temperature measurements taken by the composite infrared spectrometer instrument on the NASA Cassini spacecraft.
Microwave atmospheric temperature measurement instruments include the Microwave Radiometer on the Juno mission to Jupiter.
See also
Atmospheric sounding
Instrumental temperature record
Sea surface temperature
Temperature record
Outgoing longwave radiation
References
External links
A graph comparing of the surface, balloon and satellite records (2007 archive)
Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences CCSP Synthesis and Assessment Product 1.1
What Microwaves Teach Us About the Atmosphere
Globally Averaged Atmospheric Temperatures
Satellite meteorology
Articles containing video clips
Temperature | Satellite temperature measurement | [
"Physics",
"Chemistry"
] | 1,789 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
44,189 | https://en.wikipedia.org/wiki/Reciprocal%20altruism | In evolutionary biology, reciprocal altruism is a behaviour whereby an organism acts in a manner that temporarily reduces its fitness while increasing another organism's fitness, with the expectation that the other organism will act in a similar manner at a later time.
The concept was initially developed by Robert Trivers to explain the evolution of cooperation as instances of mutually altruistic acts. The concept is close to the strategy of "tit for tat" used in game theory. In 1987, Trivers presented at a symposium on reciprocity, noting that he initially titled his article "The Evolution of Delayed Return Altruism," but reviewer W. D. Hamilton suggested renaming it "The Evolution of Reciprocal Altruism." While Trivers adopted the new title, he retained the original examples, causing confusion about reciprocal altruism for decades. Rothstein and Pierotti (1988) addressed this issue at the symposium, proposing new definitions that clarified the concepts. They argued that Delayed Return Altruism was a superior term and introduced "pseudo-reciprocity" to replace it.
Theory
The concept of "reciprocal altruism", as introduced by Trivers, suggests that altruism, defined as an act of helping another individual while incurring some cost for this act, could have evolved since it might be beneficial to incur this cost if there is a chance of being in a reverse situation where the individual who was helped before may perform an altruistic act towards the individual who helped them initially. This concept finds its roots in the work of W.D. Hamilton, who developed mathematical models for predicting the likelihood of an altruistic act to be performed on behalf of one's kin.
Putting this into the form of a strategy in a repeated prisoner's dilemma would mean to cooperate unconditionally in the first period and behave cooperatively (altruistically) as long as the other agent does as well. If chances of meeting another reciprocal altruist are high enough, or if the game is repeated for a long enough amount of time, this form of altruism can evolve within a population.
This is close to the notion of "tit for tat" introduced by Anatol Rapoport, although there is still a slight distinction: "tit for tat" cooperates in the first period and from then on always replicates an opponent's previous action, whereas "reciprocal altruists" stop cooperating at the first instance of non-cooperation by an opponent and remain non-cooperative thereafter. This distinction means that, in contrast to reciprocal altruism, tit for tat may be able to restore cooperation under certain conditions despite cooperation having broken down.
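This behavioral difference can be made concrete with a small simulation (a sketch: the strategy implementations and the one-slip opponent are illustrative constructions, not drawn from the literature):

```python
def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not their_hist else their_hist[-1]

def reciprocal_altruist(my_hist, their_hist):
    # As described above: cooperate until the opponent's first defection,
    # then stay non-cooperative from then on.
    return 'D' if 'D' in their_hist else 'C'

def slips_once(my_hist, their_hist):
    # An otherwise cooperative opponent that defects exactly once (round 3).
    return 'D' if len(my_hist) == 2 else 'C'

def play(strat_a, strat_b, rounds=7):
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return list(zip(hist_a, hist_b))

print(play(tit_for_tat, slips_once))
# [('C','C'), ('C','C'), ('C','D'), ('D','C'), ('C','C'), ('C','C'), ('C','C')]
# Tit for tat retaliates once, then mutual cooperation is restored.
print(play(reciprocal_altruist, slips_once))
# [('C','C'), ('C','C'), ('C','D'), ('D','C'), ('D','C'), ('D','C'), ('D','C')]
# The reciprocal altruist never cooperates again after the single defection.
```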
Christopher Stephens shows a set of necessary and jointly sufficient conditions "... for an instance of reciprocal altruism:
the behaviour must reduce a donor's fitness relative to a selfish alternative;
the fitness of the recipient must be elevated relative to non-recipients;
the performance of the behaviour must not depend on the receipt of an immediate benefit;
conditions 1, 2, and 3 must apply to both individuals engaging in reciprocal helping.
There are two additional conditions necessary "...for reciprocal altruism to evolve:"
A mechanism for detecting 'cheaters' must exist.
A large (indefinite) number of opportunities to exchange aid must exist.
The first two conditions are necessary for altruism as such, while the third distinguishes reciprocal altruism from simple mutualism, and the fourth makes the interaction reciprocal.
Condition number five is required as otherwise non-altruists may always exploit altruistic behaviour without any consequences and therefore evolution of reciprocal altruism would not be possible. However, it is pointed out that this "conditioning device" does not need to be conscious. Condition number six is required to avoid cooperation breakdown through forward induction—a possibility suggested by game theoretical models.
In 1987, Trivers told a symposium on reciprocity that he had originally submitted his article under the title "The Evolution of Delayed Return Altruism", but reviewer W. D. Hamilton suggested that he change the title to "The Evolution of Reciprocal Altruism". Trivers changed the title, but not the examples in the manuscript, which has led to confusion about what were appropriate examples of reciprocal altruism for the last 50 years. In their contribution to that symposium, Rothstein and Pierotti (1988) addressed this issue and proposed new definitions concerning the topic of altruism, that clarified the issue created by Trivers and Hamilton. They proposed that Delayed Return Altruism was a superior concept and used the term pseudo-reciprocity in place of DRA.
Examples
The following examples could be understood as altruism. However, showing reciprocal altruism in an unambiguous way requires more evidence as will be shown later.
Cleaner fish
An example of reciprocal altruism is cleaning symbiosis, such as between cleaner fish and their hosts, though cleaners include shrimps and birds, and clients include fish, turtles, octopuses and mammals. Aside from the apparent symbiosis of the cleaner and the host during actual cleaning, which cannot be interpreted as altruism, the host displays additional behaviour that meets the criteria for delayed return altruism:
The host fish allows the cleaner fish free entrance and exit and does not eat the cleaner, even after the cleaning is done. The host signals to the cleaner that it is about to depart the cleaner's locality, even when the cleaner is not in its body. The host sometimes chases off possible dangers to the cleaner.
The following evidence supports the hypothesis:
The cleaning by cleaners is essential for the host. In the absence of cleaners the hosts leave the locality or suffer from injuries inflicted by ectoparasites. There is difficulty and danger in finding a cleaner. Hosts leave their element to get cleaned. Others wait no longer than 30 seconds before searching for cleaners elsewhere.
A key requirement for the establishment of reciprocal altruism is that the same two individuals must interact repeatedly, as otherwise the best strategy for the host would be to eat the cleaner as soon as cleaning was complete. This constraint imposes both a spatial and a temporal condition on the cleaner and on its host. Both individuals must remain in the same physical location, and both must have a long enough lifespan, to enable multiple interactions. There is reliable evidence that individual cleaners and hosts do indeed interact repeatedly.
This example meets some, but not all, of the criteria described in Trivers's model. In the cleaner-host system the benefit to the cleaner is always immediate. However, the evolution of reciprocal altruism is contingent on opportunities for future rewards through repeated interactions. In one study, nearby host fish observed "cheater" cleaners and subsequently avoided them. In these examples, true reciprocity is difficult to demonstrate since failure means the death of the cleaner. However, if Randall's claim that hosts sometimes chase off possible dangers to the cleaner is correct, an experiment might be constructed in which reciprocity could be demonstrated. In actuality this is one of Trivers' examples of Delayed Return Altruism as discussed by Rothstein and Pierotti 1988.
Warning calls in birds
Warning calls, although exposing a bird and putting it in danger, are frequently given by birds. An explanation in terms of altruistic behaviors given by Trivers:
It has been shown that predators learn specific localities and specialize individually on prey types and hunting techniques.
It is therefore disadvantageous for a bird to have a predator eat a conspecific, because the experienced predator may then be more likely to eat them. Alarming another bird by giving a warning call tends to prevent predators from specializing on the caller's species and locality. In this way, birds in areas in which warning calls are given will be at a selective advantage relative to birds in areas free from warning calls.
Nevertheless, this presentation lacks important elements of reciprocity. It is very hard to detect and ostracize cheaters. There is no evidence that a bird refrains from giving calls when another bird is not reciprocating, nor evidence that individuals interact repeatedly. Given the aforementioned characteristics of bird calling, a continuous bird emigration and immigration environment (true of many avian species) is most likely to be partial to cheaters, since selection against the selfish gene is unlikely.
Another explanation for warning calls is that these are not warning calls at all:
A bird, once it has detected a bird of prey, calls to signal to the bird of prey that it was detected, and that there is no use trying to attack the calling bird. Two facts support this hypothesis:
The call frequencies match the hearing range of the predator bird.
Calling birds are less attacked—predator birds attack calling birds less frequently than other birds.
Nest protecting
Red-winged blackbird males help defend neighbors' nests. There are many theories as to why males behave this way. One is that males defend only nests which contain their extra-pair offspring. Extra-pair offspring are juveniles which may contain some of the male bird's DNA. Another is the tit-for-tat strategy of reciprocal altruism. A third is that males help only other closely related males. A study done by the Department of Fisheries and Wildlife provided evidence that males used a tit-for-tat strategy. The researchers tested many different nests by placing stuffed crows by nests and then observing the behavior of neighboring males. The behaviors they looked for included the number of calls, dives, and strikes. After analyzing the results, there was no significant evidence for kin selection: the presence of extra-pair offspring did not affect the probability of help in nest defense. However, males reduced the amount of defense given to neighbors when neighbor males reduced defense for their nests. This demonstrates a tit-for-tat strategy, where animals help those who previously helped them. This strategy is one type of reciprocal altruism.
Vampire bats
Vampire bats also display reciprocal altruism, as described by Wilkinson.
The bats feed each other by regurgitating blood. Since bats only feed on blood and will die after just 70 hours of not eating, this food sharing is a great benefit to the receiver and a great cost to the giver.
To qualify for reciprocal altruism, the benefit to the receiver would have to be larger than the cost to the donor. This seems to hold as these bats usually die if they do not find a blood meal two nights in a row. Also, the requirement that individuals who have behaved altruistically in the past are helped by others in the future is confirmed by the data. However, the consistency of the reciprocal behaviour, namely that a previously non-altruistic bat is refused help when it requires it, has not been demonstrated. Therefore, the bats do not seem to qualify yet as an unequivocal example of reciprocal altruism.
Primates
Grooming in primates meets the conditions for reciprocal altruism according to some studies. One study in vervet monkeys shows that, among unrelated individuals, grooming induces a higher chance of attending to each other's calls for aid. However, vervet monkeys also display grooming behaviors within group members, displaying alliances. This would demonstrate vervet monkeys' grooming behavior as a part of kin selection, since the activity is done between siblings in this study. Moreover, following the criteria given by Stephens, if the study is to be an example of reciprocal altruism, it must demonstrate the mechanism for detecting cheaters.
Bacteria
Numerous species of bacteria engage in reciprocal altruistic behaviors with other species. Typically, this takes the form of bacteria providing essential nutrients for another species, while the other species provides an environment for the bacteria to live in. Reciprocal altruism is exhibited between nitrogen-fixing bacteria and plants in which they reside. Additionally, it can be observed between bacteria and some species of flies such as Bactrocera tryoni. These flies consume nutrient-producing bacteria found on the leaves of plants; in exchange, they reside within the flies' digestive system. This reciprocal altruistic behavior has been exploited by techniques designed to eliminate B. tryoni, which are fruit fly pests native to Australia.
Humans
Exceptions
Some animals seem to be unable to develop reciprocal altruism. For example, in a prisoner's dilemma game against a computer, pigeons defect rather than responding randomly or playing tit-for-tat. This may be due to favoring short-term over long-term thinking.
Regulation by emotional disposition
In comparison to that of other animals, the human altruistic system is a sensitive and unstable one. Therefore, the tendency to give, to cheat, and the response to other's acts of giving and cheating must be regulated by a complex psychology in each individual, social structures, and cultural traditions. Individuals differ in the degree of these tendencies and responses.
According to Trivers, the following emotional dispositions and their evolution can be understood in terms of regulation of altruism.
Friendship and emotions of liking and disliking.
Moralistic aggression. A protection mechanism from cheaters acts to regulate the advantage of cheaters in selection against altruists. The moralistic altruist may want to educate or even punish a cheater.
Gratitude and sympathy. A fine regulation of altruism can be associated with gratitude and sympathy in terms of cost/benefit and the level in which the beneficiary will reciprocate.
Guilt and reparative altruism. Prevents the cheater from cheating again. The cheater shows regret to avoid paying too dearly for past acts.
Subtle cheating. A stable evolutionary equilibrium could include a low percentage of mimics in controversial support of adaptive sociopathy.
Trust and suspicion. These are regulators for cheating and subtle cheating.
Partnerships. Altruism to create friendships.
It is not known how individuals pick partners as there has been little research on choice. Modeling indicates that altruism about partner choices is unlikely to evolve, as costs and benefits between multiple individuals are variable. Therefore, the time or frequency of reciprocal actions contributes more to an individual's choice of partner than the reciprocal act itself.
See also
Altruism (biology)
Collaboration
The common good
Competitive altruism
Enlightened self-interest
Evolutionary models of food sharing
Gift economy
Helping behavior
Koinophilia
Mutual Aid: A Factor of Evolution (1902)
Norm of reciprocity
Prosocial behavior
Psychological egoism
Reciprocity (social psychology)
Reciprocity (evolution)
Signalling theory
References
Evolutionary biology
Symbiosis
Evolutionary psychology
Altruism | Reciprocal altruism | [
"Biology"
] | 2,984 | [
"Evolutionary biology",
"Behavior",
"Symbiosis",
"Biological interactions",
"Altruism"
] |
44,190 | https://en.wikipedia.org/wiki/The%20Selfish%20Gene | The Selfish Gene is a 1976 book on evolution by ethologist Richard Dawkins that promotes the gene-centred view of evolution, as opposed to views focused on the organism and the group. The book builds upon the thesis of George C. Williams's Adaptation and Natural Selection (1966); it also popularized ideas developed during the 1960s by W. D. Hamilton and others. From the gene-centred view, it follows that the more two individuals are genetically related, the more sense (at the level of the genes) it makes for them to behave cooperatively with each other.
A lineage is expected to evolve to maximise its inclusive fitness—the number of copies of its genes passed on globally (rather than by a particular individual). As a result, populations will tend towards an evolutionarily stable strategy. The book also introduces the term meme for a unit of human cultural evolution analogous to the gene, suggesting that such "selfish" replication may also model human culture, in a different sense. Memetics has become the subject of many studies since the publication of the book. In raising awareness of Hamilton's ideas, as well as making its own valuable contributions to the field, the book has also stimulated research on human inclusive fitness.
Dawkins uses the term "selfish gene" as a way of expressing the gene-centred view of evolution. As such, the book is not about a particular gene that causes selfish behaviour; in fact, much of the book's content is devoted to explaining the evolution of altruism. In the foreword to the book's 30th-anniversary edition, Dawkins said he "can readily see that [the book's title] might give an inadequate impression of its contents" and in retrospect thinks he should have taken Tom Maschler's advice and called the book The Immortal Gene.
In July 2017, a poll to celebrate the 30th anniversary of the Royal Society science book prize listed The Selfish Gene as the most influential science book of all time.
Background
Dawkins builds upon George C. Williams's book Adaptation and Natural Selection (1966), which argued that altruism is not based upon group benefit per se, but results from selection that occurs "at the level of the gene mediated by the phenotype" and that any selection at the group level occurred only under rare circumstances. W. D. Hamilton and others developed this approach further during the 1960s; they opposed the concepts of group selection and of selection aimed directly at benefit to the individual organism:
Despite the principle of 'survival of the fittest' the ultimate criterion which determines whether [a gene] G will spread is not whether the behavior is to the benefit of the behaver, but whether it is to the benefit of the gene G ...With altruism this will happen only if the affected individual is a relative of the altruist, therefore having an increased chance of carrying the gene.
— W. D. Hamilton, The Evolution of Altruistic Behavior (1963), pp. 354–355.
Wilkins and Hull (2014) provide an extended discussion of Dawkins's views and of his book The Selfish Gene.
Book
Contents
Dawkins begins by discussing the altruism that people display, indicating that he will argue it is explained by gene selfishness, and attacking group selection as an explanation. He considers the origin of life with the arrival of molecules able to replicate themselves. From there, he looks at DNA's role in evolution, and its organisation into chromosomes and genes, which in his view behave selfishly. He describes organisms as apparently purposive but fundamentally simple survival machines, which use negative feedback to achieve control. This extends, he argues, to the brain's ability to simulate the world with subjective consciousness, and signalling between species. He then introduces the idea of the evolutionarily stable strategy, and uses it to explain why alternative competitive strategies like bullying and retaliating exist. This allows him to consider what selfishness in a gene might actually mean, describing W. D. Hamilton's argument for kin selection, that genes for behaviour that improves the survival chances of close relatives can spread in a population, because those relatives carry the same genes.
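The evolutionarily stable strategy idea can be sketched numerically with the classic hawk-dove contest (a toy model; the payoff values V and C and the update step are illustrative, not taken from the book). The population settles at the stable mix of V/C hawks:

```python
V, C = 2.0, 4.0  # resource value and injury cost (illustrative numbers)

def payoff_hawk(p):
    # Expected payoff to a hawk when a fraction p of the population are hawks:
    # hawks split (V - C) when they fight each other, and take V from doves.
    return p * (V - C) / 2 + (1 - p) * V

def payoff_dove(p):
    # Doves get nothing against hawks and share the resource with other doves.
    return (1 - p) * V / 2

p = 0.01  # start with hawks rare
for _ in range(5000):
    fh, fd = payoff_hawk(p), payoff_dove(p)
    avg = p * fh + (1 - p) * fd
    p += 0.01 * p * (fh - avg)  # small-step replicator-style update

print(round(p, 3))  # ~0.5, i.e. V / C: the evolutionarily stable mix
```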
Dawkins examines childbearing and raising children as evolutionary strategies. He attacks the idea of group selection for the good of the species as proposed by V. C. Wynne-Edwards, arguing instead that each parent necessarily behaves selfishly. He asks whether parents should invest in their offspring equally or should favour some of them, and explains that what is best for the survival of the parents' genes is not always best for individual children. Similarly, Dawkins argues, there are conflicts of interest between males and females, but he notes that R. A. Fisher showed that the optimal sex ratio is 50:50. He explains that this is true even in an extreme case like the harem-keeping elephant seal, where 4% of the males get 88% of copulations. In that case, the strategy of having a female offspring is safe, as she will have a pup, but the strategy of having a male can bring a large return (dozens of pups), even though many males live out their lives as bachelors. Amotz Zahavi's theory of honest signalling explains stotting as a selfish act, he argues, improving the springbok's chances of escaping from a predator by indicating how difficult the chase would be.
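Fisher's equal-investment argument, alluded to above, can be written out as a short piece of bookkeeping. The following schematic derivation is a standard textbook version, not a passage from the book:

```latex
% Schematic version of Fisher's sex-ratio argument (standard textbook form).
% Let N_m and N_f be the numbers of males and females in the parental generation,
% and let T be the total number of offspring they produce. Every offspring has
% exactly one father and one mother, so the expected reproductive return is
%   per son:      T / N_m
%   per daughter: T / N_f
% If sons are rarer (N_m < N_f), a son yields more grandchildren on average, so
% parents biased toward sons are favoured, and vice versa. The ratio is stable
% only where the two returns are equal:
\[
  \frac{T}{N_m} \;=\; \frac{T}{N_f}
  \quad\Longleftrightarrow\quad
  N_m = N_f .
\]
% The argument is unaffected by harem systems such as the elephant seal's: the
% few successful males' many pups and the many unsuccessful males' zero pups
% still average to T / N_m.
```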
Dawkins discusses why many species live in groups, achieving mutual benefits through mechanisms such as Hamilton's selfish herd model: each individual behaves selfishly but the result is herd behaviour. Altruism too can evolve, as in the social insects such as ants and bees, where workers give up the right to reproduce in favour of a sister, the queen; in their case, the unusual (haplodiploid) system of sex determination may have helped to bring this about, as females in a nest are exceptionally closely related.
The final chapter of the first edition introduced the idea of the meme, a culturally-transmitted entity such as a hummable tune, by analogy to genetic transmission. Dawkins describes God as an old idea which probably arose many times, and which has sufficient psychological appeal to survive effectively in the meme pool. The second edition (1989) added two more chapters.
Themes
"Selfish" genes
In describing genes as being "selfish", Dawkins states unequivocally that he does not intend to imply that they are driven by any motives or will, but merely that their effects can be metaphorically and pedagogically described as if they were. His contention is that the genes that are passed on are the ones whose evolutionary consequences serve their own implicit interest (to continue the anthropomorphism) in being replicated, not necessarily those of the organism. In later work, Dawkins brings evolutionary "selfishness" down to creation of a widely proliferated extended phenotype.
For some, the metaphor of "selfishness" is entirely clear, while to others it is confusing, misleading, or simply silly to ascribe mental attributes to something that is mindless. For example, Andrew Brown has written:
""Selfish", when applied to genes, doesn't mean "selfish" at all. It means, instead, an extremely important quality for which there is no good word in the English language: "the quality of being copied by a Darwinian selection process." This is a complicated mouthful. There ought to be a better, shorter word—but "selfish" isn't it."
Donald Symons also finds it inappropriate to use anthropomorphism in conveying scientific meaning in general, and particularly in this instance. He writes in The Evolution of Human Sexuality (1979):
"In summary, the rhetoric of The Selfish Gene exactly reverses the real situation: through [the use of] metaphor genes are endowed with properties only sentient beings can possess, such as selfishness, while sentient beings are stripped of these properties and called machines...The anthropomorphism of genes...obscures the deepest mystery in the life sciences: the origin and nature of mind."
"Replicators"
Dawkins proposes the idea of the "replicator":
"It is finally time to return to the problem with which we started, to the tension between individual organism and gene as rival candidates for the central role in natural selection...One way of sorting this whole matter out is to use the terms 'replicator' and 'vehicle'. The fundamental units of natural selection, the basic things that survive or fail to survive, that form lineages of identical copies with occasional random mutations, are called replicators. DNA molecules are replicators. They generally, for reasons that we shall come to, gang together into large communal survival machines or 'vehicles'."
— Richard Dawkins, The Selfish Gene, p. 253 (Anniversary Edition)
The original replicator (Dawkins Replicator) was the initial molecule which first managed to reproduce itself and thus gained an advantage over other molecules within the primordial soup. As replicating molecules became more complex, Dawkins postulates, the replicators became the genes within organisms, with each organism's body serving the purpose of a 'survival machine' for its genes.
Dawkins writes that gene combinations which help an organism to survive and reproduce tend to also improve the gene's own chances of being replicated, and, as a result, "successful" genes frequently provide a benefit to the organism. An example of this might be a gene that protects the organism against a disease. This helps the gene spread, and also helps the organism.
Genes vs organisms
There are other times when the implicit interests of the vehicle and replicator are in conflict, such as the genes behind certain male spiders' instinctive mating behaviour, which increase the organism's inclusive fitness by allowing it to reproduce but shorten its life by exposing it to the risk of being eaten by the cannibalistic female. Another example is the existence of segregation distorter genes that are detrimental to their host, but nonetheless propagate themselves at its expense. Likewise, the persistence of junk DNA that [Dawkins believed at that time] provides no benefit to its host can be explained on the basis that it is not subject to selection. These unselected for but transmitted DNA variations connect the individual genetically to its parents but confer no survival benefit.
These examples might suggest that there is a power struggle between genes and their interactor. In fact, the claim is that there isn't much of a struggle because the genes usually win without a fight. However, the claim is made that if the organism becomes intelligent enough to understand its own interests, as distinct from those of its genes, there can be true conflict.
An example of such a conflict might be a person using birth control to prevent fertilisation, thereby inhibiting the replication of his or her genes. But this action might not be a conflict of the 'self-interest' of the organism with his or her genes, since a person using birth control might also be enhancing the survival chances of their genes by limiting family size to conform with available resources, thus avoiding extinction as predicted under the Malthusian model of population growth.
Altruism
Dawkins says that his "purpose" in writing The Selfish Gene is "to examine the biology of selfishness and altruism." He does this by supporting the claim that "gene selfishness will usually give rise to selfishness in individual behaviour. However, as we shall see, there are special circumstances in which a gene can achieve its own selfish goals best by fostering a limited form of altruism at the level of individual animals." Gene selection provides one explanation for kin selection and eusociality, where organisms act altruistically, against their individual interests (in the sense of health, safety or personal reproduction), namely the argument that by helping related organisms reproduce, a gene succeeds in "helping" copies of itself (or sequences with the same phenotypic effect) in other bodies to replicate. The claim is made that these "selfish" actions of genes lead to unselfish actions by organisms. A requirement upon this claim, supported by Dawkins in Chapter 10 ("You scratch my back, I'll ride on yours") with examples from nature, is the need to explain how genes achieve kin recognition, or manage to orchestrate mutualism and coevolution. Although Dawkins (and biologists in general) recognizes that these phenomena result in more copies of a gene, evidence is inconclusive whether this success is selected for at a group or individual level. In fact, Dawkins has proposed that it is at the level of the extended phenotype:
"We agree [referring to Wilson and Sober's book Unto others: The evolution and psychology of unselfish behavior] that genes are replicators, organisms and groups are not. We agree that the group selection controversy ought to be a controversy about groups as vehicles, and we could easily agree to differ on the answer...I coined the [term] vehicle not to praise it but to bury it...Darwinism can work on replicators whose phenotypic effects (interactors) are too diffuse, too multi-levelled, too incoherent to deserve the accolade of vehicle...Extended phenotypes can include inanimate artifacts like beaver dams...But the vehicle is not something fundamental...Ask rather 'Is there a vehicle in this situation and, if so, why?'"
—Richard Dawkins, Burying the Vehicle
Although Dawkins agrees that groups can assist survival, they rank as a "vehicle" for survival only if the group activity is replicated in descendants, recorded in the gene, the gene being the only true replicator. An improvement in the survival lottery for the group must improve that for the gene for sufficient replication to occur. Dawkins argues qualitatively that the lottery for the gene is based upon a very long and broad record of events, and group advantages are usually too specific, too brief, and too fortuitous to change the gene lottery:
"We can now see that the organism and the group of organisms are true rivals for the vehicle role in the story, but neither of them is even a candidate for the replicator role. The controversy between 'individual selection' and 'group selection' is a real controversy between alternative vehicles...As it happens the outcome, in my view, is a decisive victory for the individual organism. The group is too wishy-washy an entity."
—Richard Dawkins, The Selfish Gene, pp. 254–255
Prior to the 1960s, it was common for altruism to be explained in terms of group selection, where the benefits to the organism or even population were supposed to account for the popularity of the genes responsible for the tendency towards that behaviour. Modern versions of "multilevel selection" claim to have overcome the original objections, namely, that at that time no known form of group selection led to an evolutionarily stable strategy. The claim is still made by some that it would take only a single individual with a tendency towards more selfish behaviour to undermine a population otherwise filled only with the gene for altruism towards non-kin.
Reception
The Selfish Gene was extremely popular when first published, causing "a silent and almost immediate revolution in biology", and it continues to be widely read. It has sold over a million copies and has been translated into more than 25 languages. Proponents argue that the central point, that replicating the gene is the object of selection, usefully completes and extends the explanation of evolution given by Charles Darwin before the basic mechanisms of genetics were understood.
According to the ethologist Alan Grafen, acceptance of adaptationist theories is hampered by a lack of a mathematical unifying theory and a belief that anything in words alone must be suspect. According to Grafen, these difficulties along with an initial conflict with population genetics models at the time of its introduction "explains why within biology the considerable scientific contributions it [The Selfish Gene] makes are seriously underestimated, and why it is viewed mainly as a work of exposition." According to comparative psychologist Nicky Hayes, "Dawkins presented a version of sociobiology that rested heavily on metaphors drawn from animal behavior, and extrapolated these...One of the weaknesses of the sociological approach is that it tends only to seek confirmatory examples from among the huge diversity of animal behavior. Dawkins did not deviate from this tradition." More generally, critics argue that The Selfish Gene oversimplifies the relationship between genes and the organism. (As an example, see Thompson.)
The Selfish Gene further popularised sociobiology in Japan after its translation in 1980. With the addition of Dawkins's book to the country's consciousness, the term "meme" entered popular culture. Yuzuru Tanaka of Hokkaido University wrote a book, Meme Media and Meme Market Architectures, while the psychologist Susan Blackmore wrote The Meme Machine (2000), with a foreword by Dawkins. The information scientist Osamu Sakura has published a book in Japanese and several papers in English on the topic. Nippon Animation produced an educational television program titled The Many Journeys of Meme.
In 1976, the ecologist Arthur Cain, one of Dawkins's tutors at Oxford in the 1960s, called it a "young man's book" (which Dawkins points out was a deliberate quote of a commentator on the New College, Oxford philosopher A. J. Ayer's Language, Truth, and Logic (1936)). Dawkins noted that he had been "flattered by the comparison, [but] knew that Ayer had recanted much of his first book and [he] could hardly miss Cain's pointed implication that [he] should, in the fullness of time, do the same." This point also was made by the philosopher Mary Midgley: "The same thing happened to AJ Ayer, she says, but he spent the rest of his career taking back what he'd written in Language, Truth and Logic. "This hasn't occurred to Dawkins", she says. "He goes on saying the same thing."" However, according to Wilkins and Hull, Dawkins's thinking has developed, although perhaps not defusing this criticism:
"In Dawkins's early writings, replicators and vehicles played different but complementary and equally important roles in selection, but as Dawkins honed his view of the evolutionary process, vehicles became less and less fundamental...In later writings Dawkins goes even further and argues that phenotypic traits are what really matter in selection and that they can be treated independently of their being organized into vehicles...Thus, it comes as no surprise when Dawkins proclaims that he "coined the term 'vehicle' not to praise it but to bury it." As prevalent as organisms might be, as determinate as the causal roles that they play in selection are, reference to them can and must be omitted from any perspicuous characterization of selection in the evolutionary process. Dawkins is far from a genetic determinist, but he is certainly a genetic reductionist."
— John S Wilkins, David Hull, Dawkins on Replicators and Vehicles, The Stanford Encyclopedia of Philosophy
Units of selection
As to the unit of selection: "One internally consistent logical picture is that the unit of replication is the gene,...and the organism is one kind of ...entity on which selection acts directly." Dawkins initially presented the matter without the distinction between 'unit of replication' and 'unit of selection' that he drew elsewhere: "the fundamental unit of selection, and therefore of self-interest, is not the species, nor the group, nor even strictly the individual. It is the gene, the unit of heredity." However, he continues in a later chapter:
"On any sensible view of the matter Darwinian selection does not work on genes directly. ...The important differences between genes emerge only in their effects. The technical word phenotype is used for the bodily manifestation of a gene, the effect that a gene has on the body...Natural selection favours some genes rather than others not because of the nature of the genes themselves, but because of their consequences—their phenotypic effects...But we shall now see that the phenotypic effects of a gene need to be thought of as all the effects that it has on the world. ...The phenotypic effects of a gene are the tools by which it levers itself into the next generation. All I am going to add is that the tools may reach outside the individual body wall...Examples that spring to mind are artefacts like beaver dams, bird nests, and caddis houses."
— Richard Dawkins, The Selfish Gene, Chapter 13, pp. 234, 235, 238
Dawkins's later formulation is in his book The Extended Phenotype (1982), where the process of selection is taken to involve every possible phenotypical effect of a gene.
Stephen Jay Gould finds Dawkins's position tries to have it both ways:
"Dawkins claims to prefer genes and to find greater insight in this formulation. But he allows that you or I might prefer organisms—and it really doesn't matter."
— Stephen Jay Gould, The Structure of Evolutionary Theory, pp. 640-641
The view of The Selfish Gene is that selection based upon groups and populations is rare compared to selection on individuals. Although supported by Dawkins and by many others, this claim continues to be disputed. While naïve versions of group selectionism have been disproved, more sophisticated formulations make accurate predictions in some cases while positing selection at higher levels. Both sides agree that very favourable genes are likely to prosper and replicate if they arise and both sides agree that living in groups can be an advantage to the group members. The conflict arises in part over defining concepts:
"Cultural evolutionary theory, however, has suffered from an overemphasis on the experiences and behaviors of individuals at the expense of acknowledging complex group organization...Many important behaviors related to the success and function of human societies are only properly defined at the level of groups".
In The Social Conquest of Earth (2012), the entomologist E. O. Wilson contends that the selfish-gene approach was accepted "until 2010 [when] Martin Nowak, Corina Tarnita, and I demonstrated that inclusive fitness theory, often called kin selection theory, is both mathematically and biologically incorrect." Chapter 18 of The Social Conquest of Earth describes the deficiencies of kin selection and outlines group selection, which Wilson argues is a more realistic model of social evolution. He criticises earlier approaches to social evolution, saying: "unwarranted faith in the central role of kinship in social evolution has led to the reversal of the usual order in which biological research is conducted. The proven best way in evolutionary biology, as in most of science, is to define a problem arising during empirical research, then select or devise the theory that is needed to solve it. Almost all research in inclusive-fitness theory has been the opposite: hypothesize the key roles of kinship and kin selection, then look for evidence to test that hypothesis." According to Wilson: "People must have a tribe...Experiments conducted over many years by social psychologists have revealed how swiftly and decisively people divide into groups, and then discriminate in favor of the one to which they belong." (pp. 57, 59) According to Wilson: "Different parts of the brain have evolved by group selection to create groupishness." (p. 61)
Some authors consider facets of this debate between Dawkins and his critics about the level of selection to be unproductive:
"The particularly frustrating aspects of these constantly renewed debates is that, even though they seemed to be sparked by rival theories about how evolution works, in fact they often involve only rival metaphors for the very same evolutionary logic and [the debates over these aspects] are thus empirically empty."
— Laurent Keller, Levels of Selection in Evolution, p.4
Other authors say Dawkins has failed to make some critical distinctions, in particular, the difference between group selection for group advantage and group selection conveying individual advantage.
Choice of words
A good deal of objection to The Selfish Gene stemmed from its failure to be always clear about "selection" and "replication". Dawkins says the gene is the fundamental unit of selection, and then points out that selection does not act directly upon the gene, but upon "vehicles" or "extended phenotypes". Stephen Jay Gould took exception to calling the gene a 'unit of selection' because selection acted only upon phenotypes. Summarizing the Dawkins-Gould difference of view, Sterelny says:
"Gould thinks gene differences do not cause evolutionary changes in populations, they register those changes."
—Kim Sterelny: Dawkins vs. Gould, p. 83
The word "cause" here is somewhat tricky: does a change in lottery rules (for example, inheriting a defective gene "responsible" for a disorder) "cause" differences in outcome that might or might not occur? It certainly alters the likelihood of events, but a concatenation of contingencies decides what actually occurs. Dawkins thinks the use of "cause" as a statistical weighting is acceptable in common usage. Like Gould, Gabriel Dover in criticizing The Selfish Gene says:
"It is illegitimate to give 'powers' to genes, as Dawkins would have it, to control the outcome of selection...There are no genes for interactions, as such: rather, each unique set of inherited genes contributes interactively to one unique phenotype...the true determinants of selection".
— Gabriel Dover: Dear Mr. Darwin, p. 56
However, from a comparison with Dawkins's discussion of this very same point, it would seem both Gould's and Dover's comments are more a critique of his sloppy usage than a difference of views. Hull suggested a resolution based upon a distinction between replicators and interactors. The term "replicator" includes genes as the most fundamental replicators but possibly other agents, and interactor includes organisms but maybe other agents, much as do Dawkins's 'vehicles'. The distinction is as follows:
replicator: an entity that passes on its structure largely intact in successive replications.
interactor: an entity that interacts as a cohesive whole with its environment in such a way that this interaction causes replication to be differential.
selection: a process in which the differential extinction or proliferation of interactors causes the differential perpetuation of the replicators that produced them.
Hull suggests that, despite some similarities, Dawkins takes too narrow a view of these terms, engendering some of the objections to his views. According to Godfrey-Smith, this more careful vocabulary has cleared up "misunderstandings in the "units of selection" debates."
Enactive arguments
Behavioural genetics entertains the view:
"that genes are dynamic contributors to behavioral organization and are sensitive to feedback systems from the internal and external environments." "Technically behavior is not inherited; only DNA molecules are inherited. From that point on behavioral formation is a problem of constant interplay between genetic potential and environmental shaping"
—D.D. Thiessen, Mechanism specific approaches in behavior genetics, p. 91
This view from 1970 is still espoused today, and conflicts with Dawkins's view of "the gene as a form of "information [that] passes through bodies and affects them, but is not affected by them on its way through"". The philosophical/biological field of enactivism stresses the interaction of the living agent with its environment and the relation of probing the environment to cognition and adaptation. Gene activation depends upon the cellular milieu. An extended discussion of the contrasts between enactivism and Dawkins's views, and with their support by Dennett, is provided by Thompson.
In Mind in Life, the philosopher Evan Thompson has assembled a multi-sourced objection to the "selfish gene" idea. Thompson takes issue with Dawkins's reduction of "life" to "genes" and "information":
"Life is just bytes and bytes and bytes of digital information"
— Richard Dawkins: River out of Eden: A Darwinian View of Life, p. 19
"On the bank of the Oxford canal...is a large willow tree, and it is pumping downy seeds into the air...It is raining instructions out there; it's raining programs; it's raining tree-growing, fluff-spreading algorithms. That is not a metaphor, it is the plain truth"
— Richard Dawkins: The Blind Watchmaker, p. 111
Thompson objects that the gene cannot operate by itself, since it requires an environment such as a cell, and life is "the creative outcome of highly structured contingencies". Thompson quotes Sarkar:
"there is no clear technical notion of "information" in molecular biology. It is little more than a metaphor that masquerades as a theoretical concept and ...leads to a misleading picture of the nature of possible explanations in molecular biology."
— Sahotra Sarkar Biological information: a skeptical look at some central dogmas of molecular biology, p. 187
Thompson follows with a detailed examination of the concept of DNA as a look-up table and the role of the cell in orchestrating the DNA-to-RNA transcription, indicating that by anyone's account the DNA is hardly the whole story. Thompson goes on to suggest that the cell-environment interrelationship has much to do with reproduction and inheritance, and a focus on the gene as a form of "information [that] passes through bodies and affects them but is not affected by them on its way through" is tantamount to adoption of a form of material-informational dualism that has no explanatory value and no scientific basis. (Thompson, p. 187) The enactivist view, however, is that information results from the probing and experimentation of the agent with the agent's environment subject to the limitations of the agent's abilities to probe and process the result of probing, and DNA is simply one mechanism the agent brings to bear upon its activity.
Moral arguments
Another criticism of the book is its treatment of morality, and more particularly altruism, as existing only as a form of selfishness:
"It is important to realize that the above definitions of altruism and selfishness are behavioural, not subjective. I am not concerned here with the psychology of motives...My definition is concerned only with whether the effect of an act is to lower or raise the survival prospects of the presumed altruist and the survival prospects of the presumed beneficiary."
— Richard Dawkins, The Selfish Gene, p. 12
"We can even discuss ways of cultivating and nurturing pure, disinterested altruism, something that has no place in nature, something that has never existed before in the whole history of the world."
— Richard Dawkins, The Selfish Gene, p. 179
The philosopher Mary Midgley has suggested this position is a variant of Hobbes's explanation of altruism as enlightened self-interest, and that Dawkins goes a step further to suggest that our genetic programming can be overcome by what amounts to an extreme version of free will. Part of Mary Midgley's concern is that Richard Dawkins's account of The Selfish Gene serves as a moral and ideological justification for selfishness to be adopted by modern human societies as simply following "nature", providing an excuse for behavior with bad consequences for future human society.
Dawkins's major concluding theme, that humanity is finally gaining power over the "selfish replicators" by virtue of its intelligence, is also criticized by the primatologist Frans de Waal, who refers to it as an example of a "veneer theory" (the idea that morality is not fundamental, but is laid over a brutal foundation). Dawkins claims he merely describes how things are under evolution, and makes no moral arguments. On BBC-2 TV, Dawkins pointed to evidence for a "Tit-for-Tat" strategy (shown to be successful in game theory) as the most common, simple, and profitable choice.
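The "Tit for Tat" strategy mentioned here comes from iterated prisoner's dilemma tournaments of the kind Dawkins discusses in the second edition's added chapter "Nice guys finish first". The sketch below is a minimal illustration; the payoff values are the conventional tournament numbers and are not taken from the book or the broadcast.

```python
# Minimal sketch of the "Tit for Tat" strategy in an iterated prisoner's dilemma.
# The payoff values (3, 0, 5, 1) are the conventional tournament values, used
# here only for illustration.

PAYOFF = {          # (my move, opponent's move) -> my score
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, opponent defects (sucker's payoff)
    ("D", "C"): 5,  # I defect against a cooperator (temptation)
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy whatever the opponent did in the last round."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other and return their total scores."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)   # each player sees only the other's past moves
        b = strategy_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))      # (30, 30): sustained cooperation
    print(play(tit_for_tat, always_defect))    # (9, 14): exploited once, then retaliates
```

Against a fellow cooperator the strategy sustains mutual cooperation indefinitely, while against a persistent defector it loses only the first round, which is the sense in which it is common, simple, and profitable.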
More generally, the objection has been made that The Selfish Gene discusses philosophical and moral questions that go beyond biological arguments, relying upon anthropomorphisms and careless analogies.
Publication
The Selfish Gene was first published by Oxford University Press in 1976 in eleven chapters with a preface by the author and a foreword by Robert Trivers. A second edition was published in 1989. This edition added two extra chapters, and substantial endnotes to the preceding chapters, reflecting new findings and thoughts. It also added a second preface by the author, but the original foreword by Trivers was dropped. The book contains no illustrations.
The book has been translated into at least 23 languages including Arabic, Thai and Turkish.
In 2006, a 30th-anniversary edition was published with the Trivers foreword and a new introduction by the author, in which he states, "This edition does, however – and it is a source of particular joy to me – restore the original Foreword by Robert Trivers." This edition was accompanied by a festschrift entitled Richard Dawkins: How a Scientist Changed the Way We Think (2006). In March 2006, a special event entitled The Selfish Gene: Thirty Years On was held at the London School of Economics. In March 2011, Audible Inc published an audiobook edition narrated by Richard Dawkins and Lalla Ward.
In 2016, Oxford University Press published a 40th anniversary edition with a new epilogue, in which Dawkins describes the continued relevance of the gene's eye view of evolution and states that it, along with coalescence analysis "illuminates the deep past in ways of which I had no inkling when I first wrote The Selfish Gene..."
Editions
Awards and recognition
In April 2016, The Selfish Gene was listed in The Guardian's list of the 100 best nonfiction books, by Robert McCrum.
In July 2017, the book was listed as the most influential science book of all time in a poll to celebrate the 30th anniversary of the Royal Society science book prize, ahead of Charles Darwin's On the Origin of Species and Isaac Newton's Principia Mathematica.
See also
Notes
References
Bibliography
External links
Video introduction by Richard Dawkins from Google Videos
The Selfish Gene: Thirty Years On and mp3 from Edge Foundation, Inc.
Richard Dawkins discusses The Selfish Gene on the BBC World Book Club
Richard Dawkins on the origins of The Selfish Gene Royal Institution event video, 20 September 2013
1976 non-fiction books
Books about evolution
Books by Richard Dawkins
Cognitive science literature
DNA replication
English-language non-fiction books
English non-fiction books
Memetics
Modern synthesis (20th century)
Oxford University Press books
Popular science books
Science studies
Biology books | The Selfish Gene | [
"Biology"
] | 7,303 | [
"Genetics techniques",
"DNA replication",
"Molecular genetics"
] |
44,195 | https://en.wikipedia.org/wiki/Friends%20of%20the%20Earth | Friends of the Earth International (FoEI) is an international network of grassroots environmental organizations in 73 countries. About half of the member groups call themselves "Friends of the Earth" in their own languages; the others use other names. The organization was founded in 1969 in San Francisco by David Brower, Donald Aitken, and Gary Soucie after Brower's split with the Sierra Club because of the latter's positive approach to nuclear energy. It became an international network of organizations in 1971 with a meeting of representatives from four countries: U.S., Sweden, the UK and France.
FoEI currently has a secretariat (based in Amsterdam, Netherlands) which provides support for the network and its agreed major campaigns. The executive committee of elected representatives from national groups sets policy and oversees the work of the secretariat. In 2016, Uruguayan activist Karin Nansen was elected to serve as chair of the organization. Sri Lankan activist Hemantha Withanage has served as chair of FoEI since 2021.
Campaign issues
Friends of the Earth International is an international membership organisation, with members spread across the world. Its advocacy programs focus on environmental, economic and social issues, highlighting their political and human rights contexts.
According to its website, the current campaign priorities of Friends of the Earth International are: economic justice and resisting neoliberalism; forests and biodiversity; food sovereignty; and climate justice and energy. The campaign priorities of FoEI are set at its biennial general meeting. Additionally, FoEI plans campaigns in other fields, such as waste and overconsumption, international financial institutions, ecological debt, mining and extractive industries, and opposition to nuclear power. FoEI has campaigned for the closure of the Diablo Canyon nuclear plant in California. FoEI also supports campaigns from the regions or member groups, such as the one on the consumption and intensive production of meat (Meat Atlas) by Friends of the Earth Europe.
FoEI claims a record of success: it says it has eliminated billions of dollars in taxpayer subsidies to corporate polluters, reformed the World Bank to address environmental and human rights concerns, pressed the U.S. and U.K. toward stronger legislation on global warming, stopped more than 150 destructive dams and water projects worldwide, won landmark regulations of strip mines and oil tankers, and helped secure the ban on international whaling. Its critics claim that the organization seeks mainly to attract media attention (for example, by releasing the song "Love Song to the Earth") but does not stay engaged with local communities to solve complicated problems, and that it prevents development in developing countries. Critics have also objected to its policy of accepting high levels of funding from companies and charities related to oil and gas.
One of Friends of the Earth's most recent campaigns and legal battles was the "Shell Case", led by Milieudefensie (Friends of the Earth Netherlands). In 2021, a court in the Netherlands ruled in a landmark case that the oil giant Shell must reduce its emissions in 2030 by 45% compared to 2019 levels. This was the first time that a company had been legally obliged to align its policies with the Paris Agreement. The ruling was overturned on appeal in November 2024.
In January 2025 when UK Prime Minister Keir Starmer announced plans to take on NIMBYs who block major infrastructure projects, such as nuclear power, roads, railway and wind farms, Friends of the Earth criticized Starmer, saying he was scapegoating people with "valid concerns about a project's impact".
Structure of the network
The member organization in a particular country may name itself Friends of the Earth or an equivalent translated phrase in the national language, e.g., Friends of the Earth (US), Friends of the Earth (EWNI) (England Wales and Northern Ireland), Amigos de la Tierra (Spain and Argentina). However, roughly half of the member groups work under their own names, sometimes reflecting an independent origin and subsequent accession to the network, such as Pro Natura (Switzerland), the Korean Federation for Environmental Movement, Environmental Rights Action (FOE Nigeria) and WALHI (FOE Indonesia).
Friends of the Earth International (FoEI) is supported by a secretariat based in Amsterdam, and an executive committee known as ExCom. The ExCom is elected by all member groups at a general meeting held every two years, and it is the ExCom which employs the secretariat. At the same general meeting, overall policies and priority activities are agreed.
In addition to work which is coordinated at the FoEI level, national member groups are free to carry out their own campaigns and to work bi- or multi-laterally as they see fit, as long as this does not go against agreed policy at the international level.
Publications
The Meat Atlas is an annual report on the methods and impact of industrial animal agriculture. The publication consists of 27 short essays and, with the help of graphs, visualises facts about the production and consumption of meat. The Meat Atlas is jointly published by Friends of the Earth and Heinrich Böll Foundation.
Notable supporters
Rock musician George Harrison became associated with Friends of the Earth after attending their anti-nuclear demonstrations in London in 1980. He dedicated his 1989 greatest hits album, Best of Dark Horse, to Friends of the Earth, among other environmental organisations.
Jay Kay, frontman of the funk and acid jazz group Jamiroquai, is known for donating a part of the profits earned from his album sales to Friends of the Earth and Oxfam, among other causes.
Thom Yorke, lead singer of Radiohead, has publicly supported a number of Friends of the Earth campaigns, including the Big Ask, which led the UK government to introduce the Climate Change Bill in the Queen's Speech on 15 November 2006. This was after 130,000 people across the country had asked their MP to support such a bill.
Proceeds from sales of the single "Love Song to the Earth" (2015), performed by Paul McCartney, Jon Bon Jovi, Sheryl Crow, Fergie, Sean Paul, and Colbie Caillat among others, went to Friends of the Earth U.S. and the United Nations Foundation.
Member organizations
Asia
Friends of the Earth Japan
Indonesian Forum for Environment, Indonesia
Korean Federation for Environmental Movement
Friends of the Earth Middle East
Legal Rights and Natural Resources Center – Kasama sa Kalikasan
Centre for Environmental Justice, Sri Lanka
Sahabat Alam Malaysia
Europe
Friends of the Earth Europe, Brussels
Young Friends of the Earth Europe, Brussels
Friends of the Earth – France
Friends of the Earth Scotland
Pro Natura (Switzerland)
Amigos de la tierra, Spain
Bund für Umwelt und Naturschutz Deutschland, Germany
Friends of the Earth (EWNI), England, Wales and Northern Ireland
Birmingham Friends of the Earth
GLOBAL 2000, Austria
Friends of the Earth Malta
Friends of the Earth Finland
Friends of the Earth Hungary
Priatelia Zeme Slovensko (Friends of the Earth Slovakia)
Manchester Friends of the Earth
Green Action, Croatia
Hnutí DUHA, Czech Republic
Milieudefensie, Netherlands
Norwegian Society for the Conservation of Nature, Norway
NOAH, founded in 1969, national member organisation of FoE since 1988, Denmark
North America
Friends of the Earth Canada
Les AmiEs de la Terre de Québec, Canada
Friends of the Earth (US)
Oceania
Friends of the Earth Australia
See also
Friends of the Earth, Inc. v. Laidlaw Environmental Services, Inc.
List of environmental organizations
Anti-nuclear movement in the United States
List of anti-nuclear groups in the United States
Friends of the Earth (HK)
Notes and references
Bibliography
Brian Doherty and Timothy Doyle, Environmentalism, Resistance and Solidarity. The Politics of Friends of the Earth International (Basingstoke: Palgrave, 2013).
Jan-Henrik Meyer, “'Where do we go from Wyhl?' Transnational Anti-Nuclear Protest targeting European and International Organisations in the 1970s,” Historical Social Research 39: 1 (2014): 212–235.
External links
Article by Friends of the Earth France: "Multinationals: Ecologists See Red"
Friends of the Earth International YouTube channel
Nature conservation organizations based in the United States
Anti-nuclear organizations
Environmental organizations established in 1969
1969 establishments in the United States
Nature conservation organisations based in the Netherlands
Organisations based in Amsterdam
International organisations based in the Netherlands | Friends of the Earth | [
"Engineering"
] | 1,704 | [
"Nuclear organizations",
"Anti-nuclear organizations"
] |
44,214 | https://en.wikipedia.org/wiki/Slate | Slate is a fine-grained, foliated, homogeneous, metamorphic rock derived from an original shale-type sedimentary rock composed of clay or volcanic ash through low-grade, regional metamorphism. It is the finest-grained foliated metamorphic rock. Foliation may not correspond to the original sedimentary layering, but instead is in planes perpendicular to the direction of metamorphic compression.
The foliation in slate, called "slaty cleavage", is caused by strong compression in which fine-grained clay forms flakes to regrow in planes perpendicular to the compression. When expertly "cut" by striking parallel to the foliation with a specialized tool in the quarry, many slates display a property called fissility, forming smooth, flat sheets of stone which have long been used for roofing, floor tiles, and other purposes. Slate is frequently grey in color, especially when seen en masse covering roofs. However, slate occurs in a variety of colors even from a single locality; for example, slate from North Wales can be found in many shades of grey, from pale to dark, and may also be purple, green, or cyan. Slate is not to be confused with shale, from which it may be formed, or schist.
The word "slate" is also used for certain types of object made from slate rock. It may mean a single roofing tile made of slate, or a writing slate, which was traditionally a small, smooth piece of the rock, often framed in wood, used with chalk as a notepad or notice board, and especially for recording charges in pubs and inns. The phrases "clean slate" and "blank slate" come from this usage.
Description
Slate is a fine-grained, metamorphic rock that shows no obvious compositional layering but can easily be split into thin slabs and plates. It is usually formed by low-grade regional metamorphism of mudrock. This mild degree of metamorphism produces a rock in which the individual mineral crystals remain microscopic in size, producing a characteristic slaty cleavage in which fresh cleavage surfaces appear dull. This is in contrast to the silky cleaved surfaces of phyllite, which is the next-higher grade of metamorphic rock derived from mudstone. The direction of cleavage is independent of any sedimentary structures in the original mudrock, reflecting instead the direction of regional compression.
Slaty cleavage is continuous, meaning that the individual cleavage planes are too closely spaced to be discernible in hand samples. The texture of the slate is totally dominated by these pervasive cleavage planes. Under a microscope, the slate is found to consist of very thin lenses of quartz and feldspar (QF-domains) separated by layers of mica (M-domains). These are typically less than 100 μm (micron) thick. Because slate forms under lower heat and pressure than most other metamorphic rocks, it can preserve fossils, sometimes even the microscopic remains of delicate organisms.
The process of conversion of mudrock to slate involves a loss of up to 50% of the volume of the mudrock as it is compacted. Grains of platy minerals, such as clay minerals, are rotated to form parallel layers perpendicular to the direction of compaction, which begin to impart cleavage to the rock. Slaty cleavage is fully developed as the clay minerals begin to be converted to chlorite and mica. Organic carbon in the rock is converted to graphite.
Slate is mainly composed of the minerals quartz, illite, and chlorite, which account for up to 95% of its composition. The most important accessory minerals are iron oxides (such as hematite and magnetite), iron sulfides (such as pyrite), and carbonate minerals. Feldspar may be present as albite or, less commonly, orthoclase. Occasionally, as in the purple slates of North Wales, ferrous (iron(II)) reduction spheres form around iron nuclei, leaving a light-green, spotted texture. These spheres are sometimes deformed by a subsequent applied stress field into ovoids, which appear as ellipses when viewed on a cleavage plane of the specimen. However, some evidence shows that reduced spots may also form after deformation and acquire an elliptical shape from preferential infiltration along the cleavage direction, so caution is required in using reduction ellipsoids to estimate deformation.
Terminology
Before the mid-19th century, the terms "slate", "shale", and "schist" were not sharply distinguished. In the context of underground coal mining in the United States, the term slate was commonly used to refer to shale well into the 20th century. For example, roof slate referred to shale above a coal seam, and draw slate referred to shale that fell from the mine roof as the coal was removed.
The British Geological Survey recommends that the term "slate" be used in scientific writings only when very little else is known about the rock that would allow a more definite classification. For example, if the characteristics of the rock show definitely that it was formed by metamorphosis of shale, it should be described in scientific writings as a metashale. If its origin is uncertain, but the rock is known to be rich in mica, it should be described as a pelite.
Uses
Construction
Slate can be made into roofing slate, a type of roof tile that is installed by a slater. Slate has two lines of breakability—cleavage and grain—which make it possible to split the stone into thin sheets. When broken, slate retains a natural appearance while remaining relatively flat and easy to stack. A series of "slate booms" occurred in Europe from the 1870s until the First World War following improvements in railway, road and waterway transportation systems.
Slate is particularly suitable as a roofing material as it has an extremely low water absorption index of less than 0.4%, making the material resistant to frost damage. Natural slate, which requires only minimal processing, has an embodied energy that compares favorably with other roofing materials.
Natural slate is used by building professionals as a result of its beauty and durability. Slate is extremely durable and can last several hundred years, often with little or no maintenance. Natural slate is also fire resistant and energy efficient.
Slate roof tiles are usually fixed (fastened) either with nails or with hooks (as is common with Spanish slate). In the UK, fixing is typically with double nails onto timber battens (England and Wales) or nailed directly onto timber sarking boards (Scotland and Northern Ireland). Nails were traditionally of copper, although there are modern alloy and stainless steel alternatives. Both these methods, if used properly, provide a long-lasting weathertight roof with a lifespan of around 60–125 years.
Some mainland European slate suppliers suggest that using hook fixing means that:
Areas of weakness on the tile are fewer since no holes have to be drilled
Roofing features such as valleys and domes are easier to create since narrow tiles can be used
Hook fixing is particularly suitable in regions subject to severe weather conditions, since there is greater resistance to wind uplift, as the lower edge of the slate is secured.
The metal hooks are, however, visible and may be unsuitable for historic properties.
Slate tiles are often used for interior and exterior flooring, stairs, walkways and wall cladding. Tiles are installed and set on mortar and grouted along the edges. Chemical sealants are often used on tiles to improve durability and appearance, increase stain resistance, reduce efflorescence, and increase or reduce surface smoothness. Tiles are often sold gauged, meaning that the back surface is ground for ease of installation. Slate flooring can be slippery when used in external locations subject to rain.
Slate tiles were used in 19th century UK building construction (apart from roofs) and in slate quarrying areas such as Blaenau Ffestiniog and Bethesda, Wales there are still many buildings wholly constructed of slate. Slates can also be set into walls to provide a rudimentary damp-proof membrane. Small offcuts are used as shims to level floor joists. In areas where slate is plentiful it is also used in pieces of various sizes for building walls and hedges, sometimes combined with other kinds of stone.
Other uses
Because slate is a good electrical insulator and fireproof, it was used to construct early-20th-century electric switchboards and relay controls for large electric motors. Because of its thermal stability and chemical inertness, slate has been used for laboratory bench tops and for billiard table tops.
Slate was used by earlier cultures as whetstone to hone knives, but whetstones are nowadays more typically made of quartz. In 18th- and 19th-century schools, slate was extensively used for blackboards and individual writing slates, for which slate or chalk pencils were used. In modern homes slate is often used as table coasters.
In areas where it is available, high-quality slate is used for tombstones and commemorative tablets. In some cases slate was used by the ancient Maya civilization to fashion stelae. Slate was the traditional material of choice for black Go stones in Japan, alongside clamshell for white stones. It is now considered to be a luxury.
Pennsylvania slate is widely used in the manufacture of turkey calls used for hunting turkeys. The tones produced from the slate, when scratched with various species of wood striker, imitates almost exactly the calls of all four species of wild turkey in North America: eastern, Rio Grande, Osceola and Merriam's.
Extraction
Slate is found in the Arctic and was used by Inuit to make the blades for ulus. China has vast slate deposits; in recent years its export of finished and unfinished slate has increased. Deposits of slate exist throughout Australia, with large reserves quarried in the Adelaide Hills in Willunga, Kanmantoo, and the Mid North at Mintaro and Spalding. Slate is abundant in Brazil, the world's second-largest producer of slate, around Papagaios in Minas Gerais, which extracts 95 percent of Brazil's slate. However, not all "slate" products from Brazil are entitled to bear the CE mark.
Most slate in Europe today comes from Spain, the world's largest producer and exporter of natural slate, and 90 percent of Europe's natural slate used for roofing originates from the slate industry there. Lesser slate-producing regions in present-day Europe include Wales (with UNESCO landscape status and a museum at Llanberis), Cornwall (famously the village of Delabole), Cumbria (see Burlington Slate Quarries, Honister Slate Mine and Skiddaw Slate) and, formerly, the West Highlands of Scotland around Ballachulish and the Slate Islands, all in the United Kingdom; parts of France (Anjou, Loire Valley, Ardennes, Brittany, Savoie) and Belgium (Ardennes); Liguria in northern Italy, especially between the town of Lavagna (whose name is inherited as the term for chalkboard in Italian) and the Fontanabuona valley; Portugal, especially around Valongo in the north of the country; Germany's Moselle River region, Hunsrück (with a former mine open as a museum at Fell), Eifel, Westerwald, Thuringia and north Bavaria; and Alta, Norway (actually schist, not a true slate). Some of the slate from Wales and Cumbria is colored slate (non-blue): purple and formerly green in Wales and green in Cumbria.
In North America, slate is produced in Newfoundland, eastern Pennsylvania, Buckingham County, Virginia, and the Slate Valley region in Vermont and New York, where colored slate is mined in the Granville, New York, area. A major slating operation existed in Monson, Maine, during the late 19th and early 20th centuries, where the slate is usually dark purple to blackish, and many local structures are roofed with slate tiles. The roof of St. Patrick's Cathedral in New York City and the headstone of John F. Kennedy's gravesite in Arlington National Cemetery are both made of Monson slate.
See also
Bluestone in South Australia, a form of slate used extensively in Adelaide 1850s–1920s
References
Further reading
Page, William (ed.) (1906). The Victoria History of the County of Cornwall; vol. I. (Chapter on quarries.) Westminster: Constable.
Hudson, Kenneth (1972). Building Materials; "Chapter 2: Stone and Slate". London: Longman, pp. 14–27.
External links
AditNow—Photographic database of mines
Granville Slate Museum
Hower’s Lightning Slate Reckoner (1884/1904), by F. M. Hower, Cherryville, Penn., on Stone Quarries and Beyond (PDF/18.95 MB)
Stone Roofing Association (U.K.) website with detailed information about stone roofing
Building materials
Building stone
Dielectrics
Metasedimentary rocks
Natural materials
Pavements
Stone (material)
Roofing materials
Industrial minerals
Go equipment | Slate | [
"Physics",
"Engineering"
] | 2,699 | [
"Natural materials",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Dielectrics",
"Matter",
"Building materials"
] |
44,216 | https://en.wikipedia.org/wiki/Miniature%20effect | A miniature effect is a special effect created for motion pictures and television programs using scale models. Scale models are often combined with high speed photography or matte shots to make gravitational and other effects appear convincing to the viewer. The use of miniatures has largely been superseded by computer-generated imagery in contemporary cinema.
Where a miniature appears in the foreground of a shot, it is often placed very close to the camera lens, for example when matte-painted backgrounds are used. Because the exposure is set for the live-action subject so that the actors appear well-lit, the miniature must be lit far more brightly to balance the exposure and to eliminate any difference in depth of field that would otherwise be visible. This foreground miniature usage is referred to as forced perspective. Another form of miniature effect uses stop motion animation.
The use of scale models in the creation of visual effects by the entertainment industry dates back to the earliest days of cinema. Models and miniatures are copies of people, animals, buildings, settings, and objects. Miniatures or models are used to represent things that do not really exist, or that are too expensive or difficult to film in reality, such as explosions, floods, or fires.
From 1900 to the mid-1960s
French director Georges Méliès incorporated special effects in his 1902 film Le Voyage dans la Lune (A Trip to the Moon) — including double-exposure, split screens, miniatures and stop-action.
Some of the most influential visual effects films of this period include Metropolis (1927), Citizen Kane (1941), Godzilla (1954) and The Ten Commandments (1956). The 1933 film King Kong made extensive use of miniature effects, including scale models and stop-motion animation of miniature elements.
From the mid-1960s
The use of miniatures in 2001: A Space Odyssey (1968) was a major development. In production for three years, the film represented a significant advance in creating convincing models.
In the early 1970s, miniatures were often used to depict disasters in such films as The Poseidon Adventure (1972), Earthquake (1974) and The Towering Inferno (1974).
The resurgence of the science fiction genre in film in the late 1970s saw miniature fabrication rise to new heights in such films as Close Encounters of the Third Kind, (1977), Star Wars (also 1977), Alien (1979), Star Trek: The Motion Picture (1979) and Blade Runner (1982). Iconic film sequences such as the tanker truck explosion from The Terminator (1984) and the bridge destruction in True Lies (1994) were achieved through the use of large-scale miniatures.
Largely replaced by CGI
The release of Jurassic Park (1993) was a turning point in the use of computers to create effects for which physical miniatures would have previously been employed.
While the use of computer-generated imagery (CGI) has largely overtaken their use since then, they are still often employed, especially for projects requiring physical interaction with fire, explosions, or water.
Independence Day (1996), Titanic (1997), Godzilla (1998), the Star Wars prequel trilogy (1999–2005), The Lord of the Rings trilogy (2001–2003), Casino Royale (2006), The Dark Knight (2008), Inception (2010), and Interstellar (2014) are examples of highly successful films that have utilized miniatures for a significant component of their visual effects work.
Techniques
Acid-etching metal
Carpentry
Fiberglass
Kit-bashing
Laser cutting
Machining
Miniature lighting and electronics
Mold making and casting
Motion control photography
Painting
Plastic fabrication
Rapid prototyping
Vacuum forming
Welding
Notable model-makers
Brick Price: The Abyss
David Jones: Star Wars, The Hunt for Red October
Grant McCune: Star Wars, Battlestar Galactica, Star Trek: The Motion Picture.
Greg Jein: Close Encounters of the Third Kind, Star Trek: The Next Generation
Ian Hunter: The Dark Knight, Live Free or Die Hard, The Chronicles of Narnia: The Lion, the Witch and the Wardrobe
Leigh Took: The Da Vinci Code, The Imaginarium of Doctor Parnassus
Lorne Peterson: Star Wars Episodes 1 - 6, Raiders of the Lost Ark, Battlestar Galactica, War of the Worlds
Mark Stetson: Blade Runner, Die Hard, The Fifth Element, The Lord of the Rings
Matthew Gratzner: The Aviator, The Good Shepherd, Pitch Black, Alien Resurrection.
Michael Joyce: The Terminator, Independence Day
Patrick McClung: The Empire Strikes Back, Aliens, The Abyss, True Lies
Richard Taylor: The Lord of the Rings, Master and Commander: The Far Side of the World
Steve Gawley: Star Wars, Raiders of the Lost Ark
Miniature effects companies
Vision Crew Unlimited
Weta Workshop
WonderWorks
References
External links
Howard & Theodore Lydecker, miniature effects pioneers
Scale modeling
Visual effects
Cinematic techniques
Film and video technology
Special effects | Miniature effect | [
"Physics"
] | 974 | [
"Scale modeling"
] |