Dataset columns: id (string, 2–8 characters), url (string, 31–117 characters), title (string, 1–71 characters), text (string, 153–118k characters), topic (4 classes), section (string, 4–49 characters), sublist (9 classes). The records below follow this field order.
54000
https://en.wikipedia.org/wiki/Biophysics
Biophysics
Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology. The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry. Overview Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions. Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM, can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules. In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain. Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). 
Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom. History The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller. William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery. The popularity of the field rose when the book What Is Life?'' by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society which now has about 9,000 members over the world. Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena. Focus as a subfield While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments. Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics. Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof. Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships. Computer science – Neural networks, biomolecular and drug databases. Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry Bioinformatics – sequence alignment, structural alignment, protein structure prediction Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics. Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe. Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity. Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides. 
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation, as well as their application. Quantum biology – the field of quantum biology applies quantum mechanics to biological objects and problems, for example studies of how decohered isomers can yield time-dependent base substitutions; these studies imply applications in quantum computing. Agronomy and agriculture. Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training.
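The overview above notes that biophysical models are used extensively to study electrical conduction in single neurons. As a purely illustrative sketch of what such a model can look like, the snippet below implements a leaky integrate-and-fire neuron, a standard textbook simplification rather than anything described in this article; the function name and all parameter values are invented for the example.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy example of the kind of
# biophysical model of electrical conduction in single neurons mentioned above.
# All parameter values are illustrative, not taken from the article.

def simulate_lif(i_input_na=1.6, t_ms=100.0, dt_ms=0.1,
                 tau_ms=10.0, r_mohm=10.0, v_rest_mv=-70.0,
                 v_thresh_mv=-55.0, v_reset_mv=-75.0):
    """Integrate tau*dV/dt = -(V - V_rest) + R*I with forward Euler."""
    v = v_rest_mv
    spikes = []
    steps = int(t_ms / dt_ms)
    for step in range(steps):
        dv = (-(v - v_rest_mv) + r_mohm * i_input_na) / tau_ms
        v += dv * dt_ms
        if v >= v_thresh_mv:          # threshold crossing -> spike
            spikes.append(step * dt_ms)
            v = v_reset_mv            # reset after the spike
    return spikes

if __name__ == "__main__":
    # With 1.6 nA of input the membrane repeatedly reaches threshold.
    print("spike times (ms):", simulate_lif())
```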
Physical sciences
Physics basics: General
Physics
54025
https://en.wikipedia.org/wiki/Rubiaceae
Rubiaceae
Rubiaceae () is a family of flowering plants, commonly known as the coffee, madder, or bedstraw family. It consists of terrestrial trees, shrubs, lianas, or herbs that are recognizable by simple, opposite leaves with interpetiolar stipules and sympetalous actinomorphic flowers. The family contains about 14,100 species in about 580 genera, which makes it the fourth-largest angiosperm family. Rubiaceae has a cosmopolitan distribution; however, the largest species diversity is concentrated in the tropics and subtropics. Economically important genera include Coffea, the source of coffee; Cinchona, the source of the antimalarial alkaloid quinine; ornamental cultivars (e.g., Gardenia, Ixora, Pentas); and historically some dye plants (e.g., Rubia). Description The Rubiaceae are morphologically easily recognizable as a coherent group by a combination of characters: opposite or whorled leaves that are simple and entire, interpetiolar stipules, tubular sympetalous actinomorphic corollas and an inferior ovary. A wide variety of growth forms are present: shrubs are most common (e.g. Coffea, Psychotria), but members of the family can also be trees (e.g. Cinchona, Nauclea), lianas (e.g. Psychotria samoritourei), or herbs (e.g. Galium, Spermacoce). Some epiphytes are also present (e.g. Myrmecodia). The plants usually contain iridoids, various alkaloids, and raphide crystals are common. The leaves are simple, undivided, and entire; there is only one case of pinnately compound leaves (Pentagonia osapinnata). Leaf blades are usually elliptical, with a cuneate base and an acute tip. In three genera (Pavetta, Psychotria, Sericanthe), bacterial leaf nodules can be observed as dark spots or lines on the leaves. The phyllotaxis is usually decussate, rarely whorled (e.g. Fadogia), or rarely seemingly alternate resulting from the reduction of one leaf at each node (e.g. Sabicea sthenula). Characteristic for the Rubiaceae is the presence of stipules that are mostly fused to an interpetiolar structure on either side of the stem between the opposite leaves. Their inside surface often bears glands called "colleters", which produce mucilaginous compounds protecting the young shoot. The "whorled" leaves of the herbaceous tribe Rubieae have classically been interpreted as true leaves plus interpetiolar leaf-like stipules. The inflorescence is a cyme, rarely of solitary flowers (e.g. Rothmannia), and is either terminal or axillary and paired at the nodes. The 4-5-merous (rarely pleiomerous; e.g. six in Richardia) flowers are usually bisexual and usually epigynous. The perianth is usually biseriate, although the calyx is absent in some taxa (e.g. Theligonum). The calyx mostly has the lobes fused at the base; unequal calyx lobes are not uncommon, and sometimes (e.g. Mussaenda) one lobe is enlarged and coloured (a so-called “semaphyl”). The corolla is sympetalous, mostly actinomorphic, usually tubular, mostly white or creamy but also yellow (e.g. Gardenia spp., Mycelia basiflora), and rarely blue (e.g. Faramea calyptrata) or red (e.g. Alberta magna, Ixora coccinea). The stamens are alternipetalous and epipetalous. Anthers are longitudinal in dehiscence, but are poricidal in some genera (e.g. Rustia, Tresanthera). The gynoecium is syncarpous with an inferior ovary (rarely secondarily superior, e.g. Gaertnera, Pagamea). Placentation is axial, rarely parietal (e.g. Gardenia); ovules are anatropous to hemitropous, unitegmic, with a funicular obturator, one to many per carpel. 
Nectaries are often present as a nectariferous disk atop the ovary. The fruit is a berry, capsule (e.g. Oldenlandia), drupe (e.g. Coffea, Psychotria), or schizocarp (e.g. Cremocarpon). Red fruits are fairly dominant (e.g. Coffea arabica); yellow (e.g. Rosenbergiodendron formosum), orange (e.g. Vangueria infausta), or blackish fruits (e.g. Pavetta gardeniifolia) are equally common; blue fruits are rather exceptional save in the Psychotrieae and associated tribes. Most fruits are about 1 cm in diameter; very small fruits are relatively rare and occur in herbaceous tribes; very large fruits are rare and confined to the Gardenieae. The seeds are endospermous. Distribution and habitat Rubiaceae have a cosmopolitan distribution and are found in nearly every region of the world, except for extreme environments such as the polar regions and deserts. The distribution pattern of the family is very similar to the global distribution of plant diversity overall. However, the largest diversity is distinctly concentrated in the humid tropics and subtropics. An exception is the tribe Rubieae, which is cosmopolitan but centered in temperate regions. Only a few genera are pantropical (e.g. Ixora, Psychotria), many are paleotropical, while Afro-American distributions are rare (e.g. Sabicea). Endemic rubiaceous genera are found in most tropical and subtropical floristic regions of the world. The highest number of species is found in Colombia, Venezuela, and New Guinea. When adjusted for area, Venezuela is the most diverse, followed by Colombia and Cuba. The Rubiaceae consist of terrestrial and predominantly woody plants. Woody rubiaceous shrubs constitute an important part of the understorey of low- and mid-altitude rainforests. Rubiaceae are tolerant of a broad array of environmental conditions (soil types, altitudes, community structures, etc.) and do not specialize in one specific habitat type (although genera within the family often specialize). Ecology Flower biology Most members of the Rubiaceae are zoophilous, pollinated mainly by insects. Entomophilous species produce nectar from an epigynous disk at the base of the corolla tube to attract insects. Ornithophily is rare and is found in red-flowered species of Alberta, Bouvardia, and Burchellia. Anemophilous species are found in the tribes Anthospermeae and Theligoneae and are characterized by hermaphroditic or unisexual flowers that exhibit a set of specialized features, such as striking sexual dimorphism, increased receptive surface of the stigmas and pendulous anthers. Although most Rubiaceae species are hermaphroditic, outbreeding is promoted through sequential hermaphroditism and spatial isolation of the reproductive organs. More complex reproductive strategies include secondary pollen presentation, heterostyly, and unisexual flowers. Secondary pollen presentation (also known as stylar pollen presentation or ixoroid pollen mechanism) is especially known from the Gardenieae and related tribes. The flowers are proterandrous and the pollen is shed early onto the outside of the stigmas or the upper part of the style, which serve as a pollen receptacle. Increased surface area and irregularity of the pollen receptacle, caused by swellings, hairs, grooves or ridges often ensure a more efficient pollen deposition. After elongation of the style, animals transport the pollen to flowers in the female or receptive stage with exposed stigmatic surfaces. 
A pollen catapult mechanism is present in the genera Molopanthera and Posoqueria (tribe Posoquerieae) that projects a spherical pollen mass onto visiting hawk moths. Heterostyly is another mechanism to avoid inbreeding and is widely present in the family Rubiaceae. The tribes containing the largest number of heterostylous species are Spermacoceae and Psychotrieae. Heterostyly is absent in groups that have secondary pollen presentation (e.g. Vanguerieae). Unisexual flowers also occur in Rubiaceae and most taxa that have this characteristic are dioecious. The two flower morphs are however difficult to observe as they are rather morphologically similar; male flowers have a rudimentary pistil with the ovaries empty and female flowers sterile or rudimentary stamens with empty anthers. Flowers that are morphologically hermaphrodite, but functionally dioecious occur in Pyrostria. Fruit biology The dispersal units in Rubiaceae can be entire fruits, syncarps, mericarps, pyrenes or seeds. Fleshy fruit taxa are probably all (endo)zoochorous (e.g. tribes Pavetteae, Psychotrieae), while the dispersal of dry fruits is often unspecialized (e.g. tribes Knoxieae, Spermacoceae). When seeds function as diaspores, the dispersal is either anemochorous or hydrochorous. The three types of wind-dispersed diaspores in Rubiaceae are dust seeds (rare, e.g. Lerchea), plumed seeds (e.g. Hillia), and winged seeds (e.g. Coutarea). Long-distance dispersal by ocean currents is very rare (e.g. the seashore tree Guettarda speciosa). Other dispersal mechanisms are absent or at least very rare. Some Spermacoceae having seeds with elaiosomes are probably myrmecochorous (e.g. Spermacoce hepperiana). Epizoochorous taxa are limited to herbaceous Rubiaceae (e.g. Galium aparine fruits are densely covered with hooked bristly hairs). Associations with other organisms The genera Anthorrhiza, Hydnophytum, Myrmecodia, Myrmephytum, and Squamellaria are succulent epiphytes that have evolved a mutualistic relationship with ants. Their hypocotyl grows out into an ant-inhabited tuber. Some shrubs or trees have ant holes in their stems (e.g. Globulostylis). Some Rubiaceae species have domatia that are inhabited by mites (viz. acarodomatia; e.g. Plectroniella armata). An intimate association between bacteria and plants is found in three rubiaceous genera (viz. Pavetta, Psychotria, and Sericanthe). The presence of endophytic bacteria is visible by eye because of the formation of dark spots or nodules in the leaf blades. The endophytes have been identified as Burkholderia bacteria. A second type of bacterial leaf symbiosis is found in the genera Fadogia, Fadogiella, Globulostylis, Rytigynia, and Vangueria (all belonging to the tribe Vanguerieae), and in some species of Empogona and Tricalysia (both belonging to the tribe Coffeeae), where Burkholderia bacteria are found freely distributed among the mesophyll cells and no leaf nodules are formed. The hypothesis regarding the function of the symbiosis is that the endophytes provide chemical protection against herbivory by producing certain toxic secondary metabolites. Systematics The family Rubiaceae is named after Rubia, a name used by Pliny the Elder in his Naturalis Historia for madder (Rubia tinctorum). The roots of this plant have been used since ancient times to extract alizarin and purpurin, two red dyes used for coloring clothes. The name rubia is therefore derived from the Latin word ruber, meaning red. 
The well-known genus Rubus (blackberries and raspberries) is unrelated and belongs to Rosaceae, the rose family. Taxonomy The name Rubiaceae (nomen conservandum) was published in 1789 by Antoine Laurent de Jussieu, but the name was already mentioned in 1782. Several historically accepted families are included in Rubiaceae: Aparinaceae, Asperulaceae, Catesbaeaceae, Cephalanthaceae, Cinchonaceae, Coffeaceae, Coutariaceae, Dialypetalanthaceae, Galiaceae, Gardeniaceae, Guettardaceae, Hameliaceae, Hedyotidaceae, Henriqueziaceae, Houstoniaceae, Hydrophylacaceae, Lippayaceae, Lygodisodeaceae, Naucleaceae, Nonateliaceae, Operculariaceae, Pagamaeaceae, Psychotriaceae, Randiaceae, Sabiceaceae, Spermacoceaceae, Theligonaceae. Subfamilies and tribes The classical classification system of Rubiaceae distinguished only two subfamilies: Cinchonoideae, characterized by more than one ovule in each locule, and Coffeoideae, having one ovule in each locule. This distinction, however, was criticized because of the distant position of two obviously related tribes, viz. Gardenieae with many ovules in Cinchonoideae and Ixoreae with one ovule in Coffeoideae, and because in species of Tarenna the number of ovules varies from one to several in each locule. During the 20th century, other morphological characters were used to delineate subfamilies, e.g. stylar pollen presentation, raphides, endosperm, heterostyly, etc. On this basis, three or eight subfamilies were recognised. The last subfamilial classification solely based on morphological characters divided Rubiaceae into four subfamilies: Cinchonoideae, Ixoroideae, Antirheoideae, and Rubioideae. In general, problems of subfamilies delimitation in Rubiaceae based on morphological characters are linked with the extreme naturalness of the family, hence a relatively low divergence of its members. The introduction of molecular phylogenetics in Rubiaceae research has corroborated or rejected several of the conclusions made in the pre-molecular era. There was support for the subfamilies Cinchonoideae, Ixoroideae, and Rubioideae, although differently circumscribed, and Antirheoideae was shown to be polyphyletic. For a long time, the classification with three subfamilies (Cinchonoideae, Ixoroideae, and Rubioideae) was followed. However, an alternative opinion existed with only two subfamilies: an expanded Cinchonoideae (that includes Ixoroideae, Coptosapelteae, and Luculieae) and Rubioideae. Finally, more and more evidence pointed towards a two-family classification. The adoption of the Melbourne Code for botanical nomenclature had an unexpected impact on many names that have been long in use and are well-established in literature. According to the Melbourne Code, the subfamilial name Ixoroideae had to be replaced by Dialypetalanthoideae. This means that the two subfamilies in Rubiaceae now are: Dialypetalanthoideae and Rubioideae. The monogeneric tribes Coptosapelteae, Acranthereae, and Luculieae are not placed within a subfamily and are sister to the rest of Rubiaceae. The following overview shows the latest classification of the family, with two subfamilies and 71 tribes. The approximate number of species and genera are indicated between brackets (species/genera). Genera The family Rubiaceae contains about 14,100 species in 580 genera. This makes it the fourth-largest family of flowering plants by number of species and fifth-largest by number of genera. Although taxonomic adjustments are still being made, the total number of accepted genera remains stable. 
In total, around 1338 genus names have been published, indicating that more than half of the published names are synonyms. Psychotria, with around 1630 species, is the largest genus within the family and the third-largest genus of the angiosperms, after the legume Astragalus and the orchid Bulbophyllum. However, the delimitation of Psychotria remains problematic and its adjustment might reduce the number of species. In total, 30 genera have more than 100 species, while 197 genera are monotypic, accounting for a third of all genera but only 1.4% of all species. Phylogeny Molecular studies have demonstrated the phylogenetic placement of Rubiaceae within the order Gentianales, and the monophyly of the family is confirmed. The relationships of the two subfamilies of Rubiaceae together with the tribes Acranthereae, Coptosapelteae, and Luculieae are shown in the phylogenetic tree below. The placement of these three tribes relative to the two subfamilies has not been fully resolved. Evolution The fossil history of the Rubiaceae goes back at least as far as the Eocene. The geographic distribution of these fossils, coupled with the fact that they represent all three subfamilies, is indicative of an earlier origin for the family, probably in the Late Cretaceous or Paleocene. Although fossils dating back to the Cretaceous and Palaeocene have been referred to the family by various authors, none of these fossils has been confirmed as belonging to the Rubiaceae. The oldest confirmed fossils, which are fruits that strongly resemble those of the genus Emmenopterys, were found in the US state of Washington and are 48–49 million years old. A fossil infructescence and fruit found in 44 million-year-old strata in Oregon was assigned to Emmenopterys dilcheri, an extinct species. The next-oldest fossils date to the Late Eocene and include Canthium from Australia, Faramea from Panama, Guettarda from New Caledonia, and Paleorubiaceophyllum, an extinct genus from the southeastern United States. Fossil Rubiaceae are known from three regions in the Eocene (North America north of Mexico, Mexico-Central America-Caribbean, and Southeast Pacific-Asia). In the Oligocene, they are found in these three regions plus Africa. In the Miocene, they are found in these four regions plus South America and Europe. Uses Food No staple foods are found in the Rubiaceae, but some species are consumed locally and fruits may be used as famine food. Examples are African medlar fruits (e.g. V. infausta, V. madagascariensis), African peach (Nauclea latifolia), and noni (Morinda citrifolia). Beverage The most economically important member of the family is the genus Coffea, used in the production of coffee. Coffea includes 124 species, but only three are cultivated for coffee production: C. arabica, C. canephora, and C. liberica. Medicinal The bark of trees in the genus Cinchona is the source of a variety of alkaloids, the most familiar of which is quinine, one of the first agents effective in treating malaria. Woodruff (Galium odoratum) is a small herbaceous perennial that contains coumarin, a natural precursor of warfarin, and the South American plant Carapichea ipecacuanha is the source of the emetic ipecac. Psychotria viridis is frequently used as a source of dimethyltryptamine in the preparation of ayahuasca, a psychoactive decoction. The bark of the species Breonadia salicina has been used in traditional African medicine for many years.
The leaves of the kratom plant (Mitragyna speciosa) contain a variety of alkaloids, including several that are psychoactive. The plant is traditionally prepared and consumed in Southeast Asia, where it is known to exhibit both painkilling and stimulant qualities; it behaves as a μ-opioid receptor agonist and is often used in traditional Thai medicine in a similar way to, and often as a replacement for, opioid painkillers such as morphine. Ornamentals Originally from China, the common gardenia (Gardenia jasminoides) is a widely grown garden plant and flower in frost-free climates worldwide. Several other species from the genus are also seen in horticulture. The genus Ixora contains plants cultivated in warmer-climate gardens; the most commonly grown species, Ixora coccinea, is frequently used for showy red-flowering hedges. Mussaenda cultivars with enlarged, colored calyx lobes are shrubs with the aspect of Hydrangea; they are mainly cultivated in tropical Asia. The New Zealand native Coprosma repens is a commonly used hedging plant. The South African Rothmannia globosa is seen as a specimen tree in horticulture. Nertera granadensis is a well-known house plant cultivated for its conspicuous orange berries. Other ornamental plants include Mitchella, Morinda, Pentas, and Rubia. Dyes Rose madder, the crushed root of Rubia tinctorum, yields a red dye, and the tropical Morinda citrifolia yields a yellow dye. Culture Cinchona officinalis is the national tree of Ecuador and Peru. Coffea arabica is the national flower of Yemen. Ixora coccinea is the national flower of Suriname. Warszewiczia coccinea is the national flower of Trinidad and Tobago.
Biology and health sciences
Others
null
54047
https://en.wikipedia.org/wiki/Rubus
Rubus
Rubus is a large and diverse genus of flowering plants in the rose family, Rosaceae, subfamily Rosoideae, commonly known as brambles. Fruits of various species are known as raspberries, blackberries, dewberries, and bristleberries. It is a diverse genus, with the estimated number of Rubus species varying from 250 to over 1000, found across all continents except Antarctica. Most of these plants have woody stems with prickles like roses; spines, bristles, and gland-tipped hairs are also common in the genus. The Rubus fruit, sometimes called a bramble fruit, is an aggregate of drupelets. The term "cane fruit" or "cane berry" applies to any Rubus species or hybrid which is commonly grown with supports such as wires or canes, including raspberries, blackberries, and hybrids such as loganberry, boysenberry, marionberry and tayberry. The stems of such plants are also referred to as canes. Description Bramble bushes typically grow as shrubs (though a few are herbaceous), with their stems being typically covered in sharp prickles. They grow long, arching shoots that readily root upon contact with soil, and form a soil rootstock from which new shoots grow in the spring. The leaves are either evergreen or deciduous, and simple, lobed, or compound. The shoots typically do not flower or set fruit until the second year of growth (i.e. they are biennial). The rootstock is perennial. Most species are hermaphrodites with male and female parts being present on the same flower. Bramble fruits are aggregate fruits formed from smaller units called drupelets. Around 60-70% of species of Rubus are polyploid (having more than two copies of each chromosome), with species ranging in ploidy from diploid (2x, with 14 chromosomes) to tetradecaploid (14x). Taxonomy Modern classification Rubus is the only genus in the tribe Rubeae. Rubus is very complex, particularly within the blackberry/dewberry subgenus (Rubus), with polyploidy, hybridization, and facultative apomixis apparently all frequently occurring, making species classification of the great variation in the subgenus one of the grand challenges of systematic botany. In publications between 1910 and 1914, German botanist Wilhelm Olbers Focke attempted to organize the genus into 12 subgenera, a classification system that since became widely accepted, though modern genetic studies have found that many of these subgenera are not monophyletic. Some treatments have recognized dozens of species each for what other, comparably qualified botanists have considered single, more variable species. On the other hand, species in the other Rubus subgenera (such as the raspberries) are generally distinct, or else involved in more routine one-or-a-few taxonomic debates, such as whether the European and American red raspberries are better treated as one species or two (in this case, the two-species view is followed here, with R. idaeus and R. strigosus both recognized; if these species are combined, then the older name R. idaeus has priority for the broader species). The classification presented below recognizes 13 subgenera within Rubus, with the largest subgenus (Rubus) in turn divided into 12 sections. Representative examples are presented, but many more species are not mentioned here. A comprehensive 2019 study found subgenera Orobatus and Anoplobatus to be monophyletic, while all other subgenera to be paraphyletic or polyphyletic. 
Phylogeny The genus has a likely North American origin, with fossils known from the Eocene-aged Florissant Formation of Colorado, around 34 million years old. Rubus expanded into Eurasia, South America, and Oceania during the Miocene. Fossil seeds from the early Miocene of Rubus have been found in the Czech part of the Zittau Basin. Many fossil fruits of †Rubus laticostatus, †Rubus microspermus and †Rubus semirotundatus have been extracted from bore hole samples of the Middle Miocene fresh water deposits in Nowy Sacz Basin, West Carpathians, Poland. Molecular data have backed up classifications based on geography and chromosome number, but following morphological data, such as the structure of the leaves and stems, do not appear to produce a phylogenetic classification. Species Better-known species of Rubus include: Rubus aboriginum – garden dewberry Rubus allegheniensis – Allegheny blackberry Rubus arcticus – Arctic raspberry Rubus argutus Rubus armeniacus – Himalayan blackberry Rubus caesius – European dewberry Rubus canadensis – smooth blackberry Rubus chamaemorus – cloudberry Rubus cockburnianus – white-stemmed bramble Rubus coreanus – bokbunja Rubus crataegifolius Rubus deliciosus Rubus domingensis Rubus ellipticus Rubus flagellaris – northern dewberry Rubus fraxinifolius – mountain raspberry Rubus glaucus Rubus hawaiensis Rubus hayata-koidzumii Rubus hispidus – swamp dewberry Rubus idaeus – red raspberry Rubus illecebrosus Rubus laciniatus – cut-leaved blackberry Rubus leucodermis – whitebark raspberry Rubus moluccanus Rubus nepalensis Rubus nivalis – snow raspberry Rubus niveus Rubus occidentalis – black raspberry Rubus odoratus – purple-flowered raspberry Rubus parviflorus – thimbleberry Rubus pedatus Rubus pensilvanicus – Pennsylvania blackberry Rubus phoenicolasius – wineberry Rubus probus Rubus pubescens – dwarf raspberry Rubus rosifolius Rubus saxatilis – stone bramble Rubus spectabilis – salmonberry Rubus tricolor Rubus trivialis – Southern dewberry Rubus ulmifolius – elm-leaved blackberry Rubus ursinus – trailing blackberry Rubus vestitus – European blackberry A more complete subdivision is as follows: Hybrid berries The term "hybrid berry" is often used collectively for those fruits in the genus Rubus which have been developed mainly in the U.S. and U.K. in the last 130 years. As Rubus species readily interbreed and are apomicts (able to set seed without fertilisation), the parentage of these plants is often highly complex, but is generally agreed to include cultivars of blackberries (R. ursinus, R. fruticosus) and raspberries (R. idaeus). The British National Collection of Rubus stands at over 200 species and, although not within the scope of the National Collection, also hold many cultivars. The hybrid berries include:- loganberry (California, U.S., 1883) R. × loganobaccus, a spontaneous hybrid between R. ursinus 'Aughinbaugh' and R. idaeus 'Red Antwerp' boysenberry (U.S., 1920s) a hybrid between R. idaeus and R. × loganobaccus nectarberry Suspected variant of boysenberry, a hybrid between R. idaeus and R. × loganobaccus olallieberry (U.S., 1930s) a hybrid between the loganberry and youngberry, themselves both hybrid berries veitchberry (Europe, 1930s) a hybrid between R. fruticosus and R. idaeus skellyberry (Texas, U.S., 2000s), a hybrid between R. invisus and R. phoenicolasius marionberry (1956) now thought to be a blackberry cultivar R. 'Marion' silvanberry, R. 'Silvan', a hybrid between R. 
'Marion' and the boysenberry tayberry (Dundee, Scotland, 1979), another blackberry/raspberry hybrid tummelberry, R. 'Tummel', from the same Scottish breeding programme as the tayberry hildaberry (1980s), a tayberry/boysenberry hybrid discovered by an amateur grower youngberry, a complex hybrid of raspberries, blackberries, and dewberries Etymology The generic name means blackberry in Latin and was derived from the word ruber, meaning "red". The blackberries, as well as various other Rubus species with mounding or rambling growth habits, are often called brambles. However, this name is not used for those like the raspberry that grow as upright canes, or for trailing or prostrate species, such as most dewberries, or various low-growing boreal, arctic, or alpine species. The scientific study of brambles is known as "batology". "Bramble" comes from Old English bræmbel, a variant of bræmel.
Biology and health sciences
Berries
Plants
54048
https://en.wikipedia.org/wiki/Steppe
Steppe
In physical geography, a steppe is an ecoregion characterized by grassland plains without closed forests except near rivers and lakes. Steppe biomes may include the montane grasslands and shrublands biome; the tropical and subtropical grasslands, savannas, and shrublands biome; and the temperate grasslands, savannas, and shrublands biome. A steppe is usually covered with grass and shrubs, depending on the season and latitude. The term steppe climate denotes a semi-arid climate, which is encountered in regions too dry to support a forest but not dry enough to be a desert. Steppes are usually characterized by a semi-arid or continental climate. Temperature extremes can be recorded in both summer and winter. Besides this major seasonal difference, fluctuations between day and night are also significant: in both the highlands of Mongolia and northern Nevada, hot daytime temperatures can be followed by sub-freezing readings at night. Steppes receive relatively low annual precipitation and feature hot summers and cold winters when located in mid-latitudes. In addition to the precipitation level, its combination with potential evapotranspiration defines a steppe climate. Classification Steppe can be classified by climate: temperate steppe, the true steppe, found in continental climates, which can be further subdivided, as in the Rocky Mountains steppes; and subtropical steppe, a similar association of plants occurring in the driest areas with a Mediterranean climate, usually with a short wet period. It can also be classified by vegetation type, e.g. shrub-steppe and alpine-steppe. Cold steppe The world's largest steppe region, often referred to as "the Great Steppe", is found in Eastern Europe and Central Asia and neighbouring countries, stretching from Ukraine in the west through Russia, Kazakhstan, Turkmenistan and Uzbekistan to the Altai, Koppet Dag and Tian Shan ranges in China. The Eurasian Steppe is speculated by David W. Anthony to have had a role in the spread of the horse, the wheel and the Indo-European languages. In the Eurasian steppe, soils often consist of chernozem. The inner parts of Anatolia in Turkey, Central Anatolia and East Anatolia in particular, and also some parts of Southeast Anatolia, as well as much of Armenia and Iran, are largely dominated by cold steppe. The Pannonian Plain is another steppe region in Central Europe, centered in Hungary but also including portions of Slovakia, Poland, Ukraine, Romania, Serbia, Croatia, Slovenia, and Austria. Another large steppe area (prairie) is located in the central United States, western Canada and the northern part of Mexico. The shortgrass prairie steppe is the westernmost part of the Great Plains region. The Columbia Plateau in southern British Columbia, Oregon, Idaho, and Washington state is an example of a steppe region in North America outside of the Great Plains. In South America, cold steppe can be found in Patagonia and much of the high-elevation regions east of the southern Andes. Relatively small steppe areas can be found in the interior of the South Island of New Zealand. In Australia, a moderately sized temperate steppe region exists in the northern and northwest regions of Victoria, extending into the southern and central regions of New South Wales. This area borders the semi-arid and arid Australian Outback, which is found farther inland on the continent.
Subtropical steppe In Europe, some Mediterranean areas have a steppe-like vegetation, such as central Sicily in Italy, southern Portugal, parts of Greece in the southern Athens area, and central-eastern Spain, especially the southeastern coast (around Murcia), and places cut off from adequate moisture due to rain shadow effects such as Zaragoza. In northern Africa, the Mediterranean area also hosts the same steppe-like vegetation, such as the Algerian-Moroccan Hautes Plaines and by extension the North Saharan steppe and woodlands. In Asia, a subtropical steppe can be found in semi-arid lands that fringe the Thar Desert of the Indian subcontinent as well as much of the Deccan Plateau in the rain shadow of the Western Ghats, and the Badia of the Levant. In Australia, subtropical steppe can be found in a belt surrounding the most severe deserts of the continent and around the Musgrave Ranges. In North America this environment is typical of transition areas between zones with a Mediterranean climate and true deserts, such as Reno, Nevada, the inner part of California, and much of western Texas and adjacent areas in Mexico.
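The text above notes that a steppe climate is defined by precipitation taken together with potential evapotranspiration. As a hedged illustration of how that idea is commonly operationalized, the sketch below applies the standard Köppen semi-arid (BS, "steppe") criterion; the threshold formula and classification rule are the usual Köppen values and are not taken from this article, and the example station is hypothetical.

```python
# Illustrative sketch of the Köppen semi-arid (BS, "steppe") criterion.
# Included only to make "precipitation combined with evaporative demand"
# concrete; it is not part of the article above.

def koppen_dryland_class(annual_precip_mm, mean_annual_temp_c,
                         precip_fraction_in_summer):
    """Return 'BW' (desert), 'BS' (steppe) or 'not arid'."""
    # The aridity threshold rises with temperature (a proxy for
    # evapotranspiration) and with how much rain falls in the warm season.
    if precip_fraction_in_summer >= 0.70:
        threshold = 20.0 * mean_annual_temp_c + 280.0
    elif precip_fraction_in_summer <= 0.30:
        threshold = 20.0 * mean_annual_temp_c
    else:
        threshold = 20.0 * mean_annual_temp_c + 140.0

    if annual_precip_mm < 0.5 * threshold:
        return "BW"                  # desert
    elif annual_precip_mm < threshold:
        return "BS"                  # steppe (semi-arid)
    return "not arid"

# Hypothetical mid-latitude station: 250 mm/yr, 8 °C mean, 55% summer rain.
print(koppen_dryland_class(250, 8.0, 0.55))   # -> 'BS'
```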
Physical sciences
Grasslands
null
54050
https://en.wikipedia.org/wiki/Typographic%20unit
Typographic unit
Typographic units are the units of measurement used in typography or typesetting. Traditional typometry units are different from familiar metric units because they were established in the early days of printing. Though most printing is digital now, the old terms and units have persisted. Even though these units are all very small, across a line of print they add up quickly. Confusions such as resetting text originally in type of one unit in type of another will result in words moving from one line to the next, resulting in all sorts of typesetting errors (viz. rivers, widows and orphans, disrupted tables, and misplaced captions). Before the popularization of desktop publishing, type measurements were done with a tool called a typometer. Development In Europe, the Didot point system was created by François-Ambroise Didot (1730–1804) in c. 1783. Didot's system was based on Pierre Simon Fournier's (1712–1768), but Didot modified Fournier's by adjusting the base unit precisely to a French Royal inch (pouce), as Fournier's unit was based on a less common foot. (Fournier's printed scale of his point system, from Manuel Typographique, Barbou, Paris 1764, enlarged) However, the basic idea of the point system – to generate different type sizes by multiplying a single minimum unit calculated by dividing a base measurement unit such as one French Royal inch – was not Didot's invention, but Fournier's. In Fournier's system, an approximate French Royal inch (pouce) is divided by 12 to calculate 1 ligne, which is then divided by 6 to get 1 point. Didot just made the base unit (one French Royal inch) identical to the standard value defined by the government. In Didot's point system: 1 point = 1⁄6 ligne = 1⁄72 French Royal inch = 15 625⁄41 559 mm ≈ 0.375 971 510 4 mm, however in practice mostly: 0.376 mm (i.e. + 0.0076%). Both in Didot's and Fournier's systems, some point sizes have traditional names such as Cicero (before introduction of point systems, type sizes were called by names such as Cicero, Pica, Ruby, Great Primer, etc.). 1 cicero = 12 Didot points = 1⁄6 French Royal inch = 62 500⁄13 853 mm ≈ 4.511 658 124 6 mm, also in practice mostly: 4.512 mm (i.e. + 0.0076%). The Didot point system has been widely used in European countries. An abbreviation for it that these countries use is "dd", employing an old method for indicating plurals. Hence "12 dd" means twelve didot points. In Britain and the United States, many proposals for type size standardization had been made by the end of 19th century (such as Bruce Typefoundry's mathematical system that was based on a precise geometric progression). However, no nationwide standard was created until the American Point System was decided in 1886. The American Point System was proposed by Nelson C. Hawks of Marder Luse & Company in Chicago in the 1870s, and his point system used the same method of size division as Fournier's; viz. dividing 1 inch by 6 to get 1 pica, and dividing it again by 12 to get 1 point. However, the American Point System standardized finally in 1886 is different from Hawks' original idea in that 1 pica is not precisely equal to 1⁄6 inch (neither the Imperial inch nor the US inch), as the United States Type Founders' Association defined the standard pica to be the Johnson Pica, which had been adopted and used by Mackellar, Smiths and Jordan type foundry (MS&J), Philadelphia. As MS&J was very influential in those days, many other type foundries were using the Johnson Pica. Also, MS&J defined that 83 Picas are equal to 35 centimeters.
A metric unit was chosen for the prototype because at the time the Imperial and US inches differed slightly in size, and neither country could legally specify a unit of the other. The Johnson Pica was named after Lawrence Johnson, who had succeeded Binny & Ronaldson in 1833. Binny & Ronaldson was one of the oldest type foundries in the United States, established in Philadelphia in 1796. Binny & Ronaldson had bought the type founding equipment of Benjamin Franklin's (1706–1790) type foundry, established in 1786 and run by his grandson Benjamin Franklin Bache (1769–1798). The equipment is thought to be that which Benjamin Franklin purchased from Pierre Simon Fournier when he visited France for diplomatic purposes (1776–85). The official standard approved by the Fifteenth Meeting of the Type Founders Association of the United States in 1886 was this Johnson pica, equal to exactly 0.166 inch. Therefore, the two other – very close – definitions, 1200⁄7227 inch and 350⁄83 mm, are both unofficial. Monotype wedges used in England and America were based on a pica of exactly .1660 inch, but on the European continent all available wedges were based on the "old pica" of .1667 inch. These wedges were marked with an extra E behind the numbers of the wedge and the set. These differences can also be found in the tables of the manuals. In the American point system: 1 Johnson pica = exactly 0.166 inch (versus 0.1666… = 1⁄6 inch for the DTP pica) = 4.2164 mm. 1 point = 1⁄12 pica = approximately 0.013 83 inch = 0.351 36 mm. The American point system has been used in the US, Britain, Japan, and many other countries. Today, digital printing and display devices and page layout software use a unit that is different from these traditional typographic units. On many digital printing systems (desktop publishing systems in particular), the following equations are applicable (with exceptions, most notably the popular TeX typesetting system and its derivatives): 1 pica = 1⁄6 inch (the British/American inch of today) = 4.233 mm; 1 point = 1⁄12 pica = 1⁄72 inch = 127⁄360 mm = 0.3527 mm. Digital displays and printing led to the use of an additional unit: 1 twip = 1⁄20 point = 1⁄1440 inch = 127⁄7200 mm = 0.017 638 mm. Fournier's original method of division is now restored in today's digital typography. Comparing a piece of type in didots for Continental European countries – 12 dd, for example – to a piece of type for an English-speaking country – 12 pt – shows that the main body of a character is actually about the same size. The difference is that the languages of the former often need extra space atop the capital letters for accent marks (e.g. Ñ, Â, Ö, É), but English rarely needs this. Metric units The traditional typographic units are based either on non-metric units, or on odd multiples (such as 35⁄83) of a metric unit. There are no specifically metric units for this particular purpose, although there is a DIN standard sometimes used in German publishing, which measures type sizes in multiples of 0.25 mm, and proponents of the metrication of typography generally recommend the use of the millimetre for typographical measurements, rather than the development of new specifically typographical metric units. The Japanese already do this for their own characters (using the kyu, which is q in romanized Japanese and is also 0.25 mm), and have metric-sized type for European languages as well. One advantage of the q is that it reintroduces the proportional integer division of 3 mm (12 q) by 6 and 4.
During the era of the French Revolution and the Napoleonic Empire, the French established a typographic unit of 0.4 mm, but except for the government's print shops, this did not catch on. In 1973, the didot was restandardized in Europe as 0.375 mm (= 3⁄8 mm). Care must be taken because the name of the unit is often left unmodified. The Germans, however, use the terms Fournier-Punkt and Didot-Punkt for the earlier units, and Typografischer Punkt for this metric one. The TeX typesetting system uses the abbreviation dd for the earlier definition, and nd for the metric new didot.
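Because the article compares several nearly identical units (Didot point, American/Johnson point, DTP point, twip, cicero, and the metric new didot), a small worked example can make the differences concrete. The conversion factors below follow the values quoted above; the helper function and names are purely illustrative, not part of any standard.

```python
# Conversion factors, in millimetres, for the typographic units discussed above.
MM_PER_UNIT = {
    "didot_point":    15625 / 41559,        # traditional Didot point, ~0.3760 mm
    "new_didot":      0.375,                # 1973 metric redefinition (3/8 mm)
    "american_point": 0.166 * 25.4 / 12,    # Johnson pica (0.166 in) / 12, ~0.35137 mm
    "dtp_point":      25.4 / 72,            # desktop-publishing point, 1/72 inch
    "twip":           25.4 / 1440,          # 1/20 DTP point
    "cicero":         12 * 15625 / 41559,   # 12 Didot points
    "dtp_pica":       25.4 / 6,             # 1/6 inch
    "q":              0.25,                 # Japanese kyu (q)
}

def convert(value, from_unit, to_unit):
    """Convert a length between any two of the units above."""
    return value * MM_PER_UNIT[from_unit] / MM_PER_UNIT[to_unit]

if __name__ == "__main__":
    # A 12 dd line of type expressed in DTP points (~12.79 pt), illustrating
    # the "about the same size" comparison made in the text.
    print(round(convert(12, "didot_point", "dtp_point"), 2))
    print(round(MM_PER_UNIT["didot_point"], 6), "mm per Didot point")
```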
Physical sciences
Measurement systems
Basics and measurement
54077
https://en.wikipedia.org/wiki/Perpetual%20motion
Perpetual motion
Perpetual motion is the motion of bodies that continues forever in an unperturbed system. A perpetual motion machine is a hypothetical machine that can do work indefinitely without an external energy source. This kind of machine is impossible, since its existence would violate the first and/or second laws of thermodynamics. These laws of thermodynamics apply regardless of the size of the system. For example, the motions and rotations of celestial bodies such as planets may appear perpetual, but are actually subject to many processes that slowly dissipate their kinetic energy, such as solar wind, interstellar medium resistance, gravitational radiation and thermal radiation, so they will not keep moving forever. Thus, machines that extract energy from finite sources cannot operate indefinitely because they are driven by the energy stored in the source, which will eventually be exhausted. A common example is devices powered by ocean currents, whose energy is ultimately derived from the Sun, which itself will eventually burn out. In 2016, new states of matter, time crystals, were discovered in which, on a microscopic scale, the component atoms are in continual repetitive motion, thus satisfying the literal definition of "perpetual motion". However, these do not constitute perpetual motion machines in the traditional sense, or violate thermodynamic laws, because they are in their quantum ground state, so no energy can be extracted from them; they exhibit motion without energy. History The history of perpetual motion machines dates back to the Middle Ages. For millennia, it was not clear whether perpetual motion devices were possible or not, until the development of modern theories of thermodynamics showed that they were impossible. Despite this, many attempts have been made to create such machines, continuing into modern times. Modern designers and proponents often use other terms, such as "over unity", to describe their inventions. Basic principles There is a scientific consensus that perpetual motion in an isolated system violates either the first law of thermodynamics, the second law of thermodynamics, or both. The first law of thermodynamics is a version of the law of conservation of energy. The second law can be phrased in several different ways, the most intuitive of which is that heat flows spontaneously from hotter to colder places; relevant here is that the law observes that in every macroscopic process, there is friction or something close to it; another statement is that no heat engine (an engine which produces work while moving heat from a high temperature to a low temperature) can be more efficient than a Carnot heat engine operating between the same two temperatures. In other words: In any isolated system, one cannot create new energy (law of conservation of energy). As a result, the thermal efficiency—the produced work power divided by the input heating power—cannot be greater than one. The output work power of heat engines is always smaller than the input heating power. The rest of the heat energy supplied is wasted as heat to the ambient surroundings. The thermal efficiency therefore has a maximum, given by the Carnot efficiency, which is always less than one. The efficiency of real heat engines is even lower than the Carnot efficiency due to irreversibility arising from the speed of processes, including friction. Statements 2 and 3 apply to heat engines. Other types of engines that convert e.g. 
mechanical into electromagnetic energy, cannot operate with 100% efficiency, because it is impossible to design any system that is free of energy dissipation. Machines that comply with both laws of thermodynamics by accessing energy from unconventional sources are sometimes referred to as perpetual motion machines, although they do not meet the standard criteria for the name. By way of example, clocks and other low-power machines, such as Cox's timepiece, have been designed to run on the differences in barometric pressure or temperature between night and day. These machines have a source of energy, albeit one which is not readily apparent, so that they only seem to violate the laws of thermodynamics. Even machines that extract energy from long-lived sources - such as ocean currents - will run down when their energy sources inevitably do. They are not perpetual motion machines because they are consuming energy from an external source and are not isolated systems. Classification One classification of perpetual motion machines refers to the particular law of thermodynamics the machines purport to violate: A perpetual motion machine of the first kind produces work without the input of energy. It thus violates the law of conservation of energy. A perpetual motion machine of the second kind is a machine that spontaneously converts thermal energy into mechanical work. When the thermal energy is equivalent to the work done, this does not violate the law of conservation of energy. However, it does violate the more subtle second law of thermodynamics in a cyclic process (see also entropy). The signature of a perpetual motion machine of the second kind is that there is only one heat reservoir involved, which is being spontaneously cooled without involving a transfer of heat to a cooler reservoir. This conversion of heat into useful work, without any side effect, is impossible, according to the second law of thermodynamics. A perpetual motion machine of the third kind is defined as one that completely eliminates friction and other dissipative forces, to maintain motion forever due to its mass inertia (third in this case refers solely to the position in the above classification scheme, not the third law of thermodynamics). It is impossible to make such a machine, as dissipation can never be completely eliminated in a mechanical system, no matter how close a system gets to this ideal (see examples at below). Impossibility "Epistemic impossibility" describes things which absolutely cannot occur within our current formulation of the physical laws. This interpretation of the word "impossible" is what is intended in discussions of the impossibility of perpetual motion in a closed system. The conservation laws are particularly robust from a mathematical perspective. Noether's theorem, which was proven mathematically in 1915, states that any conservation law can be derived from a corresponding continuous symmetry of the action of a physical system. The symmetry which is equivalent to conservation of energy is the time invariance of physical laws. Therefore, if the laws of physics do not change with time, then the conservation of energy follows. For energy conservation to be violated to allow perpetual motion would require that the foundations of physics would change. Scientific investigations as to whether the laws of physics are invariant over time use telescopes to examine the universe in the distant past to discover, to the limits of our measurements, whether ancient stars were identical to stars today. 
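Returning briefly to the efficiency limits discussed under the basic principles above, here is a small, hedged sketch: it evaluates the Carnot limit for a heat engine and the corresponding coefficient-of-performance limit for an ideal heat pump (the device mentioned later as having a COP above 1 without violating conservation of energy). The temperatures are arbitrary example values, not taken from the article.

```python
# Illustrative numbers for the second-law limits discussed above.
# Temperatures are arbitrary example values in kelvin.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of input heat a heat engine can convert to work."""
    return 1.0 - t_cold_k / t_hot_k

def carnot_cop_heating(t_hot_k, t_cold_k):
    """Maximum heat delivered per unit of work for an ideal heat pump."""
    return t_hot_k / (t_hot_k - t_cold_k)

t_hot, t_cold = 500.0, 300.0
print(f"Carnot efficiency: {carnot_efficiency(t_hot, t_cold):.0%}")   # 40%
# A COP above 1 does not violate conservation of energy: the extra heat is
# moved from the cold reservoir, not created.
print(f"Ideal heating COP:  {carnot_cop_heating(310.0, 275.0):.1f}")  # ~8.9
```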
Combining different measurements such as spectroscopy, direct measurement of the speed of light in the past and similar measurements demonstrates that physics has remained substantially the same, if not identical, for all of observable time spanning billions of years. The principles of thermodynamics are so well established, both theoretically and experimentally, that proposals for perpetual motion machines are universally dismissed by physicists. Any proposed perpetual motion design offers a potentially instructive challenge to physicists: one is certain that it cannot work, so one must explain how it fails to work. The difficulty (and the value) of such an exercise depends on the subtlety of the proposal; the best ones tend to arise from physicists' own thought experiments and often shed light upon certain aspects of physics. So, for example, the thought experiment of a Brownian ratchet as a perpetual motion machine was first discussed by Gabriel Lippmann in 1900 but it was not until 1912 that Marian Smoluchowski gave an adequate explanation for why it cannot work. However, during that twelve-year period scientists did not believe that the machine was possible. They were merely unaware of the exact mechanism by which it would inevitably fail. In the mid-19th-century Henry Dircks investigated the history of perpetual motion experiments, writing a vitriolic attack on those who continued to attempt what he believed to be impossible: Techniques Some common ideas recur repeatedly in perpetual motion machine designs. Many ideas that continue to appear today were stated as early as 1670 by John Wilkins, Bishop of Chester and an official of the Royal Society. He outlined three potential sources of power for a perpetual motion machine, " Extractions", "Magnetical Virtues" and "the Natural Affection of Gravity". The seemingly mysterious ability of magnets to influence motion at a distance without any apparent energy source has long appealed to inventors. One of the earliest examples of a magnetic motor was proposed by Wilkins and has been widely copied since: it consists of a ramp with a magnet at the top, which pulled a metal ball up the ramp. Near the magnet was a small hole that was supposed to allow the ball to drop under the ramp and return to the bottom, where a flap allowed it to return to the top again. However, if the magnet is to be strong enough to pull the ball up the ramp, it cannot then be weak enough to allow gravity to pull it through the hole. Faced with this problem, more modern versions typically use a series of ramps and magnets, positioned so the ball is to be handed off from one magnet to another as it moves. The problem remains the same. Gravity also acts at a distance, without an apparent energy source, but to get energy out of a gravitational field (for instance, by dropping a heavy object, producing kinetic energy as it falls) one has to put energy in (for instance, by lifting the object up), and some energy is always dissipated in the process. A typical application of gravity in a perpetual motion machine is Bhaskara's wheel in the 12th century, whose key idea is itself a recurring theme, often called the overbalanced wheel: moving weights are attached to a wheel in such a way that they fall to a position further from the wheel's center for one half of the wheel's rotation, and closer to the center for the other half. Since weights further from the center apply a greater torque, it was thought that the wheel would rotate forever. 
However, since the side with weights further from the center has fewer weights than the other side, at that moment the torque is balanced and perpetual movement is not achieved. The moving weights may be hammers on pivoted arms, or rolling balls, or mercury in tubes; the principle is the same. Another theoretical machine involves a frictionless environment for motion. This involves the use of diamagnetic or electromagnetic levitation to float an object. This is done in a vacuum to eliminate air friction and friction from an axle. The levitated object is then free to rotate around its center of gravity without interference. However, this machine has no practical purpose because the rotated object cannot do any work, as work requires the levitated object to cause motion in other objects, bringing friction into the problem. Furthermore, a perfect vacuum is an unattainable goal since both the container and the object itself would slowly vaporize, thereby degrading the vacuum. To extract work from heat, thus producing a perpetual motion machine of the second kind, the most common approach (dating back at least to Maxwell's demon) is unidirectionality. Only molecules moving fast enough and in the right direction are allowed through the demon's trap door. In a Brownian ratchet, forces tending to turn the ratchet one way are able to do so while forces in the other direction are not. A diode in a heat bath allows through currents in one direction and not the other. These schemes typically fail in two ways: either maintaining the unidirectionality costs energy (requiring Maxwell's demon to perform more thermodynamic work to gauge the speed of the molecules than the amount of energy gained by the difference of temperature caused) or the unidirectionality is an illusion and occasional big violations make up for the frequent small non-violations (the Brownian ratchet will be subject to internal Brownian forces and therefore will sometimes turn the wrong way). Buoyancy is another frequently misunderstood phenomenon. Some proposed perpetual-motion machines miss the fact that pushing a volume of air down in a fluid takes the same work as raising a corresponding volume of fluid up against gravity. These types of machines may involve two chambers with pistons, and a mechanism to squeeze the air out of the top chamber into the bottom one, which then becomes buoyant and floats to the top. The squeezing mechanism in these designs would not be able to do enough work to move the air down, or would leave no excess work available to be extracted.

Patents

Proposals for such inoperable machines have become so common that the United States Patent and Trademark Office (USPTO) has made an official policy of refusing to grant patents for perpetual motion machines without a working model; the policy is set out in the USPTO Manual of Patent Examining Procedure. The filing of a patent application is a clerical task, and the USPTO will not refuse filings for perpetual motion machines; the application will be filed and then most probably rejected by the patent examiner after a formal examination. Even if a patent is granted, it does not mean that the invention actually works; it just means that the examiner believes that it works, or was unable to figure out why it would not work.
The United Kingdom Patent Office has a specific practice on perpetual motion, addressed in Section 4.05 of the UKPO Manual of Patent Practice. Examples of decisions by the UK Patent Office to refuse patent applications for perpetual motion machines include Decision BL O/044/06 (John Frederick Willmott's application no. 0502841) and Decision BL O/150/06 (Ezra Shimshi's application no. 0417271). The European Patent Classification (ECLA) has classes including patent applications on perpetual motion systems: ECLA classes "F03B17/04: Alleged perpetua mobilia" and "F03B17/00B: [... machines or engines] (with closed loop circulation or similar : ... Installations wherein the liquid circulates in a closed loop; Alleged perpetua mobilia of this or similar kind".

Apparent perpetual motion machines

As a perpetual motion machine can only be defined in a finite isolated system with discrete parameters, and since true isolated systems do not exist (among other things, due to quantum uncertainty), "perpetual motion" in the context of this article is better defined as a "perpetual motion machine", since a machine is "a mechanically, electrically, or electronically operated device for performing a task", whereas "motion" is simply movement (such as Brownian motion). Distinctions aside, on the macro scale, there are concepts and technical drafts that propose "perpetual motion", but on closer analysis it is revealed that they actually "consume" some sort of natural resource or latent energy, such as the phase changes of water or other fluids or small natural temperature gradients, or simply cannot sustain indefinite operation. In general, extracting work from these devices is impossible.

Resource consuming

Some examples of such devices include: The drinking bird toy functions using small ambient temperature gradients and evaporation. It runs until all water is evaporated. A capillary action-based water pump functions using small ambient temperature gradients and vapour pressure differences. With the "capillary bowl", it was thought that the capillary action would keep the water flowing in the tube, but since the cohesion force that draws the liquid up the tube in the first place holds the droplet from releasing into the bowl, the flow is not perpetual. A Crookes radiometer consists of a partial vacuum glass container with a lightweight propeller moved by (light-induced) temperature gradients. Any device picking up minimal amounts of energy from the natural electromagnetic radiation around it, such as a solar-powered motor. Any device powered by changes in air pressure, such as some clocks (Cox's timepiece, Beverly Clock); the motion leeches energy from moving air, which in turn gained its energy from external influences acting on it. A heat pump, because it has a COP above 1: the energy it consumes as work is less than the energy it moves as heat. The Atmos clock uses changes in the vapor pressure of ethyl chloride with temperature to wind the clock spring. A device powered by induced nuclear reactions or by radioactive decay from an isotope with a relatively long half-life; such a device could plausibly operate for hundreds or thousands of years. Devices such as the Oxford Electric Bell are driven by dry pile batteries.

Low friction

In flywheel energy storage, "modern flywheels can have a zero-load rundown time measurable in years". Once spun up, objects in the vacuum of space (stars, black holes, planets, moons, spin-stabilized satellites, etc.) dissipate energy very slowly, allowing them to spin for long periods.
Tides on Earth are dissipating the gravitational energy of the Moon/Earth system at an average rate of about 3.75 terawatts. In certain quantum-mechanical systems (such as superfluidity and superconductivity), very low friction movement is possible. However, the motion stops when the system reaches an equilibrium state (e.g. all the liquid helium arrives at the same level). Similarly, seemingly entropy-reversing effects like superfluids climbing the walls of containers operate by ordinary capillary action.

Thought experiments

In some cases a thought experiment appears to suggest that perpetual motion may be possible through accepted and understood physical processes. However, in all cases, a flaw has been found when all of the relevant physics is considered. Examples include: Maxwell's demon: This was originally proposed to show that the second law of thermodynamics applied in the statistical sense only, by postulating a "demon" that could select energetic molecules and extract their energy. Subsequent analysis (and experiment) have shown there is no way to physically implement such a system that does not result in an overall increase in entropy. Brownian ratchet: In this thought experiment, one imagines a paddle wheel connected to a ratchet. Brownian motion would cause surrounding gas molecules to strike the paddles, but the ratchet would only allow the wheel to turn in one direction. A more thorough analysis showed that when a physical ratchet was considered at this molecular scale, Brownian motion would also affect the ratchet and cause it to randomly fail, resulting in no net gain. Thus, the device would not violate the laws of thermodynamics. Vacuum energy and zero-point energy: In order to explain effects such as virtual particles and the Casimir effect, many formulations of quantum physics include a background energy which pervades empty space, known as vacuum or zero-point energy. The ability to harness zero-point energy for useful work is considered pseudoscience by the scientific community at large. Inventors have proposed various methods for extracting useful work from zero-point energy, but none have been found to be viable, no claims for extraction of zero-point energy have ever been validated by the scientific community, and there is no evidence that zero-point energy can be used in violation of conservation of energy. Ellipsoid paradox: This paradox considers a perfectly reflecting cavity with two black bodies at points A and B. The reflecting surface is composed of two elliptical sections E1 and E2 and a spherical section S, and the bodies at A and B are located at the joint foci of the two ellipses, with B at the center of S. This configuration is such that the black body at B apparently heats up relative to A: the radiation originating from the black body at A will land on and be absorbed by the black body at B. Similarly, rays originating from point B that land on E1 and E2 will be reflected to A. However, a significant proportion of rays that start from B will land on S and be reflected back to B. The paradox is resolved when the finite sizes of the black bodies are considered instead of point-like black bodies.

Conspiracy theories

Despite being dismissed as pseudoscientific, perpetual motion machines have become the focus of conspiracy theories, alleging that they are being hidden from the public by corporations or governments, who would lose economic control if a power source capable of producing energy cheaply were made available.
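The constraint that rules out machines of the second kind, described under Classification above, can be stated compactly with the Clausius inequality; the following is a standard textbook formulation rather than an analysis of any particular proposed device. For any cyclic process,

\[ \oint \frac{\delta Q}{T} \le 0, \]

so an engine operating in a cycle while exchanging heat \(Q\) with only a single reservoir at temperature \(T\) satisfies \(Q/T \le 0\), and since internal energy is unchanged over a cycle, the net work is \(W = Q \le 0\): no net work can be extracted from a single reservoir. Useful work requires at least two reservoirs, with efficiency bounded by the Carnot limit \(\eta \le 1 - T_c/T_h\).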
Physical sciences
Classical mechanics
Physics
54099
https://en.wikipedia.org/wiki/Pantothenic%20acid
Pantothenic acid
Pantothenic acid (vitamin B5) is a B vitamin and an essential nutrient. All animals need pantothenic acid in order to synthesize coenzyme A (CoA), which is essential for cellular energy production and for the synthesis and degradation of proteins, carbohydrates, and fats. Pantothenic acid is the combination of pantoic acid and β-alanine. Its name comes from the Greek pantothen, meaning "from everywhere", because pantothenic acid, at least in small amounts, is in almost all foods. Deficiency of pantothenic acid is very rare in humans. In dietary supplements and animal feed, the form commonly used is calcium pantothenate, because chemically it is more stable, and hence makes for longer product shelf-life, than sodium pantothenate and free pantothenic acid.

Definition

Pantothenic acid is a water-soluble vitamin, one of the B vitamins. It is synthesized from the amino acid β-alanine and pantoic acid (see biosynthesis and structure of coenzyme A figures). Unlike vitamin E or vitamin K, which occur in several chemically related forms known as vitamers, pantothenic acid is only one chemical compound. It is a starting compound in the synthesis of coenzyme A (CoA), a cofactor for many enzyme processes.

Use in biosynthesis of coenzyme A

Pantothenic acid is a precursor to CoA via a five-step process. The biosynthesis requires pantothenic acid, cysteine, and four equivalents of ATP (see figure). Pantothenic acid is phosphorylated to 4′-phosphopantothenate by the enzyme pantothenate kinase. This is the committed step in CoA biosynthesis and requires ATP. A cysteine is added to 4′-phosphopantothenate by the enzyme phosphopantothenoylcysteine synthetase to form 4'-phospho-N-pantothenoylcysteine (PPC). This step is coupled with ATP hydrolysis. PPC is decarboxylated to 4′-phosphopantetheine by phosphopantothenoylcysteine decarboxylase. 4′-Phosphopantetheine is adenylated (or more properly, AMPylated) to form dephospho-CoA by the enzyme phosphopantetheine adenylyl transferase. Finally, dephospho-CoA is phosphorylated to coenzyme A by the enzyme dephosphocoenzyme A kinase. This final step also requires ATP. This pathway is suppressed by end-product inhibition, meaning that CoA is a competitive inhibitor of pantothenate kinase, the enzyme responsible for the first step. Coenzyme A is necessary in the reaction mechanism of the citric acid cycle. This process is the body's primary catabolic pathway and is essential in breaking down the building blocks of the cell such as carbohydrates, amino acids and lipids, for fuel. CoA is important in energy metabolism for pyruvate to enter the tricarboxylic acid cycle (TCA cycle) as acetyl-CoA, and for α-ketoglutarate to be transformed to succinyl-CoA in the cycle. CoA is also required for acylation and acetylation, which, for example, are involved in signal transduction and various enzyme functions. In addition to functioning as CoA, this compound can act as an acyl group carrier to form acetyl-CoA and other related compounds; this is a way to transport carbon atoms within the cell. CoA is also required in the formation of acyl carrier protein (ACP), which is required for fatty acid synthesis. Its synthesis also connects with other vitamins such as thiamin and folic acid.

Dietary recommendations

The US Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for B vitamins in 1998. At that time, there was not sufficient information to establish EARs and RDAs for pantothenic acid.
In instances such as this, the Board sets Adequate Intakes (AIs), with the understanding that at some later date, AIs may be replaced by more exact information. The current AI for teens and adults ages 14 and up is 5 mg/day. This was based in part on the observation that for a typical diet, urinary excretion was approximately 2.6 mg/day, and that bioavailability of food-bound pantothenic acid was roughly 50%. AI for pregnancy is 6 mg/day. AI for lactation is 7 mg/day. For infants up to 12 months, the AI is 1.8 mg/day. For children ages 1–13 years, the AI increases with age from 2 to 4 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). While for many nutrients, the US Department of Agriculture uses food composition data combined with food consumption survey results to estimate average consumption, the surveys and reports do not include pantothenic acid in the analyses. Less formal estimates of adult daily intakes report about 4 to 7 mg/day. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the US. For women and men over age 11, the Adequate Intake (AI) is set at 5 mg/day. AI for pregnancy is 5 mg/day, for lactation 7 mg/day. For children ages 1–10 years, the AI is 4 mg/day. These AIs are similar to the US AIs. Safety As for safety, the IOM sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of pantothenic acid, there is no UL, as there is no human data for adverse effects from high doses. The EFSA also reviewed the safety question and reached the same conclusion as in the United States – that there was not sufficient evidence to set a UL for pantothenic acid. Labeling requirements For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For pantothenic acid labeling purposes, 100% of the Daily Value was 10 mg, but as of May 2016 it was revised to 5 mg to bring it into agreement with the AI. Compliance with the updated labeling regulations was required by January 2020 for manufacturers with US$10 million or more in annual food sales, and by January 2021 for manufacturers with lower volume food sales. A table of the old and new adult daily values is provided at Reference Daily Intake. Sources Dietary Food sources of pantothenic acid include animal-sourced foods, including dairy foods and eggs. Potatoes, tomato products, oat-cereals, sunflower seeds, avocado are good plant sources. Mushrooms are good sources, too. Whole grains are another source of the vitamin, but milling to make white rice or white flour removes much of the pantothenic acid, as it is found in the outer layers of whole grains. In animal feeds, the most important sources are alfalfa, cereal, fish meal, peanut meal, molasses, rice bran, wheat bran, and yeasts. Supplements Dietary supplements of pantothenic acid commonly use pantothenol (or panthenol), a shelf-stable analog, which is converted to pantothenic acid once consumed. Calcium pantothenate – a salt – may be used in manufacturing because it is more resistant than pantothenic acid to factors that deteriorate stability, such as acid, alkali or heat. 
Dietary supplement products may contain up to 1,000 mg of pantothenic acid (200 times the Adequate Intake level for adults), without evidence that such large amounts provide any benefit. According to WebMD, pantothenic acid supplements have a long list of claimed uses, but there is insufficient scientific evidence to support any of them. As a dietary supplement, pantothenic acid is not the same as pantethine, which is composed of two pantothenic acid molecules linked by a disulfide bridge. Sold as a high-dose supplement (600 mg), pantethine may be effective for lowering blood levels of LDL cholesterol – a risk factor for cardiovascular diseases – but its long-term effects are unknown, so use should be supervised by a physician. Dietary supplementation with pantothenic acid does not have the same cholesterol-lowering effect as pantethine.

Fortification

According to the Global Fortification Data Exchange, pantothenic acid deficiency is so rare that no countries require that foods be fortified.

Absorption, metabolism and excretion

When found in foods, most pantothenic acid is in the form of CoA or bound to acyl carrier protein (ACP). For the intestinal cells to absorb this vitamin, it must be converted into free pantothenic acid. Within the lumen of the intestine, CoA and ACP are hydrolyzed into 4'-phosphopantetheine. The 4'-phosphopantetheine is then dephosphorylated into pantetheine. Pantetheinase, an intestinal enzyme, then hydrolyzes pantetheine into free pantothenic acid. Free pantothenic acid is absorbed into intestinal cells via a saturable, sodium-dependent active transport system. At high levels of intake, when this mechanism is saturated, some pantothenic acid may also be absorbed via passive diffusion. As a whole, when intake increases 10-fold, the absorption rate decreases to 10%. Pantothenic acid is excreted in urine. This occurs after its release from CoA. Urinary amounts are on the order of 2.6 mg/day, but decreased to negligible amounts when subjects in multi-week experimental studies were fed diets devoid of the vitamin.

Deficiency

Pantothenic acid deficiency in humans is very rare and has not been thoroughly studied. In the few cases where deficiency has been seen (prisoners of war during World War II, victims of starvation, or limited volunteer trials), nearly all symptoms were reversed with orally administered pantothenic acid. Symptoms of deficiency are similar to other vitamin B deficiencies. There is impaired energy production, due to low CoA levels, which could cause symptoms of irritability, fatigue, and apathy. Acetylcholine synthesis is also impaired; therefore, neurological symptoms can also appear in deficiency; they include sensation of numbness in hands and feet, paresthesia and muscle cramps. Additional symptoms could include restlessness, malaise, sleep disturbances, nausea, vomiting and abdominal cramps. In animals, symptoms include disorders of the nervous, gastrointestinal, and immune systems, reduced growth rate, decreased food intake, skin lesions and changes in hair coat, and alterations in lipid and carbohydrate metabolism. In rodents, there can be loss of hair color, which led to marketing of pantothenic acid as a dietary supplement which could prevent or treat graying of hair in humans (despite the lack of any human trial evidence). Pantothenic acid status can be assessed by measuring either whole blood concentration or 24-hour urinary excretion.
In humans, whole blood values less than 1 μmol/L are considered low, as is urinary excretion of less than 4.56 μmol/day.

Animal nutrition

Calcium pantothenate and dexpanthenol (D-panthenol) are European Food Safety Authority (EFSA) approved additives to animal feed. Supplementation is on the order of 8–20 mg/kg for pigs, 10–15 mg/kg for poultry, 30–50 mg/kg for fish and 8–14 mg/kg feed for pets. These are recommended concentrations, designed to be higher than what are thought to be requirements. There is some evidence that feed supplementation increases pantothenic acid concentration in tissues, i.e., meat, consumed by humans, and also for eggs, but this raises no concerns for consumer safety. No dietary requirement for pantothenic acid has been established in ruminant species. Synthesis of pantothenic acid by ruminal microorganisms appears to be 20 to 30 times greater than dietary amounts. Net microbial synthesis of pantothenic acid in the rumen of steer calves has been estimated to be 2.2 mg/kg of digestible organic matter consumed per day. Supplementation of pantothenic acid at 5 to 10 times theoretical requirements did not improve growth performance of feedlot cattle.

Synthesis

Biosynthesis

Bacteria synthesize pantothenic acid from the amino acid aspartate and a precursor to the amino acid valine. Aspartate is converted to β-alanine. The amino group of valine is replaced by a keto-moiety to yield α-ketoisovalerate, which, in turn, forms α-ketopantoate following transfer of a methyl group, then D-pantoate (also known as pantoic acid) following reduction. β-Alanine and pantoic acid are then condensed to form pantothenic acid (see figure).

Industrial synthesis

The industrial synthesis of pantothenic acid starts with the aldol condensation of isobutyraldehyde and formaldehyde. The resulting hydroxypivaldehyde is converted to its cyanohydrin derivative, which is cyclised to give racemic pantolactone. This sequence of reactions was first published in 1904. Synthesis of the vitamin is completed by resolution of the lactone using quinine, for example, followed by treatment with the calcium or sodium salt of β-alanine.

History

The term vitamin is derived from the word vitamine, which was coined in 1912 by Polish biochemist Casimir Funk, who isolated a complex of water-soluble micronutrients essential to life, all of which he presumed to be amines. When this presumption was later determined not to be true, the "e" was dropped from the name, hence "vitamin". Vitamin nomenclature was alphabetical, with Elmer McCollum calling the fat-soluble factor A and the water-soluble factor B. Over time, eight chemically distinct, water-soluble B vitamins were isolated and numbered, with pantothenic acid as vitamin B5. The essential nature of pantothenic acid was discovered by Roger J. Williams in 1933 by showing it was required for the growth of yeast. Three years later Elvehjem and Jukes demonstrated that it was a growth and anti-dermatitis factor in chickens. Williams dubbed the compound "pantothenic acid", deriving the name from the Greek word pantothen, which translates as "from everywhere". His reason was that he found it to be present in almost every food he tested. Williams went on to determine the chemical structure in 1940. In 1953, Fritz Lipmann shared the Nobel Prize in Physiology or Medicine "for his discovery of co-enzyme A and its importance for intermediary metabolism", work he had published in 1946.
Biology and health sciences
Vitamins
Health
54104
https://en.wikipedia.org/wiki/Vitamin%20E
Vitamin E
Vitamin E is a group of eight compounds related in molecular structure that includes four tocopherols and four tocotrienols. The tocopherols function as fat-soluble antioxidants which may help protect cell membranes from reactive oxygen species. Vitamin E is classified as an essential nutrient for humans. Various government organizations recommend that adults consume between 3 and 15 mg per day, while a 2016 worldwide review reported a median dietary intake of 6.2 mg per day. Sources rich in vitamin E include seeds, nuts, seed oils, peanut butter, vitamin E–fortified foods, and dietary supplements. Symptomatic vitamin E deficiency is rare, usually caused by an underlying problem with digesting dietary fat rather than from a diet low in vitamin E. Deficiency can cause neurological disorders. Tocopherols and tocotrienols both occur in α (alpha), β (beta), γ (gamma), and δ (delta) forms, as determined by the number and position of methyl groups on the chromanol ring. All eight of these vitamers feature a chromane double ring, with a hydroxyl group that can donate a hydrogen atom to reduce free radicals, and a hydrophobic side chain that allows for penetration into biological membranes. Both natural and synthetic tocopherols are subject to oxidation, so dietary supplements are esterified, creating tocopheryl acetate for stability purposes. Population studies have suggested that people who consumed foods with more vitamin E, or who chose on their own to consume a vitamin E dietary supplement, had lower incidence of cardiovascular diseases, cancer, dementia, and other diseases. However, placebo-controlled clinical trials using alpha-tocopherol as a supplement, with daily amounts as high as 2,000 mg per day, could not always replicate these findings. In the United States, vitamin E supplement use peaked around 2002, but had declined by over 50% by 2006. Declining use was theorized to be due to publications of meta-analyses that showed either no benefits or actual negative consequences from high-dose vitamin E. Vitamin E was discovered in 1922, isolated in 1935, and first synthesized in 1938. Because the vitamin activity was first identified as essential for fertilized eggs to result in live births (in rats), it was given the name "tocopherol" from Greek words meaning birth and to bear or carry. Alpha-tocopherol, either naturally extracted from plant oils or, most commonly, as the synthetic tocopheryl acetate, is sold as a popular dietary supplement, either by itself or incorporated into a multivitamin product, and in oils or lotions for use on skin. Chemistry The nutritional content of vitamin E is defined by equivalency to 100% RRR-configuration α-tocopherol activity. The molecules that contribute α-tocopherol activity are four tocopherols and four tocotrienols, within each group of four identified by the prefixes alpha- (α-), beta- (β-), gamma- (γ-), and delta- (δ-). For alpha(α)-tocopherol each of the three "R" sites has a methyl group (CH3) attached. For beta(β)-tocopherol: R1 = methyl group, R2 = H, R3 = methyl group. For gamma(γ)-tocopherol: R1 = H, R2 = methyl group, R3 = methyl group. For delta(δ)-tocopherol: R1 = H, R2 = H, R3 = methyl group. The same configurations exist for the tocotrienols, except that the unsaturated side chain has three carbon-carbon double bonds whereas the tocopherols have a saturated side chain. 
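The substitution patterns just listed can be restated compactly. The sketch below is purely illustrative (the dictionary name and printout are not from any standard library); "Me" stands for a methyl group (CH3) and "H" for hydrogen at the ring positions R1, R2 and R3 described above.

```python
# Illustrative only: chromanol-ring substitution (R1, R2, R3) for the four
# forms described above. "Me" = methyl group (CH3), "H" = hydrogen.
RING_SUBSTITUTION = {
    "alpha": ("Me", "Me", "Me"),
    "beta":  ("Me", "H",  "Me"),
    "gamma": ("H",  "Me", "Me"),
    "delta": ("H",  "H",  "Me"),
}

# The same patterns apply to tocopherols (saturated side chain) and
# tocotrienols (side chain with three carbon-carbon double bonds).
for form, (r1, r2, r3) in RING_SUBSTITUTION.items():
    print(f"{form}: R1={r1}, R2={r2}, R3={r3}")
```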
Stereoisomers In addition to distinguishing tocopherols and tocotrienols by position of methyl groups, the tocopherols have a phytyl tail with three chiral points or centers that can have a right or left orientation. The naturally occurring plant form of alpha-tocopherol is RRR-α-tocopherol, also referred to as d-tocopherol, whereas the synthetic form (all-racemic or all-rac vitamin E, also dl-tocopherol) is equal parts of eight stereoisomers RRR, RRS, RSS, SSS, RSR, SRS, SRR and SSR with progressively decreasing biological equivalency, so that 1.36 mg of dl-tocopherol is considered equivalent to 1.0 mg of d-tocopherol, the natural form. Rephrased, the synthetic has 73.5% of the potency of the natural. Tocopherols Alpha-tocopherol is a fat-soluble antioxidant functioning within the glutathione peroxidase pathway, and protecting cell membranes from oxidation by reacting with lipid radicals produced in the lipid peroxidation chain reaction. This removes the free radical intermediates and prevents the oxidation reaction from continuing. The oxidized α-tocopheroxyl radicals produced in this process may be recycled back to the active reduced form through reduction by other antioxidants, such as ascorbate, retinol or ubiquinol. Other forms of vitamin E have their own unique properties; for example, γ-tocopherol is a nucleophile that can react with electrophilic mutagens. Tocotrienols The four tocotrienols (alpha, beta, gamma, delta) are similar in structure to the four tocopherols, with the main difference being that the former have hydrophobic side chains with three carbon-carbon double bonds, whereas the tocopherols have saturated side chains. For alpha(α)-tocotrienol each of the three "R" sites has a methyl group (CH3) attached. For beta(β)-tocotrienol: R1 = methyl group, R2 = H, R3 = methyl group. For gamma(γ)-tocotrienol: R1 = H, R2 = methyl group, R3 = methyl group. For delta(δ)-tocotrienol: R1 = H, R2 = H, R3 = methyl group. Tocotrienols have only a single chiral center, which exists at the 2' chromanol ring carbon, at the point where the isoprenoid tail joins the ring. The other two corresponding centers in the phytyl tail of the corresponding tocopherols do not exist as chiral centers for tocotrienols due to unsaturation (C-C double bonds) at these sites. Tocotrienols extracted from plants are always dextrorotatory stereoisomers, signified as d-tocotrienols. In theory, levorotatory forms of tocotrienols (l-tocotrienols) could exist as well, which would have a 2S rather than 2R configuration at the molecules' single chiral center, but unlike synthetic dl-alpha-tocopherol, the marketed tocotrienol dietary supplements are extracted from palm oil or rice bran oil. Tocotrienols are not essential nutrients; government organizations have not specified an estimated average requirement or recommended dietary allowance. A number of health benefits of tocotrienols have been proposed, including decreased risk of age-associated cognitive impairment, heart disease and cancer. Reviews of human research linked tocotrienol treatment to improved biomarkers for inflammation and cardiovascular disease, although those reviews did not report any information on clinically significant disease outcomes. Biomarkers for other diseases were not affected by tocotrienol supplementation. Functions Vitamin E may have various roles as a vitamin. Many biological functions have been postulated, including a role as a lipid-soluble antioxidant. 
In this role, vitamin E acts as a radical scavenger, delivering a hydrogen (H) atom to free radicals. At 323 kJ/mol, the O-H bond in tocopherols is about 10% weaker than in most other phenols. This weak bond allows the vitamin to donate a hydrogen atom to the peroxyl radical and other free radicals, minimizing their damaging effect. The thus-generated tocopheryl radical is recycled to tocopherol by a redox reaction with a hydrogen donor, such as vitamin C. Vitamin E affects gene expression and is an enzyme activity regulator, such as for protein kinase C (PKC) – which plays a role in smooth muscle growth – with vitamin E participating in deactivation of PKC to inhibit smooth muscle growth. Synthesis Biosynthesis Photosynthesizing plants, algae, and cyanobacteria synthesize tocochromanols, the chemical family of compounds made up of four tocopherols and four tocotrienols; in a nutrition context this family is referred to as Vitamin E. Biosynthesis starts with formation of the closed-ring part of the molecule as homogentisic acid (HGA). The side chain is attached (saturated for tocopherols, polyunsaturated for tocotrienols). The pathway for both is the same, so that gamma- is created and from that alpha-, or delta- is created and from that the beta- compounds. Biosynthesis takes place in the plastids. The main reason plants synthesize tocochromanols appears to be for antioxidant activity. Different parts of plants, and different species, are dominated by different tocochromanols. The predominant form in leaves, and hence leafy green vegetables, is α-tocopherol. Located in chloroplast membranes in close proximity to the photosynthetic process, they protect against damage from the ultraviolet radiation of sunlight. Under normal growing conditions, the presence of α-tocopherol does not appear to be essential, as there are other photo-protective compounds; plants that, through mutations, have lost the ability to synthesize α-tocopherol demonstrate normal growth. However, under stressed growing conditions such as drought, elevated temperature, or salt-induced oxidative stress, the plants' physiological status is superior if it has the normal synthesis capacity. Seeds are lipid-rich to provide energy for germination and early growth. Tocochromanols protect the seed lipids from oxidizing and becoming rancid. The presence of tocochromanols extends seed longevity and promotes successful germination and seedling growth. Gamma-tocopherol dominates in seeds of most plant species, but there are exceptions. For canola, corn and soy bean oils, there is more γ-tocopherol than α-tocopherol, but for safflower, sunflower and olive oils the reverse is true. Of the commonly used food oils, palm oil is unique in that tocotrienol content is higher than tocopherol content. Seed tocochromanols content is also dependent on environmental stressors. In almonds, for example, drought or elevated temperature increase α-tocopherol and γ-tocopherol content of the nuts. Drought increases the tocopherol content of olives, and heat likewise for soybeans. Vitamin E biosynthesis occurs in the plastid and goes through two different pathways: the Shikimate pathway and the Methylerythritol Phosphate pathway (MEP pathway). The Shikimate pathway generates the chromanol ring from the Homogentisic Acid (HGA), and the MEP pathway produces the hydrophobic tail which differs between tocopherol and tocotrienol. The synthesis of the specific tail is dependent on which molecule it originates from. 
In a tocopherol, the phytyl tail stems from phytyl diphosphate, while the unsaturated prenyl tail of a tocotrienol emerges from the geranylgeranyl diphosphate (GGDP) group.

Industrial synthesis

The synthetic product is all-rac-alpha-tocopherol, also referred to as dl-alpha-tocopherol. It consists of eight stereoisomers (RRR, RRS, RSS, RSR, SRR, SSR, SRS and SSS) in equal quantities. "It is synthesized from a mixture of toluene and 2,3,5-trimethyl-hydroquinone that reacts with isophytol to all-rac-alpha-tocopherol, using iron in the presence of hydrogen chloride gas as catalyst. The reaction mixture obtained is filtered and extracted with aqueous caustic soda. Toluene is removed by evaporation and the residue (all rac-alpha-tocopherol) is purified by vacuum distillation." The natural alpha-tocopherol extracted from plants is RRR-alpha-tocopherol, referred to as d-alpha-tocopherol. The synthetic has 73.5% of the potency of the natural. Manufacturers of dietary supplements and fortified foods for humans or domesticated animals convert the phenol form of the vitamin to an ester using either acetic acid or succinic acid because the esters are more chemically stable, providing for a longer shelf-life.

Deficiency

A worldwide summary of more than one hundred human studies reported a median of 22.1 μmol/L for serum α-tocopherol and defined α-tocopherol deficiency as less than 12 μmol/L. It cited a recommendation that serum α-tocopherol concentration be ≥30 μmol/L to optimize health benefits. In contrast, the U.S. Dietary Reference Intake text for vitamin E concluded that a plasma concentration of 12 μmol/L was sufficient to achieve normal ex vivo hydrogen peroxide-induced hemolysis. A 2014 review defined less than 9 μmol/L as deficient, 9–12 μmol/L as marginal, and greater than 12 μmol/L as adequate. Regardless of which definition is used, vitamin E deficiency is rare in humans, occurring as a consequence of abnormalities in dietary fat absorption or metabolism rather than from a diet low in vitamin E. Cystic fibrosis and other fat malabsorption conditions can result in low serum vitamin E. One example of a genetic abnormality in metabolism is mutations of genes coding for alpha-tocopherol transfer protein (α-TTP). Humans with this genetic defect exhibit a progressive neurodegenerative disorder known as ataxia with vitamin E deficiency (AVED) despite consuming normal amounts of vitamin E. Large amounts of alpha-tocopherol as a dietary supplement are needed to compensate for the lack of α-TTP. Bariatric surgery as a treatment for obesity can lead to vitamin deficiencies. Long-term follow-up reported a 16.5% prevalence of vitamin E deficiency. There are guidelines for multivitamin supplementation, but adherence rates are reported to be less than 20%. Vitamin E deficiency due to either malabsorption or metabolic anomaly can cause nerve problems from poor conduction of electrical impulses along nerves, owing to changes in nerve membrane structure and function. In addition to ataxia, vitamin E deficiency can cause peripheral neuropathy, myopathies, retinopathy, and impairment of immune responses.

Drug interactions

The amounts of alpha-tocopherol, other tocopherols, and tocotrienols that are components of dietary vitamin E, when consumed from foods, do not appear to cause any interactions with drugs. Consumption of alpha-tocopherol as a dietary supplement in amounts in excess of 300 mg/day may lead to interactions with aspirin, warfarin, tamoxifen and cyclosporine A in ways that alter function.
For aspirin and warfarin, high amounts of vitamin E may potentiate anti-blood clotting action. In multiple clinical trials, vitamin E lowered blood concentration of the immunosuppressant medication cyclosporine A. The US National Institutes of Health, Office of Dietary Supplements, raises a concern that co-administration of vitamin E could counter the mechanisms of anti-cancer radiation therapy and some types of chemotherapy, and so advises against its use in these patient populations. The references it cites report instances of reduced treatment adverse effects, but also poorer cancer survival, raising the possibility of tumor protection from the intended oxidative damage by the treatments.

Dietary recommendations

The U.S. National Academy of Medicine updated estimated average requirements (EARs) and recommended dietary allowances (RDAs) for vitamin E in 2000. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. Adequate intakes (AIs) are identified when there is not sufficient information to set EARs and RDAs. The EAR for vitamin E for women and men ages 14 and up is 12 mg/day. The RDA is 15 mg/day. As for safety, tolerable upper intake levels ("upper limits" or ULs) are set for vitamins and minerals when evidence is sufficient. Hemorrhagic effects in rats were selected as the critical endpoint, and the upper limit was calculated by starting from the lowest-observed-adverse-effect level. The result was a human upper limit set at 1000 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes. The European Food Safety Authority (EFSA) refers to the collective set of information as dietary reference values, with population reference intakes (PRIs) instead of RDAs, and average requirements instead of EARs. AIs and ULs are defined the same as in the United States. For women and men ages 10 and older, the PRIs are set at 11 and 13 mg/day, respectively. PRI for pregnancy is 11 mg/day, for lactation 11 mg/day. For children ages 1–9 years the PRIs increase with age from 6 to 9 mg/day. The EFSA used an effect on blood clotting as a safety-critical effect. It identified that no adverse effects were observed in a human trial at 540 mg/day, used an uncertainty factor of 2 to derive an upper limit of half of that, then rounded to 300 mg/day. The People's Republic of China publishes dietary guidelines without specifics for individual vitamins or minerals. The United Kingdom recommends 4 mg/day for adult men and 3 mg/day for adult women. The Japan National Institute of Health and Nutrition set adult AIs at 6.5 mg/day (females) and 7.0 mg/day (males), with upper limits of 650–700 mg/day (females) and 750–900 mg/day (males), the amounts depending on age. India recommends an adult intake of 7.5–10 mg/day and does not set an upper limit. The World Health Organization recommends that adults consume 10 mg/day. Consumption tends to be below these recommendations. A worldwide summary reported a median dietary intake of 6.2 mg/d for alpha-tocopherol.

Food labeling

For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of daily value. For vitamin E labeling purposes 100% of the daily value was 30 international units (IUs), but as of May 2016, it was revised to 15 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake.
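As a worked illustration of the US label arithmetic just described (the serving amount here is hypothetical), a product providing 6 mg of alpha-tocopherol per serving would, under the post-2016 Daily Value of 15 mg, be labeled as

\[ \%\text{DV} = \frac{6\ \text{mg}}{15\ \text{mg}} \times 100\% = 40\%. \]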
European Union regulations require that labels declare energy, protein, fat, saturated fat, carbohydrates, sugars, and salt. Voluntary nutrients may be shown if present in significant amounts. Instead of daily values, amounts are shown as percent of reference intakes (RIs). For vitamin E, 100% RI was set at 12 mg in 2011. The international unit measurement was used by the United States in 1968–2016. 1 IU is the biological equivalent of about 0.667 mg d (RRR)-alpha-tocopherol (2/3 mg exactly), or of 0.90 mg of dl-alpha-tocopherol, corresponding to the then-measured relative potency of stereoisomers. In May 2016, the measurements were revised, such that 1 mg of "Vitamin E" is 1 mg of d-alpha-tocopherol or 2 mg of dl-alpha-tocopherol. The change was originally started in 2000, when forms of vitamin E other than alpha-tocopherol were dropped from dietary calculations by the IOM. The UL amount disregards any conversion. The EFSA has never used an IU unit, and their measurement only considers RRR-alpha-tocopherol. Sources Of the different forms of vitamin E, gamma-tocopherol (γ-tocopherol) is the most common form found in the North American diet, but alpha-tocopherol (α-tocopherol) is the most biologically active. The U.S. Department of Agriculture (USDA), Agricultural Research Services, maintains a food composition database. The last major revision was Release 28, September 2015. Common naturally occurring vitamin E sources are shown in the table, as are some alpha-tocopherol fortified sources such as ready-to-eat cereals, infant formulas, and liquid nutrition products. Tocotrienols occur in some food sources, the richest being palm oil, and to a lesser extent rice bran oil, barley, oats, and certain seeds, nuts and grains, and the oils derived from them. Supplements Vitamin E is fat soluble, so dietary supplement products are usually in the form of the vitamin, esterified with acetic acid to generate tocopheryl acetate, and dissolved in vegetable oil in a softgel capsule. For alpha-tocopherol, amounts range from 100 to 1000 IU per serving. Smaller amounts are incorporated into multi-vitamin/mineral tablets. Gamma-tocopherol and tocotrienol supplements are also available from dietary supplement companies. The latter are extracts from palm oil. Fortification The World Health Organization does not have any recommendations for food fortification with vitamin E. The Food Fortification Initiative does not list any countries that have mandatory or voluntary programs for vitamin E. Infant formulas have alpha-tocopherol as an ingredient. In some countries, certain brands of ready-to-eat cereals, liquid nutrition products, and other foods have alpha-tocopherol as an added ingredient. Non-nutrient food additives Various forms of vitamin E are common food additives in oily food, used to deter rancidity caused by peroxidation. Those with an E number include: E306 Tocopherol-rich extract (mixed, natural, can include tocotrienol) E307 Alpha-tocopherol (synthetic) E308 Gamma-tocopherol (synthetic) E309 Delta-tocopherol (synthetic) These E numbers include all racemic forms and acetate esters thereof. Commonly found on food labels in Europe and some other countries, their safety assessment and approval are the responsibility of the European Food Safety Authority. 
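Returning to the unit conversions described under Food labeling, the following minimal Python sketch (illustrative only; the function names are invented for this example) applies the equivalences quoted above: under the pre-2016 convention, 1 IU corresponds to about 0.667 mg of d-alpha-tocopherol or 0.90 mg of dl-alpha-tocopherol, and under the 2016 revision, 1 mg of "vitamin E" equals 1 mg of d-alpha-tocopherol or 2 mg of dl-alpha-tocopherol.

```python
# Illustrative sketch, not an official formula: vitamin E unit conversions
# based on the equivalences quoted in the text above.

MG_PER_IU = {"d": 2 / 3, "dl": 0.90}   # pre-2016 US convention, mg per IU


def iu_to_mg(iu: float, form: str = "d") -> float:
    """Convert international units to milligrams of the given form."""
    return iu * MG_PER_IU[form]


def mg_as_vitamin_e(mg: float, form: str = "d") -> float:
    """Express milligrams of a given form as labeled mg of 'vitamin E'
    under the 2016 revision (the dl form counts at half weight)."""
    return mg if form == "d" else mg / 2


if __name__ == "__main__":
    print(iu_to_mg(30, "d"))           # 30 IU of the natural form = 20.0 mg
    print(mg_as_vitamin_e(20, "dl"))   # 20 mg synthetic = 10.0 mg labeled vitamin E
```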
Absorption, metabolism, excretion

Tocotrienols and tocopherols, the latter including the stereoisomers of synthetic alpha-tocopherol, are absorbed from the intestinal lumen, incorporated into chylomicrons, and secreted into the portal vein, leading to the liver. Absorption efficiency is estimated at 51% to 86%, and that applies to all of the vitamin E family – there is no discrimination among the vitamin E vitamers during absorption. Bile is necessary for chylomicron formation, so disease conditions such as cystic fibrosis result in biliary insufficiency and vitamin E malabsorption. When consumed as an alpha-tocopheryl acetate dietary supplement, absorption is promoted when consumed with a fat-containing meal. Unabsorbed vitamin E is excreted via feces. Additionally, vitamin E is excreted by the liver via bile into the intestinal lumen, where it will either be reabsorbed or excreted via feces, and all of the vitamin E vitamers are metabolized and then excreted via urine. Upon reaching the liver, RRR-alpha-tocopherol is preferentially taken up by alpha-tocopherol transfer protein (α-TTP). All other forms are degraded to 2'-carboxyethyl-6-hydroxychromane (CEHC), a process that involves truncating the phytyl tail of the molecule, then either sulfated or glucuronidated. This renders the molecules water-soluble and leads to excretion via urine. Alpha-tocopherol is also degraded by the same process, to 2,5,7,8-tetramethyl-2-(2'-carboxyethyl)-6-hydroxychromane (α-CEHC), but more slowly because it is partially protected by α-TTP. Large intakes of α-tocopherol result in increased urinary α-CEHC, so this appears to be a means of disposing of excess vitamin E. Alpha-tocopherol transfer protein is coded by the TTPA gene on chromosome 8. The binding site for RRR-α-tocopherol is a hydrophobic pocket with a lower affinity for beta-, gamma-, or delta-tocopherols, or for the stereoisomers with an S configuration at the chiral 2 site. Tocotrienols are also a poor fit because the double bonds in the phytyl tail create a rigid configuration that is a mismatch with the α-TTP pocket. A rare genetic defect of the TTPA gene results in people exhibiting a progressive neurodegenerative disorder known as ataxia with vitamin E deficiency (AVED) despite consuming normal amounts of vitamin E. Large amounts of alpha-tocopherol as a dietary supplement are needed to compensate for the lack of α-TTP. The role of α-TTP is to move α-tocopherol to the plasma membrane of hepatocytes (liver cells), where it can be incorporated into newly created very low density lipoprotein (VLDL) molecules. These convey α-tocopherol to cells in the rest of the body. As an example of a result of the preferential treatment, the US diet delivers approximately 70 mg/d of γ-tocopherol, and plasma concentrations are on the order of 2–5 μmol/L; meanwhile, dietary α-tocopherol is about 7 mg/d, but plasma concentrations are in the range of 11–37 μmol/L.

Medical applications

Vitamin E has been suggested as a supplement for helping many health conditions, mostly due to its antioxidant activity and potential to protect cells from oxidative damage. In the US, the vitamin is widely available as an over-the-counter supplement; however, medical evidence supporting its effectiveness and safety for treating or preventing a variety of health conditions is mixed. Vitamin E can also interact with some medications and other supplements.
Vitamin E has been studied as a treatment for skin health and skin ageing, immune function, and managing conditions like cardiovascular disease or Alzheimer's disease (AD), or certain types of cancer. Most studies have found limited or inconclusive benefits and the potential for some risks. It is most often recommended to obtain vitamin E through a balanced diet because high-dose supplementation may have health risks. There is evidence that the sale of dietary supplement vitamin E decreased by up to 33% following a report showing little or no effect of vitamin E in preventing cancer or cardiovascular disease. In 2022, it was the 244th most commonly prescribed medication in the United States, with more than 1 million prescriptions.

All-cause mortality

Two meta-analyses concluded that as a dietary supplement, vitamin E neither improved nor impaired all-cause mortality. A meta-analysis of long-term clinical trials reported a non-significant 2% increase in all-cause mortality when alpha-tocopherol was the only supplement used. The same journal article reported a statistically significant 3% increase for results when alpha-tocopherol was used in combination with other nutrients (vitamin A, vitamin C, beta-carotene, selenium).

Age-related macular degeneration

A Cochrane review concluded that there were no changes seen for risk of developing age-related macular degeneration (AMD) from long-term vitamin E supplementation and that supplementation may slightly increase the chances of developing late AMD.

Cognitive impairment and Alzheimer's disease

Two meta-analyses reported lower vitamin E blood levels in people with AD compared to healthy, age-matched people. However, a review of vitamin E supplementation trials concluded that there was insufficient evidence to state that supplementation reduced the risk of developing AD or slowed the progression of AD.

Cancer

In a 2022 update of an earlier report, the United States Preventive Services Task Force recommended against the use of vitamin E supplements for the prevention of cardiovascular disease or cancer, concluding there was insufficient evidence to assess the balance of benefits and harms, yet also concluding with moderate certainty that there is no net benefit of supplementation. As for literature on different types of cancer, an inverse relationship between dietary vitamin E and kidney cancer and bladder cancer is seen in observational studies. A large clinical trial reported no difference in bladder cancer cases between treatment and placebo. An inverse relationship between dietary vitamin E and lung cancer was reported in observational studies, but a large clinical trial in male tobacco smokers reported no impact on lung cancer between treatment and placebo, and a trial which tracked people who chose to consume a vitamin E dietary supplement reported an increased risk of lung cancer for those consuming more than 215 mg/day. For prostate cancer, there are also conflicting results. A meta-analysis based on serum alpha-tocopherol content reported an inverse correlation in relative risk, but a second meta-analysis of observational studies reported no such relationship. A large clinical trial with male tobacco smokers reported a 32% decrease in the incidence of prostate cancer, but the SELECT trial of selenium or vitamin E for prostate cancer enrolled men ages 55 or older and reported a relative risk 17% higher for the vitamin group.
For colorectal cancer, a systematic review of randomized clinical trials and the large SELECT trial reported no statistically significant change in relative risk. The Women's Health Study reported no significant differences for incidences of all types of cancer, cancer deaths, or specifically for breast, lung or colon cancers. Potential confounding factors are the form of vitamin E used in prospective studies and the amounts. Synthetic, racemic mixtures of vitamin E isomers are not bioequivalent to natural, non-racemic mixtures, yet are widely used in clinical trials and as dietary supplement ingredients. One review reported a modest increase in cancer risk with vitamin E supplementation while stating that more than 90% of the cited clinical trials used the synthetic, racemic form dl-alpha-tocopherol. Cancer health claims The U.S. Food and Drug Administration initiated a process of reviewing and approving food and dietary supplement health claims in 1993. Reviews of petitions results in proposed claims being rejected or approved. If approved, specific wording is allowed on package labels. In 1999, a second process for claims review was created. If there is not a scientific consensus on the totality of the evidence, a Qualified Health Claim (QHC) may be established. The FDA does not "approve" qualified health claim petitions. Instead, it issues a Letter of Enforcement Discretion that includes very specific claim language and the restrictions on using that wording. The first QHCs relevant to vitamin E were issued in 2003: "Some scientific evidence suggests that consumption of antioxidant vitamins may reduce the risk of certain forms of cancer." In 2009, the claims became more specific, allowing that vitamin E might reduce the risk of renal, bladder and colorectal cancers, but with required mention that the evidence was deemed weak and the claimed benefits highly unlikely. A petition to add brain, cervical, gastric and lung cancers was rejected. A further revision, May 2012, allowed that vitamin E may reduce risk of renal, bladder and colorectal cancers, with a more concise qualifier sentence added: "FDA has concluded that there is very little scientific evidence for this claim." Any company product label making the cancer claims has to include a qualifier sentence. Cataracts A review measured serum tocopherol and reported higher serum concentration was associated with a 23% reduction in relative risk of age-related cataracts (ARC), with the effect due to differences in nuclear cataract rather than cortical or posterior subcapsular cataract. In contrast, meta-analyses reporting on clinical trials of alpha-tocopherol supplementation reported no statistically significant change to risk of ARC compared to placebo. Cardiovascular diseases In a 2022 update of an earlier report, the United States Preventive Services Task Force recommended against the use of vitamin E supplements for the prevention of cardiovascular disease or cancer, concluding there was insufficient evidence to assess the balance of benefits and harms, yet also concluding with moderate certainty that there is no net benefit of supplementation. Research on the effects of vitamin E on cardiovascular disease has produced conflicting results. In theory, oxidative modification of LDL-cholesterol promotes blockages in coronary arteries that lead to atherosclerosis and heart attacks, so vitamin E functioning as an antioxidant would reduce oxidized cholesterol and lower risk of cardiovascular disease. 
Vitamin E status has also been implicated in the maintenance of normal endothelial cell function of cells lining the inner surface of arteries, anti-inflammatory activity and inhibition of platelet adhesion and aggregation. An inverse relation has been observed between coronary heart disease and the consumption of foods high in vitamin E, and also higher serum concentration of alpha-tocopherol. The problem with observational studies is that these cannot confirm a relation between the lower risk of coronary heart disease and vitamin E consumption: diets higher in vitamin E may also be higher in other, unidentified components that promote heart health, or lower in diet components detrimental to heart health, or people choosing such diets may be making other healthy lifestyle choices. A meta-analysis of randomized clinical trials (RCTs) reported that when consumed without any other antioxidant nutrient, the relative risk of heart attack was reduced by 18%. However, two large trials that were incorporated into the meta-analysis either did not show any benefit for heart attack, stroke, coronary mortality or all-cause mortality, or else showed a higher risk of heart failure in the alpha-tocopherol group. Vitamin E supplementation does not reduce the incidence of ischemic or hemorrhagic stroke. However, supplementation of vitamin E with other antioxidants reduced the risk of ischemic stroke by 9% while increasing the risk of hemorrhagic stroke by 22%.

Denial of cardiovascular health claims

In 2001, the U.S. Food and Drug Administration rejected proposed health claims for vitamin E and cardiovascular health. The U.S. National Institutes of Health reviewed literature published up to 2008 and concluded "In general, clinical trials have not provided evidence that routine use of vitamin E supplements prevents cardiovascular disease or reduces its morbidity and mortality." The European Food Safety Authority (EFSA) reviews proposed health claims for the European Union countries. In 2010, the EFSA reviewed and rejected claims that a cause and effect relationship has been established between the dietary intake of vitamin E and maintenance of normal cardiac function or of normal blood circulation.

Nonalcoholic fatty liver disease

Supplemental vitamin E significantly reduced elevated liver enzymes, steatosis, inflammation and fibrosis, suggesting that the vitamin may be useful for treatment of nonalcoholic fatty liver disease (NAFLD) and the more extreme subset known as nonalcoholic steatohepatitis (NASH) in adults, but not in children.

Exercise recovery

In healthy adults, after exercise, vitamin E was shown not to have any benefits for post-exercise recovery, as measured by muscle soreness and muscle strength, or measured by indicators for inflammation or muscle damage, such as interleukin-6 and creatine kinase.

Parkinson's disease

For Parkinson's disease, there is an observed inverse correlation with dietary vitamin E, but no confirming evidence from placebo-controlled clinical trials.

Pregnancy

Supplementation with a combination of vitamins E and C during pregnancy is not recommended by the World Health Organization. A Cochrane review concluded there was no support for the combination reducing risk of stillbirth, neonatal death, preterm birth, preeclampsia, or any other maternal or infant outcomes, either in healthy women or those considered at risk for pregnancy complications.
Topical applications There is widespread use of tocopheryl acetate in skincare and wound-treatment products as a topical medication, with claims for improved wound healing and reduced scar tissue, but reviews have repeatedly concluded that there is insufficient evidence to support these claims. There are also reports of allergic contact dermatitis from use of vitamin-E derivatives such as tocopheryl linoleate and tocopheryl acetate in skin care products. Vaping-associated lung injury The US Centers for Disease Control and Prevention (CDC) stated in February 2020 that previous research suggested inhaled vitamin E acetate (α-tocopheryl acetate) may interfere with normal lung functioning. In September 2019, the US Food and Drug Administration announced that vape liquids linked to the recent vaping-related lung disease outbreak in the United States had tested positive for vitamin E acetate, which had been used as a thickening agent by illicit THC vape cartridge manufacturers. By November 2019, the CDC had identified vitamin E acetate as a very strong culprit of concern in the vaping-related illnesses, but had not ruled out other chemicals or toxicants as possible causes. These findings were based on fluid samples from the lungs of people with vaping-associated pulmonary injury. Pyrolysis of vitamin E acetate produces exceptionally toxic ketene gas, along with carcinogenic alkenes and benzene. History Vitamin E was discovered in 1922 by Herbert McLean Evans and Katharine Scott Bishop and first isolated in a pure form by Evans and Gladys Anderson Emerson in 1935 at the University of California, Berkeley. Because the vitamin activity was first identified as a dietary fertility factor in rats, it was given the name "tocopherol" from the Greek words "τόκος" [tókos, birth] and "φέρειν" [phérein, to bear or carry], meaning in sum "to carry a pregnancy," with the ending "-ol" signifying its status as a chemical alcohol. George M. Calhoun, Professor of Greek at the University of California, was credited with helping with the naming process. Erhard Fernholz elucidated its structure in 1938, and shortly afterward that same year Paul Karrer and his team first synthesized it. Nearly 50 years after the discovery of vitamin E, an editorial in the Journal of the American Medical Association titled "Vitamin in search of a disease" read in part "...research revealed many of the vitamin's secrets, but no certain therapeutic use and no definite deficiency disease in man." In the animal experiments that led to its discovery, vitamin E had been a requirement for successful pregnancy, but no benefits were observed for women prone to miscarriage. Evidence for vascular health was characterized as unconvincing. The editorial closed with mention of some preliminary human evidence for protection against hemolytic anemia in young children. A role for vitamin E in coronary heart disease was first proposed in 1946 by Evan Shute and colleagues. More cardiovascular work from the same research group followed, including a proposal that megadoses of vitamin E could slow down and even reverse the development of atherosclerosis. Subsequent research showed no association between vitamin E supplementation and cardiovascular events such as nonfatal stroke or myocardial infarction, or cardiovascular mortality. There is a long history of belief that topical application of vitamin E-containing oil benefits burn and wound healing. This belief persists even though scientific reviews have refuted the claim. 
The role of vitamin E in infant nutrition has a long research history. From 1949 onward there were trials with premature infants suggesting that oral alpha-tocopherol was protective against edema, intracranial hemorrhage, hemolytic anemia and retrolental fibroplasia. A more recent review concluded that vitamin E supplementation in preterm infants reduced the risk of intracranial hemorrhage and retinopathy, but noted an increased risk of sepsis.
Biology and health sciences
Vitamins
Health
54110
https://en.wikipedia.org/wiki/Vitamin%20B6
Vitamin B6
Vitamin B6 is one of the B vitamins, and is an essential nutrient for humans. The term refers to a group of six chemically similar compounds, i.e., "vitamers", which can be interconverted in biological systems. Its active form, pyridoxal 5′-phosphate, serves as a coenzyme in more than 140 enzyme reactions in amino acid, glucose, and lipid metabolism. Plants synthesize pyridoxine as a means of protection from the UV-B radiation found in sunlight and for the role it plays in the synthesis of chlorophyll. Animals cannot synthesize any of the various forms of the vitamin, and hence must obtain it via diet, either of plants, or of other animals. There is some absorption of the vitamin produced by intestinal bacteria, but this is not sufficient to meet dietary needs. For adult humans, recommendations from various countries' food regulatory agencies are in the range of 1.0 to 2.0 milligrams (mg) per day. These same agencies also recognize ill effects from intakes that are too high, and so set safe upper limits, ranging from as low as 12 mg/day to as high as 100 mg/day depending on the country. Beef, pork, fowl and fish are generally good sources; dairy, eggs, mollusks and crustaceans also contain vitamin B6, but at lower levels. There is enough in a wide variety of plant foods so that a vegetarian or vegan diet does not put consumers at risk for deficiency. Dietary deficiency is rare. Classic clinical symptoms include rash and inflammation around the mouth and eyes, plus neurological effects that include drowsiness and peripheral neuropathy affecting sensory and motor nerves in the hands and feet. In addition to dietary shortfall, deficiency can be the result of anti-vitamin drugs. There are also rare genetic defects that can trigger vitamin B6 deficiency-dependent epileptic seizures in infants. These are responsive to pyridoxal 5'-phosphate therapy. Definition Vitamin B6 is a water-soluble vitamin, one of the B vitamins. The vitamin actually comprises a group of six chemically related compounds, i.e., vitamers, that all contain a pyridine ring as their core. These are pyridoxine, pyridoxal, pyridoxamine, and their respective phosphorylated derivatives pyridoxine 5'-phosphate, pyridoxal 5'-phosphate and pyridoxamine 5'-phosphate. Pyridoxal 5'-phosphate has the highest biological activity, but the others are convertible to that form. Vitamin B6 serves as a co-factor in more than 140 cellular reactions, mostly related to amino acid biosynthesis and catabolism, but is also involved in fatty acid biosynthesis and other physiological functions. Forms Because of its chemical stability, pyridoxine hydrochloride is the form most commonly given as a vitamin B6 dietary supplement. Absorbed pyridoxine (PN) is converted to pyridoxamine 5'-phosphate (PMP) by the enzyme pyridoxal kinase, with PMP further converted to pyridoxal 5'-phosphate (PLP), the metabolically active form, by the enzymes pyridoxamine-phosphate transaminase or pyridoxine 5'-phosphate oxidase, the latter of which also catalyzes the conversion of pyridoxine 5′-phosphate (PNP) to PLP. Pyridoxine 5'-phosphate oxidase is dependent on flavin mononucleotide (FMN) as a cofactor produced from riboflavin (vitamin B2). For degradation, in a non-reversible reaction, PLP is catabolized to 4-pyridoxic acid, which is excreted in urine. 
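The chain of conversions just described can be summarized as a small lookup of substrate-product pairs mapped to the enzymes named above. The following is a minimal sketch, not from the source; the dictionary layout and printed format are illustrative assumptions, and the entries restate only the conversions given in the text.

```python
# Minimal sketch (not from the source): the vitamin B6 interconversions
# described above, expressed as a mapping of (substrate, product) pairs
# to the enzyme named in the text. The data-structure layout is an
# illustrative assumption, not an established nomenclature.

B6_CONVERSIONS = {
    ("pyridoxine (PN)", "pyridoxamine 5'-phosphate (PMP)"): "pyridoxal kinase",
    ("pyridoxamine 5'-phosphate (PMP)", "pyridoxal 5'-phosphate (PLP)"):
        "pyridoxamine-phosphate transaminase or pyridoxine 5'-phosphate oxidase",
    ("pyridoxine 5'-phosphate (PNP)", "pyridoxal 5'-phosphate (PLP)"):
        "pyridoxine 5'-phosphate oxidase",
    # Non-reversible degradation step; the product is excreted in urine.
    ("pyridoxal 5'-phosphate (PLP)", "4-pyridoxic acid"): "catabolism",
}

for (substrate, product), enzyme in B6_CONVERSIONS.items():
    print(f"{substrate} -> {product}  [{enzyme}]")
```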
Synthesis Biosynthesis Two biosynthetic pathways for PLP are currently known: one requires deoxyxylulose 5-phosphate (DXP), while the other does not, hence they are known as DXP-dependent and DXP-independent. These pathways have been studied extensively in Escherichia coli and Bacillus subtilis, respectively. Despite the disparity in the starting compounds and the different number of steps required, the two pathways possess many commonalities. Commercial synthesis The starting material is either the amino acid alanine or propionic acid converted into alanine via halogenation and amination. Then, the procedure accomplishes the conversion of the amino acid into pyridoxine through the formation of an oxazole intermediate followed by a Diels–Alder reaction, with the entire process referred to as the "oxazole method". The product used in dietary supplements and food fortification is pyridoxine hydrochloride, the chemically stable hydrochloride salt of pyridoxine. Pyridoxine is converted in the liver into the metabolically active coenzyme form pyridoxal 5'-phosphate. At present, while the industry mainly utilizes the oxazole method, there is research exploring means of using less toxic and dangerous reagents in the process. Fermentative bacterial biosynthesis methods are also being explored, but are not yet scaled up for commercial production. Functions PLP is involved in many aspects of macronutrient metabolism, neurotransmitter synthesis, histamine synthesis, hemoglobin synthesis and function, and gene expression. PLP generally serves as a coenzyme (cofactor) for many reactions including decarboxylation, transamination, racemization, elimination, replacement, and beta-group interconversion. Amino acid metabolism Transaminases break down amino acids with PLP as a cofactor. The proper activity of these enzymes is crucial for the process of moving amine groups from one amino acid to another. To function as a transaminase coenzyme, PLP, bound to a lysine residue of the enzyme, binds to a free amino acid via formation of a Schiff base. The process then dissociates the amine group from the amino acid, releasing a keto acid, then transfers the amine group to a different keto acid to create a new amino acid. Serine racemase, which synthesizes the neuromodulator D-serine from its enantiomer, is a PLP-dependent enzyme. PLP is a coenzyme needed for the proper function of the enzymes cystathionine synthase and cystathionase. These enzymes catalyze reactions in the catabolism of methionine. Part of this pathway (the reaction catalyzed by cystathionase) also produces cysteine. Selenomethionine is the primary dietary form of selenium. PLP is needed as a cofactor for the enzymes that allow selenium to be used from the dietary form. PLP also plays a cofactor role in releasing selenium from selenohomocysteine to produce hydrogen selenide, which can then be used to incorporate selenium into selenoproteins. PLP is required for the conversion of tryptophan to niacin, so low vitamin B6 status impairs this conversion. Neurotransmitters PLP is a cofactor in the biosynthesis of five important neurotransmitters: serotonin, dopamine, epinephrine, norepinephrine, and gamma-aminobutyric acid. Glucose metabolism PLP is a required coenzyme of glycogen phosphorylase, the enzyme necessary for glycogenolysis. Glycogen serves as a carbohydrate storage molecule, primarily found in muscle, liver and brain. Its breakdown frees up glucose for energy. 
PLP is also a coenzyme for transamination reactions that are essential for providing amino acids as substrates for gluconeogenesis, the biosynthesis of glucose. Lipid metabolism PLP is an essential component of enzymes that facilitate the biosynthesis of sphingolipids. In particular, the synthesis of ceramide requires PLP. In this reaction, serine is decarboxylated and combined with palmitoyl-CoA to form sphinganine, which is combined with a fatty acyl-CoA to form dihydroceramide. This compound is then further desaturated to form ceramide. In addition, the breakdown of sphingolipids is also dependent on vitamin B6 because sphingosine-1-phosphate lyase, the enzyme responsible for breaking down sphingosine-1-phosphate, is also PLP-dependent. Hemoglobin synthesis and function PLP aids in the synthesis of hemoglobin by serving as a coenzyme for the enzyme aminolevulinic acid synthase. It also binds to two sites on hemoglobin to enhance the oxygen binding of hemoglobin. Gene expression PLP has been implicated in increasing or decreasing the expression of certain genes. Increased intracellular levels of the vitamin lead to a decrease in the transcription of glucocorticoids. Vitamin B6 deficiency leads to increased expression of albumin mRNA. Also, PLP influences expression of glycoprotein IIb by interacting with various transcription factors; the result is inhibition of platelet aggregation. In plants Plant synthesis of vitamin B6 contributes to protection from sunlight. Ultraviolet-B radiation (UV-B) from sunlight stimulates plant growth, but in high amounts can increase production of tissue-damaging reactive oxygen species (ROS), i.e., oxidants. Using Arabidopsis thaliana (common name: thale cress), researchers demonstrated that UV-B exposure increased pyridoxine biosynthesis, but in a mutant variety, pyridoxine biosynthesis capacity was not inducible, and as a consequence, ROS levels, lipid peroxidation, and cell proteins associated with tissue damage were all elevated. Biosynthesis of chlorophyll depends on aminolevulinic acid synthase, a PLP-dependent enzyme that uses succinyl-CoA and glycine to generate aminolevulinic acid, a chlorophyll precursor. In addition, plant mutants with severely limited capacity to synthesize vitamin B6 have stunted root growth, because synthesis of plant hormones such as auxin requires the vitamin as an enzyme cofactor. Medical uses Isoniazid is an antibiotic used for the treatment of tuberculosis. A common side effect is numbness in the hands and feet, also known as peripheral neuropathy. Co-treatment with vitamin B6 alleviates the numbness. Overconsumption of seeds from Ginkgo biloba can deplete vitamin B6, because ginkgotoxin is an anti-vitamin (vitamin antagonist). Symptoms include vomiting and generalized convulsions. Ginkgo seed poisoning can be treated with vitamin B6. Dietary recommendations Regulatory agencies differ widely in what they consider tolerable upper intake levels (ULs). The European Food Safety Authority (EFSA) adult UL for vitamin B6 is set at 12 mg/day versus 100 mg/day for the United States. The US National Academy of Medicine updated Dietary Reference Intakes for many vitamins in 1998. Recommended Dietary Allowances (RDAs), expressed as milligrams per day, increase with age from 1.2 to 1.5 mg/day for women and from 1.3 to 1.7 mg/day for men. The RDA for pregnancy is 1.9 mg/day, for lactation, 2.0 mg/day. For children ages 1–13 years the RDA increases with age from 0.5 to 1.0 mg/day. 
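As an illustration of how these reference values can be applied, the following is a minimal sketch, not from the source, that checks an adult's daily vitamin B6 intake against the US adult RDAs quoted above and the US and EFSA adult upper limits mentioned earlier. The function name, the use of the lower end of the RDA range, and the example intakes are assumptions for illustration only.

```python
# Minimal sketch (not from the source): comparing a daily vitamin B6 intake
# against the US adult RDAs and the US/EFSA adult upper limits quoted above.
# Thresholds are copied from the text; everything else is illustrative.

US_ADULT_RDA_MG = {"women": (1.2, 1.5), "men": (1.3, 1.7)}  # range increases with age
US_ADULT_UL_MG = 100.0   # US adult upper limit, mg/day
EFSA_ADULT_UL_MG = 12.0  # EFSA adult upper limit, mg/day

def assess_b6_intake(intake_mg_per_day: float, sex: str) -> str:
    """Classify an adult's daily vitamin B6 intake against reference values
    (simplification: only the lower bound of the age-dependent RDA is used)."""
    low, _high = US_ADULT_RDA_MG[sex]
    if intake_mg_per_day < low:
        return "below the US adult RDA range"
    if intake_mg_per_day > US_ADULT_UL_MG:
        return "above the US upper limit"
    if intake_mg_per_day > EFSA_ADULT_UL_MG:
        return "within the US upper limit but above the EFSA upper limit"
    return "within the RDA-to-UL range for both agencies"

print(assess_b6_intake(1.4, "men"))    # typical dietary intake
print(assess_b6_intake(100.0, "men"))  # common B6-only supplement dose
```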
As for safety, ULs for vitamins and minerals are identified when evidence is sufficient. In the case of vitamin B6, the US-established adult UL was set at 100 mg/day. The EFSA refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA. For women and men ages 15 and older the PRI is set at 1.6 and 1.7 mg/day, respectively; for pregnancy 1.8 mg/day, for lactation 1.7 mg/day. For children ages 1–14 years the PRIs increase with age from 0.6 to 1.4 mg/day. The EFSA also reviewed the safety question and in 2023 set an upper limit for vitamin B6 of 12 mg/day for adults, with lower amounts ranging from 2.2 to 10.7 mg/day for infants and children, depending on age. This replaced the adult UL set in 2008 at 25 mg/day. The Japanese Ministry of Health, Labour and Welfare updated its vitamin and mineral recommendations in 2015. The adult RDAs are 1.2 mg/day for women and 1.4 mg/day for men. The RDA for pregnancy is 1.4 mg/day and for lactation 1.5 mg/day. For children ages 1–17 years the RDA increases with age from 0.5 to 1.5 mg/day. The adult UL was set at 40–45 mg/day for women and 50–60 mg/day for men, with the lower values in those ranges for adults over 70 years of age. Safety Adverse effects have been documented from vitamin B6 dietary supplements, but never from food sources. Even though it is a water-soluble vitamin and is excreted in the urine, doses of pyridoxine in excess of the dietary upper limit (UL) over long periods cause painful and ultimately irreversible neurological problems. The primary symptoms are pain and numbness of the extremities. In severe cases, motor neuropathy may occur with "slowing of motor conduction velocities, prolonged F wave latencies, and prolonged sensory latencies in both lower extremities", causing difficulty in walking. Sensory neuropathy typically develops at doses of pyridoxine in excess of 1,000 mg per day, but adverse effects can occur with much less, so intakes over 200 mg/day are not considered safe. Trials with amounts equal to or less than 200 mg/day established 200 mg/day as a "no-observed-adverse-effect level" (NOAEL), meaning the highest amount at which no adverse effects were observed. This was divided by two to allow for people who might be extra sensitive to the vitamin, referred to as an "uncertainty factor", resulting in the aforementioned adult UL of 100 mg/day set for the United States. As noted above, in 2023 the European Food Safety Authority set an adult UL at 12 mg/day. While Australia has set an upper limit of 50 mg/day, the Therapeutic Goods Administration requires a label warning about peripheral neuropathy if the daily dose is predicted to exceed 10 mg/day. Labeling For US food and dietary supplement labeling purposes the amount in a serving is expressed as a percent of Daily Value. For vitamin B6 labeling purposes 100% of the Daily Value was 2.0 mg, but as of May 27, 2016, it was revised to 1.7 mg to bring it into agreement with the adult RDA. A table of the old and new adult daily values is provided at Reference Daily Intake. Sources Bacteria residing in the large intestine are known to synthesize B-vitamins, including B6, but the amounts are not sufficient to meet host requirements, in part because the vitamins are competitively taken up by non-synthesizing bacteria. Vitamin B6 is found in a wide variety of foods. In general, meat, fish and fowl are good sources, but dairy foods and eggs are not. Crustaceans and mollusks contain about 0.1 mg/100 grams. 
Fruits (apples, oranges, pears) contain less than 0.1 mg/100 g. Bioavailability from a mixed diet (containing animal- and plant-sourced foods) is estimated at 75%: higher for PLP from meat, fish and fowl, and lower from plants, as the vitamin in plant foods is mostly in the form of pyridoxine glucoside, which has approximately half the bioavailability of animal-sourced B6 because removal of the glucoside by intestinal cells is not 100% efficient. Given lower amounts and lower bioavailability of the vitamin from plants, there was a concern that a vegetarian or vegan diet could cause a vitamin deficiency state. However, the results from a population-based survey conducted in the U.S. demonstrated that despite a lower vitamin intake, serum PLP was not significantly different between meat-eaters and vegetarians, suggesting that a vegetarian diet does not pose a risk for vitamin B6 deficiency. Cooking, storage, and processing losses vary, and in some foods may be more than 50% depending on the form of vitamin present in the food. Plant foods lose less during processing, as they contain pyridoxine, which is more stable than the pyridoxal or pyridoxamine forms found in animal-sourced foods. For example, milk can lose 30–70% of its vitamin B6 content when dried. The vitamin is found in the germ and aleurone layer of grains, so there is more in grains from which these layers have not been removed, for example more in whole wheat bread than in white wheat bread, and more in brown rice than in white rice. Fortification As of 2024, eighteen countries require food fortification of wheat flour, maize flour or rice with vitamin B6 as pyridoxine hydrochloride. Most of these are in southeast Africa or Central America. The amounts stipulated range from 3.0 to 6.5 mg/kg. An additional six countries, including India, have a voluntary fortification program. India stipulates 2.0 mg/kg. Dietary supplements In the US, multi-vitamin/mineral products typically contain 2 to 4 mg of vitamin B6 per daily serving as pyridoxine hydrochloride. However, many US dietary supplement companies also market a B6-only dietary supplement with 100 mg per daily serving. While the US National Academy of Medicine set an adult safety UL at 100 mg/day in 1998, in 2023 the European Food Safety Authority set its UL at 12 mg/day. Health claims The Japanese Ministry of Health, Labour and Welfare (MHLW) set up the 'Foods for Specified Health Uses' (FOSHU) regulatory system in 1991 to individually approve the statements made on food labels concerning the effects of foods on the human body. The regulatory range of FOSHU was later broadened to allow for the certification of capsules and tablets. In 2001, MHLW enacted a new regulatory system, 'Foods with Health Claims' (FHC), which consists of the existing FOSHU system and the newly established 'Foods with Nutrient Function Claims' (FNFC), under which claims were approved for any product containing a specified amount per serving of 12 vitamins, including vitamin B6, and two minerals. To make a health claim based on a food's vitamin B6 content, the amount per serving must be in the range of 0.3–25 mg. The allowed claim is: "Vitamin B6 is a nutrient that helps produce energy from protein and helps maintain healthy skin and mucous membranes." 
In 2010, the European Food Safety Authority (EFSA) published a review of proposed health claims for vitamin B6, disallowing claims for bone, teeth, hair, skin and nails, and allowing claims that the vitamin contributes to normal homocysteine metabolism, normal energy-yielding metabolism, normal psychological function, reduced tiredness and fatigue, and normal cysteine synthesis. The US Food and Drug Administration (FDA) has several processes for permitting health claims on food and dietary supplement labels. There are no FDA-approved Health Claims or Qualified Health Claims for vitamin B6. Structure/Function Claims can be made without FDA review or approval as long as there is some credible supporting science. Examples for this vitamin are "Helps support nervous system function" and "Supports healthy homocysteine metabolism." Absorption, metabolism and excretion Vitamin B6 is absorbed in the jejunum of the small intestine by passive diffusion. Even extremely large amounts are well absorbed. Absorption of the phosphate forms involves their dephosphorylation catalyzed by the enzyme alkaline phosphatase. Most of the vitamin is taken up by the liver. There, the dephosphorylated vitamins are converted to the phosphorylated PLP, PNP and PMP, with the latter two converted to PLP. In the liver, PLP is bound to proteins, primarily albumin. The PLP-albumin complex is what is released by the liver to circulate in plasma. Protein-binding capacity is the limiting factor for vitamin storage. Total body stores, the majority in muscle, with a lesser amount in liver, have been estimated to be in the range of 61 to 167 mg. Enzymatic processes utilize PLP as a phosphate-donating cofactor. PLP is restored via a salvage pathway that requires three key enzymes: pyridoxal kinase, pyridoxine 5'-phosphate oxidase, and phosphatases. Inborn errors in the salvage enzymes are known to cause inadequate levels of PLP in the cell, particularly in neuronal cells. The resulting PLP deficiency is known to cause or be implicated in several pathologies, most notably infant epileptic seizures. Estimates of the half-life of vitamin B6 vary: one source suggests that the half-life of pyridoxine is up to 20 days, while another indicates that the half-life of vitamin B6 is in the range of 25 to 33 days; in either case, it is measured in weeks. The end-product of vitamin B6 catabolism is 4-pyridoxic acid, which makes up about half of the B6 compounds in urine. 4-Pyridoxic acid is formed by the action of aldehyde oxidase in the liver. Amounts excreted increase within 1–2 weeks with vitamin supplementation and decrease as rapidly after supplementation ceases. Other vitamin forms excreted in the urine include pyridoxal, pyridoxamine and pyridoxine, and their phosphates. When large doses of pyridoxine are given orally, the proportion of these other forms increases. A small amount of vitamin B6 is also excreted in the feces. This may be a combination of unabsorbed vitamin and what was synthesized by large intestine microbiota. 
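To put the half-life figures above in perspective, the following is a minimal sketch, not from the source, assuming simple first-order (single-compartment) elimination. It shows how the 25–33 day half-life range and the 61–167 mg body-store estimate quoted above translate into a slow depletion of stores once intake stops; the function name and the single-compartment assumption are illustrative.

```python
# Minimal sketch (not from the source): first-order washout of vitamin B6
# body stores using the 25-33 day half-life and 61-167 mg store estimates
# quoted above. The single-compartment model is an assumption.

def remaining_store_mg(initial_store_mg: float, half_life_days: float, days: float) -> float:
    """Body store left after `days` with no intake, assuming exponential decay."""
    return initial_store_mg * 0.5 ** (days / half_life_days)

for half_life in (25, 33):
    for store in (61, 167):
        left = remaining_store_mg(store, half_life, days=60)
        print(f"half-life {half_life} d, store {store} mg: "
              f"{left:.0f} mg left after 60 days")
```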
Deficiency Signs and symptoms The classic clinical syndrome for vitamin B6 deficiency is a seborrheic dermatitis-like eruption, atrophic glossitis with ulceration, angular cheilitis, conjunctivitis, intertrigo, abnormal electroencephalograms, microcytic anemia (due to impaired heme synthesis), and neurological symptoms of somnolence, confusion, depression, and neuropathy (due to impaired sphingosine synthesis). In infants, a deficiency in vitamin B6 can lead to irritability, abnormally acute hearing, and convulsive seizures. Less severe cases present with metabolic disease associated with insufficient activity of the coenzyme pyridoxal 5'-phosphate (PLP). The most prominent of these metabolic lesions is impaired tryptophan–niacin conversion. This can be detected based on urinary excretion of xanthurenic acid after an oral tryptophan load. Vitamin B6 deficiency can also result in impaired transsulfuration of methionine to cysteine. The PLP-dependent transaminases and glycogen phosphorylase provide the vitamin with its role in gluconeogenesis, so deprivation of vitamin B6 results in impaired glucose tolerance. Diagnosis The assessment of vitamin B6 status is essential, as the clinical signs and symptoms in less severe cases are not specific. The three biochemical tests most widely used are plasma PLP concentrations, the activation coefficient for the erythrocyte enzyme aspartate aminotransferase, and the urinary excretion of vitamin B6 degradation products, specifically urinary 4-pyridoxic acid (PA). Of these, plasma PLP is probably the best single measure, because it reflects tissue stores. Plasma PLP of less than 10 nmol/L is indicative of vitamin B6 deficiency. A PLP concentration greater than 20 nmol/L has been chosen as a level of adequacy for establishing Estimated Average Requirements and Recommended Daily Allowances in the USA. Urinary PA is also an indicator of vitamin B6 deficiency; a level of less than 3.0 mmol/day is suggestive of vitamin B6 deficiency. Other methods of measurement, including UV spectrometric, spectrofluorimetric, mass spectrometric, thin-layer and high-performance liquid chromatographic, electrophoretic, electrochemical, and enzymatic, have been developed. The classic clinical symptoms for vitamin B6 deficiency are rare, even in developing countries. A handful of cases were seen between 1952 and 1953, particularly in the United States, occurring in a small percentage of infants who were fed a formula lacking in pyridoxine. Causes A deficiency of vitamin B6 alone is relatively uncommon; it often occurs in association with deficiencies of other B-complex vitamins. Evidence exists for decreased levels of vitamin B6 in women with type 1 diabetes and in patients with systemic inflammation, liver disease, rheumatoid arthritis, or HIV infection. Use of oral contraceptives and treatment with certain anticonvulsants, isoniazid, cycloserine, penicillamine, and hydrocortisone negatively impact vitamin B6 status. Hemodialysis reduces vitamin B6 plasma levels. Overconsumption of Ginkgo biloba seeds can also deplete vitamin B6. Genetic defects Genetically confirmed diseases affecting vitamin B6 metabolism (ALDH7A1 deficiency, pyridoxine-5'-phosphate oxidase deficiency, PLP binding protein deficiency, hyperprolinaemia type II and hypophosphatasia) can trigger vitamin B6 deficiency-dependent epileptic seizures in infants. These are responsive to pyridoxal 5'-phosphate therapy. History An overview of the history was published in 2012. 
In 1934, the Hungarian physician Paul György discovered a substance that was able to cure a skin disease in rats (dermatitis acrodynia). He named this substance vitamin B6, as numbering of the B vitamins was chronological, and pantothenic acid had been assigned vitamin B5 in 1931. In 1938, Richard Kuhn was awarded the Nobel Prize in Chemistry for his work on carotenoids and vitamins, specifically B2 and B6. Also in 1938, Samuel Lepkovsky isolated vitamin B6 from rice bran. A year later, Stanton A. Harris and Karl August Folkers determined the structure of pyridoxine and reported success in chemical synthesis, and then in 1942 Esmond Emerson Snell developed a microbiological growth assay that led to the characterization of pyridoxamine, the aminated product of pyridoxine, and pyridoxal, the formyl derivative of pyridoxine. Further studies showed that pyridoxal, pyridoxamine, and pyridoxine have largely equal activity in animals and owe their vitamin activity to the ability of the organism to convert them into the enzymatically active form pyridoxal 5'-phosphate. Following a recommendation of IUPAC-IUB in 1973, vitamin B6 is the official name for all 2-methyl-3-hydroxy-5-hydroxymethylpyridine derivatives exhibiting the biological activity of pyridoxine. Because these related compounds have the same effect, the word "pyridoxine" should not be used as a synonym for vitamin B6. Research Observational studies suggested an inverse correlation between higher intake of vitamin B6 and the risk of all cancers, with the strongest evidence for gastrointestinal cancers. However, evidence from a review of randomized clinical trials did not support a protective effect. The authors noted that high B6 intake may be an indicator of higher consumption of other dietary protective micronutrients. A review and two observational trials addressing lung cancer risk reported that serum vitamin B6 was lower in people with lung cancer compared to people without lung cancer, but did not incorporate any intervention or prevention trials. According to a prospective cohort study, the long-term use of vitamin B6 from individual supplement sources at greater than 20 mg per day, which is more than ten times the adult male RDA of 1.7 mg/day, was associated with an increased risk for lung cancer among men. Smoking further elevated this risk. However, a more recent review of this study suggested that a causal relationship between supplemental vitamin B6 and an increased lung cancer risk cannot be confirmed yet. For coronary heart disease, a meta-analysis reported lower relative risk for a 0.5 mg/day increment in dietary vitamin B6 intake. As of 2021, there were no published reviews of randomized clinical trials for coronary heart disease or cardiovascular disease. In reviews of observational and intervention trials, neither higher vitamin B6 concentrations nor treatment showed any significant benefit on cognition and dementia risk. Low dietary vitamin B6 correlated with a higher risk of depression in women but not in men. When treatment trials were reviewed, no meaningful treatment effect for depression was reported, but a subset of trials in pre-menopausal women suggested a benefit, with a recommendation that more research was needed. Several trials in which children diagnosed with autism spectrum disorder (ASD) were treated with high-dose vitamin B6 and magnesium found no effect on the severity of ASD symptoms.
Biology and health sciences
Vitamins
Health
54114
https://en.wikipedia.org/wiki/Vitamin%20A
Vitamin A
Vitamin A is a fat-soluble vitamin that is an essential nutrient. The term "vitamin A" encompasses a group of chemically related organic compounds that includes retinol, retinyl esters, and several provitamin (precursor) carotenoids, most notably β-carotene (beta-carotene). Vitamin A has multiple functions: supporting growth during embryo development, maintaining the immune system, and maintaining healthy vision. For aiding vision specifically, it combines with the protein opsin to form rhodopsin, the light-absorbing molecule necessary for both low-light (scotopic vision) and color vision. Vitamin A occurs as two principal forms in foods: A) retinoids, found in animal-sourced foods, either as retinol or bound to a fatty acid to become a retinyl ester, and B) the carotenoids α-carotene (alpha-carotene), β-carotene, γ-carotene (gamma-carotene), and the xanthophyll beta-cryptoxanthin (all of which contain β-ionone rings) that function as provitamin A in herbivore and omnivore animals that possess the enzymes that cleave and convert provitamin carotenoids to retinol. Some carnivore species lack this enzyme. The other carotenoids do not have retinoid activity. Dietary retinol is absorbed from the digestive tract via passive diffusion. Unlike retinol, β-carotene is taken up by enterocytes by the membrane transporter protein scavenger receptor B1 (SCARB1), which is upregulated in times of vitamin A deficiency (VAD). Retinol is stored in lipid droplets in the liver. A high capacity for long-term storage of retinol means that well-nourished humans can go months on a vitamin A-deficient diet, while maintaining blood levels in the normal range. Only when the liver stores are nearly depleted will signs and symptoms of deficiency show. Retinol is reversibly converted to retinal, then irreversibly to retinoic acid, which activates hundreds of genes. Vitamin A deficiency is common in developing countries, especially in Sub-Saharan Africa and Southeast Asia. Deficiency can occur at any age but is most common in pre-school age children and pregnant women, the latter due to a need to transfer retinol to the fetus. Vitamin A deficiency is estimated to affect approximately one-third of children under the age of five around the world, resulting in hundreds of thousands of cases of blindness and deaths from childhood diseases because of immune system failure. Reversible night blindness is an early indicator of low vitamin A status. Plasma retinol is used as a biomarker to confirm vitamin A deficiency. Breast milk retinol can indicate a deficiency in nursing mothers. Neither of these measures indicates the status of liver reserves. The European Union and various countries have set recommendations for dietary intake, and upper limits for safe intake. Vitamin A toxicity, also referred to as hypervitaminosis A, occurs when there is too much vitamin A accumulating in the body. Symptoms may include nervous system effects, liver abnormalities, fatigue, muscle weakness, bone and skin changes, and others. The adverse effects of both acute and chronic toxicity are reversed after consumption of high dose supplements is stopped. Definition Vitamin A is a fat-soluble vitamin, a category that also includes vitamins D, E and K. The vitamin encompasses several chemically related naturally occurring compounds or metabolites, i.e., vitamers, that all contain a β-ionone ring. The primary dietary form is retinol, which may have a fatty acid molecule attached, creating a retinyl ester, when stored in the liver. 
Retinol, the transport and storage form of vitamin A, is interconvertible with retinal, catalyzed to retinal by retinol dehydrogenases and back to retinol by retinaldehyde reductases: retinal + NADH + H+ → retinol + NAD+; retinol + NAD+ → retinal + NADH + H+. Retinal (also known as retinaldehyde) can be irreversibly converted to all-trans-retinoic acid by the action of retinal dehydrogenase: retinal + NAD+ + H2O → retinoic acid + NADH + H+. Retinoic acid is actively transported into the cell nucleus by CRABp2, where it regulates thousands of genes by binding directly to gene targets via retinoic acid receptors. In addition to retinol, retinal and retinoic acid, there are plant-, fungi- or bacteria-sourced carotenoids which can be metabolized to retinol, and are thus vitamin A vitamers. There are also what are referred to as 2nd, 3rd and 4th generation retinoids which are not considered vitamin A vitamers because they cannot be converted to retinol, retinal or all-trans-retinoic acid. Some are prescription drugs, oral or topical, for various indications. Examples are etretinate, acitretin, adapalene, bexarotene, tazarotene and trifarotene. Absorption, metabolism and excretion Retinyl esters from animal-sourced foods (or synthesized for dietary supplements for humans and domesticated animals) are acted upon by retinyl ester hydrolases in the lumen of the small intestine to release free retinol. Retinol enters enterocytes by passive diffusion. Absorption efficiency is in the range of 70 to 90%. Humans are at risk for acute or chronic vitamin A toxicity because there are no mechanisms to suppress absorption or excrete the excess in urine. Within the cell, retinol is bound to retinol-binding protein 2 (RBP2). It is then enzymatically re-esterified by the action of lecithin retinol acyltransferase and incorporated into chylomicrons that are secreted into the lymphatic system. Unlike retinol, β-carotene is taken up by enterocytes by the membrane transporter protein scavenger receptor B1 (SCARB1). The protein is upregulated in times of vitamin A deficiency. If vitamin A status is in the normal range, SCARB1 is downregulated, reducing absorption. Also downregulated is the enzyme beta-carotene 15,15'-dioxygenase (formerly known as beta-carotene 15,15'-monooxygenase) coded for by the BCMO1 gene, responsible for symmetrically cleaving β-carotene into retinal. Absorbed β-carotene is either incorporated as such into chylomicrons or first converted to retinal and then retinol, bound to RBP2. After a meal, roughly two-thirds of the chylomicrons are taken up by the liver with the remainder delivered to peripheral tissues. Peripheral tissues also can convert chylomicron β-carotene to retinol. The capacity to store retinol in the liver means that well-nourished humans can go months on a vitamin A-deficient diet without manifesting signs and symptoms of deficiency. Two liver cell types are responsible for storage and release: hepatocytes and hepatic stellate cells (HSCs). Hepatocytes take up the lipid-rich chylomicrons, bind retinol to retinol-binding protein 4 (RBP4), and transfer the retinol-RBP4 to HSCs for storage in lipid droplets as retinyl esters. Mobilization reverses the process: retinyl ester hydrolase releases free retinol which is transferred to hepatocytes, bound to RBP4, and put into blood circulation. Other than after a meal or when consumption of large amounts exceeds liver storage capacity, more than 95% of retinol in circulation is bound to RBP4. 
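The feedback just described, in which SCARB1-mediated β-carotene uptake is upregulated during vitamin A deficiency and downregulated, but not eliminated, when status is normal, can be pictured with a toy model. The following is a minimal sketch, not from the source; the uptake fractions and function name are invented purely for illustration.

```python
# Toy model (not from the source) of the feedback described above: enterocyte
# uptake of beta-carotene via SCARB1 rises during vitamin A deficiency and is
# suppressed (but not abolished) when vitamin A status is normal.
# The uptake fractions below are assumed values for illustration only.

def beta_carotene_absorbed_mg(intake_mg: float, vitamin_a_deficient: bool) -> float:
    """Return an illustrative amount of beta-carotene taken up by enterocytes."""
    uptake_fraction = 0.6 if vitamin_a_deficient else 0.2  # assumed values
    return intake_mg * uptake_fraction

print(beta_carotene_absorbed_mg(5.0, vitamin_a_deficient=True))   # uptake upregulated
print(beta_carotene_absorbed_mg(5.0, vitamin_a_deficient=False))  # suppressed, not zero
```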
Carnivores Strict carnivores manage vitamin A differently than omnivores and herbivores. Carnivores are more tolerant of high intakes of retinol because those species have the ability to excrete retinol and retinyl esters in urine. Carnivores also have the ability to store more in the liver, due to a higher ratio of liver HSCs to hepatocytes compared to omnivores and herbivores. For humans, liver content can range from 20 to 30 μg/gram wet weight. Notoriously, polar bear liver is acutely toxic to humans because content has been reported in the range of 2,215 to 10,400 μg/g wet weight. As noted, in humans, retinol circulates bound to RBP4. Carnivores maintain retinol-RBP4 within a tight range while also having retinyl esters in circulation. Bound retinol is delivered to cells while the esters are excreted in the urine. In general, carnivore species are poor converters of ionone-containing carotenoids, and pure carnivores such as the Felidae (cats) lack the cleaving enzyme entirely. They must have retinol or retinyl esters in their diet. Herbivores Herbivores consume ionone-containing carotenoids and convert those to retinal. Some species, including cattle and horses, have measurable amounts of β-carotene circulating in the blood, and stored in body fat, creating yellow fat cells. Most species have white fat and no β-carotene in circulation. Activation and excretion In the liver and peripheral tissues of humans, retinol is reversibly converted to retinal by the action of alcohol dehydrogenases, which are also responsible for the conversion of ethanol to acetaldehyde. Retinal is irreversibly oxidized to retinoic acid (RA) by the action of aldehyde dehydrogenases. RA regulates the activation or deactivation of genes. The oxidative degradation of RA is induced by RA itself: its presence triggers its removal, making for a short-acting gene transcription signal. This deactivation is mediated by a cytochrome P450 (CYP) enzyme system, specifically enzymes CYP26A1, CYP26B1 and CYP26C1. CYP26A1 is the predominant form in the human liver; all other human adult tissues contain higher levels of CYP26B1. CYP26C1 is expressed mainly during embryonic development. All three convert retinoic acid into 4-oxo-RA, 4-OH-RA and 18-OH-RA. Glucuronic acid forms water-soluble glucuronide conjugates with the oxidized metabolites, which are then excreted in urine and feces. Metabolic functions Other than for vision, the metabolic functions of vitamin A are mediated by all-trans-retinoic acid (RA). The formation of RA from retinal is irreversible. To prevent accumulation of RA it is oxidized and eliminated fairly quickly, i.e., has a short half-life. Three cytochromes catalyze the oxidation of retinoic acid. The genes for CYP26A1, CYP26B1 and CYP26C1 are induced by high levels of RA, providing a self-regulating feedback loop. Vision and eye health Vitamin A status affects eye health via two separate functions. Retinal is an essential factor in rod cells and cone cells in the retina responding to light exposure by sending nerve signals to the brain. An early sign of vitamin A deficiency is night blindness. Vitamin A in the form of retinoic acid is essential to normal epithelial cell functions. Severe vitamin A deficiency, common in infants and young children in southeast Asia, causes xerophthalmia, characterized by dryness of the conjunctival epithelium and cornea. Untreated, xerophthalmia progresses to corneal ulceration and blindness. Vision The role of vitamin A in the visual cycle is specifically related to the retinal compound. 
Retinol is converted by the enzyme RPE65 within the retinal pigment epithelium into 11-cis-retinal. Within the eye, 11-cis-retinal is bound to the protein opsin to form rhodopsin in rod cells and iodopsin in cone cells. As light enters the eye, the 11-cis-retinal is isomerized to the all-trans form. The all-trans-retinal dissociates from the opsin in a series of steps called photo-bleaching. This isomerization induces a nerve signal along the optic nerve to the visual center of the brain. After separating from opsin, the all-trans-retinal is recycled and converted back to the 11-cis-retinal form by a series of enzymatic reactions, which then completes the cycle by binding to opsin to reform rhodopsin in the retina. In addition, some of the all-trans-retinal may be converted to the all-trans-retinol form and then transported with an interphotoreceptor retinol-binding protein to the retinal pigmented epithelial cells. Further esterification into all-trans-retinyl esters allows for storage of all-trans-retinol within the pigment epithelial cells to be reused when needed. It is for this reason that a deficiency in vitamin A will inhibit the reformation of rhodopsin, and will lead to one of the first symptoms, night blindness. Night blindness Vitamin A deficiency-caused night blindness is a reversible impairment of the eyes' ability to adjust to dim light. It is common in young children who have a diet inadequate in retinol and β-carotene. A process called dark adaptation typically causes an increase in photopigment amounts in response to low levels of illumination. This increases light sensitivity by up to 100,000 times compared to normal daylight conditions. Significant improvement in night vision takes place within ten minutes, but the process can take up to two hours to reach maximal effect. People expecting to work in a dark environment have worn red-tinted goggles or stayed in a red-light environment so as not to reverse the adaptation, because red light does not deplete rhodopsin, whereas yellow or green light does. Xerophthalmia and childhood blindness Xerophthalmia, caused by a severe vitamin A deficiency, is characterized by pathologic dryness of the conjunctival epithelium and cornea. The conjunctiva becomes dry, thick, and wrinkled. Indicative is the appearance of Bitot's spots, which are clumps of keratin debris that build up inside the conjunctiva. If untreated, xerophthalmia can lead to dry eye syndrome, corneal ulceration and ultimately to blindness as a result of cornea and retina damage. Although xerophthalmia is an eye-related issue, prevention (and reversal) depend on retinoic acid synthesized from retinal rather than on the 11-cis-retinal to rhodopsin cycle. Throughout southeast Asia, estimates are that more than half of children under the age of six years have subclinical vitamin A deficiency and night blindness, with progression to xerophthalmia being the leading cause of preventable childhood blindness. Estimates are that each year there are 350,000 cases of childhood blindness due to vitamin A deficiency. The causes are vitamin A deficiency during pregnancy, followed by low transfer of vitamin A during lactation and infant/child diets low in vitamin A or β-carotene. The prevalence of pre-school age children who are blind due to vitamin A deficiency is lower than expected from the incidence of new cases, because childhood vitamin A deficiency also significantly increases all-cause mortality. 
According to a 2017 Cochrane review, vitamin A deficiency, using serum retinol less than 0.70 μmol/L as a criterion, is a major public health problem affecting an estimated 190 million children under five years of age in low- and middle-income countries, primarily in Sub-Saharan Africa and Southeast Asia. In lieu of or in combination with food fortification programs, many countries have implemented public health programs in which children are periodically given very large oral doses of synthetic vitamin A, usually retinyl palmitate, as a means of preventing and treating vitamin A deficiency. Doses were 50,000 to 100,000 IU (international units) for children aged 6 to 11 months and 100,000 to 200,000 IU for children aged 12 months to five years, the latter typically every four to six months. In addition to a 24% reduction in all-cause mortality, eye-related results were reported. Prevalence of Bitot's spots at follow-up was reduced by 58%, night blindness by 68%, and xerophthalmia by 69%. Gene regulation RA regulates gene transcription by binding to nuclear receptors known as retinoic acid receptors (RARs; RARα, RARβ, RARγ), which are bound to DNA as heterodimers with retinoid "X" receptors (RXRs; RXRα, RXRβ, RXRγ). RARs and RXRs must dimerize before they can bind to the DNA. Expression of more than 500 genes is responsive to retinoic acid. RAR-RXR heterodimers recognize retinoic acid response elements on DNA. Upon binding retinoic acid, the receptors undergo a conformational change that causes co-repressors to dissociate from the receptors. Coactivators can then bind to the receptor complex, which may help to loosen the chromatin structure from the histones or may interact with the transcriptional machinery. This response upregulates or downregulates the expression of target genes, including the genes that encode for the receptors themselves. To deactivate retinoic acid receptor signaling, three cytochromes (Cyp26A1, Cyp26B1, Cyp26C1) catalyze the oxidation of RA. The genes for these proteins are induced by high concentrations of RA, thus providing a regulatory feedback mechanism. Embryology In vertebrates and invertebrate chordates, RA has a pivotal role during development. Altering levels of endogenous RA signaling during early embryology, either too low or too high, leads to birth defects, including congenital vascular and cardiovascular defects. Of note, fetal alcohol spectrum disorder encompasses congenital anomalies, including craniofacial, auditory, and ocular defects, neurobehavioral anomalies and mental disabilities caused by maternal consumption of alcohol during pregnancy. It is proposed that in the embryo there is competition between acetaldehyde, an ethanol metabolite, and retinaldehyde (retinal) for aldehyde dehydrogenase activity, resulting in a retinoic acid deficiency, with the congenital birth defects attributed to the loss of RA-activated gene expression. In support of this theory, ethanol-induced developmental defects can be ameliorated by increasing the levels of retinol or retinal. As for the risks of too much RA during embryogenesis, the prescription drugs tretinoin (all-trans-retinoic acid) and isotretinoin (13-cis-retinoic acid), used orally or topically for acne treatment, are labeled with boxed warnings for pregnant women or women who may become pregnant, as they are known human teratogens. Immune functions Vitamin A deficiency has been linked to compromised resistance to infectious diseases. 
In countries where early childhood vitamin A deficiency is common, vitamin A supplementation public health programs initiated in the 1980s were shown to reduce the incidence of diarrhea and measles, and all-cause mortality. Vitamin A deficiency also increases the risk of immune system over-reaction, leading to chronic inflammation in the intestinal system, stronger allergic reactions and autoimmune diseases. Lymphocytes and monocytes are types of white blood cells of the immune system. Lymphocytes include natural killer cells, which function in innate immunity, T cells for adaptive cellular immunity and B cells for antibody-driven adaptive humoral immunity. Monocytes differentiate into macrophages and dendritic cells. Some lymphocytes migrate to the thymus where they differentiate into several types of T cells, in some instances referred to as "killer" or "helper" T cells, and further differentiate after leaving the thymus. Each subtype has functions driven by the types of cytokines secreted and organs to which the cells preferentially migrate, also described as trafficking or homing. Retinoic acid (RA) triggers receptors in bone marrow, resulting in generation of new white blood cells. RA regulates proliferation and differentiation of white blood cells, the directed movement of T cells to the intestinal system, and the up- and down-regulation of lymphocyte function. If RA is adequate, T helper cell subtype Th1 is suppressed and subtypes Th2, Th17 and iTreg (for regulatory) are induced. Dendritic cells located in intestinal tissue have enzymes that convert retinal to all-trans-retinoic acid, to be taken up by retinoic acid receptors on lymphocytes. The process triggers gene expression that leads to T cell types Th2, Th17 and iTreg moving to and taking up residence in mesenteric lymph nodes and Peyer's patches, respectively outside and on the inner wall of the small intestine. The net effect is a down-regulation of immune activity, seen as tolerance of food allergens, and tolerance of resident bacteria and other organisms in the microbiome of the large intestine. In a vitamin A-deficient state, innate immunity is compromised and pro-inflammatory Th1 cells predominate. Skin Deficiencies in vitamin A have been linked to an increased susceptibility to skin infection and inflammation. Vitamin A appears to modulate the innate immune response and maintains homeostasis of epithelial tissues and mucosa through its metabolite, retinoic acid (RA). As part of the innate immune system, toll-like receptors in skin cells respond to pathogens and cell damage by inducing a pro-inflammatory immune response which includes increased RA production. The epithelium of the skin encounters bacteria, fungi and viruses. Keratinocytes of the epidermal layer of the skin produce and secrete antimicrobial peptides (AMPs). Production of the AMPs resistin and cathelicidin is promoted by RA. Units of measurement As some carotenoids can be converted into vitamin A, attempts have been made to determine how much of them in the diet is equivalent to a particular amount of retinol, so that comparisons can be made of the benefit of different foods. The situation can be confusing because the accepted equivalences have changed over time. For many years, a system of equivalencies in which an international unit (IU) was equal to 0.3 μg of retinol (~1 nmol), 0.6 μg of β-carotene, or 1.2 μg of other provitamin-A carotenoids was used. 
This relationship was alternatively expressed by the retinol equivalent (RE): one RE corresponded to 1 μg retinol, to 2 μg β-carotene dissolved in oil, to 6 μg β-carotene in foods, and to 12 μg of either α-carotene, γ-carotene, or β-cryptoxanthin in food. Newer research has shown that the absorption of provitamin-A carotenoids is only half as much as previously thought. As a result, in 2001 the US Institute of Medicine recommended a new unit, the retinol activity equivalent (RAE). Each μg RAE corresponds to 1 μg retinol, 2 μg of β-carotene in oil, 12 μg of "dietary" β-carotene, or 24 μg of the three other dietary provitamin-A carotenoids. Animal models have shown that at the enterocyte cell wall, β-carotene is taken up by the membrane transporter protein scavenger receptor class B, type 1 (SCARB1). Absorbed β-carotene is converted to retinal and then retinol. In the first step of the conversion process, one molecule of β-carotene is cleaved into two molecules of retinal by the enzyme β-carotene-15,15'-monooxygenase, which in humans and other mammalian species is encoded by the BCMO1 gene. When plasma retinol is in the normal range, gene expression for SCARB1 and BCMO1 is suppressed, creating a feedback loop that suppresses β-carotene absorption and conversion. Absorption suppression is not complete, as receptor 36 is not downregulated. Dietary recommendations The US National Academy of Medicine updated Dietary Reference Intakes (DRIs) in 2001 for vitamin A, which included Recommended Dietary Allowances (RDAs). For infants up to 12 months, there was not sufficient information to establish an RDA, so Adequate Intake (AI) is shown instead. As for safety, tolerable upper intake levels (ULs) were also established. For ULs, carotenoids are not added when calculating total vitamin A intake for safety assessments. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men of ages 15 and older, the PRIs are set respectively at 650 and 750 μg RE/day. PRI for pregnancy is 700 μg RE/day, for lactation 1300 μg RE/day. For children of ages 1–14 years, the PRIs increase with age from 250 to 600 μg RE/day. These PRIs are similar to the US RDAs. The EFSA reviewed the same safety question as the United States, and set ULs at 800 for ages 1–3, 1100 for ages 4–6, 1500 for ages 7–10, 2000 for ages 11–14, 2600 for ages 15–17 and 3000 μg/day for ages 18 and older for preformed vitamin A, i.e., not including dietary contributions from carotenoids. Safety Vitamin A toxicity (hypervitaminosis A) occurs when too much vitamin A accumulates in the body. It comes from consumption of preformed vitamin A but not of carotenoids, as conversion of the latter to retinol is suppressed by the presence of adequate retinol. Retinol safety There are historical reports of acute hypervitaminosis from Arctic explorers consuming bearded seal or polar bear liver, both very rich sources of stored retinol, and there are also case reports of acute hypervitaminosis from consuming fish liver, but otherwise there is no risk from consuming too much via commonly consumed foods. Only consumption of retinol-containing dietary supplements can result in acute or chronic toxicity. Acute toxicity occurs after a single dose, or short-term doses, of greater than 150,000 μg. 
Symptoms include blurred vision, nausea, vomiting, dizziness and headache within 8 to 24 hours. For infants ages 0–6 months given an oral dose to prevent development of vitamin A deficiency, a bulging skull fontanel was evident after 24 hours, usually resolving by 72 hours. Chronic toxicity may occur with long-term consumption of vitamin A at doses of 25,000–33,000 IU/day for several months. Excessive consumption of alcohol can lead to chronic toxicity at lower intakes. Symptoms may include nervous system effects, liver abnormalities, fatigue, muscle weakness, bone and skin changes and others. The adverse effects of both acute and chronic toxicity are reversed after consumption is stopped. In 2001, for the purpose of determining ULs for adults, the US Institute of Medicine considered three primary adverse effects and settled on two: teratogenicity, i.e., causing birth defects, and liver abnormalities. Reduced bone mineral density was considered, but dismissed because the human evidence was contradictory. During pregnancy, especially during the first trimester, consumption of retinol in amounts exceeding 4,500 μg/day increased the risk of birth defects, while amounts below that did not, establishing 4,500 μg/day as a "no-observed-adverse-effect level" (NOAEL). Given the quality of the clinical trial evidence, the NOAEL was divided by an uncertainty factor of 1.5 to set the UL for women of reproductive age at 3,000 μg/day of preformed vitamin A. For all other adults, liver abnormalities were detected at intakes above 14,000 μg/day. Given the weak quality of the clinical evidence, an uncertainty factor of 5 was used, and with rounding, the UL was set at 3,000 μg/day. For children, ULs were extrapolated from the adult value, adjusted for relative body weight. For infants, several case studies reported adverse effects that include bulging fontanels, increased intracranial pressure, loss of appetite, hyperirritability and skin peeling after chronic ingestion of the order of 6,000 or more μg/day. Given the small database, an uncertainty factor of 10 divided into the "lowest-observed-adverse-effect level" (LOAEL) led to a UL of 600 μg/day. β-carotene safety No adverse effects other than carotenemia have been reported for consumption of β-carotene-rich foods. Supplementation with β-carotene does not cause hypervitaminosis A. Two large clinical trials (ATBC and CARET) were conducted in tobacco smokers to see if years of β-carotene supplementation at 20 or 30 mg/day in oil-filled capsules would reduce the risk of lung cancer. These trials were implemented because observational studies had reported a lower incidence of lung cancer in tobacco smokers who had diets higher in β-carotene. Unexpectedly, high-dose β-carotene or retinol supplementation resulted in a higher incidence of lung cancer and higher total mortality, in part due to cardiovascular deaths. Taking this and other evidence into consideration, the U.S. Institute of Medicine decided not to set a Tolerable Upper Intake Level (UL) for β-carotene. The European Food Safety Authority, acting for the European Union, also decided not to set a UL for β-carotene. Carotenosis Carotenoderma, also referred to as carotenemia, is a benign and reversible medical condition where an excess of dietary carotenoids results in orange discoloration of the outermost skin layer. It is associated with a high blood β-carotene value. This can occur after a month or two of consumption of β-carotene-rich foods, such as carrots, carrot juice, tangerine juice, mangos, or in Africa, red palm oil. 
β-carotene dietary supplements can have the same effect. The discoloration extends to the palms and soles of the feet, but not to the whites of the eyes, which helps distinguish the condition from jaundice. Consumption of greater than 30 mg/day for a prolonged period has been confirmed as leading to carotenemia. U.S. labeling For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For vitamin A labeling purposes, 100% of the Daily Value was set at 5,000 IU, but it was revised to 900 μg RAE on 27 May 2016. A table of the old and new adult daily values is provided at Reference Daily Intake. Sources Vitamin A is found in many foods. Vitamin A in food exists either as preformed retinol (an active form of vitamin A found in animal liver, dairy and egg products, and some fortified foods) or as provitamin A carotenoids, which are plant pigments digested into vitamin A after consuming carotenoid-rich plant foods, typically red, orange, or yellow in color. Carotenoid pigments may be masked by chlorophylls in dark green leafy vegetables, such as spinach. The relatively low bioavailability of plant-food carotenoids results partly from their binding to proteins; chopping, homogenizing or cooking disrupts these proteins, increasing provitamin A carotenoid bioavailability. Vegetarian and vegan diets can provide sufficient vitamin A in the form of provitamin A carotenoids if the diet contains carrots, carrot juice, sweet potatoes, green leafy vegetables such as spinach and kale, and other carotenoid-rich foods. In the U.S., the average daily intake of β-carotene is in the range of 2–7 mg. Some manufactured foods and dietary supplements are sources of vitamin A or β-carotene. Fortification Some countries require or recommend fortification of foods. As of January 2022, 37 countries, mostly in Sub-Saharan Africa, require food fortification of cooking oil, rice, wheat flour or maize (corn) flour with vitamin A, usually as retinyl palmitate or retinyl acetate. Examples include Pakistan (oil, 11.7 mg/kg) and Nigeria (oil, 6 mg/kg; wheat and maize flour, 2 mg/kg). An additional 12 countries, mostly in southeast Asia, have a voluntary fortification program. For example, the government of India recommends 7.95 mg/kg in oil and 0.626 mg/kg for wheat flour and rice. However, compliance in countries with voluntary fortification is lower than in countries with mandatory fortification. No countries in Europe or North America fortify foods with vitamin A. Other means of fortifying foods via genetic engineering have been explored. Research on rice began in 1982. The first field trials of golden rice cultivars were conducted in 2004. The result was "Golden Rice", a variety of Oryza sativa rice produced through genetic engineering to biosynthesize β-carotene, a precursor of retinol, in the edible parts of rice. By May 2018, regulatory agencies in the United States, Canada, Australia and New Zealand had concluded that Golden Rice met food safety standards. In July 2021, the Philippines became the first country to officially issue the biosafety permit for commercially propagating Golden Rice. However, in April 2023, the Supreme Court of the Philippines issued a Writ of Kalikasan ordering the Department of Agriculture to stop the commercial distribution of genetically modified rice in the country. Vitamin A supplementation (VAS) Delivery of oral high-dose supplements remains the principal strategy for minimizing deficiency.
As of 2017, more than 80 countries worldwide are implementing universal VAS programs targeted to children 6–59 months of age through semi-annual national campaigns. Doses in these programs are a single dose of 50,000 or 100,000 IU for children aged 6 to 11 months, and 100,000 to 200,000 IU, every four to six months, for children aged 12 months to five years. Deficiency Primary causes Vitamin A deficiency is common in developing countries, especially in Sub-Saharan Africa and Southeast Asia. Deficiency can occur at any age, but is most common in pre-school-age children and pregnant women, the latter due to a need to transfer retinol to the fetus. The causes are low intake of retinol-containing, animal-sourced foods and low intake of carotene-containing, plant-sourced foods. Vitamin A deficiency is estimated to affect approximately one third of children under the age of five around the world, possibly leading to the deaths of 670,000 children under five annually. Between 250,000 and 500,000 children in developing countries become blind each year owing to vitamin A deficiency. Vitamin A deficiency is "the leading cause of preventable childhood blindness", according to UNICEF. It also increases the risk of death from common childhood conditions, such as diarrhea. UNICEF regards addressing vitamin A deficiency as critical to reducing child mortality, the fourth of the United Nations' Millennium Development Goals. During diagnosis, night blindness and dry eyes are signs of vitamin A deficiency that can be recognized without requiring biochemical tests. Plasma retinol is used to confirm vitamin A status. A plasma concentration of about 2.0 μmol/L is normal; less than 0.70 μmol/L (equivalent to 20 μg/dL) indicates moderate vitamin A deficiency, and less than 0.35 μmol/L (10 μg/dL) indicates severe vitamin A deficiency. Breast milk retinol of less than 8 μg/gram milk fat is considered insufficient. One weakness of these measures is that they are not good indicators of liver vitamin A stores as retinyl esters in hepatic stellate cells. The amount of vitamin A leaving the liver, bound to retinol binding protein (RBP), is under tight control as long as there are sufficient liver reserves. Only when liver content of vitamin A drops below approximately 20 μg/gram will concentration in the blood decline. Secondary causes There are causes for deficiency other than low dietary intake of vitamin A as retinol or carotenes. Adequate dietary protein and caloric energy are needed for a normal rate of synthesis of RBP, without which retinol cannot be mobilized to leave the liver. Systemic infections can cause transient decreases in RBP synthesis even if protein-calorie malnutrition is absent. Chronic alcohol consumption reduces liver vitamin A storage. Non-alcoholic fatty liver disease (NAFLD), characterized by the accumulation of fat in the liver, is the hepatic manifestation of metabolic syndrome. Liver damage from NAFLD reduces liver storage capacity for retinol and reduces the ability to mobilize liver stores to maintain normal circulating concentration. Vitamin A appears to be involved in the pathogenesis of anemia by diverse biological mechanisms, such as the enhancement of growth and differentiation of erythrocyte progenitor cells, potentiation of immunity to infection, and mobilization of iron stores from tissues. Animal requirements All vertebrate and chordate species require vitamin A, either as dietary carotenoids or preformed retinol from consuming other animals.
Deficiencies have been reported in laboratory-raised and pet dogs, cats, birds, reptiles and amphibians, as well as in commercially raised chickens and turkeys. Herbivore species such as horses, cattle and sheep can get sufficient β-carotene from green pasture to be healthy, but the content in pasture grass that is dry due to drought, and in long-stored hay, can be too low, leading to vitamin A deficiency. Omnivore and carnivore species, especially those toward the top of the food chain, can accrue large amounts of retinyl esters in their livers, or else excrete retinyl esters in urine as a means of dealing with surplus. Before the era of synthetic retinol, cod liver oil, high in vitamins A and D, was a commonly consumed dietary supplement. Invertebrates cannot synthesize carotenoids or retinol, and thus must accrue these essential nutrients from consumption of algae, plants or animals. Medical uses In 2022, vitamin A was the 346th most commonly prescribed medication in the United States, with more than 50,000 prescriptions. Preventing and treating vitamin A deficiency Recognition of its prevalence and consequences has led to governments and non-government organizations promoting vitamin A fortification of foods and creating programs that administer large bolus-size oral doses of vitamin A to young children every four to six months. In 2008, the World Health Organization estimated that vitamin A supplementation over a decade in 40 countries averted 1.25 million deaths due to vitamin A deficiency. A Cochrane review reported that vitamin A supplementation is associated with a clinically meaningful reduction in morbidity and mortality in children six months to five years of age. All-cause mortality was reduced by 14%, and incidence of diarrhea by 12%. However, a Cochrane review by the same group concluded there was insufficient evidence to recommend blanket vitamin A supplementation for infants one to six months of age, as it did not reduce infant mortality or morbidity. Acne Topical retinoic acid and retinol The retinoic acids tretinoin (all-trans-retinoic acid) and isotretinoin (13-cis-retinoic acid) are prescription topical medications used to treat moderate to severe cystic acne and acne not responsive to other treatments. These are usually applied as a skin cream to the face after cleansing to remove make-up and skin oils. Tretinoin and isotretinoin act by binding to two nuclear receptor families within keratinocytes: the retinoic acid receptors (RAR) and the retinoid X receptors (RXR). These events contribute to the normalization of follicular keratinization and decreased cohesiveness of keratinocytes, resulting in reduced follicular occlusion and microcomedone formation. The retinoid-receptor complex competes for coactivator proteins of AP-1, a key transcription factor involved in inflammation. Retinoic acid products also reduce sebum secretion, a nutrient source for bacteria, from facial pores. These drugs, when applied topically, are US-designated Pregnancy Category C (animal reproduction studies have shown an adverse effect on the fetus), and should not be used by pregnant women or women who are anticipating becoming pregnant. Many countries established a physician- and patient-education pregnancy prevention policy. Trifarotene is a prescription retinoid for the topical treatment of acne vulgaris. It functions as a retinoic acid receptor (RAR)-γ agonist.
Non-prescription topical products that have health claims for reducing facial acne, combating skin dark spots and reducing wrinkles and lines associated with aging often contain retinyl palmitate. The hypothesis is that this is absorbed and de-esterified to free retinol, then converted to retinaldehyde and further metabolized to all-trans-retinoic acid, whereupon it has the same effects as prescription products, with fewer side effects. There is some ex vivo evidence with human skin that esterified retinol is absorbed and then converted to retinol. In addition to esterified retinol, some of these products contain hydroxypinacolone retinoate, identified as esterified 9-cis-retinoic acid. Oral isotretinoin Oral isotretinoin (a retinoic acid isomer) is recommended for treating treatment-resistant acne, acne that can lead to scarring, and acne that is associated with psychosocial distress. It is approved by the FDA for treating severe acne vulgaris that is resistant to other treatment options. Isotretinoin is a known teratogen, with an estimated 20–35% risk of physical birth defects in infants exposed to isotretinoin in utero, including numerous congenital defects such as craniofacial defects, cardiovascular and neurological malformations or thymic disorders. The risk of neurocognitive impairment in the absence of any physical defects has been estimated at 30–60%. For these reasons, physician- and patient-education programs were initiated, recommending that for women of child-bearing age, contraception be initiated a month before starting oral (or topical) isotretinoin and continued for a month after treatment ends. In the US, isotretinoin was released to the market in 1982 as a revolutionary treatment for severe and refractory acne vulgaris. It was shown that a dose of 0.5–1.0 mg/kg body weight/day is enough to produce a reduction in sebum excretion by 90% within a month or two, but the recommended treatment duration is 4 to 6 months. The mechanism by which orally consumed retinoic acid (RA), as all-trans-tretinoin or 13-cis-isotretinoin, improves facial skin health is thought to be by switching on genes and differentiating keratinocytes (immature skin cells) into mature epidermal cells. RA reduces the size and secretion of the sebaceous glands, and by doing so reduces bacterial numbers in both the ducts and skin surface. It reduces inflammation via inhibition of chemotactic responses of monocytes and neutrophils. Other dermatological conditions In addition to the approved use for treating acne vulgaris, researchers have investigated off-label applications for other dermatological conditions, such as rosacea and psoriasis. Rosacea was reported as responding favorably to doses lower than those used for acne. Isotretinoin in combination with ultraviolet light was shown effective for treating psoriasis. Isotretinoin in combination with injected interferon-alpha showed some potential for treating genital warts. Isotretinoin in combination with topical fluorouracil or injected interferon-alpha showed some potential for treating precancerous skin lesions and skin cancer. Immune function Vitamin A plays an important role in the body's immune function, both in the adaptive response and in helping the body fight off infection. The anti-inflammatory effects of vitamin A also contribute to repairing mucosal cells that can be damaged by an infection.
For these reasons, there have been many studies examining the potential role that vitamin A supplementation may play in improving immune responses or helping the body fight off infection. The evidence supporting vitamin A supplementation to prevent upper respiratory tract infections in children under the age of 7 years is weak: the available low-quality clinical trials do not show vitamin A to be effective or beneficial. More research is needed to consider different doses, the ages and populations of people who may potentially benefit, and the length of treatment. Synthesis Biosynthesis Carotenoid synthesis takes place in plants, certain fungi, and bacteria. Structurally, carotenes are tetraterpenes, meaning that they are synthesized biochemically from four 10-carbon terpene units, which in turn are formed from eight 5-carbon isoprene units. Intermediate steps are the creation of a 40-carbon phytoene molecule, conversion to lycopene via desaturation, and then creation of ionone rings at both ends of the molecule. β-carotene has a β-ionone ring at both ends, meaning that the molecule can be divided symmetrically to yield two retinol molecules. α-Carotene has a β-ionone ring at one end and an ε-ionone ring at the other, so it has half the retinol conversion capacity. In most animal species, retinol is synthesized from the breakdown of the plant-formed provitamin, β-carotene. First, the enzyme β-carotene 15,15'-dioxygenase (BCO1) cleaves β-carotene at the central double bond, creating an epoxide. This epoxide is then attacked by water, creating two hydroxyl groups in the center of the structure. The cleavage occurs when these alcohols are oxidized to the aldehydes using NAD+. The resultant retinal is then quickly reduced to retinol by the enzyme retinol dehydrogenase. Omnivore species such as dogs, wolves, coyotes and foxes are in general low producers of BCO1. The enzyme is lacking in felids (cats), meaning that vitamin A requirements are met from the retinyl ester content of prey animals. Industrial synthesis β-carotene can be extracted from the fungus Blakeslea trispora, the marine alga Dunaliella salina, or genetically modified yeast Saccharomyces cerevisiae, starting with xylose as a substrate. Chemical synthesis uses either a method developed by BASF or a Grignard reaction utilized by Hoffmann-La Roche. The world market for synthetic retinol is primarily for animal feed, leaving approximately 13% for a combination of food, prescription medication and dietary supplement use. Industrial methods for the production of retinol rely on chemical synthesis. The first industrialized synthesis of retinol was achieved by the company Hoffmann-La Roche in 1947. In the following decades, eight other companies developed their own processes. β-ionone, synthesized from acetone, is the essential starting point for all industrial syntheses. Each process involves elongating the unsaturated carbon chain. Pure retinol is extremely sensitive to oxidation and is prepared and transported at low temperatures and in oxygen-free atmospheres. When prepared as a dietary supplement or food additive, retinol is stabilized as the ester derivatives retinyl acetate or retinyl palmitate. Prior to 1999, three companies, Roche, BASF and Rhône-Poulenc, controlled 96% of global vitamin A sales.
In 2001, the European Commission imposed total fines of 855.22 million euros on these and five other companies for their participation in eight distinct market-sharing and price-fixing cartels that dated back to 1989. Roche sold its vitamin division to DSM in 2003. DSM and BASF have the major share of industrial production. A biosynthetic alternative utilizes the genetically engineered yeast species Saccharomyces cerevisiae to synthesize retinal and retinol, using xylose as a starting substrate. This was accomplished by having the yeast first synthesize β-carotene and then express the cleaving enzyme β-carotene 15,15'-dioxygenase to yield retinal. Research Brain Pre-clinical animal research (in mice) found retinoic acid, the bioactive metabolite of vitamin A, to have an effect on brain areas responsible for memory and learning. Cancer Meta-analyses of intervention and observational trials for various types of cancer report mixed results. Supplementation with β-carotene did not appear to decrease the risk of cancer overall, nor the risk of specific cancers, including pancreatic, colorectal, prostate and breast cancer, melanoma, or skin cancer generally. High-dose β-carotene supplementation unexpectedly resulted in a higher incidence of lung cancer and higher total mortality in people who were cigarette smokers. For dietary retinol, no effects of high dietary intake were observed on breast cancer survival, risk of liver cancer, risk of bladder cancer or risk of colorectal cancer, although the last review did report a lower risk for higher β-carotene consumption. In contrast, an inverse association was reported between retinol intake and relative risk of esophageal cancer, gastric cancer, ovarian cancer, pancreatic cancer, lung cancer, melanoma, and cervical cancer. For lung cancer, an inverse association was also seen for β-carotene intake, separate from the retinol results. When high dietary intake was compared to low dietary intake, the decreases in relative risk were in the range of 15 to 20%. For gastric cancer, a meta-analysis of prevention trials reported a 29% decrease in relative risk from retinol supplementation at 1,500 μg/day. Fetal alcohol spectrum disorder Fetal alcohol spectrum disorder (FASD), formerly referred to as fetal alcohol syndrome, presents as craniofacial malformations, neurobehavioral disorders and mental disabilities, all attributed to exposing human embryos to alcohol during fetal development. The risk of FASD depends on the amount consumed, the frequency of consumption, and the points in pregnancy at which the alcohol is consumed. Ethanol is a known teratogen, i.e., it causes birth defects. Ethanol is metabolized by alcohol dehydrogenase enzymes into acetaldehyde. The subsequent oxidation of acetaldehyde into acetate is performed by aldehyde dehydrogenase enzymes. Given that retinoic acid (RA) regulates numerous embryonic and differentiation processes, one of the proposed mechanisms for the teratogenic effects of ethanol is a competition for the enzymes required for the biosynthesis of RA from vitamin A. Animal research demonstrates that in the embryo, the competition takes place between acetaldehyde and retinaldehyde for aldehyde dehydrogenase activity. In this model, acetaldehyde inhibits the production of retinoic acid by retinaldehyde dehydrogenase. Ethanol-induced developmental defects can be ameliorated by increasing the levels of retinol, retinaldehyde, or retinaldehyde dehydrogenase.
Thus, animal research supports the reduction of retinoic acid activity as an etiological trigger in the induction of FASD. Malaria Malaria and vitamin A deficiency are both common among young children in sub-Saharan Africa. Vitamin A supplementation to children in regions where vitamin A deficiency is common has repeatedly been shown to reduce overall mortality rates, especially from measles and diarrhea. For malaria, clinical trial results are mixed, either showing that vitamin A treatment did not reduce the incidence of probable malarial fever, or else did not affect incidence, but did reduce slide-confirmed parasite density and reduced the number of fever episodes. The question was raised as to whether malaria causes vitamin A deficiency, or vitamin A deficiency contributes to the severity of malaria, or both. Researchers proposed several mechanisms by which malaria (and other infections) could contribute to vitamin A deficiency, including a fever-induced reduction in synthesis of retinol-binding protein (RBP) responsible for transporting retinol from liver to plasma and tissues, but reported finding no evidence for a transient depression or restoration of plasma RBP or retinol after a malarial infection was eliminated. In history In 1912, Frederick Gowland Hopkins demonstrated that unknown accessory factors found in milk, other than carbohydrates, proteins, and fats, were necessary for growth in rats. Hopkins received a Nobel Prize for this discovery in 1929. By 1913, one of these substances was independently discovered by Elmer McCollum and Marguerite Davis at the University of Wisconsin–Madison, and Lafayette Mendel and Thomas Burr Osborne at Yale University. McCollum and Davis ultimately received credit because they submitted their paper three weeks before Mendel and Osborne. Both papers appeared in the same issue of the Journal of Biological Chemistry in 1913. The "accessory factors" were termed "fat soluble" in 1918, and later "vitamin A" in 1920. In 1919, Harry Steenbock (University of Wisconsin–Madison) proposed a relationship between yellow plant pigments (β-carotene) and vitamin A. In 1931, Swiss chemist Paul Karrer described the chemical structure of vitamin A. Retinoic acid and retinol were first synthesized in 1946 and 1947 by two Dutch chemists, David Adriaan van Dorp and Jozef Ferdinand Arens. During World War II, German bombers would attack at night to evade British defenses. In order to keep the 1939 invention of a new on-board Airborne Intercept Radar system secret from Germany, the British Ministry of Information told newspapers the unproven claim that the nighttime defensive success of Royal Air Force pilots was due to a high dietary intake of carrots rich in β-carotene, successfully convincing many people. In 1967, George Wald shared the Nobel Prize in Physiology or Medicine for his work on chemical visual processes in the eye. Wald had demonstrated in 1935 that photoreceptor cells in the eye contain rhodopsin, a photopigment composed of the protein opsin and the chromophore 11-cis-retinal. When struck by light, 11-cis-retinal undergoes photoisomerization to all-trans-retinal and, via a signal transduction cascade, sends a nerve signal to the brain. The all-trans-retinal is reduced to all-trans-retinol and travels back to the retinal pigment epithelium to be recycled to 11-cis-retinal and reconjugated to opsin. Wald's work was the culmination of nearly 60 years of research.
In 1877, Franz Christian Boll identified a light-sensitive pigment in the outer segments of rod cells of the retina that faded/bleached when exposed to light, but was restored after light exposure ceased. He suggested that this substance, by a photochemical process, conveyed the impression of light to the brain. The research was taken up by Wilhelm Kühne, who named the pigment rhodopsin, also known as "visual purple." Kühne confirmed that rhodopsin is extremely sensitive to light, and thus enables vision in low-light conditions, and that it was this light-induced chemical decomposition that stimulated nerve impulses to the brain. Research stalled until after the identification of "fat-soluble vitamin A", a dietary substance found in milkfat but not lard, which would reverse night blindness and xerophthalmia. In 1925, Fridericia and Holm demonstrated that vitamin A-deficient rats were unable to regenerate rhodopsin after being moved from a light to a dark room.
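The unit arithmetic that recurs in this article, namely the retinol activity equivalent conversion factors given under dietary recommendations and the plasma retinol concentrations quoted in both μmol/L and μg/dL under deficiency, can be summarized in a short illustrative sketch. This is a minimal example rather than a clinical tool: the function and variable names are invented for this illustration, and the approximate molar mass of retinol (about 286.45 g/mol) is the only figure not taken directly from the text.

```python
# Minimal sketch of the vitamin A unit arithmetic described above.
# Function and constant names are illustrative only.

RETINOL_MOLAR_MASS = 286.45  # g/mol, approximate molar mass of retinol (assumed value)

def micrograms_rae(retinol_ug=0.0, beta_carotene_supplement_ug=0.0,
                   dietary_beta_carotene_ug=0.0, other_carotenoids_ug=0.0):
    """Convert intakes to retinol activity equivalents (μg RAE).

    Conversion factors from the 2001 US Institute of Medicine definition:
    1 μg RAE = 1 μg retinol = 2 μg β-carotene in oil (supplements)
             = 12 μg dietary β-carotene
             = 24 μg of the other dietary provitamin A carotenoids
               (α-carotene, γ-carotene, β-cryptoxanthin).
    """
    return (retinol_ug
            + beta_carotene_supplement_ug / 2
            + dietary_beta_carotene_ug / 12
            + other_carotenoids_ug / 24)

def retinol_umol_per_l_to_ug_per_dl(umol_per_l):
    """Convert a plasma retinol concentration from μmol/L to μg/dL."""
    ug_per_l = umol_per_l * RETINOL_MOLAR_MASS  # 1 μmol of retinol is about 286.45 μg
    return ug_per_l / 10                        # 1 dL = 0.1 L

# A meal providing 3,600 μg of dietary β-carotene counts as 300 μg RAE,
# one third of the US adult Daily Value of 900 μg RAE.
assert micrograms_rae(dietary_beta_carotene_ug=3600) == 300

# The moderate-deficiency cut-off of 0.70 μmol/L corresponds to roughly 20 μg/dL.
print(round(retinol_umol_per_l_to_ug_per_dl(0.70)))  # -> 20
```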
Biology and health sciences
Vitamins
Health
54117
https://en.wikipedia.org/wiki/Folate
Folate
Folate, also known as vitamin B9 and folacin, is one of the B vitamins. Manufactured folic acid, which is converted into folate by the body, is used as a dietary supplement and in food fortification as it is more stable during processing and storage. Folate is required for the body to make DNA and RNA and metabolize amino acids necessary for cell division and maturation of blood cells. As the human body cannot make folate, it is required in the diet, making it an essential nutrient. It occurs naturally in many foods. The recommended adult daily intake of folate in the U.S. is 400 micrograms from foods or dietary supplements. Folate in the form of folic acid is used to treat anemia caused by folate deficiency. Folic acid is also used as a supplement by women during pregnancy to reduce the risk of neural tube defects (NTDs) in the baby. NTDs include anencephaly and spina bifida, among other defects. Low folate levels in early pregnancy are believed to be the cause of more than half of the cases of babies born with NTDs. More than 80 countries use either mandatory or voluntary fortification of certain foods with folic acid as a measure to decrease the rate of NTDs. Long-term supplementation with relatively large amounts of folic acid is associated with a small reduction in the risk of stroke and an increased risk of prostate cancer. There are concerns that large amounts of supplemental folic acid can hide vitamin B12 deficiency. Not consuming enough folate can lead to folate deficiency. This may result in a type of anemia in which red blood cells become abnormally large. Symptoms may include feeling tired, heart palpitations, shortness of breath, open sores on the tongue, and changes in the color of the skin or hair. Folate deficiency in children may develop within a month of poor dietary intake. In adults, normal total body folate is between 10 and 30 mg, with about half of this amount stored in the liver and the remainder in blood and body tissues. In plasma, the natural folate range is 150 to 450 nM. Folate was discovered between 1931 and 1943. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 65th most commonly prescribed medication in the United States, with more than 10 million prescriptions. The term "folic" is from the Latin word folium (which means leaf) because it was found in dark-green leafy vegetables. Definition Folate (vitamin B9) refers to the many forms of folic acid and its related compounds, including tetrahydrofolic acid (the active form), methyltetrahydrofolate (the primary form found in blood), methenyltetrahydrofolate, folinic acid, folacin, and pteroylglutamic acid. Historic names included L. casei factor, vitamin Bc and vitamin M. The terms folate and folic acid have somewhat different meanings in different contexts, although they are sometimes used interchangeably. Within the field of organic chemistry, folate refers to the conjugate base of folic acid. Within the field of biochemistry, folates refer to a class of biologically active compounds related to and including folic acid. Within the field of nutrition, the folates are a family of essential nutrients related to folic acid obtained from natural sources, whereas the term folic acid is reserved for the manufactured form that is used as a dietary supplement. Chemically, folates consist of three distinct chemical moieties linked together.
A pterin (2-amino-4-hydroxy-pteridine) heterocyclic ring is linked by a methylene bridge to a p-aminobenzoyl group that in turn is bonded through an amide linkage to either glutamic acid or poly-glutamate. One-carbon units in a variety of oxidation states may be attached to the N5 nitrogen atom of the pteridine ring and/or the N10 nitrogen atom of the p-aminobenzoyl group. Health effects Folate is especially important during periods of frequent cell division and growth, such as infancy and pregnancy. Folate deficiency hinders DNA synthesis and cell division, affecting hematopoietic cells and neoplasms the most because of their greater frequency of cell division. RNA transcription and subsequent protein synthesis are less affected by folate deficiency, as the mRNA can be recycled and used again (as opposed to DNA synthesis, where a new genomic copy must be created). Birth defects Deficiency of folate in pregnant women has been implicated in neural tube defects (NTDs), with an estimated 300,000 cases worldwide prior to the implementation in many countries of mandatory food fortification. NTDs occur early in pregnancy (first month); therefore, women must have abundant folate upon conception, and for this reason it is recommended that any woman planning to become pregnant consume a folate-containing dietary supplement before and during pregnancy. The Centers for Disease Control and Prevention (CDC) recommends a daily amount of 400 micrograms of folic acid for the prevention of NTDs. Many women take less folic acid than the CDC recommends, especially in cases where the pregnancy was unplanned, or in countries that lack healthcare resources and education. Some countries have implemented either mandatory or voluntary food fortification of wheat flour and other grains, but many others rely on public health education and one-on-one healthcare practitioner advice. A meta-analysis of global birth prevalence of spina bifida showed that when countries with a national, mandatory folate fortification program were compared to countries without such a program, there was a 30% reduction in live births with spina bifida. Some countries reported a greater than 50% reduction. The United States Preventive Services Task Force recommends folic acid as the supplement or fortification ingredient, as forms of folate other than folic acid have not been studied. A meta-analysis of folate supplementation during pregnancy reported a 28% lower relative risk of newborn congenital heart defects. Prenatal supplementation with folic acid did not appear to reduce the risk of preterm births. One systematic review indicated no effect of folic acid on mortality, growth, body composition, respiratory, or cognitive outcomes of children from birth to 9 years old. There was no relation between maternal folic acid supplementation and an increased risk for childhood asthma. Fertility Folate contributes to spermatogenesis. In women, folate is important for oocyte quality and maturation, implantation, placentation, fetal growth and organ development. Heart disease One meta-analysis reported that multi-year folic acid supplementation, in amounts that in most of the included clinical trials were higher than the upper limit of 1,000 μg/day, reduced the relative risk of cardiovascular disease by a modest 4%. Two older meta-analyses, which would not have incorporated results from newer clinical trials, reported no changes to the risk of cardiovascular disease.
Stroke The absolute risk of stroke with supplementation decreases from 4.4% to 3.8% (a 10% decrease in relative risk). Two other meta-analyses reported a similar decrease in relative risk. Two of these three were limited to people with pre-existing cardiovascular disease or coronary heart disease. The beneficial result may be associated with lowering circulating homocysteine concentration, as stratified analysis showed that risk was reduced more when there was a larger decrease in homocysteine. The effect was also larger for the studies that were conducted in countries that did not have mandatory grain folic acid fortification. The beneficial effect was larger in the subset of trials that used a lower folic acid supplement compared to higher. Cancer Chronically insufficient intake of folate may increase the risk of colorectal, breast, ovarian, pancreatic, brain, lung, cervical, and prostate cancers. Early after fortification programs were implemented, high intakes were theorized to accelerate the growth of preneoplastic lesions that could lead to cancer, specifically colon cancer. Subsequent meta-analyses of the effects of low versus high dietary folate, elevated serum folate, and supplemental folate in the form of folic acid have reported at times conflicting results. Comparing low to high dietary folate showed a modest but statistically significant reduced risk of colon cancer. For prostate cancer risk, comparing low to high dietary folate showed no effect. A review of trials that involved folic acid dietary supplements reported a statistically significant 24% increase in prostate cancer risk. It was shown that supplementation with folic acid at 1,000 to 2,500 μg/day – the amounts used in many of the cited supplement trials – would result in higher concentrations of serum folate than what is achieved from diets high in food-derived folate. The second supplementation review reported no significant increase or decrease in total cancer incidence, colorectal cancer, other gastrointestinal cancer, genitourinary cancer, lung cancer or hematological malignancies in people who were consuming folic acid supplements. A third supplementation meta-analysis limited to reporting only on colorectal cancer incidence showed that folic acid treatment was not associated with colorectal cancer risk. Anti-folate chemotherapy Folate is important for cells and tissues that divide rapidly. Cancer cells divide rapidly, and drugs that interfere with folate metabolism are used to treat cancer. The antifolate drug methotrexate is often used to treat cancer because it inhibits the production of the active tetrahydrofolate (THF) from the inactive dihydrofolate (DHF). However, methotrexate can be toxic, producing side effects such as inflammation in the digestive tract that make eating normally more difficult. Bone marrow depression (inducing leukopenia and thrombocytopenia) and acute kidney and liver failure have been reported. Folinic acid, under the drug name leucovorin, a form of folate (formyl-THF), can help "rescue" or reverse the toxic effects of methotrexate. Folic acid supplements have little established role in cancer chemotherapy. The supplement of folinic acid in people undergoing methotrexate treatment is to give less rapidly dividing cells enough folate to maintain normal cell functions. The amount of folate given is quickly depleted by rapidly dividing (cancer) cells, so this does not negate the effects of methotrexate. 
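The stroke figures above quote both an absolute change (4.4% to 3.8%) and a relative change, two quantities that are easy to conflate. The sketch below uses purely hypothetical event rates, not the trial data cited above, to show how the two measures relate.

```python
def risk_reduction(control_risk, treated_risk):
    """Return (absolute, relative) risk reduction for two event rates.

    Both inputs are proportions, e.g. 0.05 for a 5% event rate.
    """
    absolute = control_risk - treated_risk   # change in percentage points
    relative = absolute / control_risk       # fraction of the baseline risk removed
    return absolute, relative

# Hypothetical example: events fall from 5.0% to 4.0% of participants.
arr, rrr = risk_reduction(0.050, 0.040)
print(f"absolute risk reduction: {arr:.1%}")   # 1.0% (one percentage point)
print(f"relative risk reduction: {rrr:.0%}")   # 20%
```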
Neurological disorders Conversion of homocysteine to methionine requires folate and vitamin B12. Elevated plasma homocysteine and low folate are associated with cognitive impairment, dementia and Alzheimer's disease. Supplementing the diet with folic acid and vitamin B12 lowers plasma homocysteine. However, several reviews reported that supplementation with folic acid alone or in combination with other B vitamins did not prevent development of cognitive impairment nor slow cognitive decline. Relative risk of autism spectrum disorders (ASDs) was reported reduced by 23% when the maternal diet was supplemented with folic acid during pregnancy. Subset analysis confirmed this among Asian, European and American populations. Cerebral folate deficiency (CFD) has been associated with ASDs. The cerebral folate receptor alpha (FRα) transports 5-methyltetrahydrofolate into the brain. One cause of CFD is autoantibodies that interfere with FRa, and FRa autoantibodies have been reported in ASDs. For individuals with ASD and CFD, meta-analysis reported improvements with treatment with folinic acid, a 5-formyl derivative of tetrahydrofolic acid, for core and associated ASD symptoms. Some evidence links a shortage of folate with clinical depression. Limited evidence from randomized controlled trials showed using folic acid in addition to selective serotonin reuptake inhibitors (SSRIs) may have benefits. Research found a link between depression and low levels of folate. The exact mechanisms involved in the development of schizophrenia and depression are not entirely clear, but the bioactive folate, methyltetrahydrofolate (5-MTHF), a direct target of methyl donors such as S-adenosyl methionine (SAMe), recycles the inactive dihydrobiopterin (BH2) into tetrahydrobiopterin (BH4), the necessary cofactor in various steps of monoamine synthesis, including that of dopamine and serotonin. BH4 serves a regulatory role in monoamine neurotransmission and is required to mediate the actions of most antidepressants. Folic acid, B12 and iron A complex interaction occurs between folic acid, vitamin B12, and iron. A deficiency of folic acid or vitamin B12 may mask the deficiency of iron; so when taken as dietary supplements, the three need to be in balance. Malaria Some studies show iron–folic acid supplementation in children under five may result in increased mortality due to malaria; this has prompted the World Health Organization to alter their iron–folic acid supplementation policies for children in malaria-prone areas, such as India. Absorption, metabolism and excretion Folate in food is roughly one-third in the form of monoglutamate and two-thirds polyglutamate; the latter is hydrolyzed to monoglutamate via a reaction mediated by folate conjugase at the brush border of enterocytes in the proximal small intestine. Subsequently, intestinal absorption is primarily accomplished by the action of the proton-coupled folate transporter (PCFT) protein coded for by the SLC46A1 gene. This functions best at pH 5.5, which corresponds to the acidic status of the proximal small intestine. PCFT binds to both reduced folates and folic acid. A secondary folate transporter is the reduced folate carrier (RFC), coded for by the SLC19A1 gene. It operates optimally at pH 7.4 in the ileum portion of the small intestine. It has a low affinity for folic acid. Production of the receptor proteins is increased in times of folate deficiency. 
In addition to a role in intestinal absorption, RFC is expressed in virtually all tissues and is the major route of delivery of folate to cells within the systemic circulation under physiological conditions. When pharmacological amounts of folate are taken as a dietary supplement, absorption also takes place by a passive diffusion-like process. In addition, bacteria in the distal portion of the small intestine and in the large intestine synthesize modest amounts of folate, and there are RFC receptors in the large intestine, so this in situ source may contribute toward the cellular nutrition and health of the local colonocytes. The biological activity of folate in the body depends upon dihydrofolate reductase action in the liver, which converts folate into tetrahydrofolate (THF). This action is rate-limiting in humans, leading to elevated blood concentrations of unmetabolized folic acid when consumption from dietary supplements and fortified foods nears or exceeds the U.S. Tolerable Upper Intake Level of 1,000 μg per day. The total human body content of folate is estimated to be approximately 15–30 milligrams, with approximately half in the liver. Excretion is via urine and feces. Under normal dietary intake, urinary excretion is mainly as folate cleavage products, but if a dietary supplement is being consumed then there will be intact folate in the urine. The liver produces folate-containing bile, which, if not all absorbed in the small intestine, contributes to fecal folate, intact and as cleavage products, which under normal dietary intake has been estimated to be similar in amount to urinary excretion. Fecal content includes what is synthesized by intestinal microflora. Biosynthesis Animals, including humans, cannot synthesize (produce) folate and therefore must obtain folate from their diet. All plants and fungi and certain protozoa, bacteria, and archaea can synthesize folate de novo through variations on the same biosynthetic pathway. The folate molecule is synthesized from pterin pyrophosphate, para-aminobenzoic acid (PABA), and glutamate through the action of dihydropteroate synthase and dihydrofolate synthase. Pterin is in turn derived in a series of enzymatically catalyzed steps from guanosine triphosphate (GTP), while PABA is a product of the shikimate pathway. Bioactivation All of the biological functions of folic acid are performed by THF and its methylated derivatives. Hence folic acid must first be reduced to THF. This four-electron reduction proceeds in two chemical steps, both catalyzed by the same enzyme, dihydrofolate reductase. Folic acid is first reduced to dihydrofolate and then to tetrahydrofolate. Each step consumes one molecule of NADPH (biosynthetically derived from vitamin B3) and produces one molecule of NADP+. Mechanistically, hydride is transferred from NADPH to the C6 position of the pteridine ring. A one-carbon (1C) unit is added to tetrahydrofolate through the action of serine hydroxymethyltransferase (SHMT) to yield 5,10-methylenetetrahydrofolate (5,10-CH2-THF). This reaction also consumes serine and pyridoxal phosphate (PLP; vitamin B6) and produces glycine and pyridoxal. A second enzyme, methylenetetrahydrofolate dehydrogenase (MTHFD2), oxidizes 5,10-methylenetetrahydrofolate to an iminium cation, which in turn is hydrolyzed to produce 5-formyl-THF and 10-formyl-THF. This series of reactions, using the β-carbon atom of serine as the carbon source, provides the largest part of the one-carbon units available to the cell.
Alternative carbon sources include formate which by the catalytic action of formate–tetrahydrofolate ligase adds a 1C unit to THF to yield 10-formyl-THF. Glycine, histidine, and sarcosine can also directly contribute to the THF-bound 1C pool. Drug interference A number of drugs interfere with the biosynthesis of THF from folic acid. Among them are the antifolate dihydrofolate reductase inhibitors such as the antimicrobial, trimethoprim, the antiprotozoal, pyrimethamine and the chemotherapy drug methotrexate, and the sulfonamides (competitive inhibitors of PABA in the reactions of dihydropteroate synthetase). Valproic acid, one of the most commonly prescribed epilepsy treatment drugs, also used to treat certain psychological conditions such as bipolar disorder, is a known inhibitor of folic acid, and as such, has been shown to cause birth defects, including neural tube defects, plus increased risk for children having cognitive impairment and autism. There is evidence that folate consumption is protective. Folate deficiency is common in alcoholics, attributed to both inadequate diet and an inhibition in intestinal processing of the vitamin. Chronic alcohol use inhibits both the digestion process of dietary folate polyglutamates and the uptake phase of liberated folate monoglutamates. The latter is associated with a significant reduction in the level of expression of RFC. Function Tetrahydrofolate's main function in metabolism is transporting single-carbon groups (i.e., a methyl group, methylene group, or formyl group). These carbon groups can be transferred to other molecules as part of the modification or biosynthesis of a variety of biological molecules. Folates are essential for the synthesis of DNA, the modification of DNA and RNA, the synthesis of methionine from homocysteine, and various other chemical reactions involved in cellular metabolism. These reactions are collectively known as folate-mediated one-carbon metabolism. DNA synthesis Folate derivatives participate in the biosynthesis of both purines and pyrimidines. Formyl folate is required for two of the steps in the biosynthesis of inosine monophosphate, the precursor to GMP and AMP. Methylenetetrahydrofolate donates the C1 center required for the biosynthesis of dTMP (2-deoxythymidine-5-phosphate) from dUMP (2-deoxyuridine-5-phosphate). The conversion is catalyzed by thymidylate synthase. Vitamin B12 activation Methyl-THF converts vitamin B12 to methyl-B12 (methylcobalamin). Methyl-B12 converts homocysteine, in a reaction catalyzed by homocysteine methyltransferase, to methionine. A defect in homocysteine methyltransferase or a deficiency of B12 may lead to a so-called "methyl-trap" of THF, in which THF converts to methyl-THF, causing a deficiency in folate. Thus, a deficiency in B12 can cause accumulation of methyl-THF, mimicking folate deficiency. Dietary recommendations Because of the difference in bioavailability between supplemented folic acid and the different forms of folate found in food, the dietary folate equivalent (DFE) system was established. One DFE is defined as 1 μg of dietary folate. 1 μg of folic acid supplement counts as 1.7 μg DFE. The reason for the difference is that when folic acid is added to food or taken as a dietary supplement with food it is at least 85% absorbed, whereas only about 50% of folate naturally present in food is absorbed. The U.S. 
Institute of Medicine defines Estimated Average Requirements (EARs), Recommended Dietary Allowances (RDAs), Adequate Intakes (AIs), and Tolerable upper intake levels (ULs) – collectively referred to as Dietary Reference Intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men over age 18, the PRI is set at 330 μg/day. PRI for pregnancy is 600 μg/day, for lactation 500 μg/day. For children ages 1–17 years, the PRIs increase with age from 120 to 270 μg/day. These values differ somewhat from the U.S. RDAs. The United Kingdom's Dietary Reference Value for folate, set by the Committee on Medical Aspects of Food and Nutrition Policy in 1991, is 200 μg/day for adults. Safety The risk of toxicity from folic acid is low because folate is a water-soluble vitamin and is regularly removed from the body through urine. One potential issue associated with high doses of folic acid is that it has a masking effect on the diagnosis of pernicious anaemia due to vitamin B12 deficiency, and may even precipitate or exacerbate neuropathy in vitamin B12-deficient individuals. This evidence justified development of a UL for folate. In general, ULs are set for vitamins and minerals when evidence is sufficient. The adult UL of 1,000 μg for folate (and lower for children) refers specifically to folic acid used as a supplement, as no health risks have been associated with high intake of folate from food sources. The EFSA reviewed the safety question and agreed with United States that the UL be set at 1,000 μg. The Japan National Institute of Health and Nutrition set the adult UL at 1,300 or 1,400 μg depending on age. Reviews of clinical trials that called for long-term consumption of folic acid in amounts exceeding the UL have raised concerns. Excessive amounts derived from supplements are more of a concern than that derived from natural food sources and the relative proportion to vitamin B12 may be a significant factor in adverse effects. One theory is that consumption of large amounts of folic acid leads to detectable amounts of unmetabolized folic acid circulating in blood because the enzyme dihydrofolate reductase that converts folic acid to the biologically active forms is rate limiting. Evidence of a negative health effect of folic acid in blood is not consistent, and folic acid has no known cofactor function that would increase the likelihood of a causal role for free folic acid in disease development. However, low vitamin B12 status in combination with high folic acid intake, in addition to the previously mentioned neuropathy risk, appeared to increase the risk of cognitive impairment in the elderly. Long-term use of folic acid dietary supplements in excess of 1,000 μg/day has been linked to an increase in prostate cancer risk. Food labeling For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For folate labeling purposes, 100% of the Daily Value was 400 μg. As of the 27 May 2016 update, it was kept unchanged at 400 μg. Compliance with the updated labeling regulations was required by 1 January 2020 for manufacturers with US$10 million or more in annual food sales, and by 1 January 2021 for manufacturers with lower volume food sales. 
A table of the old and new adult daily values is provided at Reference Daily Intake. European Union regulations require that labels declare energy, protein, fat, saturated fat, carbohydrates, sugars, and salt. Voluntary nutrients may be shown if present in significant amounts. Instead of Daily Values, amounts are shown as percent of Reference Intakes (RIs). For folate, 100% RI was set at 200 μg in 2011. Deficiency Folate deficiency can be caused by unhealthy diets that do not include enough vegetables and other folate-rich foods; diseases in which folates are not well absorbed in the digestive system (such as Crohn's disease or celiac disease); some genetic disorders that affect levels of folate; and certain medicines (such as phenytoin, sulfasalazine, or trimethoprim-sulfamethoxazole). Folate deficiency is accelerated by alcohol consumption, possibly by interference with folate transport. Folate deficiency may lead to glossitis, diarrhea, depression, confusion, anemia, and fetal neural tube and brain defects. Other symptoms include fatigue, gray hair, mouth sores, poor growth, and swollen tongue. Folate deficiency is diagnosed by analyzing a complete blood count (CBC) and plasma vitamin B12 and folate levels. A serum folate of 3 μg/L or lower indicates deficiency. Serum folate level reflects folate status, but erythrocyte folate level better reflects tissue stores after intake. An erythrocyte folate level of 140 μg/L or lower indicates inadequate folate status. Serum folate reacts more rapidly to folate intake than erythrocyte folate. Since folate deficiency limits cell division, erythropoiesis (production of red blood cells) is hindered. This leads to megaloblastic anemia, which is characterized by large, immature red blood cells. This pathology results from persistently thwarted attempts at normal DNA replication, DNA repair, and cell division, and produces abnormally large red cells called megaloblasts (and hypersegmented neutrophils) with abundant cytoplasm capable of RNA and protein synthesis, but with clumping and fragmentation of nuclear chromatin. Some of these large cells, although immature (reticulocytes), are released early from the marrow in an attempt to compensate for the anemia. Both adults and children need folate to make normal red and white blood cells and prevent anemia, which causes fatigue, weakness, and inability to concentrate. Increased homocysteine levels suggest tissue folate deficiency, but homocysteine is also affected by vitamin B12 and vitamin B6, renal function, and genetics. One way to differentiate between folate deficiency and vitamin B12 deficiency is by testing for methylmalonic acid (MMA) levels. Normal MMA levels indicate folate deficiency and elevated MMA levels indicate vitamin B12 deficiency. Elevated MMA levels may also be due to the rare metabolic disorder combined malonic and methylmalonic aciduria (CMAMMA). Folate deficiency is treated with supplemental oral folic acid of 400 to 1000 μg per day. This treatment is very successful in replenishing tissues, even if deficiency was caused by malabsorption. People with megaloblastic anemia need to be tested for vitamin B12 deficiency before treatment with folic acid, because if the person has vitamin B12 deficiency, folic acid supplementation can remove the anemia, but can also worsen neurologic problems. Cobalamin (vitamin B12) deficiency may lead to folate deficiency, which, in turn, increases homocysteine levels and may result in the development of cardiovascular disease or birth defects. 
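The diagnostic cut-offs just described, namely serum folate of 3 μg/L or lower, erythrocyte folate of 140 μg/L or lower, and methylmalonic acid to separate folate deficiency from vitamin B12 deficiency, can be read as simple decision logic. The sketch below is only an illustration of those quoted thresholds, not a clinical algorithm; the function name and the assumption that an MMA result is reported as simply normal or elevated are inventions for this example.

```python
def interpret_folate_labs(serum_folate_ug_l, rbc_folate_ug_l, mma_elevated):
    """Illustrative interpretation of the cut-offs quoted above.

    serum_folate_ug_l : serum folate in μg/L (deficiency at 3 or lower)
    rbc_folate_ug_l   : erythrocyte folate in μg/L (inadequate at 140 or lower)
    mma_elevated      : True if methylmalonic acid is elevated
    """
    findings = []
    if serum_folate_ug_l <= 3:
        findings.append("serum folate indicates deficiency")
    if rbc_folate_ug_l <= 140:
        findings.append("erythrocyte folate indicates inadequate tissue stores")
    if findings:
        # Elevated MMA points to vitamin B12 deficiency rather than folate deficiency.
        findings.append("elevated MMA suggests vitamin B12 deficiency"
                        if mma_elevated else
                        "normal MMA is consistent with folate deficiency")
    return findings

print(interpret_folate_labs(2.5, 120, mma_elevated=False))
```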
Sources The United States Department of Agriculture, Agricultural Research Service maintains a food composition database from which folate content in hundreds of foods can be searched as shown in the table. The Food Fortification Initiative lists all countries in the world that conduct fortification programs, and within each country, what nutrients are added to which foods, and whether those programs are voluntary or mandatory. In the US, mandatory fortification of enriched breads, cereals, flours, corn meal, pastas, rice, and other grain products began in January 1998. As of 2023, 140 countries require food fortification with one or more vitamins, with folate required in 69 countries. The most commonly fortified food is wheat flour, followed by maize flour and rice. From country to country, added folic acid amounts range from 0.4 to 5.1 mg/kg, but the great majority are in a more narrow range of 1.0 to 2.5 mg/kg, i.e. 100–250 μg/100g. Folate naturally found in food is susceptible to destruction from high heat cooking, especially in the presence of acidic foods and sauces. It is soluble in water, and so may be lost from foods boiled in water. For foods that are normally consumed cooked, values in the table are for folate naturally occurring in cooked foods. Food fortification Folic acid fortification is a process where synthetic folic acid is added to wheat flour or other foods with the intention of promoting public health through increasing blood folate levels in the populace. It is used as it is more stable during processing and storage. After the discovery of the link between insufficient folic acid and neural tube defects, governments and health organizations worldwide made recommendations concerning folic acid supplementation for women intending to become pregnant. Because the neural tube closes in the first four weeks of gestation, often before many women even know they are pregnant, many countries in time decided to implement mandatory food fortification programs. A meta-analysis of global birth prevalence of spina bifida showed that when mandatory fortification was compared to countries with voluntary fortification or no fortification program, there was a 30% reduction in live births with spina bifida, with some countries reporting a greater than 50% reduction. Folic acid is added to grain products in more than 80 countries, either as required or voluntary fortification, and these fortified products make up a significant source of the population's folate intake. Fortification is controversial, with issues having been raised concerning individual liberty, as well as the theorized health concerns described in the Safety section. In the U.S., there is concern that the federal government mandates fortification but does not provide monitoring of potential undesirable effects of fortification. The Food Fortification Initiative lists all countries in the world that conduct fortification programs, and within each country, what nutrients are added to which foods. The most commonly mandatory fortified vitamin – in 62 countries – is folate; the most commonly fortified food is wheat flour. Australia and New Zealand Australia and New Zealand jointly agreed to wheat flour fortification through the Food Standards Australia New Zealand in 2007. The requirement was set at 135 μg of folate per 100 g of bread. Australia implemented the program in 2009. 
New Zealand was also planning to fortify bread (excluding organic and unleavened varieties) starting in 2009, but then opted to wait until more research was done. The Association of Bakers and the Green Party had opposed mandatory fortification, describing it as "mass medication". Food Safety Minister Kate Wilkinson reviewed the decision to fortify in July 2009, citing, as reasons to oppose fortification, claims of links between overconsumption of folate and increased risk of cancer. In 2012, the delayed mandatory fortification program was revoked and replaced by a voluntary program, with the hope of achieving a 50% bread fortification target. Canada Canadian public health efforts focused on promoting awareness of the importance of folic acid supplementation for all women of childbearing age and decreasing socio-economic inequalities by providing practical folic acid support to vulnerable groups of women. Folic acid food fortification became mandatory in 1998, with the fortification of 150 μg of folic acid per 100 grams of enriched flour and uncooked cereal grains. The results of folic acid fortification on the rate of neural tube defects in Canada have been positive, showing a 46% reduction in prevalence of NTDs; the magnitude of reduction was proportional to the prefortification rate of NTDs, essentially removing geographical variations in rates of NTDs seen in Canada before fortification. United Kingdom While the Food Standards Agency recommended folic acid fortification, and wheat flour is fortified with iron, folic acid fortification of wheat flour is allowed on a voluntary basis rather than required. A 2018 review by authors based in the United Kingdom strongly recommended that mandatory fortification be reconsidered as a means of reducing the risk of neural tube defects. In November 2024 the UK government announced legislation to require folic acid fortification in bread by the end of 2026. United States In 1996, the United States Food and Drug Administration (FDA) published regulations requiring the addition of folic acid to enriched breads, cereals, flours, corn meals, pastas, rice, and other grain products. This ruling took effect on 1 January 1998, and was specifically targeted to reduce the risk of neural tube birth defects in newborns. There were concerns expressed that the amount of folate added was insufficient. The fortification program was expected to raise a person's folic acid intake level by 70–130 μg/day; however, an increase of almost double that amount was actually observed. This could be because many foods are fortified by 160–175% over the required amount. Much of the older population takes supplements that add 400 μg to their daily folic acid intake. This is a concern because 70–80% of the population have detectable levels of unmetabolized folic acid in their blood, a consequence of folic acid supplementation and fortification. However, at blood concentrations achieved via food fortification, folic acid has no known cofactor function that would increase the likelihood of a causal role for free folic acid in disease development. The U.S. National Center for Health Statistics conducts the biannual National Health and Nutrition Examination Survey (NHANES) to assess the health and nutritional status of adults and children in the United States. Some results are reported as What We Eat In America.
The 2013–2014 survey reported that for adults ages 20 years and older, men consumed an average of 249 μg/day of folate from food plus 207 μg/day of folic acid from consumption of fortified foods, for a combined total of 601 μg/day of dietary folate equivalents (DFEs; each microgram of folic acid counts as 1.7 μg of food folate). For women, the values are 199, 153 and 459 μg/day, respectively. This means that fortification led to a bigger increase in folic acid intake than first projected, and that more than half of adults consume more than the RDA of 400 μg (as DFEs). Even so, fewer than half of pregnant women exceed the pregnancy RDA of 600 μg/day. Before folic acid fortification, about 4,100 pregnancies were affected by a neural tube defect each year in the United States. The Centers for Disease Control and Prevention reported in 2015 that since the addition of folic acid to grain-based foods as mandated by the FDA, the rate of neural tube defects dropped by 35%. This translates to an annual saving in total direct costs of approximately $508 million for the NTD-affected births that were prevented. History In the 1920s, scientists believed folate deficiency and anemia were the same condition. In 1931, researcher Lucy Wills made a key observation that led to the identification of folate as the nutrient required to prevent anemia during pregnancy. Wills demonstrated that anemia could be reversed with brewer's yeast. In the late 1930s, folate was identified as the corrective substance in brewer's yeast. It was first isolated via extraction from spinach leaves by Herschel K. Mitchell, Esmond E. Snell, and Roger J. Williams in 1941. The term "folic" is from the Latin word folium (which means leaf) because it was found in dark-green leafy vegetables. Historic names included L. casei factor, vitamin Bc (after research done in chicks) and vitamin M (after research done in monkeys). Bob Stokstad isolated the pure crystalline form in 1943, and was able to determine its chemical structure while working at the Lederle Laboratories of the American Cyanamid Company. This historical research project, of obtaining folic acid in a pure crystalline form in 1945, was done by the team called the "folic acid boys", under the supervision and guidance of Director of Research Dr. Yellapragada Subbarow, at the Lederle Lab, Pearl River, New York. This research subsequently led to the synthesis of the antifolate aminopterin, which was used to treat childhood leukemia by Sidney Farber in 1948. In the 1950s and 1960s, scientists began to discover the biochemical mechanisms of action for folate. In 1960, researchers linked folate deficiency to risk of neural tube defects. In the late 1990s, the U.S. and Canadian governments decided that despite public education programs and the availability of folic acid supplements, there was still a challenge for women of child-bearing age to meet the daily folate recommendations, and those two countries then implemented folate fortification programs. As of December 2018, 62 countries mandated food fortification with folic acid. Animals Veterinarians may test cats and dogs if a risk of folate deficiency is indicated. Cats with exocrine pancreatic insufficiency, more so than dogs, may have low serum folate. In dog breeds at risk for cleft lip and cleft palate, dietary folic acid supplementation significantly decreased incidence.
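A minimal worked check of the dietary folate equivalent figures in the United States subsection above (illustrative only; the function name is an invention for this sketch):

```python
# DFE rule used above: total DFE = food folate (ug) + 1.7 x folic acid (ug).
def dietary_folate_equivalents(food_folate_ug: float, folic_acid_ug: float) -> float:
    return food_folate_ug + 1.7 * folic_acid_ug

print(round(dietary_folate_equivalents(249, 207)))  # men:   601 ug/day DFE
print(round(dietary_folate_equivalents(199, 153)))  # women: 459 ug/day DFE
```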
Biology and health sciences
Vitamins
Health
54118
https://en.wikipedia.org/wiki/Biotin
Biotin
Biotin (also known as vitamin B7 or vitamin H) is one of the B vitamins. It is involved in a wide range of metabolic processes, both in humans and in other organisms, primarily related to the utilization of fats, carbohydrates, and amino acids. The name biotin, borrowed from German, derives from the Ancient Greek word for 'life' and the suffix "-in" (a suffix used in chemistry usually to indicate 'forming'). Biotin appears as a white, needle-like crystalline solid. Chemical description Biotin is classified as a heterocyclic compound, with a sulfur-containing tetrahydrothiophene ring fused to a ureido group. A C5-carboxylic acid side chain is appended to the former ring. The ureido ring, containing the −N−CO−N− group, serves as the carbon dioxide carrier in carboxylation reactions. Biotin is a coenzyme for five carboxylase enzymes, which are involved in the catabolism of amino acids and fatty acids, synthesis of fatty acids, and gluconeogenesis. Biotinylation of histone proteins in nuclear chromatin plays a role in chromatin stability and gene expression. Dietary recommendations The US National Academy of Medicine updated Dietary Reference Intakes for many vitamins in 1998. At that time there was insufficient information to establish an estimated average requirement or recommended dietary allowance, terms that exist for most vitamins. In instances such as this, the academy sets adequate intakes (AIs) with the understanding that at some later date, when the physiological effects of biotin are better understood, AIs will be replaced by more exact information. The biotin AI for both adult males and females is 30 μg/day. Australia and New Zealand set AIs similar to the US. The European Food Safety Authority (EFSA) also identifies AIs, setting values at 40 μg/day for adults, pregnancy at 40 μg/day, and breastfeeding at 45 μg/day. For children ages 1–17 years, the AIs increase with age from 20 to 35 μg/day. Safety The US National Academy of Medicine estimates upper limits for vitamins and minerals when evidence for a true limit is sufficient. For biotin, however, there is no upper limit because the adverse effects of high biotin intake have not been determined. The EFSA also reviewed safety and reached the same conclusion as in the United States. Labeling regulations For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of daily value. For biotin labeling purposes, 100% of the daily value was 300 μg/day, but as of May 27, 2016, it was revised to 30 μg/day to agree with the adequate intake. Compliance with the updated labeling regulations was required by January 1, 2020, for manufacturers with US$10 million or more in annual food sales, and by January 1, 2021, for manufacturers with lower volume food sales. A table of the old and new adult daily values is provided at Reference Daily Intake. Sources Biotin is stable at room temperature and is not destroyed by cooking. The dietary biotin intake in Western populations has been estimated to be in the range of 35 to 70 μg/day. Nursing infants ingest about 6 μg/day. Biotin is available in dietary supplements, individually or as an ingredient in multivitamins. According to the Global Fortification Data Exchange, biotin deficiency is so rare that no countries require that foods be fortified. Physiology Biotin is a water-soluble B vitamin. Consumption of large amounts as a dietary supplement results in absorption, followed by excretion into urine as intact biotin.
Consumption of biotin as part of a normal diet results in urinary excretion of biotin and biotin metabolites. Absorption Biotin in food is bound to proteins. Digestive enzymes reduce the proteins to biotin-bound peptides. The intestinal enzyme biotinidase, found in pancreatic secretions and in the brush border membranes of all three parts of the small intestine, frees biotin, which is then absorbed from the small intestine. When consumed as a biotin dietary supplement, absorption is nonsaturable, meaning that even very high amounts are absorbed effectively. Transport across the jejunum is faster than across the ileum. The large intestine microbiota synthesizes amounts of biotin estimated to be similar to the amount taken in the diet, and a significant portion of this biotin exists in the free (protein-unbound) form and, thus, is available for absorption. How much is absorbed in humans is unknown, although a review did report that human colon epithelial cells in vitro demonstrated an ability to take up biotin. Once absorbed, the sodium-dependent multivitamin transporter (SMVT) mediates biotin uptake into the liver. SMVT also binds pantothenic acid, so high intakes of either of these vitamins can interfere with the transport of the other. Metabolism and excretion Biotin catabolism occurs via two pathways. In one, the valeric acid side chain is cleaved, resulting in bisnorbiotin. In the other pathway, the sulfur is oxidized, resulting in biotin sulfoxide. Urine content is proportionally about half biotin, plus bisnorbiotin, biotin sulfoxide, and small amounts of other metabolites. Factors that affect biotin requirements Chronic alcohol use is associated with a significant reduction in plasma biotin. Intestinal biotin uptake also appears to be sensitive to the effect of the anti-epilepsy drugs carbamazepine and primidone. Relatively low levels of biotin have also been reported in the urine or plasma of patients who have had a partial gastrectomy or have other causes of achlorhydria, as well as burn patients, elderly individuals, and athletes. Pregnancy and lactation may be associated with an increased demand for biotin. In pregnancy, this may be due to a possible acceleration of biotin catabolism, whereas in lactation the reason for the higher demand has yet to be elucidated. Recent studies have shown marginal biotin deficiency can be present in human gestation, as evidenced by increased urinary excretion of 3-hydroxyisovaleric acid, decreased urinary excretion of biotin and bisnorbiotin, and decreased plasma concentration of biotin. Biosynthesis Biotin, synthesized in plants, is essential to plant growth and development. Bacteria also synthesize biotin, and it is thought that bacteria resident in the large intestine may synthesize biotin that is absorbed and utilized by the host organism. Biosynthesis starts from two precursors, alanine and pimeloyl-CoA. These form 7-keto-8-aminopelargonic acid (KAPA). KAPA is transported from plant peroxisomes to mitochondria, where it is converted to 7,8-diaminopelargonic acid (DAPA) with the help of the enzyme BioA. The enzyme dethiobiotin synthetase (BioD) catalyzes the formation of the ureido ring via a DAPA carbamate activated with ATP, creating dethiobiotin. In the last step, dethiobiotin is converted into biotin by biotin synthase (BioB), a radical SAM enzyme. The sulfur is donated by an unusual [2Fe-2S] ferredoxin. Depending on the species of bacteria, biotin can be synthesized via multiple pathways.
Cofactor biochemistry The enzyme holocarboxylase synthetase covalently attaches biotin to five human carboxylase enzymes: Acetyl-CoA carboxylase alpha (ACC1) Acetyl-CoA carboxylase beta (ACC2) Pyruvate carboxylase (PC) Methylcrotonyl-CoA carboxylase (MCC) Propionyl-CoA carboxylase (PCC) For the first two, biotin serves as a cofactor responsible for the transfer of bicarbonate to acetyl-CoA, converting it to malonyl-CoA for fatty acid synthesis. PC participates in gluconeogenesis. MCC catalyzes a step in leucine metabolism. PCC catalyzes a step in the metabolism of propionyl-CoA. Metabolic degradation of the biotinylated carboxylases leads to the formation of biocytin. This compound is further degraded by biotinidase to release biotin, which is then reutilized by holocarboxylase synthetase. Biotinylation of histone proteins in nuclear chromatin is a posttranslational modification that plays a role in chromatin stability and gene expression. Deficiency Primary biotin deficiency, meaning deficiency due to too little biotin in the diet, is rare because biotin is contained in many foods. Subclinical deficiency can cause mild symptoms, such as hair thinning, brittle fingernails, or skin rash, typically on the face. Aside from inadequate dietary intake (rare), biotin deficiency can be caused by a genetic disorder that affects biotin metabolism. The most common among these is biotinidase deficiency. Low activity of this enzyme causes a failure to recycle biotin from biocytin. Rarer are carboxylase and biotin transporter deficiencies. Neonatal screening for biotinidase deficiency started in the United States in 1984, with many countries now also testing for this genetic disorder at birth. Treatment is a lifelong dietary supplement with biotin. If biotinidase deficiency goes untreated, it can be fatal. Diagnosis Low serum and urine biotin are not sensitive indicators of inadequate biotin intake. However, serum testing can be useful for confirmation of consumption of biotin-containing dietary supplements, and whether a period of refraining from supplement use is long enough to eliminate the potential for interfering with drug tests. Indirect measures depend on the biotin requirement for carboxylases. 3-Methylcrotonyl-CoA is an intermediate in the catabolism of the amino acid leucine. Without biotin, the pathway diverts to 3-hydroxyisovaleric acid. Urinary excretion of this compound is an early and sensitive indicator of biotin deficiency. Deficiency as a result of metabolic disorders Biotinidase deficiency is a deficiency of the enzyme that recycles biotin, due to an inherited genetic mutation. Biotinidase catalyzes the cleavage of biotin from biocytin and biotinyl-peptides (the proteolytic degradation products of each holocarboxylase) and thereby recycles biotin. It is also important in freeing biotin from dietary protein-bound biotin. Neonatal screening for biotinidase deficiency started in the United States in 1984, which as of 2017 was reported as required in more than 30 countries. Profound biotinidase deficiency, defined as less than 10% of normal serum enzyme activity (normal activity has been reported as 7.1 nmol/min/mL), has an incidence of 1 in 40,000 to 1 in 60,000, but with rates as high as 1 in 10,000 in countries with a high incidence of consanguineous marriages (second cousin or closer). Partial biotinidase deficiency is defined as 10% to 30% of normal serum activity. Incidence data stems from government-mandated newborn screening.
For profound deficiency, treatment is oral dosing with 5 to 20 mg per day. Seizures are reported as resolving in hours to days, with other symptoms resolving within weeks. Treatment of partial biotinidase deficiency is also recommended even though some untreated people never manifest symptoms. Lifelong treatment with supplemental biotin is recommended for both profound and partial biotinidase deficiency. Inherited metabolic disorders characterized by deficient activities of biotin-dependent carboxylases are termed multiple carboxylase deficiency. These include deficiency of the enzyme holocarboxylase synthetase. Holocarboxylase synthetase deficiency prevents the body's cells from using biotin effectively and thus interferes with multiple carboxylase reactions. There can also be a genetic defect affecting the sodium-dependent multivitamin transporter protein. Biochemical and clinical manifestations of any of these metabolic disorders can include ketolactic acidosis, organic aciduria, hyperammonemia, rash, hypotonia, seizures, developmental delay, alopecia and coma. Use in biotechnology Chemically modified versions of biotin are widely used throughout the biotechnology industry to isolate proteins and non-protein compounds for biochemical assays. Because egg-derived avidin binds strongly to biotin with a dissociation constant Kd ≈ 10^−15 M, biotinylated compounds of interest can be isolated from a sample by exploiting this highly stable interaction. First, the chemically modified biotin reagents are bound to the targeted compounds in a solution via a process called biotinylation. The choice of chemical modification determines which specific proteins the biotin reagent binds to. Second, the sample is incubated with avidin bound to beads, then rinsed, removing all unbound proteins, while leaving only the biotinylated protein bound to avidin. Last, the biotinylated protein can be eluted from the beads with excess free biotin. The process can also utilize bacteria-derived streptavidin bound to beads, but because it has a higher dissociation constant than avidin, very harsh conditions are needed to elute the biotinylated protein from the beads, which often will denature the protein of interest. Interference with medical laboratory results When people are ingesting high levels of biotin in dietary supplements, a consequence can be clinically significant interference with diagnostic blood tests that use biotin-streptavidin technology. This methodology is commonly used to measure levels of hormones such as thyroid hormones, and other analytes such as 25-hydroxyvitamin D. Biotin interference can produce both falsely normal and falsely abnormal results. In the US, biotin as a non-prescription dietary supplement is sold in amounts of 1 to 10 mg per serving, with claims for supporting hair and nail health, and as 300 mg per day as a possibly effective treatment for multiple sclerosis (see § Research). Consumption of 5 mg/day or more causes elevated plasma concentrations that interfere with biotin-streptavidin immunoassays in an unpredictable manner. Healthcare professionals are advised to instruct patients to stop taking biotin supplements for 48 hours or even up to weeks before the test, depending on the specific test, dose, and frequency of biotin intake. Guidance has been proposed to help laboratory staff detect and manage biotin interference. History In 1916, W. G. Bateman observed that a diet high in raw egg whites caused toxic symptoms in dogs, cats, rabbits, and humans.
By 1927, scientists such as Margarete Boas and Helen Parsons had performed experiments demonstrating the symptoms associated with "egg-white injury." They had found that rats fed large amounts of egg whites as their only protein source exhibited neurological dysfunction, hair loss, dermatitis, and eventually, death. In 1936, Fritz Kögl and Benno Tönnis documented isolating a yeast growth factor in a journal article whose title translates as "Representation of crystallized biotin from egg yolk". The name biotin derives from the Greek word for 'to live' and the suffix "-in" (a general chemical suffix used in organic chemistry). Other research groups, working independently, had isolated the same compound under different names. Hungarian scientist Paul Gyorgy began investigating the factor responsible for egg-white injury in 1933 and, in 1939, was successful in identifying what he called "Vitamin H" (the H represents Haar und Haut, German for 'hair and skin'). Further chemical characterization of vitamin H revealed that it was water-soluble and present in high amounts in the liver. After experiments performed with yeast and Rhizobium trifolii, West and Wilson isolated a compound they called co-enzyme R. By 1940, it was recognized that all three compounds were identical and were collectively given the name: biotin. Gyorgy continued his work on biotin and in 1941 published a paper demonstrating that egg-white injury was caused by the binding of biotin by avidin. Unlike for many vitamins, there is insufficient information to establish a recommended dietary allowance, so dietary guidelines identify an "adequate intake" based on best available science with the understanding that at some later date this will be replaced by more exact information. Working with E. coli, Rolfe and Eisenberg proposed a biosynthesis pathway in 1968. The initial step was described as a condensation of pimeloyl-CoA and alanine to form 7-oxo-8-aminopelargonic acid. From there, they described a three-step process, the last being the introduction of a sulfur atom to form the tetrahydrothiophene ring. Research Multiple sclerosis High-dose biotin (300 mg/day, 10,000 times the adequate intake) has been used in clinical trials for treatment of multiple sclerosis, a demyelinating autoimmune disease. The hypothesis is that biotin may promote remyelination of the myelin sheath of nerve cells, slowing or even reversing neurodegeneration. The proposed mechanisms are that biotin activates acetyl-CoA carboxylase, a key rate-limiting enzyme in the synthesis of myelin, and that it reduces axonal hypoxia through enhanced energy production. Clinical trial results are mixed; a 2019 review concluded that a further investigation of the association between multiple sclerosis symptoms and biotin should be undertaken, whereas two 2020 reviews of a larger number of clinical trials reported no consistent evidence for benefits, and some evidence for increased disease activity and higher risk of relapse. Hair, nails, skin In the United States, biotin is promoted as a dietary supplement for strengthening hair and fingernails, though scientific data supporting these outcomes in humans are very weak. A review of the fingernail literature cited as evidence two pre-1990 clinical trials that reported brittle nail improvement after an oral dietary supplement of 2.5 mg/day taken for several months, without a placebo control comparison group. There is no more recent clinical trial literature.
A review of biotin as a treatment for hair loss identified case studies of infants and young children with genetic-defect biotin deficiency having improved hair growth after supplementation, but went on to report that "there have been no randomized, controlled trials to prove the efficacy of supplementation with biotin in normal, healthy individuals." Biotin is also incorporated into topical hair and skin products with similar claims. The Dietary Supplement Health and Education Act of 1994 states that the US Food and Drug Administration must allow product labels to carry what are described as "Structure:Function" (S:F) health claims, i.e., claims that an ingredient is essential for health. For example: Biotin helps maintain healthy skin, hair, and nails. If a S:F claim is made, the label must include the disclaimer "This statement has not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease." Animals In cattle, biotin is necessary for hoof health. Lameness due to hoof problems is common, with herd prevalence estimated at 10 to 35%. Consequences of lameness include less food consumption, lower milk production, and increased veterinary treatment costs. Supplementing the daily diet with biotin at 20 mg/day for 4–6 months reduces the risk of lameness. A review of controlled trials reported that supplementation at 20 mg/day increased milk yield by 4.8%. The discussion speculated that this could be an indirect consequence of improved hoof health or a direct effect on milk production. For horses, conditions such as chronic laminitis, cracked hooves, or dry, brittle feet incapable of holding shoes are a common problem. Biotin is a popular nutritional supplement. There are recommendations that horses need 15 to 25 mg/day. Studies report biotin improves the growth of new hoof horn rather than improving the status of existing hoof, so months of supplementation are needed for the hoof wall to be completely replaced.
Biology and health sciences
Vitamins
Health
54124
https://en.wikipedia.org/wiki/Rhyolite
Rhyolite
Rhyolite is the most silica-rich of volcanic rocks. It is generally glassy or fine-grained (aphanitic) in texture, but may be porphyritic, containing larger mineral crystals (phenocrysts) in an otherwise fine-grained groundmass. The mineral assemblage is predominantly quartz, sanidine, and plagioclase. It is the extrusive equivalent of granite. Its high silica content makes rhyolitic magma extremely viscous. This favors explosive eruptions over effusive eruptions, so this type of magma is more often erupted as pyroclastic rock than as lava flows. Rhyolitic ash-flow tuffs are among the most voluminous of continental igneous rock formations. Rhyolitic tuff has been used extensively for construction. Obsidian, which is rhyolitic volcanic glass, has been used for tools from prehistoric times to the present day because it can be shaped to an extremely sharp edge. Rhyolitic pumice finds use as an abrasive, in concrete, and as a soil amendment. Description Rhyolite is an extrusive igneous rock, formed from magma rich in silica that is extruded from a volcanic vent to cool quickly on the surface rather than slowly in the subsurface. It is generally light in color due to its low content of mafic minerals, and it is typically very fine-grained (aphanitic) or glassy. An extrusive igneous rock is classified as rhyolite when quartz constitutes 20% to 60% by volume of its total content of quartz, alkali feldspar, and plagioclase (QAPF) and alkali feldspar makes up 35% to 90% of its total feldspar content. Feldspathoids are not present. This makes rhyolite the extrusive equivalent of granite. However, while the IUGS recommends classifying volcanic rocks on the basis of their mineral composition whenever possible, volcanic rocks are often glassy or so fine-grained that mineral identification is impractical. The rock must then be classified chemically based on its content of silica and alkali metal oxides (K2O plus Na2O). Rhyolite is high in silica and total alkali metal oxides, placing it in the R field of the TAS diagram. The alkali feldspar in rhyolites is sanidine or, less commonly, orthoclase. It is rarely anorthoclase. These feldspar minerals sometimes are present as phenocrysts. The plagioclase is usually sodium-rich (oligoclase or andesine). Cristobalite and tridymite are sometimes present along with the quartz. Biotite, augite, fayalite, and hornblende are common accessory minerals. Geology Due to their high content of silica and low iron and magnesium contents, rhyolitic magmas form highly viscous lavas. As a result, many eruptions of rhyolite are highly explosive, and rhyolite occurs more frequently as pyroclastic rock than as lava flows. Rhyolitic ash flow tuffs are the only volcanic product with volumes rivaling those of flood basalts. Rhyolites also occur as breccias or in lava domes, volcanic plugs, and dikes. Rhyolitic lavas erupt at relatively low temperatures, significantly cooler than the temperatures at which basaltic lavas erupt. Rhyolites that cool too quickly to grow crystals form a natural glass or vitrophyre, also called obsidian. Slower cooling forms microscopic crystals in the lava and results in textures such as flow foliations, spherulitic, nodular, and lithophysal structures. Some rhyolite is highly vesicular pumice. Peralkaline rhyolites (rhyolites unusually rich in alkali metals) include comendite and pantellerite.
Peralkalinity has significant effects on lava flow morphology and mineralogy, such that peralkaline rhyolites can be 10–30 times more fluid than typical calc-alkaline rhyolites. As a result of their increased fluidity, they are able to form small-scale flow folds, lava tubes and thin dikes. Peralkaline rhyolites erupt at relatively high temperatures. They comprise bimodal shield volcanoes at hotspots and rifts (e.g. Rainbow Range, Ilgachuz Range and Level Mountain in British Columbia, Canada). Eruptions of rhyolite lava are relatively rare compared to eruptions of less felsic lavas. Only four eruptions of rhyolite have been recorded since the start of the 20th century: at the St. Andrew Strait volcano in Papua New Guinea and Novarupta volcano in Alaska as well as at Chaitén and Cordón Caulle volcanoes in southern Chile. The eruption of Novarupta in 1912 was the largest volcanic eruption of the 20th century, and began with explosive volcanism that later transitioned to effusive volcanism and the formation of a rhyolite dome in the vent. Petrogenesis Rhyolite magmas can be produced by igneous differentiation of a more mafic (silica-poor) magma, through fractional crystallization or by assimilation of melted crustal rock (anatexis). Associations of andesites, dacites, and rhyolites in similar tectonic settings and with similar chemistry suggest that the rhyolite members were formed by differentiation of mantle-derived basaltic magmas at shallow depths. In other cases, the rhyolite appears to be a product of melting of crustal sedimentary rock. Water vapor plays an important role in lowering the melting point of silicic rock, and some rhyolitic magmas may have a water content as high as 7–8 weight percent. High-silica rhyolite (HSR), with a silica content of 75 to 77.8%, forms a distinctive subgroup within the rhyolites. HSRs are the most evolved of all igneous rocks, with a composition very close to the water-saturated granite eutectic and with extreme enrichment in most incompatible elements. However, they are highly depleted in strontium, barium, and europium. They are interpreted as products of repeated melting and freezing of granite in the subsurface. HSRs typically erupt in large caldera eruptions. Occurrence Rhyolite is common along convergent plate boundaries, where a slab of oceanic lithosphere is being subducted into the Earth's mantle beneath overriding oceanic or continental lithosphere. It can sometimes be the predominant igneous rock type in these settings. Rhyolite is more common when the overriding lithosphere is continental rather than oceanic. The thicker continental crust gives the rising magma more opportunity to differentiate and assimilate crustal rock. Rhyolite has been found on islands far from land, but such oceanic occurrences are rare. The tholeiitic magmas erupted at volcanic ocean islands, such as Iceland, can sometimes differentiate all the way to rhyolite, and about 8% of the volcanic rock in Iceland is rhyolite. However, this is unusual, and the Hawaiian Islands (for example) have no known occurrences of rhyolite. The alkaline magmas of volcanic ocean islands will very occasionally differentiate all the way to peralkaline rhyolites, but differentiation usually ends with trachyte. Small volumes of rhyolite are sometimes erupted in association with flood basalts, late in their history and where central volcanic complexes develop.
Name The name rhyolite was introduced into geology in 1860 by the German traveler and geologist Ferdinand von Richthofen from the Greek word rhýax ("a stream of lava") and the rock name suffix "-lite". Uses In North American pre-historic times, rhyolite was quarried extensively in what is now eastern Pennsylvania. Among the leading quarries was the Carbaugh Run Rhyolite Quarry Site in Adams County. Rhyolite was mined there starting 11,500 years ago. Tons of rhyolite were traded across the Delmarva Peninsula, because the rhyolite kept a sharp point when knapped and was used to make spear points and arrowheads. Obsidian is usually of rhyolitic composition, and it has been used for tools since prehistoric times. Obsidian scalpels have been investigated for use in delicate surgery. Pumice, also typically of rhyolitic composition, finds important uses as an abrasive, in concrete, and as a soil amendment. Rhyolitic tuff was used extensively for construction in ancient Rome and has been used in construction in modern Europe.
Physical sciences
Igneous rocks
Earth science
54125
https://en.wikipedia.org/wiki/Breccia
Breccia
Breccia is a rock composed of large angular broken fragments of minerals or rocks cemented together by a fine-grained matrix. The word has its origins in the Italian language, in which it means "rubble". A breccia may have a variety of different origins, as indicated by the named types including sedimentary breccia, fault or tectonic breccia, igneous breccia, impact breccia, and hydrothermal breccia. A megabreccia is a breccia composed of very large rock fragments, sometimes kilometers across, which can be formed by landslides, impact events, or caldera collapse. Types Breccia is composed of coarse rock fragments held together by cement or a fine-grained matrix. Like conglomerate, breccia contains at least 30 percent of gravel-sized particles (particles over 2 mm in size), but it is distinguished from conglomerate because the rock fragments have sharp edges that have not been worn down. These indicate that the gravel was deposited very close to its source area, since otherwise the edges would have been rounded during transport. Most of the rounding of rock fragments takes place within the first few kilometers of transport, though complete rounding of pebbles of very hard rock may require a much longer distance of river transport. A megabreccia is a breccia containing very large rock fragments, from at least a meter in size to greater than 400 meters. In some cases, the clasts are so large that the brecciated nature of the rock is not obvious. Megabreccias can be formed by landslides, impact events, or caldera collapse. Breccias are further classified by their mechanism of formation. Sedimentary Sedimentary breccia is breccia formed by sedimentary processes. For example, scree deposited at the base of a cliff may become cemented to form a talus breccia without ever experiencing transport that might round the rock fragments. Thick sequences of sedimentary (colluvial) breccia are generally formed next to fault scarps in grabens. Sedimentary breccia may be formed by submarine debris flows. Turbidites occur as fine-grained peripheral deposits to sedimentary breccia flows. In a karst terrain, a collapse breccia may form due to collapse of rock into a sinkhole or in cave development. Collapse breccias also form by dissolution of underlying evaporite beds. Fault Fault or tectonic breccia results from the grinding action of two fault blocks as they slide past each other. Subsequent cementation of these broken fragments may occur by means of the introduction of mineral matter in groundwater. Igneous Igneous clastic rocks can be divided into two classes: Broken, fragmental rocks associated with volcanic eruptions, both of the lava and pyroclastic type; Broken, fragmental rocks produced by intrusive processes, usually associated with plutons or porphyry stocks. Volcanic Volcanic pyroclastic rocks are formed by explosive eruption of lava and any rocks which are entrained within the eruptive column. This may include rocks plucked off the wall of the magma conduit, or physically picked up by the ensuing pyroclastic surge. Lavas, especially rhyolite and dacite flows, tend to form clastic volcanic rocks by a process known as autobrecciation. This occurs when the thick, nearly solid lava breaks up into blocks and these blocks are then reincorporated into the lava flow again and mixed in with the remaining liquid magma. The resulting breccia is uniform in rock type and chemical composition. Caldera collapse leads to the formation of megabreccias, which are sometimes mistaken for outcrops of the caldera floor.
These are instead blocks of precaldera rock, often coming from the unstable oversteepened rim of the caldera. They are distinguished from mesobreccias whose clasts are less than a meter in size and which form layers in the caldera floor. Some clasts of caldera megabreccias can be over a kilometer in length. Within the volcanic conduits of explosive volcanoes the volcanic breccia environment merges into the intrusive breccia environment. There the upwelling lava tends to solidify during quiescent intervals only to be shattered by ensuing eruptions. This produces an alloclastic volcanic breccia. Intrusive Clastic rocks are also commonly found in shallow subvolcanic intrusions such as porphyry stocks, granites and kimberlite pipes, where they are transitional with volcanic breccias. Intrusive rocks can become brecciated in appearance by multiple stages of intrusion, especially if fresh magma is intruded into partly consolidated or solidified magma. This may be seen in many granite intrusions where later aplite veins form a late-stage stockwork through earlier phases of the granite mass. When particularly intense, the rock may appear as a chaotic breccia. Clastic rocks in mafic and ultramafic intrusions have been found and form via several processes: consumption and melt-mingling with wall rocks, where the wall rocks are softened and gradually invaded by the hotter ultramafic intrusion (producing taxitic texture); accumulation of rocks which fall through the magma chamber from the roof, forming chaotic remnants; autobrecciation of partly consolidated cumulate by fresh magma injections; accumulation of xenoliths within a feeder conduit or vent conduit, forming a diatreme breccia pipe. Impact Impact breccias are thought to be diagnostic of an impact event such as an asteroid or comet striking the Earth and are normally found at impact craters. Impact breccia, a type of impactite, forms during the process of impact cratering when large meteorites or comets impact with the Earth or other rocky planets or asteroids. Breccia of this type may be present on or beneath the floor of the crater, in the rim, or in the ejecta expelled beyond the crater. Impact breccia may be identified by its occurrence in or around a known impact crater, and/or an association with other products of impact cratering such as shatter cones, impact glass, shocked minerals, and chemical and isotopic evidence of contamination with extraterrestrial material (e.g., iridium and osmium anomalies). An example of an impact breccia is the Neugrund breccia, which was formed in the Neugrund impact. Hydrothermal Hydrothermal breccias usually form at shallow crustal levels (<1 km) between 150 and 350 °C, when seismic or volcanic activity causes a void to open along a fault deep underground. The void draws in hot water, and as pressure in the cavity drops, the water violently boils. In addition, the sudden opening of a cavity causes rock at the sides of the fault to destabilise and implode inwards, and the broken rock gets caught up in a churning mixture of rock, steam and boiling water. Rock fragments collide with each other and the sides of the void, and the angular fragments become more rounded. Volatile gases are lost to the steam phase as boiling continues, in particular carbon dioxide. As a result, the chemistry of the fluids changes and ore minerals rapidly precipitate. Breccia-hosted ore deposits are quite common. 
The morphology of breccias associated with ore deposits varies from tabular sheeted veins and clastic dikes associated with overpressured sedimentary strata, to large-scale intrusive diatreme breccias (breccia pipes), or even some synsedimentary diatremes formed solely by the overpressure of pore fluid within sedimentary basins. Hydrothermal breccias are usually formed by hydrofracturing of rocks by highly pressured hydrothermal fluids. They are typical of the epithermal ore environment and are intimately associated with intrusive-related ore deposits such as skarns, greisens and porphyry-related mineralisation. Epithermal deposits are mined for copper, silver and gold. In the mesothermal regime, at much greater depths, fluids under lithostatic pressure can be released during seismic activity associated with mountain building. The pressurised fluids ascend towards shallower crustal levels that are under lower hydrostatic pressure. On their journey, high-pressure fluids crack rock by hydrofracturing, forming an angular in situ breccia. Rounding of rock fragments is less common in the mesothermal regime, as the formational event is brief. If boiling occurs, methane and hydrogen sulfide may be lost to the steam phase, and ore may precipitate. Mesothermal deposits are often mined for gold. Ornamental uses For thousands of years, the striking visual appearance of breccias has made them a popular sculptural and architectural material. Breccia was used for column bases in the Minoan palace of Knossos on Crete in about 1800 BC. Breccia was used on a limited scale by the ancient Egyptians; one of the best-known examples is the statue of the goddess Tawaret in the British Museum. Breccia was regarded by the Romans as an especially precious stone and was often used in high-profile public buildings. Many types of marble are brecciated, such as Breccia Oniciata.
Physical sciences
Petrology
null
54137
https://en.wikipedia.org/wiki/Methane%20clathrate
Methane clathrate
Methane clathrate (CH4·5.75H2O) or (4CH4·23H2O), also called methane hydrate, hydromethane, methane ice, fire ice, natural gas hydrate, or gas hydrate, is a solid clathrate compound (more specifically, a clathrate hydrate) in which a large amount of methane is trapped within a crystal structure of water, forming a solid similar to ice. Originally thought to occur only in the outer regions of the Solar System, where temperatures are low and water ice is common, significant deposits of methane clathrate have been found under sediments on the ocean floors of the Earth (around 1,100 m below sea level). Methane hydrate is formed when hydrogen-bonded water and methane gas come into contact at high pressures and low temperatures in oceans. Methane clathrates are common constituents of the shallow marine geosphere and they occur in deep sedimentary structures and form outcrops on the ocean floor. Methane hydrates are believed to form by the precipitation or crystallisation of methane migrating from deep along geological faults. Precipitation occurs when the methane comes in contact with water within the sea bed subject to temperature and pressure. In 2008, research on Antarctic Vostok Station and EPICA Dome C ice cores revealed that methane clathrates were also present in deep Antarctic ice cores and record a history of atmospheric methane concentrations, dating to 800,000 years ago. The ice-core methane clathrate record is a primary source of data for global warming research, along with oxygen and carbon dioxide. Methane clathrates used to be considered a potential source of abrupt climate change, following the clathrate gun hypothesis. In this scenario, heating causes catastrophic melting and breakdown of primarily undersea hydrates, leading to a massive release of methane and accelerating warming. Current research shows that hydrates react very slowly to warming, and that it is very difficult for methane to reach the atmosphere after dissociation. Some active seeps instead act as a minor carbon sink: most of the released methane stays dissolved underwater, where it supports methanotroph communities, and the area around the seep also becomes more suitable for phytoplankton. As a result, methane hydrates are no longer considered one of the tipping points in the climate system, and according to the IPCC Sixth Assessment Report, no "detectable" impact on global temperatures will occur in this century through this mechanism. Over several millennia, a more substantial response may still be seen. General Methane hydrates were discovered in Russia in the 1960s, and studies on extracting gas from them emerged at the beginning of the 21st century. Structure and composition The nominal methane clathrate hydrate composition is (CH4)4(H2O)23, or 1 mole of methane for every 5.75 moles of water, corresponding to 13.4% methane by mass, although the actual composition is dependent on how many methane molecules fit into the various cage structures of the water lattice. The observed density is around 0.9 g/cm3, which means that methane hydrate will float to the surface of the sea or of a lake unless it is bound in place by being formed in or anchored to sediment. One litre of fully saturated methane clathrate solid would therefore contain about 120 grams of methane (or around 169 litres of methane gas at 0 °C and 1 atm), or one cubic metre of methane clathrate releases about 160 cubic metres of gas.
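A minimal sketch checking the composition figures quoted above (assumes ideal-gas behaviour at 0 °C and 1 atm and the stated density of 0.9 g/cm3):

```python
# CH4.5.75H2O: methane mass fraction, grams of methane per litre of solid
# hydrate, and litres of gas released at 0 degC and 1 atm.
M_CH4, M_H2O = 16.04, 18.02   # molar masses, g/mol
MOLAR_VOLUME = 22.414         # L/mol for an ideal gas at 0 degC, 1 atm
DENSITY_G_PER_L = 900.0       # 0.9 g/cm3 expressed per litre of hydrate

mass_fraction = M_CH4 / (M_CH4 + 5.75 * M_H2O)
grams_ch4 = DENSITY_G_PER_L * mass_fraction
litres_gas = grams_ch4 / M_CH4 * MOLAR_VOLUME

print(f"methane mass fraction: {mass_fraction:.1%}")        # ~13.4%
print(f"methane per litre of hydrate: {grams_ch4:.0f} g")   # ~120 g
print(f"gas released per litre: {litres_gas:.0f} L")        # ~169 L
```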
Methane forms a "structure-I" hydrate with two small dodecahedral (12-faced) and six larger tetradecahedral (14-faced) water cages per unit cell. (Because of sharing of water molecules between cages, there are only 46 water molecules per unit cell.) This compares with a hydration number of 20 for methane in aqueous solution. A methane clathrate MAS NMR spectrum recorded at 275 K and 3.1 MPa shows a peak for each cage type and a separate peak for gas phase methane. In 2003, a clay-methane hydrate intercalate was synthesized in which a methane hydrate complex was introduced at the interlayer of a sodium-rich montmorillonite clay. The upper temperature stability of this phase is similar to that of structure-I hydrate. Natural deposits Methane clathrates are restricted to the shallow lithosphere (i.e. < 2,000 m depth). Furthermore, necessary conditions are found only in continental sedimentary rocks in polar regions where average surface temperatures are less than 0 °C, or in oceanic sediment at water depths greater than 300 m where the bottom water temperature is around 2 °C. In addition, deep freshwater lakes may host gas hydrates as well, e.g. the freshwater Lake Baikal in Siberia. Continental deposits have been located in Siberia and Alaska in sandstone and siltstone beds at less than 800 m depth. Oceanic deposits seem to be widespread in the continental shelf and can occur within the sediments at depth or close to the sediment-water interface. They may cap even larger deposits of gaseous methane. Oceanic Methane hydrate can occur in various forms like massive, dispersed within pore spaces, nodules, veins/fractures/faults, and layered horizons. Generally, it is unstable at standard pressure and temperature conditions, and 1 m3 of methane hydrate upon dissociation yields about 164 m3 of methane and 0.87 m3 of freshwater. There are two distinct types of oceanic deposits. The most common is dominated (> 99%) by methane contained in a structure I clathrate and generally found at depth in the sediment. Here, the methane is isotopically light (δ13C < −60‰), which indicates that it is derived from the microbial reduction of CO2. The clathrates in these deep deposits are thought to have formed in situ from the microbially produced methane since the δ13C values of clathrate and surrounding dissolved methane are similar. However, it is also thought that freshwater used in the pressurization of oil and gas wells in permafrost and along the continental shelves worldwide combines with natural methane to form clathrate at depth and pressure, since methane hydrates are more stable in freshwater than in saltwater. Local variations may be widespread, since the act of forming hydrate, which extracts pure water from saline formation waters, can often lead to local and potentially significant increases in formation water salinity. Hydrates normally exclude the salt in the pore fluid from which they form. Thus, they exhibit high electric resistivity like ice, and sediments containing hydrates have higher resistivity than sediments without gas hydrates (Judge [67]). These deposits are located within a mid-depth zone around 300–500 m thick in the sediments (the gas hydrate stability zone, or GHSZ) where they coexist with methane dissolved in the fresh, not salt, pore-waters. Above this zone methane is only present in its dissolved form at concentrations that decrease towards the sediment surface. Below it, methane is gaseous.
At Blake Ridge on the Atlantic continental rise, the GHSZ started at 190 m depth and continued to 450 m, where it reached equilibrium with the gaseous phase. Measurements indicated that methane occupied 0–9% by volume in the GHSZ, and ~12% in the gaseous zone. In the less common second type found near the sediment surface, some samples have a higher proportion of longer-chain hydrocarbons (< 99% methane) contained in a structure II clathrate. Carbon from this type of clathrate is isotopically heavier (δ13C is −29 to −57 ‰) and is thought to have migrated upwards from deep sediments, where methane was formed by thermal decomposition of organic matter. Examples of this type of deposit have been found in the Gulf of Mexico and the Caspian Sea. Some deposits have characteristics intermediate between the microbially and thermally sourced types and are considered formed from a mixture of the two. The methane in gas hydrates is dominantly generated by microbial consortia degrading organic matter in low oxygen environments, with the methane itself produced by methanogenic archaea. Organic matter in the uppermost few centimeters of sediments is first attacked by aerobic bacteria, generating CO2, which escapes from the sediments into the water column. Below this region of aerobic activity, anaerobic processes take over, including, successively with depth, the microbial reduction of nitrite/nitrate, then metal oxides, and then sulfate (reduced to sulfide). Finally, methanogenesis becomes a dominant pathway for organic carbon remineralization. If the sedimentation rate is low (about 1 cm/yr), the organic carbon content is low (about 1%), and oxygen is abundant, aerobic bacteria can use up all the organic matter in the sediments faster than oxygen is depleted, so lower-energy electron acceptors are not used. But where sedimentation rates and the organic carbon content are high, which is typically the case on continental shelves and beneath western boundary current upwelling zones, the pore water in the sediments becomes anoxic at depths of only a few centimeters or less. In such organic-rich marine sediments, sulfate becomes the most important terminal electron acceptor due to its high concentration in seawater. However, it too is depleted by a depth of centimeters to meters. Below this, methane is produced. This production of methane is a rather complicated process, requiring a highly reducing environment (Eh −350 to −450 mV) and a pH between 6 and 8, as well as complex syntrophic consortia of different varieties of archaea and bacteria. However, it is only archaea that actually emit methane. In some regions (e.g., Gulf of Mexico, Joetsu Basin) methane in clathrates may be at least partially derived from thermal degradation of organic matter (e.g. petroleum generation), with oil even forming an exotic component within the hydrate itself that can be recovered when the hydrate is dissociated. The methane in clathrates typically has a biogenic isotopic signature and highly variable δ13C (−40 to −100‰), with an average of about −65‰. Below the zone of solid clathrates, large volumes of methane may form bubbles of free gas in the sediments. The presence of clathrates at a given site can often be determined by observation of a "bottom simulating reflector" (BSR), which is a seismic reflection at the sediment to clathrate stability zone interface caused by the unequal densities of normal sediments and those laced with clathrates.
Gas hydrate pingos have been discovered in the Barents Sea in the Arctic Ocean. Methane is bubbling from these dome-like structures, with some of these gas flares extending close to the sea surface. Reservoir size The size of the oceanic methane clathrate reservoir is poorly known, and estimates of its size decreased by roughly an order of magnitude per decade since it was first recognized that clathrates could exist in the oceans during the 1960s and 1970s. The highest estimates (e.g. 3 × 10^18 m3) were based on the assumption that fully dense clathrates could litter the entire floor of the deep ocean. Improvements in our understanding of clathrate chemistry and sedimentology have revealed that hydrates form in only a narrow range of depths (continental shelves), at only some locations in the range of depths where they could occur (10–30% of the gas hydrate stability zone), and typically are found at low concentrations (0.9–1.5% by volume) at sites where they do occur. Recent estimates constrained by direct sampling suggest the global inventory occupies between 1 × 10^15 and 5 × 10^15 cubic metres. This estimate, corresponding to 500–2500 gigatonnes carbon (Gt C), is smaller than the 5000 Gt C estimated for all other geo-organic fuel reserves but substantially larger than the ~230 Gt C estimated for other natural gas sources. The permafrost reservoir has been estimated at about 400 Gt C in the Arctic, but no estimates have been made of possible Antarctic reservoirs. These are large amounts. In comparison, the total carbon in the atmosphere is around 800 gigatons (see Carbon: Occurrence). These modern estimates are notably smaller than the 10,000 to 11,000 Gt C (2 × 10^16 m3) proposed by previous researchers as a reason to consider clathrates to be a geo-organic fuel resource (MacDonald 1990, Kvenvolden 1998). Lower abundances of clathrates do not rule out their economic potential, but a lower total volume and apparently low concentration at most sites do suggest that only a limited percentage of clathrate deposits may provide an economically viable resource. Continental Methane clathrates in continental rocks are trapped in beds of sandstone or siltstone at depths of less than 800 m. Sampling indicates they are formed from a mix of thermally and microbially derived gas from which the heavier hydrocarbons were later selectively removed. These occur in Alaska, Siberia, and Northern Canada. In 2008, Canadian and Japanese researchers extracted a constant stream of natural gas from a test project at the Mallik gas hydrate site in the Mackenzie River delta. This was the second such drilling at Mallik: the first took place in 2002 and used heat to release methane. In the 2008 experiment, researchers were able to extract gas by lowering the pressure, without heating, requiring significantly less energy. The Mallik gas hydrate field was first discovered by Imperial Oil in 1971–1972. Commercial use Economic deposits of hydrate are termed natural gas hydrate (NGH) and store 164 m3 of methane and 0.8 m3 of water per 1 m3 of hydrate. Most NGH is found beneath the seafloor (95%) where it exists in thermodynamic equilibrium. The sedimentary methane hydrate reservoir probably contains 2–10 times the currently known reserves of conventional natural gas. This represents a potentially important future source of hydrocarbon fuel. However, in the majority of sites deposits are thought to be too dispersed for economic extraction.
Other problems facing commercial exploitation are detection of viable reserves and development of the technology for extracting methane gas from the hydrate deposits. In August 2006, China announced plans to spend 800 million yuan (US$100 million) over the next 10 years to study natural gas hydrates. A potentially economic reserve in the Gulf of Mexico may contain a substantial volume of gas. Bjørn Kvamme and Arne Graue at the Institute for Physics and Technology at the University of Bergen have developed a method for injecting CO2 into hydrates and reversing the process, thereby extracting CH4 by direct exchange. The University of Bergen's method is being field tested by ConocoPhillips and state-owned Japan Oil, Gas and Metals National Corporation (JOGMEC), and partially funded by the U.S. Department of Energy. The project had reached the injection phase and was analyzing the resulting data by March 12, 2012. On March 12, 2013, JOGMEC researchers announced that they had successfully extracted natural gas from frozen methane hydrate. In order to extract the gas, specialized equipment was used to drill into and depressurize the hydrate deposits, causing the methane to separate from the ice. The gas was then collected and piped to the surface, where it was ignited to prove its presence. According to an industry spokesperson, "It [was] the world's first offshore experiment producing gas from methane hydrate". Previously, gas had been extracted from onshore deposits, but never from offshore deposits, which are much more common. The hydrate field from which the gas was extracted is located off central Japan in the Nankai Trough, under the sea. A spokesperson for JOGMEC remarked "Japan could finally have an energy source to call its own". Marine geologist Mikio Satoh remarked "Now we know that extraction is possible. The next step is to see how far Japan can get costs down to make the technology economically viable." Japan estimates that there are at least 1.1 trillion cubic meters of methane trapped in the Nankai Trough, enough to meet the country's needs for more than ten years. Both Japan and China announced in May 2017 a breakthrough for mining methane clathrates, when they extracted methane from hydrates in the South China Sea. China described the result as a breakthrough; Praveen Linga from the Department of Chemical and Biomolecular Engineering at the National University of Singapore agreed: "Compared with the results we have seen from Japanese research, the Chinese scientists have managed to extract much more gas in their efforts". Industry consensus is that commercial-scale production remains years away. Environmental concerns Experts caution that environmental impacts are still being investigated and that methane, a greenhouse gas with around 86 times as much global warming potential as carbon dioxide over a 20-year period (GWP20), could potentially escape into the atmosphere if something goes wrong. Furthermore, while cleaner than coal, burning natural gas also creates carbon dioxide emissions. Hydrates in natural gas processing Routine operations Methane clathrates (hydrates) are also commonly formed during natural gas production operations, when liquid water is condensed in the presence of methane at high pressure. It is known that larger hydrocarbon molecules like ethane and propane can also form hydrates, although longer molecules (butanes, pentanes) cannot fit into the water cage structure and tend to destabilise the formation of hydrates.
Once formed, hydrates can block pipelines and processing equipment. They are generally then removed by reducing the pressure, heating them, or dissolving them by chemical means (methanol is commonly used). Removal of the hydrates must be carefully controlled, because the solid hydrate can undergo a phase transition and release water and gaseous methane at a high rate when the pressure is reduced. The rapid release of methane gas in a closed system can result in a rapid increase in pressure. It is generally preferable to prevent hydrates from forming or blocking equipment. This is commonly achieved by removing water, or by the addition of ethylene glycol (MEG) or methanol, which act to depress the temperature at which hydrates will form. In recent years, other forms of hydrate inhibitor have been developed, such as kinetic hydrate inhibitors (which increase the sub-cooling required for hydrates to form, at the expense of an increased hydrate formation rate) and anti-agglomerants, which do not prevent hydrates from forming but do prevent them from sticking together and blocking equipment. Effect of hydrate phase transition during deep water drilling When drilling in oil- and gas-bearing formations submerged in deep water, the reservoir gas may flow into the well bore and form gas hydrates owing to the low temperatures and high pressures found during deep water drilling. The gas hydrates may then flow upward with drilling mud or other discharged fluids. When the hydrates rise, the pressure in the annulus decreases and the hydrates dissociate into gas and water. The rapid gas expansion ejects fluid from the well, reducing the pressure further, which leads to more hydrate dissociation and further fluid ejection. The resulting violent expulsion of fluid from the annulus is one potential cause of, or contributor to, a "kick". (Kicks, which can cause blowouts, typically do not involve hydrates: see Blowout: formation kick.) Measures which reduce the risk of hydrate formation include: high flow rates, which limit the time for hydrate formation in a volume of fluid, thereby reducing the kick potential; careful measuring of line flow to detect incipient hydrate plugging; additional care in measuring when gas production rates are low and the possibility of hydrate formation is higher than at relatively high gas flow rates; and monitoring of the well casing after it is "shut in" (isolated), which may indicate hydrate formation. Following "shut in", the pressure rises while gas diffuses through the reservoir to the bore hole; the rate of pressure rise is reduced while hydrates are forming. Additions of energy (e.g., the energy released by setting cement used in well completion) can raise the temperature and convert hydrates to gas, producing a "kick". Blowout recovery At sufficient depths, methane complexes directly with water to form methane hydrates, as was observed during the Deepwater Horizon oil spill in 2010. BP engineers developed and deployed a subsea oil recovery system over oil spilling from a deepwater oil well below sea level to capture escaping oil. This involved placing a dome over the largest of the well leaks and piping the collected oil to a storage vessel on the surface. This option had the potential to collect some 85% of the leaking oil but was previously untested at such depths.
BP deployed the system on May 7–8, but it failed due to the buildup of methane clathrate inside the dome; with their low density of approximately 0.9 g/cm3, the methane hydrates accumulated in the dome, adding buoyancy and obstructing flow. Methane clathrates and climate change Natural gas hydrates for gas storage and transportation Since methane clathrates are stable at a higher temperature than liquefied natural gas (LNG) (−20 vs −162 °C), there is some interest in converting natural gas into clathrates (solidified natural gas, or SNG) rather than liquefying it when transporting it by seagoing vessels. A significant advantage would be that producing natural gas hydrate (NGH) from natural gas at the terminal would require a smaller refrigeration plant and less energy than producing LNG would. Offsetting this, for 100 tonnes of methane transported, 750 tonnes of methane hydrate would have to be transported; since this would require a ship of 7.5 times greater displacement, or more ships, it is unlikely to prove economically feasible. Recently, methane hydrate has received considerable interest for large-scale stationary storage applications because of the very mild storage conditions achievable with the inclusion of tetrahydrofuran (THF) as a co-guest. Although the inclusion of tetrahydrofuran slightly reduces the gas storage capacity, a recent study demonstrated that the hydrates remain stable for several months at −2 °C and atmospheric pressure. A recent study has also demonstrated that SNG can be formed directly with seawater instead of pure water in combination with THF.
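The transport penalty described above (750 tonnes of hydrate shipped per 100 tonnes of methane, and roughly 7.5 times the displacement) can be made concrete with a short calculation. In the sketch below the hydrate figures come from the text, while the LNG density of about 0.45 tonnes per cubic metre is an assumed typical value used purely for comparison.

```python
# Compare the shipped mass and volume needed to move 100 t of methane as
# solidified natural gas (SNG, i.e. hydrate) versus as LNG.

METHANE_T = 100.0                 # tonnes of methane delivered
HYDRATE_MASS_T = 750.0            # tonnes of hydrate per 100 t methane (from the text)
HYDRATE_DENSITY_T_PER_M3 = 0.9    # from the text
LNG_DENSITY_T_PER_M3 = 0.45       # assumption: LNG treated as pure methane

hydrate_volume = HYDRATE_MASS_T / HYDRATE_DENSITY_T_PER_M3   # ~833 m^3
lng_volume = METHANE_T / LNG_DENSITY_T_PER_M3                # ~222 m^3

print(f"SNG: {HYDRATE_MASS_T:.0f} t and {hydrate_volume:.0f} m^3 shipped")
print(f"LNG: {METHANE_T:.0f} t and {lng_volume:.0f} m^3 shipped")
print(f"Mass penalty of SNG over delivered methane: {HYDRATE_MASS_T / METHANE_T:.1f}x")
```

On these assumptions it is the shipped mass and volume, rather than the refrigeration savings, that dominate the unfavourable economics noted above.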
Breadfruit
Breadfruit (Artocarpus altilis) is a species of flowering tree in the mulberry and jackfruit family (Moraceae) believed to be a domesticated descendant of Artocarpus camansi originating in New Guinea, the Maluku Islands, and the Philippines. It was initially spread to Oceania via the Austronesian expansion. It was further spread to other tropical regions of the world during the Colonial Era. British and French navigators introduced a few Polynesian seedless varieties to Caribbean islands during the late 18th century. Today it is grown in 90 countries throughout South and Southeast Asia, the Pacific Ocean, the Caribbean, Central America and Africa. Its name is derived from the texture of the moderately ripe fruit when cooked, similar to freshly baked bread and having a potato-like flavor. The trees have been widely planted in tropical regions, including lowland Central America, northern South America, and the Caribbean. In addition to the fruit serving as a staple food in many cultures, the light, sturdy timber of breadfruit has been used for outriggers, ships, and houses in the tropics. Breadfruit is closely related to Artocarpus camansi (breadnut or seeded breadfruit) of New Guinea, the Maluku Islands, and the Philippines, Artocarpus blancoi (tipolo or antipolo) of the Philippines, and Artocarpus mariannensis (dugdug) of Micronesia, all of which are sometimes also referred to as "breadfruit". It is also closely related to the jackfruit. Description Breadfruit trees grow to a height of . The large and thick leaves are deeply cut into pinnate lobes. All parts of the tree yield latex, which is useful for boat caulking. The trees are monoecious, with male and female flowers growing on the same tree. The male flowers emerge first, followed shortly afterward by the female flowers. The latter grow into capitula, which are capable of pollination just three days later. Pollination occurs mainly by fruit bats, but cultivated varieties produce fruit without pollination. The compound, false fruit develops from the swollen perianth, and originates from 1,500 to 2,000 flowers visible on the skin of the fruit as hexagon-like disks. Breadfruit is one of the highest-yielding food plants, with a single tree producing up to 200 or more grapefruit-sized fruits per season, requiring limited care. In the South Pacific, the trees yield 50 to 150 fruits per year, usually round, oval or oblong weighing . Productivity varies between wet and dry areas. Studies in Barbados indicate a reasonable potential of . The ovoid fruit has a rough surface, and each fruit is divided into many achenes, each achene surrounded by a fleshy perianth and growing on a fleshy receptacle. Most selectively bred cultivars have seedless fruit, whereas seeded varieties are grown mainly for their edible seeds. Breadfruit is usually propagated using root cuttings. Breadfruit is closely related to the breadnut. It is similar in appearance to its relative of the same genus, the jackfruit (Artocarpus heterophyllus). The closely related Artocarpus camansi can be distinguished from A. altilis by having spinier fruits with numerous seeds. Artocarpus mariannensis can be distinguished by having dark green elongated fruits with darker yellow flesh, as well as entire or shallowly lobed leaves. Propagation Breadfruit is propagated mainly by seeds, though seedless breadfruit can be propagated by transplanting suckers that grow off the surface roots of the tree. 
The roots can be purposefully injured to induce the growth of suckers, which are then separated from the root and planted in a pot or directly transplanted into the ground. Pruning also induces sucker growth. Sucker cuttings are placed in plastic bags containing a mixture of soil, peat and sand, and kept in the shade while moistened with liquid fertilizer. When roots are developed, the transplant is put in full sun until time for planting in the orchard. For large-scale propagation, root cuttings are preferred, using segments about thick and long. Rooting may take up to 5 months to develop, with the young trees ready for planting when they are high. Etymology and common names The term breadfruit was first used in the 17th century to describe the bread-like texture of the fruit when baked. Breadfruit has hundreds of varieties and numerous common names varying by its geographic distribution. Taxonomy According to DNA fingerprinting studies, the wild seeded ancestor of breadfruit is the breadnut (Artocarpus camansi) which is native to New Guinea, the Maluku Islands, and the Philippines. It was one of the canoe plants spread by Austronesian voyagers around 3,000 years ago into Micronesia, Melanesia, and Polynesia, where it was not native. A. camansi was domesticated and selectively bred in Polynesia, giving rise to the mostly seedless Artocarpus altilis. Micronesian breadfruit also show evidence of hybridization with the native Artocarpus mariannensis, while most Polynesian and Melanesian cultivars do not. This indicates that Micronesia was initially colonized separately from Polynesia and Melanesia through two different migration events which later came into contact with each other in eastern Micronesia. Distribution and habitat Breadfruit is an equatorial lowland species. It has been spread from its Pacific source to many tropical regions. In 1769, Joseph Banks was stationed in Tahiti as part of the expedition commanded by Captain James Cook. The late-18th-century quest for cheap, high-energy food sources for slaves in British colonies prompted colonial administrators and plantation owners to call for breadfruit to be brought to the Caribbean. As president of the Royal Society, Banks provided a cash bounty and gold medal for success in this endeavor, and successfully lobbied for a British Naval expedition. After an unsuccessful voyage to the South Pacific to collect the plants as commander of , in 1791, William Bligh commanded a second expedition with and , which collected seedless breadfruit plants in Tahiti and transported these to St. Helena in the Atlantic and St. Vincent and Jamaica in the West Indies. The plant grows best below elevations of , but is found at elevations of . Preferred soils are neutral to alkaline (pH of 6.1–7.4) and either sand, sandy loam, loam or sandy clay loam. Breadfruit is able to grow in coral sands and saline soils. The breadfruit is ultra-tropical, requiring a temperature range of and an annual rainfall of . Nutrition Breadfruit is 71% water, 27% carbohydrates, 1% protein and contains negligible fat (table). In a reference amount of , raw breadfruit supplies 103 calories, is a rich source of vitamin C (32% of the Daily Value, DV), and provides a moderate source of potassium (16% DV), with no other nutrients in significant content. Uses Food Breadfruit is a staple food in many tropical regions. Most breadfruit varieties produce fruit throughout the year. Both ripe and unripe fruit have culinary uses; unripe breadfruit is cooked before consumption. 
Before being eaten, the fruit are roasted, baked, fried or boiled. When cooked, the taste of moderately ripe breadfruit is described as potato-like, or similar to freshly baked bread. One breadfruit tree can produce each season. Because breadfruit trees usually produce large crops at certain times of the year, the preservation of harvested fruit is an issue. One traditional preservation technique known throughout Oceania is to bury peeled and washed fruits in a leaf-lined pit where they ferment over several weeks and produce a sour, sticky paste. Stored in this way, the product may endure a year or more. Some pits are reported to have produced edible contents more than 20 years after burial. Remnants of pit-like formations with stone scattered around (presumed to line them) are often clues indicating prehistoric settlement to archaeologists studying pre-contact history of French Polynesia. In addition to being edible raw, breadfruit can be ground into flour and the seeds can be cooked for consumption. Southeast Asia, Pacific Islands and Madagascar The seedless breadfruit is found in Brunei, Indonesia and Malaysia, where it is called . It is commonly made into fritters and eaten as snacks. Breadfruit fritters are sold as local street food. In the Philippines, breadfruit is known as in Tagalog and in the Visayan languages. It is also called (also spelled ), along with the closely related Artocarpus camansi, and the endemic Artocarpus blancoi ( or ). All three species, as well as the closely related jackfruit, are commonly used much in the same way in savory dishes. The immature fruits are most commonly eaten as (cooked with coconut milk). In the Hawaiian staple food called , the traditional ingredient of mashed taro root can be replaced by, or augmented with, mashed breadfruit ( in Hawaiian). The resulting "breadfruit poi" is called . South Asia In Sri Lanka, it is cooked as a curry using coconut milk and spices (which becomes a side dish) or boiled. Boiled breadfruit is a famous main meal. It is often consumed with scraped coconut or coconut sambol, made of scraped coconut, red chili powder and salt mixed with a dash of lime juice. A traditional sweet snack made of finely sliced, sun-dried breadfruit chips deep-fried in coconut oil and dipped in heated treacle or sugar syrup is known as rata del petti. In India, fritters of breadfruit, called jeev kadge phodi in Konkani or kadachakka varuthath in Malayalam, are a local delicacy in coastal Karnataka and Kerala. In Seychelles, it was traditionally eaten as a substitute for rice, as an accompaniment to the mains. It would either be consumed boiled (friyapen bwi) or grilled (friyapen griye), where it would be put whole in the wood fire used for cooking the main meal and then taken out when ready. It is also eaten as a dessert, called ladob friyapen, where it is boiled in coconut milk, sugar, vanilla, cinnamon and a pinch of salt. Caribbean and Latin America In Belize, the Mayan people call it masapan. In Puerto Rico, breadfruit is called panapén or pana, for short, although the name pana is often used to refer to breadnut, seeds of which have traditionally been boiled, peeled and eaten whole. In some inland regions it is also called mapén and used to make pasteles and alcapurrias. Breadfruit is often served boiled with a mixture of sauteed bacalao (salted cod fish), olive oil and onions. Mostly as tostones where about 1 inch chunks are fried, lighty flattened and fried again. 
Mofongo de panapén is fried breadfruit mashed with olive oil, garlic, broth, and chicharrón. Rellenos de panapén are the breadfruit version of papa rellena. A chutney-like dipping sauce is made from boiled ripe breadfruit with spices, sesame seeds, herbs, lentils, coconut milk, and fruit. Both ripe and unripe fruit are boiled together and mashed with milk and butter to make pastelón de panapén, a dish similar to lasagna. Ripe breadfruit is used in desserts such as flan de pana (breadfruit custard) and cazuela, a crustless pie with ripe breadfruit, spices, raisins, coconut milk, and sweet potatoes. Breadfruit flour is sold all over Puerto Rico and used for making bread, pastries, cookies, pancakes, waffles, crepes, and almojábana. In the Dominican Republic, it is called buen pan or "good bread". Breadfruit is not popular in Dominican cookery and is used mainly for feeding pigs. In Barbados, breadfruit is boiled with salted meat and mashed with butter to make breadfruit coucou. It is usually eaten with saucy meat dishes. In Haiti, steamed breadfruit is mashed to make a dish called tonmtonm, which is eaten with a sauce made with okra and other ingredients, such as fish and crab. In Trinidad and Tobago, breadfruit is boiled, then fried and eaten with saucy meat dishes like curried duck. In Jamaica, breadfruit is boiled in soups or roasted on the stove top, in the oven or on wood coal. It is eaten with the national dish, ackee and salt fish. The ripe fruit is used in salads or fried as a side dish. In St. Vincent and the Grenadines it is eaten boiled in soups, roasted and fried. When roasted and served with fried jackfish, it is the country's national dish. The ripe fruit is used as a base to make drinks, cakes and ice cream. Timber and other uses Breadfruit was widely used in a variety of ways among Pacific Islanders. Its lightweight wood (specific gravity of 0.27) is resistant to termites and shipworms, so it is used as timber for structures and outrigger canoes. Its wood pulp can also be used to make paper, called breadfruit tapa. The wood of the breadfruit tree was one of the most valuable timbers in the construction of traditional houses in Samoan architecture. Breadfruit contains phytochemicals having potential as an insect repellent. The parts of the fruit that are discarded can be used to feed livestock. The leaves of breadfruit trees can also be browsed by cattle. Breadfruit, however, exudes latex upon harvesting; the sap adheres to the surface of the fruit, staining the epicarp. Proper methods of breadfruit harvesting usually include draining the latex and disposing of it. The sticky white sap, or latex, is present in all parts of the breadfruit tree and has been used for glue, caulk, and even chewing gum. Native Hawaiians used its sticky latex to trap birds, whose feathers were made into cloaks. In culture On Puluwat in the Caroline Islands, in the context of sacred yitang lore, breadfruit (poi) is a figure of speech for knowledge. This lore is organized into five categories: war, magic, meetings, navigation, and breadfruit. According to an etiological Hawaiian myth, the breadfruit originated from the sacrifice of the war god Kū. After deciding to live secretly among mortals as a farmer, Kū married and had children. He and his family lived happily until a famine seized their island. When he could no longer bear to watch his children suffer, Kū told his wife that he could deliver them from starvation, but to do so he would have to leave them.
Reluctantly she agreed, and at her word, Kū descended into the ground right where he had stood until only the top of his head was visible. His family waited around the spot he had last been, day and night, watering it with their tears until suddenly, a small green shoot appeared where Kū had stood. Quickly, the shoot grew into a tall and leafy tree that was laden with heavy breadfruits that Kū's family and neighbors gratefully ate, joyfully saved from starvation. Though they are widely distributed throughout the Pacific, many breadfruit hybrids and cultivars are seedless or otherwise biologically incapable of naturally dispersing long distances. Therefore, it is clear that humans aided distribution of the plant in the Pacific, specifically prehistoric groups who colonized the Pacific Islands. To investigate the patterns of human migration throughout the Pacific, scientists have used molecular dating of breadfruit hybrids and cultivars in concert with anthropological data. Results support the west-to-east migration hypothesis, in which the Lapita people are thought to have traveled from Melanesia to numerous Polynesian islands. The world's largest collection of breadfruit varieties was established by botanist Diane Ragone, from over 20 years' travel to 50 Pacific islands, on a plot outside of Hana, on the isolated east coast of Maui (Hawaii). Gallery
Bremsstrahlung
In particle physics, (; ) is electromagnetic radiation produced by the deceleration of a charged particle when deflected by another charged particle, typically an electron by an atomic nucleus. The moving particle loses kinetic energy, which is converted into radiation (i.e., photons), thus satisfying the law of conservation of energy. The term is also used to refer to the process of producing the radiation. has a continuous spectrum, which becomes more intense and whose peak intensity shifts toward higher frequencies as the change of the energy of the decelerated particles increases. Broadly speaking, or braking radiation is any radiation produced due to the acceleration (positive or negative) of a charged particle, which includes synchrotron radiation (i.e., photon emission by a relativistic particle), cyclotron radiation (i.e. photon emission by a non-relativistic particle), and the emission of electrons and positrons during beta decay. However, the term is frequently used in the more narrow sense of radiation from electrons (from whatever source) slowing in matter. Bremsstrahlung emitted from plasma is sometimes referred to as free–free radiation. This refers to the fact that the radiation in this case is created by electrons that are free (i.e., not in an atomic or molecular bound state) before, and remain free after, the emission of a photon. In the same parlance, bound–bound radiation refers to discrete spectral lines (an electron "jumps" between two bound states), while free–bound radiation refers to the radiative combination process, in which a free electron recombines with an ion. This article uses SI units, along with the scaled single-particle charge . Classical description If quantum effects are negligible, an accelerating charged particle radiates power as described by the Larmor formula and its relativistic generalization. Total radiated power The total radiated power is where (the velocity of the particle divided by the speed of light), is the Lorentz factor, is the vacuum permittivity, signifies a time derivative of and is the charge of the particle. In the case where velocity is parallel to acceleration (i.e., linear motion), the expression reduces to where is the acceleration. For the case of acceleration perpendicular to the velocity (), for example in synchrotrons, the total power is Power radiated in the two limiting cases is proportional to or . Since , we see that for particles with the same energy the total radiated power goes as or , which accounts for why electrons lose energy to bremsstrahlung radiation much more rapidly than heavier charged particles (e.g., muons, protons, alpha particles). This is the reason a TeV energy electron-positron collider (such as the proposed International Linear Collider) cannot use a circular tunnel (requiring constant acceleration), while a proton-proton collider (such as the Large Hadron Collider) can utilize a circular tunnel. The electrons lose energy due to bremsstrahlung at a rate times higher than protons do. Angular distribution The most general formula for radiated power as a function of angle is: where is a unit vector pointing from the particle towards the observer, and is an infinitesimal solid angle. In the case where velocity is parallel to acceleration (for example, linear motion), this simplifies to where is the angle between and the direction of observation . Simplified quantum-mechanical description The full quantum-mechanical treatment of bremsstrahlung is very involved. 
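Before turning to the quantum-mechanical treatment, it is worth recording standard textbook (Liénard) forms of the classical results referred to in the preceding section, written with the symbols defined there: charge q, acceleration a, normalized velocity β, Lorentz factor γ, and vacuum permittivity ε0. These are quoted as well-known general results, not necessarily the exact expressions used by the sources cited in this article.

```latex
% Total power radiated by an accelerated charge (relativistic Larmor / Lienard formula)
P = \frac{q^{2}\gamma^{6}}{6\pi\varepsilon_{0}c}
    \left[\dot{\boldsymbol{\beta}}^{2}
    - \left(\boldsymbol{\beta}\times\dot{\boldsymbol{\beta}}\right)^{2}\right]

% Velocity parallel to acceleration (linear motion):
P_{\parallel} = \frac{q^{2}a^{2}\gamma^{6}}{6\pi\varepsilon_{0}c^{3}}

% Velocity perpendicular to acceleration (e.g. circular motion in a synchrotron):
P_{\perp} = \frac{q^{2}a^{2}\gamma^{4}}{6\pi\varepsilon_{0}c^{3}}

% Angular distribution for velocity parallel to acceleration, with theta the
% angle between the velocity and the direction of observation:
\frac{dP}{d\Omega} = \frac{q^{2}a^{2}}{16\pi^{2}\varepsilon_{0}c^{3}}
                     \frac{\sin^{2}\theta}{\left(1-\beta\cos\theta\right)^{5}}
```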
The "vacuum case" of the interaction of one electron, one ion, and one photon, using the pure Coulomb potential, has an exact solution that was probably first published by Arnold Sommerfeld in 1931. This analytical solution involves complicated mathematics, and several numerical calculations have been published, such as by Karzas and Latter. Other approximate formulas have been presented, such as in recent work by Weinberg and Pradler and Semmelrock. This section gives a quantum-mechanical analog of the prior section, but with some simplifications to illustrate the important physics. We give a non-relativistic treatment of the special case of an electron of mass , charge , and initial speed decelerating in the Coulomb field of a gas of heavy ions of charge and number density . The emitted radiation is a photon of frequency and energy . We wish to find the emissivity which is the power emitted per (solid angle in photon velocity space * photon frequency), summed over both transverse photon polarizations. We express it as an approximate classical result times the free−free emission Gaunt factor gff accounting for quantum and other corrections: if , that is, the electron does not have enough kinetic energy to emit the photon. A general, quantum-mechanical formula for exists but is very complicated, and usually is found by numerical calculations. We present some approximate results with the following additional assumptions: Vacuum interaction: we neglect any effects of the background medium, such as plasma screening effects. This is reasonable for photon frequency much greater than the plasma frequency with the plasma electron density. Note that light waves are evanescent for and a significantly different approach would be needed. Soft photons: , that is, the photon energy is much less than the initial electron kinetic energy. With these assumptions, two unitless parameters characterize the process: , which measures the strength of the electron-ion Coulomb interaction, and , which measures the photon "softness" and we assume is always small (the choice of the factor 2 is for later convenience). In the limit , the quantum-mechanical Born approximation gives: In the opposite limit , the full quantum-mechanical result reduces to the purely classical result where is the Euler–Mascheroni constant. Note that which is a purely classical expression without the Planck constant . A semi-classical, heuristic way to understand the Gaunt factor is to write it as where and are maximum and minimum "impact parameters" for the electron-ion collision, in the presence of the photon electric field. With our assumptions, : for larger impact parameters, the sinusoidal oscillation of the photon field provides "phase mixing" that strongly reduces the interaction. is the larger of the quantum-mechanical de Broglie wavelength and the classical distance of closest approach where the electron-ion Coulomb potential energy is comparable to the electron's initial kinetic energy. The above approximations generally apply as long as the argument of the logarithm is large, and break down when it is less than unity. Namely, these forms for the Gaunt factor become negative, which is unphysical. A rough approximation to the full calculations, with the appropriate Born and classical limits, is Thermal bremsstrahlung in a medium: emission and absorption This section discusses bremsstrahlung emission and the inverse absorption process (called inverse bremsstrahlung) in a macroscopic medium. 
We start with the equation of radiative transfer, which applies to general processes and not just bremsstrahlung: is the radiation spectral intensity, or power per (area × × photon frequency) summed over both polarizations. is the emissivity, analogous to defined above, and is the absorptivity. and are properties of the matter, not the radiation, and account for all the particles in the medium – not just a pair of one electron and one ion as in the prior section. If is uniform in space and time, then the left-hand side of the transfer equation is zero, and we find If the matter and radiation are also in thermal equilibrium at some temperature, then must be the blackbody spectrum: Since and are independent of , this means that must be the blackbody spectrum whenever the matter is in equilibrium at some temperature – regardless of the state of the radiation. This allows us to immediately know both and once one is known – for matter in equilibrium. In plasma: approximate classical results NOTE: this section currently gives formulas that apply in the Rayleigh–Jeans limit , and does not use a quantized (Planck) treatment of radiation. Thus a usual factor like does not appear. The appearance of in below is due to the quantum-mechanical treatment of collisions. In a plasma, the free electrons continually collide with the ions, producing bremsstrahlung. A complete analysis requires accounting for both binary Coulomb collisions as well as collective (dielectric) behavior. A detailed treatment is given by Bekefi, while a simplified one is given by Ichimaru. In this section we follow Bekefi's dielectric treatment, with collisions included approximately via the cutoff wavenumber, Consider a uniform plasma, with thermal electrons distributed according to the Maxwell–Boltzmann distribution with the temperature . Following Bekefi, the power spectral density (power per angular frequency interval per volume, integrated over the whole sr of solid angle, and in both polarizations) of the bremsstrahlung radiated, is calculated to be where is the electron plasma frequency, is the photon frequency, is the number density of electrons and ions, and other symbols are physical constants. The second bracketed factor is the index of refraction of a light wave in a plasma, and shows that emission is greatly suppressed for (this is the cutoff condition for a light wave in a plasma; in this case the light wave is evanescent). This formula thus only applies for . This formula should be summed over ion species in a multi-species plasma. The special function is defined in the exponential integral article, and the unitless quantity is is a maximum or cutoff wavenumber, arising due to binary collisions, and can vary with ion species. Roughly, when (typical in plasmas that are not too cold), where eV is the Hartree energy, and is the electron thermal de Broglie wavelength. Otherwise, where is the classical Coulomb distance of closest approach. For the usual case , we find The formula for is approximate, in that it neglects enhanced emission occurring for slightly above In the limit , we can approximate as where is the Euler–Mascheroni constant. The leading, logarithmic term is frequently used, and resembles the Coulomb logarithm that occurs in other collisional plasma calculations. For the log term is negative, and the approximation is clearly inadequate. Bekefi gives corrected expressions for the logarithmic term that match detailed binary-collision calculations. 
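For orientation, the emission and absorption relations described in prose at the start of this section can be written compactly, with spectral intensity Iν, emissivity jν and absorptivity αν. These are the standard radiative-transfer and Kirchhoff relations, reconstructed from the surrounding definitions rather than quoted from the original article.

```latex
% Radiative transfer along a ray in a steady medium:
\frac{dI_{\nu}}{ds} = j_{\nu} - \alpha_{\nu} I_{\nu}

% Uniform, steady state (left-hand side vanishes):
I_{\nu} = \frac{j_{\nu}}{\alpha_{\nu}}

% Matter in thermal equilibrium at temperature T: the ratio must equal the
% blackbody (Planck) spectrum, which fixes j_nu once alpha_nu is known
% (Kirchhoff's law):
\frac{j_{\nu}}{\alpha_{\nu}} = B_{\nu}(T)
  = \frac{2h\nu^{3}/c^{2}}{e^{h\nu/k_{\mathrm{B}}T} - 1}
```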
The total emission power density, integrated over all frequencies, is and decreases with ; it is always positive. For , we find Note the appearance of due to the quantum nature of . In practical units, a commonly used version of this formula for is This formula is 1.59 times the one given above, with the difference due to details of binary collisions. Such ambiguity is often expressed by introducing Gaunt factor , e.g. in one finds where everything is expressed in the CGS units. Relativistic corrections For very high temperatures there are relativistic corrections to this formula, that is, additional terms of the order of Bremsstrahlung cooling If the plasma is optically thin, the bremsstrahlung radiation leaves the plasma, carrying part of the internal plasma energy. This effect is known as the bremsstrahlung cooling. It is a type of radiative cooling. The energy carried away by bremsstrahlung is called bremsstrahlung losses and represents a type of radiative losses. One generally uses the term bremsstrahlung losses in the context when the plasma cooling is undesired, as e.g. in fusion plasmas. Polarizational bremsstrahlung Polarizational bremsstrahlung (sometimes referred to as "atomic bremsstrahlung") is the radiation emitted by the target's atomic electrons as the target atom is polarized by the Coulomb field of the incident charged particle. Polarizational bremsstrahlung contributions to the total bremsstrahlung spectrum have been observed in experiments involving relatively massive incident particles, resonance processes, and free atoms. However, there is still some debate as to whether or not there are significant polarizational bremsstrahlung contributions in experiments involving fast electrons incident on solid targets. It is worth noting that the term "polarizational" is not meant to imply that the emitted bremsstrahlung is polarized. Also, the angular distribution of polarizational bremsstrahlung is theoretically quite different than ordinary bremsstrahlung. Sources X-ray tube In an X-ray tube, electrons are accelerated in a vacuum by an electric field towards a piece of material called the "target". X-rays are emitted as the electrons hit the target. Already in the early 20th century physicists found out that X-rays consist of two components, one independent of the target material and another with characteristics of fluorescence. Now we say that the output spectrum consists of a continuous spectrum of X-rays with additional sharp peaks at certain energies. The former is due to bremsstrahlung, while the latter are characteristic X-rays associated with the atoms in the target. For this reason, bremsstrahlung in this context is also called continuous X-rays. The German term itself was introduced in 1909 by Arnold Sommerfeld in order to explain the nature of the first variety of X-rays. The shape of this continuum spectrum is approximately described by Kramers' law. The formula for Kramers' law is usually given as the distribution of intensity (photon count) against the wavelength of the emitted radiation: The constant is proportional to the atomic number of the target element, and is the minimum wavelength given by the Duane–Hunt law. The spectrum has a sharp cutoff at which is due to the limited energy of the incoming electrons. For example, if an electron in the tube is accelerated through 60 kV, then it will acquire a kinetic energy of 60 keV, and when it strikes the target it can create X-rays with energy of at most 60 keV, by conservation of energy. 
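The continuum shape and cutoff just described are usually written as Kramers' law and the Duane–Hunt law. The expressions below are the standard forms of those two results, offered as a reconstruction consistent with the surrounding text (K is a constant proportional to the target's atomic number), together with the 60 kV example worked through.

```latex
% Kramers' law for the continuum intensity as a function of wavelength:
I(\lambda)\,d\lambda = K\left(\frac{\lambda}{\lambda_{\min}} - 1\right)\frac{1}{\lambda^{2}}\,d\lambda

% Duane-Hunt law for the short-wavelength cutoff at accelerating voltage V:
\lambda_{\min} = \frac{hc}{eV} \approx \frac{1239.8\ \mathrm{pm}}{V\ [\mathrm{kV}]}

% Example: V = 60 kV gives  lambda_min ~ 1239.8 / 60 pm ~ 20.7 pm
```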
(This upper limit corresponds to the electron coming to a stop by emitting just one X-ray photon. Usually the electron emits many photons, and each has an energy less than 60 keV.) A photon with energy of at most 60 keV has wavelength of at least , so the continuous X-ray spectrum has exactly that cutoff, as seen in the graph. More generally the formula for the low-wavelength cutoff, the Duane–Hunt law, is: where is the Planck constant, is the speed of light, is the voltage that the electrons are accelerated through, is the elementary charge, and is picometres. Beta decay Beta particle-emitting substances sometimes exhibit a weak radiation with continuous spectrum that is due to bremsstrahlung (see the "outer bremsstrahlung" below). In this context, bremsstrahlung is a type of "secondary radiation", in that it is produced as a result of stopping (or slowing) the primary radiation (beta particles). It is very similar to X-rays produced by bombarding metal targets with electrons in X-ray generators (as above) except that it is produced by high-speed electrons from beta radiation. Inner and outer bremsstrahlung The "inner" bremsstrahlung (also known as "internal bremsstrahlung") arises from the creation of the electron and its loss of energy (due to the strong electric field in the region of the nucleus undergoing decay) as it leaves the nucleus. Such radiation is a feature of beta decay in nuclei, but it is occasionally (less commonly) seen in the beta decay of free neutrons to protons, where it is created as the beta electron leaves the proton. In electron and positron emission by beta decay the photon's energy comes from the electron-nucleon pair, with the spectrum of the bremsstrahlung decreasing continuously with increasing energy of the beta particle. In electron capture, the energy comes at the expense of the neutrino, and the spectrum is greatest at about one third of the normal neutrino energy, decreasing to zero electromagnetic energy at normal neutrino energy. Note that in the case of electron capture, bremsstrahlung is emitted even though no charged particle is emitted. Instead, the bremsstrahlung radiation may be thought of as being created as the captured electron is accelerated toward being absorbed. Such radiation may be at frequencies that are the same as soft gamma radiation, but it exhibits none of the sharp spectral lines of gamma decay, and thus is not technically gamma radiation. The internal process is to be contrasted with the "outer" bremsstrahlung due to the impingement on the nucleus of electrons coming from the outside (i.e., emitted by another nucleus), as discussed above. Radiation safety In some cases, such as the decay of , the bremsstrahlung produced by shielding the beta radiation with the normally used dense materials (e.g. lead) is itself dangerous; in such cases, shielding must be accomplished with low density materials, such as Plexiglas (Lucite), plastic, wood, or water; as the atomic number is lower for these materials, the intensity of bremsstrahlung is significantly reduced, but a larger thickness of shielding is required to stop the electrons (beta radiation). In astrophysics The dominant luminous component in a cluster of galaxies is the 10^7 to 10^8 kelvin intracluster medium. The emission from the intracluster medium is characterized by thermal bremsstrahlung.
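The temperatures quoted for the intracluster medium translate directly into characteristic photon energies, which is why this emission falls in the X-ray band. The conversion below uses the standard value of the Boltzmann constant and is a simple order-of-magnitude check rather than a result from the cited literature.

```latex
% Characteristic thermal photon energy for T = 10^7 - 10^8 K,
% using k_B = 8.617 x 10^-5 eV/K:
k_{\mathrm{B}}T \approx 8.6\times10^{-5}\ \mathrm{eV\,K^{-1}}
                \times \left(10^{7}\text{--}10^{8}\ \mathrm{K}\right)
                \approx 0.9\text{--}9\ \mathrm{keV}
```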
This radiation is in the energy range of X-rays and can be easily observed with space-based telescopes such as Chandra X-ray Observatory, XMM-Newton, ROSAT, ASCA, EXOSAT, Suzaku, RHESSI and future missions like IXO and Astro-H . Bremsstrahlung is also the dominant emission mechanism for H II regions at radio wavelengths. In electric discharges In electric discharges, for example as laboratory discharges between two electrodes or as lightning discharges between cloud and ground or within clouds, electrons produce Bremsstrahlung photons while scattering off air molecules. These photons become manifest in terrestrial gamma-ray flashes and are the source for beams of electrons, positrons, neutrons and protons. The appearance of Bremsstrahlung photons also influences the propagation and morphology of discharges in nitrogen–oxygen mixtures with low percentages of oxygen. Quantum mechanical description The complete quantum mechanical description was first performed by Bethe and Heitler. They assumed plane waves for electrons which scatter at the nucleus of an atom, and derived a cross section which relates the complete geometry of that process to the frequency of the emitted photon. The quadruply differential cross section, which shows a quantum mechanical symmetry to pair production, is where is the atomic number, the fine-structure constant, the reduced Planck constant and the speed of light. The kinetic energy of the electron in the initial and final state is connected to its total energy or its momenta via where is the mass of an electron. Conservation of energy gives where is the photon energy. The directions of the emitted photon and the scattered electron are given by where is the momentum of the photon. The differentials are given as The absolute value of the virtual photon between the nucleus and electron is The range of validity is given by the Born approximation where this relation has to be fulfilled for the velocity of the electron in the initial and final state. For practical applications (e.g. in Monte Carlo codes) it can be interesting to focus on the relation between the frequency of the emitted photon and the angle between this photon and the incident electron. Köhn and Ebert integrated the quadruply differential cross section by Bethe and Heitler over and and obtained: with and However, a much simpler expression for the same integral can be found in (Eq. 2BN) and in (Eq. 4.1). An analysis of the doubly differential cross section above shows that electrons whose kinetic energy is larger than the rest energy (511 keV) emit photons in forward direction while electrons with a small energy emit photons isotropically. Electron–electron bremsstrahlung One mechanism, considered important for small atomic numbers is the scattering of a free electron at the shell electrons of an atom or molecule. Since electron–electron bremsstrahlung is a function of and the usual electron-nucleus bremsstrahlung is a function of electron–electron bremsstrahlung is negligible for metals. For air, however, it plays an important role in the production of terrestrial gamma-ray flashes.
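The scaling behind this comparison can be stated explicitly. Per atom of atomic number Z, the standard proportionalities are the ones below; they are given as a reconstruction of the quantities referred to in the paragraph above, not as a quotation of its elided formulas.

```latex
% Electron-nucleus bremsstrahlung scales with the square of the nuclear charge,
% electron-electron bremsstrahlung with the number of atomic electrons:
\sigma_{e\text{-}n} \propto Z^{2}, \qquad \sigma_{e\text{-}e} \propto Z

% Their ratio therefore falls off roughly as 1/Z, which is why the
% electron-electron term is negligible for high-Z targets (metals) but matters
% for light elements such as those in air; the combined yield is often
% summarised by the replacement  Z^{2} \rightarrow Z(Z+1).
```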
Dicotyledon
The dicotyledons, also known as dicots (or, more rarely, dicotyls), are one of the two groups into which all the flowering plants (angiosperms) were formerly divided. The name refers to one of the typical characteristics of the group: namely, that the seed has two embryonic leaves or cotyledons. There are around 200,000 species within this group. The other group of flowering plants were called monocotyledons (or monocots), typically each having one cotyledon. Historically, these two groups formed the two divisions of the flowering plants. Largely from the 1990s onwards, molecular phylogenetic research confirmed what had already been suspected: that dicotyledons are not a group made up of all the descendants of a common ancestor (i.e., they are not a monophyletic group). Rather, a number of lineages, such as the magnoliids and groups now collectively known as the basal angiosperms, diverged earlier than the monocots did; in other words, monocots evolved from within the dicots, as traditionally defined. The traditional dicots are thus a paraphyletic group. The eudicots are the largest monophyletic group within the dicotyledons. They are distinguished from all other flowering plants by the structure of their pollen. Other dicotyledons and the monocotyledons have monosulcate pollen (or derived forms): grains with a single sulcus. Contrastingly, eudicots have tricolpate pollen (or derived forms): grains with three or more pores set in furrows called colpi. Comparison with monocotyledons Aside from cotyledon number, other broad differences have been noted between monocots and dicots, although these have proven to be differences primarily between monocots and eudicots. Many early-diverging dicot groups have monocot characteristics such as scattered vascular bundles, trimerous flowers, and non-tricolpate pollen. In addition, some monocots have dicot characteristics such as reticulated leaf veins. Classification Phylogeny The consensus phylogenetic tree used in the APG IV system shows that the group traditionally treated as the dicots is paraphyletic to the monocots: Historical Traditionally, the dicots have been called the Dicotyledones (or Dicotyledoneae), at any rank. If treated as a class, as they are within the Cronquist system, they could be called the Magnoliopsida after the type genus Magnolia. In some schemes, the eudicots were either treated as a separate class, the Rosopsida (type genus Rosa), or as several separate classes. The remaining dicots (palaeodicots or basal angiosperms) may be kept in a single paraphyletic class, called Magnoliopsida, or further divided. Some botanists prefer to retain the dicotyledons as a valid class, arguing its practicality and that it makes evolutionary sense. APG vs. Cronquist The following lists show the orders in the Angiosperm Phylogeny Group APG IV system traditionally called dicots, together with the older Cronquist system. Dahlgren and Thorne systems Under the Dahlgren and Thorne systems, the subclass name Magnoliidae was used for the dicotyledons. This is also the case in some of the systems derived from the Cronquist system. These two systems are contrasted in the table below in terms of how each categorises by superorder; note that the sequence within each system has been altered in order to pair corresponding taxa The Thorne system (1992) as depicted by Reveal is: There exist variances between the superorders circumscribed from each system. 
Namely, although the systems share common names for many of the listed superorders, the specific list orders classified within each varies. For example, Thorne's Theanae corresponds to five distinct superorders under Dahlgren's system, only one of which is called Theanae.
Human body
The human body is the entire structure of a human being. It is composed of many different types of cells that together create tissues and subsequently organs and then organ systems. The external human body consists of a head, hair, neck, torso (which includes the thorax and abdomen), genitals, arms, hands, legs, and feet. The internal human body includes organs, teeth, bones, muscle, tendons, ligaments, blood vessels and blood, lymphatic vessels and lymph. The study of the human body includes anatomy, physiology, histology and embryology. The body varies anatomically in known ways. Physiology focuses on the systems and organs of the human body and their functions. Many systems and mechanisms interact in order to maintain homeostasis, with safe levels of substances such as sugar, iron, and oxygen in the blood. The body is studied by health professionals, physiologists, anatomists, and artists to assist them in their work. Composition The human body is composed of elements including hydrogen, oxygen, carbon, calcium and phosphorus. These elements reside in trillions of cells and non-cellular components of the body. The adult male body is about 60% total body water content of some . This is made up of about of extracellular fluid including about of blood plasma and about of interstitial fluid, and about of fluid inside cells. The content, acidity and composition of the water inside and outside cells is carefully maintained. The main electrolytes in body water outside cells are sodium and chloride, whereas within cells it is potassium and other phosphates. Cells The body contains trillions of cells, the fundamental unit of life. At maturity, there are roughly 30 trillion cells, and 38 trillion bacteria in the body, an estimate arrived at by totaling the cell numbers of all the organs of the body and cell types. The skin of the body is also host to billions of commensal organisms as well as immune cells. Not all parts of the body are made from cells. Cells sit in an extracellular matrix that consists of proteins such as collagen, surrounded by extracellular fluids. Genome Cells in the body function because of DNA. DNA sits within the nucleus of a cell. Here, parts of DNA are copied and sent to the body of the cell via RNA. The RNA is then used to create proteins, which form the basis for cells, their activity, and their products. Proteins dictate cell function and gene expression, a cell is able to self-regulate by the amount of proteins produced. However, not all cells have DNA; some cells such as mature red blood cells lose their nucleus as they mature. Tissues The body consists of many different types of tissue, defined as cells that act with a specialised function. The study of tissues is called histology and is often done with a microscope. The body consists of four main types of tissues. These are lining cells (epithelia), connective tissue, nerve tissue and muscle tissue. Cells Cells that line surfaces exposed to the outside world or gastrointestinal tract (epithelia) or internal cavities (endothelium) come in numerous shapes and forms – from single layers of flat cells, to cells with small beating hair-like cilia in the lungs, to column-like cells that line the stomach. Endothelial cells are cells that line internal cavities including blood vessels and glands. Lining cells regulate what can and cannot pass through them, protect internal structures, and function as sensory surfaces. 
Organs Organs, structured collections of cells with a specific function, mostly sit within the body, with the exception of skin. Examples include the heart, lungs and liver. Many organs reside within cavities within the body. These cavities include the abdomen (which contains the stomach, for example) and the pleura, which contains the lungs. Heart The heart is an organ located in the thoracic cavity between the lungs and slightly to the left. It is surrounded by the pericardium, which holds it in place in the mediastinum, serves to protect it from blunt trauma and infection, and helps lubricate the movement of the heart via pericardial fluid. The heart works by pumping blood around the body, allowing oxygen, nutrients, waste, hormones and white blood cells to be transported. The heart is composed of two atria and two ventricles. The primary purpose of the atria is to allow uninterrupted venous blood flow to the heart during ventricular systole. This allows enough blood to get into the ventricles during atrial systole. Consequently, the atria allow a cardiac output roughly 75% greater than would be possible without them. The purpose of the ventricles is to pump blood to the lungs through the right ventricle and to the rest of the body through the left ventricle. The heart has an electrical conduction system to control the contraction and relaxation of the muscles. It starts in the sinoatrial node and travels through the atria, causing them to pump blood into the ventricles. It then travels to the atrioventricular node, which slows the signal slightly, allowing the ventricles to fill with blood before pumping it out and starting the cycle over again. Coronary artery disease is the leading cause of death worldwide, making up 16% of all deaths. It is caused by the buildup of plaque in the coronary arteries supplying the heart; eventually the arteries may become so narrow that not enough blood is able to reach the myocardium, a condition known as myocardial infarction or heart attack. This can cause heart failure or cardiac arrest and eventually death. Risk factors for coronary artery disease include obesity, smoking, high cholesterol, high blood pressure, lack of exercise and diabetes. Cancer can affect the heart, though it is exceedingly rare and has usually metastasized from another part of the body such as the lungs or breasts. This is because heart cells quickly stop dividing and all growth occurs through size increase rather than cell division. Gallbladder The gallbladder is a hollow pear-shaped organ located posterior to the inferior middle part of the right lobe of the liver. It is variable in shape and size. It stores bile before it is released into the small intestine via the common bile duct to help with the digestion of fats. It receives bile from the liver via the cystic duct, which connects to the common hepatic duct to form the common bile duct. The gallbladder gets its blood supply from the cystic artery, which in most people emerges from the right hepatic artery. Gallstones are a common condition in which one or more stones form in the gallbladder or biliary tract. Most people are asymptomatic, but if a stone blocks the biliary tract it causes a gallbladder attack; symptoms may include sudden pain in the upper right abdomen or the center of the abdomen. Nausea and vomiting may also occur. The typical treatment is removal of the gallbladder through a procedure called a cholecystectomy.
Having gallstones is a risk factor for gallbladder cancer, which although quite uncommon, is rapidly fatal if not diagnosed early. Systems Circulatory system The circulatory system consists of the heart and blood vessels (arteries, veins and capillaries). The heart propels the circulation of the blood, which serves as a "transportation system" to transfer oxygen, fuel, nutrients, waste products, immune cells and signaling molecules (i.e. hormones) from one part of the body to another. Paths of blood circulation within the human body can be divided into two circuits: the pulmonary circuit, which pumps blood to the lungs to receive oxygen and leave carbon dioxide, and the systemic circuit, which carries blood from the heart off to the rest of the body. The blood consists of fluid that carries cells in the circulation, including some that move from tissue to blood vessels and back, as well as the spleen and bone marrow. Digestive system The digestive system consists of the mouth including the tongue and teeth, esophagus, stomach, (gastrointestinal tract, small and large intestines, and rectum), as well as the liver, pancreas, gallbladder, and salivary glands. It converts food into small, nutritional, non-toxic molecules for distribution and absorption into the body. These molecules take the form of proteins (which are broken down into amino acids), fats, vitamins and minerals (the last of which are mainly ionic rather than molecular). After being swallowed, food moves through the gastrointestinal tract by means of peristalsis: the systematic expansion and contraction of muscles to push food from one area to the next. Digestion begins in the mouth, which chews food into smaller pieces for easier digestion. Then it is swallowed, and moves through the esophagus to the stomach. In the stomach, food is mixed with gastric acids to allow the extraction of nutrients. What is left is called chyme; this then moves into the small intestine, which absorbs the nutrients and water from the chyme. What remains passes on to the large intestine, where it is dried to form feces; these are then stored in the rectum until they are expelled through the anus. Endocrine system The endocrine system consists of the principal endocrine glands: the pituitary, thyroid, adrenals, pancreas, parathyroids, and gonads, but nearly all organs and tissues produce specific endocrine hormones as well. The endocrine hormones serve as signals from one body system to another regarding an enormous array of conditions, resulting in variety of changes of function. Immune system The immune system consists of the white blood cells, the thymus, lymph nodes and lymph channels, which are also part of the lymphatic system. The immune system provides a mechanism for the body to distinguish its own cells and tissues from outside cells and substances and to neutralize or destroy the latter by using specialized proteins such as antibodies, cytokines, and toll-like receptors, among many others. Integumentary system The integumentary system consists of the covering of the body (the skin), including hair and nails as well as other functionally important structures such as the sweat glands and sebaceous glands. The skin provides containment, structure, and protection for other organs, and serves as a major sensory interface with the outside world. Lymphatic system The lymphatic system extracts, transports and metabolizes lymph, the fluid found in between cells. 
The lymphatic system is similar to the circulatory system in terms of both its structure and its most basic function, to carry a body fluid. Musculoskeletal system The musculoskeletal system consists of the human skeleton (which includes bones, ligaments, tendons, joints and cartilage) and attached muscles. It gives the body basic structure and the ability for movement. In addition to their structural role, the larger bones in the body contain bone marrow, the site of production of blood cells. Also, all bones are major storage sites for calcium and phosphate. This system can be split up into the muscular system and the skeletal system. Nervous system The nervous system consists of the body's neurons and glial cells, which together form the nerves, ganglia and gray matter, which in turn form the brain and related structures. The brain is the organ of thought, emotion, memory, and sensory processing; it serves many aspects of communication and controls various systems and functions. The special senses consist of vision, hearing, taste, and smell. The eyes, ears, tongue, and nose gather information about the body's environment. From a structural perspective, the nervous system is typically subdivided into two component parts: the central nervous system (CNS), composed of the brain and the spinal cord; and the peripheral nervous system (PNS), composed of the nerves and ganglia outside the brain and spinal cord. The CNS is mostly responsible for organizing motion, processing sensory information, thought, memory, cognition and other such functions. It remains a matter of some debate whether the CNS directly gives rise to consciousness. The peripheral nervous system (PNS) is mostly responsible for gathering information with sensory neurons and directing body movements with motor neurons. From a functional perspective, the nervous system is again typically divided into two component parts: the somatic nervous system (SNS) and the autonomic nervous system (ANS). The SNS is involved in voluntary functions like speaking and sensory processes. The ANS is involved in involuntary processes, such as digestion and regulating blood pressure. The nervous system is subject to many different diseases. In epilepsy, abnormal electrical activity in the brain can cause seizures. In multiple sclerosis, the immune system attacks the nerve linings, damaging the nerves' ability to transmit signals. Amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease, is a motor neuron disease which gradually reduces movement in patients. There are also many other diseases of the nervous system. Reproductive system The purpose of the reproductive system is to reproduce and nurture the growth of offspring. The functions include the production of germ cells and hormones. The sex organs of the male reproductive system and the female reproductive system develops and mature at puberty. These systems include the internal and external genitalia. Female puberty generally occurs between the ages of 9 and 13 and is characterized by ovulation and menstruation; the growth of secondary sex characteristics, such as growth of pubic and underarm hair, breast, uterine and vaginal growth, widening hips and increased height and weight, also occur during puberty. Male puberty sees the further development of the penis and testicles. The female inner sex organs are the two ovaries, their fallopian tubes, the uterus, and the cervix. 
At birth there are about 70,000 immature egg cells that degenerate until at puberty there are around 40,000. No more egg cells are produced. Hormones stimulate the beginning of menstruation and the ongoing menstrual cycles. The female external sex organs make up the vulva (labia, clitoris, and vestibule). The male external genitalia include the penis and the scrotum, which contains the testicles. The testicles are the gonads, the sex glands that produce the sperm cells. Unlike the egg cells in the female, sperm cells are produced throughout life. Other internal sex organs are the epididymides, vasa deferentia, and some accessory glands. Diseases that affect the reproductive system include polycystic ovary syndrome, a number of disorders of the testicles including testicular torsion, and a number of sexually transmitted infections including syphilis, HIV, chlamydia, HPV and genital warts. Cancer can affect most parts of the reproductive system including the penis, testicles, prostate, ovaries, cervix, vagina, fallopian tubes, uterus and vulva. Respiratory system The respiratory system consists of the nose, nasopharynx, trachea, and lungs. It brings oxygen from the air and excretes carbon dioxide and water back into the air. First, air is drawn through the trachea into the lungs as the diaphragm contracts and moves down, lowering the pressure in the chest. Air is briefly held inside small sacs known as alveoli (sing.: alveolus) before being expelled from the lungs when the diaphragm relaxes. Each alveolus is surrounded by capillaries carrying deoxygenated blood, which absorbs oxygen out of the air and into the bloodstream. For the respiratory system to function properly, there need to be as few impediments as possible to the movement of air within the lungs. Inflammation of the lungs and excess mucus are common sources of breathing difficulties. In asthma, the respiratory system is persistently inflamed, causing wheezing or shortness of breath. Pneumonia occurs through infection of the alveoli, and may be caused by tuberculosis. Emphysema, commonly a result of smoking, is caused by damage to the connections between the alveoli. Urinary system The urinary system consists of the two kidneys, two ureters, the bladder, and the urethra. It removes waste materials from the blood through urine, which carries a variety of waste molecules and excess ions and water out of the body. First, the kidneys filter the blood through their nephrons, removing waste products such as urea and creatinine, maintaining the proper balance of electrolytes, and turning the waste products into urine by combining them with water from the blood. The kidneys filter about 150 quarts (170 liters) of blood daily, but most of it is returned to the bloodstream, with only 1–2 quarts (1–2 liters) ending up as urine. The urine is carried by the ureters from the kidneys down to the bladder. The smooth muscle lining the ureter walls continuously tightens and relaxes in a process called peristalsis to force urine away from the kidneys and down into the bladder. Small amounts of urine are released into the bladder every 10–15 seconds. The bladder is a hollow, balloon-shaped organ located in the pelvis. It stores urine until the brain signals it to relax the urinary sphincter and release the urine into the urethra, beginning urination. A normal bladder can comfortably hold up to 16 ounces (half a liter) for 3–5 hours.
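As a rough check on the filtration figures above, the short Python sketch below estimates what fraction of the filtered fluid the kidneys return to the bloodstream rather than excrete; the quart-to-liter conversion is an assumption (US liquid quarts), not a value taken from this article.

```python
# Rough, illustrative check of the kidney filtration figures quoted above.
QUART_L = 0.946  # liters per US liquid quart (assumed conversion)

filtered_l = 150 * QUART_L                 # ~142 L of fluid filtered per day
urine_range_l = (1 * QUART_L, 2 * QUART_L)  # ~0.9-1.9 L excreted as urine

for urine_l in urine_range_l:
    reabsorbed = 1 - urine_l / filtered_l
    print(f"urine {urine_l:.1f} L/day -> about {reabsorbed:.1%} of the filtrate returned to the blood")
# Prints roughly 98-99%: nearly all of the filtered fluid is reabsorbed.
```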
Numerous diseases affect the urinary system including kidney stones, which are formed when materials in the urine concentrate enough to form a solid mass, urinary tract infections, which are infections of the urinary tract and can cause pain when urinating, frequent urination and even death if left untreated. Renal failure occurs when the kidneys fail to adequately filter waste from the blood and can lead to death if not treated with dialysis or kidney transplantation. Cancer can affect the bladder, kidneys, urethra and ureters, with the latter two being far more rare. Anatomy Human anatomy is the study of the shape and form of the human body. The human body has four limbs (two arms and two legs), a head and a neck, which connect to the torso. The body's shape is determined by a strong skeleton made of bone and cartilage, surrounded by fat (adipose tissue), muscle, connective tissue, organs, and other structures. The spine at the back of the skeleton contains the flexible vertebral column, which surrounds the spinal cord, which is a collection of nerve fibres connecting the brain to the rest of the body. Nerves connect the spinal cord and brain to the rest of the body. All major bones, muscles, and nerves in the body are named, with the exception of anatomical variations such as sesamoid bones and accessory muscles. Blood vessels carry blood throughout the body, which moves because of the beating of the heart. Venules and veins collect blood low in oxygen from tissues throughout the body. These collect in progressively larger veins until they reach the body's two largest veins, the superior and inferior vena cava, which drain blood into the right side of the heart. From here, the blood is pumped into the lungs where it receives oxygen and drains back into the left side of the heart. From here, it is pumped into the body's largest artery, the aorta, and then progressively smaller arteries and arterioles until it reaches tissue. Here, blood passes from small arteries into capillaries, then small veins and the process begins again. Blood carries oxygen, waste products, and hormones from one place in the body to another. Blood is filtered at the kidneys and liver. The body consists of a number of body cavities, separated areas which house different organ systems. The brain and central nervous system reside in an area protected from the rest of the body by the blood brain barrier. The lungs sit in the pleural cavity. The intestines, liver, and spleen sit in the abdominal cavity. Height, weight, shape and other body proportions vary individually and with age and sex. Body shape is influenced by the distribution of bones, muscle and fat tissue. Physiology Human physiology is the study of how the human body functions. This includes the mechanical, physical, bioelectrical, and biochemical functions of humans in good health, from organs to the cells of which they are composed. The human body consists of many interacting systems of organs. These interact to maintain homeostasis, keeping the body in a stable state with safe levels of substances such as sugar and oxygen in the blood. Each system contributes to homeostasis, of itself, other systems, and the entire body. Some combined systems are referred to by joint names. For example, the nervous system and the endocrine system operate together as the neuroendocrine system. The nervous system receives information from the body, and transmits this to the brain via nerve impulses and neurotransmitters. 
At the same time, the endocrine system releases hormones, such as to help regulate blood pressure and volume. Together, these systems regulate the internal environment of the body, maintaining blood flow, posture, energy supply, temperature, and acid balance (pH). Development Development of the human body is the process of growth to maturity. The process begins with fertilisation, where an egg released from the ovary of a female is penetrated by sperm. The egg then lodges in the uterus, where an embryo and later fetus develop until birth. Growth and development occur after birth, and include both physical and psychological development, influenced by genetic, hormonal, environmental and other factors. Development and growth continue throughout life, through childhood, adolescence, and through adulthood to old age, and are referred to as the process of aging. Society and culture Professional study Health professionals learn about the human body from illustrations, models, and demonstrations. Medical and dental students in addition gain practical experience, for example by dissection of cadavers. Human anatomy, physiology, and biochemistry are basic medical sciences, generally taught to medical students in their first year at medical school. Depiction In Western societies, the contexts for depictions of the human body include information, art and pornography. Information includes both science and education, such as anatomical drawings. Any ambiguous image not easily fitting into one of these categories may be misinterpreted, leading to disputes. The most contentious disputes are between fine art and erotic images, which define the legal distinction of which images are permitted or prohibited. History of anatomy In Ancient Greece, the Hippocratic Corpus described the anatomy of the skeleton and muscles. The 2nd century physician Galen of Pergamum compiled classical knowledge of anatomy into a text that was used throughout the Middle Ages. In the Renaissance, Andreas Vesalius (1514–1564) pioneered the modern study of human anatomy by dissection, writing the influential book De humani corporis fabrica. Anatomy advanced further with the invention of the microscope and the study of the cellular structure of tissues and organs. Modern anatomy uses techniques such as magnetic resonance imaging, computed tomography, fluoroscopy and ultrasound imaging to study the body in unprecedented detail. History of physiology The study of human physiology began with Hippocrates in Ancient Greece, around 420 BCE, and with Aristotle (384–322 BCE) who applied critical thinking and emphasis on the relationship between structure and function. Galen () was the first to use experiments to probe the body's functions. The term physiology was introduced by the French physician Jean Fernel (1497–1558). In the 17th century, William Harvey (1578–1657) described the circulatory system, pioneering the combination of close observation with careful experiment. In the 19th century, physiological knowledge began to accumulate at a rapid rate with the cell theory of Matthias Schleiden and Theodor Schwann in 1838, that organisms are made up of cells. Claude Bernard (1813–1878) created the concept of the milieu interieur (internal environment), which Walter Cannon (1871–1945) later said was regulated to a steady state in homeostasis. In the 20th century, the physiologists Knut Schmidt-Nielsen and George Bartholomew extended their studies to comparative physiology and ecophysiology. 
Most recently, evolutionary physiology has become a distinct subdiscipline.
Biology and health sciences
Biology
null
54195
https://en.wikipedia.org/wiki/Stingray
Stingray
Stingrays are a group of sea rays, a type of cartilaginous fish. They are classified in the suborder Myliobatoidei of the order Myliobatiformes and consist of eight families: Hexatrygonidae (sixgill stingray), Plesiobatidae (deepwater stingray), Urolophidae (stingarees), Urotrygonidae (round rays), Dasyatidae (whiptail stingrays), Potamotrygonidae (river stingrays), Gymnuridae (butterfly rays) and Myliobatidae (eagle rays). There are about 220 known stingray species organized into 29 genera. Stingrays are common in coastal tropical and subtropical marine waters throughout the world. Some species, such as the thorntail stingray (Dasyatis thetidis), are found in warmer temperate oceans and others, such as the deepwater stingray (Plesiobatis daviesi), are found in the deep ocean. The river stingrays and a number of whiptail stingrays (such as the Niger stingray (Fontitrygon garouaensis)) are restricted to fresh water. Most myliobatoids are demersal (inhabiting the next-to-lowest zone in the water column), but some, such as the pelagic stingray and the eagle rays, are pelagic. Stingray species are progressively becoming threatened or vulnerable to extinction, particularly as the consequence of unregulated fishing. As of 2013, 45 species have been listed as vulnerable or endangered by the IUCN. The status of some other species is poorly known, leading to their being listed as data deficient. Evolution Stingrays diverged from their closest relatives, the panrays, during the Late Jurassic period, and diversified over the course of the Cretaceous into the different extant families today. The earliest stingrays appear to have been benthic, with the ancestors of the eagle rays becoming pelagic during the early Late Cretaceous. Fossils Permineralized stingray teeth have been found in sedimentary deposits around the world as far back as the Early Cretaceous. The oldest known stingray taxon is "Dasyatis" speetonensis from the Hauterivian of England, whose teeth most closely resemble that of the extant sixgill stingray (Hexatrygon). Although stingray teeth are rare on sea bottoms compared to the similar shark teeth, scuba divers searching for the latter do encounter the teeth of stingrays. Full-body stingray fossils are very rare but are known from certain lagerstätte that preserve soft-bodied animals. The extinct Cyclobatis of the Cretaceous of Lebanon is thought to be a skate that had convergently evolved a highly stingray-like body plan, although its exact taxonomic placement is still uncertain. True stingray fossils become more common in the Eocene, with the extinct freshwater stingrays Heliobatis and Asterotrygon known from the Green River Formation. A diversity of stingray fossils is known from the Eocene Monte Bolca formation from Italy, including the early stingaree Arechia, as well as Dasyomyliobatis, which is thought to represent a transitional form between stingrays and eagle rays, and the highly unusual Lessiniabatis, which had an extremely short and slender tail with no sting. Anatomy Jaw and teeth The mouth of the stingray is located on the ventral side of the vertebrate. Stingrays exhibit hyostylic jaw suspension, which means that the mandibular arch is only suspended by an articulation with the hyomandibula. This type of suspensions allows for the upper jaw to have high mobility and protrude outward. The teeth are modified placoid scales that are regularly shed and replaced. 
In general, the teeth have a root implanted within the connective tissue and a visible portion of the tooth, is large and flat, allowing them to crush the bodies of hard shelled prey. Male stingrays display sexual dimorphism by developing cusps, or pointed ends, to some of their teeth. During mating season, some stingray species fully change their tooth morphology which then returns to baseline during non-mating seasons. Spiracles Spiracles are small openings that allow some fish and amphibians to breathe. Stingray spiracles are openings just behind its eyes. The respiratory system of stingrays is complicated by having two separate ways to take in water to use the oxygen. Most of the time stingrays take in water using their mouth and then send the water through the gills for gas exchange. This is efficient, but the mouth cannot be used when hunting because the stingrays bury themselves in the ocean sediment and wait for prey to swim by. So the stingray switches to using its spiracles. With the spiracles, they can draw water free from sediment directly into their gills for gas exchange. These alternate ventilation organs are less efficient than the mouth, since spiracles are unable to pull the same volume of water. However, it is enough when the stingray is quietly waiting to ambush its prey. The flattened bodies of stingrays allow them to effectively conceal themselves in their environments. Stingrays do this by agitating the sand and hiding beneath it. Because their eyes are on top of their bodies and their mouths on the undersides, stingrays cannot see their prey after capture; instead, they use smell and electroreceptors (ampullae of Lorenzini) similar to those of sharks. Stingrays settle on the bottom while feeding, often leaving only their eyes and tails visible. Coral reefs are favorite feeding grounds and are usually shared with sharks during high tide. Behavior Reproduction During the breeding season, males of various stingray species such as the round stingray (Urobatis halleri), may rely on their ampullae of Lorenzini to sense certain electrical signals given off by mature females before potential copulation. When a male is courting a female, he follows her closely, biting at her pectoral disc. He then places one of his two claspers into her valve. Reproductive ray behaviors are associated with their behavioral endocrinology, for example, in species such as the atlantic stingray (Hypanus sabinus), social groups are formed first, then the sexes display complex courtship behaviors that end in pair copulation which is similar to the species Urobatis halleri. Furthermore, their mating period is one of the longest recorded in elasmobranch fish. Individuals are known to mate for seven months before the females ovulate in March. During this time, the male stingrays experience increased levels of androgen hormones which has been linked to its prolonged mating periods. The behavior expressed among males and females during specific parts of this period involves aggressive social interactions. Frequently, the males trail females with their snout near the female vent then proceed to bite the female on her fins and her body. Although this mating behavior is similar to the species Urobatis halleri, differences can be seen in the particular actions of Hypanus sabinus. 
Seasonal elevated levels of serum androgens coincide with the expressed aggressive behavior, leading to the proposal that androgen steroids initiate, promote and maintain aggressive sexual behaviors in males of this species, which drives the prolonged mating season. Similarly, brief elevations of serum androgens in females have been linked to increased aggression and improved mate choice. When their androgen levels are elevated, ovulating females can improve their mate choice by quickly fleeing from persistent males, which influences the paternity of their offspring by excluding less fit mates. Stingrays are ovoviviparous, bearing live young in "litters" of five to thirteen. During this period, the female's behavior transitions to support of her future offspring. Females hold the embryos in the womb without a placenta. Instead, the embryos absorb nutrients from a yolk sac, and after the sac is depleted, the mother provides uterine "milk". After birth, the offspring generally disassociate from the mother and swim away, having been born with the instinctual abilities to protect and feed themselves. In a very small number of species, such as the giant freshwater stingray (Urogymnus polylepis), the mother "cares" for her young by having them swim with her until they are one-third of her size. At the Sea Life London Aquarium, two female stingrays delivered seven baby stingrays even though the mothers had not been near a male for two years. This suggests some species of rays can store sperm and then give birth when they deem conditions to be suitable. Locomotion The stingray uses its paired pectoral fins for moving around. This is in contrast to sharks and most other fish, which get most of their swimming power from a single caudal (tail) fin. Stingray pectoral fin locomotion can be divided into two categories: undulatory and oscillatory. Stingrays that use undulatory locomotion have shorter, thicker fins suited to slower movement in benthic areas, whereas longer, thinner pectoral fins allow faster swimming by oscillation in pelagic zones. Visually, oscillation involves less than one full wave travelling along the fin at a time, as opposed to undulation, in which more than one wave is present at all times. Feeding behavior and diet Stingrays use a wide range of feeding strategies. Some have specialized jaws that allow them to crush hard mollusk shells, whereas others use external mouth structures called cephalic lobes to guide plankton into their oral cavity. Benthic stingrays (those that reside on the sea floor) are ambush hunters. They wait until prey comes near, then use a strategy called "tenting". With the pectoral fins pressed against the substrate, the ray raises its head, generating a suction force that pulls the prey underneath the body. This form of whole-body suction is analogous to the buccal suction feeding performed by ray-finned fish. Stingrays exhibit a wide range of colors and patterns on their dorsal surface to help them camouflage against the sandy bottom. Some stingrays can even change color over the course of several days to adjust to new habitats. Since their mouths are on the underside of their bodies, they catch their prey, then crush and eat it with their powerful jaws. Like its shark relatives, the stingray is outfitted with electrical sensors called ampullae of Lorenzini. Located around the stingray's mouth, these organs sense the natural electrical charges of potential prey.
Many rays have jaw teeth to enable them to crush mollusks such as clams, oysters and mussels. Most stingrays feed primarily on mollusks, crustaceans and, occasionally, on small fish. Freshwater stingrays in the Amazon feed on insects and break down their tough exoskeletons with mammal-like chewing motions. Large pelagic rays like the manta use ram feeding to consume vast quantities of plankton and have been seen swimming in acrobatic patterns through plankton patches. Stingray injuries Stingrays are not usually aggressive and ordinarily attack humans only when provoked, such as when they are accidentally stepped on. Stingrays can have one, two or three blades. Contact with the spinal blade or blades causes local trauma (from the cut itself), pain, swelling, muscle cramps from the venom and, later, may result in infection from bacteria or fungi. The injury is very painful, but rarely life-threatening unless the stinger pierces a vital area. The blade is often deeply barbed and usually breaks off in the wound. Surgery may be required to remove the fragments. Fatal stings are very rare. The death of Steve Irwin in 2006 was only the second recorded in Australian waters since 1945. The stinger penetrated his thoracic wall and pierced his heart, causing massive trauma and bleeding. Venom The venom of the stingray has been relatively unstudied due to the mixture of venomous tissue secretions cells and mucous membrane cell products that occurs upon secretion from the spinal blade. The spine is covered with the epidermal skin layer. During secretion, the venom penetrates the epidermis and mixes with the mucus to release the venom on its victim. Typically, other venomous organisms create and store their venom in a gland. The stingray is notable in that it stores its venom within tissue cells. The toxins that have been confirmed to be within the venom are cystatins, peroxiredoxin and galectin. Galectin induces cell death in its victims and cystatins inhibit defense enzymes. In humans, these toxins lead to increased blood flow in the superficial capillaries and cell death. Despite the number of cells and toxins that are within the stingray, there is little relative energy required to produce and store the venom. The venom is produced and stored in the secretory cells of the vertebral column at the mid-distal region. These secretory cells are housed within the ventrolateral grooves of the spine. The cells of both marine and freshwater stingrays are round and contain a great amount of granule-filled cytoplasm. The stinging cells of marine stingrays are located only within these lateral grooves of the stinger. The stinging cells of freshwater stingray branch out beyond the lateral grooves to cover a larger surface area along the entire blade. Due to this large area and an increased number of proteins within the cells, the venom of freshwater stingrays has a greater toxicity than that of marine stingrays. Human use As food Rays are edible, and may be caught as food using fishing lines or spears. Stingray recipes can be found in many coastal areas worldwide. For example, in Malaysia and Singapore, stingray is commonly grilled over charcoal, then served with spicy sambal sauce. In Goa, and other Indian states, it is sometimes used as part of spicy curries. Generally, the most prized parts of the stingray are the wings, the "cheek" (the area surrounding the eyes), and the liver. The rest of the ray is considered too rubbery to have any culinary uses. 
Ecotourism Stingrays are usually very docile and curious, their usual reaction being to flee any disturbance, but they sometimes brush their fins past any new object they encounter. Nevertheless, certain larger species may be more aggressive and should be approached with caution, as the stingray's defensive reflex (use of its venomous stinger) may result in serious injury or death. Other uses The skin of the ray is used as an under layer for the cord or leather wrap (known as samegawa in Japanese) on Japanese swords due to its hard, rough texture that keeps the braided wrap from sliding on the handle during use. Several ethnological sections in museums, such as the British Museum, display arrowheads and spearheads made of stingray stingers, used in Micronesia and elsewhere. Henry de Monfreid stated in his books that before World War II, in the Horn of Africa, whips were made from the tails of big stingrays and these devices inflicted cruel cuts, so in Aden, the British forbade their use on women and slaves. In former Spanish colonies, a stingray is called ("whip ray"). Some stingray species are commonly seen in public aquarium exhibits and more recently in home aquaria. Gallery
Biology and health sciences
Batoidea
null
54211
https://en.wikipedia.org/wiki/Ganymede%20%28moon%29
Ganymede (moon)
Ganymede, or Jupiter III, is the largest and most massive natural satellite of Jupiter, and in the Solar System. Despite being the only moon in the Solar System with a substantial magnetic field, it is the largest Solar System object without a substantial atmosphere. Like Saturn's largest moon Titan, it is larger than the planet Mercury, but has somewhat less surface gravity than Mercury, Io, or the Moon due to its lower density compared to the three. Ganymede orbits Jupiter in roughly seven days and is in a 1:2:4 orbital resonance with the moons Europa and Io, respectively. Ganymede is composed of silicate rock and water in approximately equal proportions. It is a fully differentiated body with an iron-rich, liquid metallic core, giving it the lowest moment of inertia factor of any solid body in the Solar System. Its internal ocean potentially contains more water than all of Earth's oceans combined. Ganymede's magnetic field is probably created by convection within its core, and influenced by tidal forces from Jupiter's far greater magnetic field. Ganymede has a thin oxygen atmosphere that includes O, O2, and possibly O3 (ozone). Atomic hydrogen is a minor atmospheric constituent. Whether Ganymede has an ionosphere associated with its atmosphere is unresolved. Ganymede's surface is composed of two main types of terrain, the first of which are lighter regions, generally crosscut by extensive grooves and ridges, dating from slightly less than 4 billion years ago, covering two-thirds of Ganymede. The cause of the light terrain's disrupted geology is not fully known, but may be the result of tectonic activity due to tidal heating. The second terrain type are darker regions saturated with impact craters, which are dated to four billion years ago. Ganymede's discovery is credited to Simon Marius and Galileo Galilei, who both observed it in 1610, as the third of the Galilean moons, the first group of objects discovered orbiting another planet. Its name was soon suggested by astronomer Simon Marius, after the mythological Ganymede, a Trojan prince desired by Zeus (the Greek counterpart of Jupiter), who carried him off to be the cupbearer of the gods. Beginning with Pioneer 10, several spacecraft have explored Ganymede. The Voyager probes, Voyager 1 and Voyager 2, refined measurements of its size, while Galileo discovered its underground ocean and magnetic field. The next planned mission to the Jovian system is the European Space Agency's Jupiter Icy Moons Explorer (JUICE), which was launched in 2023. After flybys of all three icy Galilean moons, it is planned to enter orbit around Ganymede. History Chinese astronomical records report that in 365 BC, Gan De detected what might have been a moon of Jupiter, probably Ganymede, with the naked eye. However, Gan De reported the color of the companion as reddish, which is puzzling since moons are too faint for their color to be perceived with the naked eye. Shi Shen and Gan De together made fairly accurate observations of the five major planets. On January 7, 1610, Galileo Galilei used a telescope to observe what he thought were three stars near Jupiter, including what turned out to be Ganymede, Callisto, and one body that turned out to be the combined light from Io and Europa; the next night he noticed that they had moved. On January 13, he saw all four at once for the first time, but had seen each of the moons before this date at least once. By January 15, Galileo concluded that the stars were actually bodies orbiting Jupiter. 
Name Galileo claimed the right to name the moons he had discovered. He considered "Cosmian Stars" and settled on "Medicean Stars", in honor of Cosimo II de' Medici. The French astronomer Nicolas-Claude Fabri de Peiresc suggested individual names from the Medici family for the moons, but his proposal was not taken up. Simon Marius, who had originally claimed to have found the Galilean satellites, tried to name the moons the "Saturn of Jupiter", the "Jupiter of Jupiter" (this was Ganymede), the "Venus of Jupiter", and the "Mercury of Jupiter", another nomenclature that never caught on. Later on, after finding out about a suggestion from Johannes Kepler, Marius agreed with Kepler's proposal and so he then proposed a naming system based on Greek mythology instead. This final Kepler/Marius proposal was ultimately successful. This name and those of the other Galilean satellites fell into disfavor for a considerable time, and were not in common use until the mid-20th century. In much of the earlier astronomical literature, Ganymede is referred to instead by its Roman numeral designation, (a system introduced by Galileo), in other words "the third satellite of Jupiter". Following the discovery of moons of Saturn, a naming system based on that of Kepler and Marius was used for Jupiter's moons. Ganymede is the only Galilean moon of Jupiter named after a male figure—like Io, Europa, and Callisto, he was a lover of Zeus. In English, the Galilean satellites Io, Europa and Callisto have the Latin spellings of their names, but the Latin form of Ganymede is Ganymēdēs, which would be pronounced . However, the final syllable is dropped in English, perhaps under the influence of French Ganymède (). Orbit and rotation Ganymede orbits Jupiter at a distance of , third among the Galilean satellites, and completes a revolution every seven days and three hours (7.155 days). Like most known moons, Ganymede is tidally locked, with one side always facing toward the planet, hence its day is also seven days and three hours. Its orbit is very slightly eccentric and inclined to the Jovian equator, with the eccentricity and inclination changing quasi-periodically due to solar and planetary gravitational perturbations on a timescale of centuries. The ranges of change are 0.0009–0.0022 and 0.05–0.32°, respectively. These orbital variations cause the axial tilt (the angle between the rotational and orbital axes) to vary between 0 and 0.33°. Ganymede participates in orbital resonances with Europa and Io: for every orbit of Ganymede, Europa orbits twice and Io orbits four times. Conjunctions (alignment on the same side of Jupiter) between Io and Europa occur when Io is at periapsis and Europa at apoapsis. Conjunctions between Europa and Ganymede occur when Europa is at periapsis. The longitudes of the Io–Europa and Europa–Ganymede conjunctions change at the same rate, making triple conjunctions impossible. Such a complicated resonance is called the Laplace resonance. The current Laplace resonance is unable to pump the orbital eccentricity of Ganymede to a higher value. The value of about 0.0013 is probably a remnant from a previous epoch, when such pumping was possible. The Ganymedian orbital eccentricity is somewhat puzzling; if it is not pumped now it should have decayed long ago due to the tidal dissipation in the interior of Ganymede. This means that the last episode of the eccentricity excitation happened only several hundred million years ago. 
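The claim that the Io–Europa and Europa–Ganymede conjunction longitudes drift at the same rate, which underlies the Laplace resonance, can be checked numerically from the moons' orbital periods. The Python sketch below uses approximate published period values (an assumption, not figures quoted in this article).

```python
# Numerical check of the Laplace resonance relation described above.
from math import isclose

P_io, P_europa, P_ganymede = 1.769138, 3.551181, 7.154553  # orbital periods in days (approximate)

n_io = 360.0 / P_io        # mean motions in degrees per day
n_eu = 360.0 / P_europa
n_ga = 360.0 / P_ganymede

# Drift rates of the Io-Europa and Europa-Ganymede conjunction longitudes:
drift_io_eu = 2 * n_eu - n_io
drift_eu_ga = 2 * n_ga - n_eu
print(drift_io_eu, drift_eu_ga)                           # both about -0.74 deg/day
print(isclose(drift_io_eu, drift_eu_ga, abs_tol=1e-2))    # True: the rates match

# Equivalent statement of the Laplace relation: n_io - 3*n_eu + 2*n_ga ~ 0
print(n_io - 3 * n_eu + 2 * n_ga)                          # very close to zero
```

Because the two drift rates are equal, the Io–Europa and Europa–Ganymede conjunctions can never line up at the same longitude, which is why triple conjunctions do not occur.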
Because Ganymede's orbital eccentricity is relatively low—on average 0.0015—tidal heating is negligible now. However, in the past Ganymede may have passed through one or more Laplace-like resonances that were able to pump the orbital eccentricity to a value as high as 0.01–0.02. This probably caused significant tidal heating of the interior of Ganymede; the formation of the grooved terrain may be a result of one or more such heating episodes. There are two hypotheses for the origin of the Laplace resonance among Io, Europa, and Ganymede: that it is primordial and has existed from the beginning of the Solar System; or that it developed after the formation of the Solar System. A possible sequence of events for the latter scenario is as follows: Io raised tides on Jupiter, causing Io's orbit to expand (due to conservation of momentum) until it encountered the 2:1 resonance with Europa; after that, the expansion continued, but some of the angular momentum was transferred to Europa as the resonance caused its orbit to expand as well; the process continued until Europa encountered the 2:1 resonance with Ganymede. Eventually the drift rates of conjunctions between all three moons were synchronized and locked in the Laplace resonance. Physical characteristics Size With a diameter of about 5,268 km, Ganymede is the largest and most massive moon in the Solar System. It is slightly more massive than the second most massive moon, Saturn's satellite Titan, and is more than twice as massive as the Earth's Moon. It is larger in diameter than the planet Mercury but has only 45 percent of Mercury's mass. Ganymede is the ninth-largest object in the Solar System, but only the tenth-most massive. Composition The average density of Ganymede, 1.936 g/cm3 (slightly greater than Callisto's), suggests a composition of roughly equal parts rocky material and water, the latter mostly in the form of ice. Some of the water is liquid, forming an underground ocean. The mass fraction of ices is between 46 and 50 percent, slightly lower than that of Callisto. Some additional volatile ices such as ammonia may also be present. The exact composition of Ganymede's rock is not known, but it is probably close to the composition of L/LL type ordinary chondrites, which are characterized by less total iron, less metallic iron and more iron oxide than H chondrites. The weight ratio of iron to silicon ranges between 1.05 and 1.27 in Ganymede, whereas the solar ratio is around 1.8. Surface features Ganymede's surface has an albedo of about 43 percent. Water ice seems to be ubiquitous on its surface, with a mass fraction of 50–90 percent, significantly more than in Ganymede as a whole. Near-infrared spectroscopy has revealed the presence of strong water ice absorption bands at wavelengths of 1.04, 1.25, 1.5, 2.0 and 3.0 μm. The grooved terrain is brighter and has a more icy composition than the dark terrain. Analysis of high-resolution, near-infrared and UV spectra obtained by the Galileo spacecraft and from Earth observations has revealed various non-water materials: carbon dioxide, sulfur dioxide and, possibly, cyanogen, hydrogen sulfate and various organic compounds. Galileo results have also shown magnesium sulfate (MgSO4) and, possibly, sodium sulfate (Na2SO4) on Ganymede's surface. These salts may originate from the subsurface ocean. The Ganymedian surface albedo is markedly asymmetric; the leading hemisphere is brighter than the trailing one. This is similar to Europa, but the reverse holds for Callisto.
The trailing hemisphere of Ganymede appears to be enriched in sulfur dioxide. The distribution of carbon dioxide does not demonstrate any hemispheric asymmetry, but little or no carbon dioxide is observed near the poles. Impact craters on Ganymede (except one) do not show any enrichment in carbon dioxide, which also distinguishes it from Callisto. Ganymede's carbon dioxide gas was probably depleted in the past. Ganymede's surface is a mix of two types of terrain: very old, highly cratered, dark regions and somewhat younger (but still ancient), lighter regions marked with an extensive array of grooves and ridges. The dark terrain, which comprises about one-third of the surface, contains clays and organic materials that could indicate the composition of the impactors from which Jovian satellites accreted. The heating mechanism required for the formation of the grooved terrain on Ganymede is an unsolved problem in the planetary sciences. The modern view is that the grooved terrain is mainly tectonic in nature. Cryovolcanism is thought to have played only a minor role, if any. The forces that caused the strong stresses in the Ganymedian ice lithosphere necessary to initiate the tectonic activity may be connected to the tidal heating events in the past, possibly caused when the satellite passed through unstable orbital resonances. The tidal flexing of the ice may have heated the interior and strained the lithosphere, leading to the development of cracks and horst and graben faulting, which erased the old, dark terrain on 70 percent of the surface. The formation of the grooved terrain may also be connected with the early core formation and subsequent tidal heating of Ganymede's interior, which may have caused a slight expansion of Ganymede by one to six percent due to phase transitions in ice and thermal expansion. During subsequent evolution deep, hot water plumes may have risen from the core to the surface, leading to the tectonic deformation of the lithosphere. Radiogenic heating within the satellite is the most relevant current heat source, contributing, for instance, to ocean depth. Research models have found that if the orbital eccentricity were an order of magnitude greater than currently (as it may have been in the past), tidal heating would be a more substantial heat source than radiogenic heating. Cratering is seen on both types of terrain, but is especially extensive on the dark terrain: it appears to be saturated with impact craters and has evolved largely through impact events. The brighter, grooved terrain contains many fewer impact features, which have been only of minor importance to its tectonic evolution. The density of cratering indicates an age of 4 billion years for the dark terrain, similar to the highlands of the Moon, and a somewhat younger age for the grooved terrain (but how much younger is uncertain). Ganymede may have experienced a period of heavy cratering 3.5 to 4 billion years ago similar to that of the Moon. If true, the vast majority of impacts happened in that epoch, whereas the cratering rate has been much smaller since. Craters both overlay and are crosscut by the groove systems, indicating that some of the grooves are quite ancient. Relatively young craters with rays of ejecta are also visible. Ganymedian craters are flatter than those on the Moon and Mercury. This is probably due to the relatively weak nature of Ganymede's icy crust, which can (or could) flow and thereby soften the relief. 
Ancient craters whose relief has disappeared leave only a "ghost" of a crater known as a palimpsest. One significant feature on Ganymede is a dark plain named Galileo Regio, which contains a series of concentric grooves, or furrows, likely created during a period of geologic activity. Ganymede also has polar caps, likely composed of water frost. The frost extends to 40° latitude. These polar caps were first seen by the Voyager spacecraft. Theories on the formation of the caps include the migration of water to higher latitudes and the bombardment of the ice by plasma. Data from Galileo suggests the latter is correct. The presence of a magnetic field on Ganymede results in more intense charged particle bombardment of its surface in the unprotected polar regions; sputtering then leads to redistribution of water molecules, with frost migrating to locally colder areas within the polar terrain. A crater named Anat provides the reference point for measuring longitude on Ganymede. By definition, Anat is at 128° longitude. The 0° longitude directly faces Jupiter, and unless stated otherwise longitude increases toward the west. Internal structure Ganymede appears to be fully differentiated, with an internal structure consisting of an iron-sulfide–iron core, a silicate mantle, and outer layers of water ice and liquid water. The precise thicknesses of the different layers in the interior of Ganymede depend on the assumed composition of silicates (fraction of olivine and pyroxene) and amount of sulfur in the core. Ganymede has the lowest moment of inertia factor, 0.31, among the solid Solar System bodies. This is a consequence of its substantial water content and fully differentiated interior. Subsurface oceans In the 1970s, NASA scientists first suspected that Ganymede had a thick ocean between two layers of ice, one on the surface and one beneath a liquid ocean and atop the rocky mantle. In the 1990s, NASA's Galileo mission flew by Ganymede, and found indications of such a subsurface ocean. An analysis published in 2014, taking into account the realistic thermodynamics for water and effects of salt, suggests that Ganymede might have a stack of several ocean layers separated by different phases of ice, with the lowest liquid layer adjacent to the rocky mantle. Water–rock contact may be an important factor in the origin of life. The analysis also notes that the extreme depths involved (~800 km to the rocky "seafloor") mean that temperatures at the bottom of a convective (adiabatic) ocean can be up to 40 K higher than those at the ice–water interface. In March 2015, scientists reported that measurements with the Hubble Space Telescope of how the aurorae moved confirmed that Ganymede has a subsurface ocean. A large saltwater ocean affects Ganymede's magnetic field, and consequently, its aurorae. The evidence suggests that Ganymede's oceans might be the largest in the entire Solar System. These observations were later supported by Juno, which detected various salts and other compounds on Ganymede's surface, including hydrated sodium chloride, ammonium chloride, sodium bicarbonate, and possibly aliphatic aldehydes. These compounds were potentially deposited from Ganymede's ocean in past resurfacing events and were discovered to be most abundant in Ganymede's lower latitudes, shielded by its small magnetosphere. As a result of these findings, there is increasing speculation on the potential habitability of Ganymede's ocean. 
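To illustrate why a fully differentiated interior gives such a low moment of inertia factor (a uniform sphere would have 0.4), the sketch below evaluates a simple three-layer model. The layer radii and densities are illustrative guesses chosen for the example, not measured values.

```python
# Toy three-layer interior model of Ganymede (layer values are illustrative assumptions).
from math import pi

R = 2634e3  # outer radius in metres
layers = [             # (outer radius in m, density in kg/m^3)
    (800e3, 5900),     # assumed iron/iron-sulfide core
    (1800e3, 3400),    # assumed silicate mantle
    (R, 1150),         # assumed ice + liquid-water outer layer
]

mass = moment = 0.0
r_inner = 0.0
for r_outer, rho in layers:
    mass += rho * (4 / 3) * pi * (r_outer**3 - r_inner**3)      # shell mass
    moment += rho * (8 / 15) * pi * (r_outer**5 - r_inner**5)   # shell moment of inertia
    r_inner = r_outer

print(f"mean density ~ {mass / ((4 / 3) * pi * R**3) / 1000:.2f} g/cm^3")  # ~1.94, near the observed 1.936
print(f"moment of inertia factor ~ {moment / (mass * R**2):.2f}")          # ~0.31, well below 0.4
```

Concentrating the dense material near the centre is what pulls the factor below 0.4; the same mass spread uniformly through the sphere would not reproduce the measured value.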
Core The existence of a liquid, iron–nickel-rich core provides a natural explanation for the intrinsic magnetic field of Ganymede detected by Galileo spacecraft. The convection in the liquid iron, which has high electrical conductivity, is the most reasonable model of magnetic field generation. The density of the core is 5.5–6 g/cm3 and the silicate mantle is 3.4–3.6 g/cm3. The radius of this core may be up to 500 km. The temperature in the core of Ganymede is probably 1500–1700 K and pressure up to . Atmosphere and ionosphere In 1972, a team of Indian, British and American astronomers working in Java, Indonesia and Kavalur, India claimed that they had detected a thin atmosphere during an occultation, when it and Jupiter passed in front of a star. They estimated that the surface pressure was around 0.1 Pa (1 microbar). However, in 1979, Voyager 1 observed an occultation of the star κ Centauri during its flyby of Jupiter, with differing results. The occultation measurements were conducted in the far-ultraviolet spectrum at wavelengths shorter than 200 nm, which were much more sensitive to the presence of gases than the 1972 measurements made in the visible spectrum. No atmosphere was revealed by the Voyager data. The upper limit on the surface particle number density was found to be , which corresponds to a surface pressure of less than 2.5 μPa (25 picobar). The latter value is almost five orders of magnitude less than the 1972 estimate. Despite the Voyager data, evidence for a tenuous oxygen atmosphere (exosphere) on Ganymede, very similar to the one found on Europa, was found by the Hubble Space Telescope (HST) in 1995. HST actually observed airglow of atomic oxygen in the far-ultraviolet at the wavelengths 130.4 nm and 135.6 nm. Such an airglow is excited when molecular oxygen is dissociated by electron impacts, which is evidence of a significant neutral atmosphere composed predominantly of O2 molecules. The surface number density probably lies in the range, corresponding to the surface pressure of . These values are in agreement with Voyager's upper limit set in 1981. The oxygen is not evidence of life; it is thought to be produced when water ice on Ganymede's surface is split into hydrogen and oxygen by radiation, with the hydrogen then being more rapidly lost due to its low atomic mass. The airglow observed over Ganymede is not spatially homogeneous like that observed over Europa. HST observed two bright spots located in the northern and southern hemispheres, near ± 50° latitude, which is exactly the boundary between the open and closed field lines of the Ganymedian magnetosphere (see below). The bright spots are probably polar auroras, caused by plasma precipitation along the open field lines. The existence of a neutral atmosphere implies that an ionosphere should exist, because oxygen molecules are ionized by the impacts of the energetic electrons coming from the magnetosphere and by solar EUV radiation. However, the nature of the Ganymedian ionosphere is as controversial as the nature of the atmosphere. Some Galileo measurements found an elevated electron density near Ganymede, suggesting an ionosphere, whereas others failed to detect anything. The electron density near the surface is estimated by different sources to lie in the range 400–2,500 cm−3. As of 2008, the parameters of the ionosphere of Ganymede were not well constrained. Additional evidence of the oxygen atmosphere comes from spectral detection of gases trapped in the ice at the surface of Ganymede. 
The detection of ozone (O3) bands was announced in 1996. In 1997 spectroscopic analysis revealed the dimer (or diatomic) absorption features of molecular oxygen. Such an absorption can arise only if the oxygen is in a dense phase. The best candidate is molecular oxygen trapped in ice. The depth of the dimer absorption bands depends on latitude and longitude, rather than on surface albedo—they tend to decrease with increasing latitude on Ganymede, whereas O3 shows an opposite trend. Laboratory work has found that O2 would not cluster or bubble but would dissolve in ice at Ganymede's relatively warm surface temperature of 100 K (−173.15 °C). A search for sodium in the atmosphere, just after such a finding on Europa, turned up nothing in 1997. Sodium is at least 13 times less abundant around Ganymede than around Europa, possibly because of a relative deficiency at the surface or because the magnetosphere fends off energetic particles. Another minor constituent of the Ganymedian atmosphere is atomic hydrogen. Hydrogen atoms were observed as far as 3,000 km from Ganymede's surface. Their density on the surface is about . In 2021, water vapour was detected in the atmosphere of Ganymede. Magnetosphere The Galileo craft made six close flybys of Ganymede from 1995 to 2000 (G1, G2, G7, G8, G28 and G29) and discovered that Ganymede has a permanent (intrinsic) magnetic moment independent of the Jovian magnetic field. The value of the moment is about , which is three times larger than the magnetic moment of Mercury. The magnetic dipole is tilted with respect to the rotational axis of Ganymede by 176°, which means that it is directed against the Jovian magnetic moment. Its north pole lies below the orbital plane. The dipole magnetic field created by this permanent moment has a strength of 719 ± 2 nT at Ganymede's equator, which should be compared with the Jovian magnetic field at the distance of Ganymede—about 120 nT. The equatorial field of Ganymede is directed against the Jovian field, meaning reconnection is possible. The intrinsic field strength at the poles is two times that at the equator—1440 nT. The permanent magnetic moment carves a part of space around Ganymede, creating a tiny magnetosphere embedded inside that of Jupiter; it is the only moon in the Solar System known to possess the feature. Its diameter is 4–5 Ganymede radii. The Ganymedian magnetosphere has a region of closed field lines located below 30° latitude, where charged particles (electrons and ions) are trapped, creating a kind of radiation belt. The main ion species in the magnetosphere is single ionized oxygen (O+) which fits well with Ganymede's tenuous oxygen atmosphere. In the polar cap regions, at latitudes higher than 30°, magnetic field lines are open, connecting Ganymede with Jupiter's ionosphere. In these areas, the energetic (tens and hundreds of kiloelectronvolt) electrons and ions have been detected, which may cause the auroras observed around the Ganymedian poles. In addition, heavy ions precipitate continuously on Ganymede's polar surface, sputtering and darkening the ice. The interaction between the Ganymedian magnetosphere and Jovian plasma is in many respects similar to that of the solar wind and Earth's magnetosphere. The plasma co-rotating with Jupiter impinges on the trailing side of the Ganymedian magnetosphere much like the solar wind impinges on the Earth's magnetosphere. The main difference is the speed of plasma flow—supersonic in the case of Earth and subsonic in the case of Ganymede. 
Because of the subsonic flow, there is no bow shock off the trailing hemisphere of Ganymede. In addition to the intrinsic magnetic moment, Ganymede has an induced dipole magnetic field. Its existence is connected with the variation of the Jovian magnetic field near Ganymede. The induced moment is directed radially to or from Jupiter following the direction of the varying part of the planetary magnetic field. The induced magnetic moment is an order of magnitude weaker than the intrinsic one. The field strength of the induced field at the magnetic equator is about 60 nT—half of that of the ambient Jovian field. The induced magnetic field of Ganymede is similar to those of Callisto and Europa, indicating that Ganymede also has a subsurface water ocean with a high electrical conductivity. Given that Ganymede is completely differentiated and has a metallic core, its intrinsic magnetic field is probably generated in a similar fashion to the Earth's: as a result of conducting material moving in the interior. The magnetic field detected around Ganymede is likely to be caused by compositional convection in the core, if the magnetic field is the product of dynamo action, or magnetoconvection. Despite the presence of an iron core, Ganymede's magnetosphere remains enigmatic, particularly given that similar bodies lack the feature. Some research has suggested that, given its relatively small size, the core ought to have sufficiently cooled to the point where fluid motions, hence a magnetic field would not be sustained. One explanation is that the same orbital resonances proposed to have disrupted the surface also allowed the magnetic field to persist: with Ganymede's eccentricity pumped and tidal heating of the mantle increased during such resonances, reducing heat flow from the core, leaving it fluid and convective. Another explanation is a remnant magnetization of silicate rocks in the mantle, which is possible if the satellite had a more significant dynamo-generated field in the past. Radiation environment The radiation level at the surface of Ganymede is considerably lower than on Europa, being 50–80 mSv (5–8 rem) per day, an amount that would cause severe illness or death in human beings exposed for two months. Origin and evolution Ganymede probably formed by an accretion in Jupiter's subnebula, a disk of gas and dust surrounding Jupiter after its formation. The accretion of Ganymede probably took about 10,000 years, much shorter than the 100,000 years estimated for Callisto. The Jovian subnebula may have been relatively "gas-starved" when the Galilean satellites formed; this would have allowed for the lengthy accretion times required for Callisto. In contrast, Ganymede formed closer to Jupiter, where the subnebula was denser, which explains its shorter formation timescale. This relatively fast formation prevented the escape of accretional heat, which may have led to ice melt and differentiation: the separation of the rocks and ice. The rocks settled to the center, forming the core. In this respect, Ganymede is different from Callisto, which apparently failed to melt and differentiate early due to loss of the accretional heat during its slower formation. This hypothesis explains why the two Jovian moons look so dissimilar, despite their similar mass and composition. Alternative theories explain Ganymede's greater internal heating on the basis of tidal flexing or more intense pummeling by impactors during the Late Heavy Bombardment. 
In the latter case, modeling suggests that differentiation would become a runaway process at Ganymede but not Callisto. After formation, Ganymede's core largely retained the heat accumulated during accretion and differentiation, only slowly releasing it to the ice mantle. The mantle, in turn, transported it to the surface by convection. The decay of radioactive elements within rocks further heated the core, causing increased differentiation: an inner, iron–iron-sulfide core and a silicate mantle formed. With this, Ganymede became a fully differentiated body. By comparison, the radioactive heating of undifferentiated Callisto caused convection in its icy interior, which effectively cooled it and prevented large-scale melting of ice and rapid differentiation. The convective motions in Callisto have caused only a partial separation of rock and ice. Today, Ganymede continues to cool slowly. The heat being released from its core and silicate mantle enables the subsurface ocean to exist, whereas the slow cooling of the liquid Fe–FeS core causes convection and supports magnetic field generation. The current heat flux out of Ganymede is probably higher than that out of Callisto. A study from 2020 by Hirata, Suetsugu and Ohtsuki suggests that Ganymede probably was hit by a massive asteroid 4 billion years ago; an impact so violent that may have shifted the moon's axis. The study came to this conclusion analyzing images of the furrows system in the satellite's surface. Exploration Several spacecraft have performed close flybys of Ganymede: two Pioneer and two Voyager spacecraft made a single flyby each between 1973 and 1979; the Galileo spacecraft made six passes between 1996 and 2000; and the Juno spacecraft performed two flybys in 2019 and 2021. No spacecraft has yet orbited Ganymede, but the JUICE mission, which launched in April 2023, intends to do so. Completed flybys The first spacecraft to approach close to Ganymede was Pioneer 10, which performed a flyby in 1973 as it passed through the Jupiter system at high speed. Pioneer 11 made a similar flyby in 1974. Data sent back by the two spacecraft was used to determine the moon's physical characteristics and provided images of the surface with up to resolution. Pioneer 10's closest approach was 446,250 km, about 85 times Ganymede's diameter. Voyager 1 and Voyager 2 both studied Ganymede when passing through the Jupiter system in 1979. Data from those flybys were used to refine the size of Ganymede, revealing it was larger than Saturn's moon Titan, which was previously thought to have been bigger. Images from the Voyagers provided the first views of the moon's grooved surface terrain. The Pioneer and Voyager flybys were all at large distances and high speeds, as they flew on unbound trajectories through the Jupiter system. Better data can be obtained from a spacecraft which is orbiting Jupiter, as it can encounter Ganymede at a lower speed and adjust the orbit for a closer approach. In 1995, the Galileo spacecraft entered orbit around Jupiter and between 1996 and 2000 made six close flybys of Ganymede. These flybys were denoted G1, G2, G7, G8, G28 and G29. During the closest flyby (G2), Galileo passed just 264 km from the surface of Ganymede (five percent of the moon's diameter), which remains the closest approach by any spacecraft. During the G1 flyby in 1996, Galileo instruments detected Ganymede's magnetic field. Data from the Galileo flybys was used to discover the sub-surface ocean, which was announced in 2001. 
High spatial resolution spectra of Ganymede taken by Galileo were used to identify several non-ice compounds on the surface. The New Horizons spacecraft also observed Ganymede, but from a much larger distance as it passed through the Jupiter system in 2007 (en route to Pluto). The data were used to perform topographic and compositional mapping of Ganymede. Like Galileo, the Juno spacecraft orbits Jupiter. On December 25, 2019, Juno performed a distant flyby of Ganymede during its 24th orbit of Jupiter, at a range of . This flyby provided images of the moon's polar regions. In June 2021, Juno performed a second flyby, at a closer distance of . This encounter was designed to provide a gravity assist to reduce Juno's orbital period from 53 days to 43 days. Additional images of the surface were collected. Future missions The Jupiter Icy Moons Explorer (JUICE) will be the first to enter orbit around Ganymede itself. JUICE was launched on April 14, 2023. It is intended to perform its first flyby of Ganymede in 2031, then enter orbit of the moon in 2032. Once the spacecraft has consumed its propellant, JUICE is planned to be deorbited to impact Ganymede in February 2034. In addition to JUICE, NASA's Europa Clipper, which was launched in October 2024, will conduct four close flybys of Ganymede beginning in 2030. It may also crash into Ganymede at the end of its mission to aid JUICE in studying the surface's geochemistry. Cancelled proposals Several other missions have been proposed to fly by or orbit Ganymede, but were either not selected for funding or cancelled before launch. The Jupiter Icy Moons Orbiter would have studied Ganymede in greater detail. However, the mission was canceled in 2005. Another old proposal was called The Grandeur of Ganymede. A Ganymede orbiter based on the Juno probe was proposed in 2010 for the Planetary Science Decadal Survey. The mission was not supported, with the Decadal Survey preferring the Europa Clipper mission instead. The Europa Jupiter System Mission had a proposed launch date of 2020, and was a joint NASA and ESA proposal for exploration of many of Jupiter's moons including Ganymede. In February 2009 it was announced that ESA and NASA had given this mission priority ahead of the Titan Saturn System Mission. The mission was to consist of the NASA-led Jupiter Europa Orbiter, the ESA-led Jupiter Ganymede Orbiter, and possibly a JAXA-led Jupiter Magnetospheric Orbiter. The NASA and JAXA components were later cancelled, and ESA's appeared likely to be cancelled too, but in 2012 ESA announced it would go ahead alone. The European part of the mission became the Jupiter Icy Moons Explorer (JUICE). The Russian Space Research Institute proposed a Ganymede lander astrobiology mission called Laplace-P, possibly in partnership with JUICE. If selected, it would have been launched in 2023. The mission was cancelled due to a lack of funding in 2017.
Physical sciences
Solar System
null
54217
https://en.wikipedia.org/wiki/Pigeonhole%20principle
Pigeonhole principle
In mathematics, the pigeonhole principle states that if n items are put into m containers, with n > m, then at least one container must contain more than one item. For example, of three gloves, at least two must be right-handed or at least two must be left-handed, because there are three objects but only two categories of handedness to put them into. This seemingly obvious statement, a type of counting argument, can be used to demonstrate possibly unexpected results. For example, given that the population of London is more than one unit greater than the maximum number of hairs that can be on a human's head, the principle requires that there must be at least two people in London who have the same number of hairs on their heads. Although the pigeonhole principle appears as early as 1624 in a book attributed to Jean Leurechon, it is commonly called Dirichlet's box principle or Dirichlet's drawer principle after an 1834 treatment of the principle by Peter Gustav Lejeune Dirichlet under the name Schubfachprinzip ("drawer principle" or "shelf principle"). The principle has several generalizations and can be stated in various ways. In a more quantified version: for natural numbers k and m, if n = km + 1 objects are distributed among m sets, the pigeonhole principle asserts that at least one of the sets will contain at least k + 1 objects. For arbitrary n and m, this generalizes to k + 1 = ⌊(n − 1)/m⌋ + 1 = ⌈n/m⌉, where ⌊ ⌋ and ⌈ ⌉ denote the floor and ceiling functions, respectively. Though the principle's most straightforward application is to finite sets (such as pigeons and boxes), it is also used with infinite sets that cannot be put into one-to-one correspondence. To do so requires the formal statement of the pigeonhole principle: "there does not exist an injective function whose codomain is smaller than its domain". Advanced mathematical proofs like Siegel's lemma build upon this more general concept. Etymology Dirichlet published his works in both French and German, using either the German Schubfach or the French tiroir. The strict original meaning of these terms corresponds to the English drawer, that is, an open-topped box that can be slid in and out of the cabinet that contains it. (Dirichlet wrote about distributing pearls among drawers.) These terms morphed to pigeonhole in the sense of a small open space in a desk, cabinet, or wall for keeping letters or papers, metaphorically rooted in structures that house pigeons. Because furniture with pigeonholes is commonly used for storing or sorting things into many categories (such as letters in a post office or room keys in a hotel), the translation pigeonhole may be a better rendering of Dirichlet's original "drawer". That understanding of the term pigeonhole, referring to some furniture features, is fading—especially among those who do not speak English natively but as a lingua franca in the scientific world—in favor of the more pictorial interpretation, literally involving pigeons and holes. The suggestive (though not misleading) interpretation of "pigeonhole" as "dovecote" has lately found its way back to a German back-translation of the "pigeonhole principle" as the "Taubenschlagprinzip". Besides the original terms Schubfachprinzip in German and principe des tiroirs in French, other literal translations are still in use in Arabic, Bulgarian, Chinese, Danish, Dutch, Hungarian, Italian, Japanese, Persian, Polish, Portuguese, Swedish, Turkish, and Vietnamese. Examples Sock picking Suppose a drawer contains a mixture of black socks and blue socks, each of which can be worn on either foot.
You pull a number of socks from the drawer without looking. What is the minimum number of pulled socks required to guarantee a pair of the same color? By the pigeonhole principle (m = 2 holes, using one pigeonhole per color), the answer is three (n = 3 items). Either you have three of one color, or you have two of one color and one of the other. Hand shaking If n people can shake hands with one another (where n > 1), the pigeonhole principle shows that there is always a pair of people who will shake hands with the same number of people. In this application of the principle, the "hole" to which a person is assigned is the number of hands that person shakes. Since each person shakes hands with some number of people from 0 to n − 1, there are n possible holes. On the other hand, either the "0" hole, the "n − 1" hole, or both must be empty, for it is impossible (if n > 1) for some person to shake hands with everybody else while some person shakes hands with nobody. This leaves n people to be placed into at most n − 1 non-empty holes, so the principle applies. This hand-shaking example is equivalent to the statement that in any graph with more than one vertex, there is at least one pair of vertices that share the same degree. This can be seen by associating each person with a vertex and each edge with a handshake. Hair counting One can demonstrate there must be at least two people in London with the same number of hairs on their heads as follows. Since a typical human head has an average of around 150,000 hairs, it is reasonable to assume (as an upper bound) that no one has more than 1,000,000 hairs on their head (m = 1,000,000 holes). There are more than 1,000,000 people in London (n is bigger than 1 million items). Assigning a pigeonhole to each number of hairs on a person's head, and assigning people to pigeonholes according to the number of hairs on their heads, there must be at least two people assigned to the same pigeonhole by the 1,000,001st assignment (because they have the same number of hairs on their heads; or, n > m). Assuming London has 9.002 million people, it follows that at least ten Londoners have the same number of hairs, as having nine Londoners in each of the 1 million pigeonholes accounts for only 9 million people. For the average case (m = 150,000) with the constraint: fewest overlaps, there will be at most one person assigned to every pigeonhole and the 150,001st person assigned to the same pigeonhole as someone else. In the absence of this constraint, there may be empty pigeonholes because the "collision" happens before the 150,001st person. The principle just proves the existence of an overlap; it says nothing about the number of overlaps (which falls under the subject of probability distribution). There is a passing, satirical, allusion in English to this version of the principle in A History of the Athenian Society, prefixed to A Supplement to the Athenian Oracle: Being a Collection of the Remaining Questions and Answers in the Old Athenian Mercuries (printed for Andrew Bell, London, 1710). It seems that the question whether there were any two persons in the World that have an equal number of hairs on their head? had been raised in The Athenian Mercury before 1704. Perhaps the first written reference to the pigeonhole principle appears in a short sentence from the French Jesuit Jean Leurechon's 1622 work Selectæ Propositiones: "It is necessary that two men have the same number of hairs, écus, or other things, as each other." The full principle was spelled out two years later, with additional examples, in another book that has often been attributed to Leurechon, but might be by one of his students.
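The counting behind the hair example can be checked mechanically. The short Python sketch below is illustrative only; the population and hair-count figures are the rounded values used above, and the generalized pigeonhole bound ⌈n/m⌉ is applied directly.

```python
import math

# Pigeonhole counting for the hair example above (figures are the rounded
# values quoted in the text, not precise demographic data).
people = 9_002_000       # assumed population of London (n items)
hole_count = 1_000_000   # assumed upper bound on distinct hair counts (m holes)

# Generalized pigeonhole principle: some hole holds at least ceil(n/m) items.
guaranteed = math.ceil(people / hole_count)
print(guaranteed)        # -> 10: at least ten people share a hair count
```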
The birthday problem The birthday problem asks, for a set of randomly chosen people, what is the probability that some pair of them will have the same birthday? The problem itself is mainly concerned with counterintuitive probabilities, but we can also tell by the pigeonhole principle that among 367 people, there is at least one pair of people who share the same birthday with 100% probability, as there are only 366 possible birthdays to choose from. Team tournament Imagine seven people who want to play in a tournament of teams (n = 7 items), with a limitation of only four teams (m = 4 holes) to choose from. The pigeonhole principle tells us that they cannot all play for different teams; there must be at least one team featuring at least two of the seven players, since ⌈7/4⌉ = 2. Subset sum Any subset of size six from the set S = {1, 2, 3, ..., 9} must contain two elements whose sum is 10. The pigeonholes will be labeled by the two-element subsets {1,9}, {2,8}, {3,7}, {4,6} and the singleton {5}, five pigeonholes in all. When the six "pigeons" (elements of the size-six subset) are placed into these pigeonholes, each pigeon going into the pigeonhole that has it contained in its label, at least one of the pigeonholes labeled with a two-element subset will have two pigeons in it. Hashing Hashing in computer science is the process of mapping an arbitrarily large set of data to fixed-size values. This has applications in caching whereby large data sets can be stored by a reference to their representative values (their "hash codes") in a "hash table" for fast recall. Typically, the number of unique objects in a data set is larger than the number of available unique hash codes, and the pigeonhole principle guarantees in this case that hashing those objects is no guarantee of uniqueness: if all the objects in such a data set were hashed, some objects would necessarily share the same hash code. Uses and applications The principle can be used to prove that any lossless compression algorithm, provided it makes some inputs smaller (as "compression" suggests), will also make some other inputs larger. Otherwise, the set of all input sequences up to a given length could be mapped to the (much) smaller set of all sequences shorter than that length without collisions (because the compression is lossless), a possibility that the pigeonhole principle excludes. A notable problem in mathematical analysis is, for a fixed irrational number a, to show that the set of fractional parts {[na] : n an integer} is dense in [0, 1]. It is not easy to explicitly find integers n and m such that |na − m| < e, where e is a small positive number and a is some arbitrary irrational number. But if one takes an integer M with 1/M < e, then by the pigeonhole principle two of the M + 1 multiples a, 2a, ..., (M + 1)a must have fractional parts lying in the same integer subdivision of size 1/M (there are only M such subdivisions between consecutive integers). Taking the difference of the corresponding indices gives an integer n (either n2 − n1 or n1 − n2) whose fractional part satisfies [na] < 1/M < e, which shows that 0 is a limit point of {[na]}. Repeatedly adding this multiple then brings the fractional parts within e of any prescribed point of [0, 1], which establishes density. Variants occur in a number of proofs. In the proof of the pumping lemma for regular languages, a version that mixes finite and infinite sets is used: If infinitely many objects are placed into finitely many boxes, then two objects share a box.
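Returning to the subset-sum example above, the claim is small enough to verify exhaustively. The following Python sketch (illustrative only; it simply brute-forces all 6-element subsets) confirms that every size-six subset of {1, ..., 9} contains a pair summing to 10.

```python
from itertools import combinations

# Brute-force check of the subset-sum example: every 6-element subset of
# {1, ..., 9} must contain two elements whose sum is 10.
S = range(1, 10)

def has_pair_summing_to_ten(subset):
    return any(a + b == 10 for a, b in combinations(subset, 2))

subsets = list(combinations(S, 6))
assert all(has_pair_summing_to_ten(sub) for sub in subsets)
print("verified for all", len(subsets), "subsets")
```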
In Fisk's solution to the Art gallery problem a sort of converse is used: If n objects are placed into k boxes, then there is a box containing at most ⌊n/k⌋ objects. Alternative formulations The following are alternative formulations of the pigeonhole principle. 1. If n objects are distributed over m places, and if n > m, then some place receives at least two objects. 2. (equivalent formulation of 1) If n objects are distributed over n places in such a way that no place receives more than one object, then each place receives exactly one object. 3. (generalization of 1) If A and B are sets, and the cardinality of A is greater than the cardinality of B, then there is no injective function from A to B. 4. If n objects are distributed over m places, and if n < m, then some place receives no object. 5. (equivalent formulation of 4) If n objects are distributed over n places in such a way that no place receives no object, then each place receives exactly one object. 6. (generalization of 4) If A and B are sets, and the cardinality of A is less than the cardinality of B, then there is no surjective function from A to B. Strong form Let q1, q2, ..., qn be positive integers. If q1 + q2 + ... + qn − n + 1 objects are distributed into n boxes, then either the first box contains at least q1 objects, or the second box contains at least q2 objects, ..., or the nth box contains at least qn objects. The simple form is obtained from this by taking q1 = q2 = ... = qn = 2, which gives n + 1 objects. Taking q1 = q2 = ... = qn = r gives the more quantified version of the principle, namely: Let n and r be positive integers. If n(r − 1) + 1 objects are distributed into n boxes, then at least one of the boxes contains r or more of the objects. This can also be stated as: if k discrete objects are to be allocated to n containers, then at least one container must hold at least ⌈k/n⌉ objects, where ⌈x⌉ is the ceiling function, denoting the smallest integer larger than or equal to x. Similarly, at least one container must hold no more than ⌊k/n⌋ objects, where ⌊x⌋ is the floor function, denoting the largest integer smaller than or equal to x. Generalizations of the pigeonhole principle A probabilistic generalization of the pigeonhole principle states that if n pigeons are randomly put into m pigeonholes with uniform probability 1/m, then at least one pigeonhole will hold more than one pigeon with probability 1 − (m)_n / m^n, where (m)_n is the falling factorial m(m − 1)(m − 2)⋯(m − n + 1). For n = 0 and for n = 1 (and m > 0), that probability is zero; in other words, if there is just one pigeon, there cannot be a conflict. For n > m (more pigeons than pigeonholes) it is one, in which case it coincides with the ordinary pigeonhole principle. But even if the number of pigeons does not exceed the number of pigeonholes (n ≤ m), due to the random nature of the assignment of pigeons to pigeonholes there is often a substantial chance that clashes will occur. For example, if 2 pigeons are randomly assigned to 4 pigeonholes, there is a 25% chance that at least one pigeonhole will hold more than one pigeon; for 5 pigeons and 10 holes, that probability is 69.76%; and for 10 pigeons and 20 holes it is about 93.45%. If the number of holes stays fixed, there is always a greater probability of a pair when you add more pigeons. This problem is treated at much greater length in the birthday paradox. A further probabilistic generalization is that when a real-valued random variable X has a finite mean E(X), then the probability is nonzero that X is greater than or equal to E(X), and similarly the probability is nonzero that X is less than or equal to E(X). To see that this implies the standard pigeonhole principle, take any fixed arrangement of n pigeons into m holes and let X be the number of pigeons in a hole chosen uniformly at random.
The mean of X is n/m, so if there are more pigeons than holes the mean is greater than one. Therefore, X is sometimes at least 2. Infinite sets The pigeonhole principle can be extended to infinite sets by phrasing it in terms of cardinal numbers: if the cardinality of set A is greater than the cardinality of set B, then there is no injection from A to B. However, in this form the principle is tautological, since the meaning of the statement that the cardinality of set A is greater than the cardinality of set B is exactly that there is no injective map from A to B. However, adding at least one element to a finite set is sufficient to ensure that the cardinality increases. Another way to phrase the pigeonhole principle for finite sets is similar to the principle that finite sets are Dedekind finite: Let A and B be finite sets. If there is a surjection from A to B that is not injective, then no surjection from A to B is injective. In fact no function of any kind from A to B is injective. This is not true for infinite sets: Consider the function on the natural numbers that sends 1 and 2 to 1, 3 and 4 to 2, 5 and 6 to 3, and so on. There is a similar principle for infinite sets: If uncountably many pigeons are stuffed into countably many pigeonholes, there will exist at least one pigeonhole having uncountably many pigeons stuffed into it. This principle is not a generalization of the pigeonhole principle for finite sets however: It is in general false for finite sets. In technical terms it says that if A and B are finite sets such that any surjective function from A to B is not injective, then there exists an element b of B such that there exists a bijection between the preimage of b and A. This is a quite different statement, and is absurd for large finite cardinalities. Quantum mechanics Yakir Aharonov et al. presented arguments that quantum mechanics may violate the pigeonhole principle, and proposed interferometric experiments to test the pigeonhole principle in quantum mechanics. Later research has called this conclusion into question. In a January 2015 arXiv preprint, researchers Alastair Rae and Ted Forgan at the University of Birmingham performed a theoretical wave function analysis, employing the standard pigeonhole principle, on the flight of electrons at various energies through an interferometer. If the electrons had no interaction strength at all, they would each produce a single, perfectly circular peak. At high interaction strength, each electron produces four distinct peaks, for a total of 12 peaks on the detector; these peaks are the result of the four possible interactions each electron could experience (alone, together with the first other particle only, together with the second other particle only, or all three together). If the interaction strength was fairly low, as would be the case in many real experiments, the deviation from a zero-interaction pattern would be nearly indiscernible, much smaller than the lattice spacing of atoms in solids, such as the detectors used for observing these patterns. This would make it very difficult or impossible to distinguish a weak-but-nonzero interaction strength from no interaction whatsoever, and thus give an illusion of three electrons that did not interact despite all three passing through two paths.
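Returning to the probabilistic generalization above, the quoted collision probabilities can be checked directly. The following Python sketch (illustrative only) evaluates 1 − (m)_n / m^n for the three examples given in the text.

```python
# Check the collision probabilities quoted for the probabilistic
# generalization: P(some hole holds > 1 pigeon) = 1 - (m)_n / m^n.
def falling_factorial(m, n):
    result = 1
    for i in range(n):
        result *= m - i
    return result

def collision_probability(n_pigeons, m_holes):
    return 1 - falling_factorial(m_holes, n_pigeons) / m_holes**n_pigeons

for n, m in [(2, 4), (5, 10), (10, 20)]:
    print(n, "pigeons,", m, "holes ->",
          round(collision_probability(n, m) * 100, 2), "%")
# Output: 25.0 %, 69.76 %, 93.45 % -- matching the figures in the text.
```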
Mathematics
Combinatorics
null
54232
https://en.wikipedia.org/wiki/Reinforced%20concrete
Reinforced concrete
Reinforced concrete, also called ferroconcrete, is a composite material in which concrete's relatively low tensile strength and ductility are compensated for by the inclusion of reinforcement having higher tensile strength or ductility. The reinforcement is usually, though not necessarily, steel reinforcing bars (known as rebar) and is usually embedded passively in the concrete before the concrete sets. However, post-tensioning is also employed as a technique to reinforce the concrete. In terms of volume used annually, it is one of the most common engineering materials. In corrosion engineering terms, when designed correctly, the alkalinity of the concrete protects the steel rebar from corrosion. Description Reinforcing schemes are generally designed to resist tensile stresses in particular regions of the concrete that might cause unacceptable cracking and/or structural failure. Modern reinforced concrete can contain varied reinforcing materials made of steel, polymers or alternate composite material in conjunction with rebar or not. Reinforced concrete may also be permanently stressed (concrete in compression, reinforcement in tension), so as to improve the behavior of the final structure under working loads. In the United States, the most common methods of doing this are known as pre-tensioning and post-tensioning. For a strong, ductile and durable construction the reinforcement needs to have at least the following properties: High relative strength High toleration of tensile strain Good bond to the concrete, irrespective of pH, moisture, and similar factors Thermal compatibility, not causing unacceptable stresses (such as expansion or contraction) in response to changing temperatures. Durability in the concrete environment, irrespective of corrosion or sustained stress for example. History French builder François Coignet was the first to use iron-reinforced concrete as a building technique. In 1853, Coignet built for himself the first iron-reinforced concrete structure, a four-story house at 72 rue Charles Michels in the suburbs of Paris. Coignet's descriptions of reinforcing concrete suggest that he did not do it as a means of adding strength to the concrete but to keep walls in monolithic construction from overturning. The 1872–73 Pippen Building in Brooklyn, although not designed by Coignet, stands as a testament to his technique. In 1854, English builder William B. Wilkinson reinforced the concrete roof and floors in the two-story house he was constructing. His positioning of the reinforcement demonstrated that, unlike his predecessors, he had knowledge of tensile stresses. Between 1869 and 1870, Henry Eton designed, and Messrs W & T Phillips of London constructed, the wrought-iron-reinforced Homersfield Bridge, with a 50-foot (15.25-metre) span, over the river Waveney, between the English counties of Norfolk and Suffolk. In 1877, Thaddeus Hyatt published a report entitled An Account of Some Experiments with Portland-Cement-Concrete Combined with Iron as a Building Material, with Reference to Economy of Metal in Construction and for Security against Fire in the Making of Roofs, Floors, and Walking Surfaces, in which he reported his experiments on the behaviour of reinforced concrete. His work played a major role in the evolution of concrete construction as a proven and studied science. Without Hyatt's work, more dangerous trial and error methods might have been depended on for the advancement in the technology.
Joseph Monier, a 19th-century French gardener, was a pioneer in the development of structural, prefabricated and reinforced concrete, having been dissatisfied with the existing materials available for making durable flowerpots. He was granted a patent for reinforcing concrete flowerpots by means of combining a wire mesh with a mortar shell. In 1877, Monier was granted another patent for a more advanced technique of reinforcing concrete columns and girders, using iron rods placed in a grid pattern. Though Monier undoubtedly knew that reinforcing concrete would improve its inner cohesion, it is not clear whether he even knew how much the tensile strength of concrete was improved by the reinforcing. Before the 1870s, the use of concrete construction, though dating back to the Roman Empire, and having been reintroduced in the early 19th century, was not yet a proven scientific technology. Ernest L. Ransome, an English-born engineer, was an early innovator of reinforced concrete techniques at the end of the 19th century. Using the knowledge of reinforced concrete developed during the previous 50 years, Ransome improved nearly all the styles and techniques of the earlier inventors of reinforced concrete. Ransome's key innovation was to twist the reinforcing steel bar, thereby improving its bond with the concrete. Gaining increasing fame from his concrete-constructed buildings, Ransome was able to build two of the first reinforced concrete bridges in North America. One of his bridges still stands on Shelter Island in New York's East End. One of the first concrete buildings constructed in the United States was a private home designed by William Ward, completed in 1876. The home was particularly designed to be fireproof. G. A. Wayss was a German civil engineer and a pioneer of iron and steel concrete construction. In 1879, Wayss bought the German rights to Monier's patents and, in 1884, his firm, Wayss & Freytag, made the first commercial use of reinforced concrete. Up until the 1890s, Wayss and his firm greatly contributed to the advancement of Monier's system of reinforcing, establishing it as a well-developed scientific technology. One of the first skyscrapers made with reinforced concrete was the 16-story Ingalls Building in Cincinnati, constructed in 1904. The first reinforced concrete building in Southern California was the Laughlin Annex in downtown Los Angeles, constructed in 1905. In 1906, 16 building permits were reportedly issued for reinforced concrete buildings in the City of Los Angeles, including the Temple Auditorium and 8-story Hayward Hotel. In 1906, a partial collapse of the Bixby Hotel in Long Beach killed 10 workers during construction when shoring was removed prematurely. That event spurred scrutiny of concrete erection practices and building inspections. The structure was constructed of reinforced concrete frames with hollow clay tile ribbed flooring and hollow clay tile infill walls. That practice was strongly questioned by experts and recommendations for "pure" concrete construction were made, using reinforced concrete for the floors and walls as well as the frames. In April 1904, Julia Morgan, an American architect and engineer who pioneered the aesthetic use of reinforced concrete, completed her first reinforced concrete structure, El Campanil, a bell tower at Mills College, which is located across the bay from San Francisco.
Two years later, El Campanil survived the 1906 San Francisco earthquake without any damage, which helped build her reputation and launch her prolific career. The 1906 earthquake also changed the public's initial resistance to reinforced concrete as a building material, which had been criticized for its perceived dullness. In 1908, the San Francisco Board of Supervisors changed the city's building codes to allow wider use of reinforced concrete. In 1906, the National Association of Cement Users (NACU) published Standard No. 1 and, in 1910, the Standard Building Regulations for the Use of Reinforced Concrete. Use in construction Many different types of structures and components of structures can be built using reinforced concrete elements including slabs, walls, beams, columns, foundations, frames and more. Reinforced concrete can be classified as precast or cast-in-place concrete. Designing and implementing the most efficient floor system is key to creating optimal building structures. Small changes in the design of a floor system can have significant impact on material costs, construction schedule, ultimate strength, operating costs, occupancy levels and end use of a building. Without reinforcement, constructing modern structures with concrete material would not be possible. Reinforced concrete elements When reinforced concrete elements are used in construction, they exhibit characteristic behavior under external loads and may be subject to tension, compression, bending, shear, and/or torsion. Behavior Materials Concrete is a mixture of coarse (stone or brick chips) and fine (generally sand and/or crushed stone) aggregates with a paste of binder material (usually Portland cement) and water. When cement is mixed with a small amount of water, it hydrates to form microscopic opaque crystal lattices encapsulating and locking the aggregate into a rigid shape. The aggregates used for making concrete should be free from harmful substances like organic impurities, silt, clay, lignite, etc. Typical concrete mixes have high resistance to compressive stresses (about ); however, any appreciable tension (e.g., due to bending) will break the microscopic rigid lattice, resulting in cracking and separation of the concrete. For this reason, typical non-reinforced concrete must be well supported to prevent the development of tension. If a material with high strength in tension, such as steel, is placed in concrete, then the composite material, reinforced concrete, resists not only compression but also bending and other direct tensile actions. A composite section where the concrete resists compression and reinforcement "rebar" resists tension can be made into almost any shape and size for the construction industry. Key characteristics Three physical characteristics give reinforced concrete its special properties: The coefficient of thermal expansion of concrete is similar to that of steel, eliminating large internal stresses due to differences in thermal expansion or contraction. When the cement paste within the concrete hardens, it conforms to the surface details of the steel, permitting any stress to be transmitted efficiently between the different materials. Usually steel bars are roughened or corrugated to further improve the bond or cohesion between the concrete and steel.
The alkaline chemical environment provided by the alkali reserve (KOH, NaOH) and the portlandite (calcium hydroxide) contained in the hardened cement paste causes a passivating film to form on the surface of the steel, making it much more resistant to corrosion than it would be in neutral or acidic conditions. When the cement paste exposed to the air and meteoric water reacts with the atmospheric CO2, the portlandite and the calcium silicate hydrate (CSH) of the hardened cement paste become progressively carbonated and the high pH gradually decreases from 13.5–12.5 to 8.5, the pH of water in equilibrium with calcite (calcium carbonate); the steel is then no longer passivated. As a rule of thumb, and only to give an idea of the orders of magnitude involved, steel is protected at pH above ~11 but starts to corrode below ~10, depending on steel characteristics and local physico-chemical conditions, when concrete becomes carbonated. Carbonation of concrete and chloride ingress are amongst the chief reasons for the failure of reinforcement bars in concrete. The relative cross-sectional area of steel required for typical reinforced concrete is usually quite small and varies from 1% for most beams and slabs to 6% for some columns. Reinforcing bars are normally round in cross-section and vary in diameter. Reinforced concrete structures sometimes have provisions such as ventilated hollow cores to control their moisture and humidity. The distribution of the strength characteristics of the concrete (in spite of reinforcement) along the cross-section of vertical reinforced concrete elements is inhomogeneous. Mechanism of composite action of reinforcement and concrete The reinforcement in an RC structure, such as a steel bar, has to undergo the same strain or deformation as the surrounding concrete in order to prevent discontinuity, slip or separation of the two materials under load. Maintaining composite action requires transfer of load between the concrete and steel. The direct stress is transferred from the concrete to the bar interface so as to change the tensile stress in the reinforcing bar along its length. This load transfer is achieved by means of bond (anchorage) and is idealized as a continuous stress field that develops in the vicinity of the steel-concrete interface. The reasons that the two different materials, concrete and steel, can work together are as follows: (1) Reinforcement can be well bonded to the concrete, thus they can jointly resist external loads and deform. (2) The thermal expansion coefficients of concrete and steel are so close ( to for concrete and for steel) that the thermal stress-induced damage to the bond between the two components can be prevented. (3) Concrete can protect the embedded steel from corrosion and high-temperature induced softening. Anchorage (bond) in concrete: Codes of specifications Because the actual bond stress varies along the length of a bar anchored in a zone of tension, current international codes of specifications use the concept of development length rather than bond stress. The main requirement for safety against bond failure is to provide a sufficient extension of the length of the bar beyond the point where the steel is required to develop its yield stress, and this length must be at least equal to its development length. However, if the actual available length is inadequate for full development, special anchorages must be provided, such as cogs or hooks or mechanical end plates.
The same concept applies to the lap splice lengths mentioned in the codes, where splices (overlaps) are provided between two adjacent bars in order to maintain the required continuity of stress in the splice zone. Anticorrosion measures In wet and cold climates, reinforced concrete for roads, bridges, parking structures and other structures that may be exposed to deicing salt may benefit from use of corrosion-resistant reinforcement such as uncoated, low carbon/chromium (micro composite), epoxy-coated, hot dip galvanized or stainless steel rebar. Good design and a well-chosen concrete mix will provide additional protection for many applications. Uncoated, low carbon/chromium rebar looks similar to standard carbon steel rebar due to its lack of a coating; its highly corrosion-resistant features are inherent in the steel microstructure. It can be identified by the unique ASTM specified mill marking on its smooth, dark charcoal finish. Epoxy-coated rebar can easily be identified by the light green color of its epoxy coating. Hot dip galvanized rebar may be bright or dull gray depending on length of exposure, and stainless rebar exhibits a typical white metallic sheen that is readily distinguishable from carbon steel reinforcing bar. The relevant ASTM standard specifications are A1035/A1035M Standard Specification for Deformed and Plain Low-carbon, Chromium, Steel Bars for Concrete Reinforcement; A767 Standard Specification for Hot Dip Galvanized Reinforcing Bars; A775 Standard Specification for Epoxy Coated Steel Reinforcing Bars; and A955 Standard Specification for Deformed and Plain Stainless Bars for Concrete Reinforcement. Another, cheaper way of protecting rebars is coating them with zinc phosphate. Zinc phosphate slowly reacts with calcium cations and the hydroxyl anions present in the cement pore water and forms a stable hydroxyapatite layer. Penetrating sealants typically must be applied some time after curing. Sealants include paint, plastic foams, films and aluminum foil, felts or fabric mats sealed with tar, and layers of bentonite clay, sometimes used to seal roadbeds. Corrosion inhibitors, such as calcium nitrite [Ca(NO2)2], can also be added to the water mix before pouring concrete. Generally, 1–2 wt. % of [Ca(NO2)2] with respect to cement weight is needed to prevent corrosion of the rebars. The nitrite anion is a mild oxidizer that oxidizes the soluble and mobile ferrous ions (Fe2+) present at the surface of the corroding steel and causes them to precipitate as an insoluble ferric hydroxide (Fe(OH)3). This causes the passivation of steel at the anodic oxidation sites. Nitrite is a much more active corrosion inhibitor than nitrate, which is a less powerful oxidizer of the divalent iron. Reinforcement and terminology of beams A beam bends under bending moment, resulting in a small curvature. At the outer face (tensile face) of the curvature the concrete experiences tensile stress, while at the inner face (compressive face) it experiences compressive stress. A singly reinforced beam is one in which the concrete element is only reinforced near the tensile face and the reinforcement, called tension steel, is designed to resist the tension. A doubly reinforced beam is one in which, besides the tensile reinforcement, the concrete element is also reinforced near the compressive face to help the concrete resist compression and take stresses. The latter reinforcement is called compression steel.
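As a simplified illustration of how the tension steel of a singly reinforced section is checked in flexure, the following Python sketch uses the standard rectangular-stress-block idealization. All dimensions and material strengths are assumed example values, not figures from this article, and the calculation is not a code-compliant design.

```python
# Simplified flexural check of a singly reinforced rectangular section,
# using the usual rectangular stress block idealization. All values below
# are assumed example inputs, not data taken from the article.
fc = 30.0      # concrete compressive strength, MPa (assumed)
fy = 500.0     # steel yield strength, MPa (assumed)
b = 300.0      # section width, mm (assumed)
d = 550.0      # effective depth to the tension steel, mm (assumed)
As = 1500.0    # area of tension steel, mm^2 (assumed)

# Depth of the equivalent rectangular compression block at nominal strength.
a = As * fy / (0.85 * fc * b)

# Nominal moment capacity: steel tensile force times the lever arm (d - a/2).
Mn = As * fy * (d - a / 2)          # in N*mm
print("compression block depth a =", round(a, 1), "mm")
print("nominal moment capacity   =", round(Mn / 1e6, 1), "kN*m")
```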
When the compression zone of a concrete is inadequate to resist the compressive moment (positive moment), extra reinforcement has to be provided if the architect limits the dimensions of the section. An under-reinforced beam is one in which the tension capacity of the tensile reinforcement is smaller than the combined compression capacity of the concrete and the compression steel (under-reinforced at tensile face). When the reinforced concrete element is subject to increasing bending moment, the tension steel yields while the concrete does not reach its ultimate failure condition. As the tension steel yields and stretches, an "under-reinforced" concrete also yields in a ductile manner, exhibiting a large deformation and warning before its ultimate failure. In this case the yield stress of the steel governs the design. An over-reinforced beam is one in which the tension capacity of the tension steel is greater than the combined compression capacity of the concrete and the compression steel (over-reinforced at tensile face). So the "over-reinforced concrete" beam fails by crushing of the compressive-zone concrete before the tension zone steel yields, which does not provide any warning before failure as the failure is instantaneous. A balanced-reinforced beam is one in which both the compressive and tensile zones reach yielding at the same imposed load on the beam, and the concrete will crush and the tensile steel will yield at the same time. This design criterion is, however, as risky as over-reinforced concrete, because failure is sudden, with the concrete crushing at the same time as the tensile steel yields, which gives very little warning of distress in tension failure. Steel-reinforced concrete moment-carrying elements should normally be designed to be under-reinforced so that users of the structure will receive warning of impending collapse. The characteristic strength is the strength of a material where less than 5% of the specimens show lower strength. The design strength or nominal strength is the strength of a material, including a material-safety factor. The value of the safety factor generally ranges from 0.75 to 0.85 in permissible stress design. The ultimate limit state is the theoretical failure point with a certain probability. It is stated under factored loads and factored resistances. Reinforced concrete structures are normally designed according to rules and regulations or recommendations of a code such as ACI-318, CEB, Eurocode 2 or the like. WSD, USD or LRFD methods are used in design of RC structural members. Analysis and design of RC members can be carried out by using linear or non-linear approaches. When applying safety factors, building codes normally propose linear approaches, but non-linear approaches for some cases. Examples of non-linear numerical simulation and calculation can be found in the references. Prestressed concrete Prestressing concrete is a technique that greatly increases the load-bearing strength of concrete beams. The reinforcing steel in the bottom part of the beam, which will be subjected to tensile forces when in service, is placed in tension before the concrete is poured around it. Once the concrete has hardened, the tension on the reinforcing steel is released, placing a built-in compressive force on the concrete. When loads are applied, the reinforcing steel takes on more stress and the compressive force in the concrete is reduced, but does not become a tensile force.
Since the concrete is always under compression, it is less subject to cracking and failure. Common failure modes of steel reinforced concrete Reinforced concrete can fail due to inadequate strength, leading to mechanical failure, or due to a reduction in its durability. Corrosion and freeze/thaw cycles may damage poorly designed or constructed reinforced concrete. When rebar corrodes, the oxidation products (rust) expand and tend to flake, cracking the concrete and unbonding the rebar from the concrete. Typical mechanisms leading to durability problems are discussed below. Mechanical failure Cracking of the concrete section is nearly impossible to prevent; however, the size and location of cracks can be limited and controlled by appropriate reinforcement, control joints, curing methodology and concrete mix design. Cracking can allow moisture to penetrate and corrode the reinforcement. This is a serviceability failure in limit state design. Cracking is normally the result of an inadequate quantity of rebar, or rebar spaced at too great a distance. The concrete cracks either under excess loading, or due to internal effects such as early thermal shrinkage while it cures. Ultimate failure leading to collapse can be caused by crushing the concrete, which occurs when compressive stresses exceed its strength, by yielding or failure of the rebar when bending or shear stresses exceed the strength of the reinforcement, or by bond failure between the concrete and the rebar. Carbonation Carbonation, or neutralisation, is a chemical reaction between carbon dioxide in the air and calcium hydroxide and hydrated calcium silicate in the concrete. When a concrete structure is designed, it is usual to specify the concrete cover for the rebar (the depth of the rebar within the object). The minimum concrete cover is normally regulated by design or building codes. If the reinforcement is too close to the surface, early failure due to corrosion may occur. The concrete cover depth can be measured with a cover meter. However, carbonated concrete incurs a durability problem only when there is also sufficient moisture and oxygen to cause electropotential corrosion of the reinforcing steel. One method of testing a structure for carbonation is to drill a fresh hole in the surface and then treat the cut surface with phenolphthalein indicator solution. This solution turns pink when in contact with alkaline concrete, making it possible to see the depth of carbonation. Using an existing hole does not suffice because the exposed surface will already be carbonated. Chlorides Chlorides can promote the corrosion of embedded rebar if present in sufficiently high concentration. Chloride anions induce both localized corrosion (pitting corrosion) and generalized corrosion of steel reinforcements. For this reason, one should use only fresh raw water or potable water for mixing concrete, ensure that the coarse and fine aggregates do not contain chlorides, and avoid admixtures that might contain chlorides. It was once common for calcium chloride to be used as an admixture to promote rapid set-up of the concrete. It was also mistakenly believed that it would prevent freezing. However, this practice fell into disfavor once the deleterious effects of chlorides became known. It should be avoided whenever possible.
The use of de-icing salts on roadways, used to lower the freezing point of water, is probably one of the primary causes of premature failure of reinforced or prestressed concrete bridge decks, roadways, and parking garages. The use of epoxy-coated reinforcing bars and the application of cathodic protection have mitigated this problem to some extent. Also, FRP (fiber-reinforced polymer) rebars are known to be less susceptible to chlorides. Properly designed concrete mixtures that have been allowed to cure properly are effectively impervious to the effects of de-icers. Another important source of chloride ions is sea water. Sea water contains by weight approximately 3.5% salts. These salts include sodium chloride, magnesium sulfate, calcium sulfate, and bicarbonates. In water these salts dissociate into free ions (Na+, Mg2+, Cl−, SO42−, HCO3−) and migrate with the water into the capillaries of the concrete. Chloride ions, which make up about 50% of these ions, are particularly aggressive as a cause of corrosion of carbon steel reinforcement bars. In the 1960s and 1970s it was also relatively common for magnesite, a chloride-rich carbonate mineral, to be used as a floor-topping material. This was done principally as a levelling and sound attenuating layer. However, it is now known that when these materials come into contact with moisture they produce a weak solution of hydrochloric acid due to the presence of chlorides in the magnesite. Over a period of time (typically decades), the solution causes corrosion of the embedded rebars. This was most commonly found in wet areas or areas repeatedly exposed to moisture. Alkali silica reaction This is a reaction of amorphous silica (chalcedony, chert, siliceous limestone) sometimes present in the aggregates with the hydroxyl ions (OH−) from the cement pore solution. Poorly crystallized silica (SiO2) dissolves and dissociates at high pH (12.5–13.5) in alkaline water. The soluble dissociated silicic acid reacts in the porewater with the calcium hydroxide (portlandite) present in the cement paste to form an expansive calcium silicate hydrate (CSH). The alkali–silica reaction (ASR) causes localised swelling responsible for tensile stress and cracking. The conditions required for alkali silica reaction are threefold: (1) aggregate containing an alkali-reactive constituent (amorphous silica), (2) sufficient availability of hydroxyl ions (OH−), and (3) sufficient moisture, above 75% relative humidity (RH) within the concrete. This phenomenon is sometimes popularly referred to as "concrete cancer". This reaction occurs independently of the presence of rebars; massive concrete structures such as dams can be affected. Conversion of high alumina cement Resistant to weak acids and especially sulfates, this cement cures quickly and has very high durability and strength. It was frequently used after World War II to make precast concrete objects. However, it can lose strength with heat or time (conversion), especially when not properly cured. After the collapse of three roofs made of prestressed concrete beams using high alumina cement, this cement was banned in the UK in 1976. Subsequent inquiries into the matter showed that the beams were improperly manufactured, but the ban remained. Sulfates Sulfates (SO4) in the soil or in groundwater, in sufficient concentration, can react with the Portland cement in concrete causing the formation of expansive products, e.g., ettringite or thaumasite, which can lead to early failure of the structure.
The most typical attack of this type is on concrete slabs and foundation walls at grades where the sulfate ion, via alternate wetting and drying, can increase in concentration. As the concentration increases, the attack on the Portland cement can begin. For buried structures such as pipe, this type of attack is much rarer, especially in the eastern United States. The sulfate ion concentration increases much slower in the soil mass and is especially dependent upon the initial amount of sulfates in the native soil. A chemical analysis of soil borings to check for the presence of sulfates should be undertaken during the design phase of any project involving concrete in contact with the native soil. If the concentrations are found to be aggressive, various protective coatings can be applied. Also, in the US ASTM C150 Type 5 Portland cement can be used in the mix. This type of cement is designed to be particularly resistant to a sulfate attack. Steel plate construction In steel plate construction, stringers join parallel steel plates. The plate assemblies are fabricated off site, and welded together on-site to form steel walls connected by stringers. The walls become the form into which concrete is poured. Steel plate construction speeds reinforced concrete construction by cutting out the time-consuming on-site manual steps of tying rebar and building forms. The method results in excellent strength because the steel is on the outside, where tensile forces are often greatest. Fiber-reinforced concrete Fiber reinforcement is mainly used in shotcrete, but can also be used in normal concrete. Fiber-reinforced normal concrete is mostly used for on-ground floors and pavements, but can also be considered for a wide range of construction parts (beams, pillars, foundations, etc.), either alone or with hand-tied rebars. Concrete reinforced with fibers (which are usually steel, glass, plastic fibers) or cellulose polymer fiber is less expensive than hand-tied rebar. The shape, dimension, and length of the fiber are important. A thin and short fiber, for example short, hair-shaped glass fiber, is only effective during the first hours after pouring the concrete (its function is to reduce cracking while the concrete is stiffening), but it will not increase the concrete tensile strength. A normal-size fiber for European shotcrete (1 mm diameter, 45 mm length—steel or plastic) will increase the concrete's tensile strength. Fiber reinforcement is most often used to supplement or partially replace primary rebar, and in some cases it can be designed to fully replace rebar. Steel is the strongest commonly available fiber, and comes in different lengths (30 to 80 mm in Europe) and shapes (end-hooks). Steel fibers can only be used on surfaces that can tolerate or avoid corrosion and rust stains. In some cases, a steel-fiber surface is faced with other materials. Glass fiber is inexpensive and corrosion-proof, but not as ductile as steel. Recently, spun basalt fiber, long available in Eastern Europe, has become available in the U.S. and Western Europe. Basalt fiber is stronger and less expensive than glass, but historically has not resisted the alkaline environment of Portland cement well enough to be used as direct reinforcement. New materials use plastic binders to isolate the basalt fiber from the cement. The premium fibers are graphite-reinforced plastic fibers, which are nearly as strong as steel, lighter in weight, and corrosion-proof. 
Some experiments have had promising early results with carbon nanotubes, but the material is still far too expensive for any building. Non-steel reinforcement There is considerable overlap between the subjects of non-steel reinforcement and fiber-reinforcement of concrete. The introduction of non-steel reinforcement of concrete is relatively recent; it takes two major forms: non-metallic rebar rods, and non-steel (usually also non-metallic) fibers incorporated into the cement matrix. For example, there is increasing interest in glass fiber reinforced concrete (GFRC) and in various applications of polymer fibers incorporated into concrete. Although currently there is not much suggestion that such materials will replace metal rebar, some of them have major advantages in specific applications, and there also are new applications in which metal rebar simply is not an option. However, the design and application of non-steel reinforcing is fraught with challenges. For one thing, concrete is a highly alkaline environment, in which many materials, including most kinds of glass, have a poor service life. Also, the behavior of such reinforcing materials differs from the behavior of metals, for instance in terms of shear strength, creep and elasticity. Fiber-reinforced plastic/polymer (FRP) and glass-reinforced plastic (GRP) consist of fibers of polymer, glass, carbon, aramid or other polymers or high-strength fibers set in a resin matrix to form a rebar rod, or grid, or fiber. These rebars are installed in much the same manner as steel rebars. The cost is higher but, suitably applied, the structures have advantages, in particular a dramatic reduction in problems related to corrosion, either by intrinsic concrete alkalinity or by external corrosive fluids that might penetrate the concrete. These structures can be significantly lighter and usually have a longer service life. The cost of these materials has dropped dramatically since their widespread adoption in the aerospace industry and by the military. In particular, FRP rods are useful for structures where the presence of steel would not be acceptable. For example, MRI machines have huge magnets, and accordingly require non-magnetic buildings. Again, toll booths that read radio tags need reinforced concrete that is transparent to radio waves. Also, where the design life of the concrete structure is more important than its initial costs, non-steel reinforcing often has its advantages where corrosion of reinforcing steel is a major cause of failure. In such situations corrosion-proof reinforcing can extend a structure's life substantially, for example in the intertidal zone. FRP rods may also be useful in situations where it is likely that the concrete structure may be compromised in future years, for example the edges of balconies when balustrades are replaced, and bathroom floors in multi-story construction where the service life of the floor structure is likely to be many times the service life of the waterproofing building membrane. Plastic reinforcement often is stronger, or at least has a better strength to weight ratio than reinforcing steels. Also, because it resists corrosion, it does not need a protective concrete cover as thick as steel reinforcement does (typically 30 to 50 mm or more). FRP-reinforced structures therefore can be lighter and last longer. Accordingly, for some applications the whole-life cost will be price-competitive with steel-reinforced concrete. 
The material properties of FRP or GRP bars differ markedly from steel, so there are differences in the design considerations. FRP or GRP bars have relatively higher tensile strength but lower stiffness, so that deflections are likely to be higher than for equivalent steel-reinforced units. Structures with internal FRP reinforcement typically have an elastic deformability comparable to the plastic deformability (ductility) of steel reinforced structures. Failure in either case is more likely to occur by compression of the concrete than by rupture of the reinforcement. Deflection is always a major design consideration for reinforced concrete. Deflection limits are set to ensure that crack widths in steel-reinforced concrete are controlled to prevent water, air or other aggressive substances reaching the steel and causing corrosion. For FRP-reinforced concrete, aesthetics and possibly water-tightness will be the limiting criteria for crack width control. FRP rods also have relatively lower compressive strengths than steel rebar, and accordingly require different design approaches for reinforced concrete columns. One drawback to the use of FRP reinforcement is their limited fire resistance. Where fire safety is a consideration, structures employing FRP have to maintain their strength and the anchoring of the forces at temperatures to be expected in the event of fire. For purposes of fireproofing, an adequate thickness of cement concrete cover or protective cladding is necessary. The addition of 1 kg/m3 of polypropylene fibers to concrete has been shown to reduce spalling during a simulated fire. (The improvement is thought to be due to the formation of pathways out of the bulk of the concrete, allowing steam pressure to dissipate.) Another problem is the effectiveness of shear reinforcement. FRP rebar stirrups formed by bending before hardening generally perform relatively poorly in comparison to steel stirrups or to structures with straight fibers. When strained, the zone between the straight and curved regions are subject to strong bending, shear, and longitudinal stresses. Special design techniques are necessary to deal with such problems. There is growing interest in applying external reinforcement to existing structures using advanced materials such as composite (fiberglass, basalt, carbon) rebar, which can impart exceptional strength. Worldwide, there are a number of brands of composite rebar recognized by different countries, such as Aslan, DACOT, V-rod, and ComBar. The number of projects using composite rebar increases day by day around the world, in countries ranging from USA, Russia, and South Korea to Germany.
Technology
Building materials
null
54240
https://en.wikipedia.org/wiki/Singularity%20%28mathematics%29
Singularity (mathematics)
In mathematics, a singularity is a point at which a given mathematical object is not defined, or a point where the mathematical object ceases to be well-behaved in some particular way, such as by lacking differentiability or analyticity. For example, the reciprocal function f(x) = 1/x has a singularity at x = 0, where the value of the function is not defined, as it involves a division by zero. The absolute value function g(x) = |x| also has a singularity at x = 0, since it is not differentiable there. The algebraic curve defined by y² = x³ in the (x, y) coordinate system has a singularity (called a cusp) at (0, 0). For singularities in algebraic geometry, see singular point of an algebraic variety. For singularities in differential geometry, see singularity theory. Real analysis In real analysis, singularities are either discontinuities, or discontinuities of the derivative (sometimes also discontinuities of higher order derivatives). There are four kinds of discontinuities: type I, which has two subtypes, and type II, which can also be divided into two subtypes (though usually is not). To describe the way these two types of limits are being used, suppose that f(x) is a function of a real argument x, and for any value of its argument, say c, the left-handed limit, f(c⁻), and the right-handed limit, f(c⁺), are defined by: f(c⁻) = lim_{x→c} f(x), constrained by x < c, and f(c⁺) = lim_{x→c} f(x), constrained by x > c. The value f(c⁻) is the value that the function f(x) tends towards as the value x approaches c from below, and the value f(c⁺) is the value that the function f(x) tends towards as the value x approaches c from above, regardless of the actual value the function has at the point where x = c. There are some functions for which these limits do not exist at all. For example, the function g(x) = sin(1/x) does not tend towards anything as x approaches c = 0. The limits in this case are not infinite, but rather undefined: there is no value that g(x) settles in on. Borrowing from complex analysis, this is sometimes called an essential singularity. The possible cases at a given value c for the argument are as follows. A point of continuity is a value of c for which f(c⁻) = f(c) = f(c⁺), as one expects for a smooth function. All the values must be finite. If c is not a point of continuity, then a discontinuity occurs at c. A type I discontinuity occurs when both f(c⁻) and f(c⁺) exist and are finite, but at least one of the following three conditions also applies: f(c⁻) ≠ f(c⁺); f(c) is not defined for the case of x = c; or f(c) has a defined value, which, however, does not match the value of the two limits. Type I discontinuities can be further distinguished as being one of the following subtypes: A jump discontinuity occurs when f(c⁻) ≠ f(c⁺), regardless of whether f(c) is defined, and regardless of its value if it is defined. A removable discontinuity occurs when f(c⁻) = f(c⁺), also regardless of whether f(c) is defined, and regardless of its value if it is defined (but which does not match that of the two limits). A type II discontinuity occurs when either f(c⁻) or f(c⁺) does not exist (possibly both). This has two subtypes, which are usually not considered separately: An infinite discontinuity is the special case when either the left hand or right hand limit does not exist, specifically because it is infinite, and the other limit is either also infinite, or is some well defined finite number. In other words, the function has an infinite discontinuity when its graph has a vertical asymptote. An essential singularity is a term borrowed from complex analysis (see below). This is the case when either one or the other of the limits f(c⁻) or f(c⁺) does not exist, but not because it is an infinite discontinuity. Essential singularities approach no limit, not even if valid answers are extended to include ±∞.
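As a rough illustration of this classification (an added sketch, not part of the article; the sampling points and tolerances are arbitrary choices), one can probe the one-sided limits numerically:

```python
# Hedged sketch: numerically probe one-sided limits near c to illustrate
# the discontinuity types discussed above. Tolerances and sample points
# are illustrative choices, not part of any standard algorithm.
import math

def one_sided_samples(f, c, side, n=6):
    # Evaluate f at points approaching c from one side (side = -1 or +1).
    return [f(c + side * 10.0 ** (-k)) for k in range(3, 3 + n)]

def describe(name, f, c):
    left, right = one_sided_samples(f, c, -1), one_sided_samples(f, c, +1)
    def settles(vals, tol=1e-3):
        return all(abs(v - vals[-1]) < tol for v in vals[-3:])
    def blows_up(vals, big=1e6):
        return abs(vals[-1]) > big
    if blows_up(left) or blows_up(right):
        kind = "infinite discontinuity (type II)"
    elif settles(left) and settles(right):
        kind = "jump (type I)" if abs(left[-1] - right[-1]) > 1e-3 else "removable or continuous"
    else:
        kind = "essential-type behaviour (no one-sided limit)"
    print(f"{name:>12s} near c={c}: {kind}")

describe("1/x", lambda x: 1.0 / x, 0.0)                    # vertical asymptote
describe("sign(x)", lambda x: math.copysign(1.0, x), 0.0)  # jump
describe("sin(1/x)", lambda x: math.sin(1.0 / x), 0.0)     # oscillates, no limit
describe("|x|", abs, 0.0)                                  # continuous (only the derivative jumps)
```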
In real analysis, a singularity or discontinuity is a property of a function alone. Any singularities that may exist in the derivative of a function are considered as belonging to the derivative, not to the original function. Coordinate singularities A coordinate singularity occurs when an apparent singularity or discontinuity occurs in one coordinate frame, which can be removed by choosing a different frame. An example of this is the apparent singularity at the 90 degree latitude in spherical coordinates. An object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (in the case of the example, jumping from longitude 0 to longitude 180 degrees). This discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. A different coordinate system would eliminate the apparent discontinuity (e.g., by replacing the latitude/longitude representation with an n-vector representation). Complex analysis In complex analysis, there are several classes of singularities. These include the isolated singularities, the nonisolated singularities, and the branch points. Isolated singularities Suppose that f is a function that is complex differentiable in the complement of a point a in an open subset U of the complex numbers C. Then: The point a is a removable singularity of f if there exists a holomorphic function g defined on all of U such that f(z) = g(z) for all z in U \ {a}. The function g is a continuous replacement for the function f. The point a is a pole or non-essential singularity of f if there exists a holomorphic function g defined on U with g(a) nonzero, and a natural number n such that f(z) = g(z) / (z − a)^n for all z in U \ {a}. The least such number n is called the order of the pole. The derivative at a non-essential singularity itself has a non-essential singularity, with n increased by 1 (except if n is 0 so that the singularity is removable). The point a is an essential singularity of f if it is neither a removable singularity nor a pole. The point a is an essential singularity if and only if the Laurent series has infinitely many powers of negative degree. Nonisolated singularities Other than isolated singularities, complex functions of one variable may exhibit other singular behaviour. These are termed nonisolated singularities, of which there are two types: Cluster points: limit points of isolated singularities. If they are all poles, despite admitting Laurent series expansions on each of them, then no such expansion is possible at its limit. Natural boundaries: any non-isolated set (e.g. a curve) on which functions cannot be analytically continued around (or outside them if they are closed curves in the Riemann sphere). Branch points Branch points are generally the result of a multi-valued function, such as √z or log(z), which are defined within a certain limited domain so that the function can be made single-valued within the domain. The cut is a line or curve excluded from the domain to introduce a technical separation between discontinuous values of the function. When the cut is genuinely required, the function will have distinctly different values on each side of the branch cut. The shape of the branch cut is a matter of choice, even though it must connect two different branch points (such as z = 0 and z = ∞ for √z) which are fixed in place. Finite-time singularity A finite-time singularity occurs when one input variable is time, and an output variable increases towards infinity at a finite time.
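The Laurent-series criterion above can be made concrete with three standard examples at a = 0, added here for illustration:

```latex
% Illustrative examples of the three kinds of isolated singularity at a = 0.
\[
\frac{\sin z}{z} = 1 - \frac{z^2}{3!} + \frac{z^4}{5!} - \cdots
\quad\text{(no negative powers: removable singularity)}
\]
\[
\frac{e^z}{z^2} = \frac{1}{z^2} + \frac{1}{z} + \frac{1}{2!} + \frac{z}{3!} + \cdots
\quad\text{(finitely many negative powers: pole of order } 2\text{)}
\]
\[
e^{1/z} = 1 + \frac{1}{z} + \frac{1}{2!\,z^2} + \frac{1}{3!\,z^3} + \cdots
\quad\text{(infinitely many negative powers: essential singularity)}
\]
```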
These are important in kinematics and partial differential equations: infinities do not occur physically, but the behavior near the singularity is often of interest. Mathematically, the simplest finite-time singularities are power laws for various exponents, of the form x^(−α), of which the simplest is hyperbolic growth, where the exponent is (negative) 1: x^(−1). More precisely, in order to get a singularity at positive time as time advances (so the output grows to infinity), one instead uses (t₀ − t)^(−α) (using t for time, reversing direction to −t so that time increases to infinity, and shifting the singularity forward from 0 to a fixed time t₀). An example would be the bouncing motion of an inelastic ball on a plane. If idealized motion is considered, in which the same fraction of kinetic energy is lost on each bounce, the frequency of bounces becomes infinite, as the ball comes to rest in a finite time. Other examples of finite-time singularities include the various forms of the Painlevé paradox (for example, the tendency of a chalk to skip when dragged across a blackboard), and how the precession rate of a coin spun on a flat surface accelerates towards infinity before abruptly stopping (as studied using the Euler's Disk toy). Hypothetical examples include Heinz von Foerster's facetious "Doomsday's equation" (simplistic models yield infinite human population in finite time). Algebraic geometry and commutative algebra In algebraic geometry, a singularity of an algebraic variety is a point of the variety where the tangent space may not be regularly defined. The simplest example of singularities are curves that cross themselves. But there are other types of singularities, like cusps. For example, the equation y² − x³ = 0 defines a curve that has a cusp at the origin x = y = 0. One could define the x-axis as a tangent at this point, but this definition can not be the same as the definition at other points. In fact, in this case, the x-axis is a "double tangent." For affine and projective varieties, the singularities are the points where the Jacobian matrix has a rank which is lower than at other points of the variety. An equivalent definition in terms of commutative algebra may be given, which extends to abstract varieties and schemes: A point is singular if the local ring at this point is not a regular local ring.
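As an added illustration of the Jacobian criterion (assuming SymPy is available; the cusp curve is the one used above), the singular point can be located by solving for where the defining polynomial and both of its partial derivatives vanish simultaneously:

```python
# Hedged sketch: find singular points of the plane curve f(x, y) = y^2 - x^3 = 0
# via the Jacobian criterion (f = 0 and both partial derivatives = 0).
import sympy as sp

x, y = sp.symbols("x y")
f = y**2 - x**3

# A point of the curve is singular when the Jacobian [df/dx, df/dy] drops rank,
# i.e. both partial derivatives vanish there together with f itself.
singular_points = sp.solve([f, sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(singular_points)   # expected: [{x: 0, y: 0}] -- the cusp at the origin
```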
Mathematics
Complex analysis
null
54244
https://en.wikipedia.org/wiki/Gravitational%20singularity
Gravitational singularity
A gravitational singularity, spacetime singularity, or simply singularity, is a theoretical condition in which gravity is predicted to be so intense that spacetime itself would break down catastrophically. As such, a singularity is by definition no longer part of the regular spacetime and cannot be determined by "where" or "when". Gravitational singularities exist at a junction between general relativity and quantum mechanics; therefore, the properties of the singularity cannot be described without an established theory of quantum gravity. Trying to find a complete and precise definition of singularities in the theory of general relativity, the current best theory of gravity, remains a difficult problem. A singularity in general relativity can be defined by the scalar invariant curvature becoming infinite or, better, by a geodesic being incomplete. Gravitational singularities are mainly considered in the context of general relativity, where density would become infinite at the center of a black hole without corrections from quantum mechanics, and within astrophysics and cosmology as the earliest state of the universe during the Big Bang. Physicists have not reached a consensus about what actually happens at the extreme densities predicted by singularities (including at the start of the Big Bang). General relativity predicts that any object collapsing beyond a certain point (for stars this is the Schwarzschild radius) would form a black hole, inside which a singularity (covered by an event horizon) would be formed. The Penrose–Hawking singularity theorems define a singularity to have geodesics that cannot be extended in a smooth manner. The termination of such a geodesic is considered to be the singularity. Modern theory asserts that the initial state of the universe, at the beginning of the Big Bang, was a singularity. In this case, the universe did not collapse into a black hole, because currently-known calculations and density limits for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. Neither general relativity nor quantum mechanics can currently describe the earliest moments of the Big Bang, but in general, quantum mechanics does not permit particles to inhabit a space smaller than their Compton wavelengths. Interpretation Many theories in physics have mathematical singularities of one kind or another. Equations for these physical theories predict that the ball of mass of some quantity becomes infinite or increases without limit. This is generally a sign for a missing piece in the theory, as in the ultraviolet catastrophe, re-normalization, and instability of a hydrogen atom predicted by the Larmor formula. In classical field theories, including special relativity but not general relativity, one can say that a solution has a singularity at a particular point in spacetime where certain physical properties become ill-defined, with spacetime serving as a background field to locate the singularity. A singularity in general relativity, on the other hand, is more complex because spacetime itself becomes ill-defined, and the singularity is no longer part of the regular spacetime manifold. In general relativity, a singularity cannot be defined by "where" or "when". Some theories, such as the theory of loop quantum gravity, suggest that singularities may not exist. 
This is also true for such classical unified field theories as the Einstein–Maxwell–Dirac equations. The idea can be stated in the form that, due to quantum gravity effects, there is a minimum distance beyond which the force of gravity no longer continues to increase as the distance between the masses becomes shorter, or alternatively that interpenetrating particle waves mask gravitational effects that would be felt at a distance. Motivated by such philosophy of loop quantum gravity, it has recently been shown that such conceptions can be realized through some elementary constructions based on the refinement of the first axiom of geometry, namely, the concept of a point, by considering Klein's prescription of accounting for the extension of a small spot that represents or demonstrates a point, a programmatic call that he described as a fusion of arithmetic and geometry. Klein's program, according to Born, was actually a mathematical route to consider 'natural uncertainty in all observations' while describing 'a physical situation' by means of 'real numbers'. Types There are different types of singularities, each with different physical features and characteristics relevant to the theories from which they originally emerged, such as the different shapes of the singularities, conical and curved. They have also been hypothesized to occur without event horizons, structures that delineate one spacetime section from another in which events cannot affect past the horizon; these are called naked. Conical A conical singularity occurs when there is a point where the limit of some diffeomorphism invariant quantity does not exist or is infinite, in which case spacetime is not smooth at the point of the limit itself. Thus, spacetime looks like a cone around this point, where the singularity is located at the tip of the cone. The metric can be finite everywhere the coordinate system is used. Examples of such conical singularities are cosmic strings and the Schwarzschild black hole. Curvature Solutions to the equations of general relativity or another theory of gravity (such as supergravity) often result in encountering points where the metric blows up to infinity. However, many of these points are completely regular, and the infinities are merely a result of using an inappropriate coordinate system at this point. To test whether there is a singularity at a certain point, one must check whether at this point diffeomorphism invariant quantities (i.e. scalars) become infinite. Such quantities are the same in every coordinate system, so these infinities will not "go away" by a change of coordinates. An example is the Schwarzschild solution that describes a non-rotating, uncharged black hole. In coordinate systems convenient for working in regions far away from the black hole, a part of the metric becomes infinite at the event horizon. However, spacetime at the event horizon is regular. The regularity becomes evident when changing to another coordinate system (such as the Kruskal coordinates), where the metric is perfectly smooth. On the other hand, in the center of the black hole, where the metric becomes infinite as well, the solutions suggest a singularity exists. The existence of the singularity can be verified by noting that the Kretschmann scalar, being the square of the Riemann tensor, i.e. K = R_{abcd}R^{abcd}, which is diffeomorphism invariant, is infinite.
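For the Schwarzschild case this check can be written out explicitly; the following is the standard textbook expression, added here for illustration rather than taken from the article:

```latex
% Kretschmann scalar of the Schwarzschild solution (standard textbook result):
\[
K \;=\; R_{abcd}R^{abcd} \;=\; \frac{48\,G^{2}M^{2}}{c^{4}\,r^{6}} \;=\; \frac{12\,r_{s}^{2}}{r^{6}},
\qquad r_{s} = \frac{2GM}{c^{2}} .
\]
% K is finite at the event horizon, K(r_s) = 3c^8 / (4 G^4 M^4) = 12 / r_s^4,
% but diverges as r -> 0, signalling a genuine curvature singularity.
```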
While in a non-rotating black hole the singularity occurs at a single point in the model coordinates, called a "point singularity", in a rotating black hole, also known as a Kerr black hole, the singularity occurs on a ring (a circular line), known as a "ring singularity". Such a singularity may also theoretically become a wormhole. More generally, a spacetime is considered singular if it is geodesically incomplete, meaning that there are freely-falling particles whose motion cannot be determined beyond a finite time, namely after the point of reaching the singularity. For example, any observer inside the event horizon of a non-rotating black hole would fall into its center within a finite period of time. The classical version of the Big Bang cosmological model of the universe contains a causal singularity at the start of time (t = 0), where all time-like geodesics have no extensions into the past. Extrapolating backward to this hypothetical time 0 results in a universe with all spatial dimensions of size zero, infinite density, infinite temperature, and infinite spacetime curvature. Naked singularity Until the early 1990s, it was widely believed that general relativity hides every singularity behind an event horizon, making naked singularities impossible. This is referred to as the cosmic censorship hypothesis. However, in 1991, physicists Stuart Shapiro and Saul Teukolsky performed computer simulations of a rotating plane of dust that indicated that general relativity might allow for "naked" singularities. What these objects would actually look like in such a model is unknown. Nor is it known whether singularities would still arise if the simplifying assumptions used to make the simulation were removed. However, it is hypothesized that light entering a singularity would similarly have its geodesics terminated, thus making the naked singularity look like a black hole. Disappearing event horizons exist in the Kerr metric, which is a spinning black hole in a vacuum, if the angular momentum (J) is high enough. Transforming the Kerr metric to Boyer–Lindquist coordinates, it can be shown that the coordinate (which is not the radius) of the event horizon is r± = μ ± √(μ² − a²), where μ = GM/c² and a = J/(Mc). In this case, "event horizons disappear" means that the solutions for r± are complex, or μ² < a². However, this corresponds to a case where J exceeds GM²/c (or, in Planck units, J > M²); i.e. the spin exceeds what is normally viewed as the upper limit of its physically possible values. Similarly, disappearing event horizons can also be seen with the Reissner–Nordström geometry of a charged black hole if the charge (Q) is high enough. In this metric, it can be shown that the singularities occur at r± = μ ± √(μ² − q²), where μ = GM/c² and q² = GQ²/(4πε₀c⁴). Of the three possible cases for the relative values of q and μ, the case where q > μ causes both r± to be complex. This means the metric is regular for all positive values of r, or in other words, the singularity has no event horizon. However, this corresponds to a case where Q exceeds M√(4πε₀G) (or, in Planck units, Q > M); i.e. the charge exceeds what is normally viewed as the upper limit of its physically possible values. Also, actual astrophysical black holes are not expected to possess any appreciable charge. A black hole possessing the lowest mass M consistent with its J and Q values and the limits noted above, i.e., one just at the point of losing its event horizon, is termed extremal. Entropy Before Stephen Hawking came up with the concept of Hawking radiation, the question of black holes having entropy had been avoided.
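A minimal numerical sketch of the horizon condition just described (added here; it uses geometrized units G = c = 1, in which μ = M and a = J/M, so horizons exist only while a ≤ M):

```python
# Hedged sketch: Kerr horizon coordinates in geometrized units (G = c = 1),
# where mu = M and a = J / M. Horizons exist only while a <= M; for a > M
# the square root becomes imaginary and the horizons "disappear".
import cmath

def kerr_horizons(M, a):
    """Return (r_plus, r_minus) for a Kerr black hole of mass M and spin a."""
    root = cmath.sqrt(M**2 - a**2)
    return M + root, M - root

for spin in (0.0, 0.5, 0.998, 1.0, 1.1):   # spin expressed as a / M
    r_plus, r_minus = kerr_horizons(1.0, spin)
    if abs(r_plus.imag) < 1e-12:
        print(f"a/M = {spin:5.3f}: r+ = {r_plus.real:.3f}, r- = {r_minus.real:.3f} (horizons exist)")
    else:
        print(f"a/M = {spin:5.3f}: complex roots -> no event horizon (would be naked)")
```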
However, this concept demonstrates that black holes radiate energy, which conserves entropy and solves the incompatibility problems with the second law of thermodynamics. Entropy, however, implies heat and therefore temperature. The loss of energy also implies that black holes do not last forever, but rather evaporate or decay slowly. Black hole temperature is inversely related to mass. All known black hole candidates are so large that their temperature is far below that of the cosmic background radiation, which means they will gain energy on net by absorbing this radiation. They cannot begin to lose energy on net until the background temperature falls below their own temperature. This will occur at a cosmological redshift of more than one million, rather than the thousand or so since the background radiation formed.
Physical sciences
Theory of relativity
Physics
54245
https://en.wikipedia.org/wiki/Technological%20singularity
Technological singularity
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence. The Hungarian-American mathematician John von Neumann (1903–1957) became the first known person to use the concept of a "singularity" in the technological context. Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for the contemporary discourse on the technological singularity. His pivotal 1950 paper, "Computing Machinery and Intelligence," introduces the idea of a machine's ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human. Stanislaw Ulam reported in 1958 an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". Subsequent authors have echoed this viewpoint. The concept and the term "singularity" were popularized by Vernor Vinge, first in 1983 in an article claiming that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole", and later in his 1993 essay The Coming Technological Singularity, in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030. Another significant contributor to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity Is Near, predicting singularity by 2045. Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction. The consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated. Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, and Gordon Moore. One claim made was that artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies. Intelligence explosion Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.
If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would, in theory, vastly improve over human problem-solving and inventive skills. Such an AI is referred to as Seed AI because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities. I. J. Good speculated that superhuman intelligence might bring about an intelligence explosion: One version of intelligence explosion is where computing power approaches infinity in a finite amount of time. In this version, once AIs are performing the research to improve themselves, speed doubles e.g. after 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, etc., where the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996). Emergence of superintelligence A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world. The related concept "speed superintelligence" describes an AI that can function like a human mind, only much faster. For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds. Such a difference in information processing speed could drive the singularity. Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies focus on scenarios that combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The book The Age of Em by Robin Hanson describes a hypothetical future scenario in which human brains are scanned and digitized, creating "uploads" or digital versions of human consciousness. In this future, the development of these uploads may precede or coincide with the emergence of superintelligent artificial intelligence. 
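The arithmetic behind the quoted figures can be checked with a short sketch (added for illustration; it simply re-uses the doubling-period series and the million-fold speed-up mentioned above):

```python
# Hedged sketch of the arithmetic behind the passages above.
# 1) Doubling periods that halve each time (2 yr, 1 yr, 0.5 yr, ...) form a
#    geometric series whose sum is finite, so capability would diverge at a
#    finite "singularity" time of 4 years.
periods = [2.0 / 2**k for k in range(60)]          # 2, 1, 0.5, ... years
print("sum of doubling periods ~", round(sum(periods), 6), "years")  # -> 4.0

# Capability after n doublings is 2**n, reached at time sum(periods[:n]).
for n in (1, 5, 10, 20):
    t = sum(periods[:n])
    print(f"after {n:2d} doublings (t = {t:.4f} yr): speed x {2**n}")

# 2) Speed superintelligence: a million-fold speed-up means one subjective
#    year elapses in roughly 30 physical seconds.
seconds_per_year = 365.25 * 24 * 3600
print("subjective year in", round(seconds_per_year / 1e6, 1), "physical seconds")
```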
Variations Non-AI singularity Some writers use "the singularity" in a broader way to refer to any radical changes in society brought about by new technology (such as molecular nanotechnology), although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity. Predictions There have been numerous dates predicted for the attainment of singularity. In 1965, Good wrote that it was more probable than not that an ultra-intelligent machine would be built within the twentieth century. That computing capabilities for human-level AI would be available in supercomputers before 2010 was predicted in 1988 by Moravec, assuming that the current rate of improvement continued. The attainment of greater-than-human intelligence between 2005 and 2030 was predicted by Vinge in 1993. A singularity in 2021 was predicted by Yudkowsky in 1996. Human-level AI around 2029 and the singularity in 2045 was predicted by Kurzweil in 2005. He reaffirmed these predictions in 2024 in The Singularity is Nearer. Human-level AI by 2040, and intelligence far beyond human by 2050 was predicted in 1998 by Moravec, revising his earlier prediction. A confidence of 50% that human-level AI would be developed by 2040–2050 was the outcome of four polls of AI researchers, conducted in 2012 and 2013 by Bostrom and Müller. Plausibility Prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, and Gordon Moore, whose law is often cited in support of the concept. Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces and mind uploading. These multiple possible paths to an intelligence explosion, all of which will presumably be pursued, makes a singularity more likely. Robin Hanson expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult. Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity. The possibility of an intelligence explosion depends on three factors. The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. However, as the intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement should generate at least one more improvement, on average, for movement towards singularity to continue. Finally, the laws of physics may eventually prevent further improvement. There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used. The former is predicted by Moore's Law and the forecasted improvements in hardware, and is comparatively similar to previous technological advances. 
But Schulman and Sandberg argue that software will present more complex challenges than simply operating on hardware capable of running at human intelligence levels or beyond. A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance that "the intelligence explosion argument is broadly correct". Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely". Speed improvements Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. An analogy to Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months; whereafter four months, two months, and so on towards a speed singularity. Some upper limit on speed may eventually be reached. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity." It is difficult to directly compare silicon-based hardware with neurons. But it has been noted that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain, as well as taking up far less space. Exponential growth The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit. Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others. Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months. On the other hand, it has been argued that the global acceleration pattern having the 21st century singularity as its parameter should be characterized as hyperbolic rather than exponential. Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".
He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence." Accelerating change Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change: Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history". Kurzweil believes that the singularity will occur by approximately 2045. His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence. Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's April 2000 Wired magazine article "Why The Future Doesn't Need Us". Algorithm improvements Some intelligence technologies, like "seed AI", may also have the potential to not just make themselves faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on. The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately. An AI rewriting its own source code could do so while contained in an AI box. Second, as with Vernor Vinge's conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times quicker than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again. There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended. Secondly, AIs could compete for the same scarce resources humankind uses to survive. While not actively malicious, AIs would promote the goals of their programming, not necessarily broader human goals, and thus might crowd out humans. 
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained. An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang". Criticism Some critics, like philosophers Hubert Dreyfus and John Searle, assert that computers or machines cannot achieve human intelligence. Others, like physicist Stephen Hawking, object that whether machines can achieve a true intelligence or merely something similar to intelligence is irrelevant if the net result is the same. Psychologist Steven Pinker stated in 2008: "There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems." Martin Ford postulates a "technology paradox": before the singularity could occur, most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the singularity. Job displacement is increasingly no longer limited to those types of work traditionally considered to be "routine". Theodore Modis and Jonathan Huebner argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors. Theodore Modis holds that the singularity cannot happen. He claims the "technological singularity" and especially Kurzweil lack scientific rigor; Kurzweil is alleged to mistake the logistic function (S-function) for an exponential function, and to see a "knee" in an exponential function where there can in fact be no such thing. In a 2021 article, Modis pointed out that no milestones (breaks in historical perspective comparable in importance to the Internet, DNA, the transistor, or nuclear energy) had been observed in the previous twenty years, while five of them would have been expected according to the exponential trend advocated by the proponents of the technological singularity.
AI researcher Jürgen Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists. Microsoft co-founder Paul Allen argued the opposite of accelerating returns, the complexity brake: the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies, a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since. The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse". Hofstadter (2006) raises concern that Ray Kurzweil is not sufficiently scientifically rigorous, that an exponential tendency of technology is not a scientific law like one of physics, and that exponential curves have no "knees". Nonetheless, he did not rule out the singularity in principle in the distant future and in the light of ChatGPT and other recent advancements has revised his opinion significantly towards dramatic technological change in the near future. Jaron Lanier denies that the singularity is inevitable: "I do not think the technology is creating itself. It's not an autonomous process." Furthermore: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics." Economist Robert J. Gordon points out that measured economic growth slowed around 1970 and slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good. Philosopher and cognitive scientist Daniel Dennett said in 2017: "The whole singularity stuff, that's preposterous. It distracts us from much more pressing problems", adding "AI tools that we become hyper-dependent on, that is going to happen. And one of the dangers is that we will give them more authority than they warrant." In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily. Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. Kelly (2006) argues that the way the Kurzweil chart is constructed with x-axis having time before present, it always points to the singularity being "now", for any date on which one would construct such a chart, and shows this visually on Kurzweil's chart. 
Some critics suggest religious motivations or implications of singularity, especially Kurzweil's version of it. The buildup towards the singularity is compared with Christian end-of-time scenarios. Beam calls it "a Buck Rogers vision of the hypothetical Christian Rapture". John Gray says "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event". David Streitfeld in The New York Times questioned whether "it might manifest first and foremost—thanks, in part, to the bottom-line obsession of today’s Silicon Valley—as a tool to slash corporate America’s head count." Potential impacts Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis. Uncertainty and risk The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate. It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat. Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the Future of Humanity Institute, the Machine Intelligence Research Institute, the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute. Physicist Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity. It has also been argued that there is no direct evolutionary motivation for an AI to be friendly to humans: evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments. AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, and humans would be powerless to stop them. Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.
One survey of human extinction scenarios lists superintelligence as a possible cause. According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. Bill Hibbard proposes an AI design that avoids several dangers including self-delusion, unintended instrumental actions, and corruption of the reward generator. He also discusses social impacts of AI and testing AI. His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator. Next step of sociobiological evolution While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three courtships leading to marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction". The article further argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10²¹ bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10¹⁹ bytes. The digital realm stored 500 times more information than this in 2014. The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10³⁷ base pairs, equivalent to 1.325×10³⁷ bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth, it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years.
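A back-of-the-envelope check of the growth figures quoted above (an added sketch; it assumes only the numbers already given: 5×10²¹ bytes of digital storage in 2014, 1.325×10³⁷ bytes for all DNA, and 30–38% annual growth):

```python
# Hedged sketch roughly re-deriving the "about 110 years" figure quoted above.
import math

digital_2014 = 5e21      # bytes of digital storage in 2014 (5 zettabytes)
dna_total = 1.325e37     # bytes equivalent of all DNA on Earth (5.3e37 bp / 4)

for growth in (0.30, 0.38):   # 30-38% compound annual growth
    years = math.log(dna_total / digital_2014) / math.log(1 + growth)
    print(f"at {growth:.0%} per year: parity in about {years:.0f} years")
```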
This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years. Implications for human society In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at the Asilomar conference center in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist. Frank S. Robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability. Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. Hard or soft takeoff In a hard takeoff scenario, an artificial superintelligence rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the agent's goals. In a soft takeoff scenario, the AI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AI's development. Ramez Naam argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law. Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1." J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff.
Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world. Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. He refers to this scenario as a "semihard takeoff". Max More disagrees, arguing that if there were only a few superfast human-level AIs, that they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years." Relation to immortality and aging Eric Drexler, one of the founders of nanotechnology, theorized in 1986 the possibility of cell repair devices, including ones operating within cells and using as yet hypothetical biological machines. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom. Moravec predicted in 1988 the possibility of "uploading" human mind into a human-like robot, achieving quasi-immortality by extreme longevity via transfer of the human mind between successive new robots as the old ones wear out; beyond that, he predicts later exponential acceleration of subjective experience of time leading to a subjective sense of immortality. Kurzweil suggested in 2005 that medical advances would allow people to protect their bodies from the effects of aging, making the life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age. Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes. 
Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious." History of the concept A paper by Mahendra Prasad, published in AI Magazine, asserts that the 18th-century mathematician Marquis de Condorcet was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity. An early description of the idea was made in John W. Campbell's 1932 short story "The Last Evolution". In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence. In 1977, Hans Moravec wrote an article with unclear publishing status where he envisioned a development of self-improving thinking machines, a creation of "super-consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outwards from the solar system, converting non-life into mind." The article describes the human mind uploading later covered in Moravec (1988). The machines are expected to reach human level and then improve themselves beyond that ("Most significantly of all, they [the machines] can be put to work as programmers and engineers, with the task of optimizing the software and hardware which make them what they are. The successive generations of machines produced this way will be increasingly smarter and more cost effective.") Humans will no longer be needed, and their abilities will be overtaken by the machines: "In the long run the sheer physical inability of humans to keep up with these rapidly evolving progeny of our minds will ensure that the ratio of people to machines approaches zero, and that a direct descendant of our culture, but not our genes, inherits the universe." While the word "singularity" is not used, the notion of human-level thinking machines thereafter improving themselves beyond human level is there. In this view, there is no intelligence explosion in the sense of a very rapid intelligence increase once human equivalence is reached. An updated version of the article was published in 1979 in Analog Science Fiction and Fact. In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency. In 1983, Vernor Vinge addressed Good's intelligence explosion in print in the January 1983 issue of Omni magazine. 
In this op-ed piece, Vinge seems to have been the first to use the term "singularity" (although not "technological singularity") in a way that was specifically tied to the creation of intelligent machines: In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time. In 1986, Vernor Vinge published Marooned in Realtime, a science-fiction novel where a few remaining humans traveling forward in the future have survived an unknown extinction event that might well be a singularity. In a short afterword, the author states that an actual technological singularity would not be the end of the human species: "of course it seems very unlikely that the Singularity would be a clean vanishing of the human race. (On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)". In 1988, Vinge used the phrase "technological singularity" (including "technological") in the short story collection Threats and Other Promises, writing in the introduction to his story "The Whirligig of Time" (p. 72): Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and soon. When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological "black hole", a technological singularity. In 1988, Hans Moravec published Mind Children, in which he predicted human-level intelligence in supercomputers by 2010, self-improving intelligent machines far surpassing human intelligence later, human mind uploading into human-like robots later, intelligent machines leaving humans behind, and space colonization. He did not mention "singularity", though, and he did not speak of a rapid explosion of intelligence immediately after the human level is achieved. Nonetheless, the overall singularity tenor is there in predicting both human-level artificial intelligence and further artificial intelligence far surpassing humans later. Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era", spread widely on the internet and helped to popularize the idea. This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express. Minsky's 1994 article says robots will "inherit the Earth", possibly with the use of nanotechnology, and proposes to think of robots as human "mind children", drawing the analogy from Moravec. The rhetorical effect of that analogy is that if humans are fine to pass the world to their biological children, they should be equally fine to pass it to robots, their "mind" children. As per Minsky, 'we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.' 
The feature of the singularity present in Minsky is the development of superhuman artificial intelligence ("million times faster"), but there is no talk of sudden intelligence explosion, self-improving thinking machines or unpredictability beyond any specific event and the word "singularity" is not used. Tipler's 1994 book The Physics of Immortality predicts a future where super-intelligent machines will build enormously powerful computers, people will be "emulated" in computers, life will reach every galaxy and people will achieve immortality when they reach the Omega Point. There is no talk of Vingean "singularity" or sudden intelligence explosion, but intelligence much greater than human is there, as well as immortality. In 1996, Yudkowsky predicted a singularity by 2021. His version of singularity involves intelligence explosion: once AIs are doing the research to improve themselves, speed doubles after 2 years, then after 1 year, then after 6 months, then after 3 months, then after 1.5 months, and after more iterations, the "singularity" is reached. This construction implies that the speed reaches infinity in finite time. In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology. In 2005, Kurzweil published The Singularity Is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart. From 2006 to 2012, an annual Singularity Summit conference was organized by the Machine Intelligence Research Institute, founded by Eliezer Yudkowsky. In 2007, Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability. In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges." Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year. In politics In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including possible technological singularity. Former President of the United States Barack Obama spoke about singularity in his interview to Wired in 2016:
Technology
Artificial intelligence concepts
null
54248
https://en.wikipedia.org/wiki/Military%20aircraft
Military aircraft
A military aircraft is any fixed-wing or rotary-wing aircraft that is operated by a legal or insurrectionary military of any type. Some military aircraft engage directly in aerial warfare, while others take on support roles: Combat aircraft, such as fighters and bombers, are designed to destroy enemy equipment or personnel using their own ordnance. Combat aircraft are typically developed and procured only by military forces. Non-combat aircraft, such as transports and tankers, are not designed for combat as their primary function but may carry weapons for self-defense. These mainly operate in support roles, and may be developed by either military forces or civilian organizations. History Lighter-than-air In 1783, when the first practical aircraft (hot-air and hydrogen balloons) were established, they were quickly adopted for military duties. The first military balloon unit was the French Aerostatic Corps, who in 1794 flew an observation balloon during the Battle of Fleurus, the first major battle to feature aerial observation. Balloons continued to be used throughout the 19th century, including in the Napoleonic Wars and the Franco-Prussian War, for observation and propaganda distribution. During World War I, German Zeppelin airships carried out multiple air raids on British cities, as well as being used for observation. In the 1920s, the U.S. Navy acquired several non-rigid airships, the first one to see service being the K-1 in 1931. Use by the U.S. as well as other countries continued into World War II. The U.S. Navy retired its last balloons in 1963. Only a handful of lighter-than-air military aircraft were used since, such as the American Blimp MZ-3, used for research and development by the U.S. Navy from 2006 to 2017. Heavier-than-air Soon after the first flight of the Wright Flyer, several militaries became interested in powered aircraft. In 1909 the United States Army purchased the Wright Military Flyer, a two-seat observation aircraft, for the Aeronautical Division, U.S. Signal Corps. It served until 1911, by which time powered aircraft had become an important feature in several armies around the world. Airplanes performed aerial reconnaissance and tactical bombing missions in the Italo-Turkish war, and the First Balkan War saw the first naval-air operations. Photoreconnaissance and propaganda leaflet drops followed in the Second Balkan War. Air combat was a notable component of World War I, as fighter aircraft were developed during the war, long-range strategic bombing became a possibility, and airplanes were deployed from aircraft carriers. Airplanes also took on a greater variety of support roles, notably medical evacuation, and deployed new weapons like air-to-air rockets for use against reconnaissance balloons. Aviation technology advanced rapidly in the interwar period, and military aircraft became increasingly capable. Autogyros and helicopters were also developed at this time. During World War II, military aviation reached new heights. Decisive air battles influenced the outcome of the war, early jet aircraft flew combat missions, cruise missiles and ballistic missiles were deployed for the first time, airborne troops and cargo parachuted into battle, and the nuclear weapons that ended the war were delivered by air. In the Cold War era, aviation technology continued to advance at an extremely rapid pace. 
Jet aircraft exceeded Mach 1 and Mach 2, armament focus switched mainly to missiles, aircraft began carrying more sophisticated avionics, air-to-air refueling matured into practicality, and transport aircraft grew in size. Stealth aircraft entered development during the 1970s and saw combat in the 1980s. Combat Combat aircraft, or "warplanes", are divided broadly into fighters, bombers, attackers, electronic warfare, maritime, multirole, and unmanned aircraft. Variations exist between them, including fighter-bombers, such as the MiG-23 ground-attack aircraft and the Soviet Ilyushin Il-2. Also included among combat aircraft are long-range maritime patrol aircraft, such as the Hawker Siddeley Nimrod and the S-3 Viking that are often equipped to attack with anti-ship missiles and anti-submarine weapons. Fighters The primary role of fighters is destroying enemy aircraft in air-to-air combat, as part of both offensive and defensive counter air operations. Many fighters also possess a degree of ground attack capability, allowing them to perform surface attack and close air support missions. In addition to their counter air duties they are tasked to perform escort mission for bombers or other aircraft. Fighters are capable of carrying a variety of weapons, including machine guns, autocannons, rockets, guided missiles, and bombs. Many modern fighters can attack enemy fighters from a great distance, before the enemy even sees or detects them. Examples of such fighters include the F-35 Lightning II, F-22 Raptor, F-15 Eagle, and Su-27. Bombers Bombers are normally larger, heavier, and less maneuverable than fighter aircraft. They are capable of carrying large payloads of bombs, torpedoes or cruise missiles. Bombers are used almost exclusively for ground attacks and are not fast or agile enough to take on enemy fighters head-to-head. Some have a single engine and require one pilot to operate, while others have two or more engines and require crews of two or more. A limited number of bombers, such as the B-2 Spirit, have stealth capabilities that keep them from being detected by enemy radar. An example of a conventional modern bomber would be the B-52 Stratofortress. An example of a World War II bomber would be a B-17 Flying Fortress. An example of a World War I bomber would be a Handley Page O/400. Bombers include light bombers, medium bombers, heavy bombers, dive bombers, and torpedo bombers. Attack aircraft Attack aircraft can be used to provide support for friendly ground troops. Some are able to carry conventional or nuclear weapons far behind enemy lines to strike priority ground targets. Attack helicopters attack enemy armor and provide close air support for ground troops. An example of a historical ground-attack aircraft is the Soviet Ilyushin Il-2. Several types of transport airplanes have been armed with sideways firing weapons as gunships for ground attack. These include the AC-47 and AC-130 gunships. Electronic warfare An electronic warfare aircraft is a military aircraft equipped for electronic warfare, i.e. degrading the effectiveness of enemy radar and radio systems. They are generally modified versions of other preexisting aircraft. A recent example would be the EA-18G Growler, which is a modified version of the F/A-18F Super Hornet. Maritime patrol A maritime patrol aircraft is a fixed-wing military aircraft designed to operate for long durations over water in maritime patrol roles—in particular anti-submarine, anti-ship, and search and rescue. 
Some patrol aircraft were designed for this purpose, like the Kawasaki P-1. Many others are modified designs of pre-existing aircraft, such as the Boeing P-8 Poseidon, which is based on the Boeing 737-800 airliner. While the term maritime patrol aircraft generally refers to fixed wing aircraft, other aircraft types, such as blimps and helicopters, have also been used in the same roles. Multirole Many combat aircraft in the modern day have multirole capabilities. Normally only applied to fixed-wing aircraft, the term signifies the ability to transition between air-to-air and air-to-ground roles, sometimes even during the same mission. Examples of multirole designs include the F-15E Strike Eagle, the Eurofighter Typhoon, the Dassault Rafale and the Panavia Tornado. A World War II example would be the P-38 Lightning. A utility helicopter could also count as a multirole aircraft and can fill roles such as close-air support, air assault, military logistics, CASEVAC, medical evacuation, command and control, and troop transport. Unmanned Unmanned combat aerial vehicles (UCAV) have no crew, but are controlled by a remote operator. They may have varying degrees of autonomy. UCAVs are often armed with bombs, air-to-surface missiles, or other aircraft ordnance. Their uses typically include targeted killings, precision airstrikes, and air interdictions, as well as other forms of drone warfare. Non-combat Non-combat roles of military aircraft include search and rescue, reconnaissance, observation/surveillance, Airborne Early Warning and Control, transport, training, and aerial refueling. Many civil aircraft, both fixed wing and rotary wing, have been produced in separate models for military use, such as the civilian Douglas DC-3 airliner, which became the military C-47 Skytrain, and British "Dakota" transport planes, and decades later, the USAF's AC-47 Spooky gunships. Even the fabric-covered two-seat Piper J-3 Cub had a military version. Gliders and balloons have also been used as military aircraft; for example, balloons were used for observation during the American Civil War and during World War I, and military gliders were used during World War II to deliver ground troops in airborne assaults. Military transport Military transport (logistics) aircraft are primarily used to transport troops and war supplies. Cargo can be attached to pallets, which are easily loaded, secured for flight, and quickly unloaded for delivery. Cargo also may be discharged from flying aircraft on parachutes, eliminating the need for landing. Also included in this category are aerial tankers; these planes can refuel other aircraft while in flight. An example of a transport aircraft is the C-17 Globemaster III. A World War II example would be the C-47. An example of a tanker craft would be the KC-135 Stratotanker. Transport helicopters and gliders can transport troops and supplies to areas where other aircraft would be unable to land. Calling a military transport aircraft a "cargo plane" is inaccurate, because military transport planes are able to carry paratroopers and other personnel. Airborne early warning and control An airborne early warning and control (AEW&C) system is an airborne radar system designed to detect aircraft, ships and ground vehicles at long ranges and control and command the battle space in an air engagement by directing fighter and attack aircraft strikes. 
AEW&C units are also used to carry out surveillance, including over ground targets and frequently perform C2BM (command and control, battle management) functions similar to an air traffic controller given military command over other forces. Used at a high altitude, the radars on the aircraft allow the operators to distinguish between friendly and hostile aircraft hundreds of miles away. AEW&C aircraft are used for both defensive and offensive air operations, and are to the NATO and American trained or integrated air forces what the combat information center is to a naval vessel, plus a highly mobile and powerful radar platform. The system is used offensively to direct fighters to their target locations, and defensively in order to counter attacks by enemy forces, both air and ground. So useful is the advantage of command and control from a high altitude that the United States Navy operates AEW&C aircraft off its supercarriers to augment and protect its carrier combat information centers (CICs). AEW&C is also known by the older terms "airborne early warning" (AEW) and "airborne warning and control system" (AWACS, /ˈeɪwæks/ ay-waks) although AWACS is the name of a specific system currently used by NATO and the USAF and is often used in error to describe similar systems. Reconnaissance and surveillance Reconnaissance aircraft are primarily used to gather intelligence. They are equipped with cameras and other sensors. These aircraft may be specially designed or may be modified from a basic fighter or bomber type. This role is increasingly being filled by military satellites and unmanned aerial vehicles (UAVs). Surveillance and observation aircraft use radar and other sensors for battlefield surveillance, airspace surveillance, maritime patrol, and artillery spotting. They include modified civil aircraft designs, moored balloons and UAVs. Experimental Experimental aircraft are designed in order to test advanced aerodynamic, structural, avionic, or propulsion concepts. These are usually well instrumented, with performance data telemetered on radio-frequency data links to ground stations located at the test ranges where they are flown. An example of an experimental aircraft is the Bristol 188.
Technology
Military aviation
null
54257
https://en.wikipedia.org/wiki/Desktop%20publishing
Desktop publishing
Desktop publishing (DTP) is the creation of documents using dedicated software on a personal ("desktop") computer. It was first used almost exclusively for print publications, but now it also assists in the creation of various forms of online content. Desktop publishing software can generate page layouts and produce text and image content comparable to the simpler forms of traditional typography and printing. This technology allows individuals, businesses, and other organizations to self-publish a wide variety of content, from menus to magazines to books, without the expense of commercial printing. Desktop publishing often requires the use of a personal computer and WYSIWYG page layout software to create documents for either large-scale publishing or small-scale local printing and distribution although non-WYSIWYG systems such as TeX and LaTeX are also used, especially in scientific publishing. Originally, desktop publishing methods provided more control over design, layout, and typography than word processing software but the latter has evolved to include most, if not all, capabilities previously available only with dedicated desktop publishing software. The same DTP skills and software used for common paper and book publishing are sometimes used to create graphics for point of sale displays, presentations, infographics, brochures, business cards, promotional items, trade show exhibits, retail package designs and outdoor signs. History Desktop publishing was first developed at Xerox PARC in the 1970s. A contradictory claim states that desktop publishing began in 1983 with a program developed by James Davise at a community newspaper in Philadelphia. The program Type Processor One ran on a PC using a graphics card for a WYSIWYG display and was offered commercially by Best Info in 1984. Desktop typesetting with only limited page makeup facilities arrived in 1978–1979 with the introduction of TeX, and was extended in 1985 with the introduction of LaTeX. The desktop publishing market took off in 1985 with the introduction in January of the Apple LaserWriter laser printer for the year-old Apple Macintosh personal computer. This momentum was kept up with the release that July of PageMaker software from Aldus, which rapidly became the standard software application for desktop publishing. With its advanced layout features, PageMaker immediately relegated word processors like Microsoft Word to the composition and editing of purely textual documents. Word did not begin to acquire desktop publishing features until a decade later, and by 2003, it was regarded only as "good" and not "great" at desktop publishing tasks. The term "desktop publishing" is attributed to Aldus founder Paul Brainerd, who sought a marketing catchphrase to describe the small size and relative affordability of this suite of products, in contrast to the expensive commercial phototypesetting equipment of the day. Before the advent of desktop publishing, the only option available to most people for producing typed documents (as opposed to handwritten documents) was a typewriter, which offered only a handful of typefaces (usually fixed-width) and one or two font sizes. Indeed, one popular desktop publishing book was titled The Mac is Not a Typewriter, and it had to actually explain how a Mac could do so much more than a typewriter. The ability to create WYSIWYG page layouts on screen and then print pages containing text and graphical elements at 300 dpi resolution was a major development for the personal computer industry. 
The ability to do all this with industry standards like PostScript also radically changed the traditional publishing industry, which at the time was accustomed to buying end-to-end turnkey solutions for digital typesetting which came with their own proprietary hardware workstations. Newspapers and other print publications began to transition to DTP-based programs from older layout systems such as Atex and other programs in the early 1980s. Desktop publishing was still in its early stage in the early 1980s. Users of the PageMaker/LaserWriter/Macintosh 512K system endured frequent software crashes, Mac's low-resolution 512x342 1-bit monochrome screen, the inability to control letter spacing, kerning, and other typographic features, and the discrepancies between screen display and printed output. However, it was an unheard-of combination at the time, and was received with considerable acclaim. Behind the scenes, technologies developed by Adobe Systems set the foundation for professional desktop publishing applications. The LaserWriter and LaserWriter Plus printers included scalable Adobe PostScript fonts built into their ROM memory. The LaserWriter's PostScript capability allowed publication designers to proof files on a local printer, then print the same file at DTP service bureaus using optical resolution 600+ ppi PostScript printers such as those from Linotronic. Later, the Macintosh II was released, which was considerably more suitable for desktop publishing due to its greater expandability, support for large color multi-monitor displays, and its SCSI storage interface (which allowed hard drives to be attached to the system). Macintosh-based systems continued to dominate the market into 1986, when the GEM-based Ventura Publisher was introduced for MS-DOS computers. PageMaker's pasteboard metaphor closely simulated the process of creating layouts manually, but Ventura Publisher automated the layout process through its use of tags and style sheets and automatically generated indices and other body matter. This made it particularly suitable for the creation of manuals and other long-format documents. Desktop publishing moved into the home market in 1986 with Professional Page for the Amiga, Publishing Partner (now PageStream) for the Atari ST, GST's Timeworks Publisher on the PC and Atari ST, and Calamus for the Atari TT030. Software was published even for 8-bit computers like the Apple II and Commodore 64: Home Publisher, The Newsroom, and geoPublish. During its early years, desktop publishing acquired a bad reputation as a result of untrained users who created poorly organized, unprofessional-looking "ransom note effect" layouts. (Similar criticism was leveled again against early World Wide Web publishers a decade later.) However, some desktop publishers who mastered the programs were able to achieve near professional results. Desktop publishing skills were considered of primary importance in career advancement in the 1980s, but increased accessibility to more user-friendly DTP software has made DTP a secondary skill to art direction, graphic design, multimedia development, marketing communications, and administrative careers. DTP skill levels range from what may be learned in a couple of hours (e.g., learning how to put clip art in a word processor), to what's typically required in a college education. The discipline of DTP skills range from technical skills such as prepress production and programming, to creative skills such as communication design and graphic image development. 
Apple computers remain dominant in publishing, even as the most popular software has changed from QuarkXPress – an estimated 95% market share in the 1990s – to Adobe InDesign. An Ars Technica writer said in an article: "I've heard about Windows-based publishing environments, but I've never actually seen one in my 20+ years in design and publishing". Terminology There are two types of pages in desktop publishing: digital pages and virtual paper pages to be printed on physical paper pages. All computerized documents are technically digital, which are limited in size only by computer memory or computer data storage space. Virtual paper pages will ultimately be printed, and will therefore require paper parameters coinciding with standard physical paper sizes such as A4, letter and legal paper. Alternatively, the virtual paper page may require a custom size for later trimming. Some desktop publishing programs allow custom sizes designated for large format printing used in posters, billboards and trade show displays. A virtual page for printing has a predesignated size of virtual printing material and can be viewed on a monitor in WYSIWYG format. Each page for printing has trim sizes (edge of paper) and a printable area if bleed printing is not possible as is the case with most desktop printers. A web page is an example of a digital page that is not constrained by virtual paper parameters. Most digital pages may be dynamically re-sized, causing either the content to scale in size with the page or the content to re-flow. Master pages are templates used to automatically copy or link elements and graphic design styles to some or all the pages of a multipage document. Linked elements can be modified without having to change each instance of an element on pages that use the same element. Master pages can also be used to apply graphic design styles to automatic page numbering. Cascading Style Sheets can provide the same global formatting functions for web pages that master pages provide for virtual paper pages. Page layout is the process by which the elements are laid out on the page in an orderly, aesthetic and precise manner. The main types of components to be laid out on a page include text, linked images (that can only be modified as an external source), and embedded images (that may be modified with the layout application software). Some embedded images are rendered in the application software, while others can be placed from an external source image file. Text may be keyed into the layout, placed, or – with database publishing applications – linked to an external source of text which allows multiple editors to develop a document at the same time. Graphic design styles such as color, transparency and filters may also be applied to layout elements. Typography styles may be applied to text automatically with style sheets. Some layout programs include style sheets for images in addition to text. Graphic styles for images may include border shapes, colors, transparency, filters, and a parameter designating the way text flows around the object (also known as "wraparound" or "runaround"). Comparisons With word processing While desktop publishing software still provides extensive features necessary for print publishing, modern word processors now have publishing capabilities beyond those of many older DTP applications, blurring the line between word processing and desktop publishing. 
In the early 1980s, the graphical user interface was still in its embryonic stage and DTP software was in a class of its own when compared to the leading word processing applications of the time. Programs such as WordPerfect and WordStar were still mainly text-based and offered little in the way of page layout, other than perhaps margins and line spacing. On the other hand, word processing software was necessary for features like indexing and spell checking – features that are common in many applications today. As computers and operating systems became more powerful, versatile, and user-friendly in the 2010s, vendors have sought to provide users with a single application that can meet almost all their publication needs. With other digital layout software In earlier modern-day usage, DTP usually did not include digital tools such as TeX or troff, though both can easily be used on a modern desktop system, and are standard with many Unix-like operating systems and are readily available for other systems. The key difference between digital typesetting software and DTP software is that DTP software is generally interactive and "What you see [onscreen] is what you get" (WYSIWYG) in design, while other digital typesetting software, such as TeX, LaTeX and other variants, tend to operate in "batch mode", requiring the user to enter the processing program's markup language (e.g. HTML) without immediate visualization of the finished product. This kind of workflow is less user-friendly than WYSIWYG, but more suitable for conference proceedings and scholarly articles as well as corporate newsletters or other applications where consistent, automated layout is important. In the 2010s, interactive front-end components of TeX, such as TeXworks and LyX, have produced "what you see is what you mean" (WYSIWYM) hybrids of DTP and batch processing. These hybrids are focused more on the semantics than the traditional DTP. Furthermore, with the advent of TeX editors the line between desktop publishing and markup-based typesetting is becoming increasingly narrow as well; a software which separates itself from the TeX world and develops itself in the direction of WYSIWYG markup-based typesetting is GNU TeXmacs. On a different note, there is a slight overlap between desktop publishing and what is known as hypermedia publishing (e.g. web design, kiosk, CD-ROM). Many graphical HTML editors such as Microsoft FrontPage and Adobe Dreamweaver use a layout engine similar to that of a DTP program. However, many web designers still prefer to write HTML without the assistance of a WYSIWYG editor, for greater control and ability to fine-tune the appearance and functionality. Another reason that some Web designers write in HTML is that WYSIWYG editors often result in excessive lines of code, leading to code bloat that can make the pages hard to troubleshoot. With web design Desktop publishing produces primarily static print or digital media, the focus of this article. Similar skills, processes, and terminology are used in web design. Digital typography is the specialization of typography for desktop publishing. Web typography addresses typography and the use of fonts on the World Wide Web. Desktop style sheets apply formatting for print, Web Cascading Style Sheets (CSS) provide format control for web display. Web HTML font families map website font usage to the fonts available on the user's web browser or display device. Software A wide variety of DTP applications and websites are available and are listed separately. 
File formats The design industry standard is PDF. The older EPS format is also used and supported by most applications.
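As a rough, hedged illustration of the page-geometry terms introduced in the Terminology section above (trim size, bleed, and printable area), the following minimal Python sketch computes those boxes for an A4 virtual paper page; the margin and bleed values are illustrative assumptions, not figures taken from this article.

```python
# Minimal sketch of DTP page geometry (illustrative values, not from this article).
# Trim size: the final paper edge after cutting.
# Bleed box: the trim size plus extra printed area that is cut away when bleed printing is available.
# Printable area: the region inside the margins that a typical desktop printer can actually reach.

def bleed_box(trim_w, trim_h, bleed):
    """Width and height of the page including bleed on all four sides (same units in and out)."""
    return trim_w + 2 * bleed, trim_h + 2 * bleed

def printable_area(trim_w, trim_h, margin):
    """Width and height of the printable area inside a uniform unprintable margin."""
    return trim_w - 2 * margin, trim_h - 2 * margin

if __name__ == "__main__":
    trim_w, trim_h = 210, 297                  # A4 trim size in millimetres
    print(bleed_box(trim_w, trim_h, 3))        # (216, 303) with an assumed 3 mm bleed
    print(printable_area(trim_w, trim_h, 5))   # (200, 287) with an assumed 5 mm margin
```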
Technology
Computer software
null
54267
https://en.wikipedia.org/wiki/Floor%20and%20ceiling%20functions
Floor and ceiling functions
In mathematics, the floor function is the function that takes as input a real number x, and gives as output the greatest integer less than or equal to x, denoted ⌊x⌋ or floor(x). Similarly, the ceiling function maps x to the least integer greater than or equal to x, denoted ⌈x⌉ or ceil(x). For example, for floor: ⌊2.4⌋ = 2 and ⌊−2.4⌋ = −3, and for ceiling: ⌈2.4⌉ = 3 and ⌈−2.4⌉ = −2. The floor of x is also called the integral part, integer part, greatest integer, or entier of x, and was historically denoted [x] (among other notations). However, the same term, integer part, is also used for truncation towards zero, which differs from the floor function for negative numbers. For n an integer, ⌊n⌋ = ⌈n⌉ = n. Although ⌊x⌋ + 1 and ⌈x⌉ produce graphs that appear exactly alike, they are not the same when the value of x is an exact integer. For example, when x = 2.0001, ⌊x⌋ + 1 = ⌈x⌉ = 3. However, if x = 2, then ⌊x⌋ + 1 = 3, while ⌈x⌉ = 2. Notation The integral part or integer part of a number (partie entière in the original) was first defined in 1798 by Adrien-Marie Legendre in his proof of Legendre's formula. Carl Friedrich Gauss introduced the square bracket notation [x] in his third proof of quadratic reciprocity (1808). This remained the standard in mathematics until Kenneth E. Iverson introduced, in his 1962 book A Programming Language, the names "floor" and "ceiling" and the corresponding notations ⌊x⌋ and ⌈x⌉. (Iverson used square brackets for a different purpose, the Iverson bracket notation.) Both notations are now used in mathematics, although Iverson's notation will be followed in this article. In some sources, boldface or double brackets ⟦x⟧ are used for floor, and reversed brackets ⌉x⌈ or ]x[ for ceiling. The fractional part is the sawtooth function, denoted by {x} for real x and defined by the formula {x} = x − ⌊x⌋. For all x, 0 ≤ {x} < 1. These characters are provided in Unicode: U+2308 ⌈ (left ceiling), U+2309 ⌉ (right ceiling), U+230A ⌊ (left floor) and U+230B ⌋ (right floor). In the LaTeX typesetting system, these symbols can be specified with the \lceil, \rceil, \lfloor, and \rfloor commands in math mode. LaTeX has supported UTF-8 since 2018, so the Unicode characters can now be used directly. Larger versions are \left\lceil x \right\rceil and \left\lfloor x \right\rfloor. Definition and properties Given real numbers x and y, integers m and n and the set of integers ℤ, floor and ceiling may be defined by the equations ⌊x⌋ = max{m ∈ ℤ | m ≤ x} and ⌈x⌉ = min{n ∈ ℤ | n ≥ x}. Since there is exactly one integer in a half-open interval of length one, for any real number x, there are unique integers m and n satisfying the equation x − 1 < m ≤ x ≤ n < x + 1, where ⌊x⌋ = m and ⌈x⌉ = n may also be taken as the definition of floor and ceiling. Equivalences These formulas can be used to simplify expressions involving floors and ceilings: ⌊x⌋ = m if and only if m ≤ x < m + 1, and ⌈x⌉ = n if and only if n − 1 < x ≤ n. In the language of order theory, the floor function is a residuated mapping, that is, part of a Galois connection: it is the upper adjoint of the function that embeds the integers into the reals. These formulas show how adding an integer n to the arguments affects the functions: ⌊x + n⌋ = ⌊x⌋ + n, ⌈x + n⌉ = ⌈x⌉ + n, and {x + n} = {x}. The above are never true if n is not an integer; however, for every x and y, the following inequalities hold: ⌊x⌋ + ⌊y⌋ ≤ ⌊x + y⌋ ≤ ⌊x⌋ + ⌊y⌋ + 1 and ⌈x⌉ + ⌈y⌉ − 1 ≤ ⌈x + y⌉ ≤ ⌈x⌉ + ⌈y⌉. Monotonicity Both floor and ceiling functions are monotonically non-decreasing functions: x₁ ≤ x₂ implies ⌊x₁⌋ ≤ ⌊x₂⌋ and ⌈x₁⌉ ≤ ⌈x₂⌉. Relations among the functions It is clear from the definitions that ⌊x⌋ ≤ ⌈x⌉, with equality if and only if x is an integer, i.e. ⌈x⌉ − ⌊x⌋ equals 0 if x is an integer and 1 otherwise. In fact, for integers n, both floor and ceiling functions are the identity: ⌊n⌋ = ⌈n⌉ = n. Negating the argument switches floor and ceiling and changes the sign: ⌊x⌋ + ⌈−x⌉ = 0, and: ⌊−x⌋ = −⌈x⌉, ⌈−x⌉ = −⌊x⌋. Negating the argument complements the fractional part: {x} + {−x} equals 1 if x is not an integer and 0 if it is. The floor, ceiling, and fractional part functions are idempotent: ⌊⌊x⌋⌋ = ⌊x⌋, ⌈⌈x⌉⌉ = ⌈x⌉, {{x}} = {x}. The result of nested floor or ceiling functions is the innermost function: ⌊⌈x⌉⌋ = ⌈x⌉ and ⌈⌊x⌋⌉ = ⌊x⌋, due to the identity property for integers. 
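A short Python sketch (not part of the article itself) can make the distinctions above concrete: floor returns the greatest integer not exceeding x, ceiling the least integer not less than x, and truncation ("integer part" towards zero) agrees with floor only for non-negative numbers; all three coincide at exact integers. The sample values are illustrative only.

```python
import math

# Floor, ceiling, and truncation for a few sample values (illustrative numbers only).
for x in (2.4, -2.4, 2.0):
    print(f"x={x:+.1f}  floor={math.floor(x):+d}  ceil={math.ceil(x):+d}  trunc={math.trunc(x):+d}")

# Expected output:
# x=+2.4  floor=+2  ceil=+3  trunc=+2
# x=-2.4  floor=-3  ceil=-2  trunc=-2   (floor and truncation differ for negative numbers)
# x=+2.0  floor=+2  ceil=+2  trunc=+2   (all three agree at exact integers)
```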
Quotients If m and n are integers and n ≠ 0, If n is a positive integer If m is positive For m = 2 these imply More generally, for positive m (See Hermite's identity) The following can be used to convert floors to ceilings and vice versa (m positive) For all m and n strictly positive integers: which, for positive and coprime m and n, reduces to and similarly for the ceiling and fractional part functions (still for positive and coprime m and n), Since the right-hand side of the general case is symmetrical in m and n, this implies that More generally, if m and n are positive, This is sometimes called a reciprocity law. Division by positive integers gives rise to an interesting and sometimes useful property. Assuming , Similarly, Indeed, keeping in mind that The second equivalence involving the ceiling function can be proved similarly. Nested divisions For positive integer n, and arbitrary real numbers m,x: Continuity and series expansions None of the functions discussed in this article are continuous, but all are piecewise linear: the functions , , and have discontinuities at the integers. is upper semi-continuous and and are lower semi-continuous. Since none of the functions discussed in this article are continuous, none of them have a power series expansion. Since floor and ceiling are not periodic, they do not have uniformly convergent Fourier series expansions. The fractional part function has Fourier series expansion for not an integer. At points of discontinuity, a Fourier series converges to a value that is the average of its limits on the left and the right, unlike the floor, ceiling and fractional part functions: for y fixed and x a multiple of y the Fourier series given converges to y/2, rather than to x mod y = 0. At points of continuity the series converges to the true value. Using the formula gives for not an integer. Applications Mod operator For an integer x and a positive integer y, the modulo operation, denoted by x mod y, gives the value of the remainder when x is divided by y. This definition can be extended to real x and y, y ≠ 0, by the formula Then it follows from the definition of floor function that this extended operation satisfies many natural properties. Notably, x mod y is always between 0 and y, i.e., if y is positive, and if y is negative, Quadratic reciprocity Gauss's third proof of quadratic reciprocity, as modified by Eisenstein, has two basic steps. Let p and q be distinct positive odd prime numbers, and let First, Gauss's lemma is used to show that the Legendre symbols are given by The second step is to use a geometric argument to show that Combining these formulas gives quadratic reciprocity in the form There are formulas that use floor to express the quadratic character of small numbers mod odd primes p: Rounding For an arbitrary real number , rounding to the nearest integer with tie breaking towards positive infinity is given by ; rounding towards negative infinity is given as . If tie-breaking is away from 0, then the rounding function is (see sign function), and rounding towards even can be expressed with the more cumbersome , which is the above expression for rounding towards positive infinity minus an integrality indicator for . Rounding a real number to the nearest integer value forms a very basic type of quantizer – a uniform one. 
A typical (mid-tread) uniform quantizer with a quantization step size equal to some value can be expressed as , Number of digits The number of digits in base b of a positive integer k is Number of strings without repeated characters The number of possible strings of arbitrary length that doesn't use any character twice is given by where: > 0 is the number of letters in the alphabet (e.g., 26 in English) the falling factorial denotes the number of strings of length that don't use any character twice. ! denotes the factorial of = 2.718... is Euler's number For = 26, this comes out to 1096259850353149530222034277. Factors of factorials Let n be a positive integer and p a positive prime number. The exponent of the highest power of p that divides n! is given by a version of Legendre's formula where is the way of writing n in base p. This is a finite sum, since the floors are zero when pk > n. Beatty sequence The Beatty sequence shows how every positive irrational number gives rise to a partition of the natural numbers into two sequences via the floor function. Euler's constant (γ) There are formulas for Euler's constant γ = 0.57721 56649 ... that involve the floor and ceiling, e.g. and Riemann zeta function (ζ) The fractional part function also shows up in integral representations of the Riemann zeta function. It is straightforward to prove (using integration by parts) that if is any function with a continuous derivative in the closed interval [a, b], Letting for real part of s greater than 1 and letting a and b be integers, and letting b approach infinity gives This formula is valid for all s with real part greater than −1, (except s = 1, where there is a pole) and combined with the Fourier expansion for {x} can be used to extend the zeta function to the entire complex plane and to prove its functional equation. For s = σ + it in the critical strip 0 < σ < 1, In 1947 van der Pol used this representation to construct an analogue computer for finding roots of the zeta function. Formulas for prime numbers The floor function appears in several formulas characterizing prime numbers. For example, since is equal to 1 if m divides n, and to 0 otherwise, it follows that a positive integer n is a prime if and only if One may also give formulas for producing the prime numbers. For example, let pn be the n-th prime, and for any integer r > 1, define the real number α by the sum Then A similar result is that there is a number θ = 1.3064... (Mills' constant) with the property that are all prime. There is also a number ω = 1.9287800... with the property that are all prime. Let (x) be the number of primes less than or equal to x. It is a straightforward deduction from Wilson's theorem that Also, if n ≥ 2, None of the formulas in this section are of any practical use. Solved problems Ramanujan submitted these problems to the Journal of the Indian Mathematical Society. If n is a positive integer, prove that Some generalizations to the above floor function identities have been proven. Unsolved problem The study of Waring's problem has led to an unsolved problem: Are there any positive integers k ≥ 6 such that Mahler has proved there can only be a finite number of such k; none are known. Computer implementations In most programming languages, the simplest method to convert a floating point number to an integer does not do floor or ceiling, but truncation. The reason for this is historical, as the first machines used ones' complement and truncation was simpler to implement (floor is simpler in two's complement). 
FORTRAN was defined to require this behavior and thus almost all processors implement conversion this way. Some consider this to be an unfortunate historical design decision that has led to bugs handling negative offsets and graphics on the negative side of the origin. An arithmetic right-shift of a signed integer x by n is the same as ⌊x / 2^n⌋. Division by a power of 2 is often written as a right-shift, not for optimization as might be assumed, but because the floor of negative results is required. Assuming such shifts are "premature optimization" and replacing them with division can break software. Many programming languages (including C, C++, C#, Java, Julia, PHP, R, and Python) provide standard functions for floor and ceiling, usually called floor and ceil, or less commonly ceiling. The language APL uses ⌊x for floor. The J Programming Language, a follow-on to APL that is designed to use standard keyboard symbols, uses <. for floor and >. for ceiling. ALGOL uses entier for floor. In Microsoft Excel the function INT rounds down rather than toward zero, while FLOOR rounds toward zero, the opposite of what "int" and "floor" do in other languages. Since 2010 FLOOR has been changed to return an error if the number is negative. In the OpenDocument file format, as used by OpenOffice.org, LibreOffice and others, INT and FLOOR both do floor, and FLOOR has a third argument to reproduce Excel's earlier behavior.
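The implementation points above can be checked directly in Python, shown here as a small illustrative sketch (the behavior is standard Python semantics, but the sample values are arbitrary): int() truncates toward zero while math.floor rounds toward minus infinity, an arithmetic right shift by n matches ⌊x / 2^n⌋, and the built-in modulo follows the floor convention, so the result takes the sign of the divisor.

```python
import math

# Conversion with int() truncates toward zero; math.floor rounds toward minus infinity.
print(int(-7.5), math.floor(-7.5))        # -7 -8

# An arithmetic right shift of a signed integer equals floor division by a power of two.
x = -7
print(x >> 1, x // 2, math.floor(x / 2))  # -4 -4 -4

# Python's % implements x mod y = x - y*floor(x/y), so the result has the divisor's sign.
print(-7 % 3, 7 % -3)                     # 2 -2
```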
Mathematics
Specific functions
null
54301
https://en.wikipedia.org/wiki/Polycystic%20ovary%20syndrome
Polycystic ovary syndrome
Polycystic ovary syndrome, or polycystic ovarian syndrome (PCOS), is the most common endocrine disorder in women of reproductive age. The syndrome is named after cysts which form on the ovaries of some women with this condition, though this is not a universal symptom, and not the underlying cause of the disorder. The primary characteristics of PCOS include hyperandrogenism, anovulation, insulin resistance, and neuroendocrine disruption. Women may also experience irregular menstrual periods, heavy periods, excess hair, acne, pelvic pain, difficulty getting pregnant, and patches of darker skin. Beyond its reproductive implications, PCOS is increasingly recognized as a multifactorial metabolic condition with significant long-term health consequences, including an elevated risk of cardiovascular disease and type 2 diabetes. A review of international evidence found that the prevalence of PCOS could be as high as 26% among some populations, though ranges between 4% and 18% are reported for general populations. According to the World Health Organization (WHO), PCOS affects over 8-13% of reproductive-aged women. The exact cause of PCOS remains uncertain, and treatment involves management of symptoms using medication. Definition Two definitions are commonly used: NIH In 1990, a consensus workshop sponsored by the NIH/NICHD suggested that a person has PCOS if they have all of the following: Rotterdam In 2003, a consensus workshop sponsored by ESHRE/ASRM in Rotterdam indicated PCOS to be present if any two out of three criteria are met, in the absence of other entities that might cause these findings:The Rotterdam definition is wider, including many more women, the most notable ones being women without androgen excess. Critics say that findings obtained from the study of women with androgen excess cannot necessarily be extrapolated to women without androgen excess. Androgen Excess PCOS Society In 2006, the Androgen Excess PCOS Society suggested a tightening of the diagnostic criteria to all of the following: Signs and symptoms Signs and symptoms of PCOS include irregular or no menstrual periods, heavy periods, excess body and facial hair, acne, pelvic pain, difficulty getting pregnant, and patches of thick, darker, velvety skin, ovarian cysts, enlarged ovaries, excess androgens, and weight gain. Associated conditions include type 2 diabetes, obesity, obstructive sleep apnea, heart disease, mood disorders, and endometrial cancer. Common signs and symptoms of PCOS include the following: Menstrual disorders: PCOS mostly produces oligomenorrhea (fewer than nine menstrual periods in a year) or amenorrhea (no menstrual periods for three or more consecutive months), but other types of menstrual disorders may also occur. Infertility: This generally results directly from chronic anovulation (lack of ovulation). High levels of masculinizing hormones: Known as hyperandrogenism, the most common signs are acne and hirsutism (male pattern of hair growth, such as on the chin or chest), but it may produce hypermenorrhea (heavy and prolonged menstrual periods), androgenic alopecia (increased hair thinning or diffuse hair loss), or other symptoms. Approximately three-quarters of women with PCOS (by the diagnostic criteria of NIH/NICHD 1990) have evidence of hyperandrogenemia. Metabolic syndrome: This appears as a tendency towards central obesity and other symptoms associated with insulin resistance, including low energy levels and food cravings. 
Serum insulin, insulin resistance, and homocysteine levels are higher in women with PCOS. Acne: A rise in testosterone levels, increases the oil production within the sebaceous glands and clogs pores. For many women, the emotional impact is great and quality of life can be significantly reduced. Androgenic alopecia: Estimates suggest that androgenic alopecia affects 22% of PCOS sufferers. This is a result of high testosterone levels that are converted into the dihydrotestosterone (DHT) hormone. Hair follicles become clogged, making hair fall out and preventing further growth. Acanthosis nigricans (AN): A skin condition where dark, thick and "velvety" patches can form. Polycystic ovaries: There are small cysts on one or both ovaries. Ovaries might get enlarged and comprise follicles surrounding the eggs. As result, ovaries might fail to function regularly. This disease is related to the number of follicles per ovary each month growing from the average range of 6–8 to double, triple or more. Women with PCOS have higher risk of multiple diseases including infertility, type 2 diabetes mellitus (DM-2), cardiovascular risk, metabolic syndrome, obesity, impaired glucose tolerance, depression, obstructive sleep apnea (OSA), endometrial cancer, and nonalcoholic fatty liver disease/nonalcoholic steatohepatitis (NAFLD/NASH). Women with PCOS tend to have central obesity, but studies are conflicting as to whether visceral and subcutaneous abdominal fat is increased, unchanged, or decreased in women with PCOS relative to non-PCOS women with the same body mass index. In any case, androgens, such as testosterone, androstanolone (dihydrotestosterone), and nandrolone decanoate have been found to increase visceral fat deposition in both female animals and women. Although 80% of PCOS presents in women with obesity, 20% of women diagnosed with the disease are non-obese or "lean" women. However, obese women that have PCOS have a higher risk of adverse outcomes, such as hypertension, insulin resistance, metabolic syndrome, and endometrial hyperplasia. Even though most women with PCOS are overweight or obese, it is important to acknowledge that non-overweight women can also be diagnosed with PCOS. Up to 30% of women diagnosed with PCOS maintain a normal weight before and after diagnosis. "Lean" women still face the various symptoms of PCOS with the added challenges of having their symptoms properly addressed and recognized. Lean women often go undiagnosed for years, and usually are diagnosed after struggles to conceive. Lean women are likely to have a missed diagnosis of diabetes and cardiovascular disease. These women also have an increased risk of developing insulin resistance, despite not being overweight. Lean women are often taken less seriously with their diagnosis of PCOS, and also face challenges finding appropriate treatment options. This is because most treatment options are limited to approaches of losing weight and healthy dieting. Hormone levels Testosterone levels are usually elevated in women with PCOS. In a 2020 systematic review and meta-analysis of sexual dysfunction related to PCOS which included 5,366 women with PCOS from 21 studies, testosterone levels were analyzed and were found to be 2.34 nmol/L (67 ng/dL) in women with PCOS and 1.57 nmol/L (45 ng/dL) in women without PCOS. In a 1995 study of 1,741 women with PCOS, mean testosterone levels were 2.6 (1.1–4.8) nmol/L (75 (32–140) ng/dL). 
In a 1998 study which reviewed many studies and subjected them to meta-analysis, testosterone levels in women with PCOS were 62 to 71 ng/dL (2.2–2.5 nmol/L) and testosterone levels in women without PCOS were about 32 ng/dL (1.1 nmol/L). In a 2010 study of 596 women with PCOS which used liquid chromatography–mass spectrometry (LC–MS) to quantify testosterone, median levels of testosterone were 41 and 47 ng/dL (with 25th–75th percentiles of 34–65 ng/dL and 27–58 ng/dL and ranges of 12–184 ng/dL and 1–205 ng/dL) via two different labs. If testosterone levels are above 100 to 200 ng/dL, per different sources, other possible causes of hyperandrogenism, such as congenital adrenal hyperplasia or an androgen-secreting tumor, may be present and should be excluded. Associated conditions Warning signs may include a change in appearance. But there are also manifestations of mental health problems, such as anxiety, depression, and eating disorders. A diagnosis of PCOS suggests an increased risk of the following: Endometrial hyperplasia and endometrial cancer (cancer of the uterine lining) are possible, due to overaccumulation of uterine lining, and also lack of progesterone, resulting in prolonged stimulation of uterine cells by estrogen. It is not clear whether this risk is directly due to the syndrome or from the associated obesity, hyperinsulinemia, and hyperandrogenism. Insulin resistance/type 2 diabetes. A review published in 2010 concluded that women with PCOS have an elevated prevalence of insulin resistance and type 2 diabetes, even when controlling for body mass index (BMI). PCOS is also associated with higher risk for diabetes. High blood pressure, in particular if obese or during pregnancy Depression and anxiety Dyslipidemia – disorders of lipid metabolism – cholesterol and triglycerides. Women with PCOS show a decreased removal of atherosclerosis-inducing remnants, seemingly independent of insulin resistance/type 2 diabetes. Cardiovascular disease, with a meta-analysis estimating a 2-fold risk of arterial disease for women with PCOS relative to women without PCOS, independent of BMI. Strokes Weight gain Miscarriage Sleep apnea, particularly if obesity is present Non-alcoholic fatty liver disease, particularly if obesity is present Acanthosis nigricans (patches of darkened skin under the arms, in the groin area, on the back of the neck) Autoimmune thyroiditis Iron deficiency The risk of ovarian cancer and breast cancer is not significantly increased overall. Cause PCOS is a heterogeneous disorder of uncertain cause. There is some evidence that it is a genetic disease. Such evidence includes the familial clustering of cases, greater concordance in monozygotic compared with dizygotic twins and heritability of endocrine and metabolic features of PCOS. There is some evidence that exposure to higher than typical levels of androgens and the anti-Müllerian hormone (AMH) in utero increases the risk of developing PCOS in later life. It may be caused by a combination of genetic and environmental factors. Risk factors include obesity, a lack of physical exercise, and a family history of someone with the condition. Diagnosis is based on two of the following three findings: anovulation, high androgen levels, and ovarian cysts. Cysts may be detectable by ultrasound. Other conditions that produce similar symptoms include adrenal hyperplasia, hypothyroidism, and high blood levels of prolactin. 
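The studies above quote testosterone in both nmol/L and ng/dL. As a hedged arithmetic check (assuming a molar mass of roughly 288.4 g/mol for testosterone, so 1 nmol/L ≈ 28.8 ng/dL), the paired figures reported are mutually consistent; the snippet below is illustrative only and not part of the cited studies.

```python
# Illustrative unit conversion only; assumes a testosterone molar mass of ~288.42 g/mol.
NG_PER_DL_PER_NMOL_PER_L = 288.42 / 10   # 1 nmol/L of testosterone ~= 28.8 ng/dL

for nmol_per_l in (2.34, 1.57, 2.6):     # values quoted in the studies above
    print(f"{nmol_per_l} nmol/L ~= {nmol_per_l * NG_PER_DL_PER_NMOL_PER_L:.0f} ng/dL")
# 2.34 nmol/L ~= 67 ng/dL; 1.57 nmol/L ~= 45 ng/dL; 2.6 nmol/L ~= 75 ng/dL
```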
Genetics The genetic component appears to be inherited in an autosomal dominant fashion with high genetic penetrance but variable expressivity in females; this means that each child has a 50% chance of inheriting the predisposing genetic variant(s) from a parent, and, if a daughter receives the variant(s), the daughter will have the disease to some extent. The genetic variant(s) can be inherited from either the father or the mother, and can be passed along to both sons (who may be asymptomatic carriers or may have symptoms such as early baldness and/or excessive hair) and daughters, who will show signs of PCOS. The phenotype appears to manifest itself at least partially via heightened androgen levels secreted by ovarian follicle theca cells from women with the allele. The exact gene affected has not yet been identified. In rare instances, single-gene mutations can give rise to the phenotype of the syndrome. Current understanding of the pathogenesis of the syndrome suggests, however, that it is a complex multigenic disorder. Due to the scarcity of large-scale screening studies, the prevalence of endometrial abnormalities in PCOS remains unknown, though women with the condition may be at increased risk for endometrial hyperplasia and carcinoma as well as menstrual dysfunction and infertility. The severity of PCOS symptoms appears to be largely determined by factors such as obesity. PCOS has some aspects of a metabolic disorder, since its symptoms are partly reversible. Even though it is often considered a gynecological problem, PCOS encompasses some 28 clinical symptoms. Even though the name suggests that the ovaries are central to disease pathology, cysts are a symptom instead of the cause of the disease. Some symptoms of PCOS will persist even if both ovaries are removed; the disease can appear even if cysts are absent. Since its first description by Stein and Leventhal in 1935, the criteria of diagnosis, symptoms, and causative factors have been subject to debate. Gynecologists often see it as a gynecological problem, with the ovaries being the primary organ affected. However, recent insights show a multisystem disorder, with the primary problem lying in hormonal regulation in the hypothalamus and involving many organs. The term PCOS is used because a wide spectrum of symptoms is possible. It is common to have polycystic ovaries without having PCOS; approximately 20% of European women have polycystic ovaries, but most of those women do not have PCOS. Environment PCOS may be related to or worsened by exposures during the prenatal period, epigenetic factors, environmental impacts (especially industrial endocrine disruptors, such as bisphenol A and certain drugs) and the increasing rates of obesity. Endocrine disruptors are defined as chemicals that can interfere with the endocrine system by mimicking hormones such as estrogen. According to the National Institutes of Health (NIH), examples of endocrine disruptors can include dioxins and triclosan. Endocrine disruptors can cause adverse health impacts in animals. Additional research is needed to assess the role that endocrine disruptors may play in disrupting reproductive health in women and possibly triggering or exacerbating PCOS and its related symptoms. 
Pathogenesis Polycystic ovaries develop when the ovaries are stimulated to produce excessive amounts of androgenic hormones, in particular testosterone, by either one or a combination of the following (almost certainly combined with genetic susceptibility): the release of excessive luteinizing hormone (LH) by the anterior pituitary gland, or high levels of insulin in the blood (hyperinsulinaemia) in women whose ovaries are sensitive to this stimulus. A majority of women with PCOS have insulin resistance and/or are obese, which is a strong risk factor for insulin resistance, although insulin resistance is a common finding among normal-weight women with PCOS as well. Elevated insulin levels contribute to or cause the abnormalities seen in the hypothalamic–pituitary–ovarian axis that lead to PCOS. Hyperinsulinemia increases GnRH pulse frequency, which in turn results in an increase in the LH/FSH ratio; increased ovarian androgen production; decreased follicular maturation; and decreased SHBG binding. Furthermore, excessive insulin increases the activity of 17α-hydroxylase, which catalyzes the conversion of progesterone to androstenedione, which is in turn converted to testosterone. The combined effects of hyperinsulinemia contribute to an increased risk of PCOS. Adipose (fat) tissue possesses aromatase, an enzyme that converts androstenedione to estrone and testosterone to estradiol. The excess of adipose tissue in obese women creates the paradox of having both excess androgens (which are responsible for hirsutism and virilization) and excess estrogens (which inhibit FSH via negative feedback). The syndrome acquired its most widely used name due to the common sign on ultrasound examination of multiple (poly) ovarian cysts. These "cysts" are in fact immature ovarian follicles. The follicles have developed from primordial follicles, but this development has stopped ("arrested") at an early stage, due to the disturbed ovarian function. The follicles may be oriented along the ovarian periphery, appearing as a 'string of pearls' on ultrasound examination. PCOS may be associated with chronic inflammation, with several investigators correlating inflammatory mediators with anovulation and other PCOS symptoms. Similarly, there seems to be a relation between PCOS and an increased level of oxidative stress. Diagnosis Not every person with PCOS has polycystic ovaries (PCO), nor does everyone with ovarian cysts have PCOS; although a pelvic ultrasound is a major diagnostic tool, it is not the only one. The diagnosis is fairly straightforward using the Rotterdam criteria, even when the syndrome is associated with a wide range of symptoms. Differential diagnosis Other causes of irregular or absent menstruation and hirsutism, such as hypothyroidism, congenital adrenal hyperplasia (21-hydroxylase deficiency) (which may cause excessive body hair, deepening of the voice, and other symptoms similar to hyperandrogenism), Cushing's syndrome, hyperprolactinemia (leading to anovulation), androgen-secreting neoplasms, and other pituitary or adrenal disorders, should be investigated. Assessment and testing Standard assessment History-taking, specifically for menstrual pattern, obesity, hirsutism and acne. A clinical prediction rule found that these four questions can diagnose PCOS with a sensitivity of 77.1% (95% confidence interval [CI] 62.7%–88.0%) and a specificity of 93.8% (95% CI 82.8%–98.7%). Gynecologic ultrasonography, specifically looking for small ovarian follicles. 
These are believed to be the result of disturbed ovarian function with failed ovulation, reflected by the infrequent or absent menstruation that is typical of the condition. In a normal menstrual cycle, one egg is released from a dominant follicle – in essence, a cyst that bursts to release the egg. After ovulation, the follicle remnant is transformed into a progesterone-producing corpus luteum, which shrinks and disappears after approximately 12–14 days. In PCOS, there is a so-called "follicular arrest"; i.e., several follicles develop to a size of 5–7 mm, but not further. No single follicle reaches the preovulatory size (16 mm or more). According to the Rotterdam criteria, which are widely used for diagnosis of PCOS, 12 or more small follicles should be seen in a suspect ovary on ultrasound examination. More recent research suggests that there should be at least 25 follicles in an ovary to designate it as having polycystic ovarian morphology (PCOM) in women aged 18–35 years. The follicles may be oriented in the periphery, giving the appearance of a 'string of pearls'. If a high-resolution transvaginal ultrasonography machine is not available, an ovarian volume of at least 10 ml is regarded as an acceptable definition of having polycystic ovarian morphology, rather than follicle count. Laparoscopic examination may reveal a thickened, smooth, pearl-white outer surface of the ovary. (This would usually be an incidental finding if laparoscopy were performed for some other reason, as it would not be routine to examine the ovaries in this way to confirm a diagnosis of PCOS.) Serum (blood) levels of androgens, including androstenedione and testosterone may be elevated. Dehydroepiandrosterone sulfate (DHEA-S) levels above 700–800 μg/dL are highly suggestive of adrenal dysfunction because DHEA-S is made exclusively by the adrenal glands. The free testosterone level is thought to be the best measure, with approximately 60 per cent of PCOS patients demonstrating supranormal levels. Some other blood tests are suggestive but not diagnostic. The ratio of LH (luteinizing hormone) to FSH (follicle-stimulating hormone), when measured in international units, is elevated in women with PCOS. Common cut-offs to designate abnormally high LH/FSH ratios are 2:1 or 3:1 as tested on day 3 of the menstrual cycle. The pattern is not very sensitive; a ratio of 2:1 or higher was present in less than 50% of women with PCOS in one study. There are often low levels of sex hormone-binding globulin, in particular among obese or overweight women. Anti-Müllerian hormone (AMH) is increased in PCOS, and may become part of its diagnostic criteria. Glucose tolerance testing Two-hour oral glucose tolerance test (GTT) in women with risk factors (obesity, family history, history of gestational diabetes) may indicate impaired glucose tolerance (insulin resistance) in 15–33% of women with PCOS. Frank diabetes can be seen in 65–68% of women with this condition. Insulin resistance can be observed in both normal weight and overweight people, although it is more common in the latter (and in those matching the stricter NIH criteria for diagnosis); 50–80% of people with PCOS may have insulin resistance at some level. Fasting insulin level or GTT with insulin levels (also called IGTT). Elevated insulin levels have been helpful to predict response to medication and may indicate women needing higher doses of metformin or the use of a second medication to significantly lower insulin levels. 
Elevated blood sugar and insulin values do not predict who responds to an insulin-lowering medication, low-glycemic diet, and exercise. Many women with normal levels may benefit from combination therapy. A hypoglycemic response in which the two-hour insulin level is higher and the blood sugar lower than fasting is consistent with insulin resistance. A mathematical derivation known as the HOMA index (HOMA-IR), calculated from fasting glucose and insulin concentrations, provides a moderately accurate measure of insulin resistance (fasting glucose in mmol/L multiplied by fasting insulin in μU/mL, divided by 22.5). Management PCOS has no cure. Treatment may involve lifestyle changes such as weight loss and exercise. Recent research suggests that daily exercise, including both aerobic and strength activities, can improve hormone imbalances. Birth control pills may help with improving the regularity of periods, excess hair growth, and acne. Combined oral contraceptives are especially effective and are used as the first-line treatment to reduce acne and hirsutism and to regulate the menstrual cycle. This is especially the case in adolescents. Metformin, GLP-1 receptor agonists, and anti-androgens may also help. Other typical acne treatments and hair removal techniques may be used. Efforts to improve fertility include weight loss, metformin, and ovulation induction using clomiphene or letrozole. In vitro fertilization is used by some in whom other measures are not effective. Certain cosmetic procedures may also help alleviate symptoms in some cases. For example, laser hair removal, electrolysis, waxing, plucking, and shaving are all effective methods for reducing hirsutism. The primary treatments for PCOS include lifestyle changes and use of medications. Goals of treatment may be considered under these categories: Lowering of insulin resistance Reducing androgen and testosterone levels Restoration of fertility Treatment of hirsutism or acne Restoration of regular menstruation, and prevention of endometrial hyperplasia and endometrial cancer In each of these areas, there is considerable debate as to the optimal treatment. One of the major factors underlying the debate is the lack of large-scale clinical trials comparing different treatments. Smaller trials tend to be less reliable and hence may produce conflicting results. General interventions that help to reduce weight or insulin resistance can be beneficial for all these aims, because they address what is believed to be the underlying cause. As PCOS appears to cause significant emotional distress, appropriate support may also be useful. Diet Where PCOS is associated with overweight or obesity, successful weight loss is the most effective method of restoring normal ovulation/menstruation. The American Association of Clinical Endocrinologists guidelines recommend a goal of achieving 10–15% weight loss or more, which improves insulin resistance and all hormonal disorders. Still, many women find it very difficult to achieve and sustain significant weight loss. Insulin resistance itself can cause increased food cravings and lower energy levels, which can make it difficult to lose weight on a regular weight-loss diet. A scientific review in 2013 found similar improvements in weight, body composition and pregnancy rate, menstrual regularity, ovulation, hyperandrogenism, insulin resistance, lipids, and quality of life to occur with weight loss, independent of diet composition. 
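A minimal sketch of the HOMA arithmetic mentioned above, assuming the conventional units of mmol/L for fasting glucose and μU/mL for fasting insulin; the function name and example values are illustrative only, not reference thresholds.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uu_ml: float) -> float:
    # HOMA index: fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5
    return fasting_glucose_mmol_l * fasting_insulin_uu_ml / 22.5

# Illustrative values only: glucose 5.0 mmol/L, insulin 12 uU/mL
print(round(homa_ir(5.0, 12.0), 2))  # 2.67
```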
With regard to diet composition, however, a low GI diet, in which a significant portion of total carbohydrates is obtained from fruit, vegetables, and whole-grain sources, has resulted in greater menstrual regularity than a macronutrient-matched healthy diet. Reducing intake of food groups thought to promote inflammation, such as dairy, sugars and simple carbohydrates, may be beneficial. A Mediterranean diet is often recommended for its anti-inflammatory and antioxidative properties. Vitamin D deficiency may play some role in the development of the metabolic syndrome, and treatment of any such deficiency is indicated. However, a 2015 systematic review found no evidence that vitamin D supplementation reduced or mitigated metabolic and hormonal dysregulations in PCOS. As of 2012, interventions using dietary supplements to correct metabolic deficiencies in people with PCOS had been tested in small, uncontrolled and nonrandomized clinical trials; the resulting data are insufficient to recommend their use. Medications Medications for PCOS include oral contraceptives and metformin. The oral contraceptives increase sex hormone binding globulin production, which increases binding of free testosterone. This reduces the symptoms of hirsutism caused by high testosterone and helps restore normal menstrual periods. Anti-androgens such as finasteride, flutamide, spironolactone, and bicalutamide do not show advantages over oral contraceptives, but could be an option for people who do not tolerate them. Finasteride is the only FDA-approved oral medication for the treatment of androgenic alopecia. Metformin is a medication commonly used in type 2 diabetes mellitus to reduce insulin resistance, and is used off label (in the UK, US, AU and EU) to treat insulin resistance seen in PCOS. In many cases, metformin also supports ovarian function and return to normal ovulation. A newer insulin resistance medication class, the thiazolidinediones (glitazones), has shown equivalent efficacy to metformin, but metformin has a more favorable side effect profile. The United Kingdom's National Institute for Health and Clinical Excellence recommended in 2004 that women with PCOS and a body mass index above 25 be given metformin when other therapy has failed to produce results. Metformin may not be effective in every type of PCOS, and therefore there is some disagreement about whether it should be used as a general first-line therapy. In addition, metformin is associated with several unpleasant side effects, including abdominal pain, metallic taste in the mouth, diarrhoea and vomiting. Metformin is thought to be safe to use during pregnancy (pregnancy category B in the US). A review in 2014 concluded that the use of metformin does not increase the risk of major birth defects in women treated with metformin during the first trimester. Liraglutide may reduce weight and waist circumference in people with PCOS more than other medications. The use of statins in the management of underlying metabolic syndrome remains unclear. It can be difficult to become pregnant with PCOS because it causes irregular ovulation. Medications to induce fertility when trying to conceive include the ovulation inducer clomiphene or pulsatile leuprorelin. Evidence from randomised controlled trials suggests that in terms of live birth, metformin may be better than placebo, and metformin plus clomiphene may be better than clomiphene alone, but that in both cases women may be more likely to experience gastrointestinal side effects with metformin. 
Infertility Some individuals with PCOS may have difficulty getting pregnant because the body does not produce the hormones necessary for regular ovulation. PCOS might also increase the risk of miscarriage or premature delivery. However, it is possible to have a normal pregnancy. For women who do have difficulty conceiving, anovulation or infrequent ovulation is a common cause, and PCOS is the main cause of anovulatory infertility. Other factors include changed levels of gonadotropins, hyperandrogenemia, and hyperinsulinemia. Like women without PCOS, women with PCOS who are ovulating may be infertile due to other causes, such as tubal blockages due to a history of sexually transmitted diseases. For overweight anovulatory women with PCOS, weight loss and diet adjustments, especially to reduce the intake of simple carbohydrates, are associated with resumption of natural ovulation. Digital health interventions have been shown to be particularly effective in providing combined therapy to manage PCOS through both lifestyle changes and medication. Femara (letrozole) is an alternative medication that raises FSH levels and promotes the development of follicles. For those women who, after weight loss, are still anovulatory, or for anovulatory lean women, ovulation induction using the medications letrozole or clomiphene citrate is the principal treatment used to promote ovulation. Clomiphene can cause mood swings and abdominal cramping for some. Previously, the anti-diabetes medication metformin was a recommended treatment for anovulation, but it appears less effective than letrozole or clomiphene. For women not responsive to letrozole or clomiphene and diet and lifestyle modification, there are options available including assisted reproductive technology procedures such as controlled ovarian hyperstimulation with follicle-stimulating hormone (FSH) injections followed by in vitro fertilisation (IVF). Though surgery is not commonly performed, the polycystic ovaries can be treated with a laparoscopic procedure called "ovarian drilling" (puncture of 4–10 small follicles with electrocautery, laser, or biopsy needles), which often results in either resumption of spontaneous ovulations or ovulations after adjuvant treatment with clomiphene or FSH. (Ovarian wedge resection is no longer used as much due to complications such as adhesions and the presence of frequently effective medications.) There are, however, concerns about the long-term effects of ovarian drilling on ovarian function. In a small UK randomized trial, bariatric surgery led to more spontaneous ovulations than behavioral interventions combined with medical therapy in adult women with PCOS, suggesting that surgery could improve the chances of spontaneous fertility. Mental health Women with PCOS are far more likely to have depression than women without. Symptoms of depression might be heightened by certain physiological manifestations of the disease, such as hirsutism or obesity, that can lead to low self-esteem or poor body image. Researchers suggest that mental health screenings be performed in tandem with PCOS assessment in order to identify these complications early and treat them accordingly. PCOS is associated with other mental health-related conditions besides depression, such as anxiety, bipolar disorder, and obsessive–compulsive disorder. Additionally, it has been found to significantly increase the risk of eating disorders. Screening for these mental health conditions will also be helpful in the treatment of PCOS. 
Lifestyle changes for people with PCOS have proven difficult to sustain due to lack of intrinsic motivation, altered risk perception or other PCOS-related barriers. However, self-management techniques and behavior change can be taught in a multidisciplinary approach with the goal of supporting women with PCOS in managing their symptoms. Hirsutism and acne When appropriate (e.g., in women of child-bearing age who require contraception), a standard contraceptive pill is frequently effective in reducing hirsutism. Progestogens such as norgestrel and levonorgestrel should be avoided due to their androgenic effects. Metformin combined with an oral contraceptive may be more effective than either metformin or the oral contraceptive on its own. In the case of taking medication for acne, Kelly Morrow-Baez, PhD, notes in her book Thriving with PCOS that it "takes time for medications to adjust hormone levels, and once those hormone levels are adjusted, it takes more time still for pores to be unclogged of overproduced oil and for any bacterial infections under the skin to clear up before you will see discernible results." (p. 138) Other medications with anti-androgen effects include flutamide and spironolactone, which can improve hirsutism. Metformin can reduce hirsutism, perhaps by reducing insulin resistance, and is often used if there are other features such as insulin resistance, diabetes, or obesity that are likely to respond to metformin. Eflornithine (Vaniqa) is a medication that is applied to the skin in cream form, and acts directly on the hair follicles to inhibit hair growth. It is usually applied to the face. 5-alpha reductase inhibitors (such as finasteride and dutasteride) may also be used; they work by blocking the conversion of testosterone to dihydrotestosterone (the latter of which is responsible for most hair growth alterations and androgenic acne). Although these agents have shown significant efficacy in clinical trials (for oral contraceptives, in 60–100% of individuals), the reduction in hair growth may not be enough to eliminate the social embarrassment of hirsutism or the inconvenience of plucking or shaving. Individuals vary in their response to different therapies. It is usually worth trying other medications if one does not work, but medications do not work well for all individuals. Menstrual irregularity If fertility is not the primary aim, then menstruation can usually be regulated with a contraceptive pill. The purpose of regulating menstruation, in essence, is for the patient's convenience, and perhaps their sense of well-being; there is no medical requirement for regular periods, as long as they occur sufficiently often. If a regular menstrual cycle is not desired, then therapy for an irregular cycle is not necessarily required. Most experts say that, if a menstrual bleed occurs at least every three months, then the endometrium (womb lining) is being shed sufficiently often to prevent an increased risk of endometrial abnormalities or cancer. If menstruation occurs less often or not at all, some form of progestogen replacement is recommended. Alternative medicine A 2017 review concluded that while both myo-inositol and D-chiro-inositol may regulate menstrual cycles and improve ovulation, there is a lack of evidence regarding effects on the probability of pregnancy. Reviews from 2012 and 2017 found that myo-inositol supplementation appears to be effective in improving several of the hormonal disturbances of PCOS. 
Myo-inositol reduces the amount of gonadotropins and the length of controlled ovarian hyperstimulation in women undergoing in vitro fertilization. A 2011 review found not enough evidence to conclude any beneficial effect from D-chiro-inositol. There is insufficient evidence to support the use of acupuncture; current studies are inconclusive, and there is a need for additional randomized controlled trials. Epidemiology PCOS is the most common endocrine disorder among women between the ages of 18 and 44. It affects approximately 2% to 20% of this age group depending on how it is defined. When someone is infertile due to lack of ovulation, PCOS is the most common cause, and this can guide diagnosis. The earliest known description of what is now recognized as PCOS dates from 1721 in Italy. The prevalence of PCOS depends on the choice of diagnostic criteria. The World Health Organization estimates that it affects 116 million women worldwide as of 2010 (3.4% of women). Another estimate indicates that 7% of women of reproductive age are affected. Another study using the Rotterdam criteria found that about 18% of women had PCOS, and that 70% of them were previously undiagnosed. Prevalence estimates also vary across countries, partly due to a lack of large-scale studies; India, for example, has a purported rate of 1 in 5 women having PCOS. There are few studies that have investigated the racial differences in cardiometabolic factors in women with PCOS. There are also limited data on the racial differences in the risk of metabolic syndrome and cardiovascular disease in adolescents and young adults with PCOS. The first study to comprehensively examine racial differences discovered notable racial differences in risk factors for cardiovascular disease. African American women were found to be significantly more obese, with a significantly higher prevalence of metabolic syndrome compared to white adult women with PCOS. Further research into racial differences among women with PCOS is important to ensure that every woman affected by PCOS has the resources available for management. Ultrasonographic findings of polycystic ovaries are found in 8–25% of women unaffected by the syndrome. 14% of women on oral contraceptives are found to have polycystic ovaries. Ovarian cysts are also a common side effect of levonorgestrel-releasing intrauterine devices (IUDs). History The condition was first described in 1935 by American gynecologists Irving F. Stein Sr. and Michael L. Leventhal, from whom its original name of Stein–Leventhal syndrome is taken. Stein and Leventhal first described PCOS as an endocrine disorder in the United States, and since then, it has become recognized as one of the most common causes of oligo-ovulatory infertility among women. The earliest published description of a person with what is now recognized as PCOS was in 1721 in Italy. Cyst-related changes to the ovaries were described in 1844. Early Descriptions of PCOS Historical descriptions of PCOS symptoms date back to ancient Greece, where Hippocrates described women with "thick, oily skin and absence of menstruation." Etymology Other names for this syndrome include polycystic ovarian syndrome, polycystic ovary disease, functional ovarian hyperandrogenism, ovarian hyperthecosis, sclerocystic ovary syndrome, and Stein–Leventhal syndrome. 
The eponymous last option is the original name; it is now used, if at all, only for the subset of women with all the symptoms of amenorrhea with infertility, hirsutism, and enlarged polycystic ovaries. The most common names for this disease derive from a typical finding on medical images, called a polycystic ovary. A polycystic ovary has an abnormally large number of developing eggs visible near its surface, looking like many small cysts. Society and culture In 2005, 4 million cases of PCOS were reported in the US, accounting for $4.36 billion in healthcare costs. In 2016, 0.1% of the National Institutes of Health's $32.3 billion research budget was spent on PCOS research. Among women aged between 14 and 44, PCOS is conservatively estimated to cost $4.37 billion per year. Compared with women in the general population, women with PCOS experience higher rates of depression and anxiety. International guidelines and Indian guidelines suggest psychosocial factors should be considered in women with PCOS, as well as screenings for depression and anxiety. Globally, this aspect has received increasing attention because it reflects the true impact of PCOS on patients' lives. Research shows that PCOS adversely impacts a patient's quality of life. Public figures A number of celebrities and public figures have spoken about their experiences with PCOS, including: Victoria Beckham Maci Bookout Frankie Bridge Harnaam Kaur Jaime King Chrisette Michele Lea Michele Keke Palmer Sasha Pieterse Florence Pugh Daisy Ridley Romee Strijd Lee Tilghman
Biology and health sciences
Specific diseases
Health
54342
https://en.wikipedia.org/wiki/Cellular%20automaton
Cellular automaton
A cellular automaton (pl. cellular automata, abbrev. CA) is a discrete model of computation studied in automata theory. Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays. Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modeling. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off (in contrast to a coupled map lattice). The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighborhood is defined relative to the specified cell. An initial state (time t = 0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1), according to some fixed rule (generally, a mathematical function) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood. Typically, the rule for updating the state of cells is the same for each cell and does not change over time, and is applied to the whole grid simultaneously, though exceptions are known, such as the stochastic cellular automaton and asynchronous cellular automaton. The concept was originally discovered in the 1940s by Stanislaw Ulam and John von Neumann while they were contemporaries at Los Alamos National Laboratory. While studied by some throughout the 1950s and 1960s, it was not until the 1970s and Conway's Game of Life, a two-dimensional cellular automaton, that interest in the subject expanded beyond academia. In the 1980s, Stephen Wolfram engaged in a systematic study of one-dimensional cellular automata, or what he calls elementary cellular automata; his research assistant Matthew Cook showed that one of these rules is Turing-complete. The primary classifications of cellular automata, as outlined by Wolfram, are numbered one to four. They are, in order, automata in which patterns generally stabilize into homogeneity, automata in which patterns evolve into mostly stable or oscillating structures, automata in which patterns evolve in a seemingly chaotic fashion, and automata in which patterns become extremely complex and may last for a long time, with stable local structures. This last class is thought to be computationally universal, or capable of simulating a Turing machine. Special types of cellular automata are reversible, where only a single configuration leads directly to a subsequent one, and totalistic, in which the future value of individual cells only depends on the total value of a group of neighboring cells. Cellular automata can simulate a variety of real-world systems, including biological and chemical ones. Overview One way to simulate a two-dimensional cellular automaton is with an infinite sheet of graph paper along with a set of rules for the cells to follow. Each square is called a "cell" and each cell has two possible states, black and white. The neighborhood of a cell is the nearby, usually adjacent, cells. The two most common types of neighborhoods are the von Neumann neighborhood and the Moore neighborhood. The former, named after the founding cellular automaton theorist, consists of the four orthogonally adjacent cells. The latter includes the von Neumann neighborhood as well as the four diagonally adjacent cells. For such a cell and its Moore neighborhood, there are 512 (= 2^9) possible patterns. 
For each of the 512 possible patterns, the rule table would state whether the center cell will be black or white on the next time interval. Conway's Game of Life is a popular version of this model. Another common neighborhood type is the extended von Neumann neighborhood, which includes the two closest cells in each orthogonal direction, for a total of eight. The general equation for the total number of automata possible is k^(k^s), where k is the number of possible states for a cell, and s is the number of neighboring cells (including the cell to be calculated itself) used to determine the cell's next state. Thus, in the two-dimensional system with a Moore neighborhood, the total number of automata possible would be 2^(2^9) = 2^512, or approximately 1.34 × 10^154. It is usually assumed that every cell in the universe starts in the same state, except for a finite number of cells in other states; the assignment of state values is called a configuration. More generally, it is sometimes assumed that the universe starts out covered with a periodic pattern, and only a finite number of cells violate that pattern. The latter assumption is common in one-dimensional cellular automata. Cellular automata are often simulated on a finite grid rather than an infinite one. In two dimensions, the universe would be a rectangle instead of an infinite plane. The obvious problem with finite grids is how to handle the cells on the edges. How they are handled will affect the values of all the cells in the grid. One possible method is to allow the values in those cells to remain constant. Another method is to define neighborhoods differently for these cells. One could say that they have fewer neighbors, but then one would also have to define new rules for the cells located on the edges. These cells are usually handled with periodic boundary conditions resulting in a toroidal arrangement: when one goes off the top, one comes in at the corresponding position on the bottom, and when one goes off the left, one comes in on the right. (This essentially simulates an infinite periodic tiling, and in the field of partial differential equations is sometimes referred to as periodic boundary conditions.) This can be visualized as taping the left and right edges of the rectangle to form a tube, then taping the top and bottom edges of the tube to form a torus (doughnut shape). Universes of other dimensions are handled similarly. This solves boundary problems with neighborhoods, but another advantage is that it is easily programmable using modular arithmetic functions. For example, in a 1-dimensional cellular automaton like the examples below, the neighborhood of a cell x_i^t is {x_{i−1}^{t−1}, x_i^{t−1}, x_{i+1}^{t−1}}, where t is the time step (vertical), and i is the index (horizontal) in one generation. History Stanislaw Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model. At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems. Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model. As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and of the great cost in providing the robot with a "sea of parts" from which to build its replicant. Von Neumann wrote a paper entitled "The general and logical theory of automata" for the Hixon Symposium in 1948. 
Ulam was the one who suggested using a discrete system for creating a reductionist model of self-replication. Nils Aall Barricelli performed many of the earliest explorations of these models of artificial life. Ulam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbors' behaviors. Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a cellular automaton with a small neighborhood (only those cells that touch are neighbors; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000 cell configuration that could do so. This design is known as the tessellation model, and is called a von Neumann universal constructor. Also in the 1940s, Norbert Wiener and Arturo Rosenblueth developed a model of excitable media with some of the characteristics of a cellular automaton. Their specific motivation was the mathematical description of impulse conduction in cardiac systems. However their model is not a cellular automaton because the medium in which signals propagate is continuous, and wave fronts are curves. A true cellular automaton model of excitable media was developed and studied by J. M. Greenberg and S. P. Hastings in 1978; see Greenberg-Hastings cellular automaton. The original work of Wiener and Rosenblueth contains many insights and continues to be cited in modern research publications on cardiac arrhythmia and excitable systems. In the 1960s, cellular automata were studied as a particular type of dynamical system and the connection with the mathematical field of symbolic dynamics was established for the first time. In 1969, Gustav A. Hedlund compiled many results following this point of view in what is still considered as a seminal paper for the mathematical study of cellular automata. The most fundamental result is the characterization in the Curtis–Hedlund–Lyndon theorem of the set of global rules of cellular automata as the set of continuous endomorphisms of shift spaces. In 1969, German computer pioneer Konrad Zuse published his book Calculating Space, proposing that the physical laws of the universe are discrete by nature, and that the entire universe is the output of a deterministic computation on a single cellular automaton; "Zuse's Theory" became the foundation of the field of study called digital physics. Also in 1969 computer scientist Alvy Ray Smith completed a Stanford PhD dissertation on Cellular Automata Theory, the first mathematical treatment of CA as a general class of computers. Many papers came from this dissertation: He showed the equivalence of neighborhoods of various shapes, how to reduce a Moore to a von Neumann neighborhood or how to reduce any neighborhood to a von Neumann neighborhood. He proved that two-dimensional CA are computation universal, introduced 1-dimensional CA, and showed that they too are computation universal, even with simple neighborhoods. 
He showed how to subsume the complex von Neumann proof of construction universality (and hence self-reproducing machines) into a consequence of computation universality in a 1-dimensional CA. Intended as the introduction to the German edition of von Neumann's book on CA, he wrote a survey of the field with dozens of references to papers, by many authors in many countries over a decade or so of work, often overlooked by modern CA researchers. In the 1970s a two-state, two-dimensional cellular automaton named Game of Life became widely known, particularly among the early computing community. Invented by John Conway and popularized by Martin Gardner in a Scientific American article, its rules are as follows: Any live cell with fewer than two live neighbours dies, as if caused by underpopulation. Any live cell with two or three live neighbours lives on to the next generation. Any live cell with more than three live neighbours dies, as if by overpopulation. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction. Despite its simplicity, the system achieves an impressive diversity of behavior, fluctuating between apparent randomness and order. One of the most apparent features of the Game of Life is the frequent occurrence of gliders, arrangements of cells that essentially move themselves across the grid. It is possible to arrange the automaton so that the gliders interact to perform computations, and after much effort it has been shown that the Game of Life can emulate a universal Turing machine. It was viewed as a largely recreational topic, and little follow-up work was done outside of investigating the particularities of the Game of Life and a few related rules in the early 1970s. Stephen Wolfram independently began working on cellular automata in mid-1981 after considering how complex patterns seemed formed in nature in violation of the second law of thermodynamics. His investigations were initially spurred by a desire to model systems such as the neural networks found in brains. He published his first paper in Reviews of Modern Physics investigating elementary cellular automata (Rule 30 in particular) in June 1983. The unexpected complexity of the behavior of these simple rules led Wolfram to suspect that complexity in nature may be due to similar mechanisms. His investigations, however, led him to realize that cellular automata were poor at modelling neural networks. Additionally, during this period Wolfram formulated the concepts of intrinsic randomness and computational irreducibility, and suggested that rule 110 may be universal—a fact proved later by Wolfram's research assistant Matthew Cook in the 1990s. Classification Wolfram, in A New Kind of Science and several papers dating from the mid-1980s, defined four classes into which cellular automata and several other simple computational models can be divided depending on their behavior. While earlier studies in cellular automata tended to try to identify types of patterns for specific rules, Wolfram's classification was the first attempt to classify the rules themselves. In order of complexity the classes are: Class 1: Nearly all initial patterns evolve quickly into a stable, homogeneous state. Any randomness in the initial pattern disappears. Class 2: Nearly all initial patterns evolve quickly into stable or oscillating structures. Some of the randomness in the initial pattern may filter out, but some remains. Local changes to the initial pattern tend to remain local. 
Class 3: Nearly all initial patterns evolve in a pseudo-random or chaotic manner. Any stable structures that appear are quickly destroyed by the surrounding noise. Local changes to the initial pattern tend to spread indefinitely. Class 4: Nearly all initial patterns evolve into structures that interact in complex and interesting ways, with the formation of local structures that are able to survive for long periods of time. Class 2 type stable or oscillating structures may be the eventual outcome, but the number of steps required to reach this state may be very large, even when the initial pattern is relatively simple. Local changes to the initial pattern may spread indefinitely. Wolfram has conjectured that many class 4 cellular automata, if not all, are capable of universal computation. This has been proven for Rule 110 and Conway's Game of Life. These definitions are qualitative in nature and there is some room for interpretation. According to Wolfram, "...with almost any general classification scheme there are inevitably cases which get assigned to one class by one definition and another class by another definition. And so it is with cellular automata: there are occasionally rules...that show some features of one class and some of another." Wolfram's classification has been empirically matched to a clustering of the compressed lengths of the outputs of cellular automata. There have been several attempts to classify cellular automata in formally rigorous classes, inspired by Wolfram's classification. For instance, Culik and Yu proposed three well-defined classes (and a fourth one for the automata not matching any of these), which are sometimes called Culik–Yu classes; membership in these proved undecidable. Wolfram's class 2 can be partitioned into two subgroups of stable (fixed-point) and oscillating (periodic) rules. The idea that there are 4 classes of dynamical system came originally from Nobel-prize winning chemist Ilya Prigogine who identified these 4 classes of thermodynamical systems: (1) systems in thermodynamic equilibrium, (2) spatially/temporally uniform systems, (3) chaotic systems, and (4) complex far-from-equilibrium systems with dissipative structures (see figure 1 in the 1974 paper of Nicolis, Prigogine's student). Reversible A cellular automaton is reversible if, for every current configuration of the cellular automaton, there is exactly one past configuration (preimage). If one thinks of a cellular automaton as a function mapping configurations to configurations, reversibility implies that this function is bijective. If a cellular automaton is reversible, its time-reversed behavior can also be described as a cellular automaton; this fact is a consequence of the Curtis–Hedlund–Lyndon theorem, a topological characterization of cellular automata. For cellular automata in which not every configuration has a preimage, the configurations without preimages are called Garden of Eden patterns. For one-dimensional cellular automata there are known algorithms for deciding whether a rule is reversible or irreversible. However, for cellular automata of two or more dimensions reversibility is undecidable; that is, there is no algorithm that takes as input an automaton rule and is guaranteed to determine correctly whether the automaton is reversible. The proof by Jarkko Kari is related to the tiling problem by Wang tiles. Reversible cellular automata are often used to simulate such physical phenomena as gas and fluid dynamics, since they obey the laws of thermodynamics. 
Such cellular automata have rules specially constructed to be reversible. Such systems have been studied by Tommaso Toffoli, Norman Margolus and others. Several techniques can be used to explicitly construct reversible cellular automata with known inverses. Two common ones are the second-order cellular automaton and the block cellular automaton, both of which involve modifying the definition of a cellular automaton in some way. Although such automata do not strictly satisfy the definition given above, it can be shown that they can be emulated by conventional cellular automata with sufficiently large neighborhoods and numbers of states, and can therefore be considered a subset of conventional cellular automata. Conversely, it has been shown that every reversible cellular automaton can be emulated by a block cellular automaton. Totalistic A special class of cellular automata are totalistic cellular automata. The state of each cell in a totalistic cellular automaton is represented by a number (usually an integer value drawn from a finite set), and the value of a cell at time t depends only on the sum of the values of the cells in its neighborhood (possibly including the cell itself) at time t − 1. If the state of the cell at time t depends on both its own state and the total of its neighbors at time t − 1 then the cellular automaton is properly called outer totalistic. Conway's Game of Life is an example of an outer totalistic cellular automaton with cell values 0 and 1; outer totalistic cellular automata with the same Moore neighborhood structure as Life are sometimes called life-like cellular automata. Related automata There are many possible generalizations of the cellular automaton concept. One way is by using something other than a rectangular (cubic, etc.) grid. For example, if a plane is tiled with regular hexagons, those hexagons could be used as cells. In many cases the resulting cellular automata are equivalent to those with rectangular grids with specially designed neighborhoods and rules. Another variation would be to make the grid itself irregular, such as with Penrose tiles. Also, rules can be probabilistic rather than deterministic. Such cellular automata are called probabilistic cellular automata. A probabilistic rule gives, for each pattern at time t, the probabilities that the central cell will transition to each possible state at time t + 1. Sometimes a simpler rule is used; for example: "The rule is the Game of Life, but on each time step there is a 0.001% probability that each cell will transition to the opposite color." The neighborhood or rules could change over time or space. For example, initially the new state of a cell could be determined by the horizontally adjacent cells, but for the next generation the vertical cells would be used. In cellular automata, the new state of a cell is not affected by the new state of other cells. This could be changed so that, for instance, a 2 by 2 block of cells can be determined by itself and the cells adjacent to itself. There are continuous automata. These are like totalistic cellular automata, but instead of the rule and states being discrete (e.g. a table, using states {0,1,2}), continuous functions are used, and the states become continuous (usually values in [0,1]). The state of a location is a finite number of real numbers. Certain cellular automata can yield diffusion in liquid patterns in this way. Continuous spatial automata have a continuum of locations. The state of a location is a finite number of real numbers. 
Time is also continuous, and the state evolves according to differential equations. One important example is reaction–diffusion textures, differential equations proposed by Alan Turing to explain how chemical reactions could create the stripes on zebras and spots on leopards. When these are approximated by cellular automata, they often yield similar patterns. MacLennan considers continuous spatial automata as a model of computation. There are known examples of continuous spatial automata, which exhibit propagating phenomena analogous to gliders in the Game of Life. Graph rewriting automata are extensions of cellular automata based on graph rewriting systems. Elementary cellular automata The simplest nontrivial cellular automaton would be one-dimensional, with two possible states per cell, and a cell's neighbors defined as the adjacent cells on either side of it. A cell and its two neighbors form a neighborhood of 3 cells, so there are 2^3 = 8 possible patterns for a neighborhood. A rule consists of deciding, for each pattern, whether the cell will be a 1 or a 0 in the next generation. There are then 2^8 = 256 possible rules. These 256 cellular automata are generally referred to by their Wolfram code, a standard naming convention invented by Wolfram that gives each rule a number from 0 to 255. A number of papers have analyzed and compared the distinct cases among the 256 cellular automata (many are trivially isomorphic). The rule 30, rule 90, rule 110, and rule 184 cellular automata are particularly interesting. The images below show the history of rules 30 and 110 when the starting configuration consists of a 1 (at the top of each image) surrounded by 0s. Each row of pixels represents a generation in the history of the automaton, with t=0 being the top row. Each pixel is colored white for 0 and black for 1. Rule 30 exhibits class 3 behavior, meaning even simple input patterns such as that shown lead to chaotic, seemingly random histories. Rule 110, like the Game of Life, exhibits what Wolfram calls class 4 behavior, which is neither completely random nor completely repetitive. Localized structures appear and interact in various complicated-looking ways. In the course of the development of A New Kind of Science, as a research assistant to Wolfram in 1994, Matthew Cook proved that some of these structures were rich enough to support universality. This result is interesting because rule 110 is an extremely simple one-dimensional system, and difficult to engineer to perform specific behavior. This result therefore provides significant support for Wolfram's view that class 4 systems are inherently likely to be universal. Cook presented his proof at a Santa Fe Institute conference on Cellular Automata in 1998, but Wolfram blocked the proof from being included in the conference proceedings, as Wolfram did not want the proof announced before the publication of A New Kind of Science. In 2004, Cook's proof was finally published in Wolfram's journal Complex Systems (Vol. 15, No. 1), over ten years after Cook came up with it. Rule 110 has been the basis for some of the smallest universal Turing machines. Rule space An elementary cellular automaton rule is specified by 8 bits, and all elementary cellular automaton rules can be considered to sit on the vertices of the 8-dimensional unit hypercube. This unit hypercube is the cellular automaton rule space. 
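To make the 8-bit rule table and Wolfram numbering concrete, here is a minimal sketch (an illustrative implementation, not drawn from any cited source) that decodes a rule number such as 30 and iterates a one-dimensional automaton, with the periodic boundary handled by the modular indexing described in the overview above.

```python
def eca_step(cells, rule_number):
    """Apply one step of an elementary cellular automaton.

    The binary expansion of the rule number gives the new state for each of
    the 2^3 = 8 neighborhood patterns, indexed by the pattern's value."""
    table = [(rule_number >> p) & 1 for p in range(8)]
    n = len(cells)
    return [
        table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)  # modular indexing gives periodic (toroidal) boundaries
    ]

# Rule 30 starting from a single 1 in the middle of a 31-cell row
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = eca_step(row, 30)
```

Substituting 110 for 30 reproduces the class 4 behavior discussed above; the same decoding applies to any of the 256 elementary rules.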
For next-nearest-neighbor cellular automata, a rule is specified by 2^5 = 32 bits, and the cellular automaton rule space is a 32-dimensional unit hypercube. A distance between two rules can be defined by the number of steps required to move from one vertex, which represents the first rule, to another vertex, representing another rule, along the edges of the hypercube. This rule-to-rule distance is also called the Hamming distance. Cellular automaton rule space allows one to ask whether rules with similar dynamical behavior are "close" to each other. Graphically drawing a high-dimensional hypercube on the 2-dimensional plane remains a difficult task, and one crude locator of a rule in the hypercube is the number of 1 bits in the 8-bit string for elementary rules (or the 32-bit string for the next-nearest-neighbor rules). Drawing the rules in different Wolfram classes in these slices of the rule space shows that class 1 rules tend to have a lower number of 1 bits, and are thus located in one region of the space, whereas class 3 rules tend to have a higher proportion (around 50%) of 1 bits. For larger cellular automaton rule spaces, it has been shown that class 4 rules are located between the class 1 and class 3 rules. This observation is the foundation for the phrase edge of chaos, and is reminiscent of the phase transition in thermodynamics. Applications Biology Several biological processes occur in ways that can be simulated by cellular automata. Some examples of biological phenomena modeled by cellular automata with a simple state space are: Patterns of some seashells, like the ones in the genera Conus and Cymbiola, are generated by natural cellular automata. The pigment cells reside in a narrow band along the shell's lip. Each cell secretes pigments according to the activating and inhibiting activity of its neighbor pigment cells, obeying a natural version of a mathematical rule. The cell band leaves the colored pattern on the shell as it grows slowly. For example, the widespread species Conus textile bears a pattern resembling Wolfram's rule 30 cellular automaton. Plants regulate their intake and loss of gases via a cellular automaton mechanism. Each stoma on the leaf acts as a cell. Moving wave patterns on the skin of cephalopods can be simulated with a two-state, two-dimensional cellular automaton, each state corresponding to either an expanded or retracted chromatophore. Threshold automata have been invented to simulate neurons, and complex behaviors such as recognition and learning can be simulated. Fibroblasts bear similarities to cellular automata, as each fibroblast only interacts with its neighbors. Additionally, biological phenomena which require explicit modeling of the agents' velocities (for example, those involved in collective cell migration) may be modeled by cellular automata with a more complex state space and rules, such as biological lattice-gas cellular automata. These include phenomena of great medical importance, such as: Characterization of different modes of metastatic invasion. The role of heterogeneity in the development of aggressive carcinomas. Phenotypic switching during tumor proliferation. Chemistry The Belousov–Zhabotinsky reaction is a spatio-temporal chemical oscillator that can be simulated by means of a cellular automaton. In the 1950s A. M. Zhabotinsky (extending the work of B. P. 
Belousov) discovered that when malonic acid, acidified bromate, and a ceric salt were mixed together into a thin, homogeneous layer and left undisturbed, fascinating geometric patterns such as concentric circles and spirals propagated across the medium. In the "Computer Recreations" section of the August 1988 issue of Scientific American, A. K. Dewdney discussed a cellular automaton developed by Martin Gerhardt and Heike Schuster of the University of Bielefeld (Germany). This automaton produces wave patterns that resemble those in the Belousov–Zhabotinsky reaction. Physics Probabilistic cellular automata are used in statistical and condensed matter physics to study phenomena like fluid dynamics and phase transitions. The Ising model is a prototypical example, in which each cell can be in either of two states called "up" and "down", making an idealized representation of a magnet. By adjusting the parameters of the model, the proportion of cells in the same state can be varied, in ways that help explicate how ferromagnets become demagnetized when heated. Moreover, results from studying the demagnetization phase transition can be transferred to other phase transitions, like the evaporation of a liquid into a gas; this convenient cross-applicability is known as universality. The phase transition in the two-dimensional Ising model and other systems in its universality class has been of particular interest, as it requires conformal field theory to understand in depth. Other cellular automata that have been of significance in physics include lattice gas automata, which simulate fluid flows. Computer science, coding, and communication Cellular automaton processors are physical implementations of CA concepts, which can process information computationally. Processing elements are arranged in a regular grid of identical cells. The grid is usually a square tiling, or tessellation, of two or three dimensions; other tilings are possible, but not yet used. Cell states are determined only by interactions with adjacent neighbor cells; no means exists to communicate directly with cells farther away. One such cellular automaton processor array configuration is the systolic array. Cell interaction can be via electric charge, magnetism, vibration (phonons at quantum scales), or any other physically useful means. This can be done in several ways so that no wires are needed between any elements. This is very unlike the processors used in most computers today (von Neumann designs), which are divided into sections with elements that can communicate with distant elements over wires. Rule 30 was originally suggested as a possible stream cipher for use in cryptography, and two-dimensional cellular automata can be used for constructing pseudorandom number generators (a toy sketch of a rule 30 bit generator appears after the list of specific rules below). Cellular automata have been proposed for public-key cryptography; the one-way function is the evolution of a finite CA whose inverse is believed to be hard to find. Given the rule, anyone can easily calculate future states, but it appears to be very difficult to calculate previous states. Cellular automata have also been applied to the design of error-correcting codes. Other problems that can be solved with cellular automata include: Firing squad synchronization problem Majority problem Generative art and music Cellular automata have been used in generative music and evolutionary music composition and procedural terrain generation in video games.
Maze generation Specific rules Specific cellular automata rules include: Brian's Brain Codd's cellular automaton CoDi Conway's game of life Day and Night Langton's ant Langton's loops Lenia Nobili cellular automata Rule 90 Rule 184 Seeds Turmite Von Neumann cellular automaton Wireworld
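The following Python sketch illustrates the pseudorandom-generation idea mentioned in the cryptography paragraph above: it reads off the center column of rule 30 as a bit stream. It is a toy demonstration only, not a secure cipher, and the parameter names and ring width are arbitrary choices made for this example.

```python
def rule30_bits(n_bits, width=257):
    """Toy pseudorandom bit stream: the center column of rule 30.

    Starts from a single 1 cell on a ring of `width` cells and yields the
    center cell's state at each step.  Illustrative only; real cryptographic
    generators need far more care than this sketch.
    """
    row = [0] * width
    row[width // 2] = 1
    for _ in range(n_bits):
        yield row[width // 2]
        # Rule 30 in Boolean form: new cell = left XOR (center OR right).
        row = [
            row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
            for i in range(width)
        ]

if __name__ == "__main__":
    bits = list(rule30_bits(64))
    print("".join(str(b) for b in bits))
    # Pack the bits into an integer, e.g. to use as a 64-bit pseudorandom value.
    print(hex(sum(b << i for i, b in enumerate(bits))))
```

Forward evolution like this is easy; recovering earlier states from the output is what is believed to be hard, which is the asymmetry the proposed cryptographic applications rely on.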
Mathematics
Automata theory
null
54347
https://en.wikipedia.org/wiki/Complement%20%28set%20theory%29
Complement (set theory)
In set theory, the complement of a set A, often denoted by Aᶜ (or A′), is the set of elements not in A. When all elements in the universe, i.e. all elements under consideration, are considered to be members of a given set U, the absolute complement of A is the set of elements in U that are not in A. The relative complement of A with respect to a set B, also termed the set difference of B and A, written B \ A, is the set of elements in B that are not in A. Absolute complement Definition If A is a set, then the absolute complement of A (or simply the complement of A) is the set of elements not in A (within a larger set that is implicitly defined). In other words, let U be a set that contains all the elements under study; if there is no need to mention U, either because it has been previously specified or because it is obvious and unique, then the absolute complement of A is the relative complement of A in U: Aᶜ = U \ A. The absolute complement of A is usually denoted by Aᶜ. Other notations include A′, ∁_U A, and ∁A. Examples Assume that the universe is the set of integers. If A is the set of odd numbers, then the complement of A is the set of even numbers. If B is the set of multiples of 3, then the complement of B is the set of numbers congruent to 1 or 2 modulo 3 (or, in simpler terms, the integers that are not multiples of 3). Assume that the universe is the standard 52-card deck. If the set A is the suit of spades, then the complement of A is the union of the suits of clubs, diamonds, and hearts. If the set B is the union of the suits of clubs and diamonds, then the complement of B is the union of the suits of hearts and spades. When the universe is the universe of sets described in formalized set theory, the absolute complement of a set is generally not itself a set, but rather a proper class. For more info, see universal set. Properties Let A and B be two sets in a universe U. The following identities capture important properties of absolute complements: De Morgan's laws: (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ and (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ. Complement laws: A ∪ Aᶜ = U, A ∩ Aᶜ = ∅, ∅ᶜ = U, Uᶜ = ∅, and if A ⊆ B then Bᶜ ⊆ Aᶜ (this follows from the equivalence of a conditional with its contrapositive). Involution or double complement law: (Aᶜ)ᶜ = A. Relationships between relative and absolute complements: A \ B = A ∩ Bᶜ and (A \ B)ᶜ = Aᶜ ∪ B. Relationship with a set difference: Aᶜ \ Bᶜ = B \ A. The first two complement laws above show that if A is a non-empty, proper subset of U, then {A, Aᶜ} is a partition of U. Relative complement Definition If A and B are sets, then the relative complement of A in B, also termed the set difference of B and A, is the set of elements in B but not in A. The relative complement of A in B is denoted B \ A according to the ISO 31-11 standard. It is sometimes written B − A, but this notation is ambiguous, as in some contexts (for example, Minkowski set operations in functional analysis) it can be interpreted as the set of all elements b − a, where b is taken from B and a from A. Formally: B \ A = {x ∈ B : x ∉ A}. Examples If ℝ is the set of real numbers and ℚ is the set of rational numbers, then ℝ \ ℚ is the set of irrational numbers. Properties Let A, B, and C be three sets in a universe U. The following identities capture notable properties of relative complements: C \ (A ∩ B) = (C \ A) ∪ (C \ B) and C \ (A ∪ B) = (C \ A) ∩ (C \ B), with the important special case A ∩ B = A \ (A \ B) demonstrating that intersection can be expressed using only the relative complement operation. If A ⊆ B, then C \ B ⊆ C \ A. A \ B ⊆ C is equivalent to A ⊆ B ∪ C.
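Because all of the identities above involve only finite set operations, they can be checked mechanically on small examples. The following Python sketch does so with the built-in set type; the particular universe and sets chosen here are arbitrary illustrations, not part of the definitions.

```python
# Verify the complement identities above on a small finite universe.
U = set(range(10))          # universe (arbitrary choice for illustration)
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def complement(S, universe=U):
    """Absolute complement: elements of the universe not in S."""
    return universe - S      # set difference, i.e. relative complement in U

# De Morgan's laws
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Complement laws and involution (double complement)
assert A | complement(A) == U and A & complement(A) == set()
assert complement(complement(A)) == A

# Relative complement (set difference): elements of B not in A
assert B - A == {5, 6}
# Intersection expressed via relative complements: A ∩ B = A \ (A \ B)
assert A & B == A - (A - B)
print("All identities verified on this example.")
```

The built-in difference operator implements the relative complement, so the absolute complement is simply the relative complement in the chosen universe, exactly as in the definition above.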
Complementary relation A binary relation R is defined as a subset of a product of sets X × Y. The complementary relation R̄ is the set complement of R in X × Y; the complement of the relation R can thus be written R̄ = (X × Y) \ R. Here, R is often viewed as a logical matrix with rows representing the elements of X and columns the elements of Y. The truth of aRb corresponds to a 1 in row a, column b. Producing the complementary relation to R then corresponds to switching all 1s to 0s, and all 0s to 1s, for the logical matrix of the complement. Together with composition of relations and converse relations, complementary relations and the algebra of sets are the elementary operations of the calculus of relations. LaTeX notation In the LaTeX typesetting language, the command \setminus is usually used for rendering a set difference symbol, which is similar to a backslash symbol. When rendered, the \setminus command looks identical to \backslash, except that it has a little more space in front of and behind the slash, akin to the LaTeX sequence \mathbin{\backslash}. A variant \smallsetminus is available in the amssymb package, but this symbol is not included separately in Unicode. The symbol ∁ (as opposed to the letter C) is produced by \complement. (It corresponds to the Unicode character U+2201 COMPLEMENT.)
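A short Python sketch of the logical-matrix view described above follows; the sets X and Y and the relation R are made-up examples, and the code simply switches every 1 to a 0 and every 0 to a 1, then checks that the result agrees with the set-theoretic complement in X × Y.

```python
# Complement of a binary relation R ⊆ X × Y, viewed as a logical matrix.
X = ["a", "b"]                      # illustrative sets
Y = [1, 2, 3]
R = {("a", 1), ("b", 3)}            # an example relation

# Set-theoretic complement within the product X × Y
product = {(x, y) for x in X for y in Y}
R_complement = product - R

# Same operation as a logical matrix: switch every 1 to 0 and every 0 to 1
matrix = [[1 if (x, y) in R else 0 for y in Y] for x in X]
complement_matrix = [[1 - entry for entry in row] for row in matrix]

# The pairs marked 1 in the flipped matrix are exactly the complement relation.
assert {(X[i], Y[j])
        for i, row in enumerate(complement_matrix)
        for j, entry in enumerate(row) if entry} == R_complement
print(matrix, complement_matrix)
```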
Mathematics
Set theory
null
54404
https://en.wikipedia.org/wiki/Tropaeolum
Tropaeolum
Tropaeolum , commonly known as nasturtium (; literally "nose-twister" or "nose-tweaker"), is a genus of roughly 80 species of annual and perennial herbaceous flowering plants. It was named by Carl Linnaeus in his book Species Plantarum, and is the only genus in the family Tropaeolaceae. The nasturtiums received their common name because they produce an oil similar to that of watercress (Nasturtium officinale). The genus Tropaeolum, native to South and Central America, includes several very popular garden plants, the most common being T. majus, T. peregrinum and T. speciosum. One of the hardiest species is T. polyphyllum from Chile, the perennial roots of which can survive the winter underground at elevations of . Plants in this genus have showy, often intensely bright flowers and rounded, peltate (shield-shaped) leaves with the petiole in the centre. The flowers are bisexual and zygomorphic, with five petals, a superior three-carpelled ovary, and a funnel-shaped nectar spur at the back, formed by modification of one of the five sepals. History Tropaeolum was introduced into Spain by the Spanish botanist Nicolás Monardes, who described it in his Historia medicinal de las cosas que se traen de nuestras Indias Occidentales of 1569, translated into English as Ioyfull newes out of the newe founde worlde by John Frampton. The English herbalist John Gerard reports having received seeds of the plant from Europe in his 1597 book Herball, or Generall Historie of Plantes. Tropaeolum majus was named by the Swedish botanist Carl Linnaeus, who chose the genus name because the plant reminded him of an ancient custom: After victory in battle, the Romans erected a trophy pole (or tropaeum, from the Greek tropaion, source of English "trophy") on which the vanquished foe's armour and weapons were hung. The plant's round leaves reminded Linnaeus of shields and its flowers of blood-stained helmets. Nasturtiums were once commonly called "Indian cresses" because they were introduced from the Americas, known popularly then as the Indies, and used like cress as salad ingredients. In his herbal, John Gerard compared the flowers of the "Indian Cress" to those of the forking larkspur (Consolida regalis) of the buttercup family. He wrote: "Unto the backe part (of the flower) doth hange a taile or spurre, such as hath the Larkes heele, called in Latine Consolida regalis." J. R. R. Tolkien commented that an alternative anglicization of "nasturtium" was "nasturtian". Description Tropaeolum is a genus of dicotyledonous annual or perennial plants, often with somewhat succulent stems and sometimes tuberous roots. The alternate leaves are hairless, peltate, and entirely or palmately lobed. The petioles or leaf stalks are long and, in many species, can twine around other stems to provide support. The flowers are bisexual and showy, set singly on long stalks in the axils of the leaves. They have five sepals, the uppermost of which is elongated into a nectar spur. The five petals are clawed, with the lower three unlike the upper two. The eight stamens are in two whorls of unequal length, and the superior ovary has three segments and three stigmas on a single style. The fruit is naked and nut-like, with three single seed segments. Species in cultivation The most common flower in cultivation is a hybrid of T. majus, T. minus, and T. peltophorum. It is commonly known as the nasturtium (and occasionally anglicized as nasturtian). 
It is mostly grown from seed as a half-hardy annual, and both single and double varieties are available. It comes in various forms and colours, including cream, yellow, orange and red, solid in colour or striped and often with a dark blotch at the base of the petals. It is vigorous and easily grown and does well in sun. It thrives in poor soil and dry conditions, whereas rich soil produces much leafy growth and few flowers. Some varieties adopt a bush form while others scramble over and through other plants and are useful for planting in awkward spots or for covering fences and trellises. The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit: 'Alaska Series' 'Hermine Grashoff' 'Whirlybird Series' The blue nasturtium (Tropaeolum azureum) is a tender species from Chile which has violet-blue flowers with white eyes that can be as much as across. Tropaeolum brachyceras has yellow flowers with purplish markings on wiry, climbing stems. It is a half-hardy perennial from Chile and may remain dormant for several years before being sparked into growth by some unknown trigger factor. Tropaeolum hookerianum is a tuberous-rooted species from Chile. There are two subspecies, T. h. austropurpureum which has violet-purple flowers and T. h. pilosum with yellow flowers. The Canary creeper (Tropaeolum peregrinum) is a trailing and climbing half-hardy annual species with wiry stalks and palmately lobed leaves. The pale yellow, fringed flowers are borne on long stalks. It originated from Peru but may first have been cultivated in the Canary Islands before being introduced into Western Europe. Wreath nasturtium (Tropaeolum polyphyllum) is a prostrate plant originating from Argentina and Chile. It has silvery, deeply lobed leaves and a profusion of small, bright yellow flowers on long trailing stalks. After flowering, the plant dies back. It is a perennial with underground rhizomes which send up new shoots at intervals. It will survive for several years in a suitable sunny location with well-drained soil. It is a very hardy species; the tubers can grow at depths of enabling the plant to survive at altitudes of as much as 3,300 metres (10,000 ft) in the Andes. The flame flower (Tropaeolum speciosum) is well adapted to cool, moist climates and famously does well in Scotland. It sends up shoots that thread their way through hedges and shrubs and, when they emerge into the light, bear brilliant red flowers among small, five or six-lobed leaves. It is difficult to establish but is an attractive garden plant when it thrives. This plant has gained the Royal Horticultural Society's Award of Garden Merit. Three-coloured Indian cress (Tropaeolum tricolor) is another tuberous, climbing species grown for its attractive red, purple and yellow tubular flowers. It comes from Chile and Bolivia and is a reliable winter-growing species. Mashua (Tropaeolum tuberosum) is a perennial climbing plant from the Andes grown for its tuberous roots. It has been cultivated since ancient times, and depictions of it are found at archaeological sites pre-dating the Incas. It has leaves with five to seven lobes and small, long-spurred, red and yellow flowers. The tubers have an unpleasant smell when raw, which disappears on cooking. It is frost-hardy and produces crops of 30 tonnes per hectare at an elevation of above sea level. The cultivar T. tuberosum lineamaculatum 'Ken Aslet' has gained the Royal Horticultural Society's Award of Garden Merit. 
Species originating from the coastal areas and lower foothills make most of their growth in winter, whereas the true alpine species are summer growers. Tuberous Tropaeolum species are well known for occasionally remaining dormant for one or more years. The species with underground rhizomes and tubers can be propagated from these, while other species are best raised from seed. Many growers favour fresh seed, but dried seed is also often successful. Seed from the winter growing species should be sown in the autumn, while the summer growing species are best sown in the spring in well-drained compost and covered with of grit or sand. The containers should be kept at below until the seedlings appear in about a month, as too high a temperature inhibits germination. Uses Culinary All parts of Tropaeolum majus are edible. The flower has most often been consumed, making for an especially ornamental salad ingredient; it has a slightly peppery taste reminiscent of watercress and is also used in stir fry. The flowers contain about 130 mg vitamin C per , about the same amount as is contained in parsley. Moreover, they contain up to 45 mg of lutein per 100 g, which is the highest amount found in any edible plant. The unripe seed pods can be harvested and dropped into spiced vinegar to produce a condiment and garnish, sometimes used in place of capers. Mashua (T. tuberosum) produces an edible underground tuber that is a major food source in parts of the Andes. Herbal medicine T. majus has been used in herbal medicine for respiratory and urinary tract infections. In Germany, licensed physicians can prescribe the herbal antibiotic Angocin Anti-Infekt N, made from only nasturtium and horseradish root. Companion planting and biological pest control Nasturtiums are used as companion plants for biological pest control, repelling some pests, acting as a trap crop for others and attracting predatory insects. While companion planting is a widespread notion and often adopted by home gardeners, there is little but anecdotal evidence to support these claims. Taxonomy Tropaeolum was previously placed in the family Tropaeolaceae along with two other genera, Magallana Cav. and Trophaeastrum. The monotypic genus Magallana was characterised by having winged fruit, and the two species of Trophaeastrum lacked spurs. The genus Tropaeolum was diagnosed only by the absence of the characteristics of the other two genera. A molecular study undertaken in 2000 found Tropaeolum to be paraphyletic when the other two genera are segregated, so Magallana and Trophaeastrum were reduced to synonyms of Tropaeolum. Tropaeolaceae was thus rendered monogeneric, a family of only one genus. Species "The Plant List", a collaboration between the Missouri Botanical Garden and the Royal Botanic Gardens, Kew, includes the following accepted names of Tropaeolum species names. Some that are under review are here marked "U".
Biology and health sciences
Brassicales
null
54406
https://en.wikipedia.org/wiki/Monday
Monday
Monday is the day of the week that takes place between Sunday and Tuesday. According to the International Organization for Standardization's ISO 8601 standard, it is the first day of the week. Names The names of the day of the week were coined in the Roman era, in Greek and Latin, in the case of Monday as ἡμέρᾱ Σελήνης, diēs Lūnae "day of the Moon". Many languages use either terms directly derived from these names or loan translations based on them. The English noun Monday derived sometime before 1200 from monedæi, which itself developed from Old English (around 1000) mōnandæg and mōndæg (literally meaning "moon's day"), which has cognates in other Germanic languages, including Old Frisian mōnadeig, Middle Low German and Middle Dutch mānendag, mānendach (modern Dutch Maandag), Old High German mānetag (modern German Montag), and Old Norse mánadagr (Swedish and Norwegian nynorsk måndag, Icelandic mánudagur. Danish and Norwegian bokmål mandag). The Germanic term is a Germanic interpretation of Latin lunae dies ("day of the moon"). Japanese and Korean share the same ancient Chinese words '月曜日' (Hiragana:げつようび, translit. getsuyо̄bi, Hangeul:월요일) for Monday which means "day of the moon". In many Indo-Aryan languages, the word for Monday is Somavāra or Chandravāra, Sanskrit loan-translations of "Monday". In some cases, the "ecclesiastical" names are used, a tradition of numbering the days of the week in order to avoid the pagan connotation of the planetary or deities’ names, and to keep with the biblical name, in which Monday is the "second day" (Hebrew יום שני, Greek Δευτέρα ἡμέρα (Deutéra hēméra), Latin feria secunda, Arabic الأثنين). In many Slavic languages the name of the day translates to "after Sunday/holiday". Russian понедельник (ponyedyelnik) literally translated, Monday means "next to the week", по "next to" or "on" недельник "(the) week" Croatian and Bosnian ponedjeljak, Serbian понедељак (ponedeljak), Ukrainian понеділок (ponedilok), Bulgarian понеделник (ponedelnik), Polish poniedziałek, Czech pondělí, Slovak pondelok, Slovenian ponedeljek. In Turkish it is called pazartesi, which also means "after Sunday". Arrangement in the week Historically, the Greco-Roman week began with Sunday (dies solis), and Monday (dies lunae) was the second day of the week. It is still the custom to refer to Monday as feria secunda in the liturgical calendar of the Catholic Church. Quakers also traditionally referred to Monday as "Second Day". The Portuguese and the Greek (Eastern Orthodox Church) also retain the ecclesiastical tradition (Portuguese segunda-feira, Greek Δευτέρα "deutéra" "second"). Vietnamese, whose Latin-based alphabet was originally romanized by Portuguese Jesuit missionaries, adopted this convention and thus also refers to Monday as Second Day (thứ Hai). Likewise, the Modern Hebrew name for Monday is yom-sheni (יום שני). While in North America, Sunday is the first day of the week, the Geneva-based International Organization for Standardization places Monday as the first day of the week in its ISO 8601 standard. Monday is xīngqīyī (星期一) in Chinese, meaning "day one of the week". Religious observances Christianity The early Christian Didache warned believers not to fast on Mondays to avoid Judaizing (see below), and suggested fasting on Wednesdays instead. In the Eastern Orthodox Church, Mondays are days on which the Angels are commemorated. The Octoechos contains hymns on this theme, arranged in an eight-week cycle, which are chanted on Mondays throughout the year. 
At the end of Divine Services on Mondays, the dismissal begins with the words: "May Christ our True God, through the intercessions of his most-pure Mother, of the honorable, Bodiless Powers (i.e., the angels) of Heaven…". In many Eastern monasteries Mondays are observed as fast days because Mondays are dedicated to the angels, and monks strive to live an angelic life. In these monasteries, the monks abstain from meat, fowl, dairy products, fish, wine and oil (if a feast day occurs on a Monday, fish, wine and oil may be allowed, depending upon the particular feast). Members of the Church of Jesus Christ of Latter-day Saints set aside one evening per week, called Family Home Evening (FHE) or Family Night, usually on Monday, when families are encouraged to spend time together in study, prayer and other family activities. Hinduism In Hinduism, Mondays are associated with Chandra or Soma, the Hindu god of the moon. In several South Asian languages, Monday is known as Somavara or Somavaram. Hindus who fast on Mondays do so in dedication to the deity Shiva. Some observe the Solah Somvar Vrat, a fast of sixteen Mondays dedicated to Shiva in hopes of getting married and finding a suitable partner. Fasting on Mondays in the Hindu month of Shravana is also considered auspicious, as it is one of the holiest months to Hindus and is dedicated to Shiva and his consort Parvati. Islam In Islam, Monday is one of the two days of the week on which Muslims are encouraged to fast voluntarily, the other being Thursday. A number of Hadith narrate that Muhammad fasted on these days. According to the same Hadith, Muhammad was born on a Monday. It is also narrated that he received his first revelation (which would later become the Quran) on a Monday. Judaism In Judaism, Mondays are considered auspicious days for fasting. A small portion of the weekly Parashah in the Torah is read in public on Monday and Thursday mornings, as a supplement to the Saturday reading. Special penitential prayers are recited on Monday unless there is a special occasion for happiness which cancels them. According to the Mishna and Talmud, these traditions are due to Monday and Thursday being "the market days", when people gathered from the towns to the city. A tradition of Ashkenazi Jews to voluntarily fast on the first consecutive Monday, Thursday and Monday of the Hebrew month is prevalent among the ultra-Orthodox. In Hebrew, Monday is called "Yom Shení", literally meaning "Second Day", following the biblical reference to the sabbath day as the "Seventh day" and the tradition of that day being on Saturday. It has been established that the phonetic and cultural link between the planet Saturn, Saturday and the Sabbath day is of ancient Mesopotamian origin. Cultural references A number of popular songs in Western culture portray Mondays as days of depression, anxiety, avolition, hysteria, or melancholy, mostly because of their association with the first day of the workweek. Mondays are also portrayed as days of boredom and bad luck, especially for many people in their school years, who have to return to school every Monday after the weekend, which can foster a lasting dislike of Mondays. Examples include "Monday, Monday" (1966) from the Mamas & the Papas; "Rainy Days and Mondays" (1971) from the Carpenters; "Monday, Monday, Monday" (2002) from Tegan and Sara; and "Manic Monday" (1986) from the Bangles (written by Prince).
There is a band named the Happy Mondays and an American pop-punk band named Hey Monday. The popular comic strip character Garfield, created by Jim Davis, is well known for his hatred of Mondays, often expressed through the catchphrase “I hate Mondays.” In the United Kingdom, more people die by suicide in England and Wales on Mondays than on other days of the week; more people in the country call in sick; and more people worldwide surf the web. In July 2002, the consulting firm PricewaterhouseCoopers announced that it would rename its consultancy practice "Monday" and would spend $110 million over the next year to establish the brand. When IBM acquired the consultancy three months later, it chose not to retain the new name. On October 17, 2022, Guinness World Records announced on Twitter that Monday is the "Worst Day of the Week", to the dismay of some people. Named days Big Monday Black Monday Blue Monday Clean Monday (Ash Monday) Cyber Monday Easter Monday, also Bright Monday or Wet Monday Handsel Monday Lundi Gras Mad Monday Miracle Monday Plough Monday Shrove Monday Wet Monday Whit Monday
Technology
Days of the week
null
54407
https://en.wikipedia.org/wiki/Sunday
Sunday
Sunday (Latin: dies solis meaning "day of the sun") is the day of the week between Saturday and Monday. Sunday is a day of rest in most Western countries and a part of the weekend. In some Middle Eastern countries, Sunday is a weekday. For most Christians, Sunday is observed as a day of worship and rest, holding it as the Lord's Day and the day of Christ's resurrection; in the United States, Canada, Japan, as well as in parts of South America, Sunday is the first day of the week. According to the Islamic calendar, Hebrew calendar and traditional calendars (including Christian calendars) Sunday is the first day of the week; Quaker Christians call Sunday the "first day" in accordance with their testimony of simplicity. The International Organization for Standardization ISO 8601, which is based in Switzerland, calls Sunday the seventh day of the week. Etymology The name "Sunday", the day of the Sun, is derived from Hellenistic astrology, where the seven planets – known in English as Saturn, Jupiter, Mars, the Sun, Venus, Mercury and the Moon – each had an hour of the day assigned to them, and the planet which was regent during the first hour of any day of the week gave its name to that day. During the 1st and 2nd centuries, the week of seven days was introduced into Rome from Egypt, and the Roman names of the planets were given to each successive day. Germanic peoples seem to have adopted the week as a division of time from the Romans, but they changed the Roman names into those of corresponding Teutonic deities. Hence, the dies Solis became Sunday (German, Sonntag). The English noun Sunday derived sometime before 1250 from sunedai, which itself developed from Old English (before 700) Sunnandæg (literally meaning "sun's day"), which is cognate to other Germanic languages, including Old Frisian sunnandei, Old Saxon sunnundag, Middle Dutch sonnendach (modern Dutch zondag), Old High German sunnun tag (modern German Sonntag), and Old Norse sunnudagr (Danish and Norwegian søndag, Icelandic sunnudagur and Swedish söndag). The Germanic term is a Germanic interpretation of Latin dies solis ("day of the sun"), which is a translation of the ancient Greek Ἥλίου ημέρα" (Hēlíou hēméra). In most Indian languages, the word for Sunday is derived from Sanskrit Ravivāra or Adityavāra — vāra meaning day and Aditya and Ravi both being names for Surya, the Sun and the solar deity. Ravivāra is the first day cited in Jyotisha, which provides logical reason for giving the name of each weekday. In the Thai solar calendar, the name ("Waan Arthit") is derived from Aditya, and the associated colour is red. In most Slavic languages other than Russian, the words for Sunday reflect the Christian commandment to abstain from work. Belarusian (), Bulgarian (), Croatian and Serbian nedjelja / , Czech neděle, Macedonian (), Polish niedziela, Slovak nedeľa, Slovenian nedelja and Ukrainian () are all cognates literally meaning "no work" or "day with no work". In Russian, the word for Sunday is () meaning "resurrection" (that is, the day of a week which commemorates the resurrection of Jesus Christ). In Old Russian, Sunday was also called (), "free day", or "day with no work", but in the contemporary language this word means "week". The Modern Greek word for Sunday, , is derived from (Kyrios, Lord) also, due to its liturgical significance as the day commemorating the resurrection of Jesus Christ, i.e. The Lord's Day. The name is similar in the Romance languages. 
In Italian, Sunday is called , which also means "Lord's Day" (from Latin ). One finds similar cognates in French, where the name is , as well as Romanian , and in Spanish and Portuguese, . In Chinese, Korean, and Japanese, Sunday is called (), (), and () respectively, which all mean "sun day of the week". The Arabic word for Sunday is (), meaning "the first". It is usually combined with the word () meaning "day". Latvian word for Sunday is svētdiena, literally "holy day". Lithuanian word is sekmadienis, literally "seventh day" (archaic; in contemporary Lithuanian, "seventh day" translates to septinta diena). Position in the week ISO 8601 The international standard ISO 8601 for representation of dates and times states that Sunday is the seventh and last day of the week. This method of representing dates and times unambiguously was first published in 1988. Culture and languages In the Judaic, Christian, and some Islamic traditions, Sunday has been considered the first day of the week. A number of languages express this position either by the name of the day or by the naming of the other days. In Hebrew it is called יום ראשון yom rishon, in Arabic الأحد al-ahad, in Persian and related languages یکشنبه yek-shanbe, all meaning "first". In Greek, the names of the days Monday, Tuesday, Wednesday, and Thursday (, , , and ) mean "second", "third", "fourth", and "fifth", respectively. This leaves Sunday in the first position of the week count. Similarly in Portuguese, where the days from Monday to Friday are counted as "segunda-feira", "terça-feira", "quarta-feira", "quinta-feira" and "sexta-feira". In Vietnamese, the working days in the week are named as: Thứ Hai (Second), Thứ Ba (Third), Thứ Tư (Fourth), Thứ Năm (Fifth), Thứ Sáu (Sixth), and Thứ Bảy (Seventh). Sunday is called "Chủ Nhật"(chữ Hán: 主日) meaning "Lord's Day". Some colloquial text in the south of Vietnam and from the church may use a different reading of "Chúa Nhật"(in contemporary Vietnamese, "Chúa" means God or Lord and "Chủ" means own). In German, Wednesday is called Mittwoch, literally "mid-week", implying the week runs from Sunday to Saturday. In the Yoruba culture of West Africa, Sunday is called Oj̣ó ̣Aikú. Ojó Aiku is the day that begins a new week known as "Day of Rest". It is the day Orunmila, the convener of Ifá to earth, buried the mother of Esu Odara and his wife, Imi. Since that occurrence, Yoruba people decided to refer to the day as Ojó Aiku. Slavic languages implicitly number Monday as day number one. Russian воскресение (Sunday) means "resurrection". Hungarian szerda (Wednesday), csütörtök (Thursday), and péntek (Friday) are Slavic loanwords, so the correlation with "middle", "four", and "five" are not evident to Hungarian speakers. Hungarians use Vasárnap for Sunday, which means "market day". In the Maltese language, due to its Siculo-Arabic origin, Sunday is called Il-Ħadd, a corruption of wieħed, meaning "one". Monday is It-Tnejn, meaning "two". Similarly, Tuesday is It-Tlieta (three), Wednesday is L-Erbgħa (four), and Thursday is Il-Ħamis (five). In Armenian, Monday is Yerkoushabti, literally meaning "second day of the week", Tuesday Yerekshabti "third day", Wednesday Chorekshabti "fourth day", Thursday Hingshabti "fifth day". Saturday is Shabat coming from the word Sabbath or Shabbath in Hebrew, and Kiraki, coming from the word Krak, meaning "fire", is Sunday, referring to the sun as a fire. 
Apostle John, in Revelations 1:10, refers to the "Lord's Day", (kyriakḗ hēmera), that is, "the day of the Lord", possibly influencing the Armenian word for Sunday. In many European countries, calendars show Monday as the first day of the week, which follows the ISO 8601 standard. In the Persian calendar, used in Iran and Afghanistan, Sunday is the second day of the week. However, it is called "number one" as counting starts from zero; the first day - Saturday - is denoted as day zero. Sunday in Christianity Christian usage The ancient Romans traditionally used the eight-day nundinal cycle, a market week, but in the time of Augustus in the 1st century AD, a seven-day week also came into use. In the gospels, the women are described as coming to the empty tomb "", which literally means "toward the first of the sabbath" and is often translated "on the first day of the week". Justin Martyr, in the mid-2nd century, mentions "memoirs of the apostles" as being read on "the day called that of the sun" (Sunday) alongside the "writings of the prophets." On 7 March 321, Constantine I, Rome's first Christian emperor, decreed that Sunday would be observed as the Roman day of rest: Despite the official adoption of Sunday as a day of rest by Constantine, the seven-day week and the nundinal cycle continued to be used side by side until at least the Calendar of 354 and probably later. In 363, Canon 29 of the Council of Laodicea prohibited observance of the Jewish Sabbath (Saturday), and encouraged Christians to work on Saturday and rest on the Lord's Day (Sunday). The fact that the canon had to be issued at all is an indication that adoption of Constantine's decree of 321 was still not universal, not even among Christians. It also indicates that Jews were observing the Sabbath on Saturday. Modern practices First-day Sabbatarians, including Christians of the Methodist, Baptist and Reformed (Presbyterian and Congregationalist) traditions, observe Sunday as the sabbath, a day devoted to the worship of God at church (the attendance of Sunday School, a service of worship in the morning and evening), as well as a day of rest (meaning that people are free from servile labour and should refrain from trading, buying and selling except when necessary). For most Christians the custom and obligation of Sunday rest is not as strict. A minority of Christians do not regard the day they attend church as important, so long as they attend. There is considerable variation in the observance of Sabbath rituals and restrictions, but some cessation of normal weekday activities is customary. Many Christians today observe Sunday as a day of church attendance. In Roman Catholic liturgy, Sunday begins on Saturday evening. The evening Mass on Saturday is liturgically a full Sunday Mass and fulfills the obligation of Sunday Mass attendance, and Vespers (evening prayer) on Saturday night is liturgically "first Vespers" of the Sunday. The same evening anticipation applies to other major solemnities and feasts, and is an echo of the Jewish practice of starting the new day at sunset. Those who work in the medical field, in law enforcement, and soldiers in a war zone are dispensed from the usual obligation to attend church on Sunday. They are encouraged to combine their work with attending religious services if possible. In the Eastern Orthodox Church, Sunday begins at the Little Entrance of Vespers (or All-Night Vigil) on Saturday evening and runs until "Vouchsafe, O Lord" (after the "prokeimenon") of Vespers on Sunday night. 
During this time, the dismissal at all services begin with the words, "May Christ our True God, who rose from the dead ...." Anyone who wishes to receive Holy Communion at Divine Liturgy on Sunday morning is required to attend Vespers the night before (see Eucharistic discipline). Among Orthodox Christians, Sunday is considered to be a "Little Pascha" (Easter), and because of the Paschal joy, the making of prostrations is forbidden, except in certain circumstances. Leisure activities and idleness, being secular and offensive to Christ as they are time-wasting, are prohibited. Some languages lack separate words for "Saturday" and "Sabbath" (e. g. Italian, Portuguese). Outside the English-speaking world, Sabbath as a word, if it is used, refers to the Saturday (or the specific Jewish practices on it); Sunday is called the Lord's Day e. g. in Romance languages and Modern Greek. On the other hand, English-speaking Christians often refer to the Sunday as the Sabbath (other than Seventh-day Sabbatarians); a practice which, probably due to the international connections and the Latin tradition of the Roman Catholic Church, is more widespread among (but not limited to) Protestants. Quakers traditionally referred to Sunday as "First Day" eschewing the pagan origin of the English name, while referring to Saturday as the "Seventh day". Some Christian denominations, called "Seventh-day Sabbatarians", observe a Saturday Sabbath. Christians in the Seventh-day Adventist, Seventh Day Baptist, and Church of God (Seventh-Day) denominations, as well as many Messianic Jews, have maintained the practice of abstaining from work and gathering for worship on Saturdays (sunset to sunset) as did all of the followers of God in the Old Testament. Sunday in Mandaeism Sunday in Mandaeism is called Habshaba (Habšaba). Mandaeans perform communal masbuta (baptism) every Sunday. Common occurrences on Sunday In government and business In the United States and Canada, most government offices are closed on both Saturday and Sunday. The practice of offices closing on Sunday in government and in some rural areas of the United States stem from a system of blue laws. Blue laws were established in the early puritan days, which forbade secular activities on Sunday and were rigidly enforced. Some public activities are still regulated by these blue laws in the 21st century. In 1985, twenty-two states in which religious fundamentalism remained strong maintained general restrictions on Sunday behavior. In Oklahoma, for example, it is stated: "Oklahoma's statutes state that "acts deemed useless and serious interruptions of the repose and religious liberty of the community," such as trades, manufacturing, mechanical employment, horse racing, and gaming are forbidden. Public selling of commodities other than necessary foods and drinks, medicine, ice, and surgical and burial equipment, and other necessities can legally be prohibited on Sunday. In Oklahoma, a fine not to exceed twenty-five dollars may be imposed on individuals for each offense." Because of these blue laws, many private sector retail businesses open later and close earlier on Sunday or do not open at all. Many countries, particularly in Europe such as Sweden, France, Germany and Belgium, but also in other countries such as Peru, hold their national and local elections on a Sunday, either by law or by tradition. In media Many American and British daily newspapers publish a larger edition on Sundays, which often includes color comic strips, a magazine, and a coupon section. 
Others only publish on a Sunday, or have a "sister paper" with a different masthead that only publishes on a Sunday. North American radio stations often play specialty radio shows such as Casey Kasem's countdown or other nationally syndicated radio shows that may differ from their regular weekly music patterns on Sunday morning or Sunday evening. In the United Kingdom, there is a Sunday tradition of chart shows on BBC Radio 1 and commercial radio; this originates in the broadcast of chart shows and other populist material on Sundays by Radio Luxembourg when the Reithian BBC's Sunday output consisted largely of solemn and religious programmes. The first Sunday chart show was broadcast on the Light Programme on 7 January 1962, which was considered a radical step at the time. BBC Radio 1's chart show moved to Fridays in July 2015 but a chart update on Sundays was launched in July 2019. Period or older-skewing television dramas, such as Downton Abbey, Call the Midwife, Lark Rise to Candleford and Heartbeat are commonly shown on Sunday evenings in the UK; the first of these was Dr Finlay's Casebook in the 1960s. Similarly, Antiques Roadshow has been shown on Sundays on BBC1 since 1979 and Last of the Summer Wine was shown on Sundays for many years until it ended in 2010. On Sundays, BBC Radio 2 plays music in styles which it once regularly played but which are now rarely heard on the station, with programmes such as Elaine Paige on Sunday and Sunday Night is Music Night although more contemporary styles now make up a higher percentage of the station's Sunday output than previously; for example, Kendrick Lamar received a Sunday-night play on the station in March 2022. Even younger-skewing media outlets sometimes skew older on Sundays within the terms of their own audience; for example, BBC Radio 1Xtra introduced an "Old Skool Sunday" schedule in the autumn of 2019. Many American, Australian and British television networks and stations also broadcast their political interview shows on Sunday mornings. In sports Major League Baseball usually schedules all Sunday games in the daytime except for the nationally televised Sunday Night Baseball matchup. Certain historically religious cities such as Boston and Baltimore among others will schedule games no earlier than 1:35 PM to ensure time for people who go to religious service in the morning can get to the game in time. In the United States, professional American football in the National Football League is usually played on Sunday, although Saturday (via Saturday Night Football), Monday (via Monday Night Football), and Thursday (via Thursday Night Football or Thanksgiving) see some professional games. College football usually occurs on Saturday, and high-school football tends to take place on Friday night or Saturday afternoon. In the UK, some club and Premier League football matches and tournaments usually take place on Sundays. Rugby matches and tournaments usually take place in club grounds or parks on Sunday mornings. It is not uncommon for church attendance to shift on days when a late morning or early afternoon game is anticipated by a local community. The Indian Premier League schedules two games on Saturdays and Sundays instead of one, also called Double-headers. One of the remains of religious segregation in the Netherlands is seen in amateur football: The Saturday-clubs are by and large Protestant Christian clubs, who were not allowed to play on Sunday. 
The Sunday-clubs were in general Catholic and working class clubs, whose players had to work on Saturday and therefore could only play on Sunday. In Ireland, Gaelic football and hurling matches are predominantly played on Sundays, with the first (previously second) and fourth (previously third) Sundays in September always playing host to the All-Ireland hurling and football championship finals, respectively. Professional golf tournaments traditionally end on Sunday. Traditionally, those in the United Kingdom ended on Saturday, but this changed some time ago; for example, the Open ran from Wednesday to Saturday up to 1979 but has run from Thursday to Sunday since 1980. In the United States and Canada, National Basketball Association and National Hockey League games, which are usually played at night during the week, are frequently played during daytime hours - often broadcast on national television. Most NASCAR Cup Series and IndyCar events are held on Sundays. Most Formula One World Championship races are likewise held on Sundays regardless of time zone/country, while MotoGP holds most races on Sundays, with Middle Eastern races being the exception on Saturday. All Formula One events and MotoGP events with Sunday races involve qualifying taking place on Saturday. Astrology Sunday is associated with the Sun and is symbolized by the symbol ☉. Named days Advent Sunday Black Sunday Bloody Sunday Cold Sunday Easter Sunday represents the resurrection of Christ Gaudete Sunday is the third Sunday of Advent. Gloomy Sunday Good Shepherd Sunday is the fourth Sunday of Easter. Laetare Sunday is the fourth Sunday of Lent. Low Sunday, first Sunday after Easter, is also known as the Octave of Easter, White Sunday, Quasimodo Sunday, Alb Sunday, Antipascha Sunday, and Divine Mercy Sunday. Passion Sunday, the fifth Sunday of Lent as the beginning of Passiontide (since 1970 for Roman Catholics in the ordinary form of the rite, the term remains only official among the greater title of the Palm Sunday, which used to be also the "2nd Sunday of Passiontide") Palm Sunday is the Sunday before Easter. Selection Sunday Septuagesima, Sexagesima and Quinquagesima Sunday are the last three Sundays before Lent. Quinquagesima ("fiftieth"), is the fiftieth day before Easter, reckoning inclusively; but Sexagesima is not the sixtieth day and Septuagesima is not the seventieth but is the sixty-fourth day prior. The use of these terms was abandoned by the Catholic Church in the 1970 calendar reforms (the Sundays before Lent are now simply "Sundays in ordinary time" with no special status). However, their use is still continued in Lutheran tradition: for example, "Septuagesimae". Shavuot is the Jewish Pentecost, or 'Festival of Weeks'. For Karaite Jews it always falls on a Sunday. Stir-up Sunday is the last Sunday before Advent. Super Bowl Sunday Trinity Sunday is the first Sunday after Pentecost. Whitsunday "White Sunday" is the day of Pentecost. In pop culture Music A Sunday Kind of Love is a 1946 jazz standard first recorded by Claude Thornhill. Sunday Morning is a 1966 song by American rock band The Velvet Underground. Sunday Morning is a 2004 song by American pop rock band Maroon 5. Sunday Best is a 2019 song by American electro-pop duo Surfaces
Technology
Days of the week
null
54410
https://en.wikipedia.org/wiki/Triceratops
Triceratops
Triceratops ( ; ) is a genus of chasmosaurine ceratopsian dinosaur that lived during the late Maastrichtian age of the Late Cretaceous period, about 68 to 66 million years ago in what is now western North America. It was one of the last-known non-avian dinosaurs and lived until the Cretaceous–Paleogene extinction event 66 million years ago. The name Triceratops, which means 'three-horned face', is derived from the Greek words () meaning 'three', () meaning 'horn', and () meaning 'face'. Bearing a large bony frill, three horns on the skull, and a large, four-legged body, exhibiting convergent evolution with bovines and rhinoceroses, Triceratops is one of the most recognizable of all dinosaurs and the best-known ceratopsian. It was also one of the largest, measuring around long and weighing up to . It shared the landscape with and was most likely preyed upon by Tyrannosaurus, though it is less certain that two adults would battle in the fanciful manner often depicted in museum displays and popular media. The functions of the frills and three distinctive facial horns on its head have inspired countless debates. Traditionally, these have been viewed as defensive weapons against predators. More recent interpretations find it probable that these features were primarily used in species identification, courtship, and dominance display, much like the antlers and horns of modern ungulates. Triceratops was traditionally placed within the "short-frilled" ceratopsids, but modern cladistic studies show it to be a member of Chasmosaurinae, which usually have long frills. Two species, T. horridus and T. prorsus, are considered valid today. Seventeen different species, however, have been named throughout history. Research published in 2010 concluded that the contemporaneous Torosaurus, a ceratopsid long regarded as a separate genus, represents Triceratops in its mature form. This view has still been highly disputed and much more data is needed to settle this ongoing debate. Triceratops has been documented by numerous remains collected since the genus was first described in 1889 by American paleontologist Othniel Charles Marsh. Specimens representing life stages from hatchling to adult have been found. As the archetypal ceratopsian, Triceratops is one of the most beloved, popular dinosaurs and has been featured in numerous films, postage stamps, and many other types of media. Discovery and identification The first named fossil specimen now attributed to Triceratops is a pair of brow horns attached to a skull roof that were found by George Lyman Cannon near Denver, Colorado, in the spring of 1887. This specimen was sent to Othniel Charles Marsh, who believed that the formation from which it came from dated from the Pliocene and that the bones belonged to a particularly large and unusual bison, which he named Bison alticornis. He realized that there were horned dinosaurs by the next year, which saw his publication of the genus Ceratops from fragmentary remains, but he still believed B. alticornis to be a Pliocene mammal. It took a third and much more complete skull to fully change his mind. Although not confidently assignable, fossils possibly belonging to Triceratops were described as two taxa, Agathaumas sylvestris and Polyonax mortuarius, in 1872 and 1874, respectively, by Marsh's archrival Edward Drinker Cope. 
Agathaumas was named based on a pelvis, several vertebrae, and a few ribs collected by Fielding Bradford Meek and Henry Martyn Bannister near the Green River of southeastern Wyoming from layers coming from the Maastrichtian Lance Formation. Due to the fragmentary nature of the remains, it can only confidently be assigned to Ceratopsidae. Polyonax mortuarius was collected by Cope himself in 1873 from northeastern Colorado, possibly coming from the Maastrichtian Denver Formation. The fossils only consisted of fragmentary horn cores, 3 dorsal vertebrae, and fragmentary limb elements. Polyonax has the same issue as Agathaumas, with the fragmentary remains non-assignable beyond Ceratopsidae. The Triceratops holotype, YPM 1820, was collected in 1888 from the Lance Formation of Wyoming by fossil hunter John Bell Hatcher, but Marsh initially described this specimen as another species of Ceratops. Cowboy Edmund B. Wilson had been startled by the sight of a monstrous skull poking out of the side of a ravine. He tried to recover it by throwing a lasso around one of the horns. When it broke off, the skull tumbling to the bottom of the cleft, Wilson brought the horn to his boss. His boss was rancher and avid fossil collector Charles Arthur Guernsey, who just so happened to show it to Hatcher. Marsh subsequently ordered Hatcher to locate and salvage the skull. The holotype was first named Ceratops horridus. When further preparation uncovered the third nose horn, Marsh changed his mind and gave the piece the new generic name Triceratops (), accepting his Bison alticornis as another species of Ceratops. It would, however, later be added to Triceratops. The sturdy nature of the animal's skull has ensured that many examples have been preserved as fossils, allowing variations between species and individuals to be studied. Triceratops remains have subsequently been found in Montana and South Dakota (and more in Colorado and Wyoming), as well as the Canadian provinces of Saskatchewan and Alberta. Species After Triceratops was described, between 1889 and 1891, Hatcher collected another thirty-one of its skulls with great effort. The first species had been named T. horridus by Marsh. Its specific name was derived from the Latin word meaning "rough" or "rugose", perhaps referring to the type specimen's rough texture, later identified as an aged individual. The additional skulls varied to a lesser or greater degree from the original holotype. This variation is unsurprising, given that Triceratops skulls are large three-dimensional objects from individuals of different ages and both sexes that which were subjected to different amounts and directions of pressure during fossilization. In the first attempt to understand the many species, Richard Swann Lull found two groups, although he did not say how he distinguished them. One group composed of T. horridus, T. prorsus, and T. brevicornus ('the short-horned'). The other composed of T. elatus and T. calicornis. Two species (T. serratus and T. flabellatus) stood apart from these groups. By 1933, alongside his revision of the landmark 1907 Hatcher–Marsh–Lull monograph of all known ceratopsians, he retained his two groups and two unaffiliated species, with a third lineage of T. obtusus and T. hatcheri ('Hatcher's') that was characterized by a very small nasal horn. T. horridus–T. prorsus–T. brevicornus was now thought to be the most conservative lineage, with an increase in skull size and a decrease in nasal horn size. T. elatus–T. 
calicornis was defined by having large brow horns and small nasal horns. Charles Mortram Sternberg made one modification by adding T. eurycephalus ('the wide-headed') and suggesting that it linked the second and third lineages closer together than they were to the T. horridus lineage. With time, the idea that the differing skulls might be representative of individual variation within one (or two) species gained popularity. In 1986, John Ostrom and Peter Wellnhofer published a paper in which they proposed that there was only one species, Triceratops horridus. Part of their rationale was that there are generally only one or two species of any large animal in a region. To their findings, Thomas Lehman added the old Lull–Sternberg lineages combined with maturity and sexual dimorphism, suggesting that the T. horridus–T. prorsus–T. brevicornus lineage was composed of females, the T. calicornis–T. elatus lineage was made up of males, and the T. obtusus–T. hatcheri lineage was of pathologic old males. These findings were contested a few years later by paleontologist Catherine Forster, who reanalyzed Triceratops material more comprehensively and concluded that the remains fell into two species, T. horridus and T. prorsus, although the distinctive skull of T. ("Nedoceratops") hatcheri differed enough to warrant a separate genus. She found that T. horridus and several other species belonged together and that T. prorsus and T. brevicornus stood alone. Since there were many more specimens in the first group, she suggested that this meant the two groups were two species. It is still possible to interpret the differences as representing a single species with sexual dimorphism. In 2009, John Scannella and Denver Fowler supported the separation of T. prorsus and T. horridus, noting that the two species are also separated stratigraphically within the Hell Creek Formation, indicating that they did not live together at the same time. Valid species T. horridus (Marsh, 1889) Marsh, 1889 (originally Ceratops) (type species) T. prorsus Marsh, 1890 Synonyms and doubtful species Some of the following species are synonyms, as indicated in parentheses ("=T. horridus" or "=T. prorsus"). All the others are each considered a () because they are based on remains too poor or incomplete to be distinguished from pre-existing Triceratops species. T. albertensis C. M. Sternberg, 1949 T. alticornis (Marsh 1887) Hatcher, Marsh, and Lull, 1907 [originally Bison alticornis, Marsh 1887, and Ceratops alticornis, Marsh 1888] T. brevicornus Hatcher, 1905 (=T. prorsus) T. calicornis Marsh, 1898 (=T. horridus) T. elatus Marsh, 1891 (=T. horridus) T. eurycephalus Schlaikjer, 1935 T. flabellatus Marsh, 1889 (= Sterrholophus Marsh, 1891) (=T. horridus) T. galeus Marsh, 1889 T. hatcheri (Hatcher & Lull 1905) Lull, 1933 (contentious; see Nedoceratops below) T. ingens Marsh vide Lull, 1915 T. maximus Brown, 1933 T. mortuarius (Cope, 1874) Kuhn, 1936 (nomen dubium; originally Polyonax mortuarius) T. obtusus Marsh, 1898 (=T. horridus) T. serratus Marsh, 1890 (=T. horridus) T. sulcatus Marsh, 1890 T. sylvestris (Cope, 1872) Kuhn, 1936 (nomen dubium; originally Agathaumas sylvestris) Description Size Triceratops was a very large animal, measuring around in length and weighing up to . A specimen of T. horridus named Kelsey measured long, has a skull, stood about tall, and was estimated by the Black Hills Institute to weigh approximately . 
Skull Like all chasmosaurines, Triceratops had a large skull relative to its body size, among the largest of all land animals. The largest-known skull, specimen MWC 7584 (formerly BYU 12183), is estimated to have been in length when complete and could reach almost a third of the length of the entire animal. The front of the head was equipped with a large beak in front of its teeth. The core of the top beak was formed by a special rostral bone. Behind it, the premaxillae bones were located, embayed from behind by very large, circular nostrils. In chasmosaurines, the premaxillae met on their midline in a complex bone plate, the rear edge of which was reinforced by the "narial strut". From the base of this strut, a triangular process jutted out into the nostril. Triceratops differs from most relatives in that this process was hollowed out on the outer side. Behind the toothless premaxilla, the maxilla bore thirty-six to forty tooth positions, in which three to five teeth per position were vertically stacked. The teeth were closely appressed, forming a "dental battery" curving to the inside. The skull bore a single horn on the snout above the nostrils. In Triceratops, the nose horn is sometimes recognisable as a separate ossification, the epinasal. The skull also featured a pair of supraorbital "brow" horns approximately long, with one above each eye. The jugal bones pointed downward at the rear sides of the skull and were capped by separate epijugals. With Triceratops, these were not particularly large and sometimes touched the quadratojugals. The bones of the skull roof were fused and by a folding of the frontal bones, a "double" skull roof was created. In Triceratops, some specimens show a fontanelle, an opening in the upper roof layer. The cavity between the layers invaded the bone cores of the brow horns. At the rear of the skull, the outer squamosal bones and the inner parietal bones grew into a relatively short, bony frill, adorned with epoccipitals in young specimens. These were low triangular processes on the frill edge, representing separate skin ossifications or osteoderms. Typically, with Triceratops specimens, there are two epoccipitals present on each parietal bone, with an additional central process on their border. Each squamosal bone had five processes. Most other ceratopsids had large parietal fenestrae, openings in their frills, but those of Triceratops were noticeably solid, unless the genus Torosaurus represents mature Triceratops individuals, which it most likely does not. Under the frill, at the rear of the skull, a huge occipital condyle, up to in diameter, connected the head to the neck. The lower jaws were elongated and met at their tips in a shared epidentary bone, the core of the toothless lower beak. In the dentary bone, the tooth battery curved to the outside to meet the battery of the upper jaw. At the rear of the lower jaw, the articular bone was exceptionally wide, matching the general width of the jaw joint. T. horridus can be distinguished from T. prorsus by having a shallower snout. Postcranial skeleton Chasmosaurines showed little variation in their postcranial skeleton. The skeleton of Triceratops is markedly robust. Both Triceratops species possessed a very sturdy build, with strong limbs, short hands with three hooves each, and short feet with four hooves each. The vertebral column consisted of ten neck, twelve back, ten sacral, and about forty-five tail vertebrae. The front neck vertebrae were fused into a syncervical. 
Traditionally, this was assumed to have incorporated the first three vertebrae, thus implying that the frontmost atlas was very large and sported a neural spine. Later interpretations revived an old hypothesis by John Bell Hatcher that, at the very front, a vestige of the real atlas can be observed, the syncervical then consisting of four vertebrae. The vertebral count mentioned is adjusted to this view. In Triceratops, the neural spines of the neck are constant in height and do not gradually slope upwards. Another peculiarity is that the neck ribs only begin to lengthen with the ninth cervical vertebra. The rather short and high vertebrae of the back were, in the middle region, reinforced by ossified tendons running along the tops of the neural arches. The straight sacrum was long and adult individuals show a fusion of all sacral vertebrae. In Triceratops the first four and last two sacrals had transverse processes, connecting the vertebral column to the pelvis, that were fused at their distal ends. Sacrals seven and eight had longer processes, causing the sacrum to have an oval profile in top view. On top of the sacrum, a neural plate was present formed by a fusion of the neural spines of the second through fifth vertebrae. Triceratops had a large pelvis with a long ilium. The ischium was curved downwards. The foot was short with four functional toes. The phalangeal formula of the foot is 2-3-4-5-0. Although horned dinosaurs were certainly quadrupedal, their posture has long been the subject of some debate. Originally, it was believed that the front legs of the animal had to be sprawling at a considerable angle from the thorax in order to better bear the weight of the head. This stance can be seen in paintings by Charles Knight and Rudolph Zallinger. Ichnological evidence in the form of trackways from horned dinosaurs and recent reconstructions of skeletons (both physical and digital) seem to show that Triceratops and other ceratopsids maintained an upright stance during normal locomotion, with the elbows flexed backwards and slightly bowed out, in an intermediate state between fully upright and fully sprawling, comparable to the modern rhinoceros. The hands and forearms of Triceratops retained a fairly primitive structure when compared to other quadrupedal dinosaurs, such as thyreophorans and many sauropods. In those two groups, the forelimbs of quadrupedal species were usually rotated so that the hands faced forward with palms backward ("pronated") as the animals walked. Triceratops, like other ceratopsians and related quadrupedal ornithopods (together forming the Cerapoda), walked with most of their fingers pointing out and away from the body, the original condition for dinosaurs. This was also retained by bipedal forms, like theropods. In Triceratops, the weight of the body was carried by only the first three fingers of the hand, while digits 4 and 5 were vestigial and lacked claws or hooves. The phalangeal formula of the hand is 2-3-4-3-1, meaning that the first or innermost finger of the forelimb has two bones, the next has three, the next has four, etc. Skin Preserved skin from Triceratops is known. This skin consists of large scales, some of which exceed across, which have conical projections rising from their center. A preserved piece of skin from the frill of a specimen is also known, which consists of small polygonal basement scales. Classification Triceratops is the best-known genus of Ceratopsidae, a family of large, mostly North American ceratopsians.
The exact relationship of Triceratops among the other ceratopsids has been debated over the years. Confusion stemmed mainly from the combination of a short, solid frill (similar to that of Centrosaurinae) with long brow horns (more akin to Chasmosaurinae). In the first overview of ceratopsians, R. S. Lull hypothesized the existence of two lineages, one of Monoclonius and Centrosaurus leading to Triceratops, the other with Ceratops and Torosaurus, making Triceratops a centrosaurine as the group is understood today. Later revisions supported this view, with Lawrence Lambe in 1915 formally describing the first, short-frilled group as Centrosaurinae (including Triceratops), and the second, long-frilled group as Chasmosaurinae. In 1949, Charles Mortram Sternberg was the first to question this position, proposing instead that Triceratops was more closely related to Arrhinoceratops and Chasmosaurus based on skull and horn features, making Triceratops a chasmosaurine ("ceratopsine" in his usage) genus. He was largely ignored, with John Ostrom and later David Norman placing Triceratops within the Centrosaurinae. Subsequent discoveries and analyses, however, proved the correctness of Sternberg's view on the position of Triceratops, with Thomas Lehman defining both subfamilies in 1990 and diagnosing Triceratops as "ceratopsine" on the basis of several morphological features. Apart from the one feature of a shortened frill, Triceratops shares no derived traits with centrosaurines. Further research by Peter Dodson, including a 1990 cladistic analysis and a 1993 study using resistant-fit theta-rho analysis, or RFTRA (a morphometric technique which systematically measures similarities in skull shape), reinforces the placement of Triceratops as a chasmosaurine. A 2014 analysis by Longrich, who named a new species of Pentaceratops, included nearly all species of chasmosaurine in a single cladogram. For many years after its discovery, the deeper evolutionary origins of Triceratops and its close relatives remained largely obscure. In 1922, the newly discovered Protoceratops was seen as its ancestor by Henry Fairfield Osborn, but many decades passed before additional findings came to light. Recent years have been fruitful for the discovery of several antecedents of Triceratops. Zuniceratops, the earliest-known ceratopsian with brow horns, was described in the late 1990s, and Yinlong, the first known Jurassic ceratopsian, was described in 2005. These new finds have been vital in illustrating the origins of ceratopsians in general, suggesting an Asian origin in the Jurassic and the appearance of truly horned ceratopsians by the beginning of the Late Cretaceous in North America. In phylogenetic taxonomy, the genus Triceratops has been used as a reference point in the definition of Dinosauria. Dinosaurs have been designated as all descendants of the most recent common ancestor of Triceratops and modern birds. Furthermore, Ornithischia has been defined as those dinosaurs more closely related to Triceratops than to modern birds. Paleobiology Although Triceratops is commonly portrayed as a herding animal, there is currently little evidence to suggest that they lived in herds. While several other ceratopsians are known from bone beds preserving bones from two to hundreds or even thousands of individuals, there is currently only one documented bonebed dominated by Triceratops bones: a site in southeastern Montana with the remains of three juveniles. It may be significant that only juveniles were present.
In 2012, a group of three relatively complete Triceratops, ranging in size from a full-grown adult to a small juvenile, was found near Newcastle, Wyoming. The remains are currently under excavation by paleontologist Peter Larson and a team from the Black Hills Institute. It is believed that the animals were traveling as a family unit, but it remains unknown if the group consists of a mated pair and their offspring, or two females and a juvenile they were caring for. The remains also show signs of predation or scavenging from Tyrannosaurus, particularly on the largest specimen, with the bones of the front limbs showing breakage and puncture wounds from Tyrannosaurus teeth. In 2020, Illies and Fowler described the co-ossified distal caudal vertebrae of Triceratops. According to them, this pathology could have arisen after one Triceratops accidentally stepped on the tail of another member of the herd. For many years, Triceratops finds were known only from solitary individuals. These remains are very common. For example, Bruce Erickson, a paleontologist of the Science Museum of Minnesota, has reported having seen 200 specimens of T. prorsus in the Hell Creek Formation of Montana. Similarly, Barnum Brown claimed to have seen over 500 skulls in the field. Because Triceratops teeth, horn fragments, frill fragments, and other skull fragments are such abundant fossils in the Lancian faunal stage of the late Maastrichtian (Late Cretaceous, 66 mya) of western North America, it is regarded as one of the dominant herbivores of the time, if not the most dominant. In 1986, Robert Bakker estimated it as making up five sixths of the large dinosaur fauna at the end of the Cretaceous. Unlike most animals, skull fossils are far more common than postcranial bones for Triceratops, suggesting that the skull had an unusually high preservation potential. Analysis of the endocranial anatomy of Triceratops suggests its sense of smell was poor compared to that of other dinosaurs. Its ears were attuned to low-frequency sounds, given the short cochlear lengths recorded in an analysis by Sakagami et al. This same study also suggests that Triceratops held its head about 45 degrees to the ground, an angle that would showcase the horns and frill most effectively while also allowing the animal to feed by grazing. A 2022 study by Wiemann and colleagues of various dinosaur genera, including Triceratops, suggests that it had an ectothermic (cold-blooded) or gigantothermic metabolism, on par with that of modern reptiles. This was uncovered using the spectroscopy of lipoxidation signals, which are byproducts of oxidative phosphorylation and correlate with metabolic rates. They suggested that such metabolisms may have been common for ornithischian dinosaurs in general, with the group evolving towards ectothermy from an ancestor with an endothermic (warm-blooded) metabolism. Dentition and diet Triceratops were herbivorous and, because of their low-slung head, their primary food was probably low-growing vegetation, although they may have been able to knock down taller plants with their horns, beak, and sheer bulk. The jaws were tipped with a deep, narrow beak, believed to have been better at grasping and plucking than biting. Triceratops teeth were arranged in groups called batteries, which contained 36 to 40 tooth columns in each side of each jaw and 3 to 5 stacked teeth per column, depending on the size of the animal.
This gives a range of 432 to 800 teeth, of which only a fraction were in use at any given time (as tooth replacement was continuous throughout the life of the animal). They functioned by shearing in a vertical to near-vertical orientation. Additionally, their teeth wore as they fed, creating fullers that minimised friction as they masticated. The great size and numerous teeth of Triceratops suggests that they ate large volumes of fibrous plant material. Some researchers suggest it, along with its cousin Torosaurus ate palms and cycads and others suggest it ate ferns, which then grew in prairies. Functions of the horns and frill There has been much speculation over the functions of Triceratops head adornments. The two main theories have revolved around use in combat and in courtship display, with the latter now thought to be the most likely primary function. Early on, Lull postulated that the frills may have served as anchor points for the jaw muscles to aid chewing by allowing increased size and power for the muscles. This has been put forward by other authors over the years, but later studies do not find evidence of large muscle attachments on the frill bones. Triceratops were long thought to have used their horns and frills in combat with large predators, such as Tyrannosaurus, the idea being discussed first by Charles H. Sternberg in 1917 and 70 years later by Robert Bakker. There is evidence that Tyrannosaurus did have aggressive head-on encounters with Triceratops, based on partially healed tyrannosaur tooth marks on a Triceratops brow horn and squamosal. The bitten horn is also broken, with new bone growth after the break. Which animal was the aggressor, however, is unknown. Paleontologist Peter Dodson estimates that, in a battle against a bull Tyrannosaurus, the Triceratops had the upper hand and would successfully defend itself by inflicting fatal wounds to the Tyrannosaurus using its sharp horns. Tyrannosaurus is also known to have fed on Triceratops, as shown by a heavily tooth-scored Triceratops ilium and sacrum. In addition to combat with predators using its horns, Triceratops are popularly shown engaging each other in combat with horns locked. While studies show that such activity would be feasible, if unlike that of present-day horned animals, there is disagreement about whether they did so. Although pitting, holes, lesions, and other damage on Triceratops skulls (and the skulls of other ceratopsids) are often attributed to horn damage in combat, a 2006 study finds no evidence for horn thrust injuries causing these forms of damage (with there being no evidence of infection or healing). Instead, non-pathological bone resorption, or unknown bone diseases, are suggested as causes. A 2009 study compared incidence rates of skull lesions and periosteal reaction in Triceratops and Centrosaurus, showing that these were consistent with Triceratops using its horns in combat and the frill being adapted as a protective structure, while lower pathology rates in Centrosaurus may indicate visual use over physical use of cranial ornamentation or a form of combat focused on the body rather than the head. The frequency of injury was found to be 14% in Triceratops. The researchers also concluded that the damage found on the specimens in the study was often too localized to be caused by bone disease. Histological examination reveals that the frill of Triceratops is composed of fibrolamellar bone. 
This contains fibroblasts that play a critical role in wound healing and is capable of rapidly depositing bone during remodeling. One skull was found with a hole in the jugal bone, apparently a puncture wound sustained while the animal was alive, as indicated by signs of healing. The hole has a diameter close to that of the distal end of a Triceratops horn. This and other apparent healed wounds in the skulls of ceratopsians have been cited as evidence of non-fatal intra-specific competition in these dinosaurs. Another specimen, referred to as "Big John", has a similar fenestra in the squamosal caused by what appears to be another Triceratops horn, and the squamosal bone shows signs of significant healing, further supporting the hypothesis that this ceratopsian used its horns for intra-specific combat. The large frill also may have helped to increase body area to regulate body temperature. A similar theory has been proposed regarding the plates of Stegosaurus, although this use alone would not account for the bizarre and extravagant variation seen in different members of Ceratopsidae, which would rather support the sexual display theory. The theory that frills functioned as a sexual display was first proposed by Davitashvili in 1961 and has gained increasing acceptance since. Evidence that visual display was important, either in courtship or other social behavior, can be seen in the ceratopsians differing markedly in their adornments, making each species highly distinctive. Also, modern living creatures with such displays of horns and adornments use them similarly. A 2006 study of the smallest Triceratops skull, ascertained to be that of a juvenile, shows the frill and horns developed at a very early age, predating sexual development. That would suggest they were probably important for visual communication and species recognition in general. However, the use of the exaggerated structures to enable dinosaurs to recognize their own species has been questioned, as no such function exists for such structures in modern species. Growth and ontogeny In 2006, the first extensive ontogenetic study of Triceratops was published in the journal Proceedings of the Royal Society. The study, by John R. Horner and Mark Goodwin, found that individuals of Triceratops could be divided into four general ontogenetic groups: babies, juveniles, subadults, and adults. With a total number of 28 skulls studied, the youngest was only long. Ten of the 28 skulls could be placed in order in a growth series with one representing each age. Each of the four growth stages was found to have identifying features. Multiple ontogenetic trends were discovered, including the size reduction of the epoccipitals, development and reorientation of postorbital horns, and hollowing out of the horns. Torosaurus as growth stage of Triceratops Torosaurus is a ceratopsid genus first identified from a pair of skulls in 1891, two years after the identification of Triceratops by Othniel Charles Marsh. The genus Torosaurus resembles Triceratops in geological age, distribution, anatomy, and size, so it has been recognised as a close relative. Its distinguishing features are an elongated skull and the presence of two oval fenestrae in the frill. Paleontologists investigating dinosaur ontogeny in Montana's Hell Creek Formation have recently presented evidence that the two represent a single genus.
John Scannella, in a paper presented in Bristol at the conference of the Society of Vertebrate Paleontology (September 25, 2009), reclassified Torosaurus as especially mature Triceratops individuals, perhaps representing a single sex. Horner, Scannella's mentor at Montana State University in Bozeman, noted that ceratopsian skulls consist of metaplastic bone. A characteristic of metaplastic bone is that it lengthens and shortens over time, extending and resorbing to form new shapes. Significant variety is seen even in those skulls already identified as Triceratops, Horner said, "where the horn orientation is backwards in juveniles and forward in adults". Approximately 50% of all subadult Triceratops skulls have two thin areas in the frill that correspond with the placement of "holes" in Torosaurus skulls, suggesting that holes developed to offset the weight that would otherwise have been added as maturing Triceratops individuals grew longer frills. A paper describing these findings in detail was published in July 2010 by Scannella and Horner. It formally argues that Torosaurus and the similar contemporary Nedoceratops are synonymous with Triceratops. The assertion has since ignited much debate. Andrew Farke had, in 2006, stressed that no systematic differences could be found between Torosaurus and Triceratops, apart from the frill. He nevertheless disputed Scannella's conclusion by arguing in 2011 that the proposed morphological changes required to "age" a Triceratops into a Torosaurus would be without precedent among ceratopsids. Such changes would include the growth of additional epoccipitals, reversion of bone texture from an adult to an immature type and back to adult again, and growth of frill holes at a later stage than usual. A study by Nicholas Longrich and Daniel Field analyzed 35 specimens of both Triceratops and Torosaurus. The authors concluded that Triceratops individuals too old to be considered immature forms are represented in the fossil record, as are Torosaurus individuals too young to be considered fully mature adults. The synonymy of Triceratops and Torosaurus cannot be supported, they said, without more convincing intermediate forms than Scannella and Horner initially produced. Scannella's Triceratops specimen with a hole on its frill, they argued, could represent a diseased or malformed individual rather than a transitional stage between an immature Triceratops and mature Torosaurus form. Other genera as growth stages of Triceratops Opinion has varied on the validity of a separate genus for Nedoceratops. Scannella and Horner regarded it as an intermediate growth stage between Triceratops and Torosaurus. Farke, in his 2011 redescription of the only known skull, concluded that it was an aged individual of its own valid taxon, Nedoceratops hatcheri. Longrich and Field also did not consider it a transition between Torosaurus and Triceratops, suggesting that the frill holes were pathological. As described above, Scannella had argued in 2010 that Nedoceratops should be considered a synonym of Triceratops. Farke (2011) maintained that it represents a valid distinct genus. Longrich agreed with Scannella about Nedoceratops and made a further suggestion that the recently described Ojoceratops was likewise a synonym. The fossils, he argued, are indistinguishable from the Triceratops horridus specimens that were previously attributed to the defunct species Triceratops serratus.
Longrich observed that another newly described genus, Tatankaceratops, displayed a strange mix of characteristics already found in adult and juvenile Triceratops. Rather than representing a distinct genus, Tatankaceratops could as easily represent a dwarf Triceratops or a Triceratops individual with a developmental disorder that caused it to stop growing prematurely. Paleoecology Triceratops lived during the Late Cretaceous of western North America, its fossils coming from the Evanston Formation, Scollard Formation, Laramie Formation, Lance Formation, Denver Formation, and Hell Creek Formation. These fossil formations date back to the time of the Cretaceous–Paleogene extinction event, which has been dated to 66 ± 0.07 million years ago. Many animals and plants have been found in these formations, but mostly from the Lance Formation and Hell Creek Formation. Triceratops was one of the last ceratopsian genera to appear before the end of the Mesozoic. The related Torosaurus and the more distantly related, diminutive Leptoceratops were also present, though their remains have been rarely encountered. Theropods from these formations include genera of dromaeosaurids, tyrannosaurids, ornithomimids, troodontids, avialans, and caenagnathids. Dromaeosaurids from the Hell Creek Formation are Acheroraptor and Dakotaraptor. Indeterminate dromaeosaurs are known from other fossil formations. Common teeth previously referred to Dromaeosaurus and Saurornitholestes were considered to be those of Acheroraptor. The tyrannosaurids from the formation are Nanotyrannus and Tyrannosaurus, although the former is most likely a junior synonym of the latter. Among ornithomimids are the genera Struthiomimus and Ornithomimus. An undescribed animal named "Orcomimus" could be from the formation. Troodontids are only represented by Pectinodon and Paronychodon in the Hell Creek Formation, with a possible species of Troodon from the Lance Formation. A coelurosaur of uncertain affinities, Richardoestesia, is known from teeth in the Hell Creek and similar formations. Only three oviraptorosaurs are from the Hell Creek Formation: Anzu, Leptorhynchos, and a giant species of caenagnathid, very similar to Gigantoraptor, from South Dakota. However, only fossilized footprints were discovered. The avialans known from the formation are Avisaurus, multiple species of Brodavis, and several other species of hesperornithiforms, as well as several species of true birds, including Cimolopteryx. Ornithischians are abundant in the Scollard, Laramie, Lance, Denver, and Hell Creek Formations. The main groups of ornithischians are ankylosaurians, ornithopods, ceratopsians, and pachycephalosaurians. Three ankylosaurians are known: Ankylosaurus, Denversaurus, and possibly a species of Edmontonia or an undescribed genus. Multiple genera of ceratopsians are known from the formation other than Triceratops. These include the leptoceratopsid Leptoceratops and the chasmosaurine ceratopsids Torosaurus, Nedoceratops, and Tatankaceratops. Ornithopods are common in the Hell Creek Formation and are known from several species of the thescelosaurine Thescelosaurus and the hadrosaurid Edmontosaurus. Several pachycephalosaurians have been found in the Hell Creek Formation and in similar formations. Among them are the derived pachycephalosaurids Stygimoloch, Dracorex, Pachycephalosaurus, Sphaerotholus, and an undescribed specimen from North Dakota. The first two might be junior synonyms of Pachycephalosaurus.
Mammals are plentiful in the Hell Creek Formation. Groups represented include multituberculates, metatherians, and eutherians. The multituberculates represented include Paracimexomys, the cimolomyids Paressonodon, Meniscoessus, Essonodon, Cimolomys, Cimolodon, and Cimexomys, and the neoplagiaulacids Mesodma and Neoplagiaulax. The metatherians are represented by the alphadontids Alphadon, Protalphodon, and Turgidodon, the pediomyids Pediomys, Protolambda, and Leptalestes, the stagodontid Didelphodon, the deltatheridiid Nanocuris, the herpetotheriid Nortedelphys, and the glasbiid Glasbius. A few eutherians are known, being represented by Alostera, Protungulatum, the cimolestids Cimolestes and Batodon, the gypsonictopsid Gypsonictops, and the possible nyctitheriid Paranyctoides. Cultural significance Triceratops is the official state fossil of South Dakota. It is also the official state dinosaur of Wyoming. In 1942, Charles R. Knight painted a mural incorporating a confrontation between a Tyrannosaurus and a Triceratops in the Field Museum of Natural History for the National Geographic Society, establishing them as enemies in the popular imagination. Paleontologist Robert Bakker said of the imagined rivalry between Tyrannosaurus and Triceratops, "No matchup between predator and prey has ever been more dramatic. It's somehow fitting that those two massive antagonists lived out their co-evolutionary belligerence through the last days of the last epoch of the Age of Dinosaurs."
Biology and health sciences
Dinosaurs and prehistoric reptiles
null
54412
https://en.wikipedia.org/wiki/Unicycle
Unicycle
A unicycle is a vehicle that touches the ground with only one wheel. The most common variation has a frame with a saddle and a pedal-driven, direct-drive wheel. A two-speed hub is commercially available for faster unicycling. Unicycling is practiced professionally in circuses, by street performers, in festivals, and as a hobby. Unicycles have also been used to create new sports such as unicycle hockey. In recent years, unicycles have also been used in mountain unicycling, an activity similar to mountain biking or trials. History US patents for single-wheeled 'velocipedes' were published in 1869 by Frederick Myers and in 1881 by Battista Scuri. Since the Penny Farthing and the later advent of the first unicycles, unicycle design has developed into many variations, including the seatless unicycle ("ultimate wheel"), the tall ("giraffe") unicycle, and "2-wheelers" or "3-wheelers" (multiple wheels stacked directly on top of each other). During the late 1980s some extreme sportsmen took an interest in the unicycle and modified unicycles to enable them to engage in off-road or mountain unicycling, trials unicycling and street unicycling. Unicycles compared to other pedal powered vehicles Bicycles, tricycles and quadracycles share several basic parts with unicycles (with minor variations), including wheels, pedals, cranks, forks, and the saddle. Without a rider, unicycles lack stability; however, a proficient unicyclist is usually more stable than a similarly proficient rider on a bicycle, as the wheel is not constrained by the linear axis of a frame. Unicycles usually, but not always, lack brakes, gears, and the ability to freewheel. Given these differences, the injuries that can occur from unicycle use tend to be different from those of bicycle use. In particular, head injuries are significantly less likely in unicycle use than in bicycle use. Construction Unicycles have a few key parts: The wheel (which includes the tire, tube, rim, spokes, hub and axle) The cranks (which attach the pedals to the wheel hub) The hub (connects the spokes to a central point and also transfers pedaling power to the wheel) Pedals Fork-style frame Seatpost Saddle (the seat of the unicycle) The wheel is usually similar to a bicycle wheel with a special hub designed so the axle is a fixed part of the hub. This means the rotation of the cranks directly controls the rotation of the wheel (called direct-drive). The frame sits on top of the axle bearings, while the cranks attach to the ends of the axle, and the seatpost slides into the frame to allow the saddle to be height adjusted. Types of unicycles Types of unicycle include: Freestyle unicycles Trials unicycles Mountain unicycles (also called Munis) Giraffe unicycles Commuter unicycles Street unicycles Cruiser unicycles Road unicycles Each type has many combinations of frame strength, wheel diameter, and crank length. Freestyle unicycles Generally used for flatland skills and freestyle routines, freestyle unicycles typically have a relatively high seatpost, a narrow saddle, and a squared fork (used for one-footed tricks). These unicycles are used similarly to flatland bicycles. Wheel size is usually , but smaller riders may use unicycles. Some people prefer wheels. Many freestyle unicyclists will use white tires to avoid tire marks when riding indoors.
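Because the drive is direct, as noted under Construction above, one crank revolution turns the wheel exactly once, so road speed is set entirely by wheel circumference and pedalling cadence; this is also why the touring unicycles described below use large wheels. The short Python sketch that follows is only an illustration of that relation, using assumed example wheel sizes and an assumed cadence rather than figures from this article.

import math

def speed_kmh(wheel_diameter_m: float, cadence_rpm: float) -> float:
    """Road speed in km/h of a direct-drive wheel at a given pedalling cadence."""
    circumference_m = math.pi * wheel_diameter_m       # distance covered per wheel revolution
    return circumference_m * cadence_rpm * 60 / 1000   # metres per minute -> km/h

if __name__ == "__main__":
    cadence = 100  # revolutions per minute; an assumed, roughly typical touring cadence
    for inches in (20, 24, 29, 36):                    # assumed example wheel sizes
        diameter_m = inches * 0.0254
        print(f'{inches}-inch wheel at {cadence} rpm: {speed_kmh(diameter_m, cadence):.1f} km/h')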
Trials unicycles Designed for unicycle trials, these unicycles are stronger than standard unicycles in order to withstand the stresses caused by jumping, dropping, and supporting the weight of the unicycle and rider on components such as the pedals and cranks. Many trials unicycles also have wide, knobby tires to absorb some of the impact on drops. Mountain unicycles ("Munis") Mountain unicycling (abbreviated to muni or mUni) consists of riding specialized unicycles on mountain bike trails or otherwise off-roading. Mountain unicycles have thicker, wider tires for better traction. Riders may occasionally lower air pressure for increased shock absorption. Many riders choose to use long cranks to increase power when riding up hills and over rough terrain. A disc brake is sometimes used for descents; the brake handle is attached to the underside of the handle on the front of the saddle. Touring/commuter unicycles Used for long distances, these unicycles are specially made to cover distances. They have a large wheel diameter, between , so that more distance is covered per pedal rotation. A 36″ unicycle made by the Coker Tire company started the big wheel trend. Some variations on the traditional touring unicycle include the Schlumpf "GUni" (geared unicycle), which uses a two-speed internal fixed-geared hub. Larger direct-drive wheels tend to have shorter cranks to allow for easier cadence and more speed. Geared wheels, with an effective diameter larger than the wheel itself, tend to use longer cranks to increase torque as they are not required to achieve such high cadences as direct-drive wheels, but demand greater force per pedal stroke. Other variations Giraffe, a chain-driven unicycle. Use of a chain or multiple wheels in a gear-like configuration can make the unicycle much taller than standard unicycles (note that multi-wheel unicycles can be described as giraffes). Standard unicycles do not have a chain; because the cranks attach directly to the wheel axle, the seat height is limited by the length of the rider's legs. Giraffe unicycles can range in height from to over high. Geared unicycle, or GUni, a unicycle whose wheel rotates faster than the pedal cadence. They are used for distance riding and racing. Multi-wheeled unicycle, a unicycle with more than one wheel, stacked on top of each other so that only one wheel touches the ground (nicknamed stacks). The wheels are linked together by chains or direct contact with each other. These unicycles can also be called giraffes. Kangaroo unicycle, a unicycle that has both the cranks facing in the same direction. They are so named due to the hopping motion of the rider's legs, supposedly resembling the jumping of a kangaroo. Eccentric unicycle, a unicycle that has the hub off-center in the wheel. Putting an eccentric wheel on a kangaroo unicycle can make riding easier, and the rider's motion appear more kangaroo-like. Ultimate wheel, a unicycle with no frame or seat, just a wheel and pedals. Impossible wheel, or BC wheel, a wheel with pegs or metal plates connected to the axle for the rider to stand on. These wheels are for coasting and jumping. A purist form of unicycle, without cranks. Monocycle, or monowheel, a large wheel inside which the rider sits (as in a hamster wheel), either motorized or pedal-powered. The greater gyroscopic properties and lower center of mass make it easier to balance than a normal unicycle but less maneuverable.
Self-balancing unicycle or electric unicycle, a computer-controlled, motor-driven, self-balancing unicycle. Freewheeling unicycle, a unicycle in which the hub has a freewheel mechanism, allowing the rider to coast or move forward without pedaling, as a common bicycle does. These unicycles almost always have brakes because they cannot stop the way traditional unicycles do. The brake lever is generally mounted in the bottom of the saddle. These unicycles also cannot go backwards. Tandem Recumbent Hydraulic giraffe that can change in height while being ridden Training aids Training aids are sometimes used to make it easier to become comfortable with riding a unicycle. One method for training is using a spotter to make riding easier. Another method is finding a narrow hallway that can be used to help alleviate left and right balancing while allowing a beginner to focus on forward and backward balance. Equally, riding back and forth between two chairs, faced back to back, while holding on to the chair backs allows the user to gauge how to position themselves appropriately before setting off. Using props such as sticks or ski poles is generally discouraged as they hinder balance and create dependence. A fall onto props could also cause serious injury. Riding styles Traditionally, unicycling has been seen as a circus skill which has been performed at events to entertain the public in the circus or during parades, carnivals or street festivals. Recent developments in the strength and durability of bicycle (and consequently unicycle) parts have given rise to many new activities, including trials unicycling and mountain unicycling. Unicycling is arguably now as much a competitive sport and recreational pursuit as an entertainment activity. The principal types of unicycling are: Freestyle Perhaps the oldest form of unicycling, traditional freestyle riding is based on performance. Freestyle tricks and moves are derived from different ways of riding the unicycle, linked together into one long flowing line that is aesthetically pleasing. Competitions look very similar to figure skating, with riders performing routines to music. Comedy Along with freestyle, it is a performance style of unicycling. Often employed by clowns and other circus skills performers. Comedy unicycling exaggerates the perceived difficulty of riding a unicycle to create a comedic performance. Trials unicycling Trials unicycling is specifically aimed at negotiating obstacles. Analogous to trials bike riding. Street unicycling Street unicycling as a style involves riders using a combination of objects found in urbanized settings (such as ledges, handrails, and stairs) to perform a wide variety of tricks. Many tricks are similar to those performed in other extreme sports, such as BMX and skateboarding. Off-road or mountain unicycling (abbreviated to 'MUni') Muni is riding on rough terrain and has developed as a form of unicycling in recent years. Touring or commuting This style concentrates on distance riding. With a wheel, cruising speeds of or more can easily be reached. Flatland unicycling This style of unicycling is similar to freestyle in that various tricks and movements are performed on flat ground. Flatland, however, does not have the performance element of freestyle, but instead has tricks that are similar to those in BMX and skateboarding. Unicycle team sports Unicycling is also performed as a team sport.
Unicycle basketball Unicycle basketball uses a regulation basketball on a regular basketball court with the same rules, e.g., one must dribble the ball while riding. There are a number of rules that are particular to unicycle basketball as well, e.g., a player must be mounted on the unicycle when in-bounding the ball. Unicycle basketball is usually played using or smaller unicycles, and using plastic pedals, both to preserve the court and the players' shins. In North America, regular unicycle basketball games are organized in Berkeley, San Luis Obispo, Detroit, Phoenix, Minneapolis, and Toronto. Switzerland, France, Germany, and Puerto Rico all field teams. The Puerto Rico All Star Unicycling Basketball Team has been one of the dominant teams and has won several world championships. Unicycle hockey Unicycle hockey follows rules broadly similar to those of rink hockey, using a tennis ball and ice-hockey sticks. Play is mostly non-contact. The sport has active leagues in Germany, Switzerland, Australia and the UK and international tournaments held at least bi-annually. Tournaments in the UK are held by various teams across the country usually in sports halls, but occasionally outside. Each tournament lasts a day and around 8 teams normally compete in a round-robin league with the winner being whoever has the most points. If two teams have the same number of points the winner can be decided by goal difference or a penalty shoot-out. Notable unicyclists Known as unicyclists Individuals Kris Holm and George Peck, pioneers in mountain unicycling Rudy Horn, a German juggler Jiang Yan Jing, Chinese acrobat Ted Jorgensen, circus unicyclist, president of the Albuquerque Unicycle Club Michael Goudeau, an American juggler Skeeter Reece, an American clown Amy Shields, an American freestyle unicyclist Dustin Kelm, worldwide variety unicycle performer "Wobbling" Wally Watts, round the world unicyclist, April 1976 to October 1978 Ed Pratt, round the world unicyclist, March 2015 to July 2018 Mike Taylor, World Champion in Unicycle High Jump in 2014, 2016 & 2018 Groups Albuquerque Unicycle Club, the world's first unicycle hockey club The King Charles Troupe, the first African American circus troupe, and one of the longest-running acts in Ringling Bros. history Known in other fields Adam Carolla, American comedian and actor Rupert Grint, actor who played Ronald Weasley in the Harry Potter films Mark Ruffalo, actor Mika Häkkinen, Formula One racing driver Lewis Hamilton, Formula One racing driver Eddie Izzard, comedian and actor Leslie Mann, American actress who performed on The Ellen DeGeneres Show Chris Martin, lead singer of Coldplay Demetri Martin, American comedian and actor Ulrich Mühe, late German actor, best known for his role in The Lives of Others Michael Nesmith, former guitarist of The Monkees Miles Plumlee, American professional basketball player Nico Rosberg, Formula One racing driver Donald Rumsfeld, former United States Secretary of Defense Claude Shannon, founder of information theory Take That members Mark Owen, Jason Orange, and Howard Donald unicycled for the circus-based video for their song "Said It All" Andrew Tosh, son of Peter and also a Jamaican reggae musician Peter Tosh, Jamaican reggae musician from The Wailers Steve Young, former National Football League quarterback Ilya Zhitomirskiy, Russian-American software developer and entrepreneur UNICON and regional championships UNICON, Eurocycle and APUC are regular international unicycling conventions.
The biennial UNICON (International Unicycling Convention), sanctioned by the International Unicycling Federation, comprises all major unicycling disciplines and is a major event on the international unicycling calendar. Events include: artistic (group, pairs, individual, standard skill, open-X), track racing (100 metres, 400 metres, 800 metres, 30 metres wheel walk, 50 metres one-foot), 10 kilometres, marathon (42.195 km), muni (cross-country, uphill, downhill, North Shore downhill), trials, basketball and hockey. The Eurocycle (EUROpean uniCYCLE meeting) is a similar convention but based in Europe. APUC, the Asia Pacific Unicycle Championships, are held every two years, alternating with Unicon. The first APUC, in 2007, was in Singapore. Subsequently, the event has been held in Hong Kong (2009), Seoul (2011), Canberra (2013), and Singapore (2015). EUC, the Extreme Unicycle Championship, is the convention for urban unicycling (Street, Trials and Flatland). The event is held in two editions: summer and winter. Winter EUC is usually held in Cologne, Germany, while locations of the summer edition vary. Races The world's first multi-stage unicycle race, Ride the Lobster, took place in Nova Scotia in June 2008. Some 35 teams from 14 countries competed over a total distance of 800 km. Each team consisted of a maximum of 3 riders and 1 support person. Unicross, or unicycle cyclocross, is an emerging race format in which unicycles race over a cyclocross course. Manufacturers Unicycle makers include: Coker Impact Unicycles Kris Holm Unicycles Mad4One Miyata Nimbus Unicycles Torker (formerly) Unicycle.com Schwinn Qu-Ax
Technology
Human-powered transport
null
54416
https://en.wikipedia.org/wiki/Trolleybus
Trolleybus
A trolleybus (also known as trolley bus, trolley coach, trackless trolley, trackless tram (in the 1910s and 1920s), or trolley) is an electric bus that draws power from dual overhead wires (generally suspended from roadside posts) using spring-loaded trolley poles. Two wires, and two trolley poles, are required to complete the electrical circuit. This differs from a tram or streetcar, which normally uses the track as the return path, needing only one wire and one pole (or pantograph). They are also distinct from other kinds of electric buses, which usually rely on batteries. Power is most commonly supplied as 600-volt direct current, but there are exceptions. Currently, around 300 trolleybus systems are in operation, in cities and towns in 43 countries. Altogether, more than 800 trolleybus systems have existed, but not more than about 400 concurrently. History The trolleybus dates back to 29 April 1882, when Dr. Ernst Werner Siemens demonstrated his "Elektromote" in a Berlin suburb. This experiment continued until 13 June 1882, after which there were few developments in Europe, although separate experiments were conducted in the United States. In 1899, another vehicle which could run either on or off rails was demonstrated in Berlin. The next development came when Louis Lombard-Gérin operated an experimental line at the Paris Exhibition of 1900 after four years of trials, with a circular route around Lake Daumesnil that carried passengers. Routes followed in six places including Eberswalde and Fontainebleau. Max Schiemann on 10 July 1901 opened the world's fourth passenger-carrying trolleybus system, which operated at Bielatal (Biela Valley, near Dresden), Germany. Schiemann built and operated the Bielatal system, and is credited with developing the under-running trolley current collection system, with two horizontally parallel overhead wires and rigid trolley poles spring-loaded to hold them up to the wires. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current collection system. In the early days there were many other methods of current collection. The Cédès-Stoll (Mercédès-Électrique-Stoll) system was first operated near Dresden between 1902 and 1904, and 18 systems followed. The Lloyd-Köhler or Bremen system was tried out in Bremen with 5 further installations, and the Cantono Frigerio system was used in Italy. Throughout this period, trackless freight systems and electric canal boats were also built. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain, on 20 June 1911. Although it was opened on 20 June, the public was reportedly not admitted to the Bradford route until the 24th. Bradford was also the last city to operate trolleybuses in the UK; the system closed on 26 March 1972. The last rear-entrance trolleybus in service in Britain was also in Bradford and is now owned by the Bradford Trolleybus Association. Birmingham was the first UK city to replace a tram route with trolleybuses, while Wolverhampton, under the direction of Charles Owen Silvers, became world-famous for its trolleybus designs. There were 50 trolleybus systems in the UK, London's being the largest. By the time trolleybuses arrived in Britain in 1911, the Schiemann system was well established and was the most common, although the Cédès-Stoll (Mercédès-Électrique-Stoll) system was tried in West Ham (in 1912) and in Keighley (in 1913). Smaller trackless trolley systems were also built in the US in this early period.
The first non-experimental system was a seasonal municipal line installed near Nantasket Beach in 1904; the first year-round commercial line was built to open a hilly property to development just outside Los Angeles in 1910. The trackless trolley was often seen as an interim step, leading to streetcars. In the US, some systems subscribed to the all-four concept of using buses, trolleybuses, streetcars (trams, trolleys), and rapid transit subway and/or elevated lines (metros), as appropriate, for routes ranging from the lightly used to the heaviest trunk line. Buses and trolleybuses in particular were seen as entry systems that could later be upgraded to rail as appropriate. In a similar fashion, many cities in Britain originally viewed trolleybus routes as extensions to tram (streetcar) routes where the cost of constructing or restoring track could not be justified at the time, though this attitude changed markedly (to viewing them as outright replacements for tram routes) in the years after 1918. Trackless trolleys were the dominant form of new post-World War I electric traction, with extensive systems in, among others, Los Angeles, Chicago, Boston, Rhode Island, and Atlanta; San Francisco and Philadelphia still maintain an "all-four" fleet. Some trolleybus lines in the United States (and in Britain, as noted above) came into existence when a trolley or tram route did not have sufficient ridership to warrant track maintenance or reconstruction. In a similar manner, a proposed tram scheme in Leeds, United Kingdom, was changed to a trolleybus scheme to cut costs. Trolleybuses are uncommon today in North America, but their use is widespread in Europe and Russia. They remain common in many countries which were part of the Soviet Union. Generally trolleybuses occupy a position in usage between street railways (trams) and motorbuses. Worldwide, around 300 cities or metropolitan areas on 5 continents are served by trolleybuses (further detail under Use and preservation, below). This mode of transport operates in large cities, such as Belgrade, Lyon, Pyongyang, São Paulo, Seattle, Sofia, St. Petersburg, and Zurich, as well as in smaller ones such as Dayton, Gdynia, Lausanne, Limoges, Modena, and Salzburg. As of 2020, Kyiv has, due to its history in the former Soviet Union, the largest trolleybus system in the world in terms of route length while another formerly Soviet city, Minsk, has the largest system in terms of number of routes (which also date back to the Soviet era). Landskrona has the smallest system in terms of route length, while Mariánské Lázně is the smallest city to be served by trolleybuses. Opened in 1914, Shanghai's trolleybus system is the oldest operating system in the world. With a length of 86 km, route #52 of Crimean Trolleybus is the longest trolleybus line in the world.
Technology
Motorized road transport
null
54423
https://en.wikipedia.org/wiki/Phase%20transition
Phase transition
In physics, chemistry, and other related fields like biology, a phase transition (or phase change) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter: solid, liquid, and gas, and in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium, certain properties of the medium change as a result of the change of external conditions, such as temperature or pressure. This can be a discontinuous change; for example, a liquid may become gas upon heating to its boiling point, resulting in an abrupt change in volume. The identification of the external conditions at which a transformation occurs defines the phase transition point. Types of phase transition States of matter Phase transitions commonly refer to the transformation of a substance from one of the four states of matter to another. At the phase transition point for a substance, for instance the boiling point, the two phases involved, liquid and vapor, have identical free energies and therefore are equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the boiling point the gaseous form is the more stable. Common transitions between the solid, liquid, and gaseous phases of a single component, due to the effects of temperature and/or pressure, include melting and freezing (between solid and liquid), vaporization and condensation (between liquid and gas), and sublimation and deposition (between solid and gas). For a single component, the most stable phase at different temperatures and pressures can be shown on a phase diagram. Such a diagram usually depicts states in equilibrium. A phase transition usually occurs when the pressure or temperature changes and the system crosses from one region to another, like water turning from liquid to solid as soon as the temperature drops below the freezing point. As an exception to the usual case, it is sometimes possible to change the state of a system diabatically (as opposed to adiabatically) in such a way that it can be brought past a phase transition point without undergoing a phase transition. The resulting state is metastable, i.e., less stable than the phase to which the transition would have occurred, but not unstable either. This occurs in superheating and supercooling, for example. Metastable states do not appear on usual phase diagrams. Structural Phase transitions can also occur when a solid changes to a different structure without changing its chemical makeup. In elements, this is known as allotropy, whereas in compounds it is known as polymorphism. The change from one crystal structure to another, from a crystalline solid to an amorphous solid, or from one amorphous structure to another (polyamorphism) are all examples of solid-to-solid phase transitions. The martensitic transformation occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations. Order-disorder transitions, such as those in alpha-titanium aluminides, are another example. As with states of matter, there is also a metastable to equilibrium phase transformation for structural phase transitions. A metastable polymorph which forms rapidly due to lower surface energy will transform to an equilibrium phase given sufficient thermal input to overcome an energetic barrier. Magnetic Phase transitions can also describe the change between different kinds of magnetic ordering.
The most well-known is the transition between the ferromagnetic and paramagnetic phases of magnetic materials, which occurs at what is called the Curie point. Another example is the transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide. A simplified but highly useful model of magnetic phase transitions is provided by the Ising model. Mixtures Phase transitions involving solutions and mixtures are more complicated than transitions involving a single compound. While chemically pure compounds exhibit a single melting temperature between solid and liquid phases, mixtures can either have a single melting point, known as congruent melting, or they have different liquidus and solidus temperatures, resulting in a temperature span where solid and liquid coexist in equilibrium. This is often the case in solid solutions, where the two components are isostructural. There are also a number of phase transitions involving three phases: a eutectic transformation, in which a two-component single-phase liquid is cooled and transforms into two solid phases. The same process, but beginning with a solid instead of a liquid, is called a eutectoid transformation. A peritectic transformation, in which a two-component single-phase solid is heated and transforms into a solid phase and a liquid phase. A peritectoid reaction is a peritectic reaction, except involving only solid phases. A monotectic reaction consists of a change from a liquid to a combination of a solid and a second liquid, where the two liquids display a miscibility gap. Separation into multiple phases can occur via spinodal decomposition, in which a single phase is cooled and separates into two different compositions. Non-equilibrium mixtures can occur, such as in supersaturation. Other examples Other phase changes include: Transition to a mesophase between solid and liquid, such as one of the "liquid crystal" phases. The dependence of the adsorption geometry on coverage and temperature, such as for hydrogen on iron (110). The emergence of superconductivity in certain metals and ceramics when cooled below a critical temperature. The emergence of metamaterial properties in artificial photonic media as their parameters are varied (Zhou, W., and Fan, S., eds., Semiconductors and Semimetals, Vol. 100: Photonic Crystal Metasurface Optoelectronics, Elsevier, 2019, https://www.sciencedirect.com/bookseries/semiconductors-and-semimetals/vol/100/suppl/C). Quantum condensation of bosonic fluids (Bose–Einstein condensation). The superfluid transition in liquid helium is an example of this. The breaking of symmetries in the laws of physics during the early history of the universe as its temperature cooled. Isotope fractionation occurs during a phase transition: the ratio of light to heavy isotopes in the involved molecules changes. When water vapor condenses (an equilibrium fractionation), the heavier water isotopes (18O and 2H) become enriched in the liquid phase while the lighter isotopes (16O and 1H) tend toward the vapor phase. Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables (cf. phases). This condition generally stems from the interactions of a large number of particles in a system, and does not appear in systems that are small. Phase transitions can occur for non-thermodynamic systems, where temperature is not a parameter.
Examples of such non-thermodynamic transitions include quantum phase transitions, dynamic phase transitions, and topological (structural) phase transitions. In these types of systems other parameters take the place of temperature. For instance, connection probability replaces temperature for percolating networks. Classifications Ehrenfest classification Paul Ehrenfest classified phase transitions based on the behavior of the thermodynamic free energy as a function of other thermodynamic variables. Under this scheme, phase transitions were labeled by the lowest derivative of the free energy that is discontinuous at the transition. First-order phase transitions exhibit a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable. The various solid/liquid/gas transitions are classified as first-order transitions because they involve a discontinuous change in density, which is the (inverse of the) first derivative of the free energy with respect to pressure. Second-order phase transitions are continuous in the first derivative (the order parameter, which is the first derivative of the free energy with respect to the external field, is continuous across the transition) but exhibit discontinuity in a second derivative of the free energy. These include the ferromagnetic phase transition in materials such as iron, where the magnetization, which is the first derivative of the free energy with respect to the applied magnetic field strength, increases continuously from zero as the temperature is lowered below the Curie temperature. The magnetic susceptibility, the second derivative of the free energy with respect to the field, changes discontinuously. Under the Ehrenfest classification scheme, there could in principle be third, fourth, and higher-order phase transitions. For example, the Gross–Witten–Wadia phase transition in 2-d lattice quantum chromodynamics is a third-order phase transition. The Curie points of many ferromagnets are also third-order transitions, as shown by their specific heat having a sudden change in slope. In practice, only the first- and second-order phase transitions are typically observed. The second-order phase transition was for a while controversial, as it seems to require two sheets of the Gibbs free energy to osculate exactly, which is so unlikely as to never occur in practice. Cornelis Gorter replied to the criticism by pointing out that the Gibbs free energy surface might have two sheets on one side, but only one sheet on the other side, creating a forked appearance. The Ehrenfest classification implicitly allows for continuous phase transformations, where the bonding character of a material changes, but there is no discontinuity in any free energy derivative. An example of this occurs at supercritical liquid–gas boundaries. The first example of a phase transition which did not fit into the Ehrenfest classification was the exact solution of the Ising model, discovered in 1944 by Lars Onsager. The exact specific heat differed from the earlier mean-field approximations, which had predicted a simple discontinuity at the critical temperature. Instead, the exact specific heat had a logarithmic divergence at the critical temperature. In the following decades, the Ehrenfest classification was replaced by a simplified classification scheme that is able to incorporate such transitions.
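A crude way to apply the Ehrenfest idea to tabulated data is to differentiate a sampled free energy numerically and report the lowest derivative that shows an obvious jump. The helper below is only a heuristic sketch (the function name, tolerance, and jump criterion are arbitrary choices, not an established numerical method); it is shown acting on the toy two-phase free energy from the earlier example, for which it reports a first-order transition.

```python
import numpy as np

def lowest_discontinuous_derivative(G, T, max_order=2, jump_tol=10.0):
    """Heuristic Ehrenfest-style check on sampled data.

    Returns the lowest derivative of G(T) (up to max_order) whose
    finite-difference values show a single step change much larger than
    the typical step-to-step change, or None if no such jump is found.
    This is a rough illustration, not a robust numerical classifier.
    """
    deriv = np.asarray(G, dtype=float)
    for order in range(1, max_order + 1):
        deriv = np.gradient(deriv, T)
        steps = np.abs(np.diff(deriv))
        if steps.max() > jump_tol * np.median(steps + 1e-30):
            return order
    return None

# The toy two-phase free energy (lower envelope of two straight lines)
# has a kink, i.e. a jump in its first derivative, so this prints 1.
T = np.linspace(0.1, 8.0, 2000)
G_eq = np.minimum(0.0 - T * 1.0, 6.0 - T * 2.5)
print(lowest_discontinuous_derivative(G_eq, T))
```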
Modern classifications In the modern classification scheme, phase transitions are divided into two broad categories, named similarly to the Ehrenfest classes: First-order phase transitions are those that involve a latent heat. During such a transition, a system either absorbs or releases a fixed (and typically large) amount of energy per volume. During this process, the temperature of the system will stay constant as heat is added: the system is in a "mixed-phase regime" in which some parts of the system have completed the transition and others have not (Faghri, A., and Zhang, Y., Fundamentals of Multiphase Heat Transfer and Flow, Springer, New York, NY, 2020). Familiar examples are the melting of ice or the boiling of water (the water does not instantly turn into vapor, but forms a turbulent mixture of liquid water and vapor bubbles). Yoseph Imry and Michael Wortis showed that quenched disorder can broaden a first-order transition. That is, the transformation is completed over a finite range of temperatures, but phenomena like supercooling and superheating survive and hysteresis is observed on thermal cycling. Second-order phase transitions are also called "continuous phase transitions". They are characterized by a divergent susceptibility, an infinite correlation length, and a power law decay of correlations near criticality. Examples of second-order phase transitions are the ferromagnetic transition, superconducting transition (for a Type-I superconductor the phase transition is second-order at zero external field and for a Type-II superconductor the phase transition is second-order for both normal-state–mixed-state and mixed-state–superconducting-state transitions) and the superfluid transition. In contrast to viscosity, the thermal expansion and heat capacity of amorphous materials show a relatively sudden change at the glass transition temperature, which enables accurate detection using differential scanning calorimetry measurements. Lev Landau gave a phenomenological theory of second-order phase transitions. Apart from isolated, simple phase transitions, there exist transition lines as well as multicritical points, when varying external parameters like the magnetic field or composition. Several transitions are known as infinite-order phase transitions. They are continuous but break no symmetries. The most famous example is the Kosterlitz–Thouless transition in the two-dimensional XY model. Many quantum phase transitions, e.g., in two-dimensional electron gases, belong to this class. The liquid–glass transition is observed in many polymers and other liquids that can be supercooled far below the melting point of the crystalline phase. This is atypical in several respects. It is not a transition between thermodynamic ground states: it is widely believed that the true ground state is always crystalline. Glass is a quenched disordered state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon: on cooling a liquid, internal degrees of freedom successively fall out of equilibrium. Some theoretical methods predict an underlying phase transition in the hypothetical limit of infinitely long relaxation times. No direct experimental evidence supports the existence of these transitions. Characteristic properties Phase coexistence A disorder-broadened first-order transition occurs over a finite range of temperatures where the fraction of the low-temperature equilibrium phase grows from zero to one (100%) as the temperature is lowered.
This continuous variation of the coexisting fractions with temperature raised interesting possibilities. On cooling, some liquids vitrify into a glass rather than transform to the equilibrium crystal phase. This happens if the cooling rate is faster than a critical cooling rate, and is attributed to the molecular motions becoming so slow that the molecules cannot rearrange into the crystal positions. This slowing down happens below a glass-formation temperature Tg, which may depend on the applied pressure. If the first-order freezing transition occurs over a range of temperatures, and Tg falls within this range, then there is an interesting possibility that the transition is arrested when it is partial and incomplete. Extending these ideas to first-order magnetic transitions being arrested at low temperatures, resulted in the observation of incomplete magnetic transitions, with two magnetic phases coexisting, down to the lowest temperature. First reported in the case of a ferromagnetic to anti-ferromagnetic transition, such persistent phase coexistence has now been reported across a variety of first-order magnetic transitions. These include colossal-magnetoresistance manganite materials, magnetocaloric materials, magnetic shape memory materials, and other materials. The interesting feature of these observations of Tg falling within the temperature range over which the transition occurs is that the first-order magnetic transition is influenced by magnetic field, just like the structural transition is influenced by pressure. The relative ease with which magnetic fields can be controlled, in contrast to pressure, raises the possibility that one can study the interplay between Tg and Tc in an exhaustive way. Phase coexistence across first-order magnetic transitions will then enable the resolution of outstanding issues in understanding glasses. Critical points In any system containing liquid and gaseous phases, there exists a special combination of pressure and temperature, known as the critical point, at which the transition between liquid and gas becomes a second-order transition. Near the critical point, the fluid is sufficiently hot and compressed that the distinction between the liquid and gaseous phases is almost non-existent. This is associated with the phenomenon of critical opalescence, a milky appearance of the liquid due to density fluctuations at all possible wavelengths (including those of visible light). Symmetry Phase transitions often involve a symmetry breaking process. For instance, the cooling of a fluid into a crystalline solid breaks continuous translation symmetry: each point in the fluid has the same properties, but each point in a crystal does not have the same properties (unless the points are chosen from the lattice points of the crystal lattice). Typically, the high-temperature phase contains more symmetries than the low-temperature phase due to spontaneous symmetry breaking, with the exception of certain accidental symmetries (e.g. the formation of heavy virtual particles, which only occurs at low temperatures). Order parameters An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. At the critical point, the order parameter susceptibility will usually diverge. An example of an order parameter is the net magnetization in a ferromagnetic system undergoing a phase transition. 
For liquid/gas transitions, the order parameter is the difference of the densities. From a theoretical perspective, order parameters arise from symmetry breaking. When this happens, one needs to introduce one or more extra variables to describe the state of the system. For example, in the ferromagnetic phase, one must provide the net magnetization, whose direction was spontaneously chosen when the system cooled below the Curie point. However, note that order parameters can also be defined for non-symmetry-breaking transitions. Some phase transitions, such as superconducting and ferromagnetic, can have order parameters for more than one degree of freedom. In such phases, the order parameter may take the form of a complex number, a vector, or even a tensor, the magnitude of which goes to zero at the phase transition. There also exist dual descriptions of phase transitions in terms of disorder parameters. These indicate the presence of line-like excitations such as vortex- or defect lines. Relevance in cosmology Symmetry-breaking phase transitions play an important role in cosmology. As the universe expanded and cooled, the vacuum underwent a series of symmetry-breaking phase transitions. For example, the electroweak transition broke the SU(2)×U(1) symmetry of the electroweak field into the U(1) symmetry of the present-day electromagnetic field. This transition is important to explain the asymmetry between the amount of matter and antimatter in the present-day universe, according to electroweak baryogenesis theory. Progressive phase transitions in an expanding universe are implicated in the development of order in the universe, as is illustrated by the work of Eric Chaisson and David Layzer.
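Landau's phenomenological theory, mentioned above, ties the order parameter directly to a free-energy expansion. A minimal sketch, assuming the standard quartic form F(m) = a(T − Tc)·m² + b·m⁴ with illustrative coefficients a = b = Tc = 1: minimizing F over m gives m = 0 above Tc and a continuously growing m = sqrt(a(Tc − T)/(2b)) below it, which is the hallmark of a continuous (second-order) transition.

```python
import numpy as np

def landau_order_parameter(T, Tc=1.0, a=1.0, b=1.0):
    """Equilibrium order parameter of F(m) = a*(T - Tc)*m**2 + b*m**4,
    found by a simple grid minimization (illustrative coefficients only)."""
    m = np.linspace(0.0, 2.0, 20001)
    F = a * (T - Tc) * m**2 + b * m**4
    return m[np.argmin(F)]

for T in (0.5, 0.9, 0.99, 1.0, 1.5):
    # Analytic value below Tc: sqrt(a*(Tc - T)/(2*b)); zero at and above Tc.
    print(T, round(landau_order_parameter(T), 4))
```

The order parameter rises continuously from zero as the temperature drops below Tc, while its susceptibility (not computed here) diverges at the critical point.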
Physical sciences
Phase transitions
null
54444
https://en.wikipedia.org/wiki/Falcon
Falcon
Falcons are birds of prey in the genus Falco, which includes about 40 species. Some small species of falcons with long, narrow wings are called hobbies, and some that hover while hunting are called kestrels. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene. Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broadwing. This makes flying easier while they are still learning the aerial skills required to be effective hunters like the adults. The falcons are the largest genus in the Falconinae subfamily of Falconidae, which also includes two other subfamilies comprising caracaras and a few other species of "falcons". All these birds kill prey with their beaks, using a tomial "tooth" on the side of their beaks, unlike the hawks, eagles, and other larger birds of prey from the unrelated family Accipitridae, which use the talons on their feet. The largest falcon is the gyrfalcon; the smallest falcon species is the pygmy falcon. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species. As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of human eyes. They are incredibly fast fliers: peregrine falcons have been recorded diving at extreme speeds in their hunting stoop, making them the fastest-moving creatures on Earth. Taxonomy The genus Falco was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The type species is the merlin (Falco columbarius). The genus name is the Late Latin word for "falcon", derived from a Latin root meaning "a sickle", referring to the claws of the bird. In Middle English and Old French, the title refers generically to several captive raptor species. The traditional term for a male falcon is tercel (British spelling) or tiercel (American spelling), from the Latin word for "third", because of the belief that only one in three eggs hatched a male bird. Some sources give the etymology as deriving from the fact that a male falcon is about one-third smaller than a female. A falcon chick, especially one reared for falconry, still in its downy stage, is known as an eyas (sometimes spelled eyass). The word arose by mistaken division of an Old French term, from a presumed Latin word meaning "nestling", itself derived from the Latin for "nest". The technique of hunting with trained captive birds of prey is known as falconry. Compared to other birds of prey, the fossil record of the falcons is not well distributed in time. For years, the oldest fossils tentatively assigned to this genus were from the Late Miocene, less than 10 million years ago. This coincides with a period in which many modern genera of birds became recognizable in the fossil record. As of 2021, the oldest falconid fossil is estimated to be 55 million years old. Given the distribution of fossil and living Falco taxa, falcons are probably of North American, African, or possibly Middle Eastern or European origin. Falcons are not closely related to other birds of prey, and their nearest relatives are parrots and songbirds.
Overview Falcons are roughly divisible into three or four groups. The first contains the kestrels (probably excepting the American kestrel); usually small and stocky falcons of mainly brown upperside colour and sometimes sexually dimorphic; three African species that are generally gray in colour stand apart from the typical members of this group. The fox and greater kestrels can be told apart at first glance by their tail colours, but not by much else; they might be very close relatives and are probably much closer to each other than the lesser and common kestrels. Kestrels feed chiefly on terrestrial vertebrates and invertebrates of appropriate size, such as rodents, reptiles, or insects. The second group contains slightly larger (on average) species, the hobbies and relatives. These birds are characterized by considerable amounts of dark slate-gray in their plumage; their malar areas are nearly always black. They feed mainly on smaller birds. Third are the peregrine falcon and its relatives, variably sized powerful birds that also have a black malar area (except some very light color morphs), and often a black cap, as well. They are very fast birds with a maximum speed of 390 kilometres per hour. Otherwise, they are somewhat intermediate between the other groups, being chiefly medium grey with some lighter or brownish colours on their upper sides. They are, on average, more delicately patterned than the hobbies and, if the hierofalcons are excluded (see below), this group typically contains species with horizontal barring on their undersides. As opposed to the other groups, where tail colour varies much in general but little according to evolutionary relatedness, the tails of the large falcons are quite uniformly dark grey with inconspicuous black banding and small, white tips, though this is probably plesiomorphic. These large Falco species feed on mid-sized birds and terrestrial vertebrates. Very similar to these, and sometimes included therein, are the four or so species of hierofalcon (literally, "hawk-falcons"). They represent taxa with, usually, more phaeomelanins, which impart reddish or brown colors, and generally more strongly patterned plumage reminiscent of hawks. Their undersides have a lengthwise pattern of blotches, lines, or arrowhead marks. While these three or four groups, loosely circumscribed, are an informal arrangement, they probably contain several distinct clades in their entirety. A study of mtDNA cytochrome b sequence data of some kestrels identified a clade containing the common kestrel and related "malar-striped" species, to the exclusion of such taxa as the greater kestrel (which lacks a malar stripe), the lesser kestrel (which is very similar to the common, but also has no malar stripe), and the American kestrel, which has a malar stripe, but its colour pattern – apart from the brownish back – and also the black feathers behind the ear, which never occur in the true kestrels, are more reminiscent of some hobbies. The malar-striped kestrels apparently split from their relatives in the Gelasian, roughly 2.0–2.5 million years ago (Mya), and are seemingly of tropical East African origin. The entire "true kestrel" group—excluding the American species—is probably a distinct and quite young clade, as also suggested by their numerous apomorphies. Other studies have confirmed that the hierofalcon are a monophyletic group–and that hybridization is quite frequent at least in the larger falcon species. 
Initial studies of mtDNA cytochrome b sequence data suggested that the hierofalcon are basal among living falcons. The discovery of a NUMT proved this earlier theory erroneous. In reality, the hierofalcon are a rather young group, originating at the same time as the start of the main kestrel radiation, about 2 Mya. Very little fossil history exists for this lineage. However, the present diversity of very recent origin suggests that this lineage may have nearly gone extinct in the recent past. The phylogeny and delimitations of the peregrine and hobby groups are more problematic. Molecular studies have only been conducted on a few species, and the morphologically ambiguous taxa have often been little researched. The morphology of the syrinx, which contributes well to resolving the overall phylogeny of the Falconidae, is not very informative in the present genus. Nonetheless, a core group containing the peregrine and Barbary falcons, which, in turn, group with the hierofalcon and the more distant prairie falcon (which was sometimes placed with the hierofalcon, though it is entirely distinct biogeographically), as well as at least most of the "typical" hobbies, are confirmed to be monophyletic as suspected. Given that the American Falco species of today belong to the peregrine group, or are apparently more basal species, the initially most successful evolutionary radiation seemingly was a Holarctic one that originated possibly around central Eurasia or in (northern) Africa. One or several lineages were present in North America by the Early Pliocene at the latest. The origin of today's major Falco groups (the "typical" hobbies and kestrels, for example, or the peregrine-hierofalcon complex, or the aplomado falcon lineage) can be quite confidently placed from the Miocene-Pliocene boundary through the Zanclean and Piacenzian and just into the Gelasian, that is, from 2.4 to 5.3 Mya, when the malar-striped kestrels diversified. Some groups of falcons, such as the hierofalcon complex and the peregrine-Barbary superspecies, have only evolved in more recent times; the species of the former seem to be 120,000 years old or so. Species The sequence follows the taxonomic order of White et al. (1996), except for adjustments in the kestrel sequence.
Extinct species
Réunion kestrel, Falco duboisi – extinct (about 1700)
Fossil record
Falco medius (Late Miocene of Cherevichnyi, Ukraine)
?Falco sp. (Late Miocene of Idaho)
Falco sp. (Early Pliocene of Kansas)
Falco sp. (Early Pliocene of Bulgaria – Early Pleistocene of Spain and Czech Republic)
Falco oregonus (Early/Middle Pliocene of Fossil Lake, Oregon) – possibly not distinct from a living species
Falco umanskajae (Late Pliocene of Kryzhanovka, Ukraine) – includes "Falco odessanus", a nomen nudum
?Falco bakalovi (Late Pliocene of Varshets, Bulgaria)
Falco antiquus (Middle Pleistocene of Noailles, France and possibly Horvőlgy, Hungary)
Cuban kestrel, Falco kurochkini (Late Pleistocene/Holocene of Cuba, West Indies)
Falco chowi (China)
Falco bulgaricus (Late Miocene of Hadzhidimovo, Bulgaria)
Several more paleosubspecies of extant species have also been described; see the species accounts for these. "Sushkinia" pliocaena from the Early Pliocene of Pavlodar (Kazakhstan) appears to be a falcon of some sort. It might belong in this genus or a closely related one. In any case, the genus name Sushkinia is invalid for this animal because it had already been allocated to a prehistoric dragonfly relative. In 2015 the bird genus was renamed Psushkinia.
The supposed "Falco" pisanus was actually a pigeon of the genus Columba, possibly the same as Columba omnisanctorum, which, in that case, would adopt the older species name of the "falcon". The Eocene fossil "Falco" falconellus (or "F." falconella) from Wyoming is a bird of uncertain affiliations, maybe a falconid, maybe not; it certainly does not belong in this genus. "Falco" readei is now considered a paleosubspecies of the yellow-headed caracara (Milvago chimachima).
Biology and health sciences
Accipitriformes and Falconiformes
null
54445
https://en.wikipedia.org/wiki/Bird%20of%20prey
Bird of prey
Birds of prey or predatory birds, also known as (although not the same as) raptors, are hypercarnivorous bird species that actively hunt and feed on other vertebrates (mainly mammals, reptiles and other smaller birds). In addition to speed and strength, these predators have keen eyesight for detecting prey from a distance or during flight, strong feet with sharp talons for grasping or killing prey, and powerful, curved beaks for tearing off flesh. Although predatory birds primarily hunt live prey, many species (such as fish eagles, vultures and condors) also scavenge and eat carrion. Although the term "bird of prey" could theoretically be taken to include all birds that actively hunt and eat other animals, ornithologists typically use the narrower definition followed in this page, excluding many piscivorous predators such as storks, cranes, herons, gulls, skuas, penguins, and kingfishers, as well as many primarily insectivorous birds such as passerines (e.g. shrikes), nightjars, frogmouths, songbirds such as crows and ravens, alongside opportunistic predators from predominantly frugivorous or herbivorous ratites such as cassowaries and rheas. Some extinct predatory telluravian birds had talons similar to those of modern birds of prey, including mousebird relatives (Sandcoleidae), and Messelasturidae indicating possible common descent. Some Enantiornithes also had such talons, indicating possible convergent evolution, as enanthiornithines weren't even modern birds. Common names The term raptor is derived from the Latin word rapio, meaning "to seize or take by force". The common names for various birds of prey are based on structure, but many of the traditional names do not reflect the evolutionary relationships between the groups. Eagles tend to be large, powerful birds with long, broad wings and massive feet. Booted eagles have legs and feet feathered to the toes and build very large stick nests. Falcons and kestrels are medium-size birds of prey with long pointed wings, and many are particularly swift flyers. They belong to the family Falconidae, only distantly related to the Accipitriformes below. Caracaras are a distinct subgroup of the Falconidae unique to the New World, and most common in the Neotropics – their broad wings, naked faces and appetites of a generalist suggest some level of convergence with either Buteo or the vulturine birds, or both. True hawks are medium-sized birds of prey that usually belong to the genus Accipiter (see below). They are mainly woodland birds that hunt by sudden dashes from a concealed perch. They usually have long tails for tight steering. Buzzards are medium-large raptors with robust bodies and broad wings, or, alternatively, any bird of the genus Buteo (also commonly known as "hawks" in North America, while "buzzard" is colloquially used for vultures). Harriers are large, slender hawk-like birds with long tails and long thin legs. Most use a combination of keen eyesight and hearing to hunt small vertebrates, gliding on their long broad wings and circling low over grasslands and marshes. Kites have long wings and relatively weak legs. They spend much of their time soaring. They will take live vertebrate prey, but mostly feed on insects or even carrion. The osprey, a single species found worldwide that specializes in catching fish and builds large stick nests. Owls are variable-sized, typically night-specialized hunting birds. They fly almost silently due to their special feather structure that reduces turbulence. 
They have particularly acute hearing and nocturnal eyesight. The secretarybird is a single species with a large body and long, stilted legs endemic to the open grasslands of Sub-Saharan Africa. Vultures are scavengers and carrion-eating raptors of two distinct biological families: the Old World vultures (Accipitridae), which occurs only in the Eastern Hemisphere; and the New World vultures (Cathartidae), which occurs only in the Western Hemisphere. Members of both groups have heads either partly or fully devoid of feathers. Many of these English language group names originally referred to particular species encountered in Britain. As English-speaking people travelled further, the familiar names were applied to new birds with similar characteristics. Names that have generalised this way include: kite (Milvus milvus), sparrowhawk or sparhawk (Accipiter nisus), goshawk (Accipiter gentilis), kestrel (Falco tinninculus), hobby (Falco subbuteo), harrier (simplified from "hen-harrier", Circus cyaneus), buzzard (Buteo buteo). Some names have not generalised, and refer to single species (or groups of closely related (sub)species), such as the merlin (Falco columbarius). Systematics Historical classifications The taxonomy of Carl Linnaeus grouped birds (class Aves) into orders, genera, and species, with no formal ranks between genus and order. He placed all birds of prey into a single order, Accipitres, subdividing this into four genera: Vultur (vultures), Falco (eagles, hawks, falcons, etc.), Strix (owls), and Lanius (shrikes). This approach was followed by subsequent authors such as Gmelin, Latham and Turton. Louis Pierre Vieillot used additional ranks: order, tribe, family, genus, species. Birds of prey (order Accipitres) were divided into diurnal and nocturnal tribes; the owls remained monogeneric (family Ægolii, genus Strix), whilst the diurnal raptors were divided into three families: Vulturini, Gypaëti, and Accipitrini. Thus Vieillot's families were similar to the Linnaean genera, with the difference that shrikes were no longer included amongst the birds of prey. In addition to the original Vultur and Falco (now reduced in scope), Vieillot adopted four genera from Savigny: Phene, Haliæetus, Pandion, and Elanus. He also introduced five new genera of vultures (Gypagus, Catharista, Daptrius, Ibycter, Polyborus) and eleven new genera of accipitrines (Aquila, Circaëtus, Circus, Buteo, Milvus, Ictinia, Physeta, Harpia, Spizaëtus, Asturina, Sparvius). Falconimorphae is a deprecated superorder within Raptores, formerly composed of the orders Falconiformes and Strigiformes. The clade was invalidated after 2012. Falconiformes is now placed in Eufalconimorphae, while Strigiformes is placed in Afroaves. Modern systematics The order Accipitriformes is believed to have originated 44 million years ago when it split from the common ancestor of the secretarybird (Sagittarius serpentarius) and the accipitrid species. The phylogeny of Accipitriformes is complex and difficult to unravel. Widespread paraphylies were observed in many phylogenetic studies. More recent and detailed studies show similar results. However, according to the findings of a 2014 study, the sister relationship between larger clades of Accipitriformes was well supported (e.g. relationship of Harpagus kites to buzzards and sea eagles and these latter two with Accipiter hawks are sister taxa of the clade containing Aquilinae and Harpiinae). 
The diurnal birds of prey are formally classified into families of two different orders (Accipitriformes and Falconiformes):
Accipitridae: hawks, eagles, buzzards, harriers, kites, and Old World vultures
Pandionidae: the osprey
Sagittariidae: the secretarybird
Falconidae: falcons, caracaras, and forest falcons
Cathartidae: New World vultures, including condors
These families were traditionally grouped together in a single order Falconiformes but are now split into two orders, the Falconiformes and Accipitriformes. The Cathartidae are sometimes placed in a separate order Cathartiformes. Formerly, they were sometimes placed in the order Ciconiiformes. The secretarybird and/or osprey are sometimes listed as subfamilies of Accipitridae: Sagittariinae and Pandioninae, respectively. Australia's letter-winged kite is a member of the family Accipitridae, although it is a nocturnal bird. The nocturnal birds of prey, the owls, are classified separately as members of two extant families of the order Strigiformes: Strigidae ("typical owls") and Tytonidae (barn and bay owls). Phylogeny The birds of prey belong to Telluraves, a clade that also contains the passerines and several near-passerine lineages; within it, the birds-of-prey orders do not form a single clade, illustrating the paraphyly of the group and its relationships to other birds. A recent phylogenomic study by Wu et al. (2024) found an alternative phylogeny for the placement of the birds of prey. Their analysis supported a clade consisting of the Strigiformes and Accipitriformes, named Hieraves. Hieraves was also recovered as the sister clade to Australaves (which includes the Cariamiformes and Falconiformes along with Psittacopasserae). Possible inclusion of Cariamiformes Cariamiformes is an order of telluravian birds consisting of the living seriemas and extinct terror birds. Jarvis et al. 2014 suggested including them in the category of birds of prey, and McClure et al. 2019 considered seriemas to be birds of prey. The Peregrine Fund also considers seriemas to be birds of prey. Like most birds of prey, seriemas and terror birds prey on vertebrates. However, seriemas were not traditionally considered birds of prey, and they are still not considered birds of prey in general parlance. They were traditionally classified in the order Gruiformes, but later research has reclassified them into Cariamiformes. The bodies of seriemas are also shaped somewhat differently from those of birds of prey. Their legs and necks are significantly longer than those of typical raptors, although the secretarybirds (traditionally considered raptors) also have comparably long legs. The beaks of seriemas are hooked (as in raptors), but are longer than those of typical raptors. Migration Migratory behaviour evolved multiple times within accipitrid raptors. The earliest event occurred nearly 14 to 12 million years ago. This result seems to be one of the oldest dates published so far in the case of birds of prey. For example, a previous reconstruction of migratory behaviour in one Buteo clade, which placed the origin of migration around 5 million years ago, was also supported by that study. Migratory species of raptors may have had a southern origin, because it seems that all of the major lineages within Accipitridae had an origin in one of the biogeographic realms of the Southern Hemisphere.
The appearance of migratory behaviour occurred in the tropics, in parallel with the range expansion of migratory species to temperate habitats. Similar results of southern origin in other taxonomic groups can be found in the literature. Distribution and biogeographic history strongly determine the origin of migration in birds of prey. Based on some comparative analyses, diet breadth also has an effect on the evolution of migratory behaviour in this group, but its relevance needs further investigation. The evolution of migration in animals seems to be a complex and difficult topic with many unanswered questions. A recent study discovered new connections between migration and the ecology and life history of raptors. A brief overview from the abstract of the published paper states that "clutch size and hunting strategies have been proved to be the most important variables in shaping distribution areas, and also the geographic dissimilarities may mask important relationships between life history traits and migratory behaviours. The West Palearctic-Afrotropical and the North-South American migratory systems are fundamentally different from the East Palearctic-Indomalayan system, owing to the presence versus absence of ecological barriers." Maximum entropy modelling can help in answering the question of why a species winters at one location while others winter elsewhere. Temperature- and precipitation-related factors differ in how they limit species distributions. "This suggests that the migratory behaviours differ among the three main migratory routes for these species", which may have important consequences for the conservation of migratory raptors. Sexual dimorphism Birds of prey (raptors) are known to display patterns of sexual dimorphism. It is commonly believed that the dimorphisms found in raptors occur due to sexual selection or environmental factors. In general, hypotheses in favor of ecological factors being the cause for sexual dimorphism in raptors are rejected. This is because the ecological model is less parsimonious, meaning that its explanation is more complex than that of the sexual selection model. Additionally, ecological models are much harder to test because a great deal of data is required. Dimorphisms can also be the product of intrasexual selection between males and females. It appears that both sexes of the species play a role in the sexual dimorphism within raptors; females tend to compete with other females to find good places to nest and attract males, while males compete with other males for adequate hunting ground so that they appear to be the healthiest mate. It has also been proposed that sexual dimorphism is merely the product of disruptive selection and a stepping stone in the process of speciation, especially if the traits that define gender are independent across a species. Sexual dimorphism can be viewed as something that can accelerate the rate of speciation. In non-predatory birds, males are typically larger than females. However, in birds of prey, the opposite is the case. For instance, the kestrel is a type of falcon in which males are the primary providers, and the females are responsible for nurturing the young. In this species, the smaller the kestrels are, the less food they need, and thus they can survive in harsher environments. This is particularly true of the male kestrels.
It has become more energetically favorable for male kestrels to remain smaller than their female counterparts because smaller males have an agility advantage when it comes to defending the nest and hunting. Larger females are favored because they can incubate larger numbers of offspring, while also being able to brood a larger clutch size. Olfaction It is a long-standing belief that birds lack any sense of smell, but it has become clear that many birds do have functional olfactory systems. Despite this, most raptors are still considered to primarily rely on vision, with raptor vision being extensively studied. A 2020 review of the existing literature combining anatomical, genetic, and behavioural studies showed that, in general, raptors have functional olfactory systems that they are likely to use in a range of different contexts. Persecution Birds of prey have been historically persecuted both directly and indirectly. In the Danish Faroe Islands, there were rewards Naebbetold (by royal decree from 1741) given in return for the bills of birds of prey shown by hunters. In Britain, kites and buzzards were seen as destroyers of game and killed, for instance in 1684–5 alone as many as 100 kites were killed. Rewards for their killing were also in force in the Netherlands from 1756. From 1705 to 1800, it has been estimated that 624087 birds of prey were killed in a part of Germany that included Hannover, Luneburg, Lauenburg and Bremen with 14125 claws deposited just in 1796–97. Many species also develop lead poisoning after accidental consumption of lead shot when feeding on animals that had been shot by hunters. Lead pellets from direct shooting that the birds have escaped from also cause reduced fitness and premature deaths. Attacks on humans Some evidence supports the contention that the African crowned eagle occasionally views human children as prey, with a witness account of one attack (in which the victim, a seven-year-old boy, survived and the eagle was killed), and the discovery of part of a human child skull in a nest. This would make it the only living bird known to prey on humans, although other birds such as ostriches and cassowaries have killed humans in self-defense and a lammergeier might have killed Aeschylus by accident. Many stories of Brazilian indigenous peoples speak about children mauled by Uiruuetê, the Harpy Eagle in Tupi language. Various large raptors like golden eagles are reported attacking human beings, but its unclear if they intend to eat them or if they have ever been successful in killing one. Some fossil evidence indicates large birds of prey occasionally preyed on prehistoric hominids. The Taung Child, an early human found in Africa, is believed to have been killed by an eagle-like bird similar to the crowned eagle. The Haast's eagle may have preyed on early humans in New Zealand, and this conclusion would be consistent with Maori folklore. Leptoptilos robustus might have preyed on both Homo floresiensis and anatomically modern humans, and the Malagasy crowned eagle, teratorns, Woodward's eagle and Caracara major are similar in size to the Haast's eagle, implying that they similarly could pose a threat to a human being. Vision Birds of prey have incredible vision and rely heavily on it for a number of tasks. They utilize their high visual acuity to obtain food, navigate their surroundings, distinguish and flee from predators, mating, nest construction, and much more. 
They accomplish these tasks with a large eye in relation to their skull, which allows for a larger image to be projected onto the retina. The visual acuity of some large raptors such as eagles and Old World vultures are the highest known among vertebrates; the wedge-tailed eagle has twice the visual acuity of a typical human and six times that of the common ostrich, the vertebrate with the largest eyes. There are two regions in the retina, called the deep and shallow fovea, that are specialized for acute vision. These regions contain the highest density of photoreceptors, and provide the highest points of visual acuity. The deep fovea points forward at an approximate 45° angle, while the shallow fovea points approximately 15° to the right or left of the head axis. Several raptor species repeatedly cock their heads into three distinct positions while observing an object. First, is straight ahead with their head pointed towards the object. Second and third are sideways to the right or left of the object, with their head axis positioned approximately 40° adjacent to the object. This movement is believed to be associated with lining up the incoming image to fall on the deep fovea. Raptors will choose which head position to use depending on the distance to the object. At distances as close as 8m, they used primarily binocular vision. At distances greater than 21m, they spent more time using monocular vision. At distances greater than 40m, they spent 80% or more time using their monocular vision. This suggests that raptors tilt their head to rely on the highly acute deep fovea. Like all birds, raptors possess tetrachromacy, however, due to their emphasis on visual acuity, many diurnal birds of prey have little ability to see ultraviolet light as this produces chromatic aberration which decreases the clarity of vision.
Biology and health sciences
General articles
null
54493
https://en.wikipedia.org/wiki/Kuratowski%27s%20theorem
Kuratowski's theorem
In graph theory, Kuratowski's theorem is a mathematical forbidden graph characterization of planar graphs, named after Kazimierz Kuratowski. It states that a finite graph is planar if and only if it does not contain a subgraph that is a subdivision of K5 (the complete graph on five vertices) or of K3,3 (a complete bipartite graph on six vertices, three of which connect to each of the other three, also known as the utility graph). Statement A planar graph is a graph whose vertices can be represented by points in the Euclidean plane, and whose edges can be represented by simple curves in the same plane connecting the points representing their endpoints, such that no two curves intersect except at a common endpoint. Planar graphs are often drawn with straight line segments representing their edges, but by Fáry's theorem this makes no difference to their graph-theoretic characterization. A subdivision of a graph is a graph formed by subdividing its edges into paths of one or more edges. Kuratowski's theorem states that a finite graph G is planar if it is not possible to subdivide the edges of K5 or K3,3, and then possibly add additional edges and vertices, to form a graph isomorphic to G. Equivalently, a finite graph is planar if and only if it does not contain a subgraph that is homeomorphic to K5 or K3,3. Kuratowski subgraphs If G is a graph that contains a subgraph H that is a subdivision of K5 or K3,3, then H is known as a Kuratowski subgraph of G. With this notation, Kuratowski's theorem can be expressed succinctly: a graph is planar if and only if it does not have a Kuratowski subgraph. The two graphs K5 and K3,3 are nonplanar, as may be shown either by a case analysis or an argument involving Euler's formula. Additionally, subdividing a graph cannot turn a nonplanar graph into a planar graph: if a subdivision of a graph G has a planar drawing, the paths of the subdivision form curves that may be used to represent the edges of G itself. Therefore, a graph that contains a Kuratowski subgraph cannot be planar. The more difficult direction in proving Kuratowski's theorem is to show that, if a graph is nonplanar, it must contain a Kuratowski subgraph. Algorithmic implications A Kuratowski subgraph of a nonplanar graph can be found in linear time, as measured by the size of the input graph. This allows the correctness of a planarity testing algorithm to be verified for nonplanar inputs, as it is straightforward to test whether a given subgraph is or is not a Kuratowski subgraph. Usually, non-planar graphs contain a large number of Kuratowski subgraphs. The extraction of these subgraphs is needed, e.g., in branch and cut algorithms for crossing minimization. It is possible to extract a large number of Kuratowski subgraphs in time dependent on their total size. History Kazimierz Kuratowski published his theorem in 1930. The theorem was independently proved by Orrin Frink and Paul Smith, also in 1930, but their proof was never published. The special case of cubic planar graphs (for which the only minimal forbidden subgraph is K3,3) was also independently proved by Karl Menger in 1930. Since then, several new proofs of the theorem have been discovered. In the Soviet Union, Kuratowski's theorem was known as either the Pontryagin–Kuratowski theorem or the Kuratowski–Pontryagin theorem, as the theorem was reportedly proved independently by Lev Pontryagin around 1927. However, as Pontryagin never published his proof, this usage has not spread to other places.
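In practice, the theorem is usually exercised through planarity-testing libraries rather than by hand. The sketch below, assuming the networkx Python library is available, checks the two forbidden graphs and a subdivision of one of them; note that networkx's check_planarity implements a linear-time planarity test (and can optionally return a counterexample subgraph), not a direct transcription of Kuratowski's argument.

```python
import networkx as nx

# The two forbidden graphs of Kuratowski's theorem are nonplanar.
k5 = nx.complete_graph(5)
k33 = nx.complete_bipartite_graph(3, 3)
print(nx.check_planarity(k5)[0])    # False
print(nx.check_planarity(k33)[0])   # False

# Subdividing an edge does not restore planarity: replace edge (0, 1)
# of K5 by a path 0 - v - 1 through a new vertex.
sub = nx.Graph(k5)
sub.remove_edge(0, 1)
sub.add_edge(0, "v")
sub.add_edge("v", 1)
print(nx.check_planarity(sub)[0])   # still False

# The Petersen graph contains a subdivision of K3,3 (and a K5 minor),
# so by the theorem it is nonplanar as well.
print(nx.check_planarity(nx.petersen_graph())[0])   # False
```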
Related results A closely related result, Wagner's theorem, characterizes the planar graphs by their minors in terms of the same two forbidden graphs K5 and K3,3. Every Kuratowski subgraph is a special case of a minor of the same type, and while the reverse is not true, it is not difficult to find a Kuratowski subgraph (of one type or the other) from one of these two forbidden minors; therefore, these two theorems are equivalent. An extension is the Robertson–Seymour theorem.
Mathematics
Graph theory
null
54513
https://en.wikipedia.org/wiki/Opal
Opal
Opal is a hydrated amorphous form of silica (SiO2·nH2O); its water content may range from 3% to 21% by weight, but is usually between 6% and 10%. Due to its amorphous physical structure, it is classified as a mineraloid, unlike crystalline forms of silica, which are considered minerals. It is deposited at a relatively low temperature and may occur in the fissures of almost any kind of rock, being most commonly found with limonite, sandstone, rhyolite, marl, and basalt. The name opal is believed to be derived from a Sanskrit word meaning 'jewel', and later from its Greek derivative. There are two broad classes of opal: precious and common. Precious opal displays play-of-color (iridescence); common opal does not. Play-of-color is defined as "a pseudo chromatic optical effect resulting in flashes of colored light from certain minerals, as they are turned in white light." The internal structure of precious opal causes it to diffract light, resulting in play-of-color. Depending on the conditions in which it formed, opal may be transparent, translucent, or opaque, and the background color may be white, black, or nearly any color of the visual spectrum. Black opal is considered the rarest, while white, gray, and green opals are the most common. Precious opal Precious opal shows a variable interplay of internal colors, and though it is a mineraloid, it has an internal structure. At microscopic scales, precious opal is composed of silica spheres of roughly uniform size arranged in a hexagonal or cubic close-packed lattice. It was shown by J. V. Sanders in the mid-1960s that these ordered silica spheres produce the internal colors by causing the interference and diffraction of light passing through the microstructure of the opal. The regularity of the sizes and the packing of these spheres is a prime determinant of the quality of precious opal. Where the distance between the regularly packed planes of spheres is around half the wavelength of a component of visible light, the light of that wavelength may be subject to diffraction from the grating created by the stacked planes. The colors that are observed are determined by the spacing between the planes and the orientation of planes with respect to the incident light. The process can be described by Bragg's law of diffraction. Visible light cannot pass through large thicknesses of the opal. This is the basis of the optical band gap in a photonic crystal. In addition, microfractures may be filled with secondary silica and form thin lamellae inside the opal during its formation. The term opalescence is commonly used to describe this unique and beautiful phenomenon, which in gemology is termed play of color. In gemology, opalescence is applied to the hazy-milky-turbid sheen of common or potch opal which does not show a play of color. Opalescence is a form of adularescence. For gemstone use, most opal is cut and polished to form a cabochon. "Natural" opal refers to polished stones consisting wholly of precious opal. Opals too thin to produce a "natural" opal may be combined with other materials to form "composite" gems. An opal doublet consists of a relatively thin layer of precious opal, backed by a layer of dark-colored material, most commonly ironstone, dark or black common opal (potch), onyx, or obsidian. The darker backing emphasizes the play of color and results in a more attractive display than a lighter potch. An opal triplet is similar to a doublet but has a third layer, a domed cap of clear quartz or plastic on the top.
The cap takes a high polish and acts as a protective layer for the opal. The top layer also acts as a magnifier, to emphasize the play of color of the opal beneath, which is often an inferior specimen or an extremely thin section of precious opal. Triplet opals tend to have a more artificial appearance and are not classed as precious gemstones, but rather "composite" gemstones. Jewelry applications of precious opal can be somewhat limited by opal's sensitivity to heat due primarily to its relatively high water content and predisposition to scratching. Combined with modern techniques of polishing, a doublet opal can produce a similar effect to Natural black or boulder opal at a fraction of the price. Doublet opal also has the added benefit of having genuine opal as the top visible and touchable layer, unlike triplet opals. Common opal Besides the gemstone varieties that show a play of color, the other kinds of common opal include the milk opal, milky bluish to greenish (which can sometimes be of gemstone quality); resin opal, which is honey-yellow with a resinous luster; wood opal, which is caused by the replacement of the organic material in wood with opal; menilite, which is brown or grey; hyalite, a colorless glass-clear opal sometimes called Muller's glass; geyserite, also called siliceous sinter, deposited around hot springs or geysers; and diatomaceous earth, the accumulations of diatom shells or tests. Common opal often displays a hazy-milky-turbid sheen from within the stone. In gemology, this optical effect is strictly defined as opalescence which is a form of adularescence. Varieties of common opal "Girasol opal" is a term sometimes mistakenly and improperly used to refer to fire opals, as well as a type of transparent to semitransparent type milky quartz from Madagascar which displays an asterism, or star effect when cut properly. However, the true girasol opal is a type of hyalite opal that exhibits a bluish glow or sheen that follows the light source around. It is not a play of color as seen in precious opal, but rather an effect from microscopic inclusions. It is also sometimes referred to as water opal, too, when it is from Mexico. The two most notable locations of this type of opal are Oregon and Mexico. A Peruvian opal (also called blue opal) is a semi-opaque to opaque blue-green stone found in Peru, which is often cut to include the matrix in the more opaque stones. It does not display a play of color. Blue opal also comes from Oregon and Idaho in the Owyhee region, as well as from Nevada around the Virgin Valley. Opal is also formed by diatoms. Diatoms are a form of algae that, when they die, often form layers at the bottoms of lakes, bays, or oceans. Their cell walls are made up of hydrated silicon dioxide which gives them structural coloration and therefore the appearance of tiny opals when viewed under a microscope. These cell walls or "tests" form the “grains” for the diatomaceous earth. This sedimentary rock is white, opaque, and chalky in texture. Diatomite has multiple industrial uses such as filtering or adsorbing since it has a fine particle size and very porous nature, and gardening to increase water absorption. History Opal was rare and very valuable in antiquity. In Europe, it was a gem prized by royalty. Until the opening of vast deposits in Australia in the 19th century the only known source was beyond the Roman frontier in Slovakia. Opal is the national gemstone of Australia. 
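The Bragg-law picture of play-of-color described earlier can be turned into a rough back-of-the-envelope estimate. The sketch below assumes a first-order reflection, an effective refractive index of about 1.45 for the silica sphere packing, and a handful of illustrative plane spacings; none of these numbers describe any particular stone, but they show how larger spacings shift the diffracted color toward the red end of the visible spectrum and smaller spacings toward the violet.

```python
import math

def diffracted_wavelength_nm(plane_spacing_nm, n_eff=1.45, theta_deg=90.0, order=1):
    """First-order Bragg estimate: lambda = 2 * d * n_eff * sin(theta) / order.

    plane_spacing_nm : spacing between packed sphere planes (illustrative values).
    n_eff            : assumed effective refractive index of the silica structure.
    theta_deg        : angle between the incident light and the diffracting planes.
    """
    return 2.0 * plane_spacing_nm * n_eff * math.sin(math.radians(theta_deg)) / order

# Illustrative spacings only: smaller spacings give violet-blue flashes,
# larger spacings give orange-red flashes.
for d in (150, 200, 250):
    print(d, "nm spacing ->", round(diffracted_wavelength_nm(d)), "nm")
```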
Sources The primary sources of opal are Australia and Ethiopia, but because of inconsistent and widely varying accountings of their respective levels of extraction, it is difficult to accurately state what proportion of the global supply of opal comes from either country. Australian opal has been cited as accounting for 95–97% of the world's supply of precious opal, with the state of South Australia accounting for 80% of the world's supply. In 2012, the United States Geological Survey published an estimate of Ethiopian opal production. USGS data from the same period (2012) put Australian opal production at $41 million. Because the two figures are reported in different units of measurement, it is not possible to directly compare Australian and Ethiopian opal production, but these data and others suggest that the traditional percentages given for Australian opal production may be overstated. Yet the validity of the data in the USGS report appears to conflict with that of Laurs et al. and Mesfin, whose estimate of the 2012 Ethiopian opal output (from Wegeltena) was considerably smaller. Australia The town of Coober Pedy in South Australia is a major source of opal. The world's largest and most valuable gem opal, "Olympic Australis", was found in August 1956 at the "Eight Mile" opal field in Coober Pedy. The Mintabie Opal Field in South Australia, located northwest of Coober Pedy, has also produced large quantities of crystal opal and the rarer black opal. Over the years, it has been sold overseas incorrectly as Coober Pedy opal. The black opal from Mintabie is said to include some of the best examples found in Australia. Andamooka in South Australia is also a major producer of matrix opal, crystal opal, and black opal. Another Australian town, Lightning Ridge in New South Wales, is the main source of black opal, opal containing a predominantly dark background (dark gray to blue-black displaying the play of color), collected from the Griman Creek Formation. Boulder opal consists of concretions and fracture fillings in a dark siliceous ironstone matrix. It is found sporadically in western Queensland, from Kynuna in the north, to Yowah and Koroit in the south. Its largest quantities are found around Jundah and Quilpie in South West Queensland. Australia also has opalized fossil remains, including dinosaur bones in New South Wales and South Australia, and marine creatures in South Australia. Ethiopia It has been reported that Northern African opal was used to make tools as early as 4000 BC. The first published report of gem opal from Ethiopia appeared in 1994, with the discovery of precious opal in the Menz Gishe District, North Shewa Province. The opal, found mostly in the form of nodules, was of volcanic origin and was found predominantly within weathered layers of rhyolite. This Shewa Province opal was mostly dark brown in color and had a tendency to crack. These qualities made it unpopular in the gem trade. In 2008, a new opal deposit was found approximately 180 km north of Shewa Province, near the town of Wegeltena, in Ethiopia's Wollo Province. The Wollo Province opal was different from the previous Ethiopian opal finds in that it more closely resembled the sedimentary opals of Australia and Brazil, with a light background and often vivid play-of-color. Wollo Province opal, more commonly referred to as "Welo" or "Wello" opal, has become the dominant Ethiopian opal in the gem trade.
Virgin Valley, Nevada The Virgin Valley opal fields of Humboldt County in northern Nevada produce a wide variety of precious black, crystal, white, fire, and lemon opal. The black fire opal is the official gemstone of Nevada. Most of the precious opal is partial wood replacement. The precious opal is hosted and found in situ within a subsurface horizon or zone of bentonite, which is considered a "lode" deposit. Opals which have weathered out of the in situ deposits are alluvial and considered placer deposits. Miocene-age opalised teeth, bones, fish, and a snake head have been found. Some of the opal has high water content and may desiccate and crack when dried. The largest producing mines of Virgin Valley have been the famous Rainbow Ridge, Royal Peacock, Bonanza, Opal Queen, and WRT Stonetree/Black Beauty mines. The largest unpolished black opal in the Smithsonian Institution, known as the "Roebling opal", came out of the tunneled portion of the Rainbow Ridge Mine in 1917, and weighs . The largest polished black opal in the Smithsonian Institution comes from the Royal Peacock opal mine in the Virgin Valley, weighing , known as the "Black Peacock". Mexico Fire opal is a transparent to translucent opal with warm body colors of yellow to orange to red. Although fire opals do not usually show any play of color, they occasionally exhibit bright green flashes. The most famous source of fire opals is the state of Querétaro in Mexico; these opals are commonly called Mexican fire opals. Fire opals that do not show a play of color are sometimes referred to as jelly opals. Mexican opals are sometimes cut in their rhyolitic host material if it is hard enough to allow cutting and polishing. This type of Mexican opal is referred to as a Cantera opal. Another type of opal from Mexico, referred to as Mexican water opal, is a colorless opal that exhibits either a bluish or golden internal sheen. Opal occurs in significant quantity and variety in central Mexico, where mining and production originated in the state of Querétaro. In this region the opal deposits are located mainly in the mountain ranges of three municipalities: Colón, Tequisquiapan, and Ezequiel Montes. From the 1960s through the mid-1970s, the Querétaro mines were heavily worked. Today's opal miners report that quality opals with a lot of fire and play of color were much easier to find then, whereas gem-quality opals are now very hard to come by and command hundreds of US dollars or more. The orange-red background color is characteristic of all "fire opals," including "Mexican fire opal". The oldest mine in Querétaro is Santa Maria del Iris. This mine was opened around 1870 and has been reopened at least 28 times since. Currently there are about 100 mines in the regions around Querétaro, but most of them are closed. The best-quality opals came from the Santa Maria del Iris mine, followed by La Hacienda la Esperanza, Fuentezuelas, La Carbonera, and La Trinidad. Important deposits in the state of Jalisco were not discovered until the late 1950s. In 1957, Alfonso Ramirez (of Querétaro) accidentally discovered the first opal mine in Jalisco: La Unica, located on the outskirts of the Tequila volcano, near the Huitzicilapan farm in Magdalena. By 1960 there were around 500 known opal mines in this region alone. 
Other regions of the country also produce opals of lesser quality. Guerrero produces an opaque opal similar to the opals from Australia; some of these opals are carefully treated with heat to improve their colors, so high-quality opals from this area may be suspect. There are also some small opal mines in Morelos, Durango, Chihuahua, Baja California, Guanajuato, Puebla, Michoacán, and Estado de México. Other locations Another source of white base opal or creamy opal in the United States is Spencer, Idaho. A high percentage of the opal found there occurs in thin layers. Other significant deposits of precious opal around the world can be found in the Czech Republic, Canada, Slovakia, Hungary, Turkey, Indonesia, Brazil (in Pedro II, Piauí), Honduras (more precisely in Erandique), Guatemala, and Nicaragua. In late 2008, NASA announced the discovery of opal deposits on Mars. Fossil opal Wood opal, also known as xylopal, is a form of opal and a type of petrified wood in which the wood has developed an opalescent sheen or, more rarely, has been completely replaced by opal. Other names for this material are opalized wood and opalized petrified wood. It is often used as a gemstone. Synthetic opal Opals of all varieties have been synthesized experimentally and commercially. The discovery of the ordered sphere structure of precious opal led to its synthesis by Pierre Gilson in 1974. The resulting material is distinguishable from natural opal by its regularity; under magnification, the patches of color are seen to be arranged in a "lizard skin" or "chicken wire" pattern. Furthermore, synthetic opals do not fluoresce under ultraviolet light. Synthetics are also generally lower in density and are often highly porous. Opals which have been created in a laboratory are often termed "lab-created opals", which, while classifiable as man-made and synthetic, are very different from their resin-based counterparts which are also considered man-made and synthetic. The term "synthetic" implies that a stone has been created to be chemically and structurally indistinguishable from a genuine one, and genuine opal contains no resins or polymers. The finest modern lab-created opals do not exhibit the lizard skin or columnar patterning of earlier lab-created varieties, and their patterns are non-directional. They can still be distinguished from genuine opals, however, by their lack of inclusions and the absence of any surrounding non-opal matrix. While many genuine opals are cut and polished without a matrix, the presence of irregularities in their play-of-color continues to mark them as distinct from even the best lab-created synthetics. Other research into macroporous structures has yielded highly ordered materials with optical properties similar to those of opals, and these have been used in cosmetics. Synthetic opals are also studied extensively in photonics for sensing and light-management purposes. Local atomic structure The lattice of spheres of opal that causes interference with light is several hundred times larger than the fundamental structure of crystalline silica (a rough numerical sketch of the resulting diffraction is given below). As a mineraloid, no unit cell describes the structure of opal. Nevertheless, opals can be roughly divided into those that show no signs of crystalline order (amorphous opal) and those that show signs of the beginning of crystalline order, commonly termed cryptocrystalline or microcrystalline opal. 
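The diffraction mentioned above can be illustrated with a minimal, order-of-magnitude sketch. It is not taken from the article: the close-packed sphere geometry, the refractive indices, the void fraction, and the simple Bragg condition at normal incidence are all assumptions made here for illustration, and real opals show angular dependence and disorder that this ignores.

```python
import math

def opal_flash_wavelength(sphere_diameter_nm, n_sphere=1.45, n_fill=1.40, void_fraction=0.26):
    """Estimate the first-order diffracted wavelength (nm) for light at normal incidence
    on the close-packed (111) planes of an ordered array of silica spheres.
    All numerical parameters are illustrative assumptions, not values from the article."""
    d_111 = sphere_diameter_nm * math.sqrt(2.0 / 3.0)        # (111) spacing for touching spheres
    n_eff = math.sqrt((1 - void_fraction) * n_sphere ** 2    # effective index from volume-weighted
                      + void_fraction * n_fill ** 2)         # dielectric constants of spheres and filling
    return 2.0 * d_111 * n_eff                                # Bragg condition at normal incidence

for d in (180, 250, 320):  # assumed sphere diameters in nanometres
    print(d, "nm spheres ->", round(opal_flash_wavelength(d)), "nm")
```

Under these assumptions, sphere diameters of roughly 180-320 nm map onto flashes ranging from violet to deep red, which is consistent with the statement that the sphere lattice responsible for the play of color is far larger than the atomic structure of silica itself.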
Dehydration experiments and infrared spectroscopy have shown that most of the H2O in the formula of SiO2·nH2O of opals is present in the familiar form of clusters of molecular water. Isolated water molecules and silanol groups (SiOH) generally form a lesser proportion of the total and can reside near the surface or in defects inside the opal. The structure of low-pressure polymorphs of anhydrous silica consists of frameworks of fully corner-bonded SiO4 tetrahedra. The higher-temperature polymorphs of silica, cristobalite and tridymite, are frequently the first to crystallize from amorphous anhydrous silica, and the local structures of microcrystalline opals also appear to be closer to those of cristobalite and tridymite than to quartz. The structures of tridymite and cristobalite are closely related and can be described as hexagonal and cubic close-packed layers. It is therefore possible to have intermediate structures in which the layers are not regularly stacked. Microcrystalline opal Microcrystalline opal or Opal-CT has been interpreted as consisting of clusters of stacked cristobalite and tridymite over very short length scales. The spheres of opal in microcrystalline opal are themselves made up of tiny nanocrystalline blades of cristobalite and tridymite. Microcrystalline opal has occasionally been further subdivided in the literature. Water content may be as high as 10 wt%. Opal-C, also called lussatine or lussatite, is interpreted as consisting of localized order of α-cristobalite with considerable stacking disorder. Typical water content is about 1.5 wt%. Noncrystalline opal Two broad categories of noncrystalline opals, sometimes just referred to as "opal-A" ("A" stands for "amorphous"), have been proposed. The first of these is opal-AG, consisting of aggregated spheres of silica, with water filling the space in between. Precious opal and potch opal are generally varieties of this, the difference being in the regularity of the sizes of the spheres and their packing. The second "opal-A" is opal-AN, or water-containing amorphous silica-glass. Hyalite is another name for this. Noncrystalline silica in siliceous sediments is reported to gradually transform to opal-CT and then opal-C as a result of diagenesis, due to the increasing overburden pressure in sedimentary rocks, as some of the stacking disorder is removed. Opal surface chemical groups The surface of opal in contact with water is covered by siloxane bonds (≡Si–O–Si≡) and silanol groups (≡Si–OH). This makes the opal surface very hydrophilic and capable of forming numerous hydrogen bonds. Etymology The word 'opal' is adapted from the Latin term . The origin of this word in turn is a matter of debate, but most modern references suggest it is adapted from the Sanskrit word meaning "precious stone". As references to the gem are made by Pliny the Elder, one theory attributes the name's origin to Roman mythology, holding it to have been adapted from Ops, the wife of Saturn and goddess of fertility. (The portion of Saturnalia devoted to Ops was "Opalia", similar to .) Another common claim was that the term was adapted from the Ancient Greek word . This word has two meanings: one is related to "seeing" and forms the basis of English words like "opaque"; the other is "other", as in "alias" and "alter". It is claimed that the word combined these uses, meaning "to see a change in color". 
However, historians have noted that the first appearances of the Greek word do not occur until after the Romans had taken over the Greek states in 180 BC, and that the Greeks had previously used a different term. Nevertheless, the argument for the Sanskrit origin is strong. The term first appears in Roman references around 250 BC, at a time when the opal was valued above all other gems. The opals were supplied by traders from the Bosporus, who claimed the gems came from India. Before this, the stone was referred to by a variety of names, but these fell from use after 250 BC. Historical superstitions In the Middle Ages, opal was considered a stone that could provide great luck because it was believed to possess all the virtues of each gemstone whose color was represented in the color spectrum of the opal. It was also said to grant invisibility if wrapped in a fresh bay leaf and held in the hand. As a result, the opal was seen as the patron gemstone for thieves during the medieval period. Following the publication of Sir Walter Scott's Anne of Geierstein in 1829, opal acquired a less auspicious reputation. In Scott's novel, the Baroness of Arnheim wears an opal talisman with supernatural powers. When a drop of holy water falls on the talisman, the opal turns into a colorless stone and the Baroness dies soon thereafter. Due to the popularity of Scott's novel, people began to associate opals with bad luck and death. Within a year of the novel's publication in April 1829, the sale of opals in Europe dropped by 50% and remained low for the next 20 years or so. Even as recently as the beginning of the 20th century, it was believed that when a Russian saw an opal among other goods offered for sale, he or she should not buy anything more, as the opal was believed to embody the evil eye. Opal is considered the birthstone for people born in October. Examples
The Olympic Australis, the world's largest and most valuable gem opal, found in Coober Pedy
The Andamooka Opal, presented to Queen Elizabeth II, also known as the Queen's Opal
The Addyman Plesiosaur from Andamooka, "the finest known opalised skeleton on Earth"
The Burning of Troy, the now-lost opal presented to Joséphine de Beauharnais by Napoleon I of France and the first named opal
The Flame Queen Opal
Opal cameo (jewellery-case) of a profile head of a helmeted warrior, attributed to Wilhelm Schmidt
The Halley's Comet Opal, the world's largest uncut black opal
Although the clock faces above the information stand in Grand Central Terminal in New York City are often said to be opal, they are in fact opalescent glass
The Roebling Opal, Smithsonian Institution
The Galaxy Opal, listed as the "World's Largest Polished Opal" in the 1992 Guinness Book of Records
The Rainbow Virgin, "the finest crystal opal specimen ever unearthed"
The Sea of Opal, the largest black opal in the world
The Fire of Australia, assumed to be "the finest uncut opal in existence"
Beverly the Bug, the first known example of an opal with an insect inclusion
Physical sciences
Silicate minerals
Earth science
54536
https://en.wikipedia.org/wiki/Citric%20acid
Citric acid
Citric acid is an organic compound with the formula . It is a colorless weak organic acid. It occurs naturally in citrus fruits. In biochemistry, it is an intermediate in the citric acid cycle, which occurs in the metabolism of all aerobic organisms. More than two million tons of citric acid are manufactured every year. It is used widely as an acidifier, flavoring, preservative, and chelating agent. A citrate is a derivative of citric acid; that is, the salts, esters, and the polyatomic anion found in solutions and salts of citric acid. An example of the former, a salt, is trisodium citrate; an example of an ester is triethyl citrate. When the citrate trianion is part of a salt, its formula is written as or . Natural occurrence and industrial production Citric acid occurs in a variety of fruits and vegetables, most notably citrus fruits. Lemons and limes have particularly high concentrations of the acid; it can constitute as much as 8% of the dry weight of these fruits (about 47 g/L in the juices). The concentrations of citric acid in citrus fruits range from 0.005 mol/L for oranges and grapefruits to 0.30 mol/L in lemons and limes (a brief unit-conversion sketch relating the mol/L and g/L figures is given at the end of this section); these values vary within species depending upon the cultivar and the circumstances under which the fruit was grown. Citric acid was first isolated in 1784 by the chemist Carl Wilhelm Scheele, who crystallized it from lemon juice. Industrial-scale citric acid production began in 1890 based on the Italian citrus fruit industry, where the juice was treated with hydrated lime (calcium hydroxide) to precipitate calcium citrate, which was isolated and converted back to the acid using diluted sulfuric acid. In 1893, C. Wehmer discovered that Penicillium mold could produce citric acid from sugar. However, microbial production of citric acid did not become industrially important until World War I disrupted Italian citrus exports. In 1917, American food chemist James Currie discovered that certain strains of the mold Aspergillus niger could be efficient citric acid producers, and the pharmaceutical company Pfizer began industrial-level production using this technique two years later, followed by Citrique Belge in 1929. In this production technique, which is still the major industrial route to citric acid used today, cultures of Aspergillus niger are fed on a sucrose or glucose-containing medium to produce citric acid. The source of sugar is corn steep liquor, molasses, hydrolyzed corn starch, or another inexpensive carbohydrate solution. After the mold is filtered out of the resulting suspension, citric acid is isolated by precipitating it with calcium hydroxide to yield calcium citrate salt, from which citric acid is regenerated by treatment with sulfuric acid, as in the direct extraction from citrus fruit juice. In 1977, a patent was granted to Lever Brothers for the chemical synthesis of citric acid starting from either aconitate or isocitrate (also called alloisocitrate) calcium salts under high-pressure conditions; this produced citric acid in near quantitative conversion under what appeared to be a reverse, non-enzymatic Krebs cycle reaction. Global production was in excess of 2,000,000 tons in 2018. More than 50% of this volume was produced in China. More than 50% was used as an acidity regulator in beverages, some 20% in other food applications, 20% for detergent applications, and 10% for non-food applications such as cosmetics, pharmaceuticals, and the chemical industry. 
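The mol/L and g/L concentration figures quoted above can be interconverted with the molar mass of anhydrous citric acid. The short sketch below is illustrative only and is not part of the article; the molar mass constant is supplied here, not stated in the text.

```python
# Molar mass of anhydrous citric acid (C6H8O7) in g/mol; this constant is an assumption supplied here
MOLAR_MASS = 192.12

def molar_to_gpl(mol_per_litre):
    """Convert a citric acid concentration from mol/L to g/L."""
    return mol_per_litre * MOLAR_MASS

def gpl_to_molar(grams_per_litre):
    """Convert a citric acid concentration from g/L to mol/L."""
    return grams_per_litre / MOLAR_MASS

print(round(molar_to_gpl(0.30), 1))  # ~57.6 g/L for the 0.30 mol/L upper end quoted for lemons and limes
print(round(gpl_to_molar(47), 3))    # ~0.245 mol/L for the ~47 g/L juice figure quoted above
```

On these assumptions the two sets of figures are roughly consistent: the ~47 g/L juice value falls within the 0.005–0.30 mol/L range given for citrus fruits.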
Chemical characteristics Citric acid can be obtained in an anhydrous (water-free) form or as a monohydrate. The anhydrous form crystallizes from hot water, while the monohydrate forms when citric acid is crystallized from cold water. The monohydrate can be converted to the anhydrous form at about 78 °C. Citric acid also dissolves in absolute (anhydrous) ethanol (76 parts of citric acid per 100 parts of ethanol) at 15 °C. It decomposes with loss of carbon dioxide above about 175 °C. Citric acid is a triprotic acid, with pKa values, extrapolated to zero ionic strength, of 3.128, 4.761, and 6.396 at 25 °C. The pKa of the hydroxyl group has been found, by means of 13C NMR spectroscopy, to be 14.4. The speciation diagram shows that solutions of citric acid are buffer solutions between about pH 2 and pH 8. In biological systems around pH 7, the two species present are the citrate ion and mono-hydrogen citrate ion. The SSC 20X hybridization buffer is an example in common use. Tables compiled for biochemical studies are available. By contrast, the pH of a 1 mM solution of citric acid will be about 3.2 (a numerical sketch of this calculation is given below). The pH of fruit juices from citrus fruits like oranges and lemons depends on the citric acid concentration, with a higher concentration of citric acid resulting in a lower pH. Acid salts of citric acid can be prepared by careful adjustment of the pH before crystallizing the compound. See, for example, sodium citrate. The citrate ion forms complexes with metallic cations. The stability constants for the formation of these complexes are quite large because of the chelate effect. Consequently, it forms complexes even with alkali metal cations. However, when a chelate complex is formed using all three carboxylate groups, the chelate rings have 7 and 8 members, which are generally less stable thermodynamically than smaller chelate rings. In consequence, the hydroxyl group can be deprotonated, forming part of a more stable 5-membered ring, as in ammonium ferric citrate. Citric acid can be esterified at one or more of its three carboxylic acid groups to form any of a variety of mono-, di-, tri-, and mixed esters. Biochemistry Citric acid cycle Citrate is an intermediate in the citric acid cycle, also known as the TCA (TriCarboxylic Acid) cycle or the Krebs cycle, a central metabolic pathway for animals, plants, and bacteria. In the Krebs cycle, citrate synthase catalyzes the condensation of oxaloacetate with acetyl CoA to form citrate. Citrate then acts as the substrate for aconitase and is converted into aconitic acid. The cycle ends with regeneration of oxaloacetate. This series of chemical reactions is the source of two-thirds of the food-derived energy in higher organisms. The chemical energy released is made available in the form of adenosine triphosphate (ATP). Hans Adolf Krebs received the 1953 Nobel Prize in Physiology or Medicine for the discovery. Other biological roles Citrate can be transported out of the mitochondria and into the cytoplasm, then broken down into acetyl-CoA for fatty acid synthesis, and into oxaloacetate. Citrate is a positive modulator of this conversion, and allosterically regulates the enzyme acetyl-CoA carboxylase, which is the regulating enzyme in the conversion of acetyl-CoA into malonyl-CoA (the commitment step in fatty acid synthesis). In short, citrate is transported into the cytoplasm, converted into acetyl-CoA, which is then converted into malonyl-CoA by acetyl-CoA carboxylase, which is allosterically modulated by citrate. 
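The buffering behaviour and the quoted pH of a dilute solution can be reproduced numerically from the three pKa values given above. The following is a minimal sketch, not part of the article: it solves the charge balance for a generic triprotic acid by bisection, ignores ionic-strength and activity corrections, and assumes a water autoionization constant of 1.0 × 10⁻¹⁴.

```python
# pKa values for citric acid at 25 degC, as quoted in the text above
PKAS = (3.128, 4.761, 6.396)
KW = 1e-14  # assumed water autoionization constant; activity corrections are ignored

def species_fractions(h, kas):
    """Fractions of H3A, H2A-, HA2- and A3- for a triprotic acid at hydrogen-ion concentration h."""
    k1, k2, k3 = kas
    terms = (h ** 3, h ** 2 * k1, h * k1 * k2, k1 * k2 * k3)
    total = sum(terms)
    return [t / total for t in terms]

def ph_of_solution(c_total, pkas=PKAS):
    """Solve the charge balance for a triprotic acid of total concentration c_total (mol/L) by bisection."""
    kas = tuple(10.0 ** -p for p in pkas)
    lo, hi = 0.0, 14.0
    for _ in range(100):
        ph = 0.5 * (lo + hi)
        h = 10.0 ** -ph
        f = species_fractions(h, kas)
        anion_charge = c_total * (f[1] + 2 * f[2] + 3 * f[3]) + KW / h
        if h > anion_charge:   # guess is too acidic, so the true pH is higher
            lo = ph
        else:
            hi = ph
    return 0.5 * (lo + hi)

print(round(ph_of_solution(0.001), 2))  # ~3.2 for a 1 mM solution, matching the figure quoted above
```

With these assumptions the calculation returns a pH of about 3.2 for a 1 mM solution, and evaluating the species fractions across pH 2–8 reproduces the buffer region described by the speciation diagram.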
High concentrations of cytosolic citrate can inhibit phosphofructokinase, the catalyst of a rate-limiting step of glycolysis. This effect is advantageous: high concentrations of citrate indicate that there is a large supply of biosynthetic precursor molecules, so there is no need for phosphofructokinase to continue to send molecules of its substrate, fructose 6-phosphate, into glycolysis. Citrate acts by augmenting the inhibitory effect of high concentrations of ATP, another sign that there is no need to carry out glycolysis. Citrate is a vital component of bone, helping to regulate the size of apatite crystals. Applications Food and drink Because it is one of the stronger edible acids, the dominant use of citric acid is as a flavoring and preservative in food and beverages, especially soft drinks and candies. Within the European Union it is denoted by E number E330. Citrate salts of various metals are used to deliver those minerals in a biologically available form in many dietary supplements. Citric acid has 247 kcal per 100 g. In the United States the purity requirements for citric acid as a food additive are defined by the Food Chemicals Codex, which is published by the United States Pharmacopoeia (USP). Citric acid can be added to ice cream as an emulsifying agent to keep fats from separating, to caramel to prevent sucrose crystallization, or in recipes in place of fresh lemon juice. Citric acid is used with sodium bicarbonate in a wide range of effervescent formulae, both for ingestion (e.g., powders and tablets) and for personal care (e.g., bath salts, bath bombs, and cleaning of grease). Citric acid in dry powdered form is commonly sold in markets and groceries as "sour salt", owing to its physical resemblance to table salt. It is used in culinary applications as an alternative to vinegar or lemon juice where a pure acid is needed. Citric acid can be used in food coloring to balance the pH level of a normally basic dye. Cleaning and chelating agent Citric acid is an excellent chelating agent, binding metals and making them soluble. It is used to remove and discourage the buildup of limescale from boilers and evaporators. It can be used to treat water, which makes it useful in improving the effectiveness of soaps and laundry detergents. By chelating the metals in hard water, it lets these cleaners produce foam and work better without the need for water softening. Citric acid is the active ingredient in some bathroom and kitchen cleaning solutions. A solution with a six percent concentration of citric acid will remove hard water stains from glass without scrubbing. Citric acid can be used in shampoo to wash out wax and coloring from the hair. Illustrative of its chelating abilities, citric acid was the first successful eluant used for total ion-exchange separation of the lanthanides, during the Manhattan Project in the 1940s. In the 1950s, it was replaced by the far more efficient EDTA. In industry, it is used to dissolve rust from steel, and to passivate stainless steels. Cosmetics, pharmaceuticals, dietary supplements, and foods Citric acid is used as an acidulant in creams, gels, and liquids. Used in foods and dietary supplements, it may be classified as a processing aid if it was added for a technical or functional effect (e.g., acidulant, chelator, or viscosifier). If it is still present in insignificant amounts, and the technical or functional effect is no longer present, it may be exempt from labeling (21 CFR §101.100(c)). 
Citric acid is an alpha hydroxy acid and is an active ingredient in chemical skin peels. Citric acid is commonly used as a buffer to increase the solubility of brown heroin. Citric acid is used as one of the active ingredients in the production of facial tissues with antiviral properties. Other uses The buffering properties of citrates are used to control pH in household cleaners and pharmaceuticals. Citric acid is used as an odorless alternative to white vinegar for fabric dyeing with acid dyes. Sodium citrate is a component of Benedict's reagent, used for both qualitative and quantitative identification of reducing sugars. Citric acid can be used as an alternative to nitric acid in passivation of stainless steel. Citric acid can be used as a lower-odor stop bath as part of the process for developing photographic film. Photographic developers are alkaline, so a mild acid is used to neutralize and stop their action quickly, but commonly used acetic acid leaves a strong vinegar odor in the darkroom. Citric acid is an excellent soldering flux, either dry or as a concentrated solution in water. It should be removed after soldering, especially with fine wires, as it is mildly corrosive. It dissolves and rinses quickly in hot water. Alkali citrate can be used as an inhibitor of kidney stones by increasing urine citrate levels (useful for prevention of calcium stones) and increasing urine pH (useful for preventing uric acid and cystine stones). Synthesis of other organic compounds Citric acid is a versatile precursor to many other organic compounds. Dehydration routes give itaconic acid and its anhydride. Citraconic acid can be produced via thermal isomerization of itaconic acid anhydride. The required itaconic acid anhydride is obtained by dry distillation of citric acid. Aconitic acid can be synthesized by dehydration of citric acid using sulfuric acid: (HO2CCH2)2C(OH)CO2H → HO2CCH=C(CO2H)CH2CO2H + H2O Acetonedicarboxylic acid can also be prepared by decarboxylation of citric acid in fuming sulfuric acid. Safety Although citric acid is a weak acid, exposure to the pure compound can cause adverse effects. Inhalation may cause cough, shortness of breath, or sore throat. Over-ingestion may cause abdominal pain and sore throat. Exposure of concentrated solutions to skin and eyes can cause redness and pain. Long-term or repeated consumption may cause erosion of tooth enamel. Compendial status
British Pharmacopoeia
Japanese Pharmacopoeia
Physical sciences
Carbon–oxygen bond
null
54592
https://en.wikipedia.org/wiki/Cicada
Cicada
The cicadas () are a superfamily, the Cicadoidea, of insects in the order Hemiptera (true bugs). They are in the suborder Auchenorrhyncha, along with smaller jumping bugs such as leafhoppers and froghoppers. The superfamily is divided into two families, the Tettigarctidae, with two species in Australia, and the Cicadidae, with more than 3,000 species described from around the world; many species remain undescribed. Nearly all cicada species are annual cicadas, with the exception of the few North American periodical cicada species of the genus Magicicada, which in a given region emerge en masse every 13 or 17 years. Cicadas have prominent eyes set wide apart, short antennae, and membranous front wings. They have an exceptionally loud song, produced in most species by the rapid buckling and unbuckling of drum-like tymbals. The earliest known fossil Cicadomorpha appeared in the Upper Permian period; extant species occur all around the world in temperate to tropical climates. They typically live in trees, feeding on watery sap from xylem tissue, and laying their eggs in a slit in the bark. Most cicadas are cryptic. The vast majority of species are active during the day as adults, with some calling at dawn or dusk. Only a few rare species are known to be nocturnal. The species of one exclusively North American genus, Magicicada (the periodical cicadas), spend most of their lives as underground nymphs and emerge at predictable intervals of 13 or 17 years, depending on the species and the location. The unusual duration and synchronization of their emergence may reduce the number of cicadas lost to predation, both by making them a less reliably available prey (so that any predator that evolved to depend on cicadas for sustenance might starve waiting for their emergence), and by emerging in such huge numbers that they will satiate any remaining predators before losing enough of their number to threaten their survival as a species. The annual cicadas are species that emerge every year. Though these cicadas' life cycles can vary from 1 to 9 or more years as underground nymphs, their emergence above ground as adults is not synchronized, so some members of each species appear every year. Cicadas have been featured in literature since the time of Homer's Iliad and as motifs in art from the Chinese Shang dynasty. They have also been used in myth and folklore as symbols of carefree living and immortality. The cicada is also mentioned in Hesiod's Shield (ll.393–394), in which it is said to sing when millet first ripens. Cicadas are eaten by humans in various parts of the world, including China, Myanmar, Malaysia, and central Africa. Etymology The name is directly from the onomatopoeic Latin cicada. Taxonomy and diversity The superfamily Cicadoidea is a sister group of the Cercopoidea (the froghoppers). Cicadas are arranged into two families: the Tettigarctidae and Cicadidae. Of the two extant species of the Tettigarctidae, one occurs in southern Australia and the other in Tasmania. The family Cicadidae is subdivided into the subfamilies Cicadettinae, Cicadinae, Derotettiginae, Tibicininae (or Tettigadinae), and Tettigomyiinae, and they are found on all continents except Antarctica. Some previous works also included a family-level taxon called the Tibiceninae. The largest species is the Malaysian emperor cicada Megapomponia imperatoria; its wingspan is up to about . Cicadas are also notable for the great length of time some species take to mature. 
At least 3,000 cicada species are distributed worldwide, in essentially any habitat that has deciduous trees, with the majority being in the tropics. Most genera are restricted to a single biogeographical region, and many species have a very limited range. This high degree of endemism has been used to study the biogeography of complex island groups such as in Indonesia and Asia. There are several hundred described species in Australia and New Zealand, around 150 in South Africa, over 170 in America north of Mexico, at least 800 in Latin America, and over 200 in Southeast Asia and the Western Pacific. About 100 species occur in the Palaearctic. A few species are found in southern Europe, and a single species was known from England, the New Forest cicada, Cicadetta montana, which also occurs in continental Europe. Many species await formal description and many well-known species are yet to be studied carefully using modern acoustic analysis tools that allow their songs to be characterized. Many of the North American species are the annual or jarfly or dog-day cicadas, members of the Neotibicen, Megatibicen, or Hadoa genera, so named because they emerge in late July and August. The best-known North American genus, however, may be Magicicada. These periodical cicadas have an extremely long life cycle of 13 or 17 years, with adults suddenly and briefly emerging in large numbers. Australian cicadas are found on tropical islands and cold coastal beaches around Tasmania, in tropical wetlands, high and low deserts, alpine areas of New South Wales and Victoria, large cities including Sydney, Melbourne, and Brisbane, and Tasmanian highlands and snowfields. Many of them have common names such as cherry nose, brown baker, red eye, greengrocer, yellow Monday, whisky drinker, double drummer, and black prince. The Australian greengrocer, Cyclochila australasiae, is among the loudest insects in the world. More than 40 species from five genera populate New Zealand, ranging from sea level to mountain tops, and all are endemic to New Zealand and its surrounding islands (Kermadec Islands, Chatham Islands). One species is found on Norfolk Island, which technically is part of Australia. The closest relatives of the NZ cicadas live in New Caledonia and Australia. Palaeontology Fossil Cicadomorpha first appeared in the Late Triassic. The superfamily Palaeontinoidea contains three families. The Upper Permian Dunstaniidae are found in Australia and South Africa, and also in younger rocks from China. The Upper Triassic Mesogereonidae are found in Australia and South Africa. This group, though, is currently thought to be more distantly related to Cicadomorpha than previously thought. The Palaeontinidae or "giant cicadas" (though only distantly related to true cicadas) come from the Jurassic and Lower Cretaceous of Eurasia and South America. The first of these was a fore wing discovered in the Taynton Limestone Formation of Oxfordshire, England; it was initially described as a butterfly in 1873, before being recognised as a cicada-like form and renamed Palaeontina oolitica. Tettigarctidae and Cicadidae had diverged from each other prior to or during the Jurassic, as evidenced by fossils related to both lineages present by the Middle Jurassic (~ 165 million years ago) The morphology of well preserved fossils of early relatives of Cicadidae from the mid Cretaceous Burmese amber of Myanmar suggests that unlike many modern cicadids, they were either silent or only made quiet sounds. 
Most fossil Cicadidae are known from the Cenozoic, and the oldest unambiguously identified modern cicadid is Davispia bearcreekensis (subfamily Tibicininae) from the Paleocene, around 56-59 million years ago. Biology Description Cicadas are large insects made conspicuous by the courtship calls of the males. They are characterized by having three joints in their tarsi, and having small antennae with conical bases and three to six segments, including a seta at the tip. The Auchenorrhyncha differ from other hemipterans by having a rostrum that arises from the posteroventral part of the head, complex sound-producing membranes, and a mechanism for linking the wings that involves a down-rolled edging on the rear of the fore wing and an upwardly protruding flap on the hind wing. Cicadas are feeble jumpers, and nymphs lack the ability to jump altogether. Another defining characteristic is the adaptations of the fore limbs of nymphs for underground life. The relict family Tettigarctidae differs from the Cicadidae in having the prothorax extending as far as the scutellum, and by lacking the tympanal apparatus. The adult insect, known as an imago, is in total length in most species. The largest, the empress cicada (Megapomponia imperatoria), has a head-body length around , and its wingspan is . Cicadas have prominent compound eyes set wide apart on the sides of the head. The short antennae protrude between the eyes or in front of them. They also have three small ocelli located on the top of the head in a triangle between the two large eyes; this distinguishes cicadas from other members of the Hemiptera. The mouthparts form a long, sharp rostrum that they insert into the plant to feed. The postclypeus is a large, nose-like structure that lies between the eyes and makes up most of the front of the head; it contains the pumping musculature. The thorax has three segments and houses the powerful wing muscles. They have two pairs of membranous wings that may be hyaline, cloudy, or pigmented. The wing venation varies between species and may help in identification. The middle thoracic segment has an operculum on the underside, which may extend posteriorly and obscure parts of the abdomen. The abdomen is segmented, with the hindermost segments housing the reproductive organs, and terminates in females with a large, saw-edged ovipositor. In males, the abdomen is largely hollow and used as a resonating chamber. The surface of the fore wing is superhydrophobic; it is covered with minute, waxy cones, blunt spikes that create a water-repellent film. Rain rolls across the surface, removing dirt in the process. In the absence of rain, dew condenses on the wings. When the droplets coalesce, the cicada leaps several millimetres into the air, which also serves to clean the wings. Bacteria landing on the wing surface are not repelled; rather, their membranes are torn apart by the nanoscale-sized spikes, making the wing surface the first-known biomaterial that can kill bacteria. Temperature regulation Desert cicadas such as Diceroprocta apache are unusual among insects in controlling their temperature by evaporative cooling, analogous to sweating in mammals. When their temperature rises above about , they suck excess sap from the food plants and extrude the excess water through pores in the tergum at a modest cost in energy. Such a rapid loss of water can be sustained only by feeding on water-rich xylem sap. At lower temperatures, feeding cicadas would normally need to excrete the excess water. 
By evaporative cooling, desert cicadas can reduce their bodily temperature by some 5 °C. Some non-desert cicada species such as Magicicada tredecem also cool themselves evaporatively, but less dramatically. Conversely, many other cicadas can voluntarily raise their body temperatures as much as 22 °C (40 °F) above ambient temperature. Song In the majority of species, the "singing" of male cicadas is produced principally by a special structure called a tymbal, a pair of which lies below each side of the anterior abdominal region. The structure is buckled by muscular action and, being made of resilin, unbuckles rapidly on muscle relaxation, producing the characteristic sounds. Some cicadas, however, have mechanisms for stridulation, sometimes in addition to the tymbals. Here, the wings are rubbed over a series of midthoracic ridges. In the Chinese species Subpsaltria yangi, both males and females can stridulate. The sounds may further be modulated by membranous coverings and by resonant cavities. The male abdomen in some species is largely hollow, and acts as a sound box. By rapidly vibrating these membranes, a cicada combines the clicks into apparently continuous notes, and enlarged chambers derived from the tracheae serve as resonance chambers with which it amplifies the sound. The cicada also modulates the song by positioning its abdomen toward or away from the substrate. Partly by the pattern in which it combines the clicks, each species produces its own distinctive mating songs and acoustic signals, ensuring that the song attracts only appropriate mates. The tettigarctid (or hairy) cicadas Tettigarcta crinita of Australia and T. tomentosa have rudimentary tymbals in both sexes and do not produce airborne sounds. Both males and females produce vibrations that are transmitted through the tree substrate. They are considered to represent the original state from which other cicada communication has evolved. The average temperature of the natural habitat for the South American species Fidicina rana is about . During sound production, the temperature of the tymbal muscles was found to be significantly higher. Many cicadas sing most actively during the hottest hours of a summer day, following roughly a 24-hour cycle. Most cicadas are diurnal in their calling and depend on external heat to warm them up, while a few are capable of raising their temperatures using muscle action, and some species are known to call at dusk. Kanakia gigas and Froggattoides typicus are among the few that are known to be truly nocturnal, and there may be other nocturnal species living in tropical forests. Cicadas call from varying heights on trees. Where multiple species occur, the species may use different heights and timing of calling. While the vast majority of cicadas call from above the ground, two Californian species, Okanagana pallidula and O. vanduzeei, are known to call from hollows made at the base of the tree below ground level. The adaptive significance is unclear, as the calls are not amplified or modified by the burrow structure, but this behavior may help them avoid predation. Although only males produce the cicadas' distinctive sounds, both sexes have membranous structures called tympana (singular: tympanum) by which they detect sounds, the equivalent of having ears. Males disable their own tympana while calling, thereby preventing damage to their hearing; a necessity partly because some cicadas produce sounds up to 120 dB (SPL), which is among the loudest of all insect-produced sounds. 
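To put the 120 dB (SPL) figure in perspective, a decibel sound pressure level can be converted to sound pressure using the standard definition L = 20·log10(p/p0), where p0 = 20 µPa is the usual reference pressure. The short sketch below is illustrative and not from the article.

```python
import math

P_REF = 20e-6  # reference sound pressure for dB SPL, in pascals (20 micropascals)

def spl_to_pressure(db_spl):
    """Convert a sound pressure level in dB SPL to RMS sound pressure in pascals."""
    return P_REF * 10 ** (db_spl / 20)

def pressure_to_spl(pressure_pa):
    """Convert an RMS sound pressure in pascals back to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_to_pressure(120))  # 20.0 Pa for the loudest cicada calls mentioned above
```

At 120 dB SPL this gives an RMS sound pressure of roughly 20 Pa close to the insect, which is why close-range exposure is described as hazardous to hearing.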
The song is loud enough to cause permanent hearing loss in humans should the cicada be at close range. In contrast, some small species have songs so high in pitch that they are inaudible to humans. For the human ear, telling precisely where a cicada song originates is often difficult. The pitch is nearly constant, the sound is continuous to the human ear, and cicadas sing in scattered groups. In addition to the mating song, many species have a distinct distress call, usually a broken and erratic sound emitted by the insect when seized or panicked. Some species also have courtship songs, generally quieter, and produced after a female has been drawn to the calling song. Males also produce encounter calls, whether in courtship or to maintain personal space within choruses. The songs of cicadas are considered by entomologists to be unique to a given species, and a number of resources exist to collect and analyse cicada sounds. Life cycle In some species of cicadas, the males remain in one location and call to attract females. Sometimes, several males aggregate and call in chorus. In other species, the males move from place to place, usually with quieter calls, while searching for females. The Tettigarctidae differ from other cicadas in producing vibrations in the substrate rather than audible sounds. After mating, the female cuts slits into the bark of a twig where she deposits her eggs. Both male and female cicadas die within a few weeks after emerging from the soil. Although they have mouthparts and are able to consume some plant liquids for nutrition, the amount eaten is very small and the insects have a natural adult lifespan of less than two months. When the eggs hatch, the newly hatched nymphs drop to the ground and burrow. Cicadas live underground as nymphs for most of their lives at depths down to about . Nymphs have strong front legs for digging and excavating chambers near roots, where they feed on xylem sap. In the process, their bodies and the interior of the burrow become coated in anal fluids. In wet habitats, larger species construct mud towers above ground to aerate their burrows. In the final nymphal instar, they construct an exit tunnel to the surface and emerge. They then molt (shed their skins) on a nearby plant for the last time, and emerge as adults. The exuviae or abandoned exoskeletons remain, still clinging to the bark of the tree. Most cicadas go through a life cycle that lasts 2–5 years. Some species have much longer life cycles, such as the North American genus Magicicada, which has a number of distinct "broods" that go through either a 17-year (Brood XIII) or, in some parts of the region, a 13-year (Brood XIX) life cycle. The long life cycles may have developed as a response to predators, such as the cicada killer wasp and praying mantis. A specialist predator with a shorter life cycle of at least two years could not reliably prey upon the cicadas; for example, a 17-year cicada facing a predator with a five-year life cycle will only be threatened by a peak predator population every 85 (5 × 17) years, while a non-prime cycle such as 15 years would coincide with the predator peak at every emergence. An alternate hypothesis is that these long life cycles evolved during the ice ages so as to overcome cold spells, and that as species co-emerged and hybridized, they left behind distinct, non-hybridizing species with periods matching prime numbers. 
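The arithmetic behind the prime-cycle argument can be made explicit: two periodic cycles coincide once every least common multiple of their lengths. The short sketch below is illustrative only; the cycle lengths are the ones discussed in the text.

```python
from math import gcd

def co_emergence_period(cycle_a, cycle_b):
    """Years between simultaneous peaks of two periodic cycles: their least common multiple."""
    return cycle_a * cycle_b // gcd(cycle_a, cycle_b)

print(co_emergence_period(17, 5))   # 85: a 5-year predator peaks with a 17-year brood only every 85 years
print(co_emergence_period(15, 5))   # 15: a non-prime 15-year cycle would meet the predator peak at every emergence
print(co_emergence_period(13, 17))  # 221: 13- and 17-year broods emerge together once every 221 years
```

Because 13 and 17 are prime, any shorter predator cycle overlaps with the cicada emergence as rarely as arithmetically possible, which is the point of the prime-cycle hypothesis described above.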
The 13- and 17-year cicadas only emerge in the midwestern and eastern US in the same year every 221 years (13 × 17), with 2024 being the first such year since 1803. Diet Cicada nymphs drink sap from the xylem of various species of trees, including oak, cypress, willow, ash, and maple. While common folklore indicates that adults do not eat, they actually do drink plant sap using their sucking mouthparts. Cicadas excrete fluid in streams of droplets due to their high volume consumption of xylem sap. The jets of urine that cicadas produce have a velocity of up to 3 meters per second, making them the fastest among all assessed animals, including mammals like elephants and horses. Locomotion Cicadas, unlike other Auchenorrhyncha, are not adapted for jumping (saltation). They have the usual insect modes of locomotion, walking and flight, but they do not walk or run well, and take to the wing to travel distances greater than a few centimetres. Predators, parasites, and pathogens Cicadas are commonly eaten by birds and mammals, as well as bats, wasps, mantises, spiders, and robber flies. In times of mass emergence of cicadas, various amphibians, fish, reptiles, mammals, and birds change their foraging habits so as to benefit from the glut. Newly hatched nymphs may be eaten by ants, and nymphs living underground are preyed on by burrowing mammals such as moles. In northern Japan, brown bears prey on final instar nymphs of cicadas during summer by digging up the ground. In Australia, cicadas are preyed on by the Australian cicada killer wasp (Exeirus lateritius), which stings and stuns cicadas high in the trees, making them drop to the ground, where the cicada hunter mounts and carries them, pushing with its hind legs, sometimes over a distance of 100 m, until they can be shoved down into its burrow, where the numb cicadas are placed onto one of many shelves in a "catacomb", to form the food stock for the wasp grub that grows out of the egg deposited there. A katydid predator from Australia is capable of attracting singing male cicadas of a variety of species by imitating the timed click replies of sexually receptive female cicadas, which respond in pair formation by flicking their wings. Their prime-number life cycle prevents predators with a life cycle of two or more years from synchronising with their emergence. Several fungal diseases infect and kill adult cicadas, while other fungi in the genera Ophiocordyceps and Isaria attack nymphs. Massospora cicadina specifically attacks the adults of periodical cicadas, the spores remaining dormant in the soil between outbreaks. This fungus is also capable of dosing cicadas with psilocybin, the psychedelic drug found in magic mushrooms, as well as cathinone, an alkaloid similar to various amphetamines. These chemicals alter the behaviour of the cicadas, driving males to copulate, including attempts with males, and is thought to be beneficial to the fungus, as the fungal spores are dispersed by a larger number of infected carriers. Plants can also defend themselves against cicadas. Although cicadas can feed on the roots of gymnosperms, it has been found that resinous conifers such as pine do not allow the eggs of Magicicada to hatch, the resin sealing up the egg cavities. Antipredator adaptations Cicadas use a variety of strategies to evade predators. Large cicadas can fly rapidly to escape if disturbed. Many are extremely well camouflaged to evade predators such as birds that hunt by sight. 
Being coloured like tree bark and disruptively patterned to break up their outlines, they are difficult to discern; their partly transparent wings are held over the body and pressed close to the substrate. Some cicada species play dead when threatened. Some cicadas, such as Hemisciera maculipennis, display bright deimatic flash coloration on their hind wings when threatened; the sudden contrast helps to startle predators, giving the cicadas time to escape. Most cicadas are diurnal and rely on camouflage when at rest, but some species use aposematism-related Batesian mimicry, wearing the bright colors that warn of toxicity in other animals; the Malaysian Huechys sanguinea has conspicuous red and black warning coloration, is diurnal, and boldly flies about in full view of possible predators. Predators such as the sarcophagid fly Emblemasoma hunt cicadas by sound, being attracted to their songs. Singing males soften their song so that the listener's attention is drawn to neighbouring, louder singers, or they cease singing altogether as a predator approaches. A loud cicada song, especially in chorus, has been asserted to repel predators, but observations of predator responses refute the claim. In human culture In art and literature Cicadas have been featured in literature since the time of Homer's Iliad, and as motifs in decorative art from the Chinese Shang dynasty (1766–1122 BCE). They are described by Aristotle in his History of Animals and by Pliny the Elder in his Natural History; their mechanism of sound production is mentioned by Hesiod in his poem "Works and Days": "when the Skolymus flowers, and the tuneful Tettix sitting on his tree in the weary summer season pours forth from under his wings his shrill song". In the classic 14th-century Chinese novel Romance of the Three Kingdoms, Diaochan took her name from the sable (diāo) tails and jade decorations in the shape of cicadas (chán), which adorned the hats of high-level officials. In the Japanese novel The Tale of Genji, the title character poetically likens one of his many love interests to a cicada because she delicately sheds her robe the way a cicada sheds its shell when molting. Cicada exuviae play a role in the manga Winter Cicada. Cicadas are a frequent subject of haiku, where, depending on type, they can indicate spring, summer, or autumn. Shaun Tan's illustrated book Cicada tells the story of a hardworking but underappreciated cicada working in an office. Branden Jacobs-Jenkins' play Appropriate takes place on an Arkansas farm in summer, and calls for the sounds of mating cicadas to underscore the entire show. In fashion Being lightweight and equipped with hooklike legs, the exuviae of cicadas can be used as hair or clothing accessories. As food and folk medicine Cicadas were eaten in Ancient Greece, and are consumed in selected regions in modern China, both as adults and (more often) as nymphs. Cicadas are also eaten in Malaysia, Burma, North America, and central Africa, as well as the Balochistan region of Pakistan, especially in Ziarat. Female cicadas are prized for being meatier. Shells of cicadas are employed in traditional Chinese medicine, which claims that they possess anti-convulsive, sedative, and hypothermic effects. The 17-year "Onondaga Brood" Magicicada is culturally important and a particular delicacy to the Onondaga people, and is considered a novelty food item by modern consumers in several states. 
In music Cicadas are featured in the protest song "Como La Cigarra" ("Like the Cicada"), written by Argentinian poet and composer María Elena Walsh. In the song, the cicada is a symbol of survival and defiance against death. The song was recorded by Mercedes Sosa, among other Latin American musicians. In North America and Mexico, there is a well-known song, "" ("The Cicada"), written by Raymundo Perez Soto in the Mariachi tradition, which romanticises the insect as a creature that sings until it dies. The track "Malvadeza", by Brazilian artist Lenine from the album Chão, is built upon the sound of a cicada that can be heard throughout the track. Cicada sounds feature heavily on the 2021 album Solar Power by New Zealand artist Lorde. She described cicada song as being emblematic of the New Zealand summer. In mythology and folklore Cicadas have been used as money, in folk medicine, to forecast the weather, to provide song (in China), and in folklore and myths around the world. In France, the cicada represents the folklore of Provence and the Mediterranean cities. The cicada has represented since classical antiquity. Jean de La Fontaine began his collection of fables Les fables de La Fontaine with the story "La Cigale et la Fourmi" ("The Cicada and the Ant") based on one of Aesop's fables; in it, the cicada spends the summer singing, while the ant stores away food, and the cicada finds herself without food when the weather turns bitter. In Chinese tradition, the cicada (, chán) symbolises rebirth and immortality. In the Chinese essay "Thirty-Six Stratagems", the phrase "to shed the golden cicada skin" () is the poetic name for using a decoy (leaving the exuviae) to fool enemies. In the Chinese classic novel Journey to the West (16th century), the protagonist Priest of Tang was named the Golden Cicada. In Japan, the cicada is associated with the summer season. For many Japanese people, summer has not officially begun until the first songs of the cicada are heard. According to Lafcadio Hearn, the song of Meimuna opalifera, called tsuku-tsuku boshi, is said to indicate the end of summer, and it is so called because of its particular call. In the Homeric Hymn to Aphrodite, the goddess Aphrodite retells the legend of how Eos, the goddess of the dawn, asked Zeus to let her lover Tithonus live forever as an immortal. Zeus granted her request, but because Eos forgot to ask him to also make Tithonus ageless, Tithonus never died, but he did grow old. Eventually, he became so tiny and shriveled that he turned into the first cicada. The Greeks also used a cicada sitting on a harp as an emblem of music. In Kapampangan mythology in the Philippines, the goddess of dusk, Sisilim, is said to be greeted by the sounds and appearance of cicadas whenever she appears. As pests Cicadas feed on sap; they do not bite or sting in a true sense, but may occasionally mistake a person's arm for a plant limb and attempt to feed. Male cicadas produce very loud calls that can damage human hearing. Cicadas are not major agricultural pests, but in some outbreak years, trees may be overwhelmed by the sheer numbers of females laying their eggs in the shoots. Small trees may wilt and larger trees may lose small branches. Although the feeding activities of the nymphs generally do little damage, during the year before an outbreak of periodical cicadas the large nymphs feed heavily and plant growth may suffer. 
Some species have turned from wild grasses to sugarcane, which affects the crop adversely, and in a few isolated cases, females have oviposited on cash crops such as date palms, grape vines, citrus trees, asparagus, and cotton. Cicadas sometimes cause damage to ornamental shrubs and trees, mainly in the form of scarring left on tree branches where the females have laid their eggs. Branches of young trees may die as a result.
Biology and health sciences
Hemiptera (true bugs)
null
54631
https://en.wikipedia.org/wiki/Saturday
Saturday
Saturday is the day of the week between Friday and Sunday. No later than the 2nd century, the Romans named Saturday ("Saturn's Day") for the god Saturn. His planet, Saturn, controlled the first hour of that day, according to Vettius Valens. The day's name was introduced into West Germanic languages and is recorded in the Low German languages such as Middle Low German , saterdach, Middle Dutch (Modern Dutch ), and Old English , Sæterndæġ or . Origins Between the 1st and 3rd centuries AD, the Roman Empire gradually replaced the eight-day Roman nundinal cycle with the seven-day week. The astrological order of the days was explained by Vettius Valens and Dio Cassius (and Chaucer gave the same explanation in his Treatise on the Astrolabe). According to these authors, it was a principle of astrology that the heavenly bodies presided, in succession, over the hours of the day. The association of the weekdays with the respective deities is thus indirect: the days are named for the planets, which were in turn named for the deities. The Germanic peoples adapted the system introduced by the Romans but glossed their indigenous gods over the Roman deities in a process known as interpretatio germanica. In the case of Saturday, however, the Roman name was borrowed directly by West Germanic peoples, apparently because none of the Germanic gods was considered to be a counterpart of the Roman god Saturn. Otherwise, Old Norse and Old High German did not borrow the name of the Roman god (Icelandic , German ). In the Eastern Orthodox Church, Saturdays are days on which the Theotokos (Mother of God) and All Saints are commemorated, and on which prayers for the dead are especially offered, in remembrance that it was on a Saturday that Jesus lay dead in the tomb. The Octoechos contains hymns on these themes, arranged in an eight-week cycle, that are chanted on Saturdays throughout the year. At the end of services on Saturday, the dismissal begins with the words: "May Christ our True God, through the intercessions of his most-pure Mother, of the holy, glorious and right victorious Martyrs, of our reverend and God-bearing Fathers…". For the Orthodox, Saturday — with the sole exception of Holy Saturday — is never a strict fast day. When a Saturday falls during one of the fasting seasons (Great Lent, Nativity Fast, Apostles' Fast, Dormition Fast), the fasting rules are always lessened to an extent. The Great Feast of the Exaltation of the Cross and the Beheading of St. John the Baptist are normally observed as strict fast days, but if they fall on a Saturday or Sunday, the fast is lessened. Name and associations Saturday has two names in modern Standard German. The first word, , is always used in Austria, Liechtenstein, and the German-speaking part of Switzerland, and generally used in southern and western Germany. It derives from Old High German , the first part (sambaz) of which derives from Greek , and this Greek word derives from Hebrew , . However, the current German word for Sabbath is . The second name for Saturday in German is , which derives from Old High German , and is closely related to the Old English word . It literally means "Sun eve", i.e., "the day before Sunday". is generally used in northern and eastern Germany, and was also the official name for Saturday in East Germany. Although the two names are used in different regions, each is usually understood, at least passively, in the other region. In West Frisian there are also two words for Saturday. 
In Wood Frisian it is , and in Clay Frisian it is , derived from , a combination of Old Frisian , meaning sun and joen, meaning eve. In the Westphalian dialects of Low Saxon, in East Frisian Low Saxon and in the Saterland Frisian language, Saturday is called , also akin to Dutch , which has the same linguistic roots as the English word Saturday. It was formerly thought that the English name referred to a deity named Sætere who was venerated by the pre-Christian peoples of north-western Germany, some of whom were the ancestors of the Anglo-Saxons. Sætere was identified as either a god associated with the harvest of possible Slav origin, or another name for Loki a complex deity associated with both good and evil; this latter suggestion may be due to Jacob Grimm. Regardless,modern dictionaries derive the name from Saturn. In most languages of India, Saturday is , meaning day, based on Shani, the Hindu god manifested in the planet Saturn. Some Hindus fast on Saturdays to reverse the ill effects of Shani as well as pray to and worship the deity Hanuman. In the Thai solar calendar of Thailand, the day is named from the Pali word for Saturn, and the color associated with Saturday is purple. In Pakistan, Saturday is , meaning the week. In Eastern Indian languages like Bengali Saturday is called , meaning Saturn's Day and is the first day of the Bengali Week in the Bengali calendar. In Islamic countries, Fridays are considered as the last or penultimate day of the week and are holidays along with Thursdays or Saturdays; Saturday is called , (cognate to Sabbath) and it is the first day of the week in many Arab countries but the Last Day in other Islamic countries such as Indonesia, Malaysia, Brunei, Central Asian countries. In Japanese, the word Saturday is , , meaning 'soil day' and is associated with , : Saturn (the planet), literally meaning "soil star". Similarly, in Korean the word Saturday is , , also meaning earth day. The element Earth was associated with the planet Saturn in Chinese astrology and philosophy. The modern Māori name for Saturday, , literally means "washing-day" – a vestige of early colonized life when Māori converts would set aside time on the Saturday to wash their whites for Church on Sunday. A common alternative Māori name for Saturday is the transliteration . Quakers traditionally referred to Saturday as "Seventh Day", eschewing the "pagan" origin of the name. In Scandinavian countries, Saturday is called , , or , the name being derived from the old word laugr/laug (hence Icelandic name ), meaning bath, thus Lördag equates to bath-day. This is due to the Viking practice of bathing on Saturdays. The roots lör, laugar and so forth are cognate to the English word lye, in the sense of detergent. The Finnish and Estonian names for the day, and , respectively, are also derived from this term. Position in the week The international standard ISO 8601 sets Saturday as the sixth day of the week. The three Abrahamic religions (Judaism, Christianity, and Islam) regard Saturday as the seventh day of the week. As a result, many refused the ISO 8601 standards and continue to use Saturday as their seventh day. Saturday Sabbath For Jews, Messianics, Seventh Day Baptists and Seventh-day Adventists, the seventh day of the week, known as Shabbat (or Sabbath for Seventh-day Adventists), stretches from sundown Friday to nightfall Saturday and is the day of rest. Roman Catholic and Eastern Orthodox churches distinguish between Saturday (Sabbath) and the Lord's Day (Sunday). 
Other Protestant groups, such as Seventh-day Adventists, hold that the Lord's Day is the Sabbath, not Sunday, in accordance with the fourth commandment (Exodus 20:8). Holy Saturday is a Christian religious observance during Holy Week, on the day before Easter Sunday. Catholic liturgy and devotions on each Saturday In the Catholic Church, Saturday is dedicated to the Blessed Virgin Mary. In the Catholic devotion of the Holy Rosary, the Joyful Mysteries are meditated on Saturday and also on Monday throughout the year. Astrology In astrology, Saturn is associated with Saturday, its planet's symbol ♄, and the astrological signs Capricorn and Aquarius. In popular culture Regional customs In most countries, Saturday is a weekend day (see workweek). In Australia, elections must take place on a Saturday. In Israel, Saturday is the official day of rest, on which all government offices and most businesses, including some public transportation, are closed. In Nepal, Saturday is the last day of the week and is the only official weekly holiday. In New Zealand, Saturday is the only day on which elections can be held. In Sweden and Norway, Saturday has usually been the only day of the week when especially younger children are allowed to eat sweets, in Swedish and in Norwegian. This tradition was introduced to limit dental caries, utilizing the results of the infamous Vipeholm experiments between 1945 and 1955. (See festivities in Sweden.) In the U.S. state of Louisiana, Saturday is the preferred election day. Slang The amount of criminal activity that takes place on Saturday nights has led to the expression "Saturday night special", a pejorative slang term used in the United States and Canada for any inexpensive handgun. Arts, entertainment, and media Comics and periodicals Saturday Morning Breakfast Cereal is a single-panel webcomic by Zach Weiner. The Saturday Evening Post Saturday Night (magazine) (Canada) Saturday Night Magazine (U.S.) Films The association of Saturday night with comedy shows on television lent its name to the film Mr. Saturday Night, starring Billy Crystal. It is common for clubs, bars and restaurants to be open later on Saturday night than on other nights. Thus "Saturday Night" has come to imply the party scene, and has lent its name to the films Saturday Night Fever, which showcased New York discotheques, and Uptown Saturday Night, as well as to many songs (see below). Folk rhymes and folklore In the folk rhyme Monday's Child, "Saturday's child works hard for a living". In another rhyme reciting the days of the week, Solomon Grundy "Died on Saturday". In folklore, Saturday was the preferred day to hunt vampires, because on that day they were restricted to their coffins. It was also believed in the Balkans that someone born on Saturday could see a vampire when it was otherwise invisible, and that such people were particularly apt to become vampire hunters. Accordingly, in this context, people born on Saturday were specially designated as in Greek and in Bulgarian; the term has been rendered in English as "Sabbatarians". 
Music Groups The Saturdays is a female pop group. Songs The Nigerian popular song "Bobo Waro Fero Satodeh" ("Everybody Loves Saturday Night") became internationally famous in the 1950s and has been sung, in translation, in many languages. "Saturday" (Fall Out Boy song) from the album Take This to Your Grave "Saturday" (Kids in Glass Houses song) from the album Smart Casual "Saturday in the Park" is a song by Chicago "Saturday Night" is a song by the Misfits from Famous Monsters "Saturday Night's Alright for Fighting" is an Elton John song "One More Saturday Night" is a Grateful Dead song. Television Saturday morning is a notable television time block aimed at children, traditionally filled with animated cartoons. In the United States this block has largely been phased out, owing to television regulations requiring educational content to be aired and to children's Saturday activities outside the home. Saturday night is also a popular time slot for comedy shows on television in the US. The most famous of these is Saturday Night Live, a sketch comedy show that has aired on NBC nearly every week since 1975. Another notable example was Saturday Night Live with Howard Cosell. The Grand Final of the popular pan-European TV show the Eurovision Song Contest has traditionally aired on a Saturday in May. Saturday evening is a television time slot in the United Kingdom devoted to popular TV shows such as Strictly Come Dancing, The Voice UK, and The X Factor. Many family game shows, for example Total Wipeout and Hole in the Wall, also air on a Saturday evening. Saturday night is a popular time for professional wrestling on television in the United States. WCW Saturday Night ran weekly under various titles between 1971 and 2000. WWE ran Saturday Night's Main Event television specials between 1985 and 1992, with a second run coming between 2006 and 2008. AEW Collision has run weekly since 2023. Video games Saturday Night Slam Masters – a 1993 wrestling video game published by Capcom Saturday Morning RPG Sports In the United Kingdom, Saturday is the day on which most domestic football fixtures are played. In the United States, most regular season college football games are played on Saturday. Saturday is also a common day for college basketball games. Most mixed martial arts events organized by the Ultimate Fighting Championship occur on Saturday.
Technology
Days of the week
null
54632
https://en.wikipedia.org/wiki/Friday
Friday
Friday is the day of the week between Thursday and Saturday. In countries that adopt the traditional "Sunday-first" convention, it is the sixth day of the week. In countries adopting the ISO 8601-defined "Monday-first" convention, it is the fifth day of the week. In most Western countries, Friday is the fifth and final day of the working week. In some other countries, Friday is the first day of the weekend, with Saturday the second. In Iran, Friday is the last day of the weekend, with Saturday as the first day of the working week. Bahrain, the United Arab Emirates (UAE), Saudi Arabia and Kuwait also followed this convention until they changed to a Friday–Saturday weekend on September 1, 2006, in Bahrain and the UAE, and a year later in Kuwait. In Israel, by Jewish tradition, Friday is the sixth day of the week, and the last working day. Etymology In the seven-day week introduced in the Roman Empire in the first century CE, the days were named after the classical planets of Hellenistic astrology (the Sun, the Moon, Mars, Mercury, Jupiter, Venus and Saturn). The English name Friday comes from the Old English , meaning the "day of Frig", a result of an old convention associating the Nordic goddess Frigg with the Roman goddess Venus after whom the planet was named; the same holds for in Old High German, in Modern German, and in Dutch. "Friday" in other languages The expected cognate name in Old Norse would be . The name of Friday in Old Norse is instead, indicating a loan of the week-day names from Low German; however, the modern Faroese name is . The modern Scandinavian form is in Swedish, Norwegian, and Danish, meaning Freyja's day. The distinction between Freyja and Frigg in some Germanic mythologies is contested. The word for Friday in most Romance languages is derived from Latin or "day of Venus" (a translation of Greek , ), such as in French, in Galician, in Catalan, in Corsican, in Italian, in Romanian, and in Spanish and influencing the Filipino or , and the Chamorro . This is also reflected in the p-Celtic Welsh language as . An exception is Portuguese, also a Romance language, which uses the word , meaning "sixth day of liturgical celebration", derived from the Latin used in religious texts where consecrating days to pagan gods was not allowed. Another exception among the Romance languages is also Sardinian, in which the word is derived from Latin . This name had been given by the Jewish community exiled to the island in order to designate the food specifically prepared for Shabbat eve. In Arabic, Friday is , from a root meaning "congregation/gathering." In languages of Islamic countries outside the Arab world, the word for Friday is commonly a derivation of this: (Malay Jumaat or Jumat , Turkish , Persian/Urdu , ) and Swahili (Ijumaa). In modern Greek, four of the words for the week-days are derived from ordinals. However, the Greek word for Friday is () and is derived from a word meaning "to prepare" (). Like Saturday (, ) and Sunday (, ), Friday is named for its liturgical significance as the day of preparation before Sabbath, which was inherited by Greek Christian Orthodox culture from Jewish practices. Friday was formerly a Christian fast day; this is the origin of the Irish , Scottish Gaelic , Manx and Icelandic , all meaning "fast day". In both biblical and modern Hebrew, Friday is meaning "the sixth day". In most Indian languages, Friday is Shukravāra, named for , the planet Venus. 
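The Monday-first (ISO 8601) and Sunday-first conventions mentioned at the start of this article differ only in where the count begins. A minimal sketch of the two numberings, using Python's standard datetime module (the chosen date is an arbitrary Friday):

```python
from datetime import date

d = date(2024, 11, 1)  # an arbitrary Friday

# ISO 8601 "Monday-first" numbering: Monday=1 ... Sunday=7, so Friday is day 5.
print(d.isoweekday())             # 5

# "Sunday-first" numbering: Sunday=1 ... Saturday=7, so Friday is day 6.
# date.weekday() counts Monday=0 ... Sunday=6, so shift and wrap accordingly.
print((d.weekday() + 1) % 7 + 1)  # 6
```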
In Bengali or is the 6th day in the Bengali week of Bengali Calendar and is the beginning of the weekend in Bangladesh. In Tamil, the word for Friday is velli, also a name for Venus; and in Malayalam it is velliyalca. In Japanese, is formed from the words meaning Venus (lit. gold + planet) and meaning day (of the week). In the Korean language, it is in Korean Hangul writing (Romanization: ), and is the pronounced form of the written word in Chinese characters, as in Japanese. In Chinese, Friday is 星期五 xīngqíwǔ meaning "fifth day of the week". In the Nahuatl language, Friday is () meaning "day of Quetzalcoatl". Most Slavic languages call Friday the "fifth (day)": Belarusian – , Bulgarian – , Czech , Polish , Russian – , Serbo-Croatian – , Slovak , Slovene , and Ukrainian – . The Hungarian word is a loan from the Slavic Pannonian dialect. The n in suggests an early adoption from Slavic, when many Slavic dialects still had nasal vowels. In modern Slavic languages only Polish retained nasal vowels. In culture Friday is considered unlucky in some cultures. This is particularly so in maritime circles; perhaps the most enduring sailing superstition is that it is unlucky to begin a voyage on a Friday. In the 19th century, Admiral William Henry Smyth described Friday in his nautical lexicon The Sailor's Word-Book as: ( means "unlucky day".) This superstition is the root of the well-known urban legend of . In modern times since the Middle Ages, Friday the 13th and Friday the 17th are considered to be especially unlucky, due to the conjunction of Friday with the unlucky numbers thirteen and seventeen. Such a Friday may be called a "Black Friday". However, this superstition is not universal, notably in Hispanic, Greek and Scottish Gaelic culture: In Hispanic and Greek cultures, Tuesday is the unlucky day, specifically the 13th. Popularly, Fridays are seen as days of good luck and happiness, since it is the last day of a work week as well as many school weeks that end every Friday. In astrology In astrology, Friday is connected with the planet Venus and is symbolized by that planet's symbol ♀. Friday is also associated with the astrological signs Libra and Taurus. Modern nursery rhymes claim that 'Friday's child is loving and giving', yet in 1775, children born on a Friday were described as having a 'strong constitution, but very involved in the romances; and if female, She is in great danger of turning into questionable moral behaviors' In religions Christianity In Christianity, Good Friday is the Friday before Easter. It commemorates the crucifixion of Jesus. Adherents of many Christian denominations including the Roman Catholic, Eastern Orthodox, Methodist, and Anglican traditions observe the Friday fast, which traditionally includes abstinence from meat, lacticinia, and alcohol on Fridays of the year. Traditionally, Roman Catholics were obliged to refrain from eating the meat of warm-blooded animals on Fridays, although fish was allowed. The Filet-O-Fish was invented in 1962 by Lou Groen, a McDonald's franchise owner in Cincinnati, Ohio, in response to falling hamburger sales on Fridays resulting from the Roman Catholic practice of abstaining from meat on Fridays. In the present day, episcopal conferences are now authorized to allow some other form of penance to replace abstinence from meat. The 1983 Code of Canon Law states: Canon 1250. The days and times of penance for the universal Church are each Friday of the whole year and the season of Lent. Canon 1251. 
Abstinence from meat, or from some other food as determined by the Episcopal Conference, is to be observed on all Fridays, unless a solemnity should fall on a Friday. Abstinence and fasting are to be observed on Ash Wednesday and Good Friday. Canon 1253. The Episcopal Conference can determine more particular ways in which fasting and abstinence are to be observed. In place of abstinence or fasting it can substitute, in whole or in part, other forms of penance, especially works of charity and exercises of piety. The Book of Common Prayer prescribes weekly Friday fasting and abstinence from meat for all Anglicans. In Methodism, the Directions Given to Band Societies (25 December 1744) mandate for all Methodists fasting and abstinence from meat on all Fridays of the year. The Eastern Orthodox Church continues to observe Fridays (as well as Wednesdays) as fast days throughout the year (with the exception of several fast-free periods during the year). Fasting on Fridays entails abstinence from meat or meat products (i.e., quadrupeds), poultry, and dairy products (as well as fish). Unless a feast day occurs on a Friday, the Orthodox also abstain from using oil in their cooking and from alcoholic beverages (there is some debate over whether abstention from oil involves all cooking oil or only olive oil). On particularly important feast days, fish may also be permitted. For the Orthodox, Fridays throughout the year commemorate the Crucifixion of Christ and the (Mother of God), especially as she stood by the foot of the cross. There are hymns in the which reflect this liturgically. These include (hymns to the Mother of God) which are chanted on Wednesdays and Fridays called ("Cross-"). The dismissal at the end of services on Fridays begins with the words: "May Christ our true God, through the power of the precious and life-giving cross...." Quakers traditionally referred to Friday as "Sixth Day", eschewing the pagan origins of the name. In Slavic countries, it is called "Fifth Day" (, , ). Hinduism The day is named after Shukra son of Bhrigu and Kavyamata (Usana). In Hinduism, special observances are practiced for forms of the Devi, such as Durga, Lakshmi, Saraswati, Kali, Parvati, Annapurna, Gayatri, or Santoshi Mata on Friday. Fridays are important for married ladies and they worship the goddesses on that day. Islam In Islam, Friday (from sun-down Thursday to sun-down Friday) is the day of communion, of praying together, the holy day of Muslims. Friday observance includes attendance at a Masjid (mosque) for congregation prayer or Salat Al Jumu'ah. It is considered a day of peace and mercy (see Jumu'ah). According to some Islamic traditions, the day is stated to be the original holy day ordained by God, but that now Jews and Christians recognize the days after. In some Islamic countries, the week begins on Sunday and ends on Saturday, just like the Jewish week and the week in some Christian countries. The week begins on Saturday and ends on Friday in most other Islamic countries, such as Somalia, and Iran. Friday is also the day of rest in the Baháʼí Faith. In some Malaysian states, Friday is the first week-end day, with Saturday the second, to allow Muslims to perform their religious obligations on Friday. Sunday is the first working day of the week for governmental organizations. 
Muslims are advised not to fast on a Friday by itself (doing so is makruh, discouraged, though not haram, religiously forbidden). Fasting on a Friday is fully permissible if it is accompanied by fasting the day before (Thursday) or the day after (Saturday), if it corresponds with days usually considered good for fasting (such as the Day of Arafah or Ashura), or if it falls within one's usual religious fasting habits (such as fasting every other day). Muslims regard Friday as "Syed-ul-Ayyam", meaning the king of days. A narration in Sahih Muslim, reported by Abu Huraira from the Messenger of Allah, describes the importance of Friday. The Qur'an also has a surah (chapter) called Al-Jumu'ah (The Friday). Judaism The Jewish Sabbath begins at sunset on Friday and lasts until nightfall on Saturday. There is a Jewish custom to fast on the Friday of the week of Chukat. Named days Black Friday refers to any one of several historical disasters that happened on Fridays, and, in a general sense, to any Friday the thirteenth. In the United States, Black Friday is also the nickname of the day after Thanksgiving, the first day of the traditional Christmas shopping season. Casual Friday (also called Dress-down, Aloha or Country and Western Friday) is a relaxation of the formal dress code employed by some corporations for the last day of the working week. Good Friday is the Friday before Easter in the Christian liturgical calendar. It commemorates the crucifixion of Jesus. Jumu'atul-Wida (Farewell Friday) is the last Friday of Ramadan, the fasting month in Islam. Other Greta Thunberg's School strike for climate usually occurs on Fridays, and the movement is also called Fridays for Future. The Church of the Flying Spaghetti Monster celebrates every Friday as a holy day.
Technology
Days of the week
null
54633
https://en.wikipedia.org/wiki/Thursday
Thursday
Thursday is the day of the week between Wednesday and Friday. According to the ISO 8601 international standard, it is the 4th day of the week. In countries which adopt the "Sunday-first" convention, it is the fifth day of the week. Name Thor's (or Jupiter's) day The name is derived from Old English þunresdæg and Middle English Thuresday (with loss of -n-, first in northern dialects, from influence of Old Norse Þórsdagr) meaning "Thor's Day". It was named after the Norse god Thor. Thunor, Donar (German, Donnerstag) and Thor are derived from the name of the Germanic god of thunder, Thunraz, equivalent to Jupiter in the interpretatio romana. In most Romance languages, the day is named after the Roman god Jupiter, who was the god of sky and thunder. In Latin, the day was known as Iovis Dies, "Jupiter's Day". In Latin, the genitive or possessive case of Jupiter was Iovis/Jovis and thus in most Romance languages it became the word for Thursday: Italian giovedì, Spanish jueves, French jeudi, Sardinian jòvia, Catalan dijous, Galician xoves and Romanian joi. This is also reflected in the p-Celtic Welsh dydd Iau. The astrological and astronomical sign of the planet Jupiter (♃ ) is sometimes used to represent Thursday. Since the Roman god Jupiter was identified with Thunor (Norse Thor in northern Europe), most Germanic languages name the day after this god: Torsdag in Danish, Norwegian, and Swedish, Hósdagur/Tórsdagur in Faroese, Donnerstag in German or Donderdag in Dutch. Finnish and Northern Sami, both non-Germanic (Uralic) languages, uses the borrowing "Torstai" and "Duorastat". In the extinct Polabian Slavic language, it was perundan, Perun being the Slavic equivalent of Thor. Vishnu's/Buddha's/Dattatrey's Day In most of the languages of India, the word for Thursday is Guruvāra – vāra meaning day and Guru being the style for Bṛhaspati, guru to the gods and regent of the planet Jupiter. This day marks the worship of Vishnu and his avatars such as Rama, Satyanarayana, Parashurama, Narasimha, and Buddha as well as the deity Dattatreya in Hinduism. In Sanskrit language, the day is called Bṛhaspativāsaram (day of Bṛhaspati). In Nepali language, the day is called Bihivāra with Bihi derived from the corruption of the shorter form 'Brhi' of the word Bṛhaspati. In Thai, the word is Wan Pharuehatsabodi, also in Old Javanese as Respati or in Balinese as Wraspati – referring to the Hindu deity Bṛhaspati, also associated with Jupiter. En was an old Illyrian deity and in his honor in the Albanian language Thursday is called "Enjte". In the Nahuatl language, Thursday is () meaning "day of Tezcatlipoca". In Japanese, the day is (木 represents Jupiter, 木星), following East Asian tradition. Fourth day In Slavic languages and in Chinese, this day's name is "fourth" (Slovak štvrtok, Czech čtvrtek, Slovene četrtek, Polish czwartek, Russian четверг chetverg, Bulgarian четвъртък, Serbo-Croatian четвртак / četvrtak, Macedonian четврток, Ukrainian четвер chetver). Hungarian uses a Slavic loanword "csütörtök". In Chinese, it is xīngqīsì ("fourth solar day"). In Estonian it's neljapäev, meaning "fourth day" or "fourth day in a week". The Baltic languages also use the term "fourth day" (Latvian ceturtdiena, Lithuanian ketvirtadienis). Fifth day Greek uses a number for this day: Πέμπτη Pémpti "fifth," as does "fifth day," Hebrew: (Yom Khamishi – day fifth) often written ("Yom Hey" – 5th letter Hey day), and Arabic: ("Yaum al-Khamīs" – fifth day). 
Derived from Arabic, the Indonesian word for Thursday is "Kamis"; similarly, it is "Khamis" in Malaysian and "Kemis" in Javanese. In Catholic liturgy, Thursday is referred to in Latin as feria quinta. Portuguese, unlike other Romance languages, uses the word quinta-feira, meaning "fifth day of liturgical celebration", which comes from the Latin feria quinta used in religious texts where it was not allowed to consecrate days to pagan gods. Icelandic also uses the term fifth day (Fimmtudagur). In the Persian language, Thursday is referred to as panj-shanbeh, meaning the fifth day of the week. Vietnamese refers to Thursday as (literally "day five"). Quakers traditionally referred to Thursday as "Fifth Day", eschewing the pagan origin of the English name "Thursday". Cultural and religious practices Christian holidays In the Christian tradition, Maundy Thursday or Holy Thursday is the Thursday before Easter — the day on which the Last Supper occurred. Also known as Sheer Thursday in the United Kingdom, it is traditionally a day of cleaning and giving out Maundy money there. Holy Thursday is part of Holy Week. In the Eastern Orthodox Church, Thursdays are dedicated to the Apostles and Saint Nicholas. The Octoechos contains hymns on these themes, arranged in an eight-week cycle, that are chanted on Thursdays throughout the year. At the end of Divine Services on Thursday, the dismissal begins with the words: "May Christ our True God, through the intercessions of his most-pure Mother, of the holy, glorious and all-laudable Apostles, of our Father among the saints Nicholas, Archbishop of Myra in Lycia, the Wonder-worker…" Ascension Thursday is 40 days after Easter, when Christ ascended into Heaven. Hinduism In Hinduism, Thursday is associated with the Navagraha Brihaspati, and devotees of this graha fast and pray on Thursdays. The day is dedicated to the deity Vishnu or his avatars, such as Rama, Parshurama, Narasimha, and Buddha. However, Wednesday is dedicated to his avatars Krishna and Vithoba. Devotees, especially Vaishnava Hindus, usually fast on this day in honor of Vishnu and his avatars. Islam In Islam, Thursday is one of the two days of the week on which Muslims are encouraged to fast voluntarily, the other being Monday. Judaism In Judaism, Thursdays are considered auspicious days for fasting. The Didache warned early Christians not to fast on Thursdays to avoid Judaizing, and suggested Fridays instead. In Judaism the Torah is read in public on Thursday mornings, and special penitential prayers are said on Thursday, unless there is a special occasion for happiness which cancels them. Druze faith Formal Druze worship is confined to a weekly meeting on Thursday evenings, during which all members of the community gather together to discuss local issues before those not initiated into the secrets of the faith (the juhhāl, or the ignorant) are dismissed, and those who are "uqqāl" or "enlightened" (those few initiated in the Druze holy books) remain to read and study their holy scriptures. Practices in countries In Finland and Sweden, pea soup is traditionally served on Thursdays. In Indonesia and Malaysia, batik clothing is usually worn on Thursdays, especially at educational and civil service institutions. For Thai Buddhists, Thursday is considered the "Teacher's Day", and it is believed that one should begin one's education on this auspicious day. Thai students still pay homage to their teachers in a specific ceremony, always held on a selected Thursday. 
And graduation day in Thai universities, which can vary depending on each university, almost always will be held on a Thursday. In the Thai solar calendar, the colour associated with Thursday is orange. In the United States, Thanksgiving Day is an annual festival celebrated on the fourth Thursday in November. Conventional weekly events In Australia, most cinema movies premieres are held on Thursdays. Also, most Australians are paid on a Thursday, either weekly or fortnightly. Shopping malls see this as an opportunity to open longer than usual, generally until 9 pm, as most pay cheques are cleared by Thursday morning. In Norway, Thursday has also traditionally been the day when most shops and malls are open later than on the other weekdays, although the majority of shopping malls now are open until 8 pm or 9 pm every weekday. In the USSR of the 1970s and 1980s Thursday was the "Fish Day" (, Rybny den), when the nation's foodservice establishments were supposed to serve fish (rather than meat) dishes. For college and university students, Thursday is sometimes referred to as the new Friday. There are often fewer or sometimes no classes on Fridays and more opportunities to hold parties on Thursday night and sleep in on Friday. As a consequence, some call Thursday "thirstday" or "thirsty Thursday". Elections in the United Kingdom In the United Kingdom, all general elections since 1935 have been held on a Thursday, and this has become a tradition, although not a requirement of the law — which merely states that an election may be held on any day "except Saturdays, Sundays, Christmas Eve, Christmas Day, Good Friday, bank holidays in any part of the United Kingdom and any day appointed for public thanksgiving and mourning". Additionally, local elections are usually held on the first Thursday in May. The Electoral Administration Act 2006 removed Maundy Thursday as an excluded day on the electoral timetable, therefore an election can now be held on Maundy Thursday; prior to this elections were sometimes scheduled on the Tuesday before as an alternative. Astrology Thursday is aligned by the planet Jupiter and the astrological signs of Pisces and Sagittarius. Popular culture In the nursery rhyme, "Monday's Child", "Thursday's Child has far to go". In some high schools in the United States during the 1950s and the 1960s, rumours said that if someone wore green on Thursdays, it meant that he or she was gay or lesbian. Thursday is the day of the Second Round draw in the English League Cup. Super Thursday is an annual promotional event in the publishing industry as well as an important day in UK elections (see above). Literature Gabriel Syme, the main character, was given the title of Thursday in G. K. Chesterton's novel The Man Who Was Thursday (1908). The titular day in Sweet Thursday (1954) (the sequel to John Steinbeck's novel Cannery Row (1945)), the author explains, is the day after Lousy Wednesday and the day before Waiting Friday. In The Hitchhiker's Guide to the Galaxy by Douglas Adams, the character Arthur Dent says: "This must be Thursday. I never could get the hang of Thursdays". A few minutes later the planet Earth is destroyed. In another Douglas Adams book, The Long Dark Tea-time of the Soul (1988), one of the characters says to the character Thor, after whom the day was named: "I'm not used to spending the evening with someone who's got a whole day named after them". 
In the cross-media work Thursday's Fictions by Richard James Allen and Karen Pearlman, Thursday is the title character, a woman who tries to cheat the cycle of reincarnation to get a form of eternal life. Thursday's Fictions has been a stage production, a book, a film and a 3D online immersive world in Second Life. Thursday Next is the central character in a series of novels by Jasper Fforde. In Garth Nix's popular The Keys to the Kingdom series, Thursday is an antagonist, a violent general who is a personification of the actual day and the Sin of Wrath. According to Nostradamus' prediction (Century 1, Quatrain 50), a powerful (but otherwise unidentified) leader who will threaten "the East" will be born of three water signs and will take Thursday as his feast day. Cinema Thursday (1998 film) is a movie starring Thomas Jane, about the day of a drug dealer gone straight who gets pulled back into his old lifestyle. The Thursday (1963) is an Italian film. Music Thursday Afternoon is a 1985 album by the British ambient musician Brian Eno consisting of one 60-minute-long composition. It is the rearranged soundtrack to a video production of the same title made in 1984. Donnerstag aus Licht (Thursday from Light) is an opera by Karlheinz Stockhausen. Thursday is a post-hardcore band from New Brunswick, New Jersey, formed in 1997. "Thursday's Child" is a David Bowie song from the album hours... (1999). "Thursday's Child" is a song by The Chameleons on Script of the Bridge (1983). "Outlook for Thursday" was a hit in New Zealand for Dave Dobbyn. Thursday is a mixtape by R&B artist The Weeknd released in 2011. "Thirsty" is a song by American pop band AJR that prominently features the lyrics "Thirsty, thirsty Thursday".
Technology
Days of the week
null
54634
https://en.wikipedia.org/wiki/Wednesday
Wednesday
Wednesday is the day of the week between Tuesday and Thursday. According to international standard ISO 8601, it is the third day of the week. In English, the name is derived from Old English and Middle English , 'day of Woden', reflecting the religion practised by the Anglo-Saxons, the English equivalent to the Norse god Odin. In many Romance languages, such as the French , Spanish or Italian , the day's name is a calque of Latin 'day of Mercury'. Wednesday is in the middle of the common Western five-day workweek that starts on Monday and finishes on Friday. Etymology See Names of the days of the week for more on naming conventions. The name Wednesday continues Middle English . Old English still had , which would be continued as *Wodnesday (but Old Frisian has an attested ). By the early 13th century, the i-mutated form was introduced unetymologically. The name is a calque of the Latin 'day of Mercury', reflecting the fact that the Germanic god Woden (Wodanaz or Odin) during the Roman era was interpreted as "Germanic Mercury". The Latin name dates to the late 2nd or early 3rd century. It is a calque of Greek (), a term first attested, together with the system of naming the seven weekdays after the seven classical planets, in the Anthologiarum by Vettius Valens (c. AD 170). The Latin name is reflected directly in the weekday name in most modern Romance languages: (Sardinian), (French), (Italian), (Spanish), (Romanian), (Catalan), or (Corsican), (Venetian). In Welsh it is , meaning 'Mercury's Day'. The Dutch name for the day, , has the same etymology as English Wednesday; it comes from Middle Dutch , ('Wodan's day'). The German name for the day, (literally: 'mid-week'), replaced the former name ('Wodan's day') in the 10th century. (Similarly, the Yiddish word for Wednesday is (), meaning and sounding a lot like the German word it came from.) Most Slavic languages follow this pattern and use derivations of 'the middle' (Belarusian , Bulgarian , Croatian , Czech , Macedonian , Polish , Russian , Serbian or , Slovak , Slovene , Ukrainian ). The Finnish name is ('middle of the week'), as is the Icelandic name: , and the Faroese name: ('mid-week day'). Some dialects of Faroese have , though, which shares etymology with Wednesday. Danish, Norwegian, Swedish , ( meaning 'Odin's day'). In Japanese, the word for Wednesday is meaning 'water day' and is associated with (): Mercury (the planet), literally meaning 'water star'. Similarly, in Korean the word for Wednesday is , also meaning 'water day'. In most of the languages of India, the word for Wednesday is — meaning 'day' and Budha being the planet Mercury. In Armenian ( ), Georgian ( ), Turkish (), and Tajik () languages the word literally means 'four (days) from Saturday' originating from Persian ( ). Portuguese uses the word , meaning 'fourth day', while in Greek the word is () meaning simply 'fourth'. Similarly, Arabic means 'fourth', Hebrew means 'fourth', and Persian means 'fourth day'. Yet the name for the day in Estonian , Lithuanian , and Latvian means 'third day' while in Mandarin Chinese (), means 'day three', as Sunday is unnumbered. Religious observances The Creation narrative in the Hebrew Bible places the creation of the Sun and Moon on "the fourth day" of the divine workweek. Quakers traditionally referred to Wednesday as "Fourth Day" to avoid the pagan associations with the name "Wednesday", or in keeping with the practice of treating each day as equally divine. 
The Eastern Orthodox Church observes Wednesday (as well as Friday) as a fast day throughout the year (with the exception of several fast-free periods during the year). Fasting on Wednesday and Fridays entails abstinence from meat or meat products (i.e., four-footed animals), poultry and dairy products. Unless a feast day occurs on a Wednesday, the Orthodox also abstain from fish, from using oil in their cooking and from alcoholic beverages (there is some debate over whether abstention from oil involves all cooking oil or only olive oil). For the Orthodox, Wednesdays and Fridays throughout the year commemorate the betrayal of Jesus (Wednesday) and the Crucifixion of Christ (Friday). There are hymns in the Octoekhos which reflect this liturgically. These include special Theotokia (hymns to the Mother of God) called ('Cross-Theotokia'). The dismissal at the end of services on Wednesday begins with these words: "May Christ our true God, through the power of the precious and life-giving cross...." In Irish and Scottish Gaelic, the name for Wednesday also refers to fasting, as it is in Irish Gaelic and in Scottish Gaelic, which comes from , meaning 'first', and , meaning 'fasting', which combined means 'first day of fasting'. In American culture many Catholic and Protestant churches schedule study or prayer meetings on Wednesday nights. The sports calendar in many American public schools reflects this, reserving Mondays and Thursdays for girls' games and Tuesdays and Fridays for boys' games while generally avoiding events on Wednesday evening. In the Catholic devotion of the Holy Rosary, the glorious mysteries are meditated on Wednesday and also Sunday throughout the year. Wednesday is the day of the week devoted by the Catholic tradition to Saint Joseph. In Hinduism, Budha is the god of Mercury (planet), Wednesday, and of merchants and merchandise. Krishna, Vithoba, and Ganesha are also worshipped on Wednesday. Cultural usage According to the Thai solar calendar, the color associated with Wednesday is green. In the folk rhyme Monday's Child, "Wednesday's child is full of woe". In the rhyme Solomon Grundy, Grundy was "married on Wednesday". In Winnie the Pooh and the Blustery Day, the disagreeable nature of the weather is attributed to it being "Winds-Day" (a play on Wednesday). In Richard Brautigan's In Watermelon Sugar Wednesday is the day when the sun shines grey. Wednesday Friday Addams is a member of the fictional family The Addams Family. Her name is derived from the idea that Wednesday's child is full of woe. Additionally, Wednesday sometimes appears as a character's name in literary works. These include Thursday's fictions by Richard James Allen, Wednesday Next from the Thursday Next series by Jasper Fforde and Neil Gaiman's novel American Gods. In the 1945 John Steinbeck novel Sweet Thursday, the titular day is preceded by "Lousy Wednesday". Wednesday is sometimes informally referred to as "hump day" in North America, a reference to the fact that Wednesday is the middle day—or "hump"—of a typical work week. Lillördag, or "little Saturday", is a Nordic tradition of turning Wednesday evening into a small weekend-like celebration. Humpday is also a name of a 2009 film. 
In Poland, Wednesday night is often referred to by young people as the "time of vodka", after the song "Środowa noc to wódy czas" by Bartosz Walaszek. Astrology The astrological sign of the planet Mercury, ☿, represents Wednesday. To the Romans the day belonged to Mercury, and it has similar names in the Latin-derived languages, such as Italian, French, and Spanish. In English, this became "Woden's Day", since the Roman god Mercury was identified with Woden in northern Europe. In astrology, Wednesday is especially associated with the signs of Gemini and Virgo. Named days Ash Wednesday, the first day of Lent in the Western Christian tradition, occurs forty-six days before Easter (forty, not counting Sundays). Black Wednesday, the day of a financial crisis in the United Kingdom. Holy Wednesday, sometimes called Spy Wednesday in allusion to the betrayal of Jesus by Judas Iscariot, is the Wednesday immediately preceding Easter. Red Wednesday, the Yezidi festival celebrated in Iraq.
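Since Ash Wednesday falls forty-six days before Easter (the forty days of Lent plus the six intervening Sundays), its date can be computed directly from the Easter date. The sketch below uses the widely known anonymous Gregorian computus to obtain Easter; the algorithm itself is not described in this article and is included only for illustration:

```python
from datetime import date, timedelta

def gregorian_easter(year: int) -> date:
    """Western Easter Sunday, via the anonymous Gregorian computus."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def ash_wednesday(year: int) -> date:
    """Forty-six days before Easter: forty days of Lent plus six Sundays."""
    return gregorian_easter(year) - timedelta(days=46)

print(gregorian_easter(2024), ash_wednesday(2024))  # 2024-03-31 2024-02-14
```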
Technology
Days of the week
null
54635
https://en.wikipedia.org/wiki/Tuesday
Tuesday
Tuesday is the day of the week between Monday and Wednesday. According to international standard ISO 8601, Monday is the first day of the week; thus, Tuesday is the second day of the week. According to many traditional calendars, however, Sunday is the first day of the week, so Tuesday is the third day of the week. In some Muslim countries, Saturday is the first day of the week and thus Tuesday is the fourth day of the week. The English name is derived from Middle English , from Old English meaning "Tīw's Day", the day of Tiw or Týr, the god of single combat, law, and justice in Norse mythology. Tiw was equated with Mars in the , and the name of the day is a translation of Latin . Etymology The name Tuesday derives from the Old English and literally means "Tiw's Day". Tiw is the Old English form of the Proto-Germanic god *Tîwaz, or Týr in Old Norse. *Tîwaz derives from the Proto-Indo-European base *dei-, *deyā-, *dīdyā-, meaning 'to shine', whence comes also such words as "deity". The German Dienstag and Dutch dinsdag are derived from the Germanic custom of the thing, as Tiw / Týr also had a strong connection to the thing. The Latin name ("day of Mars") is equivalent to the Greek (, "day of Ares"). In most languages with Latin origins (Italian, French, Spanish, Catalan, Romanian, Galician, Sardinian, Corsican, but not Portuguese), the day is named after Mars, the Roman parallel of the Ancient Greek Ares (). In some Slavic languages the word Tuesday originated from Old Church Slavonic word meaning "the second". Bulgarian and Russian () ( ) is derived from the Bulgarian and Russian adjective for 'second' – () or (). In Japanese, the second day of the week is (), from (), the planet Mars. Similarly, in Korean the word Tuesday is (), means literally fire day, and Mars the planet is referred to as the fire star with the same words, but this is unrelated to the Roman god Mars, which is referred to phonetically as Mars. In the Indo-Aryan languages Pali and Sanskrit the name of the day is taken from ('one who is red in colour'), a style (manner of address) for Mangala, the god of war, and for Mars, the red planet. In the Nahuatl language, Tuesday is () meaning "day of Huitzilopochtli". In Arabic, Tuesday is (), and in Hebrew it is (), meaning "third day". When added after the word / ( or ) it means "the third day". Religious observances In the Eastern Orthodox Church, Tuesdays are dedicated to Saint John the Baptist. The Octoechos contains hymns on this theme, arranged in an eight-week cycle, that are chanted on Tuesdays throughout the year. At the end of Divine Services on Tuesday, the dismissal begins with the words: "May Christ our True God, through the intercessions of his most-pure Mother, of the honorable and glorious Prophet, Forerunner and Baptist John…" In Hinduism, Tuesday is a popular day for worshipping and praying to Hanuman and Kartikeya, some also worship Kali, Durga, Parvati, and Ganesha. Many Hindus fast during Tuesday. Many Hindu married women also observe the Mangala Gauri Vrat of fasting every Tuesday in the Hindu month of Shravana, as the month is dedicated to Gauri and Shiva. Tuesday is also viewed as the day ruled by Mangala (Mars) in Hinduism. Cultural references In the Greek world, Tuesday (the day of the week of the Fall of Constantinople) is considered an unlucky day. The same is true in the Spanish-speaking world; it is believed that this is due to the association between Tuesday and Mars, the god of war and therefore related to death. 
For both Greeks and Spanish-speakers, the 13th of the month is considered unlucky if it falls on a Tuesday instead of a Friday. In Judaism, on the other hand, Tuesday is considered a particularly lucky day, because in Bereshit (parashah), known in the Christian tradition as the first chapters of Genesis, the paragraph about this day contains the phrase "it was good" twice. In the Thai solar calendar, the day is named for the Pali word for the planet Mars, which also means "Ashes of the Dead"; the color associated with Tuesday is pink. In the folk rhyme Monday's Child, "Tuesday's child is full of grace". Common occurrences United States Tuesday is the usual day for elections in the United States. Federal elections take place on the Tuesday after the first Monday in November; this date was established by a law of 1845 for presidential elections (specifically for the selection of the Electoral College), and was extended to elections for the House of Representatives in 1875 and for the Senate in 1914. Tuesday was the earliest day of the week which was practical for polling in the early 19th century: citizens might have to travel for a whole day to cast their vote, and would not wish to leave on Sunday, which was a day of worship for the great majority of them. However, a bill was introduced in 2012 to move elections to weekends, with a co-sponsor stating that "by moving Election Day from a single day in the middle of the workweek to a full weekend, we are encouraging more working Americans to participate. Our democracy will be best served when our leaders are elected by as many Americans as possible." Video games are commonly released on Tuesdays in the United States, a practice often attributed to the Sonic the Hedgehog 2 "Sonic 2s day" marketing campaign in 1992. DVDs and Blu-rays are also released on Tuesdays. Albums were typically released on Tuesdays as well, but this changed to Fridays globally in 2015. Australia In Australia, the board of the Reserve Bank of Australia meets on the first Tuesday of every month except January. The federal government hands down the federal budget on the second Tuesday in May, the practice since 1994 (except in 1996 and 2016). The Melbourne Cup is held each year on the first Tuesday in November. Astrology In astrology, Tuesday is associated with the planet Mars and the astrological signs of Aries and Scorpio. Named days Black Tuesday, in the United States, refers to Tuesday, October 29, 1929, part of the great Stock Market Crash of 1929. This was the Tuesday after Black Thursday. Easter Tuesday is the Tuesday within the Octave of Easter. Patch Tuesday is the second Tuesday of every month, when Microsoft releases patches for its products. Some system administrators call this day Black Tuesday. Shrove Tuesday (also called Mardi Gras – Fat Tuesday) precedes the first day of Lent in the Western Christian calendar. Super Tuesday is the day many American states hold their presidential primary elections. Twosday (a portmanteau of two and Tuesday) is the name given to Tuesday, February 22, 2022, and an unofficial one-time secular observance held on that day. Travel Tuesday is an unofficial observance occurring the Tuesday after Thanksgiving. Giving Tuesday, or #GivingTuesday, is a day encouraging people to do good and a global movement to promote generosity. It takes place on the Tuesday following Black Friday and was launched in 2012 in the United States.
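The "Tuesday after the first Monday in November" rule for United States federal elections, and the "Tuesday following Black Friday" rule for Giving Tuesday, are both straightforward date computations. A minimal sketch using Python's datetime module (the helper and function names are illustrative, not from any particular library):

```python
from datetime import date, timedelta

def first_weekday_on_or_after(d: date, weekday: int) -> date:
    """First date on or after d falling on the given weekday (Monday=0 ... Sunday=6)."""
    return d + timedelta(days=(weekday - d.weekday()) % 7)

def us_election_day(year: int) -> date:
    """Tuesday after the first Monday in November."""
    first_monday = first_weekday_on_or_after(date(year, 11, 1), 0)
    return first_monday + timedelta(days=1)

def giving_tuesday(year: int) -> date:
    """Tuesday following Black Friday, i.e. five days after Thanksgiving
    (the fourth Thursday in November)."""
    thanksgiving = first_weekday_on_or_after(date(year, 11, 1), 3) + timedelta(weeks=3)
    return thanksgiving + timedelta(days=5)

print(us_election_day(2024))  # 2024-11-05
print(giving_tuesday(2024))   # 2024-12-03
```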
Technology
Days of the week
null
54648
https://en.wikipedia.org/wiki/Solar%20flare
Solar flare
A solar flare is a relatively intense, localized emission of electromagnetic radiation in the Sun's atmosphere. Flares occur in active regions and are often, but not always, accompanied by coronal mass ejections, solar particle events, and other eruptive solar phenomena. The occurrence of solar flares varies with the 11-year solar cycle. Solar flares are thought to occur when stored magnetic energy in the Sun's atmosphere accelerates charged particles in the surrounding plasma. This results in the emission of electromagnetic radiation across the electromagnetic spectrum. The extreme ultraviolet and X-ray radiation from solar flares is absorbed by the daylight side of Earth's upper atmosphere, in particular the ionosphere, and does not reach the surface. This absorption can temporarily increase the ionization of the ionosphere, which may interfere with short-wave radio communication. The prediction of solar flares is an active area of research. Flares also occur on other stars, where the term stellar flare applies. Physical description Solar flares are eruptions of electromagnetic radiation originating in the Sun's atmosphere. They affect all layers of the solar atmosphere (photosphere, chromosphere, and corona). The plasma medium is heated to more than 10⁷ kelvin, while electrons, protons, and heavier ions are accelerated to near the speed of light. Flares emit electromagnetic radiation across the electromagnetic spectrum, from radio waves to gamma rays. Flares occur in active regions, often around sunspots, where intense magnetic fields penetrate the photosphere to link the corona to the solar interior. Flares are powered by the sudden (timescales of minutes to tens of minutes) release of magnetic energy stored in the corona. The same energy releases may also produce coronal mass ejections (CMEs), although the relationship between CMEs and flares is not well understood. Associated with solar flares are flare sprays. They involve faster ejections of material than eruptive prominences, and reach velocities of 20 to 2000 kilometers per second. Cause Flares occur when accelerated charged particles, mainly electrons, interact with the plasma medium. Evidence suggests that the phenomenon of magnetic reconnection leads to this extreme acceleration of charged particles. On the Sun, magnetic reconnection may happen on solar arcades – a type of prominence consisting of a series of closely occurring loops following magnetic lines of force. These lines of force quickly reconnect into a lower arcade of loops, leaving a helix of magnetic field unconnected to the rest of the arcade. The sudden release of energy in this reconnection is the origin of the particle acceleration. The unconnected magnetic helical field and the material that it contains may violently expand outwards, forming a coronal mass ejection. This also explains why solar flares typically erupt from active regions on the Sun where magnetic fields are much stronger. Although there is general agreement on the source of a flare's energy, the mechanisms involved are not well understood. It is not clear how the magnetic energy is transformed into the kinetic energy of the particles, nor is it known how some particles can be accelerated to the GeV range (10⁹ electronvolts) and beyond. There are also some inconsistencies regarding the total number of accelerated particles, which sometimes seems to be greater than the total number in the coronal loop. 
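As a rough illustration of the energy budget implied by the statement that flares are powered by magnetic energy stored in the corona, one can integrate the magnetic energy density B²/2μ₀ over a coronal volume. The field strength and length scale below are assumed, purely illustrative round numbers, not measurements of any particular flare:

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T·m/A

B = 0.03                   # assumed active-region coronal field strength, tesla (~300 gauss)
L = 3e7                    # assumed characteristic size of the flaring volume, metres

energy_density = B**2 / (2 * mu0)      # magnetic energy density, J/m^3 (~360 J/m^3)
stored_energy = energy_density * L**3  # energy in a volume of L^3, joules

print(f"{stored_energy:.1e} J")  # ~1e25 J (~1e32 erg), comparable to a large flare;
                                 # only part of this is released in the eruption itself.
```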
Post-eruption loops and arcades After the eruption of a solar flare, post-eruption loops made of hot plasma begin to form across the neutral line separating regions of opposite magnetic polarity near the flare's source. These loops extend from the photosphere up into the corona and form along the neutral line at increasingly greater distances from the source as time progresses. The existence of these hot loops is thought to be continued by prolonged heating present after the eruption and during the flare's decay stage. In sufficiently powerful flares, typically of C-class or higher, the loops may combine to form an elongated arch-like structure known as a post-eruption arcade. These structures may last anywhere from multiple hours to multiple days after the initial flare. In some cases, dark sunward-traveling plasma voids known as supra-arcade downflows may form above these arcades. Frequency The frequency of occurrence of solar flares varies with the 11-year solar cycle. It can typically range from several per day during solar maxima to less than one every week during solar minima. Additionally, more powerful flares are less frequent than weaker ones. For example, X10-class (severe) flares occur on average about eight times per cycle, whereas M1-class (minor) flares occur on average about 2000 times per cycle. Erich Rieger and coworkers discovered in 1984 an approximately 154-day period in the occurrence of gamma-ray-emitting solar flares, present at least since solar cycle 19. The period has since been confirmed in most heliophysics data and the interplanetary magnetic field and is commonly known as the Rieger period. The period's resonance harmonics have also been reported from most data types in the heliosphere. The frequency distributions of various flare phenomena can be characterized by power-law distributions. For example, the peak fluxes of radio, extreme ultraviolet, and hard and soft X-ray emissions; total energies; and flare durations (see below) have been found to follow power-law distributions. Classification Soft X-ray The modern classification system for solar flares uses the letters A, B, C, M, or X, according to the peak flux in watts per square metre (W/m²) of soft X-rays with wavelengths , as measured by GOES satellites in geosynchronous orbit. The strength of an event within a class is noted by a numerical suffix ranging from 1 up to, but excluding, 10, which is also the factor for that event within the class. Hence, an X2 flare is twice the strength of an X1 flare, and an X3 flare is three times as powerful as an X1. M-class flares are a tenth the size of X-class flares with the same numeric suffix; an X2 is four times more powerful than an M5 flare. X-class flares with a peak flux that exceeds 10⁻³ W/m² may be noted with a numerical suffix equal to or greater than 10. This system was originally devised in 1970 and included only the letters C, M, and X. These letters were chosen to avoid confusion with other optical classification systems. The A and B classes were added in the 1990s as instruments became more sensitive to weaker flares. Around the same time, the backronyms moderate for M-class flares and extreme for X-class flares began to be used. Importance An earlier classification system, sometimes referred to as the flare importance, was based on H-alpha spectral observations. The scheme uses both the intensity and the emitting surface. The classification in intensity is qualitative, referring to the flares as faint (f), normal (n), or brilliant (b). 
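A minimal sketch of the soft X-ray classification described above, converting a GOES peak flux into a class letter and numeric suffix. The decade boundaries between classes (with X1 corresponding to 10⁻⁴ W/m²) are the standard GOES thresholds, assumed here since the text quotes only the relative scaling:

```python
def goes_class(peak_flux: float) -> str:
    """Convert a GOES soft X-ray peak flux in W/m^2 to a flare class, e.g. 2e-5 -> 'M2.0'."""
    lower_bounds = [   # lower bound of each class in W/m^2, strongest first
        ("X", 1e-4),
        ("M", 1e-5),
        ("C", 1e-6),
        ("B", 1e-7),
        ("A", 1e-8),
    ]
    for letter, bound in lower_bounds:
        if peak_flux >= bound:
            # The suffix is the flux as a multiple of the class's lower bound;
            # only X-class suffixes may reach 10 or more (e.g. X28, X45).
            return f"{letter}{peak_flux / bound:.1f}"
    return f"A{peak_flux / 1e-8:.1f}"  # weaker than A1, below the usual scale

print(goes_class(2e-5))    # M2.0
print(goes_class(1.4e-3))  # X14.0
```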
The emitting surface is measured in terms of millionths of the hemisphere and is described below. (The total hemisphere area is A_H = 15.5 × 10¹² km².) A flare is then classified by taking S or a number that represents its size and a letter that represents its peak intensity, e.g. Sn is a normal solar flare. Duration A common measure of flare duration is the full width at half maximum (FWHM) time of flux in the soft X-ray bands measured by GOES. The FWHM time spans from when a flare's flux first reaches halfway between its maximum flux and the background flux and when it again reaches this value as the flare decays. Using this measure, the duration of a flare ranges from approximately tens of seconds to several hours, with median durations of approximately 6 and 11 minutes in the two GOES bands, respectively. Flares can also be classified based on their duration as either impulsive or long duration events (LDEs). The time threshold separating the two is not well defined. The SWPC regards events requiring 30 minutes or more to decay to half maximum as LDEs, whereas Belgium's Solar-Terrestrial Centre of Excellence regards events with duration greater than 60 minutes as LDEs. Effects The electromagnetic radiation emitted during a solar flare propagates away from the Sun at the speed of light with intensity inversely proportional to the square of the distance from its source region. The excess ionizing radiation, namely X-ray and extreme ultraviolet (XUV) radiation, is known to affect planetary atmospheres and is of relevance to human space exploration and the search for extraterrestrial life. Solar flares also affect other objects in the Solar System. Research into these effects has primarily focused on the atmosphere of Mars and, to a lesser extent, that of Venus. The impacts on other planets in the Solar System are little studied in comparison. As of 2024, research on their effects on Mercury has been limited to modeling of the response of ions in the planet's magnetosphere, and their impact on Jupiter and Saturn has only been studied in the context of X-ray radiation backscattering off of the planets' upper atmospheres. Ionosphere Enhanced XUV irradiance during solar flares can result in increased ionization, dissociation, and heating in the ionospheres of Earth and Earth-like planets. On Earth, these changes to the upper atmosphere, collectively referred to as sudden ionospheric disturbances, can interfere with short-wave radio communication and global navigation satellite systems (GNSS) such as GPS, and the subsequent expansion of the upper atmosphere can increase drag on satellites in low Earth orbit, leading to orbital decay over time. Flare-associated XUV photons interact with and ionize neutral constituents of planetary atmospheres via the process of photoionization. The electrons that are freed in this process, referred to as photoelectrons to distinguish them from the ambient ionospheric electrons, are left with kinetic energies equal to the photon energy in excess of the ionization threshold. In the lower ionosphere, where flare impacts are greatest and transport phenomena are less important, the newly liberated photoelectrons lose energy primarily via thermalization with the ambient electrons and neutral species and via secondary ionization due to collisions with the latter, or so-called photoelectron impact ionization. In the process of thermalization, photoelectrons transfer energy to neutral species, resulting in heating and expansion of the neutral atmosphere. 
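The statement above that photoelectrons carry off the photon energy in excess of the ionization threshold is simply energy conservation, E_photoelectron = hν − E_ionization. A short numerical illustration with assumed round values (a 1 nm soft X-ray photon ionizing molecular nitrogen, whose threshold is roughly 15.6 eV):

```python
h = 6.626e-34     # Planck constant, J·s
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt

wavelength = 1e-9   # assumed soft X-ray wavelength: 1 nm
E_ion_N2 = 15.6     # approximate ionization threshold of N2, eV (assumed round value)

photon_energy = h * c / wavelength / eV            # ~1240 eV
photoelectron_energy = photon_energy - E_ion_N2    # ~1224 eV left with the electron

print(f"photon {photon_energy:.0f} eV -> photoelectron {photoelectron_energy:.0f} eV")
```

Nearly all of that kinetic energy is then deposited in the atmosphere through the thermalization and secondary (impact) ionization described above.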
The greatest increases in ionization occur in the lower ionosphere where wavelengths with the greatest relative increase in irradiance—the highly penetrative X-ray wavelengths—are absorbed, corresponding to Earth's E and D layers and Mars's M1 layer. Radio blackouts The temporary increase in ionization of the daylight side of Earth's atmosphere, in particular the D layer of the ionosphere, can interfere with short-wave radio communications that rely on its level of ionization for skywave propagation. Skywave, or skip, refers to the propagation of radio waves reflected or refracted off of the ionized ionosphere. When ionization is higher than normal, radio waves get degraded or completely absorbed by losing energy from the more frequent collisions with free electrons. The level of ionization of the atmosphere correlates with the strength of the associated solar flare in soft X-ray radiation. The Space Weather Prediction Center, a part of the United States National Oceanic and Atmospheric Administration, classifies radio blackouts by the peak soft X-ray intensity of the associated flare. Solar flare effect During non-flaring or solar quiet conditions, electric currents flow through the ionosphere's dayside E layer inducing small-amplitude diurnal variations in the geomagnetic field. These ionospheric currents can be strengthened during large solar flares due to increases in electrical conductivity associated with enhanced ionization of the E and D layers. The subsequent increase in the induced geomagnetic field variation is referred to as a solar flare effect (sfe) or historically as a magnetic crochet. The latter term derives from the French word meaning hook reflecting the hook-like disturbances in magnetic field strength observed by ground-based magnetometers. These disturbances are on the order of a few nanoteslas and last for a few minutes, which is relatively minor compared to those induced during geomagnetic storms. Health Low Earth orbit For astronauts in low Earth orbit, an expected radiation dose from the electromagnetic radiation emitted during a solar flare is about 0.05 gray, which is not immediately lethal on its own. Of much more concern for astronauts is the particle radiation associated with solar particle events. Mars The impacts of solar flare radiation on Mars are relevant to exploration and the search for life on the planet. Models of its atmosphere indicate that the most energetic solar flares previously recorded may have provided acute doses of radiation that would have been almost harmful or lethal to mammals and other higher organisms on Mars's surface. Furthermore, flares energetic enough to provide lethal doses, while not yet observed on the Sun, are thought to occur and have been observed on other Sun-like stars. Observational history Flares produce radiation across the electromagnetic spectrum, although with different intensity. They are not very intense in visible light, but they can be very bright at particular spectral lines. They normally produce bremsstrahlung in X-rays and synchrotron radiation in radio. Optical observations Solar flares were first observed by Richard Carrington and Richard Hodgson independently on 1 September 1859 by projecting the image of the solar disk produced by an optical telescope through a broad-band filter. It was an extraordinarily intense white light flare, a flare emitting a high amount of light in the visual spectrum. 
Since flares produce copious amounts of radiation at H-alpha, adding a narrow (≈1 Å) passband filter centered at this wavelength to the optical telescope allows the observation of not very bright flares with small telescopes. For years Hα was the main, if not the only, source of information about solar flares. Other passband filters are also used. Radio observations During World War II, on February 25 and 26, 1942, British radar operators observed radiation that Stanley Hey interpreted as solar emission. Their discovery did not go public until the end of the conflict. The same year, Southworth also observed the Sun in radio, but as with Hey, his observations were only known after 1945. In 1943, Grote Reber was the first to report radioastronomical observations of the Sun at 160 MHz. The fast development of radioastronomy revealed new peculiarities of the solar activity like storms and bursts related to the flares. Today, ground-based radiotelescopes observe the Sun from c. 15 MHz up to 400 GHz. Space telescopes Because the Earth's atmosphere absorbs much of the electromagnetic radiation emitted by the Sun with wavelengths shorter than 300 nm, space-based telescopes allowed for the observation of solar flares in previously unobserved high-energy spectral lines. Since the 1970s, the GOES series of satellites have been continuously observing the Sun in soft X-rays, and their observations have become the standard measure of flares, diminishing the importance of the H-alpha classification. Additionally, space-based telescopes allow for the observation of extremely long wavelengths—as long as a few kilometres—which cannot propagate through the ionosphere. Examples of large solar flares The most powerful flare ever observed is thought to be the flare associated with the 1859 Carrington Event. While no soft X-ray measurements were made at the time, the magnetic crochet associated with the flare was recorded by ground-based magnetometers allowing the flare's strength to be estimated after the event. Using these magnetometer readings, its soft X-ray class has been estimated to be greater than X10 and around X45 (±5). In modern times, the largest solar flare measured with instruments occurred on 4 November 2003. This event saturated the GOES detectors, and because of this, its classification is only approximate. Initially, extrapolating the GOES curve, it was estimated to be X28. Later analysis of the ionospheric effects suggested increasing this estimate to X45. This event produced the first clear evidence of a new spectral component above 100 GHz. Prediction Current methods of flare prediction are problematic, and there is no certain indication that an active region on the Sun will produce a flare. However, many properties of active regions and their sunspots correlate with flaring. For example, magnetically complex regions (based on line-of-sight magnetic field) referred to as delta spots frequently produce the largest flares. A simple scheme of sunspot classification based on the McIntosh system for sunspot groups, or related to a region's fractal complexity is commonly used as a starting point for flare prediction. Predictions are usually stated in terms of probabilities for occurrence of flares above M- or X-class within 24 or 48 hours. The U.S. National Oceanic and Atmospheric Administration (NOAA) issues forecasts of this kind. 
MAG4 was developed at the University of Alabama in Huntsville with support from the Space Radiation Analysis Group at NASA's Johnson Space Center (NASA/SRAG) to forecast M- and X-class flares, CMEs, fast CMEs, and solar energetic particle events. A physics-based method for predicting imminent large solar flares has been proposed by the Institute for Space-Earth Environmental Research (ISEE) at Nagoya University.
Physical sciences
Solar System
Astronomy
54653
https://en.wikipedia.org/wiki/Chromosphere
Chromosphere
A chromosphere ("sphere of color", from the Ancient Greek words χρῶμα (khrôma) 'color' and σφαῖρα (sphaîra) 'sphere') is the second layer of a star's atmosphere, located above the photosphere and below the solar transition region and corona. The term usually refers to the Sun's chromosphere, but not exclusively, since it also refers to the corresponding layer of a stellar atmosphere. The name was suggested by the English astronomer Norman Lockyer after conducting systematic solar observations in order to distinguish the layer from the white-light emitting photosphere. In the Sun's atmosphere, the chromosphere is roughly in height, or slightly more than 1% of the Sun's radius at maximum thickness. It possesses a homogeneous layer at the boundary with the photosphere. Narrow jets of plasma, called spicules, rise from this homogeneous region and through the chromosphere, extending up to into the corona above. The chromosphere has a characteristic red color due to electromagnetic emissions in the Hα spectral line. Information about the chromosphere is primarily obtained by analysis of its emitted electromagnetic radiation. The chromosphere is also visible in the light emitted by ionized calcium, Ca II, in the violet part of the solar spectrum at a wavelength of 393.4 nanometers (the Calcium K-line). Chromospheres have also been observed on stars other than the Sun. On large stars, chromospheres sometimes make up a significant proportion of the entire star. For example, the chromosphere of supergiant star Antares has been found to be about 2.5 times larger in thickness than the star's radius. Physical properties The density of the Sun's chromosphere decreases exponentially with distance from the center of the Sun by a factor of roughly 10 million, from about at the chromosphere's inner boundary to under at the outer boundary. The temperature initially decreases from the inner boundary at about to a minimum of approximately , but then increasing to upwards of at the outer boundary with the transition layer of the corona (see ). The density of the chromosphere is 10−4 times that of the underlying photosphere and 10−8 times that of the Earth's atmosphere at sea level. This makes the chromosphere normally invisible and it can be seen only during a total eclipse, where its reddish colour is revealed. The colour hues are anywhere between pink and red. Without special equipment, the chromosphere cannot normally be seen due to the overwhelming brightness of the photosphere. The chromosphere's spectrum is dominated by emission lines when observed at the solar limb. In particular, one of its strongest lines is the Hα at a wavelength of ; this line is emitted by a hydrogen atom whenever its electron makes a transition from the n=3 to the n=2 energy level. A wavelength of is in the red part of the spectrum, which causes the chromosphere to have a characteristic reddish colour. Phenomena Many different phenomena can be observed in chromospheres. Plage A plage is a particularly bright region within stellar chromospheres, which are often associated with magnetic activity. Spicules The most commonly identified feature in the solar chromosphere are spicules. Spicules rise to the top of the chromosphere and then sink back down again over the course of about 10 minutes. Oscillations Since the first observations with the instrument SUMER on board SOHO, periodic oscillations in the solar chromosphere have been found with a frequency from to , corresponding to a characteristic periodic time of three minutes. 
Oscillations of the radial component of the plasma velocity are typical of the high chromosphere. The photospheric granulation pattern usually shows no high-frequency oscillations; however, higher-frequency waves were detected in the solar atmosphere (at temperatures typical of the transition region and corona) by TRACE. Loops Plasma loops can be seen at the border of the solar disk in the chromosphere. They are different from solar prominences because they are concentric arches whose maximum temperature is too low for them to be considered coronal features. These cool loops show intense variability: they appear and disappear in some UV lines in less than an hour, or they rapidly expand over 10–20 minutes. Foukal studied these cool loops in detail using observations taken with the EUV spectrometer on Skylab in 1976. When the plasma temperature of these loops becomes coronal, the features appear more stable and evolve over longer times. Network Images taken in typical chromospheric lines show the presence of brighter cells, usually referred to as the network, while the surrounding darker regions are named the internetwork. They look similar to the granules commonly observed on the photosphere as a result of heat convection. On other stars Chromospheres are present on almost all luminous stars other than white dwarfs. They are most prominent and magnetically active on lower-main-sequence stars of F and later spectral types, on brown dwarfs, and on giant and subgiant stars. A spectroscopic measure of chromospheric activity on other stars is the Mount Wilson S-index.
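The characteristic red colour discussed earlier comes from the hydrogen n = 3 to n = 2 transition. As a quick check, a sketch using the textbook Rydberg formula (our own illustration, not a calculation from the article) reproduces the Hα wavelength:

```python
# Rydberg formula for hydrogen: 1/lambda = R_H * (1/n_lower^2 - 1/n_upper^2), n_upper > n_lower.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

def hydrogen_line_nm(n_lower: int, n_upper: int) -> float:
    inv_wavelength = R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_wavelength  # metres -> nanometres

print(hydrogen_line_nm(2, 3))  # ~656 nm: the H-alpha line that colours the chromosphere red
print(hydrogen_line_nm(2, 4))  # ~486 nm: H-beta, for comparison
```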
Physical sciences
Solar System
Astronomy
54681
https://en.wikipedia.org/wiki/NP-hardness
NP-hardness
In computational complexity theory, a computational problem H is called NP-hard if, for every problem L which can be solved in non-deterministic polynomial-time, there is a polynomial-time reduction from L to H. That is, assuming a solution for H takes 1 unit time, Hs solution can be used to solve L in polynomial time. As a consequence, finding a polynomial time algorithm to solve a single NP-hard problem would give polynomial time algorithms for all the problems in the complexity class NP. As it is suspected, but unproven, that P≠NP, it is unlikely that any polynomial-time algorithms for NP-hard problems exist. A simple example of an NP-hard problem is the subset sum problem. Informally, if H is NP-hard, then it is at least as difficult to solve as the problems in NP. However, the opposite direction is not true: some problems are undecidable, and therefore even more difficult to solve than all problems in NP, but they are probably not NP-hard (unless P=NP). Definition A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time many-one reduction from L to H. Another definition is to require that there be a polynomial-time reduction from an NP-complete problem G to H. As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time so this new definition implies the previous one. It does not restrict the class NP-hard to decision problems, and it also includes search problems or optimization problems. Consequences If P ≠ NP, then NP-hard problems could not be solved in polynomial time. Some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio (in particular, those in APX) or even up to any approximation ratio (those in PTAS or FPTAS). There are many classes of approximability, each one enabling approximation up to a different level. Examples All NP-complete problems are also NP-hard (see List of NP-complete problems). For example, the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph—commonly known as the travelling salesman problem—is NP-hard. The subset sum problem is another example: given a set of integers, does any non-empty subset of them add up to zero? That is a decision problem and happens to be NP-complete. There are decision problems that are NP-hard but not NP-complete such as the halting problem. That is the problem which asks "given a program and its input, will it run forever?" That is a yes/no question and so is a decision problem. It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments and when it finds one that satisfies the formula it halts and otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP since all problems in NP are decidable in a finite number of operations, but the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor Undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE). NP-naming convention NP-hard problems do not have to be elements of the complexity class NP. 
As NP plays a central role in computational complexity, it is used as the basis of several classes:

NP: Class of computational decision problems for which any given yes-solution can be verified as a solution in polynomial time by a deterministic Turing machine (or solvable by a non-deterministic Turing machine in polynomial time).
NP-hard: Class of problems which are at least as hard as the hardest problems in NP. Problems that are NP-hard do not have to be elements of NP; indeed, they may not even be decidable.
NP-complete: Class of decision problems which contains the hardest problems in NP. Each NP-complete problem has to be in NP.
NP-easy: At most as hard as NP, but not necessarily in NP.
NP-equivalent: Decision problems that are both NP-hard and NP-easy, but not necessarily in NP.
NP-intermediate: If P and NP are different, then there exist decision problems in the region of NP that fall between P and the NP-complete problems. (If P and NP are the same class, then NP-intermediate problems do not exist, because in this case every NP-complete problem would fall in P, and by definition, every problem in NP can be reduced to an NP-complete problem.)

Application areas NP-hard problems are often tackled with rules-based languages in areas including: approximate computing, configuration, cryptography, data mining, decision support, phylogenetics, planning, process monitoring and control, rosters or schedules, routing/vehicle routing, and scheduling.
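The subset sum problem used as an example earlier has a simple brute-force decider. The sketch below (a toy, not an efficient algorithm; no polynomial-time algorithm is known) enumerates all non-empty subsets, which illustrates why the naive approach is exponential in the input size:

```python
from itertools import combinations

def subset_sums_to_zero(numbers: list[int]) -> bool:
    """Decide the subset sum example above: does any non-empty subset sum to zero?

    Brute force over all 2^n - 1 non-empty subsets, so exponential time in n.
    """
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return True
    return False

print(subset_sums_to_zero([-7, -3, -2, 5, 8]))  # True: (-3) + (-2) + 5 == 0
print(subset_sums_to_zero([1, 2, 4, 8]))        # False
```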
Mathematics
Complexity theory
null
54695
https://en.wikipedia.org/wiki/RAID
RAID
RAID (; redundant array of inexpensive disks or redundant array of independent disks) is a data storage virtualization technology that combines multiple physical data storage components into one or more logical units for the purposes of data redundancy, performance improvement, or both. This is in contrast to the previous concept of highly reliable mainframe disk drives known as single large expensive disk (SLED). Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives. History The term "RAID" was invented by David Patterson, Garth Gibson, and Randy Katz at the University of California, Berkeley in 1987. In their June 1988 paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)", presented at the SIGMOD Conference, they argued that the top-performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive. Although not yet using that terminology, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper's publication, including the following: Mirroring (RAID 1) was well established in the 1970s including, for example, Tandem NonStop Systems. In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4. Around 1983, DEC began shipping subsystem mirrored RA8X disk drives (now known as RAID 1) as part of its HSC50 subsystem. In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5. Around 1988, the Thinking Machines' DataVault used error correction codes (now known as RAID 2) in an array of disk drives. A similar approach was used in the early 1960s on the IBM 353. Industry manufacturers later redefined the RAID acronym to stand for "redundant array of independent disks". Overview Many RAID levels employ an error protection scheme called "parity", a widely used method in information technology to provide fault tolerance in a given set of data. Most use simple XOR, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois field or Reed–Solomon error correction. RAID can also provide data security with solid-state drives (SSDs) without the expense of an all-SSD system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage, an appropriate controller is needed that uses the fast SSD for all read operations. Adaptec calls this "hybrid RAID". Standard levels Originally, there were five standard levels of RAID, but many variations have evolved, including several nested levels and many non-standard levels (mostly proprietary). 
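Before turning to the individual levels, a toy illustration of the parity idea from the overview above: in the single-parity levels (RAID 3, 4 and 5) the parity block is the bytewise XOR of the data blocks, so any one missing block can be rebuilt by XOR-ing the surviving blocks with the parity. This sketch only illustrates the principle; it is not how a real controller or software RAID lays out stripes.

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """Bytewise XOR of equal-length blocks: the parity used by single-parity RAID."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data blocks on three drives, parity stored on a fourth drive.
data = [b"ABCD", b"1234", b"wxyz"]
parity = xor_blocks(data)

# Simulate losing the second drive and rebuilding its block from the survivors plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
print(rebuilt)  # b'1234'
```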
RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard: RAID 0 consists of block-level striping, but no mirroring or parity. Assuming n fully-used drives of equal capacity, the capacity of a RAID 0 volume matches that of a spanned volume: the total of the n drives' capacities. However, because striping distributes the contents of each file across all drives, the failure of any drive renders the entire RAID 0 volume inaccessible. Typically, all data is lost, and files cannot be recovered without a backup copy. By contrast, a spanned volume, which stores files sequentially, loses data stored on the failed drive but preserves data stored on the remaining drives. However, recovering the files after drive failure can be challenging and often depends on the specifics of the filesystem. Regardless, files that span onto or off a failed drive will be permanently lost. On the other hand, the benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of drives because, unlike spanned volumes, reads and writes are performed concurrently. The cost is increased vulnerability to drive failures—since any drive in a RAID 0 setup failing causes the entire volume to be lost, the average failure rate of the volume rises with the number of attached drives. This makes RAID 0 a poor choice for scenarios requiring data reliability or fault tolerance. RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two or more drives, thereby producing a "mirrored set" of drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning. RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive. This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2), it is not used by any commercially available system. RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice. RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP. 
The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read operation does not have to spread across all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers. RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks. Like all single-parity concepts, large RAID 5 implementations are susceptible to system failures because of trends regarding array rebuild time and the chance of drive failure during rebuild (see "Increasing rebuild time and failure probability" section, below). Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array. RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. RAID 10 also minimizes these problems. Nested (hybrid) RAID In what was originally termed hybrid RAID, many storage controllers allow RAID levels to be nested. The elements of a RAID may be either individual drives or arrays themselves. Arrays are rarely nested more than one level deep. The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the "+" (yielding RAID 10 and RAID 50, respectively). RAID 0+1: creates two stripes and mirrors them. If a single drive failure occurs then one of the mirrors has failed, at this point it is running effectively as RAID 0 with no redundancy. Significantly higher risk is introduced during a rebuild than RAID 1+0 as all the data from all the drives in the remaining stripe has to be read rather than just from one drive, increasing the chance of an unrecoverable read error (URE) and significantly extending the rebuild window. RAID 1+0: (see: RAID 10) creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses so long as no mirror loses all its drives. JBOD RAID N+N: With JBOD (just a bunch of disks), it is possible to concatenate disks, but also volumes such as RAID sets. With larger drive capacities, write delay and rebuilding time increase dramatically (especially, as described above, with RAID 5 and RAID 6). By splitting a larger RAID N set into smaller subsets and concatenating them with linear JBOD, write and rebuilding time will be reduced. 
If a hardware RAID controller is not capable of nesting linear JBOD with RAID N, then linear JBOD can be achieved with OS-level software RAID in combination with separate RAID N subset volumes created within one, or more, hardware RAID controller(s). Besides a drastic speed increase, this also provides a substantial advantage: the possibility to start a linear JBOD with a small set of disks and to be able to expand the total set with disks of different size, later on (in time, disks of bigger size become available on the market). There is another advantage in the form of disaster recovery (if a RAID N subset happens to fail, then the data on the other RAID N subsets is not lost, reducing restore time). Non-standard levels Many configurations other than the basic numbered RAID levels are possible, and many companies, organizations, and groups have created their own non-standard configurations, in many cases designed to meet the specialized needs of a small niche group. Such configurations include the following: Linux MD RAID 10 provides a general RAID driver that in its "near" layout defaults to a standard RAID 1 with two drives, and a standard RAID 1+0 with four drives; however, it can include any number of drives, including odd numbers. With its "far" layout, MD RAID 10 can run both striped and mirrored, even with only two drives in f2 layout; this runs mirroring with striped reads, giving the read performance of RAID 0. Regular RAID 1, as provided by Linux software RAID, does not stripe reads, but can perform reads in parallel. Hadoop has a RAID system that generates a parity file by xor-ing a stripe of blocks in a single HDFS file. BeeGFS, the parallel file system, has internal striping (comparable to file-based RAID0) and replication (comparable to file-based RAID10) options to aggregate throughput and capacity of multiple servers and is typically based on top of an underlying RAID to make disk failures transparent. Declustered RAID scatters dual (or more) copies of the data across all disks (possibly hundreds) in a storage subsystem, while holding back enough spare capacity to allow for a few disks to fail. The scattering is based on algorithms which give the appearance of arbitrariness. When one or more disks fail the missing copies are rebuilt into that spare capacity, again arbitrarily. Because the rebuild is done from and to all the remaining disks, it operates much faster than with traditional RAID, reducing the overall impact on clients of the storage system. Implementations The distribution of data across multiple drives can be managed either by dedicated computer hardware or by software. A software solution may be part of the operating system, part of the firmware and drivers supplied with a standard drive controller (so-called "hardware-assisted software RAID"), or it may reside entirely within the hardware RAID controller. Hardware-based Hardware RAID controllers can be configured through card BIOS or Option ROM before an operating system is booted, and after the operating system is booted, proprietary configuration utilities are available from the manufacturer of each controller. 
Unlike the network interface controllers for Ethernet, which can usually be configured and serviced entirely through the common operating system paradigms like ifconfig in Unix, without a need for any third-party tools, each manufacturer of each RAID controller usually provides their own proprietary software tooling for each operating system that they deem to support, ensuring a vendor lock-in, and contributing to reliability issues. For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are required to enable Linux compatibility layer, and use the Linux tooling from Adaptec, potentially compromising the stability, reliability and security of their setup, especially when taking the long-term view. Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, and provide tools for monitoring RAID volume status, as well as facilitation of drive identification through LED blinking, alarm management and hot spare disk designations from within the operating system without having to reboot into card BIOS. For example, this was the approach taken by OpenBSD in 2005 with its bio(4) pseudo-device and the bioctl utility, which provide volume status, and allow LED/alarm/hotspare control, as well as the sensors (including the drive sensor) for health monitoring; this approach has subsequently been adopted and extended by NetBSD in 2007 as well. Software-based Software RAID implementations are provided by many modern operating systems. Software RAID can be implemented as: A layer that abstracts multiple devices, thereby providing a single virtual device (such as Linux kernel's md and OpenBSD's softraid) A more generic logical volume manager (provided with most server-class operating systems such as Veritas or LVM) A component of the file system (such as ZFS, Spectrum Scale or Btrfs) A layer that sits above any file system and provides parity protection to user data (such as RAID-F) Some advanced file systems are designed to organize data across multiple storage devices directly, without needing the help of a third-party logical volume manager: ZFS supports the equivalents of RAID 0, RAID 1, RAID 5 (RAID-Z1) single-parity, RAID 6 (RAID-Z2) double-parity, and a triple-parity version (RAID-Z3) also referred to as RAID 7. As it always stripes over top-level vdevs, it supports equivalents of the 1+0, 5+0, and 6+0 nested RAID levels (as well as striped triple-parity sets) but not other nested combinations. ZFS is the native file system on Solaris and illumos, and is also available on FreeBSD and Linux. Open-source ZFS implementations are actively developed under the OpenZFS umbrella project. Spectrum Scale, initially developed by IBM for media streaming and scalable analytics, supports declustered RAID protection schemes up to n+3. A particularity is the dynamic rebuilding priority which runs with low impact in the background until a data chunk hits n+0 redundancy, in which case this chunk is quickly rebuilt to at least n+1. On top, Spectrum Scale supports metro-distance RAID 1. Btrfs supports RAID 0, RAID 1 and RAID 10 (RAID 5 and 6 are under development). XFS was originally designed to provide an integrated volume manager that supports concatenating, mirroring and striping of multiple physical storage devices. However, the implementation of XFS in Linux kernel lacks the integrated volume manager. 
Many operating systems provide RAID implementations, including the following: Hewlett-Packard's OpenVMS operating system supports RAID 1. The mirrored disks, called a "shadow set", can be in different locations to assist in disaster recovery. Apple's macOS and macOS Server natively support RAID 0, RAID 1, and RAID 1+0, which can be created with Disk Utility or its command-line interface, while RAID 4 and RAID 5 can only be created using the third-party software SoftRAID by OWC, with the driver for SoftRAID access natively included since macOS 13.3. FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all nestings via GEOM modules and ccd. Linux's md supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and all nestings. Certain reshaping/resizing/expanding operations are also supported. Microsoft Windows supports RAID 0, RAID 1, and RAID 5 using various software implementations. Logical Disk Manager, introduced with Windows 2000, allows for the creation of RAID 0, RAID 1, and RAID 5 volumes by using dynamic disks, but this was limited only to professional and server editions of Windows until the release of Windows 8. Windows XP can be modified to unlock support for RAID 0, 1, and 5. Windows 8 and Windows Server 2012 introduced a RAID-like feature known as Storage Spaces, which also allows users to specify mirroring, parity, or no redundancy on a folder-by-folder basis. These options are similar to RAID 1 and RAID 5, but are implemented at a higher abstraction level. NetBSD supports RAID 0, 1, 4, and 5 via its software implementation, named RAIDframe. OpenBSD supports RAID 0, 1 and 5 via its software implementation, named softraid. If a boot drive fails, the system has to be sophisticated enough to be able to boot from the remaining drive or drives. For instance, consider a computer whose disk is configured as RAID 1 (mirrored drives); if the first drive in the array fails, then a first-stage boot loader might not be sophisticated enough to attempt loading the second-stage boot loader from the second drive as a fallback. The second-stage boot loader for FreeBSD is capable of loading a kernel from such an array. Firmware- and driver-based Software-implemented RAID is not always compatible with the system's boot process, and it is generally impractical for desktop versions of Windows. However, hardware RAID controllers are expensive and proprietary. To fill this gap, inexpensive "RAID controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip, or the chipset built-in RAID function, with proprietary firmware and drivers. During early bootup, the RAID is implemented by the firmware and, once the operating system has been more completely loaded, the drivers take over control. Consequently, such controllers may not work when driver support is not available for the host operating system. An example is Intel Rapid Storage Technology, implemented on many consumer-level motherboards. Because some minimal hardware support is involved, this implementation is also called "hardware-assisted software RAID", "hybrid model" RAID, or even "fake RAID". If RAID 5 is supported, the hardware may provide a hardware XOR accelerator. An advantage of this model over the pure software RAID is that—if using a redundancy mode—the boot drive is protected from failure (due to the firmware) during the boot process even before the operating system's drivers take over. 
Integrity Data scrubbing (referred to in some environments as patrol read) involves periodic reading and checking by the RAID controller of all the blocks in an array, including those not otherwise accessed. This detects bad blocks before use. Data scrubbing checks for bad blocks on each storage device in an array, but also uses the redundancy of the array to recover bad blocks on a single drive and to reassign the recovered data to spare blocks elsewhere on the drive. Frequently, a RAID controller is configured to "drop" a component drive (that is, to assume a component drive has failed) if the drive has been unresponsive for eight seconds or so; this might cause the array controller to drop a good drive because that drive has not been given enough time to complete its internal error recovery procedure. Consequently, using consumer-marketed drives with RAID can be risky, and so-called "enterprise class" drives limit this error recovery time to reduce risk. Western Digital's desktop drives used to have a specific fix. A utility called WDTLER.exe limited a drive's error recovery time. The utility enabled TLER (time limited error recovery), which limits the error recovery time to seven seconds. Around September 2009, Western Digital disabled this feature in their desktop drives (such as the Caviar Black line), making such drives unsuitable for use in RAID configurations. However, Western Digital enterprise class drives are shipped from the factory with TLER enabled. Similar technologies are used by Seagate, Samsung, and Hitachi. For non-RAID usage, an enterprise class drive with a short error recovery timeout that cannot be changed is therefore less suitable than a desktop drive. In late 2010, the Smartmontools program began supporting the configuration of ATA Error Recovery Control, allowing the tool to configure many desktop class hard drives for use in RAID setups. While RAID may protect against physical drive failure, the data is still exposed to operator, software, hardware, and virus destruction. Many studies cite operator fault as a common source of malfunction, such as a server operator replacing the incorrect drive in a faulty RAID, and disabling the system (even temporarily) in the process. An array can be overwhelmed by catastrophic failure that exceeds its recovery capacity and the entire array is at risk of physical damage by fire, natural disaster, and human forces, however backups can be stored off site. An array is also vulnerable to controller failure because it is not always possible to migrate it to a new, different controller without data loss. Weaknesses Correlated failures In practice, the drives are often the same age (with similar wear) and subject to the same environment. Since many drive failures are due to mechanical issues (which are more likely on older drives), this violates the assumptions of independent, identical rate of failure amongst drives; failures are in fact statistically correlated. In practice, the chances for a second failure before the first has been recovered (causing data loss) are higher than the chances for random failures. In a study of about 100,000 drives, the probability of two drives in the same cluster failing within one hour was four times larger than predicted by the exponential statistical distribution—which characterizes processes in which events occur continuously and independently at a constant average rate. The probability of two failures in the same 10-hour period was twice as large as predicted by an exponential distribution. 
Unrecoverable read errors during rebuild Unrecoverable read errors (URE) present as sector read failures, also known as latent sector errors (LSE). The associated media assessment measure, the unrecoverable bit error (UBE) rate, is typically guaranteed to be less than one bit in 10^15 for enterprise-class drives (SCSI, FC, SAS or SATA) and less than one bit in 10^14 for desktop-class drives (IDE/ATA/PATA or SATA). Increasing drive capacities and large RAID 5 instances have led to these maximum error rates being insufficient to guarantee a successful recovery, because of the high likelihood of such an error occurring on one or more of the remaining drives during a RAID set rebuild. When rebuilding, parity-based schemes such as RAID 5 are particularly prone to the effects of UREs, as they affect not only the sector where they occur but also any reconstructed blocks that use that sector for parity computation. Double-parity schemes, such as RAID 6, attempt to address this issue by providing redundancy that tolerates double-drive failures; as a downside, such schemes suffer from an elevated write penalty, the number of times the storage medium must be accessed during a single write operation. Schemes that duplicate (mirror) data in a drive-to-drive manner, such as RAID 1 and RAID 10, have a lower risk from UREs than those using parity computation or mirroring between striped sets. Data scrubbing, as a background process, can be used to detect and recover from UREs, effectively reducing the risk of them happening during RAID rebuilds and causing double-drive failures. The recovery of UREs involves remapping the affected underlying disk sectors, utilizing the drive's sector remapping pool; when UREs are detected during background scrubbing, the data redundancy provided by a fully operational RAID set allows the missing data to be reconstructed and rewritten to a remapped sector. Increasing rebuild time and failure probability Drive capacity has grown at a much faster rate than transfer speed, and error rates have fallen only a little in comparison. Therefore, larger-capacity drives may take hours if not days to rebuild, during which time other drives may fail or as-yet-undetected read errors may surface. The rebuild is also constrained if the entire array is still in operation at reduced capacity. Given an array with only one redundant drive (which applies to RAID levels 3, 4 and 5, and to "classic" two-drive RAID 1), a second drive failure would cause complete failure of the array. Even though individual drives' mean time between failures (MTBF) has increased over time, this increase has not kept pace with the increased storage capacity of the drives. The time to rebuild the array after a single drive failure, as well as the chance of a second failure during a rebuild, have increased over time. Some commentators have declared that RAID 6 is only a "band aid" in this respect, because it only kicks the problem a little further down the road. However, according to the 2006 NetApp study of Berriman et al., the chance of failure decreases by a factor of about 3,800 (relative to RAID 5) for a proper implementation of RAID 6, even when using commodity drives. Nevertheless, if the currently observed technology trends remain unchanged, in 2019 a RAID 6 array will have the same chance of failure as its RAID 5 counterpart had in 2010.
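To put the quoted bit error rates in perspective, a back-of-the-envelope sketch under simplifying assumptions (independent errors at the quoted worst-case rate; the four-drive, 12 TB array is an illustrative choice, and real drives often do better than their specified UBE rate):

```python
def p_clean_rebuild(surviving_drives: int, drive_bytes: float, ure_per_bit: float) -> float:
    """Probability that a rebuild reads every surviving bit without hitting a URE.

    Assumes independent errors at a constant per-bit rate, which is a simplification.
    """
    bits_read = surviving_drives * drive_bytes * 8
    return (1.0 - ure_per_bit) ** bits_read

tb = 1e12  # drive capacities are usually quoted in decimal terabytes

# RAID 5 of four 12 TB drives: the three surviving drives must be read in full.
print(p_clean_rebuild(3, 12 * tb, 1e-14))  # ~0.06 with a desktop-class 1e-14 UBE rate
print(p_clean_rebuild(3, 12 * tb, 1e-15))  # ~0.75 with an enterprise-class 1e-15 rate
```

Under these assumptions a large RAID 5 rebuild on desktop-class drives is more likely than not to encounter at least one URE, which is the motivation given above for double parity, mirroring, and background scrubbing.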
Mirroring schemes such as RAID 10 have a bounded recovery time as they require the copy of a single failed drive, compared with parity schemes such as RAID 6, which require the copy of all blocks of the drives in an array set. Triple parity schemes, or triple mirroring, have been suggested as one approach to improve resilience to an additional drive failure during this large rebuild time. Atomicity A system crash or other interruption of a write operation can result in states where the parity is inconsistent with the data due to non-atomicity of the write process, such that the parity cannot be used for recovery in the case of a disk failure. This is commonly termed the write hole which is a known data corruption issue in older and low-end RAIDs, caused by interrupted destaging of writes to disk. The write hole can be addressed in a few ways: Write-ahead logging. Hardware RAID systems use an onboard nonvolatile cache for this purpose. mdadm can use a dedicated journaling device (to avoid performance penalty, typically, SSDs and NVMs are preferred) for this purpose. Write intent logging. mdadm uses a "write-intent-bitmap". If it finds any location marked as incompletely written at startup, it resyncs them. It closes the write hole but does not protect against loss of in-transit data, unlike a full WAL. Partial parity. mdadm can save a "partial parity" that, when combined with modified chunks, recovers the original parity. This closes the write hole, but again does not protect against loss of in-transit data. Dynamic stripe size. RAID-Z ensures that each block is its own stripe, so every block is complete. Copy-on-write (COW) transactional semantics guard metadata associated with stripes. The downside is IO fragmentation. Avoiding overwriting used stripes. bcachefs, which uses a copying garbage collector, chooses this option. COW again protect references to striped data. Write hole is a little understood and rarely mentioned failure mode for redundant storage systems that do not utilize transactional features. Database researcher Jim Gray wrote "Update in Place is a Poison Apple" during the early days of relational database commercialization. Write-cache reliability There are concerns about write-cache reliability, specifically regarding devices equipped with a write-back cache, which is a caching system that reports the data as written as soon as it is written to cache, as opposed to when it is written to the non-volatile medium. If the system experiences a power loss or other major failure, the data may be irrevocably lost from the cache before reaching the non-volatile storage. For this reason good write-back cache implementations include mechanisms, such as redundant battery power, to preserve cache contents across system failures (including power failures) and to flush the cache at system restart time.
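Returning to the write hole described under Atomicity above, the following sketch shows why an interrupted partial-stripe update is dangerous: a read-modify-write computes new parity as old parity XOR old data XOR new data, so if the data block reaches the disk but the crash happens before the parity block does (or the other way around), the stripe's parity no longer matches its data and a later rebuild using that stripe reconstructs the wrong block. The single-byte "blocks" and their values are purely illustrative.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A consistent stripe: three data blocks plus their parity.
d0, d1, d2 = b"\x11", b"\x22", b"\x33"
parity = xor(xor(d0, d1), d2)

# Read-modify-write of d1: new parity = old parity XOR old data XOR new data.
new_d1 = b"\x55"
new_parity = xor(xor(parity, d1), new_d1)

# Crash after the data write but before the parity write: the stripe is now inconsistent.
d1 = new_d1               # the data block made it to disk
# parity = new_parity     # ...but the parity write never happened
rebuilt_d0 = xor(xor(d1, d2), parity)
print(rebuilt_d0 == d0)   # False: recovering a lost block with stale parity yields garbage
```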
Technology
Data storage and memory
null
54717
https://en.wikipedia.org/wiki/De%20Broglie%E2%80%93Bohm%20theory
De Broglie–Bohm theory
The de Broglie–Bohm theory is an interpretation of quantum mechanics which postulates that, in addition to the wavefunction, an actual configuration of particles exists, even when unobserved. The evolution over time of the configuration of all particles is defined by a guiding equation. The evolution of the wave function over time is given by the Schrödinger equation. The theory is named after Louis de Broglie (1892–1987) and David Bohm (1917–1992). The theory is deterministic and explicitly nonlocal: the velocity of any one particle depends on the value of the guiding equation, which depends on the configuration of all the particles under consideration. Measurements are a particular case of quantum processes described by the theory—for which it yields the same quantum predictions as other interpretations of quantum mechanics. The theory does not have a "measurement problem", due to the fact that the particles have a definite configuration at all times. The Born rule in de Broglie–Bohm theory is not a postulate. Rather, in this theory, the link between the probability density and the wave function has the status of a theorem, a result of a separate postulate, the "quantum equilibrium hypothesis", which is additional to the basic principles governing the wave function. There are several equivalent mathematical formulations of the theory. Overview De Broglie–Bohm theory is based on the following postulates: There is a configuration of the universe, described by coordinates , which is an element of the configuration space . The configuration space is different for different versions of pilot-wave theory. For example, this may be the space of positions of particles, or, in case of field theory, the space of field configurations . The configuration evolves (for spin=0) according to the guiding equation where is the probability current or probability flux, and is the momentum operator. Here, is the standard complex-valued wavefunction from quantum theory, which evolves according to Schrödinger's equation This completes the specification of the theory for any quantum theory with Hamilton operator of type . The configuration is distributed according to at some moment of time , and this consequently holds for all times. Such a state is named quantum equilibrium. With quantum equilibrium, this theory agrees with the results of standard quantum mechanics. Even though this latter relation is frequently presented as an axiom of the theory, Bohm presented it as derivable from statistical-mechanical arguments in the original papers of 1952. This argument was further supported by the work of Bohm in 1953 and was substantiated by Vigier and Bohm's paper of 1954, in which they introduced stochastic fluid fluctuations that drive a process of asymptotic relaxation from quantum non-equilibrium to quantum equilibrium (ρ → |ψ|2). Double-slit experiment The double-slit experiment is an illustration of wave–particle duality. In it, a beam of particles (such as electrons) travels through a barrier that has two slits. If a detector screen is on the side beyond the barrier, the pattern of detected particles shows interference fringes characteristic of waves arriving at the screen from two sources (the two slits); however, the interference pattern is made up of individual dots corresponding to particles that had arrived on the screen. The system seems to exhibit the behaviour of both waves (interference patterns) and particles (dots on the screen). 
If this experiment is modified so that one slit is closed, no interference pattern is observed. Thus, the state of both slits affects the final results. It can also be arranged to have a minimally invasive detector at one of the slits to detect which slit the particle went through. When that is done, the interference pattern disappears. In de Broglie–Bohm theory, the wavefunction is defined at both slits, but each particle has a well-defined trajectory that passes through exactly one of the slits. The final position of the particle on the detector screen and the slit through which the particle passes is determined by the initial position of the particle. Such initial position is not knowable or controllable by the experimenter, so there is an appearance of randomness in the pattern of detection. In Bohm's 1952 papers he used the wavefunction to construct a quantum potential that, when included in Newton's equations, gave the trajectories of the particles streaming through the two slits. In effect the wavefunction interferes with itself and guides the particles by the quantum potential in such a way that the particles avoid the regions in which the interference is destructive and are attracted to the regions in which the interference is constructive, resulting in the interference pattern on the detector screen. To explain the behavior when the particle is detected to go through one slit, one needs to appreciate the role of the conditional wavefunction and how it results in the collapse of the wavefunction; this is explained below. The basic idea is that the environment registering the detection effectively separates the two wave packets in configuration space. Theory Pilot wave The de Broglie–Bohm theory describes a pilot wave in a configuration space and trajectories of particles as in classical mechanics but defined by non-Newtonian mechanics. At every moment of time there exists not only a wavefunction, but also a well-defined configuration of the whole universe (i.e., the system as defined by the boundary conditions used in solving the Schrödinger equation). The de Broglie–Bohm theory works on particle positions and trajectories like classical mechanics but the dynamics are different. In classical mechanics, the accelerations of the particles are imparted directly by forces, which exist in physical three-dimensional space. In de Broglie–Bohm theory, the quantum "field exerts a new kind of "quantum-mechanical" force". Bohm hypothesized that each particle has a "complex and subtle inner structure" that provides the capacity to react to the information provided by the wavefunction by the quantum potential. Also, unlike in classical mechanics, physical properties (e.g., mass, charge) are spread out over the wavefunction in de Broglie–Bohm theory, not localized at the position of the particle. The wavefunction itself, and not the particles, determines the dynamical evolution of the system: the particles do not act back onto the wave function. As Bohm and Hiley worded it, "the Schrödinger equation for the quantum field does not have sources, nor does it have any other way by which the field could be directly affected by the condition of the particles [...] the quantum theory can be understood completely in terms of the assumption that the quantum field has no sources or other forms of dependence on the particles". P. Holland considers this lack of reciprocal action of particles and wave function to be one "[a]mong the many nonclassical properties exhibited by this theory". 
Holland later called this a merely apparent lack of back reaction, due to the incompleteness of the description. In what follows below, the setup for one particle moving in is given followed by the setup for N particles moving in 3 dimensions. In the first instance, configuration space and real space are the same, while in the second, real space is still , but configuration space becomes . While the particle positions themselves are in real space, the velocity field and wavefunction are on configuration space, which is how particles are entangled with each other in this theory. Extensions to this theory include spin and more complicated configuration spaces. We use variations of for particle positions, while represents the complex-valued wavefunction on configuration space. Guiding equation For a spinless single particle moving in , the particle's velocity is For many particles labeled for the -th particle their velocities are The main fact to notice is that this velocity field depends on the actual positions of all of the particles in the universe. As explained below, in most experimental situations, the influence of all of those particles can be encapsulated into an effective wavefunction for a subsystem of the universe. Schrödinger's equation The one-particle Schrödinger equation governs the time evolution of a complex-valued wavefunction on . The equation represents a quantized version of the total energy of a classical system evolving under a real-valued potential function on : For many particles, the equation is the same except that and are now on configuration space, : This is the same wavefunction as in conventional quantum mechanics. Relation to the Born rule In Bohm's original papers, he discusses how de Broglie–Bohm theory results in the usual measurement results of quantum mechanics. The main idea is that this is true if the positions of the particles satisfy the statistical distribution given by . And that distribution is guaranteed to be true for all time by the guiding equation if the initial distribution of the particles satisfies . For a given experiment, one can postulate this as being true and verify it experimentally. But, as argued by Dürr et al., one needs to argue that this distribution for subsystems is typical. The authors argue that , by virtue of its equivariance under the dynamical evolution of the system, is the appropriate measure of typicality for initial conditions of the positions of the particles. The authors then prove that the vast majority of possible initial configurations will give rise to statistics obeying the Born rule (i.e., ) for measurement outcomes. In summary, in a universe governed by the de Broglie–Bohm dynamics, Born rule behavior is typical. The situation is thus analogous to the situation in classical statistical physics. A low-entropy initial condition will, with overwhelmingly high probability, evolve into a higher-entropy state: behavior consistent with the second law of thermodynamics is typical. There are anomalous initial conditions that would give rise to violations of the second law; however in the absence of some very detailed evidence supporting the realization of one of those conditions, it would be quite unreasonable to expect anything but the actually observed uniform increase of entropy. 
Similarly in the de Broglie–Bohm theory, there are anomalous initial conditions that would produce measurement statistics in violation of the Born rule (conflicting with the predictions of standard quantum theory), but the typicality theorem shows that, absent some specific reason to believe one of those special initial conditions was in fact realized, Born rule behavior is what one should expect. It is in this qualified sense that the Born rule is, for the de Broglie–Bohm theory, a theorem rather than (as in ordinary quantum theory) an additional postulate. It can also be shown that a distribution of particles which is not distributed according to the Born rule (that is, a distribution "out of quantum equilibrium") and evolving under the de Broglie–Bohm dynamics is overwhelmingly likely to evolve dynamically into a state distributed as $|\psi|^2$. The conditional wavefunction of a subsystem In the formulation of the de Broglie–Bohm theory, there is only a wavefunction for the entire universe (which always evolves by the Schrödinger equation). Here, the "universe" is simply the system limited by the same boundary conditions used to solve the Schrödinger equation. However, once the theory is formulated, it is convenient to introduce a notion of wavefunction also for subsystems of the universe. Let us write the wavefunction of the universe as $\Psi(t, q^{\mathrm{I}}, q^{\mathrm{II}})$, where $q^{\mathrm{I}}$ denotes the configuration variables associated to some subsystem (I) of the universe, and $q^{\mathrm{II}}$ denotes the remaining configuration variables. Denote respectively by $Q^{\mathrm{I}}(t)$ and $Q^{\mathrm{II}}(t)$ the actual configuration of subsystem (I) and of the rest of the universe. For simplicity, we consider here only the spinless case. The conditional wavefunction of subsystem (I) is defined by $\psi^{\mathrm{I}}(t, q^{\mathrm{I}}) = \Psi(t, q^{\mathrm{I}}, Q^{\mathrm{II}}(t))$. It follows immediately from the fact that $Q(t) = (Q^{\mathrm{I}}(t), Q^{\mathrm{II}}(t))$ satisfies the guiding equation that also the configuration $Q^{\mathrm{I}}(t)$ satisfies a guiding equation identical to the one presented in the formulation of the theory, with the universal wavefunction replaced with the conditional wavefunction $\psi^{\mathrm{I}}$. Also, the fact that $Q(t)$ is random with probability density given by the square modulus of $\Psi(t,\cdot)$ implies that the conditional probability density of $Q^{\mathrm{I}}(t)$ given $Q^{\mathrm{II}}(t)$ is given by the square modulus of the (normalized) conditional wavefunction $\psi^{\mathrm{I}}(t,\cdot)$ (in the terminology of Dürr et al. this fact is called the fundamental conditional probability formula). Unlike the universal wavefunction, the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation, but in many situations it does. For instance, if the universal wavefunction factors as $\Psi(t, q^{\mathrm{I}}, q^{\mathrm{II}}) = \psi^{\mathrm{I}}(t, q^{\mathrm{I}})\,\psi^{\mathrm{II}}(t, q^{\mathrm{II}})$, then the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to $\psi^{\mathrm{I}}$ (this is what standard quantum theory would regard as the wavefunction of subsystem (I)). If, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then $\psi^{\mathrm{I}}$ does satisfy a Schrödinger equation. More generally, assume that the universal wave function can be written in the form $\Psi(t, q^{\mathrm{I}}, q^{\mathrm{II}}) = \psi^{\mathrm{I}}(t, q^{\mathrm{I}})\,\psi^{\mathrm{II}}(t, q^{\mathrm{II}}) + \Phi(t, q^{\mathrm{I}}, q^{\mathrm{II}})$, where $\Phi$ solves the Schrödinger equation and $\Phi(t, q^{\mathrm{I}}, q^{\mathrm{II}}) = 0$ for all $q^{\mathrm{I}}$ and for all $q^{\mathrm{II}}$ in a neighbourhood of $Q^{\mathrm{II}}(t)$. Then, again, the conditional wavefunction of subsystem (I) is (up to an irrelevant scalar factor) equal to $\psi^{\mathrm{I}}$, and if the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then $\psi^{\mathrm{I}}$ satisfies a Schrödinger equation. The fact that the conditional wavefunction of a subsystem does not always evolve by the Schrödinger equation is related to the fact that the usual collapse rule of standard quantum theory emerges from the Bohmian formalism when one considers conditional wavefunctions of subsystems.
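The definition $\psi^{\mathrm{I}}(t, q^{\mathrm{I}}) = \Psi(t, q^{\mathrm{I}}, Q^{\mathrm{II}}(t))$ is easy to experiment with numerically. The sketch below is an illustrative toy (not from the article): it builds an entangled two-particle wavefunction on a grid and conditions it on an assumed actual configuration of subsystem (II); the specific state, grid and value of the environment configuration are arbitrary choices.

```python
import numpy as np

# Illustrative toy: conditioning the "universal" wavefunction Psi(x, y)
# on the actual Bohmian configuration of subsystem II.
x = np.linspace(-8.0, 8.0, 401)
y = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")

# Entangled state: superposition of two correlated Gaussian wave packets.
Psi = np.exp(-((X - 2)**2 + (Y - 2)**2)) + np.exp(-((X + 2)**2 + (Y + 2)**2))
Psi /= np.sqrt(np.sum(np.abs(Psi)**2) * dx * dx)

# Suppose the actual configuration of subsystem II (the "environment") is
# Y_actual = 2.1, i.e. it has registered the branch near +2.
Y_actual = 2.1
j = np.argmin(np.abs(y - Y_actual))

psi_I = Psi[:, j]                                        # psi^I(x) = Psi(x, Y_actual)
psi_I = psi_I / np.sqrt(np.sum(np.abs(psi_I)**2) * dx)   # normalize

# The conditional wavefunction is concentrated on a single branch:
print("mean x given Y_actual:", np.sum(x * np.abs(psi_I)**2) * dx)
```

Conditioning on the environment's actual configuration in this way is exactly the step that produces the effective "collapse" discussed above.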
Extensions Relativity Pilot-wave theory is explicitly nonlocal, which is in ostensible conflict with special relativity. Various extensions of "Bohm-like" mechanics exist that attempt to resolve this problem. Bohm himself in 1953 presented an extension of the theory satisfying the Dirac equation for a single particle. However, this was not extensible to the many-particle case because it used an absolute time. A renewed interest in constructing Lorentz-invariant extensions of Bohmian theory arose in the 1990s; see Bohm and Hiley: The Undivided Universe and references therein. Another approach is given by Dürr et al., who use Bohm–Dirac models and a Lorentz-invariant foliation of space-time. Thus, Dürr et al. (1999) showed that it is possible to formally restore Lorentz invariance for the Bohm–Dirac theory by introducing additional structure. This approach still requires a foliation of space-time. While this is in conflict with the standard interpretation of relativity, the preferred foliation, if unobservable, does not lead to any empirical conflicts with relativity. In 2013, Dürr et al. suggested that the required foliation could be covariantly determined by the wavefunction. The relation between nonlocality and preferred foliation can be better understood as follows. In de Broglie–Bohm theory, nonlocality manifests as the fact that the velocity and acceleration of one particle depends on the instantaneous positions of all other particles. On the other hand, in the theory of relativity the concept of instantaneousness does not have an invariant meaning. Thus, to define particle trajectories, one needs an additional rule that defines which space-time points should be considered instantaneous. The simplest way to achieve this is to introduce a preferred foliation of space-time by hand, such that each hypersurface of the foliation defines a hypersurface of equal time. Initially, it had been considered impossible to set out a description of photon trajectories in the de Broglie–Bohm theory in view of the difficulties of describing bosons relativistically. In 1996, Partha Ghose presented a relativistic quantum-mechanical description of spin-0 and spin-1 bosons starting from the Duffin–Kemmer–Petiau equation, setting out Bohmian trajectories for massive bosons and for massless bosons (and therefore photons). In 2001, Jean-Pierre Vigier emphasized the importance of deriving a well-defined description of light in terms of particle trajectories in the framework of either the Bohmian mechanics or the Nelson stochastic mechanics. The same year, Ghose worked out Bohmian photon trajectories for specific cases. Subsequent weak-measurement experiments yielded trajectories that coincide with the predicted trajectories. The significance of these experimental findings is controversial. Chris Dewdney and G. Horton have proposed a relativistically covariant, wave-functional formulation of Bohm's quantum field theory and have extended it to a form that allows the inclusion of gravity. Nikolić has proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wavefunctions. He has developed a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which is no longer a probability density in space, but a probability density in space-time. He uses this generalized probabilistic interpretation to formulate a relativistic-covariant version of de Broglie–Bohm theory without introducing a preferred foliation of space-time. 
His work also covers the extension of the Bohmian interpretation to a quantization of fields and strings. Roderick I. Sutherland at the University of Sydney has a Lagrangian formalism for the pilot wave and its beables. It draws on Yakir Aharonov's retrocausal weak measurements to explain many-particle entanglement in a special relativistic way without the need for configuration space. The basic idea was already published by Olivier Costa de Beauregard in the 1950s and is also used by John Cramer in his transactional interpretation, except that the beables exist between the von Neumann strong projection operator measurements. Sutherland's Lagrangian includes two-way action-reaction between pilot wave and beables. Therefore, it is a post-quantum non-statistical theory with final boundary conditions that violate the no-signal theorems of quantum theory. Just as special relativity is a limiting case of general relativity when the spacetime curvature vanishes, so, too, is statistical no-entanglement-signaling quantum theory with the Born rule a limiting case of the post-quantum action-reaction Lagrangian when the reaction is set to zero and the final boundary condition is integrated out. Spin To incorporate spin, the wavefunction becomes complex-vector-valued. The value space is called spin space; for a spin-1/2 particle, spin space can be taken to be $\mathbb{C}^2$. The guiding equation is modified by taking inner products in spin space to reduce the complex vectors to complex numbers, as illustrated in the sketch below. The Schrödinger equation is modified by adding a Pauli spin term that couples each particle's magnetic moment to the magnetic field, where $m_k$, $e_k$ and $\mu_k$ are the mass, charge and magnetic moment of the $k$-th particle; $\hat{\mathbf{S}}_k$ is the appropriate spin operator acting in the $k$-th particle's spin space; $s_k$ is the spin quantum number of the $k$-th particle ($s_k = 1/2$ for an electron); $\mathbf{A}$ is the vector potential in $\mathbb{R}^3$; $\mathbf{B} = \nabla \times \mathbf{A}$ is the magnetic field in $\mathbb{R}^3$; $D_k$ is the covariant derivative, involving the vector potential, ascribed to the coordinates of the $k$-th particle (in SI units); $\psi$ is the wavefunction defined on the multidimensional configuration space, e.g. a system consisting of two spin-1/2 particles and one spin-1 particle has a wavefunction of the form $\psi : \mathbb{R}^9 \times \mathbb{R} \to \mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^3$, where $\otimes$ is a tensor product, so this spin space is 12-dimensional; and $(\cdot, \cdot)$ is the inner product in spin space. Stochastic electrodynamics Stochastic electrodynamics (SED) is an extension of the de Broglie–Bohm interpretation of quantum mechanics, with the electromagnetic zero-point field (ZPF) playing a central role as the guiding pilot-wave. Modern approaches to SED, like those proposed by the group around the late Gerhard Grössing, among others, consider wave and particle-like quantum effects as well-coordinated emergent systems. These emergent systems are the result of speculated and calculated sub-quantum interactions with the zero-point field. Quantum field theory In Dürr et al., the authors describe an extension of de Broglie–Bohm theory for handling creation and annihilation operators, which they refer to as "Bell-type quantum field theories". The basic idea is that configuration space becomes the (disjoint) space of all possible configurations of any number of particles. For part of the time, the system evolves deterministically under the guiding equation with a fixed number of particles. But under a stochastic process, particles may be created and annihilated. The distribution of creation events is dictated by the wavefunction. The wavefunction itself is evolving at all times over the full multi-particle configuration space.
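As a concrete illustration of how the spin-space inner product reduces the spinor-valued wavefunction to an ordinary velocity (referring back to the Spin passage above), here is a small sketch. It assumes the standard Pauli-theory guiding law $v = (\hbar/m)\operatorname{Im}[(\psi,\partial_x\psi)/(\psi,\psi)]$ in one dimension; the particular spinor wavefunction and all parameter values are arbitrary illustrative choices, not anything prescribed by the article.

```python
import numpy as np

# Illustrative sketch, assuming the standard Pauli-theory guiding law
#   v = (hbar/m) * Im[ (psi, d psi/dx) / (psi, psi) ],
# where (.,.) is the inner product in spin space.
hbar, m = 1.0, 1.0

def spinor_psi(x):
    """Two-component spinor: right-moving spin-up packet plus left-moving spin-down packet."""
    up   = np.exp(-(x + 2.0)**2 + 1j * 1.5 * x) * np.array([1.0, 0.0])
    down = np.exp(-(x - 2.0)**2 - 1j * 1.5 * x) * np.array([0.0, 1.0])
    return up + down

def velocity(x, dx=1e-5):
    psi  = spinor_psi(x)
    dpsi = (spinor_psi(x + dx) - spinor_psi(x - dx)) / (2.0 * dx)
    num  = np.vdot(psi, dpsi)        # spin-space inner product (psi, psi')
    den  = np.vdot(psi, psi).real    # (psi, psi)
    return hbar / m * np.imag(num / den)

# Near x = 0.5 the spin-down, left-moving packet dominates, so the single
# scalar velocity comes out close to its momentum per unit mass (about -1.5).
print(velocity(0.5))
```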
Hrvoje Nikolić introduces a purely deterministic de Broglie–Bohm theory of particle creation and destruction, according to which particle trajectories are continuous, but particle detectors behave as if particles have been created or destroyed even when a true creation or destruction of particles does not take place. Curved space To extend de Broglie–Bohm theory to curved space (Riemannian manifolds in mathematical parlance), one simply notes that all of the elements of these equations make sense, such as gradients and Laplacians. Thus, we use equations that have the same form as above. Topological and boundary conditions may apply in supplementing the evolution of Schrödinger's equation. For a de Broglie–Bohm theory on curved space with spin, the spin space becomes a vector bundle over configuration space, and the potential in Schrödinger's equation becomes a local self-adjoint operator acting on that space. The field equations for the de Broglie–Bohm theory in the relativistic case with spin can also be given for curved space-times with torsion. In a general spacetime with curvature and torsion, the guiding equation for the four-velocity of an elementary fermion particle is given by the normalized Dirac current, where the wave function $\psi$ is a spinor, $\bar{\psi}$ is the corresponding adjoint, $\gamma^a$ are the Dirac matrices, and $e^i{}_a$ is a tetrad. If the wave function propagates according to the curved Dirac equation, then the particle moves according to the Mathisson–Papapetrou equations of motion, which are an extension of the geodesic equation. This relativistic wave-particle duality follows from the conservation laws for the spin tensor and energy-momentum tensor, and also from the covariant Heisenberg picture equation of motion. Exploiting nonlocality De Broglie and Bohm's causal interpretation of quantum mechanics was later extended by Bohm, Vigier, Hiley, Valentini and others to include stochastic properties. Bohm and other physicists, including Valentini, view the Born rule linking the wavefunction $\psi$ to the probability density function $\rho = |\psi|^2$ as representing not a basic law, but a result of a system having reached quantum equilibrium during the course of the time development under the Schrödinger equation. It can be shown that, once an equilibrium has been reached, the system remains in such equilibrium over the course of its further evolution: this follows from the continuity equation associated with the Schrödinger evolution of $\psi$. It is less straightforward to demonstrate whether and how such an equilibrium is reached in the first place. Antony Valentini has extended de Broglie–Bohm theory to include signal nonlocality that would allow entanglement to be used as a stand-alone communication channel without a secondary classical "key" signal to "unlock" the message encoded in the entanglement. This violates orthodox quantum theory but has the virtue of making the parallel universes of the chaotic inflation theory observable in principle. Unlike in de Broglie–Bohm theory, in Valentini's theory the wavefunction evolution also depends on the ontological variables. This introduces an instability, a feedback loop that pushes the hidden variables out of "sub-quantal heat death". The resulting theory becomes nonlinear and non-unitary. Valentini argues that the laws of quantum mechanics are emergent and form a "quantum equilibrium" that is analogous to thermal equilibrium in classical dynamics, such that other "quantum non-equilibrium" distributions may in principle be observed and exploited, for which the statistical predictions of quantum theory are violated.
It is controversially argued that quantum theory is merely a special case of a much wider nonlinear physics, a physics in which non-local (superluminal) signalling is possible, and in which the uncertainty principle can be violated. Results Below are some highlights of the results that arise out of an analysis of de Broglie–Bohm theory. Experimental results agree with all of quantum mechanics' standard predictions insofar as it has them. But while standard quantum mechanics is limited to discussing the results of "measurements", de Broglie–Bohm theory governs the dynamics of a system without the intervention of outside observers (p. 117 in Bell). The basis for agreement with standard quantum mechanics is that the particles are distributed according to $|\psi|^2$. This is a statement of observer ignorance: the initial positions are represented by a statistical distribution, so deterministic trajectories will result in a statistical distribution. Measuring spin and polarization According to ordinary quantum theory, it is not possible to measure the spin or polarization of a particle directly; instead, the component in one direction is measured; the outcome from a single particle may be 1, meaning that the particle is aligned with the measuring apparatus, or −1, meaning that it is aligned the opposite way. An ensemble of particles prepared by a polarizer to be in state 1 will all measure polarized in state 1 in a subsequent apparatus. A polarized ensemble sent through a polarizer set at an angle $\theta$ to the first pass will result in some values of 1 and some of −1 with a probability that depends on the relative alignment. For a full explanation of this, see the Stern–Gerlach experiment. In de Broglie–Bohm theory, the results of a spin experiment cannot be analyzed without some knowledge of the experimental setup. It is possible to modify the setup so that the trajectory of the particle is unaffected, but that the particle with one setup registers as spin-up, while in the other setup it registers as spin-down. Thus, for the de Broglie–Bohm theory, the particle's spin is not an intrinsic property of the particle; instead spin is, so to speak, in the wavefunction of the particle in relation to the particular device being used to measure the spin. This is an illustration of what is sometimes referred to as contextuality and is related to naive realism about operators. Interpretationally, measurement results are a deterministic property of the system and its environment, which includes information about the experimental setup including the context of co-measured observables; in no sense does the system itself possess the property being measured, as would have been the case in classical physics. Measurements, the quantum formalism, and observer independence De Broglie–Bohm theory gives the same results as (non-relativistic) quantum mechanics. It treats the wavefunction as a fundamental object in the theory, as the wavefunction describes how the particles move. This means that no experiment can distinguish between the two theories. This section outlines the ideas as to how the standard quantum formalism arises out of the de Broglie–Bohm theory. Collapse of the wavefunction De Broglie–Bohm theory is a theory that applies primarily to the whole universe. That is, there is a single wavefunction governing the motion of all of the particles in the universe according to the guiding equation. Theoretically, the motion of one particle depends on the positions of all of the other particles in the universe.
In some situations, such as in experimental systems, we can represent the system itself in terms of a de Broglie–Bohm theory in which the wavefunction of the system is obtained by conditioning on the environment of the system. Thus, the system can be analyzed with Schrödinger's equation and the guiding equation, with an initial distribution for the particles in the system (see the section on the conditional wavefunction of a subsystem for details). It requires a special setup for the conditional wavefunction of a system to obey a quantum evolution. When a system interacts with its environment, such as through a measurement, the conditional wavefunction of the system evolves in a different way. The evolution of the universal wavefunction can become such that the wavefunction of the system appears to be in a superposition of distinct states. But if the environment has recorded the results of the experiment, then using the actual Bohmian configuration of the environment to condition on, the conditional wavefunction collapses to just one alternative, the one corresponding with the measurement results. Collapse of the universal wavefunction never occurs in de Broglie–Bohm theory. Its entire evolution is governed by Schrödinger's equation, and the particles' evolutions are governed by the guiding equation. Collapse only occurs in a phenomenological way for systems that seem to follow their own Schrödinger's equation. As this is an effective description of the system, it is a matter of choice as to what to define the experimental system to include, and this will affect when "collapse" occurs. Operators as observables In the standard quantum formalism, measuring observables is generally thought of as measuring operators on the Hilbert space. For example, measuring position is considered to be a measurement of the position operator. This relationship between physical measurements and Hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. The de Broglie–Bohm theory, by contrast, requires no such measurement axioms (and measurement as such is not a dynamically distinct or special sub-category of physical processes in the theory). In particular, the usual operators-as-observables formalism is, for de Broglie–Bohm theory, a theorem. A major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles; they are (as in the case of spin discussed above) measurements of the wavefunction. In the history of de Broglie–Bohm theory, the proponents have often had to deal with claims that this theory is impossible. Such arguments are generally based on inappropriate analysis of operators as observables. If one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. De Broglie–Bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction. As such, it only has a definite outcome once the experimental apparatus is chosen. Once that is taken into account, the impossibility theorems become irrelevant. There are also objections to this theory based on what it says about particular situations usually involving eigenstates of an operator. For example, the ground state of hydrogen is a real wavefunction. According to the guiding equation, this means that the electron is at rest when in this state. 
Nevertheless, it is distributed according to $|\psi|^2$, and no contradiction with experimental results can be detected. Treating operators as observables leads many to believe that many operators are equivalent. De Broglie–Bohm theory, from this perspective, chooses the position observable as a favored observable rather than, say, the momentum observable. Again, the link to the position observable is a consequence of the dynamics. The motivation for de Broglie–Bohm theory is to describe a system of particles. This implies that the goal of the theory is to describe the positions of those particles at all times. Other observables do not have this compelling ontological status. Having definite positions explains having definite results such as flashes on a detector screen. Other observables would not lead to that conclusion, but there need not be any problem in defining a mathematical theory for other observables; see Hyman et al. for an exploration of the fact that a probability density and probability current can be defined for any set of commuting operators. Hidden variables De Broglie–Bohm theory is often referred to as a "hidden-variable" theory. Bohm used this description in his original papers on the subject, writing: "From the point of view of the usual interpretation, these additional elements or parameters [permitting a detailed causal and continuous description of all processes] could be called 'hidden' variables." Bohm and Hiley later stated that they found Bohm's choice of the term "hidden variables" to be too restrictive. In particular, they argued that a particle is not actually hidden but rather "is what is most directly manifested in an observation [though] its properties cannot be observed with arbitrary precision (within the limits set by uncertainty principle)". However, others nevertheless treat the term "hidden variable" as a suitable description. Generalized particle trajectories can be extrapolated from numerous weak measurements on an ensemble of equally prepared systems, and such trajectories coincide with the de Broglie–Bohm trajectories. In particular, an experiment with two entangled photons, in which a set of Bohmian trajectories for one of the photons was determined using weak measurements and postselection, can be understood in terms of a nonlocal connection between that photon's trajectory and the other photon's polarization. However, not only the de Broglie–Bohm interpretation, but also many other interpretations of quantum mechanics that do not include such trajectories are consistent with such experimental evidence. Different predictions A specialized version of the double slit experiment has been devised to test characteristics of the trajectory predictions. Experimental realization of this concept disagreed with the Bohm predictions where they differed from standard quantum mechanics. These conclusions have been the subject of debate. Heisenberg's uncertainty principle Heisenberg's uncertainty principle states that when two complementary measurements are made, there is a limit to the product of their accuracy. As an example, if one measures the position with an accuracy of $\Delta x$ and the momentum with an accuracy of $\Delta p$, then $\Delta x\,\Delta p \geq \frac{\hbar}{2}$. In de Broglie–Bohm theory, there is always a matter of fact about the position and momentum of a particle. Each particle has a well-defined trajectory, as well as a wavefunction. Observers have limited knowledge as to what this trajectory is (and thus of the position and momentum).
It is the lack of knowledge of the particle's trajectory that accounts for the uncertainty relation. What one can know about a particle at any given time is described by the wavefunction. Since the uncertainty relation can be derived from the wavefunction in other interpretations of quantum mechanics, it can be likewise derived (in the epistemic sense mentioned above) on the de Broglie–Bohm theory. To put the statement differently, the particles' positions are only known statistically. As in classical mechanics, successive observations of the particles' positions refine the experimenter's knowledge of the particles' initial conditions. Thus, with succeeding observations, the initial conditions become more and more restricted. This formalism is consistent with the normal use of the Schrödinger equation. For the derivation of the uncertainty relation, see Heisenberg uncertainty principle, noting that this article describes the principle from the viewpoint of the Copenhagen interpretation. Quantum entanglement, Einstein–Podolsky–Rosen paradox, Bell's theorem, and nonlocality De Broglie–Bohm theory highlighted the issue of nonlocality: it inspired John Stewart Bell to prove his now-famous theorem, which in turn led to the Bell test experiments. In the Einstein–Podolsky–Rosen paradox, the authors describe a thought experiment that one could perform on a pair of particles that have interacted, the results of which they interpreted as indicating that quantum mechanics is an incomplete theory. Decades later John Bell proved Bell's theorem (see p. 14 in Bell), in which he showed that, if they are to agree with the empirical predictions of quantum mechanics, all such "hidden-variable" completions of quantum mechanics must either be nonlocal (as the Bohm interpretation is) or give up the assumption that experiments produce unique results (see counterfactual definiteness and many-worlds interpretation). In particular, Bell proved that any local theory with unique results must make empirical predictions satisfying a statistical constraint called "Bell's inequality". Alain Aspect performed a series of Bell test experiments that test Bell's inequality using an EPR-type setup. Aspect's results show experimentally that Bell's inequality is in fact violated, meaning that the relevant quantum-mechanical predictions are correct. In these Bell test experiments, entangled pairs of particles are created; the particles are separated, traveling to remote measuring apparatus. The orientation of the measuring apparatus can be changed while the particles are in flight, demonstrating the apparent nonlocality of the effect. The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal. It is often criticized or rejected based on this; Bell's attitude was: "It is a merit of the de Broglie–Bohm version to bring this [nonlocality] out so explicitly that it cannot be ignored." The de Broglie–Bohm theory describes the physics in the Bell test experiments as follows: to understand the evolution of the particles, we need to set up a wave equation for both particles; the orientation of the apparatus affects the wavefunction. The particles in the experiment follow the guidance of the wavefunction. It is the wavefunction that carries the faster-than-light effect of changing the orientation of the apparatus. 
Maudlin provides an analysis of exactly what kind of nonlocality is present and how it is compatible with relativity. Bell has shown that the nonlocality does not allow superluminal communication. Maudlin has shown this in greater detail. Classical limit Bohm's formulation of de Broglie–Bohm theory in a classical-looking version has the merits that the emergence of classical behavior seems to follow immediately for any situation in which the quantum potential is negligible, as noted by Bohm in 1952. Modern methods of decoherence are relevant to an analysis of this limit. See Allori et al. for steps towards a rigorous analysis. Quantum trajectory method Work by Robert E. Wyatt in the early 2000s attempted to use the Bohm "particles" as an adaptive mesh that follows the actual trajectory of a quantum state in time and space. In the "quantum trajectory" method, one samples the quantum wavefunction with a mesh of quadrature points. One then evolves the quadrature points in time according to the Bohm equations of motion. At each time step, one then re-synthesizes the wavefunction from the points, recomputes the quantum forces, and continues the calculation. (QuickTime movies of this for H + H2 reactive scattering can be found on the Wyatt group web-site at UT Austin.) This approach has been adapted, extended, and used by a number of researchers in the chemical physics community as a way to compute semi-classical and quasi-classical molecular dynamics. A 2007 issue of The Journal of Physical Chemistry A was dedicated to Prof. Wyatt and his work on "computational Bohmian dynamics". Eric R. Bittner's group at the University of Houston has advanced a statistical variant of this approach that uses Bayesian sampling technique to sample the quantum density and compute the quantum potential on a structureless mesh of points. This technique was recently used to estimate quantum effects in the heat capacity of small clusters Nen for n ≈ 100. There remain difficulties using the Bohmian approach, mostly associated with the formation of singularities in the quantum potential due to nodes in the quantum wavefunction. In general, nodes forming due to interference effects lead to the case where This results in an infinite force on the sample particles forcing them to move away from the node and often crossing the path of other sample points (which violates single-valuedness). Various schemes have been developed to overcome this; however, no general solution has yet emerged. These methods, as does Bohm's Hamilton–Jacobi formulation, do not apply to situations in which the full dynamics of spin need to be taken into account. The properties of trajectories in the de Broglie–Bohm theory differ significantly from the Moyal quantum trajectories as well as the quantum trajectories from the unraveling of an open quantum system. Similarities with the many-worlds interpretation Kim Joris Boström has proposed a non-relativistic quantum mechanical theory that combines elements of de Broglie-Bohm mechanics and Everett's many-worlds. In particular, the unreal many-worlds interpretation of Hawking and Weinberg is similar to the Bohmian concept of unreal empty branch worlds: Many authors have expressed critical views of de Broglie–Bohm theory by comparing it to Everett's many-worlds approach. Many (but not all) proponents of de Broglie–Bohm theory (such as Bohm and Bell) interpret the universal wavefunction as physically real. 
According to some supporters of Everett's theory, if the (never collapsing) wavefunction is taken to be physically real, then it is natural to interpret the theory as having the same many worlds as Everett's theory. In the Everettian view the role of the Bohmian particle is to act as a "pointer", tagging, or selecting, just one branch of the universal wavefunction (the assumption that this branch indicates which wave packet determines the observed result of a given experiment is called the "result assumption"); the other branches are designated "empty" and implicitly assumed by Bohm to be devoid of conscious observers. H. Dieter Zeh comments on these "empty" branches: David Deutsch has expressed the same point more "acerbically": This conclusion has been challenged by Detlef Dürr and Justin Lazarovici: The Bohmian, of course, cannot accept this argument. For her, it is decidedly the particle configuration in three-dimensional space and not the wave function on the abstract configuration space that constitutes a world (or rather, the world). Instead, she will accuse the Everettian of not having local beables (in Bell's sense) in her theory, that is, the ontological variables that refer to localized entities in three-dimensional space or four-dimensional spacetime. The many worlds of her theory thus merely appear as a grotesque consequence of this omission. Occam's-razor criticism Both Hugh Everett III and Bohm treated the wavefunction as a physically real field. Everett's many-worlds interpretation is an attempt to demonstrate that the wavefunction alone is sufficient to account for all our observations. When we see the particle detectors flash or hear the click of a Geiger counter, Everett's theory interprets this as our wavefunction responding to changes in the detector's wavefunction, which is responding in turn to the passage of another wavefunction (which we think of as a "particle", but is actually just another wave packet). No particle (in the Bohm sense of having a defined position and velocity) exists according to that theory. For this reason Everett sometimes referred to his own many-worlds approach as the "pure wave theory". Of Bohm's 1952 approach, Everett said: In the Everettian view, then, the Bohm particles are superfluous entities, similar to, and equally as unnecessary as, for example, the luminiferous ether, which was found to be unnecessary in special relativity. This argument is sometimes called the "redundancy argument", since the superfluous particles are redundant in the sense of Occam's razor. According to Brown & Wallace, the de Broglie–Bohm particles play no role in the solution of the measurement problem. For these authors, the "result assumption" (see above) is inconsistent with the view that there is no measurement problem in the predictable outcome (i.e. single-outcome) case. They also say that a standard tacit assumption of de Broglie–Bohm theory (that an observer becomes aware of configurations of particles of ordinary objects by means of correlations between such configurations and the configuration of the particles in the observer's brain) is unreasonable. This conclusion has been challenged by Valentini, who argues that the entirety of such objections arises from a failure to interpret de Broglie–Bohm theory on its own terms. According to Peter R. Holland, in a wider Hamiltonian framework, theories can be formulated in which particles do act back on the wave function. Derivations De Broglie–Bohm theory has been derived many times and in many ways. 
Below are six derivations, all of which are very different and lead to different ways of understanding and extending this theory. Schrödinger's equation can be derived by using Einstein's light quanta hypothesis: $E = \hbar\omega$, and de Broglie's hypothesis: $\mathbf{p} = \hbar\mathbf{k}$. The guiding equation can be derived in a similar fashion. We assume a plane wave: $\psi(\mathbf{x}, t) = A e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}$. Notice that $\frac{\nabla\psi}{\psi} = i\mathbf{k}$. Assuming that $\mathbf{p} = m\mathbf{v}$ for the particle's actual velocity, we have that $\mathbf{v} = \frac{\hbar}{m}\operatorname{Im}\left(\frac{\nabla\psi}{\psi}\right)$. Thus, we have the guiding equation. Notice that this derivation does not use Schrödinger's equation. Preserving the density under the time evolution is another method of derivation. This is the method that Bell cites. It is this method that generalizes to many possible alternative theories. The starting point is the continuity equation for the density $\rho = |\psi|^2$. This equation describes a probability flow along a current. We take the velocity field associated with this current as the velocity field whose integral curves yield the motion of the particle. A method applicable for particles without spin is to do a polar decomposition of the wavefunction and transform Schrödinger's equation into two coupled equations: the continuity equation from above and the Hamilton–Jacobi equation. This is the method used by Bohm in 1952. The decomposition and equations are as follows: Decomposition: $\psi(\mathbf{x}, t) = R(\mathbf{x}, t)\, e^{i S(\mathbf{x}, t)/\hbar}$. Note that $R^2(\mathbf{x}, t)$ corresponds to the probability density $\rho(\mathbf{x}, t) = |\psi(\mathbf{x}, t)|^2$. Continuity equation: $\frac{\partial \rho}{\partial t} + \nabla\cdot\left(\rho\,\frac{\nabla S}{m}\right) = 0$. Hamilton–Jacobi equation: $\frac{\partial S}{\partial t} + \frac{|\nabla S|^2}{2m} + V - \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} = 0$. The Hamilton–Jacobi equation is the equation derived from a Newtonian system with potential $V - \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$ and velocity field $\frac{\nabla S}{m}$. The potential $V$ is the classical potential that appears in Schrödinger's equation, and the other term involving $R$, namely $Q = -\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}$, is the quantum potential, terminology introduced by Bohm. This leads to viewing the quantum theory as particles moving under the classical force modified by a quantum force. However, unlike standard Newtonian mechanics, the initial velocity field is already specified by $\frac{\nabla S}{m}$, which is a symptom of this being a first-order theory, not a second-order theory. A fourth derivation was given by Dürr et al. In their derivation, they derive the velocity field by demanding the appropriate transformation properties given by the various symmetries that Schrödinger's equation satisfies, once the wavefunction is suitably transformed. The guiding equation is what emerges from that analysis. A fifth derivation, given by Dürr et al., is appropriate for generalization to quantum field theory and the Dirac equation. The idea is that a velocity field can also be understood as a first-order differential operator acting on functions. Thus, if we know how it acts on functions, we know what it is. Then given the Hamiltonian operator $H$, the equation to satisfy for all functions $f$ (with associated multiplication operator $\hat{f}$) is $(v \cdot \nabla f)(q) = \operatorname{Re}\,\frac{\left(\psi, \frac{i}{\hbar}[H, \hat{f}]\,\psi\right)}{(\psi, \psi)}(q)$, where $(\cdot, \cdot)$ is the local Hermitian inner product on the value space of the wavefunction. This formulation allows for stochastic theories such as the creation and annihilation of particles. A further derivation has been given by Peter R. Holland, on which he bases his quantum-physics textbook The Quantum Theory of Motion. It is based on three basic postulates and an additional fourth postulate that links the wavefunction to measurement probabilities: A physical system consists in a spatiotemporally propagating wave and a point particle guided by it. The wave is described mathematically by a solution to Schrödinger's wave equation.
The particle motion is described by a solution to in dependence on initial condition , with the phase of .The fourth postulate is subsidiary yet consistent with the first three: The probability to find the particle in the differential volume at time t equals . History The theory was historically developed in the 1920s by de Broglie, who, in 1927, was persuaded to abandon it in favour of the then-mainstream Copenhagen interpretation. David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot-wave theory in 1952. Bohm's suggestions were not then widely received, partly due to reasons unrelated to their content, such as Bohm's youthful communist affiliations. The de Broglie–Bohm theory was widely deemed unacceptable by mainstream theorists, mostly because of its explicit non-locality. On the theory, John Stewart Bell, author of the 1964 Bell's theorem wrote in 1982: Since the 1990s, there has been renewed interest in formulating extensions to de Broglie–Bohm theory, attempting to reconcile it with special relativity and quantum field theory, besides other features such as spin or curved spatial geometries. De Broglie–Bohm theory has a history of different formulations and names. In this section, each stage is given a name and a main reference. Pilot-wave theory Louis de Broglie presented his pilot wave theory at the 1927 Solvay Conference, after close collaboration with Schrödinger, who developed his wave equation for de Broglie's theory. At the end of the presentation, Wolfgang Pauli pointed out that it was not compatible with a semi-classical technique Fermi had previously adopted in the case of inelastic scattering. Contrary to a popular legend, de Broglie actually gave the correct rebuttal that the particular technique could not be generalized for Pauli's purpose, although the audience might have been lost in the technical details and de Broglie's mild manner left the impression that Pauli's objection was valid. He was eventually persuaded to abandon this theory nonetheless because he was "discouraged by criticisms which [it] roused". De Broglie's theory already applies to multiple spin-less particles, but lacks an adequate theory of measurement as no one understood quantum decoherence at the time. An analysis of de Broglie's presentation is given in Bacciagaluppi et al. Also, in 1932 John von Neumann published a no hidden variables proof in his book Mathematical Foundations of Quantum Mechanics, that was widely believed to prove that all hidden-variable theories are impossible. This sealed the fate of de Broglie's theory for the next two decades. In 1926, Erwin Madelung had developed a hydrodynamic version of Schrödinger's equation, which is incorrectly considered as a basis for the density current derivation of the de Broglie–Bohm theory. The Madelung equations, being quantum analog of Euler equations of fluid dynamics, differ philosophically from the de Broglie–Bohm mechanics and are the basis of the stochastic interpretation of quantum mechanics. Peter R. Holland has pointed out that, earlier in 1927, Einstein had actually submitted a preprint with a similar proposal but, not convinced, had withdrawn it before publication. 
According to Holland, failure to appreciate key points of the de Broglie–Bohm theory has led to confusion, the key point being "that the trajectories of a many-body quantum system are correlated not because the particles exert a direct force on one another (à la Coulomb) but because all are acted upon by an entity – mathematically described by the wavefunction or functions of it – that lies beyond them". This entity is the quantum potential. After publishing his popular textbook Quantum Theory that adhered entirely to the Copenhagen orthodoxy, Bohm was persuaded by Einstein to take a critical look at von Neumann's no hidden variables proof. The result was 'A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I and II' [Bohm 1952]. It was an independent origination of the pilot wave theory, and extended it to incorporate a consistent theory of measurement, and to address a criticism of Pauli that de Broglie did not properly respond to; it is taken to be deterministic (though Bohm hinted in the original papers that there should be disturbances to this, in the way Brownian motion disturbs Newtonian mechanics). This stage is known as the de Broglie–Bohm Theory in Bell's work [Bell 1987] and is the basis for 'The Quantum Theory of Motion' [Holland 1993]. This stage applies to multiple particles, and is deterministic. The de Broglie–Bohm theory is an example of a hidden-variables theory. Bohm originally hoped that hidden variables could provide a local, causal, objective description that would resolve or eliminate many of the paradoxes of quantum mechanics, such as Schrödinger's cat, the measurement problem and the collapse of the wavefunction. However, Bell's theorem complicates this hope, as it demonstrates that there can be no local hidden-variable theory that is compatible with the predictions of quantum mechanics. The Bohmian interpretation is causal but not local. Bohm's paper was largely ignored or panned by other physicists. Albert Einstein, who had suggested that Bohm search for a realist alternative to the prevailing Copenhagen approach, did not consider Bohm's interpretation to be a satisfactory answer to the quantum nonlocality question, calling it "too cheap", while Werner Heisenberg considered it a "superfluous 'ideological superstructure' ". Wolfgang Pauli, who had been unconvinced by de Broglie in 1927, conceded to Bohm as follows: I just received your long letter of 20th November, and I also have studied more thoroughly the details of your paper. I do not see any longer the possibility of any logical contradiction as long as your results agree completely with those of the usual wave mechanics and as long as no means is given to measure the values of your hidden parameters both in the measuring apparatus and in the observe [sic] system. As far as the whole matter stands now, your 'extra wave-mechanical predictions' are still a check, which cannot be cashed. He subsequently described Bohm's theory as "artificial metaphysics". According to physicist Max Dresden, when Bohm's theory was presented at the Institute for Advanced Study in Princeton, many of the objections were ad hominem, focusing on Bohm's sympathy with communists as exemplified by his refusal to give testimony to the House Un-American Activities Committee. In 1979, Chris Philippidis, Chris Dewdney and Basil Hiley were the first to perform numeric computations on the basis of the quantum potential to deduce ensembles of particle trajectories. 
Their work renewed the interests of physicists in the Bohm interpretation of quantum physics. Eventually John Bell began to defend the theory. In "Speakable and Unspeakable in Quantum Mechanics" [Bell 1987], several of the papers refer to hidden-variables theories (which include Bohm's). The trajectories of the Bohm model that would result for particular experimental arrangements were termed "surreal" by some. Still in 2016, mathematical physicist Sheldon Goldstein said of Bohm's theory: "There was a time when you couldn't even talk about it because it was heretical. It probably still is the kiss of death for a physics career to be actually working on Bohm, but maybe that's changing." Bohmian mechanics Bohmian mechanics is the same theory, but with an emphasis on the notion of current flow, which is determined on the basis of the quantum equilibrium hypothesis that the probability follows the Born rule. The term "Bohmian mechanics" is also often used to include most of the further extensions past the spin-less version of Bohm. While de Broglie–Bohm theory has Lagrangians and Hamilton-Jacobi equations as a primary focus and backdrop, with the icon of the quantum potential, Bohmian mechanics considers the continuity equation as primary and has the guiding equation as its icon. They are mathematically equivalent in so far as the Hamilton-Jacobi formulation applies, i.e., spin-less particles. All of non-relativistic quantum mechanics can be fully accounted for in this theory. Recent studies have used this formalism to compute the evolution of many-body quantum systems, with a considerable increase in speed as compared to other quantum-based methods. Causal interpretation and ontological interpretation Bohm developed his original ideas, calling them the Causal Interpretation. Later he felt that causal sounded too much like deterministic and preferred to call his theory the Ontological Interpretation. The main reference is "The Undivided Universe" (Bohm, Hiley 1993). This stage covers work by Bohm and in collaboration with Jean-Pierre Vigier and Basil Hiley. Bohm is clear that this theory is non-deterministic (the work with Hiley includes a stochastic theory). As such, this theory is not strictly speaking a formulation of de Broglie–Bohm theory, but it deserves mention here because the term "Bohm Interpretation" is ambiguous between this theory and de Broglie–Bohm theory. In 1996 philosopher of science Arthur Fine gave an in-depth analysis of possible interpretations of Bohm's model of 1952. William Simpson has suggested a hylomorphic interpretation of Bohmian mechanics, in which the cosmos is an Aristotelian substance composed of material particles and a substantial form. The wave function is assigned a dispositional role in choreographing the trajectories of the particles. Hydrodynamic quantum analogs Experiments on hydrodynamical analogs of quantum mechanics beginning with the work of Couder and Fort (2006) have purported to show that macroscopic classical pilot-waves can exhibit characteristics previously thought to be restricted to the quantum realm. Hydrodynamic pilot-wave analogs have been claimed to duplicate the double slit experiment, tunneling, quantized orbits, and numerous other quantum phenomena which have led to a resurgence in interest in pilot wave theories. The analogs have been compared to the Faraday wave. These results have been disputed: experiments fail to reproduce aspects of the double-slit experiments. 
High-precision measurements in the tunneling case point to a different origin of the unpredictable crossing: rather than initial position uncertainty or environmental noise, interactions at the barrier seem to be involved. Another classical analog has been reported in surface gravity waves. Surrealistic trajectories In 1992, Englert, Scully, Süssmann, and Walther proposed experiments that would show particles taking paths that differ from the Bohm trajectories. They described the Bohm trajectories as "surrealistic"; their proposal was later referred to as ESSW after the last names of the authors. In 2016, Mahler et al. verified the ESSW predictions. However, they propose that the surrealistic effect is a consequence of the nonlocality inherent in Bohm's theory.
Physical sciences
Quantum mechanics
Physics
54738
https://en.wikipedia.org/wiki/Interpretations%20of%20quantum%20mechanics
Interpretations of quantum mechanics
An interpretation of quantum mechanics is an attempt to explain how the mathematical theory of quantum mechanics might correspond to experienced reality. Quantum mechanics has held up to rigorous and extremely precise tests in an extraordinarily broad range of experiments. However, there exist a number of contending schools of thought over their interpretation. These views on interpretation differ on such fundamental questions as whether quantum mechanics is deterministic or stochastic, local or non-local, which elements of quantum mechanics can be considered real, and what the nature of measurement is, among other matters. While some variation of the Copenhagen interpretation is commonly presented in textbooks, many other interpretations have been developed. Despite nearly a century of debate and experiment, no consensus has been reached among physicists and philosophers of physics concerning which interpretation best "represents" reality. History The definition of quantum theorists' terms, such as wave function and matrix mechanics, progressed through many stages. For instance, Erwin Schrödinger originally viewed the electron's wave function as its charge density smeared across space, but Max Born reinterpreted the absolute square value of the wave function as the electron's probability density distributed across space; the Born rule, as it is now called, matched experiment, whereas Schrödinger's charge density view did not. The views of several early pioneers of quantum mechanics, such as Niels Bohr and Werner Heisenberg, are often grouped together as the "Copenhagen interpretation", though physicists and historians of physics have argued that this terminology obscures differences between the views so designated. Copenhagen-type ideas were never universally embraced, and challenges to a perceived Copenhagen orthodoxy gained increasing attention in the 1950s with the pilot-wave interpretation of David Bohm and the many-worlds interpretation of Hugh Everett III. The physicist N. David Mermin once quipped, "New interpretations appear every year. None ever disappear." As a rough guide to development of the mainstream view during the 1990s and 2000s, a "snapshot" of opinions was collected in a poll by Schlosshauer et al. at the "Quantum Physics and the Nature of Reality" conference of July 2011. The authors reference a similarly informal poll carried out by Max Tegmark at the "Fundamental Problems in Quantum Theory" conference in August 1997. The main conclusion of the authors is that "the Copenhagen interpretation still reigns supreme", receiving the most votes in their poll (42%), besides the rise to mainstream notability of the many-worlds interpretations: "The Copenhagen interpretation still reigns supreme here, especially if we lump it together with intellectual offsprings such as information-based interpretations and the quantum Bayesian interpretation. In Tegmark's poll, the Everett interpretation received 17% of the vote, which is similar to the number of votes (18%) in our poll." Some concepts originating from studies of interpretations have found more practical application in quantum information science. 
Nature More or less, all interpretations of quantum mechanics share two qualities: They interpret a formalism—a set of equations and principles to generate predictions via input of initial conditions They interpret a phenomenology—a set of observations, including those obtained by empirical research and those obtained informally, such as humans' experience of an unequivocal world Two qualities vary among interpretations: Epistemology—claims about the possibility, scope, and means toward relevant knowledge of the world Ontology—claims about what things, such as categories and entities, exist in the world In the philosophy of science, the distinction between knowledge and reality is termed epistemic versus ontic. A general law can be seen as a generalisation of the regularity of outcomes (epistemic), whereas a causal mechanism may be thought of as determining or regulating outcomes (ontic). A phenomenon can be interpreted either as ontic or as epistemic. For instance, indeterminism may be attributed to limitations of human observation and perception (epistemic), or may be explained as intrinsic physical randomness (ontic). Confusing the epistemic with the ontic—if for example one were to presume that a general law actually "governs" outcomes, and that the statement of a regularity has the role of a causal mechanism—is a category mistake. In a broad sense, scientific theory can be viewed as offering an approximately true description or explanation of the natural world (scientific realism) or as providing nothing more than an account of our knowledge of the natural world (antirealism). A realist stance sees the epistemic as giving us a window onto the ontic, whereas an antirealist stance sees the epistemic as providing only a logically consistent picture of the ontic. In the first half of the 20th Century, a key antirealist philosophy was logical positivism, which sought to exclude unobservable aspects of reality from scientific theory. Since the 1950s antirealism has adopted a more modest approach, often in the form of instrumentalism, permitting talk of unobservables but ultimately discarding the very question of realism and positing scientific theory as a tool to help us make predictions, not to attain a deep metaphysical understanding of the world. The instrumentalist view is typified by David Mermin's famous slogan: "Shut up and calculate" (which is often misattributed to Richard Feynman). Interpretive challenges Abstract, mathematical nature of quantum field theories: the mathematical structure of quantum mechanics is abstract and does not result in a single, clear interpretation of its quantities. Apparent indeterministic and irreversible processes: in classical field theory, a physical property at a given location in the field is readily derived. In most mathematical formulations of quantum mechanics, measurement (understood as an interaction with a given state) has a special role in the theory, as it is the sole process that can cause a nonunitary, irreversible evolution of the state. Role of the observer in determining outcomes. Copenhagen-type interpretations imply that the wavefunction is a calculational tool, and represents reality only immediately after a measurement performed by an observer. Everettian interpretations grant that all possible outcomes are real, and that measurement-type interactions cause a branching process in which each possibility is realised. 
Classically unexpected correlations between remote objects: entangled quantum systems, as illustrated in the EPR paradox, obey statistics that seem to violate principles of local causality by action at a distance. Complementarity of proffered descriptions: complementarity holds that no set of classical physical concepts can simultaneously refer to all properties of a quantum system. For instance, wave description A and particulate description B can each describe quantum system S, but not simultaneously. This implies the composition of physical properties of S does not obey the rules of classical propositional logic when using propositional connectives (see "Quantum logic"). Like contextuality, the "origin of complementarity lies in the non-commutativity of operators" that describe quantum objects. Rapidly rising intricacy, far exceeding humans' present calculational capacity, as a system's size increases: since the state space of a quantum system is exponential in the number of subsystems, it is difficult to derive classical approximations. Contextual behaviour of systems locally: Quantum contextuality demonstrates that classical intuitions, in which properties of a system hold definite values independent of the manner of their measurement, fail even for local systems. Also, physical principles such as Leibniz's Principle of the identity of indiscernibles no longer apply in the quantum domain, signaling that most classical intuitions may be incorrect about the quantum world. Influential interpretations Copenhagen interpretation The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg. It is one of the oldest attitudes towards quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught. There is no definitive historical statement of what is the Copenhagen interpretation, and there were in particular fundamental disagreements between the views of Bohr and Heisenberg. For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed, while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, which relies on an "irreversible" or effectively irreversible process that imparts the classical behavior of "observation" or "measurement". Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states certain pairs of complementary properties cannot all be observed or measured simultaneously. Moreover, properties only result from the act of "observing" or "measuring"; the theory avoids assuming definite values from unperformed experiments. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' mental arbitrariness. The statistical interpretation of wavefunctions due to Max Born differs sharply from Schrödinger's original intent, which was to have a theory with continuous time evolution and in which wavefunctions directly described physical reality. 
Many worlds The many-worlds interpretation is an interpretation of quantum mechanics in which a universal wavefunction obeys the same deterministic, reversible laws at all times; in particular there is no (indeterministic and irreversible) wavefunction collapse associated with measurement. The phenomena associated with measurement are claimed to be explained by decoherence, which occurs when states interact with the environment. More precisely, the parts of the wavefunction describing observers become increasingly entangled with the parts of the wavefunction describing their experiments. Although all possible outcomes of experiments continue to lie in the wavefunction's support, the times at which they become correlated with observers effectively "split" the universe into mutually unobservable alternate histories. Quantum information theories Quantum informational approaches have attracted growing support. They subdivide into two kinds. Information ontologies, such as J. A. Wheeler's "it from bit". These approaches have been described as a revival of immaterialism. Interpretations where quantum mechanics is said to describe an observer's knowledge of the world, rather than the world itself. This approach has some similarity with Bohr's thinking. Collapse (also known as reduction) is often interpreted as an observer acquiring information from a measurement, rather than as an objective event. These approaches have been appraised as similar to instrumentalism. James Hartle writes, The state is not an objective property of an individual system but is that information, obtained from a knowledge of how a system was prepared, which can be used for making predictions about future measurements. ... A quantum mechanical state being a summary of the observer's information about an individual physical system changes both by dynamical laws, and whenever the observer acquires new information about the system through the process of measurement. The existence of two laws for the evolution of the state vector ... becomes problematical only if it is believed that the state vector is an objective property of the system ... The "reduction of the wavepacket" does take place in the consciousness of the observer, not because of any unique physical process which takes place there, but only because the state is a construct of the observer and not an objective property of the physical system. Relational quantum mechanics The essential idea behind relational quantum mechanics, following the precedent of special relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, relational quantum mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by relational quantum mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. 
Thus the physical content of the theory has to do not with objects themselves, but with the relations between them. QBism QBism, which originally stood for "quantum Bayesianism", is an interpretation of quantum mechanics that takes an agent's actions and experiences as the central concerns of the theory. This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement. According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism. The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it. Consistent histories The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability. It is claimed to be consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle). Ensemble interpretation The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. That is, it claims to make the fewest assumptions associated with the standard mathematics. It takes the statistical interpretation of Born to the fullest extent. The interpretation states that the wave function does not apply to an individual system, for example a single particle, but is an abstract statistical quantity that only applies to an ensemble (a vast multitude) of similarly prepared systems or particles. Einstein himself advocated such a statistical reading of the wave function. The most prominent current advocate of the ensemble interpretation is Leslie E. Ballentine, professor at Simon Fraser University and author of the textbook Quantum Mechanics, A Modern Development. De Broglie–Bohm theory The de Broglie–Bohm theory of quantum mechanics (also known as the pilot wave theory) is a theory proposed by Louis de Broglie and later extended by David Bohm to include measurements. Particles, which always have positions, are guided by the wavefunction. The wavefunction evolves according to the Schrödinger wave equation, and the wavefunction never collapses. The theory takes place in a single spacetime, is non-local, and is deterministic. The simultaneous determination of a particle's position and velocity is subject to the usual uncertainty principle constraint.
The theory is considered to be a hidden-variable theory, and by embracing non-locality it remains consistent with Bell's theorem, which rules out only local hidden-variable theories. The measurement problem is resolved, since the particles have definite positions at all times. Collapse is explained as phenomenological. Transactional interpretation The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory. It describes the collapse of the wave function as resulting from a time-symmetric transaction between a possibility wave from the source to the receiver (the wave function) and a possibility wave from the receiver to source (the complex conjugate of the wave function). This interpretation of quantum mechanics is unique in that it not only views the wave function as a real entity, but also regards the complex conjugate of the wave function, which appears in the Born rule for calculating the expected value for an observable, as real. Von Neumann–Wigner interpretation In his treatise The Mathematical Foundations of Quantum Mechanics, John von Neumann deeply analyzed the so-called measurement problem. He concluded that the entire physical universe could be made subject to the Schrödinger equation (the universal wave function). He also described how measurement could cause a collapse of the wave function. This point of view was prominently expanded on by Eugene Wigner, who argued that human experimenter consciousness (or maybe even dog consciousness) was critical for the collapse, but he later abandoned this interpretation. However, consciousness remains a mystery. The origin and place in nature of consciousness are not well understood. Some specific proposals for consciousness-caused wave-function collapse have been shown to be unfalsifiable. Quantum logic Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical Boolean logic with the facts related to measurement and observation in quantum mechanics. Modal interpretations of quantum theory Modal interpretations of quantum mechanics were first conceived of in 1972 by Bas van Fraassen, in his paper "A formal approach to the philosophy of science". Van Fraassen introduced a distinction between a dynamical state, which describes what might be true about a system and which always evolves according to the Schrödinger equation, and a value state, which indicates what is actually true about a system at a given time. The term "modal interpretation" is now used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions, including proposals by Kochen, Dieks, Clifton, Dickson, and Bub. According to Michel Bitbol, Schrödinger's views on how to interpret quantum mechanics progressed through as many as four stages, ending with a non-collapse view that in some respects resembles the interpretations of Everett and van Fraassen. Because Schrödinger subscribed to a kind of post-Machian neutral monism, in which "matter" and "mind" are only different aspects or arrangements of the same common elements, treating the wavefunction as ontic and treating it as epistemic became interchangeable.
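The non-commutativity invoked above for both complementarity and quantum logic can be exhibited directly. In the following sketch (NumPy; the choice of the Pauli Z and X observables for a single qubit is only an illustrative assumption), the two operators fail to commute, so there is no common eigenbasis in which both of the corresponding properties are simultaneously definite:

import numpy as np

Z = np.array([[1, 0], [0, -1]])   # "which basis state" observable
X = np.array([[0, 1], [1, 0]])    # a complementary observable

commutator = Z @ X - X @ Z
print(commutator)                   # nonzero matrix: Z and X do not commute
print(np.allclose(commutator, 0))   # -> False

It is this failure of commutativity, not any particular interpretation, that forces the departure from classical Boolean logic that Birkhoff and von Neumann formalised.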
Time-symmetric theories Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921. Several theories have been proposed that modify the equations of quantum mechanics to be symmetric with respect to time reversal. (See Wheeler–Feynman time-symmetric theory.) This creates retrocausality: events in the future can affect ones in the past, exactly as events in the past can affect ones in the future. In these theories, a single measurement cannot fully determine the state of a system (making them a type of hidden-variables theory), but given two measurements performed at different times, it is possible to calculate the exact state of the system at all intermediate times. The collapse of the wavefunction is therefore not a physical change to the system, just a change in our knowledge of it due to the second measurement. Similarly, they explain entanglement as not being a true physical state but just an illusion created by ignoring retrocausality. The point where two particles appear to "become entangled" is simply a point where each particle is being influenced by events that occur to the other particle in the future. Not all advocates of time-symmetric causality favour modifying the unitary dynamics of standard quantum mechanics. Thus a leading exponent of the two-state vector formalism, Lev Vaidman, states that the two-state vector formalism dovetails well with Hugh Everett's many-worlds interpretation. Other interpretations As well as the mainstream interpretations discussed above, a number of other interpretations have been proposed that have not made a significant scientific impact for whatever reason. These range from proposals by mainstream physicists to the more occult ideas of quantum mysticism. Related concepts Some ideas are discussed in the context of interpreting quantum mechanics but are not necessarily regarded as interpretations themselves. Quantum Darwinism Quantum Darwinism is a theory meant to explain the emergence of the classical world from the quantum world as due to a process of Darwinian natural selection induced by the environment interacting with the quantum system; where the many possible quantum states are selected against in favor of a stable pointer state. It was proposed in 2003 by Wojciech Zurek and a group of collaborators including Ollivier, Poulin, Paz and Blume-Kohout. The development of the theory is due to the integration of a number of Zurek's research topics pursued over the course of twenty-five years including pointer states, einselection and decoherence. Objective-collapse theories Objective-collapse theories differ from the Copenhagen interpretation by regarding both the wave function and the process of collapse as ontologically objective (meaning these exist and occur independent of the observer). In objective theories, collapse occurs either randomly ("spontaneous localization") or when some physical threshold is reached, with observers having no special role. Thus, objective-collapse theories are realistic, indeterministic, no-hidden-variables theories. Standard quantum mechanics does not specify any mechanism of collapse; quantum mechanics would need to be extended if objective collapse is correct. The requirement for an extension means that objective-collapse theories are alternatives to quantum mechanics rather than interpretations of it. 
Examples include the Ghirardi–Rimini–Weber theory, the continuous spontaneous localization model, and the Penrose interpretation. Comparisons The most common interpretations are summarized in the table below. The values shown in the cells of the table are not without controversy, for the precise meanings of some of the concepts involved are unclear and, in fact, are themselves at the center of the controversy surrounding the given interpretation. Other comparative tables of interpretations of quantum theory have also been published. No experimental evidence exists that distinguishes among these interpretations. To that extent, the physical theory stands, and is consistent with itself and with reality. Nevertheless, designing experiments that would test the various interpretations is the subject of active research. Most of these interpretations have variants. For example, it is difficult to get a precise definition of the Copenhagen interpretation, as it was developed and argued by many people. The silent approach Although interpretational opinions are openly and widely discussed today, that was not always the case. A notable exponent of this silence was Paul Dirac, who once wrote: "The interpretation of quantum mechanics has been dealt with by many authors, and I do not want to discuss it here. I want to deal with more fundamental things." This position is not uncommon among practitioners of quantum mechanics. Similarly, Richard Feynman wrote many popularizations of quantum mechanics without ever publishing about interpretation issues like quantum measurement. Others, like Nico van Kampen and Willis Lamb, have openly criticized non-orthodox interpretations of quantum mechanics.
Physical sciences
Quantum mechanics
Physics
54743
https://en.wikipedia.org/wiki/Inbreeding
Inbreeding
Inbreeding is the production of offspring from the mating or breeding of individuals or organisms that are closely related genetically. By analogy, the term is used in human reproduction, but more commonly refers to the genetic disorders and other consequences that may arise from expression of deleterious recessive traits resulting from incestuous sexual relationships and consanguinity. Animals avoid inbreeding only rarely. Inbreeding results in homozygosity which can increase the chances of offspring being affected by recessive traits. In extreme cases, this usually leads to at least temporarily decreased biological fitness of a population (called inbreeding depression), which is its ability to survive and reproduce. An individual who inherits such deleterious traits is colloquially referred to as inbred. The avoidance of expression of such deleterious recessive alleles caused by inbreeding, via inbreeding avoidance mechanisms, is the main selective reason for outcrossing. Crossbreeding between populations sometimes has positive effects on fitness-related traits, but also sometimes leads to negative effects known as outbreeding depression. However, increased homozygosity increases the probability of fixing beneficial alleles and also slightly decreases the probability of fixing deleterious alleles in a population. Inbreeding can result in purging of deleterious alleles from a population through purifying selection. Inbreeding is a technique used in selective breeding. For example, in livestock breeding, breeders may use inbreeding when trying to establish a new and desirable trait in the stock and for producing distinct families within a breed, but will need to watch for undesirable characteristics in offspring, which can then be eliminated through further selective breeding or culling. Inbreeding also helps to ascertain the type of gene action affecting a trait. Inbreeding is also used to reveal deleterious recessive alleles, which can then be eliminated through assortative breeding or through culling. In plant breeding, inbred lines are used as stocks for the creation of hybrid lines to make use of the effects of heterosis. Inbreeding in plants also occurs naturally in the form of self-pollination. Inbreeding can significantly influence gene expression which can prevent inbreeding depression. Overview Offspring of biologically related persons are subject to the possible effects of inbreeding, such as congenital birth defects. The chances of such disorders are increased when the biological parents are more closely related. This is because such pairings have a 25% probability of producing homozygous zygotes, resulting in offspring with two recessive alleles, which can produce disorders when these alleles are deleterious. Because most recessive alleles are rare in populations, it is unlikely that two unrelated partners will both be carriers of the same deleterious allele; however, because close relatives share a large fraction of their alleles, the probability that any such deleterious allele is inherited from the common ancestor through both parents is increased dramatically. For each homozygous recessive individual formed there is an equal chance of producing a homozygous dominant individual — one completely devoid of the harmful allele. 
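To make the 25% figure above concrete, here is a small sketch (plain Python; the genotype labels follow the usual convention that A is the normal allele and a the deleterious recessive, and both parents are assumed to be carriers, Aa, of the same allele):

from itertools import product
from collections import Counter

parent1 = ["A", "a"]   # carrier parent
parent2 = ["A", "a"]   # carrier parent

# Each parent passes one allele at random; enumerate the four equally likely zygotes.
offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
total = sum(offspring.values())
for genotype, count in sorted(offspring.items()):
    print(genotype, count / total)
# AA 0.25   homozygous dominant, completely free of the harmful allele
# Aa 0.5    unaffected carrier
# aa 0.25   homozygous recessive, affected

Note that the alleles among the offspring are still 50% A and 50% a, which anticipates the point made below that inbreeding by itself changes genotype proportions rather than allele frequencies.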
Contrary to common belief, inbreeding does not in itself alter allele frequencies, but rather increases the relative proportion of homozygotes to heterozygotes; however, because the increased proportion of deleterious homozygotes exposes the allele to natural selection, in the long run its frequency decreases more rapidly in inbred populations. In the short term, incestuous reproduction is expected to increase the number of spontaneous abortions of zygotes, perinatal deaths, and postnatal offspring with birth defects. The advantages of inbreeding may be the result of a tendency to preserve the structures of alleles interacting at different loci that have been adapted together by a common selective history. Malformations or harmful traits can stay within a population due to a high homozygosity rate, and this will cause a population to become fixed for certain traits, like having too many bones in an area, like the vertebral column of wolves on Isle Royale or having cranial abnormalities, such as in Northern elephant seals, where their cranial bone length in the lower mandibular tooth row has changed. Having a high homozygosity rate is problematic for a population because it will unmask recessive deleterious alleles generated by mutations, reduce heterozygote advantage, and it is detrimental to the survival of small, endangered animal populations. When deleterious recessive alleles are unmasked due to the increased homozygosity generated by inbreeding, this can cause inbreeding depression. There may also be other deleterious effects besides those caused by recessive diseases. Thus, similar immune systems may be more vulnerable to infectious diseases (see Major histocompatibility complex and sexual selection). Inbreeding history of the population should also be considered when discussing the variation in the severity of inbreeding depression between and within species. With persistent inbreeding, there is evidence that shows that inbreeding depression becomes less severe. This is associated with the unmasking and elimination of severely deleterious recessive alleles. However, inbreeding depression is not a temporary phenomenon because this elimination of deleterious recessive alleles will never be complete. Eliminating slightly deleterious mutations through inbreeding under moderate selection is not as effective. Fixation of alleles most likely occurs through Muller's ratchet, when an asexual population's genome accumulates deleterious mutations that are irreversible. Despite all its disadvantages, inbreeding can also have a variety of advantages, such as ensuring a child produced from the mating contains, and will pass on, a higher percentage of its mother/father's genetics, reducing the recombination load, and allowing the expression of recessive advantageous phenotypes. Some species with a Haplodiploidy mating system depend on the ability to produce sons to mate with as a means of ensuring a mate can be found if no other male is available. It has been proposed that under circumstances when the advantages of inbreeding outweigh the disadvantages, preferential breeding within small groups could be promoted, potentially leading to speciation. Genetic disorders Autosomal recessive disorders occur in individuals who have two copies of an allele for a particular recessive genetic mutation. Except in certain rare circumstances, such as new mutations or uniparental disomy, both parents of an individual with such a disorder will be carriers of the gene. 
These carriers do not display any signs of the mutation and may be unaware that they carry the mutated gene. Since relatives share a higher proportion of their genes than do unrelated people, it is more likely that related parents will both be carriers of the same recessive allele, and therefore their children are at a higher risk of inheriting an autosomal recessive genetic disorder. The extent to which the risk increases depends on the degree of genetic relationship between the parents; the risk is greater when the parents are close relatives and lower for relationships between more distant relatives, such as second cousins, though still greater than for the general population. Children of parent-child or sibling-sibling unions are at an increased risk compared to cousin-cousin unions. Inbreeding may result in a greater than expected phenotypic expression of deleterious recessive alleles within a population. As a result, first-generation inbred individuals are more likely to show physical and health defects. The isolation of a small population for a period of time can lead to inbreeding within that population, resulting in increased genetic relatedness between breeding individuals. Inbreeding depression can also occur in a large population if individuals tend to mate with their relatives, instead of mating randomly. Due to higher prenatal and postnatal mortality rates, some individuals in the first generation of inbreeding will not live on to reproduce. Over time, with isolation, such as a population bottleneck caused by purposeful (assortative) breeding or natural environmental factors, the deleterious inherited traits are culled. Island species are often very inbred, as their isolation from the larger group on a mainland allows natural selection to work on their population. This type of isolation may result in the formation of a race or even speciation, as the inbreeding first removes many deleterious genes and permits the expression of genes that allow a population to adapt to an ecosystem. As the adaptation becomes more pronounced, the new species or race radiates from its entrance into the new space, or dies out if it cannot adapt and, most importantly, reproduce. Reduced genetic diversity, for example due to a bottleneck, will unavoidably increase inbreeding for the entire population. This may mean that a species is unable to adapt to changes in environmental conditions. Each individual will have a similar immune system, as immune systems are genetically based. When a species becomes endangered, the population may fall below a minimum size at which forced inbreeding among the remaining animals results in extinction. Natural breeding includes inbreeding by necessity, and most animals only migrate when necessary. In many cases, the closest available mate is a mother, sister, grandmother, father, brother, or grandfather. In all cases, environmental stresses act to remove from the population those individuals that cannot survive because of illness. It was once assumed that wild populations do not inbreed, but this is not what is observed in some cases in the wild. However, in species such as horses, animals in wild or feral conditions often drive off the young of both sexes, thought to be a mechanism by which the species instinctively avoids some of the genetic consequences of inbreeding. In general, many mammal species, including humanity's closest primate relatives, avoid close inbreeding, possibly due to the deleterious effects.
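The dependence of risk on the degree of relationship described above can be put into rough numbers with Wright's classical approximation, in which the expected frequency of affected (homozygous recessive) offspring is q^2 + F·q·(1 − q), where q is the population frequency of the allele and F is the inbreeding coefficient of the offspring. A minimal sketch (plain Python; the allele frequency of 0.5% is an arbitrary illustrative assumption) compares random mating with first-cousin and full-sibling matings:

def affected_fraction(q, F):
    # Wright's genotype frequency for the recessive homozygote under
    # inbreeding coefficient F; F = 0 recovers the Hardy-Weinberg value q^2.
    return q * q + F * q * (1.0 - q)

q = 0.005  # assumed frequency of a rare deleterious recessive allele
for label, F in [("unrelated parents", 0.0),
                 ("first cousins    ", 1.0 / 16),
                 ("full siblings    ", 1.0 / 4)]:
    print(label, affected_fraction(q, F))
# unrelated parents  2.5e-05    (about 1 in 40,000)
# first cousins      ~0.00034   (about 1 in 3,000)
# full siblings      ~0.0013    (about 1 in 800)

This single-locus estimate understates the total burden, since a genome carries many such loci, but it shows why the risk rises steeply for the closest unions.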
Examples Although there are several examples of inbred populations of wild animals, the negative consequences of this inbreeding are poorly documented. In the South American sea lion, there was concern that recent population crashes would reduce genetic diversity. Historical analysis indicated that a population expansion from just two matrilineal lines was responsible for most of the individuals within the population. Even so, the diversity within the lines allowed great variation in the gene pool that may help to protect the South American sea lion from extinction. In lions, prides are often followed by related males in bachelor groups. When the dominant male is killed or driven off by one of these bachelors, a father may be replaced by his son. There is no mechanism for preventing inbreeding or to ensure outcrossing. In the prides, most lionesses are related to one another. If there is more than one dominant male, the group of alpha males are usually related. Two lines are then being "line bred". Also, in some populations, such as the Crater lions, it is known that a population bottleneck has occurred. Researchers found far greater genetic heterozygosity than expected. In fact, predators are known for low genetic variance, along with most of the top portion of the trophic levels of an ecosystem. Additionally, the alpha males of two neighboring prides can be from the same litter; one brother may come to acquire leadership over another's pride, and subsequently mate with his 'nieces' or cousins. However, killing another male's cubs, upon the takeover, allows the new selected gene complement of the incoming alpha male to prevail over the previous male. There are genetic assays being scheduled for lions to determine their genetic diversity. The preliminary studies show results inconsistent with the outcrossing paradigm based on individual environments of the studied groups. In Central California, sea otters were thought to have been driven to extinction due to over hunting, until a small colony was discovered in the Point Sur region in the 1930s. Since then, the population has grown and spread along the central Californian coast to around 2,000 individuals, a level that has remained stable for over a decade. Population growth is limited by the fact that all Californian sea otters are descended from the isolated colony, resulting in inbreeding. Cheetahs are another example of inbreeding. Thousands of years ago, the cheetah went through a population bottleneck that reduced its population dramatically so the animals that are alive today are all related to one another. A consequence from inbreeding for this species has been high juvenile mortality, low fecundity, and poor breeding success. In a study on an island population of song sparrows, individuals that were inbred showed significantly lower survival rates than outbred individuals during a severe winter weather related population crash. These studies show that inbreeding depression and ecological factors have an influence on survival. The Florida panther population was reduced to about 30 animals, so inbreeding became a problem. Several females were imported from Texas and now the population is better off genetically. Measures A measure of inbreeding of an individual A is the probability F(A) that both alleles in one locus are derived from the same allele in an ancestor. These two identical alleles that are both derived from a common ancestor are said to be identical by descent. 
This probability F(A) is called the "coefficient of inbreeding". Another useful measure that describes the extent to which two individuals are related (say individuals A and B) is their coancestry coefficient f(A,B), which gives the probability that one randomly selected allele from A and another randomly selected allele from B are identical by descent. This is also denoted as the kinship coefficient between A and B. A particular case is the self-coancestry of individual A with itself, f(A,A), which is the probability that taking one random allele from A and then, independently and with replacement, another random allele also from A, both are identical by descent. Since they can be identical by descent by sampling the same allele or by sampling both alleles that happen to be identical by descent, we have f(A,A) = 1/2 + F(A)/2. Both the inbreeding and the coancestry coefficients can be defined for specific individuals or as average population values. They can be computed from genealogies or estimated from the population size and its breeding properties, but all methods assume no selection and are limited to neutral alleles. There are several methods to compute these coefficients. The two main ways are the path method and the tabular method. Typical coancestries between relatives are as follows:
Father/daughter or mother/son → 25% (1/4)
Brother/sister → 25% (1/4)
Grandfather/granddaughter or grandmother/grandson → 12.5% (1/8)
Half-brother/half-sister, double cousins → 12.5% (1/8)
Uncle/niece or aunt/nephew → 12.5% (1/8)
Great-grandfather/great-granddaughter or great-grandmother/great-grandson → 6.25% (1/16)
Half-uncle/niece or half-aunt/nephew → 6.25% (1/16)
First cousins → 6.25% (1/16)
Animals
Wild animals
Banded mongoose females regularly mate with their fathers and brothers.
Bed bugs: North Carolina State University found that bedbugs, in contrast to most other insects, tolerate incest and are able to genetically withstand the effects of inbreeding quite well.
Common fruit fly females prefer to mate with their own brothers over unrelated males.
Cottony cushion scales: 'It turns out that females in these hermaphrodite insects are not really fertilizing their eggs themselves, but instead are having this done by a parasitic tissue that infects them at birth,' says Laura Ross of Oxford University's Department of Zoology. 'It seems that this infectious tissue derives from left-over sperm from their father, who has found a sneaky way of having more children by mating with his daughters.'
Adactylidium: The single male offspring mite mates with all the daughters when they are still in the mother. The females, now impregnated, cut holes in their mother's body so that they can emerge. The male emerges as well, but does not look for food or new mates, and dies after a few hours. The females die at the age of 4 days, when their own offspring eat them alive from the inside.
Domestic animals
Breeding in domestic animals is primarily assortative breeding (see selective breeding). Without the sorting of individuals by trait, a breed could not be established, nor could poor genetic material be removed. Homozygosity is the case where similar or identical alleles combine to express a trait that is not otherwise expressed (recessiveness). Inbreeding exposes recessive alleles through increasing homozygosity. Breeders must avoid breeding from individuals that demonstrate either homozygosity or heterozygosity for disease-causing alleles.
The goal of preventing the transfer of deleterious alleles may be achieved by reproductive isolation, sterilization, or, in the extreme case, culling. Culling is not strictly necessary if genetics are the only issue in hand. Small animals such as cats and dogs may be sterilized, but in the case of large agricultural animals, such as cattle, culling is usually the only economic option. The issue of casual breeders who inbreed irresponsibly is discussed in the following quotation on cattle: Meanwhile, milk production per cow per lactation increased from 17,444 lbs to 25,013 lbs from 1978 to 1998 for the Holstein breed. Mean breeding values for milk of Holstein cows increased by 4,829 lbs during this period. High producing cows are increasingly difficult to breed and are subject to higher health costs than cows of lower genetic merit for production (Cassell, 2001). Intensive selection for higher yield has increased relationships among animals within breed and increased the rate of casual inbreeding. Many of the traits that affect profitability in crosses of modern dairy breeds have not been studied in designed experiments. Indeed, all crossbreeding research involving North American breeds and strains is very dated (McAllister, 2001) if it exists at all. As a result of long-term cooperation between USDA and dairy farmers which led to a revolution in dairy cattle productivity, the United States has since 1992 been the world’s largest supplier of dairy bull semen. However, US genomic technology has resulted in the US dairy cattle population becoming "the most inbred it’s ever been" and the rate of increase in US national milk yield has tapered off. Efforts are now being made to identify desirable genes in cattle breeds not yet optimized by US dairy breeders in order to apply hybrid vigor to the US dairy cattle population and thus propel US dairy technology to even higher levels of productivity. The BBC produced two documentaries on dog inbreeding titled Pedigree Dogs Exposed and Pedigree Dogs Exposed: Three Years On that document the negative health consequences of excessive inbreeding. Linebreeding Linebreeding is a form of inbreeding. There is no clear distinction between the two terms, but linebreeding may encompass crosses between individuals and their descendants or two cousins. This method can be used to increase a particular animal's contribution to the population. While linebreeding is less likely to cause problems in the first generation than does inbreeding, over time, linebreeding can reduce the genetic diversity of a population and cause problems related to a too-small gene pool that may include an increased prevalence of genetic disorders and inbreeding depression. Outcrossing Outcrossing is where two unrelated individuals are crossed to produce progeny. In outcrossing, unless there is verifiable genetic information, one may find that all individuals are distantly related to an ancient progenitor. If the trait carries throughout a population, all individuals can have this trait. This is called the founder effect. In the well established breeds, that are commonly bred, a large gene pool is present. For example, in 2004, over 18,000 Persian cats were registered. A possibility exists for a complete outcross, if no barriers exist between the individuals to breed. However, it is not always the case, and a form of distant linebreeding occurs. Again it is up to the assortative breeder to know what sort of traits, both positive and negative, exist within the diversity of one breeding. 
This diversity of genetic expression, within even close relatives, increases the variability and diversity of viable stock. Laboratory animals Systematic inbreeding and maintenance of inbred strains of laboratory mice and rats is of great importance for biomedical research. The inbreeding guarantees a consistent and uniform animal model for experimental purposes and enables genetic studies in congenic and knock-out animals. In order to achieve a mouse strain that is considered inbred, a minimum of 20 sequential generations of sibling matings must occur. With each successive generation of breeding, homozygosity in the entire genome increases, eliminating heterozygous loci. With 20 generations of sibling matings, homozygosity is occurring at roughly 98.7% of all loci in the genome, allowing for these offspring to serve as animal models for genetic studies. The use of inbred strains is also important for genetic studies in animal models, for example to distinguish genetic from environmental effects. The mice that are inbred typically show considerably lower survival rates. Humans Effects Inbreeding increases homozygosity, which can increase the chances of the expression of deleterious or beneficial recessive alleles and therefore has the potential to either decrease or increase the fitness of the offspring. Depending on the rate of inbreeding, natural selection may still be able to eliminate deleterious alleles. With continuous inbreeding, genetic variation is lost and homozygosity is increased, enabling the expression of recessive deleterious alleles in homozygotes. The coefficient of inbreeding, or the degree of inbreeding in an individual, is an estimate of the percent of homozygous alleles in the overall genome. The more biologically related the parents are, the greater the coefficient of inbreeding, since their genomes have many similarities already. This overall homozygosity becomes an issue when there are deleterious recessive alleles in the gene pool of the family. By pairing chromosomes of similar genomes, the chance for these recessive alleles to pair and become homozygous greatly increases, leading to offspring with autosomal recessive disorders. However, these deleterious effects are common for very close relatives but not for those related on the 3rd cousin or greater level, who exhibit increased fitness. Inbreeding is especially problematic in small populations where the genetic variation is already limited. By inbreeding, individuals are further decreasing genetic variation by increasing homozygosity in the genomes of their offspring. Thus, the likelihood of deleterious recessive alleles to pair is significantly higher in a small inbreeding population than in a larger inbreeding population. The fitness consequences of consanguineous mating have been studied since their scientific recognition by Charles Darwin in 1839. Some of the most harmful effects known from such breeding includes its effects on the mortality rate as well as on the general health of the offspring. Since the 1960s, there have been many studies to support such debilitating effects on the human organism. Specifically, inbreeding has been found to decrease fertility as a direct result of increasing homozygosity of deleterious recessive alleles. Fetuses produced by inbreeding also face a greater risk of spontaneous abortions due to inherent complications in development. 
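As a check on the 20-generation figure quoted above for inbred laboratory strains, the inbreeding coefficient under repeated full-sibling mating follows the classical recurrence F(t) = (1 + 2·F(t−1) + F(t−2)) / 4. Iterating it (a short sketch in plain Python, starting from fully outbred stock) reproduces, to within rounding, the roughly 98.7% homozygosity cited for generation 20:

f_prev2, f_prev1 = 0.0, 0.0   # F two generations back and one generation back
for generation in range(1, 21):
    f = (1.0 + 2.0 * f_prev1 + f_prev2) / 4.0   # full-sib mating recurrence
    f_prev2, f_prev1 = f_prev1, f
print(round(f, 4))   # -> 0.9863 after 20 generations of sibling matings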
Among mothers who experience stillbirths and early infant deaths, those that are inbreeding have a significantly higher chance of reaching repeated results with future offspring. Additionally, consanguineous parents possess a high risk of premature birth and producing underweight and undersized infants. Viable inbred offspring are also likely to be inflicted with physical deformities and genetically inherited diseases. Studies have confirmed an increase in several genetic disorders due to inbreeding such as blindness, hearing loss, neonatal diabetes, limb malformations, disorders of sex development, schizophrenia and several others. Moreover, there is an increased risk for congenital heart disease depending on the inbreeding coefficient (See coefficient of inbreeding) of the offspring, with significant risk accompanied by an F =.125 or higher. Prevalence The general negative outlook and eschewal of inbreeding that is prevalent in the Western world today has roots from over 2000 years ago. Specifically, written documents such as the Bible illustrate that there have been laws and social customs that have called for the abstention from inbreeding. Along with cultural taboos, parental education and awareness of inbreeding consequences have played large roles in minimizing inbreeding frequencies in areas like Europe. That being so, there are less urbanized and less populated regions across the world that have shown continuity in the practice of inbreeding. The continuity of inbreeding is often either by choice or unavoidably due to the limitations of the geographical area. When by choice, the rate of consanguinity is highly dependent on religion and culture. In the Western world, some Anabaptist groups are highly inbred because they originate from small founder populations that have bred as a closed population. Of the practicing regions, Middle Eastern and northern Africa territories show the greatest frequencies of consanguinity. Among these populations with high levels of inbreeding, researchers have found several disorders prevalent among inbred offspring. In Lebanon, Saudi Arabia, Egypt, and in Israel, the offspring of consanguineous relationships have an increased risk of congenital malformations, congenital heart defects, congenital hydrocephalus and neural tube defects. Furthermore, among inbred children in Palestine and Lebanon, there is a positive association between consanguinity and reported cleft lip/palate cases. Historically, populations of Qatar have engaged in consanguineous relationships of all kinds, leading to high risk of inheriting genetic diseases. As of 2014, around 5% of the Qatari population suffered from hereditary hearing loss; most were descendants of a consanguineous relationship. Royalty and nobility Inter-nobility marriage was used as a method of forming political alliances among elites. These ties were often sealed only upon the birth of progeny within the arranged marriage. Thus marriage was seen as a union of lines of nobility and not as a contract between individuals. Royal intermarriage was often practiced among European royal families, usually for interests of state. Over time, due to the relatively limited number of potential consorts, the gene pool of many ruling families grew progressively smaller, until all European royalty was related. This also resulted in many being descended from a certain person through many lines of descent, such as the numerous European royalty and nobility descended from the British Queen Victoria or King Christian IX of Denmark. 
The House of Habsburg was known for its intermarriages; the Habsburg lip often cited as an ill-effect. The closely related houses of Habsburg, Bourbon, Braganza and Wittelsbach also frequently engaged in first-cousin unions as well as the occasional double-cousin and uncle–niece marriages. In ancient Egypt, royal women were believed to carry the bloodlines and so it was advantageous for a pharaoh to marry his sister or half-sister; in such cases a special combination between endogamy and polygamy is found. Normally, the old ruler's eldest son and daughter (who could be either siblings or half-siblings) became the new rulers. All rulers of the Ptolemaic dynasty uninterruptedly from Ptolemy IV (Ptolemy II married his sister but had no issue) were married to their brothers and sisters, so as to keep the Ptolemaic blood "pure" and to strengthen the line of succession. King Tutankhamun's mother is reported to have been the half-sister to his father, Cleopatra VII (also called Cleopatra VI) and Ptolemy XIII, who married and became co-rulers of ancient Egypt following their father's death, are the most widely known example.
Biology and health sciences
Genetics
Biology
54749
https://en.wikipedia.org/wiki/Chromatic%20aberration
Chromatic aberration
In optics, chromatic aberration (CA), also called chromatic distortion, color aberration, color fringing, or purple fringing, is a failure of a lens to focus all colors to the same point. It is caused by dispersion: the refractive index of the lens elements varies with the wavelength of light. The refractive index of most transparent materials decreases with increasing wavelength. Since the focal length of a lens depends on the refractive index, this variation in refractive index affects focusing. Since the focal length of the lens varies with the color of the light different colors of light are brought to focus at different distances from the lens or with different levels of magnification. Chromatic aberration manifests itself as "fringes" of color along boundaries that separate dark and bright parts of the image. Types There are two types of chromatic aberration: axial (longitudinal), and transverse (lateral). Axial aberration occurs when different wavelengths of light are focused at different distances from the lens (focus shift). Longitudinal aberration is typical at long focal lengths. Transverse aberration occurs when different wavelengths are focused at different positions in the focal plane, because the magnification and/or distortion of the lens also varies with wavelength. Transverse aberration is typical at short focal lengths. The ambiguous acronym LCA is sometimes used for either longitudinal or lateral chromatic aberration. The two types of chromatic aberration have different characteristics, and may occur together. Axial CA occurs throughout the image and is specified by optical engineers, optometrists, and vision scientists in diopters. It can be reduced by stopping down, which increases depth of field so that though the different wavelengths focus at different distances, they are still in acceptable focus. Transverse CA does not occur on the optical axis of an optical system (which is typically the center of the image) and increases away from the optical axis. It is not affected by stopping down since it is caused by the different magnification of the lens with each color of light. In digital sensors, axial CA results in the red and blue planes being defocused (assuming that the green plane is in focus), which is relatively difficult to remedy in post-processing, while transverse CA results in the red, green, and blue planes being at different magnifications (magnification changing along radii, as in geometric distortion), and can be corrected by radially scaling the planes appropriately so they line up. Minimization In the earliest uses of lenses, chromatic aberration was reduced by increasing the focal length of the lens where possible. For example, this could result in extremely long telescopes such as the very long aerial telescopes of the 17th century. Isaac Newton's theories about white light being composed of a spectrum of colors led him to the conclusion that uneven refraction of light caused chromatic aberration (leading him to build the first reflecting telescope, his Newtonian telescope, in 1668.) Modern telescopes, as well as other catoptric and catadioptric systems, continue to use mirrors, which have no chromatic aberration. There exists a point called the circle of least confusion, where chromatic aberration can be minimized. It can be further minimized by using an achromatic lens or achromat, in which materials with differing dispersion are assembled together to form a compound lens. 
The most common type is an achromatic doublet, with elements made of crown and flint glass. This reduces the amount of chromatic aberration over a certain range of wavelengths, though it does not produce perfect correction. By combining more than two lenses of different composition, the degree of correction can be further increased, as seen in an apochromatic lens or apochromat. "Achromat" and "apochromat" refer to the type of correction (2 or 3 wavelengths correctly focused), not the degree (how defocused the other wavelengths are), and an achromat made with sufficiently low dispersion glass can yield significantly better correction than an achromat made with more conventional glass. Similarly, the benefit of apochromats is not simply that they focus three wavelengths sharply, but that their error on other wavelengths is also quite small. Many types of glass have been developed to reduce chromatic aberration. These are low-dispersion glasses, most notably glasses containing fluorite. These hybridized glasses have a very low level of optical dispersion; only two combined lenses made of these substances can yield a high level of correction. The use of achromats was an important step in the development of optical microscopes and telescopes. An alternative to achromatic doublets is the use of diffractive optical elements. Diffractive optical elements are able to generate arbitrary complex wave fronts from a sample of optical material which is essentially flat. Diffractive optical elements have negative dispersion characteristics, complementary to the positive Abbe numbers of optical glasses and plastics. Specifically, in the visible part of the spectrum, diffractives have a negative Abbe number of −3.5. Diffractive optical elements can be fabricated using diamond turning techniques. Telephoto lenses using diffractive elements to minimize chromatic aberration are commercially available from Canon and Nikon for interchangeable-lens cameras; these include 800mm f/6.3, 500mm f/5.6, and 300mm f/4 models by Nikon (branded as "phase fresnel" or PF), and 800mm f/11, 600mm f/11, and 400mm f/4 models by Canon (branded as "diffractive optics" or DO). They produce sharp images with reduced chromatic aberration at a lower weight and size than traditional optics of similar specifications and are generally well-regarded by wildlife photographers. Mathematics of chromatic aberration minimization For a doublet consisting of two thin lenses in contact, the Abbe number of the lens materials is used to calculate the correct focal length of the lenses to ensure correction of chromatic aberration. If the focal lengths of the two lenses for light at the yellow Fraunhofer D-line (589.2 nm) are f1 and f2, then best correction occurs for the condition 1/(f1·V1) + 1/(f2·V2) = 0, or equivalently f1·V1 = −f2·V2, where V1 and V2 are the Abbe numbers of the materials of the first and second lenses, respectively. Since Abbe numbers are positive, one of the focal lengths must be negative, i.e., a diverging lens, for the condition to be met. The overall focal length of the doublet f is given by the standard formula for thin lenses in contact, 1/f = 1/f1 + 1/f2, and the above condition ensures this will be the focal length of the doublet for light at the blue and red Fraunhofer F and C lines (486.1 nm and 656.3 nm respectively). The focal length for light at other visible wavelengths will be similar but not exactly equal to this.
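As a worked example of the doublet condition just stated (a sketch only; the Abbe numbers are typical catalogue values for a common crown glass and a common flint glass, and the 100 mm target focal length is an arbitrary assumption):

# Achromatic doublet: choose element focal lengths f1, f2 so that
# 1/(f1*V1) + 1/(f2*V2) = 0 while 1/f = 1/f1 + 1/f2 gives the target focal length.
V1, V2 = 64.2, 36.4        # assumed Abbe numbers (crown and flint)
f = 100.0                  # desired doublet focal length in mm

f1 = f * (V1 - V2) / V1    # converging crown element
f2 = -f * (V1 - V2) / V2   # diverging flint element

print(round(f1, 1), round(f2, 1))                       # -> 43.3 -76.4
print(round(1.0 / (1.0 / f1 + 1.0 / f2), 1))            # -> 100.0, the combined focal length
print(abs(1.0 / (f1 * V1) + 1.0 / (f2 * V2)) < 1e-12)   # -> True, achromatic condition met

The strongly converging crown element and the weaker diverging flint element together behave like a single 100 mm lens whose focal length is the same at the F and C lines.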
Chromatic aberration is used during a duochrome eye test to ensure that a correct lens power has been selected. The patient is confronted with red and green images and asked which is sharper. If the prescription is right, then the cornea, lens, and prescribed lens will focus the red and green wavelengths just in front of and just behind the retina, so that the two images appear equally sharp. If the lens is too powerful or too weak, then one will focus on the retina, and the other will be much more blurred in comparison. Image processing to reduce the appearance of lateral chromatic aberration In some circumstances, it is possible to correct some of the effects of chromatic aberration in digital post-processing. However, in real-world circumstances, chromatic aberration results in permanent loss of some image detail. Detailed knowledge of the optical system used to produce the image can allow for some useful correction. In an ideal situation, post-processing to remove or correct lateral chromatic aberration would involve scaling the fringed color channels, or subtracting a scaled version of the fringed channels, so that all channels spatially overlap each other correctly in the final image. As chromatic aberration is complex (due to its relationship to focal length, etc.), some camera manufacturers employ lens-specific chromatic aberration appearance minimization techniques. Almost every major camera manufacturer enables some form of chromatic aberration correction, both in-camera and via their proprietary software. Third-party software tools such as PTLens are also capable of performing complex chromatic aberration appearance minimization with their large database of cameras and lenses. In reality, even theoretically perfect post-processing-based chromatic aberration reduction-removal-correction systems do not increase image detail as well as a lens that is optically well corrected for chromatic aberration would, for the following reasons:
Rescaling is only applicable to lateral chromatic aberration, but there is also longitudinal chromatic aberration.
Rescaling individual color channels results in a loss of resolution from the original image.
Most camera sensors capture only a few discrete (e.g., RGB) color channels, but chromatic aberration is not discrete and occurs across the light spectrum.
The dyes used in digital camera sensors for capturing color are not very efficient, so cross-channel color contamination is unavoidable and causes, for example, the chromatic aberration in the red channel to also be blended into the green channel along with any green chromatic aberration.
The above are closely related to the specific scene that is captured, so no amount of programming and knowledge of the capturing equipment (e.g., camera and lens data) can overcome these limitations. Photography The term "purple fringing" is commonly used in photography, although not all purple fringing can be attributed to chromatic aberration. Similar colored fringing around highlights may also be caused by lens flare. Colored fringing around highlights or dark regions may be due to the receptors for different colors having differing dynamic range or sensitivity, therefore preserving detail in one or two color channels while "blowing out" or failing to register in the other channel or channels. On digital cameras, the particular demosaicing algorithm is likely to affect the apparent degree of this problem.
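As a minimal sketch of the radial rescaling described under image processing above (NumPy only; the one-dimensional cross-section, the sharp-edged test target, and the 1% magnification mismatch are all illustrative assumptions, whereas real tools derive per-lens correction profiles):

import numpy as np

def disc(r):
    # A bright disc of radius 0.5, standing in for image content.
    return (np.abs(r) < 0.5).astype(float)

x = np.linspace(-1.0, 1.0, 2001)   # normalised distance from the optical axis
green = disc(x)
red = disc(x / 1.01)               # red channel rendered at an assumed 1% greater magnification

# Correct the red channel by resampling it on coordinates scaled about the centre.
red_corrected = np.interp(x * 1.01, x, red)

print(np.abs(red - green).sum())             # clearly nonzero: fringed edge before correction
print(np.abs(red_corrected - green).sum())   # near zero after rescaling

The same idea applied in two dimensions, with the scale factors taken from a lens profile rather than guessed, is essentially what the in-camera and raw-converter corrections mentioned above do for the transverse component of the aberration.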
Another cause of this fringing is chromatic aberration in the very small microlenses used to collect more light for each CCD pixel; since these lenses are tuned to correctly focus green light, the incorrect focusing of red and blue results in purple fringing around highlights. This is a uniform problem across the frame, and is more of a problem in CCDs with a very small pixel pitch such as those used in compact cameras. Some cameras, such as the Panasonic Lumix series and newer Nikon and Sony DSLRs, feature a processing step specifically designed to remove it. On photographs taken using a digital camera, very small highlights may frequently appear to have chromatic aberration where in fact the effect is because the highlight image is too small to stimulate all three color pixels, and so is recorded with an incorrect color. This may not occur with all types of digital camera sensor. Again, the de-mosaicing algorithm may affect the apparent degree of the problem. Black-and-white photography Chromatic aberration also affects black-and-white photography. Although there are no colors in the photograph, chromatic aberration will blur the image. It can be reduced by using a narrow-band color filter, or by converting a single color channel to black and white. This will, however, require longer exposure (and change the resulting image). (This is only true with panchromatic black-and-white film, since orthochromatic film is already sensitive to only a limited spectrum.) Electron microscopy Chromatic aberration also affects electron microscopy, although instead of different colors having different focal points, different electron energies may have different focal points.
Physical sciences
Optics
Physics
54773
https://en.wikipedia.org/wiki/Cherry
Cherry
A cherry is the fruit of many plants of the genus Prunus, and is a fleshy drupe (stone fruit). Commercial cherries are obtained from cultivars of several species, such as the sweet Prunus avium and the sour Prunus cerasus. The name 'cherry' also refers to the cherry tree and its wood, and is sometimes applied to almonds and visually similar flowering trees in the genus Prunus, as in "ornamental cherry" or "cherry blossom". Wild cherry may refer to any of the cherry species growing outside cultivation, although Prunus avium is often referred to specifically by the name "wild cherry" in the British Isles. Botany True cherries Prunus subg. Cerasus contains species that are typically called cherries. They are known as true cherries and distinguished by having a single winter bud per axil, by having the flowers in small corymbs or umbels of several together (occasionally solitary, e.g. P. serrula; some species with short racemes, e.g. P. maacki), and by having smooth fruit with no obvious groove. Examples of true cherries are: Prunus apetala (Siebold & Zucc.) Franch. & Sav. – clove cherry Prunus avium (L.) L. – sweet cherry, wild cherry, mazzard or gean Prunus campanulata Maxim. – Taiwan cherry, Formosan cherry or bell-flowered cherry Prunus canescens Bois. – grey-leaf cherry Prunus cerasus L. – sour cherry Prunus emarginata (Douglas ex Hook.) Walp. – Oregon cherry or bitter cherry Prunus fruticosa Pall. – European dwarf cherry, dwarf cherry, Mongolian cherry or steppe cherry Prunus incisa Thunb. – Fuji cherry Prunus jamasakura Siebold ex Koidz. – Japanese mountain cherry or Japanese hill cherry Prunus leveilleana (Koidz.) Koehne – Korean mountain cherry Prunus maackii Rupr. – Manchurian cherry or Amur chokecherry Prunus mahaleb L. – Saint Lucie cherry, rock cherry, perfumed cherry or mahaleb cherry Prunus maximowiczii Rupr. – Miyama cherry or Korean cherry Prunus nipponica Matsum. – Takane cherry, peak cherry or Japanese alpine cherry Prunus pensylvanica L.f. – pin cherry, fire cherry, or wild red cherry Prunus pseudocerasus Lindl. – Chinese sour cherry or Chinese cherry Prunus rufa Wall ex Hook.f. – Himalayan cherry Prunus rufoides C.K.Schneid. – tailed-leaf cherry Prunus sargentii Rehder – northern Japanese hill cherry, northern Japanese mountain cherry or Sargent's cherry Prunus serrula Franch. – paperbark cherry, birch bark cherry or Tibetan cherry Prunus serrulata Lindl. – Japanese cherry, hill cherry, Oriental cherry or East Asian cherry Prunus speciosa (Koidz.) Ingram – Oshima cherry Prunus takesimensis Nakai – Ulleungdo cherry Prunus yedoensis Matsum. – Yoshino cherry or Tokyo cherry Bush cherries Bush cherries are characterized by having three winter buds per axil. They used to be included in Prunus subg. Cerasus, but phylogenetic research indicates they should be a section of Prunus subg. Prunus. Examples of bush cherries are: Prunus cistena Koehne – purple-leaf sand cherry Prunus humilis Bunge – Chinese plum-cherry or humble bush cherry Prunus japonica Thunb. – Korean cherry Prunus prostrata Labill. – mountain cherry, rock cherry, spreading cherry or prostrate cherry Prunus pumila L. – sand cherry Prunus tomentosa Thunb. – Nanking cherry, Manchu cherry, downy cherry, Shanghai cherry, Ando cherry, mountain cherry, Chinese dwarf cherry, Chinese bush cherry Bird cherries, cherry laurels, and other racemose cherries Prunus subg. 
Padus contains most racemose species that are called cherries which used to be included in the genera Padus (bird cherries), Laurocerasus (cherry laurels), Pygeum (tropical species such as African cherry) and Maddenia. Examples of the racemose cherries are: Prunus africana (Hook.f.) Kalkman – African cherry Prunus caroliniana Aiton – Carolina laurel cherry or laurel cherry Prunus cornuta (Wall. ex Royle) Steud. – Himalayan bird cherry Prunus grayana Maxim. – Japanese bird cherry or Gray's bird cherry Prunus ilicifolia (Nutt. ex Hook. & Arn.) Walp. – hollyleaf cherry, evergreen cherry, holly-leaved cherry or islay Prunus laurocerasus L. – cherry laurel Prunus lyonii (Eastw.) Sarg. – Catalina Island cherry Prunus myrtifolia (L.) Urb. – West Indian cherry Prunus napaulensis (Ser.) Steud. – Nepal bird cherry Prunus occidentalis Sw. – western cherry laurel Prunus padus L. – bird cherry or European bird cherry Prunus pleuradenia Griseb. – Antilles cherry Prunus serotina Ehrh. – black cherry, wild cherry Prunus ssiori F.Schmidt – Hokkaido bird cherry Prunus virginiana L. – chokecherry Etymology The English word cherry derives from Old Northern French or Norman cherise from the Latin cerasum, referring to an ancient Greek region, Kerasous (Κερασοῦς) near Giresun, Turkey, from which cherries were first thought to be exported to Europe. The word "cherry" is also used for some species that bear fruits with similar size and shape even though they are not in the same Prunus genus; some of these species include the "Jamaican cherry" (Muntingia calabura) and the "Spanish cherry" (Mimusops elengi). Antiquity The indigenous range of the sweet cherry extends through most of Europe, western Asia, and parts of northern Africa, and the fruit has been consumed throughout its range since prehistoric times. A cultivated cherry is recorded as having been brought to Rome by Lucius Licinius Lucullus from northeastern Anatolia, also known as the Pontus region, in 72 BCE. Cherries were introduced into England at Teynham, near Sittingbourne in Kent, by order of Henry VIII, who had tasted them in Flanders. Cherries, along with many other fruiting trees and plants, probably first arrived in North America around 1606 in the New France colony of Port Royal, which is modern-day Annapolis Royal, Nova Scotia. Richard Guthrie described in 1629, the "fruitful valley adorned with...great variety of fruit trees, chestnuts, pears, apples, cherries, plums and all other fruits." Cultivation The cultivated forms are of the species sweet cherry (P. avium) to which most cherry cultivars belong, and the sour cherry (P. cerasus), which is used mainly for cooking. Both species originate in Europe and western Asia; they usually do not cross-pollinate. Some other species, although having edible fruit, are not grown extensively for consumption, except in northern regions where the two main species will not grow. Irrigation, spraying, labor, and their propensity to damage from rain and hail make cherries relatively expensive. Nonetheless, demand is high for the fruit. In commercial production, sour cherries, as well as sweet cherries sometimes, are harvested by using a mechanized "shaker." Hand picking is also widely used for sweet as well as sour cherries to harvest the fruit to avoid damage to both fruit and trees. Common rootstocks include Mazzard, Mahaleb, Colt, and Gisela Series, a dwarfing rootstock that produces trees significantly smaller than others, only 8 to 10 feet (2.5 to 3 meters) tall. 
Sour cherries require no pollenizer, while few sweet varieties are self-fertile. A cherry tree will take three to four years once it is planted in the orchard to produce its first crop of fruit, and seven years to attain full maturity.

Growing season
Like most temperate-latitude trees, cherry trees require a certain number of chilling hours each year to break dormancy and bloom and produce fruit. The number of chilling hours required depends on the variety. Because of this cold-weather requirement, no members of the genus Prunus can grow in tropical climates. (See the "Production" section for more information on chilling requirements.) Cherries have a short growing season and can grow in most temperate latitudes. Cherries blossom in April (in the Northern Hemisphere), and the peak season for the cherry harvest is in the summer: in southern Europe in June, in North America in June, in England in mid-July, and in southern British Columbia (Canada) from June to mid-August. In many parts of North America, they are among the first tree fruits to flower and ripen in mid-spring. In the Southern Hemisphere, cherries are usually at their peak in late December and are widely associated with Christmas. 'Burlat' is an early variety which ripens during the beginning of December, 'Lapins' ripens near the end of December, and 'Sweetheart' finishes slightly later.

Pests and diseases
Generally, the cherry can be a difficult fruit tree to grow and keep alive. In Europe, the first visible pest in the growing season soon after blossom (in April in western Europe) is usually the black cherry aphid ("cherry blackfly", Myzus cerasi), which causes leaves at the tips of branches to curl, with the blackfly colonies exuding a sticky secretion which promotes fungal growth on the leaves and fruit. At the fruiting stage in June/July (Europe), the cherry fruit fly (Rhagoletis cingulata and Rhagoletis cerasi) lays its eggs in the immature fruit, whereafter its larvae feed on the cherry flesh and exit through a small hole (about 1 mm in diameter), which in turn is the entry point for fungal infection of the cherry fruit after rainfall. In addition, cherry trees are susceptible to bacterial canker, cytospora canker, brown rot of the fruit, root rot from overly wet soil, crown rot, and several viruses.

Cultivars
A number of cultivars have gained the Royal Horticultural Society's Award of Garden Merit. See cherry blossom and Prunus for ornamental trees.

Production
In 2020, world production of sweet cherries was 2.61 million tonnes, with Turkey producing 28% of this total. Other major producers of sweet cherries were the United States and Chile. World production of sour cherries in 2020 was 1.48 million tonnes, led by Russia, Turkey, Ukraine and Serbia.

Middle East
Major commercial cherry orchards in West Asia are in Turkey, Syria, Lebanon, and Azerbaijan.

Europe
Major commercial cherry orchards in Europe are in Turkey, Italy, Spain and other Mediterranean regions, and to a smaller extent in the Baltic States and southern Scandinavia. In France since the 1920s, the first cherries of the season come in April/May from the region of Céret (Pyrénées-Orientales), where the local producers send, as a tradition since 1932, the first crate of cherries to the president of the Republic.

North America
In the United States, most sweet cherries are grown in Washington, California, Oregon, Wisconsin, and Michigan. Important sweet cherry cultivars include Bing, Ulster, Rainier, Brooks, Tulare, King, and Sweetheart.
Both Oregon and Michigan provide light-colored 'Royal Ann' ('Napoleon'; alternately 'Queen Anne') cherries for the maraschino cherry process. Most sour (also called tart) cherries are grown in Michigan, followed by Utah, New York, and Washington. Sour cherries include 'Nanking' and 'Evans'. Traverse City, Michigan, is called the "Cherry Capital of the World", hosting a National Cherry Festival and making the world's largest cherry pie. The specific region of northern Michigan known for tart cherry production is referred to as the "Traverse Bay" region. Most cherry varieties have a chilling requirement of 800 or more hours, meaning that in order to break dormancy, blossom, and set fruit, the winter season needs to include at least 800 hours in which the temperature is below the chilling threshold (a simple counting sketch appears at the end of this article). "Low chill" varieties requiring 300 hours or less are Minnie Royal and Royal Lee, which require cross-pollination, whereas the cultivar Royal Crimson is self-fertile. These varieties extend the range of cherry cultivation to the mild-winter areas of the southern US. This is a boon to California producers, as California is the second-largest producer of sweet cherries in the US. Native and non-native sweet cherries grow well in Canada's provinces of Ontario and British Columbia, where an annual cherry festival has been celebrated for seven consecutive decades in the Okanagan Valley town of Osoyoos. In addition to the Okanagan, other British Columbia cherry-growing regions are the Similkameen Valley and Kootenay Valley, all three regions together producing 5.5 million kg annually, or 60% of total Canadian output. Sweet cherry varieties in British Columbia include 'Rainier', 'Van', 'Chelan', 'Lapins', 'Sweetheart', 'Skeena', 'Staccato', 'Christalina' and 'Bing'.

Australia
In Australia, cherries are grown in all the states except for the Northern Territory. The major producing regions are located in the temperate areas within New South Wales, Victoria, South Australia and Tasmania. Western Australia has limited production in the elevated parts in the southwest of the state. Key production areas include Young, Orange and Bathurst in New South Wales; Wandin and the Goulburn and Murray valley areas in Victoria; the Adelaide Hills region in South Australia; and the Huon and Derwent Valleys in Tasmania. Key commercial varieties in order of seasonality include 'Empress', 'Merchant', 'Supreme', 'Ron's seedling', 'Chelan', 'Ulster', 'Van', 'Bing', 'Stella', 'Nordwunder', 'Lapins', 'Simone', 'Regina', 'Kordia' and 'Sweetheart'. New varieties are being introduced, including the late-season 'Staccato' and early-season 'Sequoia'. The Australian Cherry Breeding program is developing a series of new varieties which are undergoing testing and evaluation. The New South Wales town of Young is called the "Cherry Capital of Australia" and hosts the National Cherry Festival.

Nutritional value
Raw sweet cherries are 82% water, 16% carbohydrates, 1% protein, and negligible in fat. As raw fruit, sweet cherries provide little nutrient content per 100 g serving: only dietary fiber and vitamin C are present in moderate amounts, while other vitamins and dietary minerals each supply less than 10% of the Daily Value (DV) per serving. Compared to sweet cherries, raw sour cherries contain 50% more vitamin C per 100 g (12% DV) and about 20 times more vitamin A (8% DV), beta-carotene in particular.
Health risks The cherry kernels, accessible by chewing or breaking the hard-shelled cherry pits, contain amygdalin, a chemical that releases the toxic compound hydrogen cyanide when ingested. The amount of amygdalin in each cherry varies widely, and symptoms would show only after eating several crushed pits (3–4 of the Morello variety or 7–9 of the red or black varieties). Swallowing the pits whole normally causes no complications. Other uses Cherry wood is valued for its rich color and straight grain in manufacturing fine furniture, particularly desks, tables and chairs.
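The chilling-hour requirement described in the production section can be made concrete with a small counting sketch. This is illustrative only: chilling-hour models vary, the roughly 7 °C (45 °F) threshold used here is a common horticultural convention rather than a figure stated in this article, and the temperature data are invented.

# Illustrative sketch: counting "chilling hours" from hourly temperature
# readings. The 7 °C threshold is an assumed convention; the 800-hour figure
# reflects the typical requirement cited above for most cherry varieties.
from typing import Iterable

CHILL_THRESHOLD_C = 7.0      # assumed chilling threshold (about 45 °F)
REQUIRED_HOURS = 800         # typical requirement for most cherry varieties

def chilling_hours(hourly_temps_c: Iterable[float]) -> int:
    """Count hours at or below the chilling threshold over a winter season."""
    return sum(1 for t in hourly_temps_c if t <= CHILL_THRESHOLD_C)

def meets_requirement(hourly_temps_c: Iterable[float],
                      required: int = REQUIRED_HOURS) -> bool:
    return chilling_hours(hourly_temps_c) >= required

if __name__ == "__main__":
    # Made-up season: 900 cold hours at 4 °C followed by 200 mild hours.
    season = [4.0] * 900 + [12.0] * 200
    print(chilling_hours(season), meets_requirement(season))

Under these assumptions the made-up season satisfies an 800-hour variety; a "low chill" 300-hour variety grown in a mild-winter region would need far fewer qualifying hours.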
Biology and health sciences
Rosales
null
54808
https://en.wikipedia.org/wiki/Termite
Termite
Termites are a group of detritophagous eusocial insects which consume a variety of decaying plant material, generally in the form of wood, leaf litter, and soil humus. They are distinguished by their moniliform antennae and the soft-bodied and often unpigmented worker caste for which they have been commonly termed "white ants"; however, they are not ants, being more closely related to cockroaches. About 2,972 extant species are currently described, 2,105 of which are members of the family Termitidae. Termites comprise the infraorder Isoptera, or alternatively the epifamily Termitoidae, within the order Blattodea (along with cockroaches). Termites were once classified in a separate order from cockroaches, but recent phylogenetic studies indicate that they evolved from cockroaches, as they are deeply nested within the group, and the sister group to wood-eating cockroaches of the genus Cryptocercus. Previous estimates suggested the divergence took place during the Jurassic or Triassic. More recent estimates suggest that they have an origin during the Late Jurassic, with the first fossil records in the Early Cretaceous. Similarly to ants and some bees and wasps from the separate order Hymenoptera, most termites have an analogous "worker" and "soldier" caste system consisting of mostly sterile individuals which are physically and behaviorally distinct. Unlike ants, most colonies begin from sexually mature individuals known as the "king" and "queen" that together form a lifelong monogamous pair. Also unlike ants, which undergo a complete metamorphosis, termites undergo an incomplete metamorphosis that proceeds through egg, nymph, and adult stages. Termite colonies are commonly described as superorganisms due to the collective behaviors of the individuals which form a self-governing entity: the colony itself. Their colonies range in size from a few hundred individuals to enormous societies with several million individuals. Most species are rarely seen, having a cryptic life-history where they remain hidden within the galleries and tunnels of their nests for most of their lives. Termites' success as a group has led to them colonizing almost every global landmass, with the highest diversity occurring in the tropics where they are estimated to constitute 10% of the animal biomass, particularly in Africa which has the richest diversity with more than 1000 described species. They are important decomposers of decaying plant matter in the subtropical and tropical regions of the world, and their recycling of wood and plant matter is of considerable ecological importance. Many species are ecosystem engineers capable of altering soil characteristics such as hydrology, decomposition, nutrient cycling, vegetative growth, and consequently surrounding biodiversity through the large mounds constructed by certain species. Termites have several impacts on humans. They are a delicacy in the diet of some human cultures such as the Makiritare in the Alto Orinoco province of Venezuela, where they are commonly used as a spice. They are also used in traditional medicinal treatments of various diseases and ailments, such as influenza, asthma, bronchitis, etc. Termites are most famous for being structural pests; however, the vast majority of termite species are innocuous, with the regional numbers of economically significant species being: North America, 9; Australia, 16; Indian subcontinent, 26; tropical Africa, 24; Central America and the West Indies, 17. 
Of known pest species, 28 of the most invasive and structurally damaging belong to the genus Coptotermes. The distribution of most known pest species is expected to increase over time as a consequence of climate change. Increased urbanization and connectivity is also predicted to expand the range of some pest termites. Etymology The infraorder name Isoptera is derived from the Greek words iso (equal) and ptera (winged), which refers to the nearly equal size of the fore and hind wings. "Termite" derives from the Latin and Late Latin word termes ("woodworm, white ant"), altered by the influence of Latin terere ("to rub, wear, erode") from the earlier word tarmes. A termite nest is also known as a termitary or termitarium (plural termitaria or termitariums). The word was first used in English in 1781. Earlier attested designations were "wood ants" or "white ants", though these may never have been in wide use as termites do not exist in the British Isles. Taxonomy and evolution Termites were formerly placed in the order Isoptera. As early as 1934 suggestions were made that they were closely related to wood-eating cockroaches (genus Cryptocercus, the woodroach) based on the similarity of their symbiotic gut flagellates. In the 1960s additional evidence supporting that hypothesis emerged when F. A. McKittrick noted similar morphological characteristics between some termites and Cryptocercus nymphs. In 2008 DNA analysis from 16S rRNA sequences supported the position of termites being nested within the evolutionary tree containing the order Blattodea, which included the cockroaches. The cockroach genus Cryptocercus shares the strongest phylogenetical similarity with termites and is considered to be a sister-group to termites. Termites and Cryptocercus share similar morphological and social features: for example, most cockroaches do not exhibit social characteristics, but Cryptocercus takes care of its young and exhibits other social behaviour such as trophallaxis and allogrooming. Termites are thought to be the descendants of the genus Cryptocercus. Some researchers have suggested a more conservative measure of retaining the termites as the Termitoidae, an epifamily within the cockroach order, which preserves the classification of termites at family level and below. Termites have long been accepted to be closely related to cockroaches and mantids, and they are classified in the same superorder (Dictyoptera). The oldest unambiguous termite fossils date to the early Cretaceous, but given the diversity of Cretaceous termites and early fossil records showing mutualism between microorganisms and these insects, they possibly originated earlier in the Jurassic or Triassic. Possible evidence of a Jurassic origin is the assumption that the extinct mammaliaform Fruitafossor from Morrison Formation consumed termites, judging from its morphological similarity to modern termite-eating mammals. Morrison Formation also yields social insect nest fossils close to that of termites. The oldest termite nest discovered is believed to be from the Upper Cretaceous in West Texas, where the oldest known faecal pellets were also discovered. Claims that termites emerged earlier have faced controversy. For example, F. M. Weesner indicated that the Mastotermitidae termites may go back to the Late Permian, 251 million years ago, and fossil wings that have a close resemblance to the wings of Mastotermes of the Mastotermitidae, the most primitive living termite, have been discovered in the Permian layers in Kansas. 
It is even possible that the first termites emerged during the Carboniferous. The folded wings of the fossil wood roach Pycnoblattina, arranged in a convex pattern between segments 1a and 2a, resemble those seen in Mastotermes, the only living insect with the same pattern. Kumar Krishna et al., though, consider that all of the Paleozoic and Triassic insects tentatively classified as termites are in fact unrelated to termites and should be excluded from the Isoptera. Other studies suggest that the origin of termites is more recent, having diverged from Cryptocercus sometime during the Early Cretaceous. The primitive giant northern termite (Mastotermes darwiniensis) exhibits numerous cockroach-like characteristics that are not shared with other termites, such as laying its eggs in rafts and having anal lobes on the wings. It has been proposed that the Isoptera and Cryptocercidae be grouped in the clade "Xylophagodea". Termites are sometimes called "white ants", but the only resemblance to ants is their sociality, which is the result of convergent evolution, termites having been the first social insects to evolve a caste system, more than 100 million years ago. Termite genomes are generally relatively large compared to those of other insects; the first fully sequenced termite genome, that of Zootermopsis nevadensis, which was published in the journal Nature Communications, consists of roughly 500 Mb, while two subsequently published genomes, those of Macrotermes natalensis and Cryptotermes secundus, are considerably larger at around 1.3 Gb.

External phylogeny showing the relationship of termites with other insect groups: (cladogram not shown)
Internal phylogeny showing the relationships of extant termite families: (cladogram not shown)

There are currently 3,173 living and fossil termite species recognised, classified in 12 families; reproductive and/or soldier castes are usually required for identification. The infraorder Isoptera is divided into the following clade and family groups, showing the subfamilies in their respective classification:

Early-diverging termite families
Infraorder Isoptera Brullé, 1832
 Family Cratomastotermitidae Engel, Grimaldi, & Krishna, 2009
 Family Mastotermitidae Desneux, 1904
Parvorder Euisoptera Engel, Grimaldi, & Krishna, 2009
 Family Melqartitermitidae Engel, 2021
 Family Mylacrotermitidae Engel, 2021
 Family Krishnatermitidae Engel, 2021
 Family Termopsidae Holmgren, 1911
 Family Carinatermitidae Krishna & Grimaldi, 2000
Minorder Teletisoptera Barden & Engel, 2021
 Family Archotermopsidae Engel, Grimaldi, & Krishna, 2009
 Family Hodotermitidae Desneux, 1904
 Family Hodotermopsidae Engel, 2021
  Subfamily Hodotermopsellinae Engel & Jouault, 2024
  Subfamily Hodotermopsinae Engel, 2021
 Family Arceotermitidae Engel, 2021
  Subfamily Arceotermitinae Engel, 2021
  Subfamily Cosmotermitinae Engel, 2021
 Family Stolotermitidae Holmgren, 1910
  Subfamily Stolotermitinae Holmgren, 1910
  Subfamily Porotermitinae Emerson, 1942
Minorder Artisoptera Engel, 2021
 Family Tanytermitidae Engel, 2021
Microrder Icoisoptera Engel, 2013
 Family Kalotermitidae Froggatt, 1897
Nanorder Neoisoptera Engel, Grimaldi, & Krishna, 2009 (see below for families and subfamilies)

Neoisoptera
The Neoisoptera, literally meaning "newer termites" (in an evolutionary sense), are a recently coined clade that includes families such as the Heterotermitidae, Rhinotermitidae and Termitidae. Neoisopterans have a bifurcated caste development with true workers, and so notably lack pseudergates (except in some basal taxa such as the Serritermitidae; see below).
All Neoisopterans have a fontanelle, which appears as a circular pore or series of pores in a depressed region within the middle of the head. The fontanelle connects to the frontal gland, a novel organ unique to Neoisopteran termites which evolved to excrete an array of defensive chemicals and secretions, and so is typically most developed in the soldier caste. Cellulose digestion in the family Termitidae has co-evolved with bacterial gut microbiota, and many taxa have evolved additional symbiotic relationships such as with the fungus Termitomyces; in contrast, basal Neoisopterans and all other Euisoptera have flagellates and prokaryotes in their hindguts. Extant families and subfamilies are organized as follows:

Early-Diverging Neoisoptera (Non-Geoisoptera)
 Family Archeorhinotermitidae Krishna & Grimaldi, 2003
 Family Stylotermitidae Holmgren & Holmgren, 1917
 Family Serritermitidae Holmgren, 1910
 Family Rhinotermitidae Froggatt, 1897
 Family Termitogetonidae Holmgren, 1910
 Family Psammotermitidae Holmgren, 1910
  Subfamily Prorhinotermitinae Quennedey & Deligne, 1975
  Subfamily Psammotermitinae Holmgren, 1910
Clade Geoisoptera Engel, Hellemans, & Bourguignon, 2024
 Family Heterotermitidae Froggatt, 1897 (=Coptotermitinae Holmgren, 1910)
 Family Termitidae Latreille, 1802
  Subfamily Sphaerotermitinae Engel & Krishna, 2004
  Subfamily Macrotermitinae Kemner, 1934, nomen protectum [ICZN 2003]
  Subfamily Foraminitermitinae Holmgren, 1912
  Subfamily Apicotermitinae Grassé & Noirot, 1954 [1955]
  Subfamily Microcerotermitinae Holmgren, 1910
  Subfamily Syntermitinae Engel & Krishna, 2004
  Subfamily Forficulitermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Engelitermitinae Romero Arias, Roisin, & Scheffrahn, 2024
  Subfamily Crepititermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Protohamitermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Cylindrotermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Neocapritermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Nasutitermitinae Hare, 1937
  Subfamily Promirotermitinae Hellemans, Engel, & Bourguignon, 2024
  Subfamily Mirocapritermitinae Kemner, 1934
  Subfamily Amitermitinae Kemner, 1934
  Subfamily Cubitermitinae Weidner, 1956
  Subfamily Termitinae Latreille, 1802

Distribution and diversity
Termites are found on all continents except Antarctica. The diversity of termite species is low in North America and Europe (10 species known in Europe and 50 in North America), but is high in South America, where over 400 species are known. Of the 2,972 extant termite species currently classified, 1,000 are found in Africa, where mounds are extremely abundant in certain regions. Approximately 1.1 million active termite mounds can be found in the northern Kruger National Park alone. In Asia, there are 435 species of termites, which are mainly distributed in China. Within China, termite species are restricted to mild tropical and subtropical habitats south of the Yangtze River. In Australia, all ecological groups of termites (dampwood, drywood, subterranean) are endemic to the country, with over 360 classified species. Because termites are highly social and abundant, they represent a disproportionate amount of the world's insect biomass. Termites and ants comprise about 1% of insect species, but represent more than 50% of insect biomass. Due to their soft cuticles, termites do not inhabit cool or cold habitats. There are three ecological groups of termites: dampwood, drywood and subterranean.
Dampwood termites are found only in coniferous forests, and drywood termites are found in hardwood forests; subterranean termites live in widely diverse areas. One species in the drywood group is the West Indian drywood termite (Cryptotermes brevis), which is an invasive species in Australia. Description Termites are usually small, measuring between in length. The largest of all extant termites are the queens of the species Macrotermes bellicosus, measuring up to over 10 centimetres (4 in) in length. Another giant termite, the extinct Gyatermes styriensis, flourished in Austria during the Miocene and had a wingspan of and a body length of . Most worker and soldier termites are completely blind as they do not have a pair of eyes. However, some species, such as Hodotermes mossambicus, have compound eyes which they use for orientation and to distinguish sunlight from moonlight. The alates (winged males and females) have eyes along with lateral ocelli. Lateral ocelli, however, are not found in all termites, absent in the families Hodotermitidae, Termopsidae, and Archotermopsidae. Like other insects, termites have a small tongue-shaped labrum and a clypeus; the clypeus is divided into a postclypeus and anteclypeus. Termite antennae have a number of functions such as the sensing of touch, taste, odours (including pheromones), heat and vibration. The three basic segments of a termite antenna include a scape, a pedicel (typically shorter than the scape), and the flagellum (all segments beyond the scape and pedicel). The mouth parts contain a maxillae, a labium, and a set of mandibles. The maxillae and labium have palps that help termites sense food and handling. The cuticle of most castes is soft and flexible due to a resulting lack of sclerotization, particularly of the abdomen which often appears translucent. Pigmentation and sclerotization of the cuticle correlates with life history, with species that spend more time in the surface in the open tending to have a more sclerotized and pigmented exoskeleton. Consistent with all insects, the anatomy of the termite thorax consists of three segments: the prothorax, the mesothorax and the metathorax. Each segment contains a pair of legs. On alates, the wings are located at the mesothorax and metathorax, which is consistent with all four-winged insects. The mesothorax and metathorax have well-developed exoskeletal plates; the prothorax has smaller plates. Termites have a ten-segmented abdomen with two plates, the tergites and the sternites. The tenth abdominal segment has a pair of short cerci. There are ten tergites, of which nine are wide and one is elongated. The reproductive organs are similar to those in cockroaches but are more simplified. For example, the intromittent organ is not present in male alates, and the sperm is either immotile or aflagellate. However, Mastotermitidae termites have multiflagellate sperm with limited motility. The genitals in females are also simplified. Unlike in other termites, Mastotermitidae females have an ovipositor, a feature strikingly similar to that in female cockroaches. The non-reproductive castes of termites are wingless and rely exclusively on their six legs for locomotion. The alates fly only for a brief amount of time, so they also rely on their legs. The appearance of the legs is similar in each caste, but the soldiers have larger and heavier legs. The structure of the legs is consistent with other insects: the parts of a leg include a coxa, trochanter, femur, tibia and the tarsus. 
The number of tibial spurs on an individual's leg varies. Some species of termite have an arolium, located between the claws, which is present in species that climb on smooth surfaces but is absent in most termites. Unlike in ants, the hind-wings and fore-wings are of equal length. Most of the time, the alates are poor flyers; their technique is to launch themselves in the air and fly in a random direction. Studies show that in comparison to larger termites, smaller termites cannot fly long distances. When a termite is in flight, its wings remain at a right angle, and when the termite is at rest, its wings remain parallel to the body. Caste system Due to termites being hemimetabolous insects, where the young go through multiple and gradual adultoid molts before becoming an adult, the advent of eusociality has significantly altered the developmental patterns of this group of insects of which, although similar, is not homologous to that of the eusocial Hymenoptera. Unlike ants, bees, and wasps which undergo a complete metamorphosis and as a result only exhibit developmental plasticity at the immobile larval stage, the mobile adultoid instars of termites remain developmentally flexible throughout all life stages up to the final molt, which has uniquely allowed for the evolution of distinct yet flexible castes amongst the immatures. As a result the caste system of termites consists mostly of neotenous or juvenile individuals that undertake the most labor in the colony, which is in contrast to the eusocial Hymenoptera where work is strictly undertaken by the adults. The developmental plasticity in termites can be described similarly to cell potency, where each molt offers a varying level of phenotypic potency. Early instars typically exhibit the highest phenotypic potency and can be described as totipotent (able to molt into all alternative phenotypes), whereas following instars can be pluripotent (able to molt into reproductives and non-reproductives but cannot molt into at least one phenotype), to multipotent (able to molt into either reproductive or non-reproductive phenotypes), to unipotent (able to molt into developmentally close phenotypes), and then finally committed (no longer able to change phenotype, functionally an adult.) In most termites, phenotypic potency decreases with every successive molt. Notable exceptions are basal taxa such as the Archotermopsidae, which are able to retain high developmental plasticity even up to the late instars. In these basal taxa, the immatures are able to go through progressive (nymph-to-imago), regressive (winged-to-wingless) and stationary (size increase, remains wingless) molts, which typically indicates the developmental trajectory an individual follows. There is significant variation of the developmental patterns in termites even across closely related taxa, but can typically be generalized into the following two patterns: The first is the linear developmental pathway, where all immatures are capable of developing into winged adults (Alates), exhibit high phenotypic potency, and where there exists no true sterile caste other than the soldier. The second is the bifurcated developmental pathway, where immatures diverge into two distinct developmental lineages known as the nymphal (winged) and apterous (wingless) lines. The bifurcation occurs early, either at the egg or the first two instars, and represents an irreversible and committed development to either the reproductive or non-reproductive lifestyles. 
As such, the apterous lineage consists mostly of wingless and truly altruistic sterile individuals (true workers, soldiers), whereas the nymphal lineage consists mainly of fertile individuals destined to become winged reproductives. The bifurcated developmental pathway is found mainly in the derived taxa (i.e. Neoisoptera), and is believed to have evolved in tandem with the sterile worker caste as species moved to foraging for food beyond their nests, as opposed to the nest also being the food (such as in obligate wood-dwellers). There are three main castes which are discussed below: Worker termites undertake the most labor within the colony, being responsible for foraging, food storage, and brood and nest maintenance. Workers are tasked with the digestion of cellulose in food and are thus the most likely caste to be found in infested wood. The process of worker termites feeding other nestmates is known as trophallaxis. Trophallaxis is an effective nutritional tactic to convert and recycle nitrogenous components. It frees the parents from feeding all but the first generation of offspring, allowing for the group to grow much larger and ensuring that the necessary gut symbionts are transferred from one generation to another. Workers are believed to have evolved from older wingless immatures (Larvae) that evolved cooperative behaviors; and indeed in some basal taxa the late instar larvae are known to undertake the role of workers without differentiating as a true separate caste. Workers can either be male or female, although in some species with polymorphic workers either sex may be restricted to a certain developmental path. Workers may also be fertile or sterile, however the term "worker" is normally reserved for the latter, having evolved in taxa that exhibit a bifurcated developmental pathway. As a result, sterile workers like in the family Termitidae are termed true workers and are the most derived, while those that are undifferentiated and fertile as in the wood-nesting Archotermopsidae are termed pseudergates, which are the most basal. True workers are individuals which irreversibly develop from the apterous lineage and have completely forgo development into a winged adult. They display altruistic behaviors and either have terminal molts or exhibit a low level of phenotypical potency. True workers across different termite taxa (Mastotermitidae, Hodotermitidae, Rhinotermitidae & Termitidae) can widely vary in the level of developmental plasticity even between closely related taxa, with many species having true workers that can molt into the other apterous castes such as ergatoids (worker reproductive; apterous neotenics), soldiers, or the other worker castes. Pseudergates sensu stricto are individuals which arise from the linear developmental pathway that have regressively molted and lost their wing buds, and are regarded as totipotent immatures. They are capable of performing work but are overall less involved in labor and considered more cooperative than truly altruistic. Pseudergates sensu lato, otherwise known as false workers, are most represented in basal lineages (Kalotermitidae, Archotermopsidae, Hodotermopsidae, Serritermitidae) and closely resemble true workers in which they also perform most of the work and are similarly altruistic, however differ in developing from the linear developmental pathway where they exist in a stationary molt; i.e they have halted development before the growth of wing buds, and are regarded as pluripotent immatures. 
The soldier caste is the most anatomically and behaviorally specialized, and their sole purpose is to defend the colony. Many soldiers have large heads with highly modified powerful jaws so enlarged that they cannot feed themselves. Instead, like juveniles, they are fed by workers. Fontanelles, simple holes in the forehead that lead to a gland which exudes defensive secretions, are a feature of the clade Neoisoptera and are present in all extant taxa such as Rhinotermitidae. The majority of termite species have mandibulate soldiers which are easily identified by the disproportionately large sclerotized head and mandibles. Among certain termites, the soldier caste has evolved globular (phragmotic) heads to block their narrow tunnels such as seen in Cryptotermes. Amongst mandibulate soldiers, the mandibles have been adapted for a variety of defensive strategies: Biting/crushing (Incisitermes), slashing (Cubitermes), slashing/snapping (Dentispicotermes), symmetrical snapping (Termes), asymmetrical snapping (Neocapritermes), and piercing (Armitermes). In the more derived termite taxa, the soldier caste can be polymorphic and include minor and major forms. Other morphologically specialized soldiers includes the Nasutes, which have a horn-like nozzle projection (nasus) on the head. These unique soldiers are able to spray noxious, sticky secretions containing diterpenes at their enemies. Nitrogen fixation plays an important role in Nasute nutrition. Soldiers are normally a committed sterile caste and so do not molt into anything else, but in certain basal taxa like the Archotermopsidae they are known to rarely molt into neotenic forms that develop functional sexual organs. In species with the linear developmental pathway, soldiers develop from apterous immatures and constitute the only true sterile caste in these taxa. The primary reproductive caste of a colony consists of the fertile adult (imago) female and male individuals, colloquially known as the queen and king. The queen of the colony is responsible for egg production of the colony. Unlike in ants, the male and female reproductives form lifelong pairs where the king will continue to mate with the queen throughout their lives. In some species, the abdomen of the queen swells up dramatically to increase fecundity, a characteristic known as physogastrism. Depending on the species, the queen starts producing reproductive alates at a certain time of the year, and huge swarms emerge from the colony when nuptial flight begins. These swarms attract a wide variety of predators. The queens can be particularly long-lived for insects, with some reportedly living as long as 30 or 50 years. In both the linear and bifurcated developmental pathways, the primary reproductives only develop from winged immatures (nymphs). These winged immatures are capable of regressively molting into a form known as brachypterous neotenics (nymphoids), which retain juvenile and adult characteristics. BN's can be found in both the derived and basal termite taxa, and generally serve as supplementary reproductives. Life cycle Termites are often compared with the social Hymenoptera (ants and various species of bees and wasps), but their differing evolutionary origins result in major differences in life cycle. In the eusocial Hymenoptera, the workers are exclusively female. Males (drones) are haploid and develop from unfertilised eggs, while females (both workers and the queen) are diploid and develop from fertilised eggs. 
In contrast, worker termites, which constitute the majority in a colony, are diploid individuals of both sexes and develop from fertilised eggs. Depending on species, male and female workers may have different roles in a termite colony. The life cycle of a termite begins with an egg, but is different from that of a bee or ant in that it goes through a developmental process called incomplete metamorphosis, going through multiple gradual pre-adult molts that are highly developmentally plastic before becoming an adult. Unlike in other hemimetabolous insects, nymphs are more strictly defined in termites as immature young with visible wing buds, which often invariably go through a series of moults to become winged adults. Larvae, which are defined as early nymph instars with absent wing buds, exhibit the highest developmental potentiality and are able to molt into Alates, Soldiers, Neotenics, or Workers. Workers are believed to have evolved from larvae, sharing many similarities to the extent that workers can be regarded as "larval", in that both lack wings, eyes, and functional reproductive organs while maintaining varying levels of developmental flexibility, although usually to a much lesser extent in workers. The main distinction being that while larvae are wholly dependent on other nestmates to survive, workers are independent and are able to feed themselves and contribute to the colony. Workers remain wingless and across many taxa become developmentally arrested, appearing to not change into any other caste until death. In some basal taxa, there is no distinction, with the "workers" (pseudergates) essentially being late instar larvae that retain the ability to change into all other castes. The development of larvae into adults can take months; the time period depends on food availability and nutrition, temperature, and the size of the colony. Since larvae and nymphs are unable to feed themselves, workers must feed them, but workers also take part in the social life of the colony and have certain other tasks to accomplish such as foraging, building or maintaining the nest or tending to the queen. Pheromones regulate the caste system in termite colonies, preventing all but a very few of the termites from becoming fertile queens. Queens of the eusocial termite Reticulitermes speratus are capable of a long lifespan without sacrificing fecundity. These long-lived queens have a significantly lower level of oxidative damage, including oxidative DNA damage, than workers, soldiers and nymphs. The lower levels of damage appear to be due to increased catalase, an enzyme that protects against oxidative stress. Reproduction Termite alates (winged virgin queens and kings) only leave the colony when a nuptial flight takes place. Alate males and females pair up together and then land in search of a suitable place for a colony. A termite king and queen do not mate until they find such a spot. When they do, they excavate a chamber big enough for both, close up the entrance and proceed to mate. After mating, the pair may never surface again, spending the rest of their lives in the nest. Nuptial flight time varies in each species. For example, alates in certain species emerge during the day in summer while others emerge during the winter. The nuptial flight may also begin at dusk, when the alates swarm around areas with many lights. The time when nuptial flight begins depends on the environmental conditions, the time of day, moisture, wind speed and precipitation. 
The number of termites in a colony also varies, with the larger species typically having 100–1,000 individuals. However, some termite colonies, including those with many individuals, can number in the millions. The queen only lays 10–20 eggs in the very early stages of the colony, but lays as many as 1,000 a day when the colony is several years old. At maturity, a primary queen has a great capacity to lay eggs. In some species, the mature queen has a greatly distended abdomen and may produce 40,000 eggs a day. The two mature ovaries may have some 2,000 ovarioles each. The abdomen increases the queen's body length to several times more than before mating and reduces her ability to move freely; attendant workers provide assistance. The king grows only slightly larger after initial mating and continues to mate with the queen for life (a termite queen can live between 30 and 50 years); this is very different from ant colonies, in which a queen mates once with the males and stores the gametes for life, as the male ants die shortly after mating. If a queen is absent, a termite king produces pheromones which encourage the development of replacement termite queens. As the queen and king are monogamous, sperm competition does not occur. Termites going through incomplete metamorphosis on the path to becoming alates form a subcaste in certain species of termite, functioning as potential supplementary reproductives. These supplementary reproductives only mature into primary reproductives upon the death of a king or queen, or when the primary reproductives are separated from the colony. Supplementaries have the ability to replace a dead primary reproductive, and there may also be more than a single supplementary within a colony. Some queens have the ability to switch from sexual reproduction to asexual reproduction. Studies show that while termite queens mate with the king to produce colony workers, the queens reproduce their replacements (neotenic queens) parthenogenetically. The neotropical termite Embiratermes neotenicus and several other related species produce colonies that contain a primary king accompanied by a primary queen or by up to 200 neotenic queens that had originated through thelytokous parthenogenesis of a founding primary queen. The form of parthenogenesis likely employed maintains heterozygosity in the passage of the genome from mother to daughter, thus avoiding inbreeding depression. Behaviour and ecology Diet Termites are primarily detritivores, consuming dead plants at any level of decomposition. They also play a vital role in the ecosystem by recycling waste material such as dead wood, faeces and plants. Many species eat cellulose, having a specialised midgut that breaks down the fibre. Termites are considered to be a major source (11%) of atmospheric methane, one of the prime greenhouse gases, produced from the breakdown of cellulose. Termites rely primarily upon a symbiotic microbial community that includes bacteria, flagellate protists such as metamonads and hypermastigids. This community provides the enzymes that digests the cellulose, allowing the insects to absorb the end products for their own use. The microbial ecosystem present in the termite gut contains many species found nowhere else on Earth. Termites hatch without these symbionts present in their guts, and develop them after fed a culture from other termites. Gut protozoa, such as Trichonympha, in turn, rely on symbiotic bacteria embedded on their surfaces to produce some of the necessary digestive enzymes. 
Most higher termites, especially in the family Termitidae, can produce their own cellulase enzymes, but they rely primarily upon the bacteria. The flagellates have been lost in Termitidae. Researchers have found species of spirochetes living in termite guts capable of fixing atmospheric nitrogen to a form usable by the insect. Scientists' understanding of the relationship between the termite digestive tract and the microbial endosymbionts is still rudimentary; what is true in all termite species, however, is that the workers feed the other members of the colony with substances derived from the digestion of plant material, either from the mouth or anus. Judging from closely related bacterial species, it is strongly presumed that the termites' and cockroach's gut microbiota derives from their dictyopteran ancestors. Despite primarily consuming decaying plant material as a group, many termite species have been observed to opportunistically feed on dead animals to supplement their dietary needs. Termites are also known to harbor bacteriophages in their gut. Some of these bacteriophages likely infect the symbiotic bacteria which play a key role in termite biology. The exact role and function of bacteriophages in the termite gut microbiome is not clearly understood. Termite gut bacteriophages also show similarity to bacteriophages (CrAssphage) found in the human gut. Certain species such as Gnathamitermes tubiformans have seasonal food habits. For example, they may preferentially consume Red three-awn (Aristida longiseta) during the summer, Buffalograss (Buchloe dactyloides) from May to August, and blue grama Bouteloua gracilis during spring, summer and autumn. Colonies of G. tubiformans consume less food in spring than they do during autumn when their feeding activity is high. Various woods differ in their susceptibility to termite attack; the differences are attributed to such factors as moisture content, hardness, and resin and lignin content. In one study, the drywood termite Cryptotermes brevis strongly preferred poplar and maple woods to other woods that were generally rejected by the termite colony. These preferences may in part have represented conditioned or learned behaviour. Some species of termite practice fungiculture. They maintain a "garden" of specialised fungi of genus Termitomyces, which are nourished by the excrement of the insects. When the fungi are eaten, their spores pass undamaged through the intestines of the termites to complete the cycle by germinating in the fresh faecal pellets. Molecular evidence suggests that the family Macrotermitinae developed agriculture about 31 million years ago. It is assumed that more than 90 per cent of dry wood in the semiarid savannah ecosystems of Africa and Asia are reprocessed by these termites. Originally living in the rainforest, fungus farming allowed them to colonise the African savannah and other new environments, eventually expanding into Asia. Depending on their feeding habits, termites are placed into two groups: the lower termites and higher termites. The lower termites predominately feed on wood. As wood is difficult to digest, termites prefer to consume fungus-infected wood because it is easier to digest and the fungi are high in protein. Meanwhile, the higher termites consume a wide variety of materials, including faeces, humus, grass, leaves and roots. 
The gut of the lower termites contains many species of bacteria along with protozoa and Holomastigotoides, while the higher termites only have a few species of bacteria with no protozoa. Predators Termites are consumed by a wide variety of predators. One termite species alone, Hodotermes mossambicus, was reported (1990) in the stomach contents of 65 birds and 19 mammals. Arthropods such as ants, centipedes, cockroaches, crickets, dragonflies, scorpions and spiders, reptiles such as lizards, and amphibians such as frogs and toads consume termites, with two spiders in the family Ammoxenidae being specialist termite predators. Other predators include aardvarks, aardwolves, anteaters, bats, bears, bilbies, many birds, echidnas, foxes, galagos, numbats, mice and pangolins. The aardwolf is an insectivorous mammal that primarily feeds on termites; it locates its food by sound and also by detecting the scent secreted by the soldiers; a single aardwolf is capable of consuming thousands of termites in a single night by using its long, sticky tongue. Sloth bears break open mounds to consume the nestmates, while chimpanzees have developed tools to "fish" termites from their nest. Wear pattern analysis of bone tools used by the early hominin Paranthropus robustus suggests that they used these tools to dig into termite mounds. Among all predators, ants are the greatest enemy to termites. Some ant genera are specialist predators of termites. For example, Megaponera is a strictly termite-eating (termitophagous) genus that perform raiding activities, some lasting several hours. Paltothyreus tarsatus is another termite-raiding species, with each individual stacking as many termites as possible in its mandibles before returning home, all the while recruiting additional nestmates to the raiding site through chemical trails. The Malaysian basicerotine ants Eurhopalothrix heliscata uses a different strategy of termite hunting by pressing themselves into tight spaces, as they hunt through rotting wood housing termite colonies. Once inside, the ants seize their prey by using their short but sharp mandibles. Tetramorium uelense is a specialised predator species that feeds on small termites. A scout recruits 10–30 workers to an area where termites are present, killing them by immobilising them with their stinger. Centromyrmex and Iridomyrmex colonies sometimes nest in termite mounds, and so the termites are preyed on by these ants. No evidence for any kind of relationship (other than a predatory one) is known. Other ants, including Acanthostichus, Camponotus, Crematogaster, Cylindromyrmex, Leptogenys, Odontomachus, Ophthalmopone, Pachycondyla, Rhytidoponera, Solenopsis and Wasmannia, also prey on termites. Specialized subterranean species of army ants such as ones in the genus Dorylus are known to commonly predate on young Macrotermes colonies. Ants are not the only invertebrates that perform raids. Many sphecoid wasps and several species including Polybia and Angiopolybia are known to raid termite mounds during the termites' nuptial flight. Parasites, pathogens and viruses Termites are less likely to be attacked by parasites than bees, wasps and ants, as they are usually well protected in their mounds. Nevertheless, termites are infected by a variety of parasites. Some of these include dipteran flies, Pyemotes mites, and a large number of nematode parasites. Most nematode parasites are in the order Rhabditida; others are in the genus Mermis, Diplogaster aerivora and Harteria gallinarum. 
Under imminent threat of an attack by parasites, a colony may migrate to a new location. Certain fungal pathogens such as Aspergillus nomius and Metarhizium anisopliae are, however, major threats to a termite colony, as they are not host-specific and may infect large portions of the colony; transmission usually occurs via direct physical contact. M. anisopliae is known to weaken the termite immune system. Infection with A. nomius only occurs when a colony is under great stress. Over 34 fungal species are known to live as parasites on the exoskeleton of termites, with many being host-specific and only causing indirect harm to their host. Termites are infected by viruses including Entomopoxvirinae and the nuclear polyhedrosis virus.

Locomotion and foraging
Because the worker and soldier castes lack wings and thus never fly, and the reproductives use their wings for just a brief amount of time, termites predominantly rely upon their legs to move about. Foraging behaviour depends on the type of termite. For example, certain species feed on the wood structures they inhabit, and others harvest food that is near the nest. Most workers are rarely found out in the open, and do not forage unprotected; they rely on sheeting and runways to protect them from predators. Subterranean termites construct tunnels and galleries to look for food, and workers who manage to find food sources recruit additional nestmates by depositing a phagostimulant pheromone that attracts workers. Foraging workers use semiochemicals to communicate with each other, and workers who begin to forage outside of their nest release trail pheromones from their sternal glands. In one species, Nasutitermes costalis, there are three phases in a foraging expedition: first, soldiers scout an area. When they find a food source, they communicate to other soldiers and a small force of workers starts to emerge. In the second phase, workers appear in large numbers at the site. The third phase is marked by a decrease in the number of soldiers present and an increase in the number of workers. Isolated termite workers may engage in Lévy flight behaviour as an optimised strategy for finding their nestmates or foraging for food (a short illustrative sketch of such a walk appears below).

Competition
Competition between two colonies always results in agonistic behaviour towards each other, which can lead to fights. These fights can cause mortality on both sides and, in some cases, the gain or loss of territory. "Cemetery pits" may be present, where the bodies of dead termites are buried. Studies show that when termites encounter each other in foraging areas, some of the termites deliberately block passages to prevent other termites from entering. Dead termites from other colonies found in exploratory tunnels lead to the isolation of the area and thus the need to construct new tunnels. Conflict between two competitors does not always occur. For example, though they might block each other's passages, colonies of Macrotermes bellicosus and Macrotermes subhyalinus are not always aggressive towards each other. Suicide cramming is known in Coptotermes formosanus. Since C. formosanus colonies may get into physical conflict, some termites squeeze tightly into foraging tunnels and die, successfully blocking the tunnel and ending all agonistic activities. Among the reproductive caste, neotenic queens may compete with each other to become the dominant queen when there are no primary reproductives. This struggle among the queens leads to the elimination of all but a single queen, which, with the king, takes over the colony.
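The Lévy flight mentioned in the foraging discussion above can be illustrated with a short random-walk sketch: step lengths are drawn from a heavy-tailed power law, so many short moves are interspersed with occasional long relocations. The exponent and step bounds below are assumptions chosen for illustration, not measured termite parameters.

# Illustrative sketch of a Lévy flight: a 2D random walk whose step lengths
# follow a bounded power law p(l) ~ l^(-mu), with isotropic headings.
import math
import random

def levy_flight(n_steps, mu=2.0, min_step=1.0, max_step=1000.0, seed=0):
    """Return the positions visited by a 2D Lévy flight with power-law steps."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # Inverse-transform sampling of the bounded power law (mu != 1).
        u = rng.random()
        a, b = min_step ** (1 - mu), max_step ** (1 - mu)
        step = (a + u * (b - a)) ** (1 / (1 - mu))
        angle = rng.uniform(0.0, 2 * math.pi)   # uniformly random heading
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        path.append((x, y))
    return path

if __name__ == "__main__":
    pts = levy_flight(500)
    print("final displacement:", math.hypot(*pts[-1]))

Compared with a walk whose steps all have similar length, the rare long steps make such a search cover ground more efficiently when targets (nestmates or food) are sparse, which is the intuition behind describing isolated workers' movement this way.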
Ants and termites may compete with each other for nesting space. In particular, ants that prey on termites usually have a negative impact on arboreal nesting species. Communication Most termites are blind, so communication primarily occurs through chemical, mechanical and pheromonal cues. These methods of communication are used in a variety of activities, including foraging, locating reproductives, construction of nests, recognition of nestmates, nuptial flight, locating and fighting enemies, and defending the nests. The most common way of communicating is through antennation. A number of pheromones are known, including contact pheromones (which are transmitted when workers are engaged in trophallaxis or grooming) and alarm, trail and sex pheromones. The alarm pheromone and other defensive chemicals are secreted from the frontal gland. Trail pheromones are secreted from the sternal gland, and sex pheromones derive from two glandular sources: the sternal and tergal glands. When termites go out to look for food, they forage in columns along the ground through vegetation. A trail can be identified by the faecal deposits or runways that are covered by objects. Workers leave pheromones on these trails, which are detected by other nestmates through olfactory receptors. Termites can also communicate through mechanical cues, vibrations, and physical contact. These signals are frequently used for alarm communication or for evaluating a food source. When termites construct their nests, they use predominantly indirect communication. No single termite would be in charge of any particular construction project. Individual termites react rather than think, but at a group level, they exhibit a sort of collective cognition. Specific structures or other objects such as pellets of soil or pillars cause termites to start building. The termite adds these objects onto existing structures, and such behaviour encourages building behaviour in other workers. The result is a self-organised process whereby the information that directs termite activity results from changes in the environment rather than from direct contact among individuals. Termites can distinguish nestmates and non-nestmates through chemical communication and gut symbionts: chemicals consisting of hydrocarbons released from the cuticle allow the recognition of alien termite species. Each colony has its own distinct odour. This odour is a result of genetic and environmental factors such as the termites' diet and the composition of the bacteria within the termites' intestines. Defence Termites rely on alarm communication to defend a colony. Alarm pheromones can be released when the nest has been breached or is being attacked by enemies or potential pathogens. Termites always avoid nestmates infected with Metarhizium anisopliae spores, through vibrational signals released by infected nestmates. Other methods of defence include headbanging and secretion of fluids from the frontal gland and defecating faeces containing alarm pheromones. In some species, some soldiers block tunnels to prevent their enemies from entering the nest, and they may deliberately rupture themselves as an act of defence. In cases where the intrusion is coming from a breach that is larger than the soldier's head, soldiers form a phalanx-like formation around the breach and bite at intruders. If an invasion carried out by Megaponera analis is successful, an entire colony may be destroyed, although this scenario is rare. 
To termites, any breach of their tunnels or nests is a cause for alarm. When termites detect a potential breach, the soldiers usually bang their heads, apparently to attract other soldiers for defence and to recruit additional workers to repair any breach. Additionally, an alarmed termite bumps into other termites, which causes them to be alarmed and to leave pheromone trails to the disturbed area, which is also a way to recruit extra workers. The pantropical subfamily Nasutitermitinae has a specialised caste of soldiers, known as nasutes, that have the ability to exude noxious liquids through a horn-like frontal projection that they use for defence. Nasutes have lost their mandibles through the course of evolution and must be fed by workers. A wide variety of monoterpene hydrocarbon solvents have been identified in the liquids that nasutes secrete. Similarly, Formosan subterranean termites have been known to secrete naphthalene to protect their nests. Soldiers of the species Globitermes sulphureus commit suicide by autothysis – rupturing a large gland just beneath the surface of their cuticles. The thick, yellow fluid in the gland becomes very sticky on contact with the air, entangling ants or other insects that are trying to invade the nest. Another termite, Neocapritermes taracua, also engages in suicidal defence. Workers physically unable to use their mandibles while in a fight form a pouch full of chemicals, then deliberately rupture themselves, releasing toxic chemicals that paralyse and kill their enemies. The soldiers of the neotropical termite family Serritermitidae have a defence strategy which involves frontal gland autothysis, with the body rupturing between the head and abdomen. When soldiers guarding nest entrances are attacked by intruders, they engage in autothysis, creating a block that denies entry to any attacker. Workers use several different strategies to deal with their dead, including burying, cannibalism, and avoiding a corpse altogether. To avoid pathogens, termites occasionally engage in necrophoresis, in which a nestmate carries away a corpse from the colony to dispose of it elsewhere. Which strategy is used depends on the nature of the corpse a worker is dealing with (i.e. the age of the carcass). Relationship with other organisms A species of fungus is known to mimic termite eggs, successfully avoiding its natural predators. These small brown balls, known as "termite balls", rarely kill the eggs, and in some cases the workers tend to them. This fungus mimics these eggs by producing cellulose-digesting enzymes known as glucosidases. A unique mimicking behaviour exists between various species of Trichopsenius beetles and certain termite species within Reticulitermes. The beetles share the same cuticle hydrocarbons as the termites and even biosynthesize them. This chemical mimicry allows the beetles to integrate themselves within the termite colonies. The developed appendages on the physogastric abdomen of Austrospirachtha mimetes allow the beetle to mimic a termite worker. Some species of ant are known to capture termites to use as a fresh food source later on, rather than killing them. For example, Formica nigra captures termites, and those that try to escape are immediately seized and driven underground. Certain species of ants in the subfamily Ponerinae conduct these raids, although other ant species go in alone to steal the eggs or nymphs. Ants such as Megaponera analis attack the outside of mounds and Dorylinae ants attack underground. 
Despite this, some termites and ants can coexist peacefully. Some species of termite, including Nasutitermes corniger, form associations with certain ant species to keep away predatory ant species. The earliest known association between Azteca ants and Nasutitermes termites dates back to the Oligocene to Miocene period. Fifty-four species of ants are known to inhabit Nasutitermes mounds, both occupied and abandoned ones. One reason many ants live in Nasutitermes mounds is the termites' frequent occurrence in their geographical range; another is to protect themselves from floods. Iridomyrmex also inhabits termite mounds, although no evidence for any kind of relationship (other than a predatory one) is known. In rare cases, certain species of termites live inside active ant colonies. Some invertebrate organisms such as beetles, caterpillars, flies and millipedes are termitophiles and dwell inside termite colonies (they are unable to survive independently). As a result, certain beetles and flies have evolved with their hosts. They have developed a gland that secretes a substance that attracts the workers, which respond by licking them. Mounds may also provide shelter and warmth to birds, lizards, snakes and scorpions. Termites are known to carry pollen and regularly visit flowers, so are regarded as potential pollinators for a number of flowering plants. One flower in particular, Rhizanthella gardneri, is regularly pollinated by foraging workers, and it is perhaps the only Orchidaceae flower in the world to be pollinated by termites. Many plants have developed effective defences against termites. However, seedlings are vulnerable to termite attacks and need additional protection, as their defence mechanisms only develop when they have passed the seedling stage. Defence is typically achieved by secreting antifeedant chemicals into the woody cell walls. This reduces the ability of termites to efficiently digest the cellulose. A commercial product, "Blockaid", has been developed in Australia that uses a range of plant extracts to create a paint-on nontoxic termite barrier for buildings. An extract of a species of Australian figwort, Eremophila, has been shown to repel termites; tests have shown that termites are so strongly repelled by the toxic material that they will starve rather than consume the food. When kept close to the extract, they become disoriented and eventually die. Relationship with the environment Termite populations can be substantially impacted by environmental changes including those caused by human intervention. A Brazilian study investigated the termite assemblages of three Caatinga sites under different levels of anthropogenic disturbance in the semi-arid region of northeastern Brazil, each sampled using 65 x 2 m transects. A total of 26 species of termites were present in the three sites, and 196 encounters were recorded in the transects. The termite assemblages were considerably different among sites, with a conspicuous reduction in both diversity and abundance with increased disturbance, related to the reduction of tree density and soil cover, and with the intensity of trampling by cattle and goats. The wood-feeders were the most severely affected feeding group. Nests A termite nest can be considered to be composed of two parts, the inanimate and the animate. The animate is all of the termites living inside the colony, and the inanimate part is the structure itself, which is constructed by the termites. 
Nests can be broadly separated into three main categories: hypogeal, i.e. subterranean (completely below ground), epigeal (protruding above the soil surface), and arboreal (built above ground, but always connected to the ground via shelter tubes). Epigeal nests (mounds) protrude from the earth with ground contact and are made out of earth and mud. A nest has many functions such as providing a protected living space and providing shelter against predators. Most termites construct underground colonies rather than multifunctional nests and mounds. Primitive termites of today nest in wooden structures such as logs, stumps and the dead parts of trees, as did termites millions of years ago. To build their nests, termites use a variety of resources such as faeces, which have many desirable properties as a construction material. Other building materials include partly digested plant material, used in carton nests (arboreal nests built from faecal elements and wood), and soil, used in subterranean nest and mound construction. Not all nests are visible, as many nests in tropical forests are located underground. Species in the subfamily Apicotermitinae are good examples of subterranean nest builders, as they only dwell inside tunnels. Other termites live in wood, and tunnels are constructed as they feed on the wood. Nests and mounds protect the termites' soft bodies against desiccation, light, pathogens and parasites, as well as providing a fortification against predators. Nests made out of carton are particularly weak, and so the inhabitants use counter-attack strategies against invading predators. Arboreal carton nests of mangrove swamp-dwelling Nasutitermes are enriched in lignin and depleted in cellulose and xylans. This change is caused by bacterial decay in the gut of the termites: they use their faeces as a carton building material. Arboreal termite nests can account for as much as 2% of above-ground carbon storage in Puerto Rican mangrove swamps. These Nasutitermes nests are mainly composed of partially biodegraded wood material from the stems and branches of mangrove trees, namely, Rhizophora mangle (red mangrove), Avicennia germinans (black mangrove) and Laguncularia racemosa (white mangrove). Some species build complex nests called polycalic nests; this habitat is called polycalism. Polycalic species of termites form multiple nests, or calies, connected by subterranean chambers. The termite genera Apicotermes and Trinervitermes are known to have polycalic species. Polycalic nests appear to be less frequent in mound-building species, although polycalic arboreal nests have been observed in a few species of Nasutitermes. Mounds Nests are considered mounds if they protrude from the earth's surface. A mound provides termites the same protection as a nest but is stronger. Mounds located in areas with torrential and continuous rainfall are at risk of erosion due to their clay-rich construction. Those made from carton can provide protection from the rain, and in fact can withstand high precipitation. Certain areas in mounds are used as strong points in case of a breach. For example, Cubitermes colonies build narrow tunnels used as strong points, as the diameter of the tunnels is small enough for soldiers to block. A highly protected chamber, known as the "queen's cell", houses the queen and king and is used as a last line of defence. Species in the genus Macrotermes arguably build the most complex structures in the insect world, constructing enormous mounds. 
These mounds are among the largest in the world, reaching a height of 8 to 9 metres (26 to 29 feet), and consist of chimneys, pinnacles and ridges. Another termite species, Amitermes meridionalis, can build nests 3 to 4 metres (9 to 13 feet) high and 2.5 metres (8 feet) wide. The tallest mound ever recorded, found in the Democratic Republic of the Congo, was 12.8 metres (42 ft) high. The sculptured mounds sometimes have elaborate and distinctive forms, such as those of the compass termite (Amitermes meridionalis and A. laurensis), which builds tall, wedge-shaped mounds with the long axis oriented approximately north–south, which gives them their common name. This orientation has been experimentally shown to assist thermoregulation. The north–south orientation causes the internal temperature of a mound to increase rapidly during the morning while avoiding overheating from the midday sun. The temperature then remains at a plateau for the rest of the day until the evening. Shelter tubes Termites construct shelter tubes, also known as earthen tubes or mud tubes, that start from the ground. These shelter tubes can be found on walls and other structures. Constructed by termites during the night, a time of higher humidity, these tubes provide protection to termites from potential predators, especially ants. Shelter tubes also provide high humidity and darkness and allow workers to collect food sources that cannot be accessed in any other way. These passageways are made from soil and faeces and are normally brown in colour. The size of these shelter tubes depends on the number of food sources that are available. They range from less than 1 cm to several cm in width, but may be dozens of metres in length. Relationship with humans As pests Owing to their wood-eating habits, many termite species can do significant damage to unprotected buildings and other wooden structures. Termites play an important role as decomposers of wood and vegetative material, and the conflict with humans occurs where structures and landscapes containing structural wood components, cellulose-derived structural materials and ornamental vegetation provide termites with a reliable source of food and moisture. Their habit of remaining concealed often results in their presence being undetected until the timbers are severely damaged, with only a thin exterior layer of wood remaining, which protects them from the environment. Of the 3,106 species known, only 183 species cause damage; 83 species cause significant damage to wooden structures. In North America, 18 subterranean species are pests; in Australia, 16 species have an economic impact; in the Indian subcontinent, 26 species are considered pests, and in tropical Africa, 24. In Central America and the West Indies, there are 17 pest species. Among the termite genera, Coptotermes has the highest number of pest species of any genus, with 28 species known to cause damage. Less than 10% of drywood termites are pests, but they infest wooden structures and furniture in tropical, subtropical and other regions. Dampwood termites only attack lumber material exposed to rainfall or soil. Drywood termites thrive in warm climates, and human activities can enable them to invade homes since they can be transported through contaminated goods, containers and ships. Colonies of termites have been seen thriving in warm buildings located in cold regions. Some termites are considered invasive species. 
Cryptotermes brevis, the most widely introduced invasive termite species in the world, has been introduced to all the islands in the West Indies and to Australia. In addition to causing damage to buildings, termites can also damage food crops. Termites may attack trees whose resistance to damage is low but generally ignore fast-growing plants. Most attacks occur at harvest time; crops and trees are attacked during the dry season. In Australia, at a cost of more than per year, termites cause more damage to houses than fire, floods and storms combined. In Malaysia, it is estimated that termites caused about RM400 million in damage to properties and buildings. Termites cost the southwestern United States approximately $1.5 billion each year in wood structure damage, but the true cost of damage worldwide cannot be determined. Drywood termites are responsible for a large proportion of the damage caused by termites. The goal of termite control is to keep structures and susceptible ornamental plants free from termites. Structures may be homes or businesses, or elements such as wooden fence posts and telephone poles. Regular and thorough inspections by a trained professional may be necessary to detect termite activity in the absence of more obvious signs like termite swarmers or alates inside or adjacent to a structure. Termite monitors made of wood or cellulose adjacent to a structure may also provide an indication of termite foraging activity where it will be in conflict with humans. Termites can be controlled by application of Bordeaux mixture or other substances that contain copper, such as chromated copper arsenate. In the United States, application of a soil termiticide with the active ingredient Fipronil, such as Termidor SC or Taurus SC, by a licensed professional, is a common remedy approved by the Environmental Protection Agency for economically significant subterranean termites. A growing demand for alternative, green, and "more natural" extermination methods has increased demand for mechanical and biological control methods such as orange oil. To better control the population of termites, various methods have been developed to track termite movements. One early method involved distributing termite bait laced with immunoglobulin G (IgG) marker proteins from rabbits or chickens. Termites collected from the field could be tested for the rabbit-IgG markers using a rabbit-IgG-specific assay. More recently developed, less expensive alternatives include tracking the termites using egg white, cow milk, or soy milk proteins, which can be sprayed on termites in the field. Termites bearing these proteins can be traced using a protein-specific ELISA test. RNAi insecticides specific to termites are in development. One factor reducing investment in their research and development is concern about high potential for resistance evolution. In 1994, termites of the species Reticulitermes grassei were identified in two bungalows in Saunton, Devon. Anecdotal evidence suggests the infestation could date back 70 years before the official identification. There are reports that gardeners had seen white ants and that a greenhouse had had to be replaced in the past. The Saunton infestation was the first and only colony ever recorded in the UK. In 1998, the Termite Eradication Programme was set up with the intention of containing and eradicating the colony. The TEP was managed by the Ministry of Housing, Communities & Local Government (now the Department for Levelling Up, Housing and Communities). 
The TEP used "insect growth regulators" to prevent the termites from reaching maturity and reproducing. In 2021, the UK's Termite Eradication Programme announced the eradication of the colony, the first time a country has eradicated termites. As food 43 termite species are used as food by humans or are fed to livestock. These insects are particularly important in impoverished countries where malnutrition is common, as the protein from termites can help improve the human diet. Termites are consumed in many regions globally, but this practice has only become popular in developed nations in recent years. Termites are consumed by people in many different cultures around the world. In many parts of Africa, the alates are an important factor in the diets of native populations. Groups have different ways of collecting or cultivating insects; sometimes collecting soldiers from several species. Though harder to acquire, queens are regarded as a delicacy. Termite alates are high in nutrition with adequate levels of fat and protein. They are regarded as pleasant in taste, having a nut-like flavour after they are cooked. Alates are collected when the rainy season begins. During a nuptial flight, they are typically seen around lights to which they are attracted, and so nets are set up on lamps and captured alates are later collected. The wings are removed through a technique that is similar to winnowing. The best result comes when they are lightly roasted on a hot plate or fried until crisp. Oil is not required as their bodies usually contain sufficient amounts of oil. Termites are typically eaten when livestock is lean and tribal crops have not yet developed or produced any food, or if food stocks from a previous growing season are limited. In addition to Africa, termites are consumed in local or tribal areas in Asia and North and South America. In Australia, Indigenous Australians are aware that termites are edible but do not consume them even in times of scarcity; there are few explanations as to why. Termite mounds are the main sources of soil consumption (geophagy) in many countries including Kenya, Tanzania, Zambia, Zimbabwe and South Africa. Researchers have suggested that termites are suitable candidates for human consumption and space agriculture, as they are high in protein and can be used to convert inedible waste to consumable products for humans. In agriculture Termites can be major agricultural pests, particularly in East Africa and North Asia, where crop losses can be severe (3–100% in crop loss in Africa). Counterbalancing this is the greatly improved water infiltration where termite tunnels in the soil allow rainwater to soak in deeply, which helps reduce runoff and consequent soil erosion through bioturbation. In South America, cultivated plants such as eucalyptus, upland rice and sugarcane can be severely damaged by termite infestations, with attacks on leaves, roots and woody tissue. Termites can also attack other plants, including cassava, coffee, cotton, fruit trees, maize, peanuts, soybeans and vegetables. Mounds can disrupt farming activities, making it difficult for farmers to operate farming machinery; however, despite farmers' dislike of the mounds, it is often the case that no net loss of production occurs. Termites can be beneficial to agriculture, such as by boosting crop yields and enriching the soil. Termites and ants can re-colonise untilled land that contains crop stubble, which colonies use for nourishment when they establish their nests. 
The presence of nests in fields enables larger amounts of rainwater to soak into the ground and increases the amount of nitrogen in the soil, both essential for the growth of crops. In science and technology The termite gut has inspired various research efforts aimed at replacing fossil fuels with cleaner, renewable energy sources. Termites are efficient bioreactors, theoretically capable of producing two litres of hydrogen from a single sheet of paper. Approximately 200 species of microbes live inside the termite hindgut, releasing the hydrogen that was trapped inside wood and plants that they digest. Through the action of unidentified enzymes in the termite gut, lignocellulose polymers are broken down into sugars and are transformed into hydrogen. The bacteria within the gut turn the sugar and hydrogen into cellulose acetate, an acetate ester of cellulose, on which termites rely for energy. Community DNA sequencing of the microbes in the termite hindgut has been employed to provide a better understanding of the metabolic pathway. Genetic engineering may enable hydrogen to be generated in bioreactors from woody biomass. The development of autonomous robots capable of constructing intricate structures without human assistance has been inspired by the complex mounds that termites build. These robots work independently and can move by themselves on a tracked grid, capable of climbing and lifting up bricks. Such robots may be useful for future projects on Mars, or for building levees to prevent flooding. Termites use sophisticated means to control the temperatures of their mounds. As discussed above, the shape and orientation of the mounds of the Australian compass termite stabilises their internal temperatures during the day. As the towers heat up, the solar chimney effect (stack effect) creates an updraft of air within the mound (a rough estimate of this buoyancy-driven pressure is sketched below). Wind blowing across the tops of the towers enhances the circulation of air through the mounds, which also include side vents in their construction. The solar chimney effect has been in use for centuries in the Middle East and Near East for passive cooling, as well as in Europe by the Romans. It is only relatively recently, however, that climate-responsive construction techniques have become incorporated into modern architecture. Especially in Africa, the stack effect has become a popular means to achieve natural ventilation and passive cooling in modern buildings. In culture The Eastgate Centre is a shopping centre and office block in central Harare, Zimbabwe, whose architect, Mick Pearce, used passive cooling inspired by that used by the local termites. It was the first major building exploiting termite-inspired cooling techniques to attract international attention. Other such buildings include the Learning Resource Center at the Catholic University of Eastern Africa and the Council House 2 building in Melbourne, Australia. Few zoos hold termites, due to the difficulty in keeping them captive and to the reluctance of authorities to permit potential pests. One of the few that do, the Zoo Basel in Switzerland, has two thriving Macrotermes bellicosus populations – resulting in an event very rare in captivity: the mass migrations of young flying termites. This happened in September 2008, when thousands of male termites left their mound each night, died, and covered the floors and water pits of the house holding their exhibit. African tribes in several countries have termites as totems, and for this reason tribe members are forbidden to eat the reproductive alates. 
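The stack-effect ventilation described above can be quantified with a simple buoyancy estimate. The sketch below is illustrative only and not a model of any particular mound: it assumes dry air behaving as an ideal gas at standard pressure, a nominal chimney height, and internal and external temperatures chosen purely for the example, and it computes the pressure difference (outside density minus inside density, times g, times height) that drives the updraft.

```python
# Rough stack-effect (chimney) draft estimate for a warm interior column of air.
# Assumptions: ideal-gas dry air at standard pressure; numbers are illustrative.

G = 9.81             # gravitational acceleration, m/s^2
P_ATM = 101_325.0    # ambient pressure, Pa
R_SPECIFIC = 287.05  # specific gas constant of dry air, J/(kg*K)

def air_density(temp_c, pressure=P_ATM):
    """Density of dry air from the ideal gas law, kg/m^3."""
    return pressure / (R_SPECIFIC * (temp_c + 273.15))

def stack_pressure(height_m, t_inside_c, t_outside_c):
    """Buoyancy pressure difference driving the updraft, in pascals."""
    return (air_density(t_outside_c) - air_density(t_inside_c)) * G * height_m

# Example: a 3 m chimney of air 6 K warmer inside than outside
# yields a draft pressure well under one pascal.
dp = stack_pressure(height_m=3.0, t_inside_c=30.0, t_outside_c=24.0)
print(f"Draft pressure: {dp:.2f} Pa")
```

Even such a small pressure difference, acting continuously, is enough to sustain a steady circulation through vents and porous walls, which is the principle both mounds and passively cooled buildings exploit.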
Termites are widely used in traditional popular medicine as treatments for diseases and other conditions such as asthma, bronchitis, hoarseness, influenza, sinusitis, tonsillitis and whooping cough. In Nigeria, Macrotermes nigeriensis is used for spiritual protection and to treat wounds and sick pregnant women. In Southeast Asia, termites are used in ritual practices. In Malaysia, Singapore and Thailand, termite mounds are commonly worshipped among the populace. Abandoned mounds are viewed as structures created by spirits, with a local guardian believed to dwell within the mound; this is known as Keramat and Datok Kong. In urban areas, local residents construct red-painted shrines over mounds that have been abandoned, where they pray for good health, protection and luck.
Biology and health sciences
Insects and other hexapods
null
54813
https://en.wikipedia.org/wiki/Shellac
Shellac
Shellac () is a resin secreted by the female lac bug on trees in the forests of India and Thailand. Chemically, it is mainly composed of aleuritic acid, jalaric acid, shellolic acid, and other natural waxes. It is processed and sold as dry flakes and dissolved in alcohol to make liquid shellac, which is used as a brush-on colorant, food glaze and wood finish. Shellac functions as a tough natural primer, sanding sealant, tannin-blocker, odour-blocker, stain, and high-gloss varnish. Shellac was once used in electrical applications as it possesses good insulation qualities and seals out moisture. Phonograph and 78 rpm gramophone records were made of shellac until they were gradually replaced by vinyl records from 1948 onwards. From the time shellac replaced oil and wax finishes in the 19th century, it was one of the dominant wood finishes in the western world until it was largely replaced by nitrocellulose lacquer in the 1920s and 1930s. Besides wood finishing, shellac is used as an ingredient in food, medication and candy as confectioner's glaze, as well as a means of preserving harvested citrus fruit. Etymology Shellac comes from shell and lac, a partial calque of French , 'lac in thin pieces', later , 'gum lac'. Most European languages (except Romance ones and Greek) have borrowed the word for the substance from English or from the German equivalent . Production Shellac is scraped from the bark of the trees where the female lac bug, Kerria lacca (order Hemiptera, family Kerriidae, also known as Laccifer lacca), secretes it to form a tunnel-like tube as it traverses the branches of the tree. Though these tunnels are sometimes referred to as "cocoons", they are not cocoons in the entomological sense. This insect is in the same superfamily as the insect from which cochineal is obtained. The insects suck the sap of the tree and excrete "sticklac" almost constantly. The least-coloured shellac is produced when the insects feed on the kusum tree (Schleichera). The number of lac bugs required to produce of shellac has variously been estimated between and . The root word lakh is a unit in the Indian numbering system for and presumably refers to the huge numbers of insects that swarm on host trees, up to . The raw shellac, which contains bark shavings and lac bugs removed during scraping, is placed in canvas tubes (much like long socks) and heated over a fire. This causes the shellac to liquefy, and it seeps out of the canvas, leaving the bark and bugs behind. The thick, sticky shellac is then dried into a flat sheet and broken into flakes, or dried into "buttons" (pucks/cakes), then bagged and sold. The end-user then crushes it into a fine powder and mixes it with ethyl alcohol before use, to dissolve the flakes and make liquid shellac. Liquid shellac has a limited shelf life (about 1 year), so is sold in dry form for dissolution before use. Liquid shellac sold in hardware stores is often marked with the production (mixing) date, so the consumer can know whether the shellac inside is still good. Some manufacturers (e.g., Zinsser) have ceased labeling shellac with the production date, but the production date may be discernible from the production lot code. Alternatively, old shellac may be tested to see if it is still usable: a few drops on glass should dry to a hard surface in roughly 15 minutes. Shellac that remains tacky for a long time is no longer usable. Storage life depends on peak temperature, so refrigeration extends shelf life. 
The thickness (concentration) of shellac is measured by the unit "pound cut", referring to the amount (in pounds) of shellac flakes dissolved in a gallon of denatured alcohol. For example: a 1-lb. cut of shellac is the strength obtained by dissolving one pound of shellac flakes in a gallon of alcohol (equivalent to ). Most pre-mixed commercial preparations come at a 3-lb. cut (a worked dilution example appears below). Multiple thin layers of shellac produce a significantly better end result than a few thick layers. Thick layers of shellac do not adhere to the substrate or to each other well, and thus can peel off with relative ease; in addition, thick shellac will obscure fine details in carved designs in wood and other substrates. Shellac naturally dries to a high-gloss sheen. For applications where a flatter (less shiny) sheen is desired, products containing amorphous silica, such as "Shellac Flat", may be added to the dissolved shellac. Shellac naturally contains a small amount of wax (3%–5% by volume), which comes from the lac bug. In some preparations, this wax is removed (the resulting product being called "dewaxed shellac"). This is done for applications where the shellac will be coated with something else (such as paint or varnish), so the topcoat will adhere. Waxy (non-dewaxed) shellac appears milky in liquid form, but dries clear. Colours and availability Shellac comes in many warm colours, ranging from a very light blonde ("platina") to a very dark brown ("garnet"), with many varieties of brown, yellow, orange and red in between. The colour is influenced by the sap of the tree the lac bug is living on and by the time of harvest. Historically, the most commonly sold shellac is called "orange shellac", and was used extensively as a combination stain and protectant for wood panelling and cabinetry in the 20th century. Shellac was once very common anywhere paints or varnishes were sold (such as hardware stores). However, cheaper and more abrasion- and chemical-resistant finishes, such as polyurethane, have almost completely replaced it in decorative residential wood finishing such as hardwood floors, wooden wainscoting plank panelling, and kitchen cabinets. These alternative products, however, must be applied over a stain if the user wants the wood to be coloured; clear or blonde shellac may be applied over a stain without affecting the colour of the finished piece, as a protective topcoat. "Wax over shellac" (an application of buffed-on paste wax over several coats of shellac) is often regarded as a beautiful, if fragile, finish for hardwood floors. Luthiers still use shellac to French polish fine acoustic stringed instruments, but it has been replaced by synthetic plastic lacquers and varnishes in many workshops, especially high-volume production environments. Shellac dissolved in alcohol, typically more dilute than as used in French polish, is now commonly sold as "sanding sealer" by several companies. It is used to seal wooden surfaces, often as preparation for a final more durable finish; it reduces the amount of final coating required by reducing its absorption into the wood. Properties Shellac is a natural bioadhesive polymer and is chemically similar to synthetic polymers. It can thus be considered a natural form of plastic. With a melting point of , it can be classed as a thermoplastic; when used to bind wood flour, the mixture can be moulded with heat and pressure. 
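The "pound cut" measure described above lends itself to simple dilution arithmetic. The sketch below is illustrative only: it treats the cut as pounds of flakes per gallon of alcohol, as defined above, and ignores the small volume contributed by the dissolved resin, so the results are approximations rather than shop-exact values; the function name is made up for the example.

```python
def alcohol_to_add(volume_gal, current_cut_lb, target_cut_lb):
    """Gallons of alcohol to add to thin shellac from one cut to a lower cut.

    Approximation: the cut is treated as pounds of flakes per gallon of
    alcohol, and the volume added by the dissolved flakes is ignored.
    """
    if target_cut_lb >= current_cut_lb:
        raise ValueError("target cut must be lower than the current cut")
    flakes_lb = current_cut_lb * volume_gal           # resin in the batch
    total_alcohol_needed = flakes_lb / target_cut_lb  # gallons at the target cut
    return total_alcohol_needed - volume_gal

# Example: thinning one gallon of a 3-lb cut down to a 1-lb cut
# calls for roughly two additional gallons of alcohol.
print(alcohol_to_add(volume_gal=1.0, current_cut_lb=3.0, target_cut_lb=1.0))
```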
Shellac scratches more easily than most lacquers and varnishes, and application is more labour-intensive, which is why it has been replaced by plastic in most areas. Shellac is much softer than Urushi lacquer, for instance, which is far superior with regard to both chemical and mechanical resistance. But damaged shellac can easily be touched up with another coat of shellac (unlike polyurethane, which chemically cures to a solid) because the new coat merges with and bonds to the existing coat(s). Shellac is soluble in alkaline solutions of ammonia, sodium borate, sodium carbonate, and sodium hydroxide, and also in various organic solvents. When dissolved in alcohol (typically denatured ethanol) for application, shellac yields a coating of good durability and hardness. Upon mild hydrolysis shellac gives a complex mix of aliphatic and alicyclic hydroxy acids and their polymers that varies in exact composition depending upon the source of the shellac and the season of collection. The major component of the aliphatic component is aleuritic acid, whereas the main alicyclic component is shellolic acid. Shellac is UV-resistant, and does not darken as it ages (though the wood under it may do so, as in the case of pine). History The earliest written evidence of shellac goes back years, but shellac is known to have been used earlier. According to the ancient Indian epic poem, the Mahabharata, an entire palace was coated with dried shellac. Shellac was uncommonly used as a dyestuff for as long as there was a trade with the East Indies. According to Merrifield, shellac was first used as a binding agent in artist's pigments in Spain in the year 1220. The use of overall paint or varnish decoration on large pieces of furniture was first popularised in Venice (then later throughout Italy). There are a number of 13th-century references to painted or varnished cassone, often dowry cassone that were made deliberately impressive as part of dynastic marriages. The definition of varnish is not always clear, but it seems to have been a spirit varnish based on gum benjamin or mastic, both traded around the Mediterranean. At some time, shellac began to be used as well. An article from the Journal of the American Institute of Conservation describes using infrared spectroscopy to identify shellac coating on a 16th-century cassone. This is also the period in history where "varnisher" was identified as a distinct trade, separate from both carpenter and artist. Another use for shellac is sealing wax. The widespread use of shellac seals in Europe dates back to the 17th century, thanks to the increasing trade with India. Uses Historical In the early- and mid-twentieth century, orange shellac was used as a one-product finish (combination stain and varnish-like topcoat) on decorative wood panelling used on walls and ceilings in homes, particularly in the US. In the American South, use of knotty pine plank panelling covered with orange shellac was once as common in new construction as drywall is today. It was also often used on kitchen cabinets and hardwood floors, prior to the advent of polyurethane. Until the advent of vinyl, most gramophone records were pressed from shellac compounds. From 1921 to 1928, tons of shellac were used to create 260 million records for Europe. In the 1930s, it was estimated that half of all shellac was used for gramophone records. Use of shellac for records was common until the 1950s and continued into the 1970s in some non-Western countries, as well as for some children's records. 
Until recent advances in technology, shellac (French polish) was the only glue used in the making of ballet dancers' pointe shoes, to stiffen the box (toe area) to support the dancer en pointe. Many manufacturers of pointe shoes still use the traditional techniques, and many dancers use shellac to revive a softening pair of shoes. Shellac was historically used as a protective coating on paintings. Sheets of Braille were coated with shellac to help protect them from wear due to being read by hand. Shellac was used from the mid-nineteenth century to produce small moulded goods such as picture frames, boxes, toilet articles, jewellery, inkwells and even dentures. Advances in plastics have rendered shellac obsolete as a moulding compound. Shellac (both orange and white varieties) was used both in the field and laboratory to glue and stabilise dinosaur bones until about the mid-1960s. While it was effective at the time, the long-term negative effects of shellac (being organic in nature) on dinosaur bones and other fossils are debated, and shellac is very rarely used by professional conservators and fossil preparators today. Shellac was used for fixing inductor, motor, generator and transformer windings. It was applied directly to single-layer windings in an alcohol solution. For multi-layer windings, the whole coil was submerged in shellac solution, then drained and placed in a warm location to allow the alcohol to evaporate. The shellac locked the wire turns in place, provided extra insulation, prevented movement and vibration and reduced buzz and hum. In motors and generators it also helps transfer force generated by magnetic attraction and repulsion from the windings to the rotor or armature. In more recent times, shellac has been replaced in these applications by synthetic resins such as polyester resin. Some applications use shellac mixed with other natural or synthetic resins, such as pine resin or phenol-formaldehyde resin, of which Bakelite is the best known, for electrical use. Mixed with other resins, barium sulfate, calcium carbonate, zinc sulfide, aluminium oxide and/or cuprous carbonate (malachite), shellac forms a component of heat-cured capping cement used to fasten the caps or bases to the bulbs of electric lamps. Current uses It is the central element of the traditional "French polish" method of finishing furniture, fine string instruments, and pianos. Shellac, being edible, is used as a glazing agent on pills (see excipient) and sweets, in the form of pharmaceutical glaze (or "confectioner's glaze"). Because of its acidic properties (resisting stomach acids), shellac-coated pills may be used for a timed enteric or colonic release. Shellac is used as a 'wax' coating on citrus fruit to prolong its shelf/storage life. It is also used to replace the natural wax of the apple, which is removed during the cleaning process. When used for this purpose, it has the food additive E number E904. Shellac is an odour and stain blocker and so is often used as the base of "all-purpose" primers. Although its durability against abrasives and many common solvents is not very good, shellac provides an excellent barrier against water vapour penetration. Shellac-based primers are an effective sealant to control odours associated with fire damage. Shellac has traditionally been used as a dye for cotton and, especially, silk cloth in Thailand, particularly in the north-eastern region. It yields a range of warm colours from pale yellow through to dark orange-reds and dark ochre. 
Naturally dyed silk cloth, including that using shellac, is widely available in the rural northeast, especially in Ban Khwao District, Chaiyaphum province. The Thai name for the insect and the substance is "khrang" (Thai: ครั่ง). Wood finish Wood finishing is one of the most traditional and still popular uses of shellac mixed with solvents or alcohol. This dissolved shellac liquid, applied to a piece of wood, is an evaporative finish: the alcohol of the shellac mixture evaporates, leaving behind a protective film. Shellac as a wood finish is natural and non-toxic in its pure form. A finish made of shellac is UV-resistant. In terms of water-resistance and durability, it does not match synthetic finishing products. Because it is compatible with most other finishes, shellac is also used as a barrier or primer coat on wood to prevent the bleeding of resin or pigments into the final finish, or to prevent wood stain from blotching. Other Shellac is used: In the tying of artificial flies for trout and salmon, where it is used to seal all trimmed materials at the head of the fly. In combination with wax for preserving and imparting a shine to citrus fruits, such as lemons and oranges. In dental technology, where it is occasionally used in the production of custom impression trays and temporary denture baseplates. As a binder in India ink. For bicycles, as a protective and decorative coating for bicycle handlebar tape, and as a hard-drying adhesive for tubular tyres, particularly for track racing. For re-attaching ink sacs when restoring vintage fountain pens, the orange variety preferably. Applied as a coating with either a standard or modified Huon-Stuehrer nozzle, it can be economically micro-sprayed onto various smooth candies, such as chocolate-coated peanuts. Irregularities on the surface of the product being sprayed may result in the formation of unsightly aggregates ("lac-aggs"), which precludes the use of this technique on foods such as walnuts or raisins. For fixing pads to the key-cups of woodwind instruments. For lutherie applications, to bind wood fibres down and prevent tear-out on the soft spruce soundboards. To stiffen and impart water-resistance to felt hats, for wood finishing and as a constituent of gossamer (or goss for short), a cheesecloth fabric coated in shellac and ammonia solution used in the shell of traditional silk top and riding hats. For mounting insects, in the form of a gel adhesive mixture composed of 75% ethyl alcohol. As a binder in the fabrication of abrasive wheels, imparting flexibility and smoothness not found in vitrified (ceramic bond) wheels. 'Elastic' bonded wheels typically contain plaster of paris, yielding a stronger bond when mixed with shellac; the mixture of dry plaster powder, abrasive (e.g. corundum/aluminium oxide Al2O3) and shellac is heated and then pressed in a mould. In fireworks pyrotechnic compositions as a low-temperature fuel, where it allows the creation of pure 'greens' and 'blues', colours difficult to achieve with other fuel mixes. In jewellery: shellac is often applied to the top of a 'shellac stick' in order to hold small, complex objects. By melting the shellac, the jeweller can press the object (such as a stone setting mount) into it. The shellac, once cool, can firmly hold the object, allowing it to be manipulated with tools. 
In watchmaking, due to its low melting temperature (about ), shellac is used in most mechanical movements to adjust and adhere pallet stones to the pallet fork and secure the roller jewel to the roller table of the balance wheel. It is also used for securing small parts to a 'wax chuck' (faceplate) in a watchmakers' lathe. In the early twentieth century, it was used to protect some military rifle stocks. In Jelly Belly jelly beans, in combination with beeswax to give them their final buff and polish. In modern traditional archery, shellac is one of the hot-melt glue/resin products used to attach arrowheads to wooden or bamboo arrow shafts. In alcohol solution as sanding sealer, widely sold to seal sanded surfaces, typically wooden surfaces before a final coat of a more durable finish; similar to French polish but more dilute. As a topcoat in nail polish (although not all nail polish sold as "shellac" contains shellac, and some nail polish not labelled in this way does). In sculpture, to seal plaster and, in conjunction with wax or oil-soaps, to act as a barrier during mould-making processes. As a dilute solution in the sealing of harpsichord soundboards, protecting them from dust and buffering humidity changes while maintaining a bare-wood appearance. As a waterproofing agent for leather (e.g., for the soles of figure skate boots). As a way for ballet dancers to harden their pointe shoes, making them last longer.
Physical sciences
Terpenes and terpenoids
Chemistry
54840
https://en.wikipedia.org/wiki/Eutrophication
Eutrophication
Eutrophication is a general term describing a process in which nutrients accumulate in a body of water, resulting in increased growth of organisms that may deplete the oxygen in the water. Eutrophication may occur naturally or as a result of human actions. Manmade, or cultural, eutrophication occurs when sewage, industrial wastewater, fertilizer runoff, and other nutrient sources are released into the environment. Such nutrient pollution usually causes algal blooms and bacterial growth, resulting in the depletion of dissolved oxygen in water and causing substantial environmental degradation. Approaches for prevention and reversal of eutrophication include minimizing point source pollution from sewage and agriculture as well as other nonpoint pollution sources. Additionally, the introduction of bacteria and algae-inhibiting organisms such as shellfish and seaweed can also help reduce nitrogen pollution, which in turn controls the growth of cyanobacteria, the main source of harmful algal blooms. History and terminology The term "eutrophication" comes from the Greek eutrophos, meaning "well-nourished". Water bodies with very low nutrient levels are termed oligotrophic and those with moderate nutrient levels are termed mesotrophic. Advanced eutrophication may also be referred to as dystrophic and hypertrophic conditions. Thus, eutrophication has been defined as "degradation of water quality owing to enrichment by nutrients which results in excessive plant (principally algae) growth and decay." Eutrophication was recognized as a water pollution problem in European and North American lakes and reservoirs in the mid-20th century. Breakthrough research carried out at the Experimental Lakes Area (ELA) in Ontario, Canada, in the 1970s provided the evidence that freshwater bodies are phosphorus-limited. ELA uses the whole ecosystem approach and long-term, whole-lake investigations of freshwater focusing on cultural eutrophication. Causes Eutrophication is caused by excessive concentrations of nutrients, most commonly phosphates and nitrates, although this varies with location. Prior to being phased out in the 1970s, phosphate-containing detergents contributed to eutrophication. Since then, sewage and agriculture have emerged as the dominant phosphate sources. The main sources of nitrogen pollution are from agricultural runoff containing fertilizers and animal wastes, from sewage, and from atmospheric deposition of nitrogen originating from combustion or animal waste. The limitation of productivity in any aquatic system varies with the rate of supply (from external sources) and removal (flushing out) of nutrients from the body of water. This means that some nutrients are more prevalent in certain areas than others and different ecosystems and environments have different limiting factors. Phosphorus is the limiting factor for plant growth in most freshwater ecosystems; because phosphate adheres tightly to soil particles and settles in areas such as wetlands and lakes, more and more phosphorus is accumulating inside freshwater bodies. In marine ecosystems, nitrogen is the primary limiting nutrient; atmospheric deposition of nitrogen oxides (created by the combustion of fossil fuels) into the water has led to an increase in nitrogen levels and heightened eutrophication in the ocean. Cultural eutrophication Cultural or anthropogenic eutrophication is eutrophication caused by human activity. 
The problem became more apparent following the introduction of chemical fertilizers in agriculture (green revolution of the mid-1900s). Phosphorus and nitrogen are the two main nutrients that cause cultural eutrophication, as they enrich the water and allow some aquatic plants, especially algae, to grow rapidly and bloom in high densities. Algal blooms can shade out benthic plants, thereby altering the overall plant community. When algae die off, their degradation by bacteria removes oxygen, potentially generating anoxic conditions. This anoxic environment kills off aerobic organisms (e.g. fish and invertebrates) in the water body. This also affects terrestrial animals, restricting their access to affected water (e.g. as drinking sources). Selection for algal and aquatic plant species that can thrive in nutrient-rich conditions can cause structural and functional disruption to entire aquatic ecosystems and their food webs, resulting in loss of habitat and species biodiversity. There are several sources of excessive nutrients from human activity, including run-off from fertilized fields, lawns and golf courses; untreated sewage and wastewater; and nitrogen pollution from the combustion of fuels. Cultural eutrophication can occur in fresh water and salt water bodies, with shallow waters being the most susceptible. Along shorelines and in shallow lakes, sediments are frequently resuspended by wind and waves, which can result in nutrient release from sediments into the overlying water, enhancing eutrophication. The deterioration of water quality caused by cultural eutrophication can therefore negatively impact human uses including potable supply for consumption, industrial uses and recreation. Natural eutrophication Eutrophication can be a natural process and occurs naturally through the gradual accumulation of sediment and nutrients. In this case, the nutrients usually derive from dissolved phosphate minerals and dead plant matter in the water. Natural eutrophication has been well-characterized in lakes. Paleolimnologists now recognise that climate change, geology, and other external influences are also critical in regulating the natural productivity of lakes. A few artificial lakes also demonstrate the reverse process (meiotrophication), becoming less nutrient-rich with time as nutrient-poor inputs slowly elute the nutrient-richer water mass of the lake. This process may be seen in artificial lakes and reservoirs, which tend to be highly eutrophic on first filling but may become more oligotrophic with time. The main difference between natural and anthropogenic eutrophication is that the natural process is very slow, occurring on geological time scales. Effects Ecological effects Eutrophication can have the following ecological effects: increased biomass of phytoplankton, changes in macrophyte species composition and biomass, dissolved oxygen depletion, increased incidence of fish kills, and loss of desirable fish species. Decreased biodiversity When an ecosystem experiences an increase in nutrients, primary producers reap the benefits first. In aquatic ecosystems, species such as algae experience a population increase (called an algal bloom). Algal blooms limit the sunlight available to bottom-dwelling organisms and cause wide swings in the amount of dissolved oxygen in the water. Oxygen is required by all aerobically respiring plants and animals and it is replenished in daylight by photosynthesizing plants and algae. 
Under eutrophic conditions, dissolved oxygen greatly increases during the day, but is greatly reduced after dark by the respiring algae and by microorganisms that feed on the increasing mass of dead algae. When dissolved oxygen levels decline to hypoxic levels, fish and other marine animals suffocate. As a result, creatures such as fish, shrimp, and especially immobile bottom dwellers die off. In extreme cases, anaerobic conditions ensue, promoting growth of bacteria. Zones where this occurs are known as dead zones. New species invasion Eutrophication may cause competitive release by making abundant a normally limiting nutrient. This process causes shifts in the species composition of ecosystems. For instance, an increase in nitrogen might allow new, competitive species to invade and out-compete original inhabitant species. This has been shown to occur in New England salt marshes. In Europe and Asia, the common carp frequently lives in naturally eutrophic or hypereutrophic areas, and is adapted to living in such conditions. The eutrophication of areas outside its natural range partially explains the fish's success in colonizing these areas after being introduced. Toxicity Some harmful algal blooms resulting from eutrophication are toxic to plants and animals. Freshwater algal blooms can pose a threat to livestock. When the algae die or are eaten, neuro- and hepatotoxins are released, which can kill animals and may pose a threat to humans. An example of algal toxins working their way into humans is the case of shellfish poisoning. Biotoxins created during algal blooms are taken up by shellfish (mussels, oysters), leading to these human foods acquiring toxicity and poisoning humans. Examples include paralytic, neurotoxic, and diarrhoetic shellfish poisoning. Other marine animals can be vectors for such toxins, as in the case of ciguatera, where it is typically a predator fish that accumulates the toxin and then poisons humans. Economic effects Eutrophication and harmful algal blooms can have economic impacts due to increasing water treatment costs, commercial fishing and shellfish losses, recreational fishing losses (reductions in harvestable fish and shellfish), and reduced tourism income (decreases in perceived aesthetic value of the water body). Water treatment costs can be increased due to decreases in water transparency (increased turbidity). There can also be issues with color and smell during drinking water treatment. Health impacts Human health effects of eutrophication derive from two main issues: excess nitrate in drinking water and exposure to toxic algae. Nitrates in drinking water can cause blue baby syndrome in infants and can react with chemicals used to treat water to create disinfection by-products in drinking water. Direct contact with toxic algae through swimming or drinking can cause rashes, stomach or liver illness, and respiratory or neurological problems. Causes and effects for different types of water bodies Freshwater systems One response to added amounts of nutrients in aquatic ecosystems is the rapid growth of microscopic algae, creating an algal bloom. In freshwater ecosystems, floating algal blooms are commonly formed by nitrogen-fixing cyanobacteria (blue-green algae). This outcome is favored when soluble nitrogen becomes limiting and phosphorus inputs remain significant. Nutrient pollution is a major cause of algal blooms and excess growth of other aquatic plants, leading to overcrowding and competition for sunlight, space, and oxygen. 
Increased competition for the added nutrients can disrupt entire ecosystems and food webs, as well as causing a loss of habitat and species biodiversity. When overproduced macrophytes and algae die in eutrophic water, their decomposition further consumes dissolved oxygen. The depleted oxygen levels in turn may lead to fish kills and a range of other effects reducing biodiversity. Nutrients may become concentrated in an anoxic zone, often in deeper waters cut off by stratification of the water column, and may only be made available again during autumn turn-over in temperate areas or in conditions of turbulent flow. The dead algae and organic load carried by the water inflows into a lake settle to the bottom and undergo anaerobic digestion, releasing greenhouse gases such as methane and CO2. Some of the methane gas may be oxidised by methane-oxidising bacteria such as Methylococcus capsulatus, which in turn may provide a food source for zooplankton. Thus a self-sustaining biological process can take place to generate a primary food source for the phytoplankton and zooplankton, depending on the availability of adequate dissolved oxygen in the water body. Enhanced growth of aquatic vegetation, phytoplankton and algal blooms disrupts normal functioning of the ecosystem, causing a variety of problems such as a lack of oxygen, which is needed for fish and shellfish to survive. The growth of dense algae in surface waters can shade the deeper water and reduce the viability of benthic shelter plants, with resultant impacts on the wider ecosystem. Eutrophication also decreases the recreational and aesthetic value of rivers and lakes. Health problems can occur where eutrophic conditions interfere with drinking water treatment. Phosphorus is often regarded as the main culprit in cases of eutrophication in lakes subjected to "point source" pollution from sewage pipes. The concentration of algae and the trophic state of lakes correspond well to phosphorus levels in water (a worked example using a phosphorus-based trophic state index is given below). Studies conducted in the Experimental Lakes Area in Ontario have shown a relationship between the addition of phosphorus and the rate of eutrophication. Later stages of eutrophication lead to blooms of nitrogen-fixing cyanobacteria limited solely by the phosphorus concentration. Phosphorus-based eutrophication in freshwater lakes has been addressed in several cases. Coastal waters Eutrophication is a common phenomenon in coastal waters, where nitrogenous sources are the main culprit. In coastal waters, nitrogen is commonly the key limiting nutrient of marine waters (unlike the freshwater systems where phosphorus is often the limiting nutrient). Therefore, nitrogen levels are more important than phosphorus levels for understanding and controlling eutrophication problems in salt water. Estuaries, as the interface between freshwater and saltwater, can be both phosphorus and nitrogen limited and commonly exhibit symptoms of eutrophication. Eutrophication in estuaries often results in bottom water hypoxia or anoxia, leading to fish kills and habitat degradation. Upwelling in coastal systems also promotes increased productivity by conveying deep, nutrient-rich waters to the surface, where the nutrients can be assimilated by algae. Examples of anthropogenic sources of nitrogen-rich pollution to coastal waters include sea cage fish farming and discharges of ammonia from the production of coke from coal. 
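The correspondence between phosphorus levels and trophic state noted above is often summarised with a trophic state index. The sketch below uses the widely cited Carlson index based on total phosphorus, TSI = 14.42 ln(TP) + 4.15 with TP in micrograms per litre, together with the conventional approximate class boundaries (below about 40 oligotrophic, 40–50 mesotrophic, 50–70 eutrophic, above 70 hypereutrophic); treat the exact coefficients and cut-offs as indicative rather than authoritative, and the example concentrations as illustrative.

```python
import math

def carlson_tsi_phosphorus(total_p_ug_per_l):
    """Carlson trophic state index from total phosphorus (micrograms per litre)."""
    return 14.42 * math.log(total_p_ug_per_l) + 4.15

def trophic_class(tsi):
    """Map a TSI value onto the conventional trophic classes (approximate cut-offs)."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

# Illustrative total-phosphorus concentrations spanning the trophic spectrum.
for tp in (5, 15, 30, 150):
    tsi = carlson_tsi_phosphorus(tp)
    print(f"TP = {tp:3d} ug/L -> TSI = {tsi:5.1f} ({trophic_class(tsi)})")
```

Each doubling of total phosphorus adds roughly ten points to this index, which is why lakes can shift trophic class quickly once phosphorus loading increases.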
In addition to runoff from land, wastes from fish farming and industrial ammonia discharges, atmospheric fixed nitrogen can be an important nutrient source in the open ocean. This could account for around one third of the ocean's external (non-recycled) nitrogen supply, and up to 3% of the annual new marine biological production. Coastal waters embrace a wide range of marine habitats from enclosed estuaries to the open waters of the continental shelf. Phytoplankton productivity in coastal waters depends on both nutrient and light supply, with the latter an important limiting factor in waters near the shore, where sediment resuspension often limits light penetration. Nutrients are supplied to coastal waters from land via rivers and groundwater and also via the atmosphere. There is also an important source from the open ocean, via mixing of relatively nutrient-rich deep ocean waters. Nutrient inputs from the ocean are little changed by human activity, although climate change may alter the water flows across the shelf break. By contrast, inputs from land to coastal zones of the nutrients nitrogen and phosphorus have been increased by human activity globally. The extent of the increases varies greatly from place to place depending on human activities in the catchments. A third key nutrient, dissolved silicon, is derived primarily from sediment weathering, delivered via rivers and from offshore, and is therefore much less affected by human activity. Effects of coastal eutrophication These increasing nitrogen and phosphorus nutrient inputs exert eutrophication pressures on coastal zones. These pressures vary geographically depending on the catchment activities and the associated nutrient load. The geographical setting of the coastal zone is another important factor as it controls dilution of the nutrient load and oxygen exchange with the atmosphere. The effects of these eutrophication pressures can be seen in several different ways: There is evidence from satellite monitoring that the amounts of chlorophyll, as a measure of overall phytoplankton activity, are increasing in many coastal areas worldwide due to increased nutrient inputs. The phytoplankton species composition may change due to increased nutrient loadings and changes in the proportions of key nutrients. In particular, the increases in nitrogen and phosphorus inputs, along with much smaller changes in silicon inputs, create changes in the ratio of nitrogen and phosphorus to silicon. These changing nutrient ratios drive changes in phytoplankton species composition, particularly disadvantaging silica-rich phytoplankton species such as diatoms compared to other species. This process leads to the development of nuisance algal blooms in areas such as the North Sea (see also OSPAR Convention) and the Black Sea. In some cases nutrient enrichment can lead to harmful algal blooms (HABs). Such blooms can occur naturally, but there is good evidence that these are increasing as a result of nutrient enrichment, although the causal linkage between nutrient enrichment and HABs is not straightforward. Oxygen depletion has existed in some coastal seas such as the Baltic for thousands of years. In such areas the density structure of the water column severely restricts water column mixing and the associated oxygenation of deep water. However, increases in the inputs of bacterially degradable organic matter to such isolated deep waters can exacerbate such oxygen depletion. These areas of lower dissolved oxygen have increased globally in recent decades.
They are usually connected with nutrient enrichment and the resulting algal blooms. Climate change will generally tend to increase water column stratification and so exacerbate this oxygen depletion problem. An example of such coastal oxygen depletion is in the Gulf of Mexico, where a zone of seasonal anoxia covering more than 5,000 square miles has developed since the 1950s. The increased primary production driving this anoxia is fueled by nutrients supplied by the Mississippi River. A similar process has been documented in the Black Sea. Hypolimnetic oxygen depletion can lead to summer "kills". During summer stratification, inputs of organic matter and sedimentation of primary producers can increase rates of respiration in the hypolimnion. If oxygen depletion becomes extreme, aerobic organisms (such as fish) may die, resulting in what is known as a "summer kill". Extent of the problem Surveys showed that 54% of lakes in Asia are eutrophic; in Europe, 53%; in North America, 48%; in South America, 41%; and in Africa, 28%. In South Africa, a study by the CSIR using remote sensing has shown that more than 60% of the reservoirs surveyed were eutrophic. The World Resources Institute has identified 375 hypoxic coastal zones in the world, concentrated in coastal areas in Western Europe, the Eastern and Southern coasts of the US, and East Asia, particularly Japan. Prevention Society can take a number of steps to minimize eutrophication and thereby reduce its harmful effects on humans and other living organisms, some of which are as follows: Minimizing pollution from sewage There are multiple ways to address cultural eutrophication where raw sewage is a point source of pollution. For example, sewage treatment plants can be upgraded for biological nutrient removal so that they discharge much less nitrogen and phosphorus to the receiving water body. However, even with good secondary treatment, most final effluents from sewage treatment works contain substantial concentrations of nitrogen as nitrate, nitrite or ammonia. Removal of these nutrients is an expensive and often difficult process. Laws regulating the discharge and treatment of sewage have led to dramatic nutrient reductions to surrounding ecosystems. Because untreated domestic sewage is a major contributor to the nutrient loading of water bodies, it is necessary to provide treatment facilities to highly urbanized areas, particularly in developing countries, where treatment of domestic wastewater is scarce. The technology to safely and efficiently reuse wastewater, both from domestic and industrial sources, should be a primary concern for policy regarding eutrophication. Minimizing nutrient pollution by agriculture There are many ways to help fix cultural eutrophication caused by agriculture. Some recommendations issued by the U.S. Department of Agriculture include: Nutrient management techniques - Anyone using fertilizers should apply fertilizer in the correct amount, at the right time of year, with the right method and placement. Organically fertilized fields can "significantly reduce harmful nitrate leaching" compared to conventionally fertilized fields. Eutrophication impacts are in some cases higher from organic production than they are from conventional production. In Japan the amount of nitrogen produced by livestock is adequate to serve the fertilizer needs of the agriculture industry.
Year-round ground cover - a cover crop will prevent periods of bare ground, thus reducing erosion and the runoff of nutrients even after the growing season has passed. Planting field buffers - Planting trees, shrubs and grasses along the edges of fields can help catch the runoff and absorb some nutrients before the water reaches a nearby water body. Riparian buffer zones are interfaces between a flowing body of water and land, and have been created near waterways in an attempt to filter pollutants; sediment and nutrients are deposited here instead of in the water. Creating buffer zones near farms and roads is another possible way to prevent nutrients from traveling too far. Conservation tillage - Reducing the frequency and intensity of tilling enhances the chance of nutrients being absorbed into the ground. Policy The United Nations framework for the Sustainable Development Goals recognizes the damaging effects of eutrophication for marine environments. It has established a timeline for creating an Index of Coastal Eutrophication and Floating Plastic Debris Density (ICEP) within Sustainable Development Goal 14 (life below water). SDG 14 specifically has a target to: "by 2025, prevent and significantly reduce marine pollution of all kinds, in particular from land-based activities, including marine debris and nutrient pollution". Policy and regulations are a set of tools to minimize causes of eutrophication. Nonpoint sources of pollution are the primary contributors to eutrophication, and their effects can be minimized through common agricultural practices. Reducing the amount of pollutants that reach a watershed can be achieved through the protection of its forest cover, reducing erosion and nutrient leaching into the watershed. Also, through the efficient, controlled use of land using sustainable agricultural practices to minimize land degradation, the amount of soil runoff and nitrogen-based fertilizers reaching a watershed can be reduced. Waste disposal technology constitutes another factor in eutrophication prevention. Because a body of water can have an effect on a range of people reaching far beyond that of the watershed, cooperation between different organizations is necessary to prevent the intrusion of contaminants that can lead to eutrophication. Agencies ranging from state governments to water resource management bodies and non-governmental organizations, down to the local population, share responsibility for preventing the eutrophication of water bodies. In the United States, the best-known inter-state effort to prevent eutrophication concerns the Chesapeake Bay. Reversal and remediation Reducing nutrient inputs is a crucial precondition for restoration. Still, there are two caveats: firstly, it can take a long time, mainly because of the storage of nutrients in sediments; secondly, restoration may need more than a simple reversal of inputs, since there are sometimes several stable but very different ecological states. Recovery of eutrophicated lakes is slow, often requiring several decades. In environmental remediation, nutrient removal technologies include biofiltration, which uses living material to capture and biologically degrade pollutants. Examples include green belts, riparian areas, natural and constructed wetlands, and treatment ponds. Algae bloom forecasting The National Oceanic and Atmospheric Administration (NOAA) in the United States has created a forecasting tool for regions such as the Great Lakes, the Gulf of Maine, and the Gulf of Mexico.
Shorter-term predictions can help to show the intensity, location, and trajectory of blooms in order to warn the most directly affected communities. Longer-term analyses in specific regions and water bodies help to predict larger-scale factors, such as the scale of future blooms and the conditions that could lead to more adverse effects. Nutrient bioextraction Nutrient bioextraction is bioremediation involving cultured plants and animals. Nutrient bioextraction or bioharvesting is the practice of farming and harvesting shellfish and seaweed to remove nitrogen and other nutrients from natural water bodies. Shellfish in estuaries It has been suggested that nitrogen removal by oyster reefs could generate net benefits for sources facing nitrogen emission restrictions, similar to other nutrient trading scenarios. Specifically, if oysters maintain nitrogen levels in estuaries below thresholds, then oysters effectively stave off an enforcement response and the compliance costs that parties responsible for nitrogen emissions would otherwise incur. Several studies have shown that oysters and mussels can dramatically impact nitrogen levels in estuaries. Filter feeding activity is considered beneficial to water quality by controlling phytoplankton density and sequestering nutrients, which can be removed from the system through shellfish harvest, buried in the sediments, or lost through denitrification. Foundational work toward the idea of improving marine water quality through shellfish cultivation was conducted by Odd Lindahl et al., using mussels in Sweden. In the United States, shellfish restoration projects have been conducted on the East, West and Gulf coasts. Seaweed farming Studies have demonstrated the potential of seaweed to remove excess nitrogen. Seaweed aquaculture offers an opportunity to mitigate and adapt to climate change. Seaweed, such as kelp, also absorbs phosphorus and nitrogen and is thus helpful for removing excessive nutrients from polluted parts of the sea. Some cultivated seaweeds have very high productivity and could absorb large quantities of N, P, and CO2 while producing large amounts of O2, and therefore have an excellent effect on decreasing eutrophication. It is believed that large-scale seaweed cultivation could be a good solution to the eutrophication problem in coastal waters. Geo-engineering Another technique for combatting hypoxia/eutrophication in localized situations is the direct injection of compressed air, a technique used in the restoration of the Salford Docks area of the Manchester Ship Canal in England. For smaller-scale waters such as aquaculture ponds, pump aeration is standard. Chemical removal of phosphorus Removing phosphorus can remediate eutrophication. Many materials have been investigated as phosphate sorbents; of these, alum (aluminium sulfate) is of particular practical interest. The phosphate sorbent is commonly applied at the surface of the water body and sinks to the bottom of the lake, reducing phosphate availability; such sorbents have been applied worldwide to manage eutrophication and algal blooms (for example under the commercial name Phoslock). In a large-scale study, 114 lakes were monitored for the effectiveness of alum at phosphorus reduction. Across all lakes, alum effectively reduced phosphorus for an average of 11 years. While longevity varied (21 years in deep lakes and 5.7 years in shallow lakes), the results demonstrate the effectiveness of alum at controlling phosphorus within lakes. Alum treatment is less effective in shallow lakes, as well as in lakes with substantial external phosphorus loading.
Finnish phosphorus removal measures started in the mid-1970s and have targeted rivers and lakes polluted by industrial and municipal discharges. These efforts have had a 90% removal efficiency. Still, some targeted point sources did not show a decrease in runoff despite reduction efforts.
Physical sciences
Water: General
Earth science
54888
https://en.wikipedia.org/wiki/Telomere
Telomere
A telomere is a region of repetitive nucleotide sequences associated with specialized proteins at the ends of linear chromosomes (see Sequences). Telomeres are a widespread genetic feature most commonly found in eukaryotes. In most, if not all, species possessing them, they protect the terminal regions of chromosomal DNA from progressive degradation and ensure the integrity of linear chromosomes by preventing DNA repair systems from mistaking the very ends of the DNA strand for a double-strand break. Discovery The existence of a special structure at the ends of chromosomes was independently proposed in 1938 by Hermann Joseph Muller, studying the fruit fly Drosophila melanogaster, and in 1939 by Barbara McClintock, working with maize. Muller observed that the ends of irradiated fruit fly chromosomes did not present alterations such as deletions or inversions. He hypothesized the presence of a protective cap, which he coined "telomeres", from the Greek telos (end) and meros (part). In the early 1970s, Soviet theorist Alexey Olovnikov first recognized that chromosomes could not completely replicate their ends; this is known as the "end replication problem". Building on this, and accommodating Leonard Hayflick's idea of limited somatic cell division, Olovnikov suggested that DNA sequences are lost every time a cell replicates until the loss reaches a critical level, at which point cell division ends. According to his theory of marginotomy, DNA sequences at the ends of telomeres are represented by tandem repeats, which create a buffer that determines the number of divisions that a certain cell clone can undergo. Furthermore, it was predicted that a specialized DNA polymerase (originally called a tandem-DNA-polymerase) could extend telomeres in immortal tissues such as the germ line, cancer cells and stem cells. It also followed from this hypothesis that organisms with circular genomes, such as bacteria, do not have the end replication problem and therefore do not age. Olovnikov suggested that in germline cells, cells of vegetatively propagated organisms, and immortal cell populations such as most cancer cell lines, an enzyme might be activated to prevent the shortening of DNA termini with each cell division. In 1975–1977, Elizabeth Blackburn, working as a postdoctoral fellow at Yale University with Joseph G. Gall, discovered the unusual nature of telomeres, with their simple repeated DNA sequences composing chromosome ends. Blackburn, Carol Greider, and Jack Szostak were awarded the 2009 Nobel Prize in Physiology or Medicine for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase. Structure and function End replication problem During DNA replication, DNA polymerase cannot replicate the sequences present at the 3' ends of the parent strands. This is a consequence of its unidirectional mode of DNA synthesis: it can only attach new nucleotides to an existing 3'-end (that is, synthesis progresses 5'-3') and thus it requires a primer to initiate replication. On the leading strand (oriented 5'-3' within the replication fork), DNA-polymerase continuously replicates from the point of initiation all the way to the strand's end, with the primer (made of RNA) then being excised and substituted by DNA.
The lagging strand, however, is oriented 3'-5' with respect to the replication fork, so continuous replication by DNA-polymerase is impossible, which necessitates discontinuous replication involving the repeated synthesis of primers further 5' of the site of initiation (see lagging strand replication). The last primer to be involved in lagging-strand replication sits near the 3'-end of the template (corresponding to the potential 5'-end of the lagging strand). Originally it was believed that the last primer would sit at the very end of the template; thus, once removed, the DNA-polymerase that substitutes primers with DNA (DNA-Pol δ in eukaryotes) would be unable to synthesize the "replacement DNA" from the 5'-end of the lagging strand, so that the template nucleotides previously paired to the last primer would not be replicated. It has since been questioned whether the last lagging-strand primer is placed exactly at the 3'-end of the template, and it has been demonstrated that it is rather synthesized at a distance of about 70–100 nucleotides, which is consistent with the finding that DNA in cultured human cells is shortened by 50–100 base pairs per cell division. If coding sequences were degraded in this process, potentially vital genetic code would be lost. Telomeres are non-coding, repetitive sequences located at the termini of linear chromosomes that act as buffers for the coding sequences behind them. They "cap" the end-sequences and are progressively degraded in the process of DNA replication. The "end replication problem" is exclusive to linear chromosomes, as circular chromosomes do not have ends lying beyond the reach of DNA-polymerases. Most prokaryotes, relying on circular chromosomes, accordingly do not possess telomeres. A small fraction of bacterial chromosomes (such as those in Streptomyces, Agrobacterium, and Borrelia), however, are linear and possess telomeres, which are very different from those of the eukaryotic chromosomes in structure and function. The known structures of bacterial telomeres take the form of proteins bound to the ends of linear chromosomes, or hairpin loops of single-stranded DNA at the ends of the linear chromosomes. Telomere ends and shelterin At the very 3'-end of the telomere there is a 300 base pair overhang which can invade the double-stranded portion of the telomere, forming a structure known as a T-loop. This loop is analogous to a knot, which stabilizes the telomere and prevents the telomere ends from being recognized as breakpoints by the DNA repair machinery. Should non-homologous end joining occur at the telomeric ends, chromosomal fusion would result. The T-loop is maintained by several proteins, collectively referred to as the shelterin complex. In humans, the shelterin complex consists of six proteins identified as TRF1, TRF2, TIN2, POT1, TPP1, and RAP1. In many species, the sequence repeats are enriched in guanine, e.g. TTAGGG in vertebrates, which allows the formation of G-quadruplexes, a special conformation of DNA involving non-Watson-Crick base pairing. There are different subtypes depending on the involvement of single- or double-stranded DNA, among other things. There is evidence that the 3'-overhang in ciliates (which possess telomere repeats similar to those found in vertebrates) forms such G-quadruplexes that accommodate it, rather than a T-loop. G-quadruplexes present an obstacle for enzymes such as DNA-polymerases and are thus thought to be involved in the regulation of replication and transcription.
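To make the cumulative effect of the end replication problem concrete, the following toy calculation uses the 50–100 base pairs lost per division quoted above; the starting telomere length and the critical threshold are hypothetical round numbers chosen purely for illustration, not measured values.

```python
# Toy illustration of cumulative telomere loss from the end replication problem.
# The 50-100 bp per division figure comes from the text above; the 10 kb starting
# length and 3 kb critical threshold are hypothetical round numbers.

def divisions_until_critical(start_bp=10_000, critical_bp=3_000, loss_per_division_bp=75):
    divisions = 0
    length = start_bp
    while length > critical_bp:
        length -= loss_per_division_bp
        divisions += 1
    return divisions

for loss in (50, 75, 100):
    n = divisions_until_critical(loss_per_division_bp=loss)
    print(f"losing {loss} bp/division: ~{n} divisions before the critical length is reached")
```

Even with these rough numbers, the count of divisions comes out in the tens to low hundreds, the same order of magnitude as the Hayflick limit discussed later in the article.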
Telomerase Many organisms have a ribonucleoprotein enzyme called telomerase, which carries out the task of adding repetitive nucleotide sequences to the ends of the DNA. Telomerase "replenishes" the telomere "cap" and requires no ATP. In most multicellular eukaryotic organisms, telomerase is active only in germ cells, some types of stem cells such as embryonic stem cells, and certain white blood cells. Telomerase can be reactivated and telomeres reset back to an embryonic state by somatic cell nuclear transfer. The steady shortening of telomeres with each replication in somatic (body) cells may have a role in senescence and in the prevention of cancer. This is because the telomeres act as a sort of time-delay "fuse", eventually running out after a certain number of cell divisions and resulting in the eventual loss of vital genetic information from the cell's chromosome with future divisions. Length Telomere length varies greatly between species, from approximately 300 base pairs in yeast to many kilobases in humans, and usually is composed of arrays of guanine-rich, six- to eight-base-pair-long repeats. Eukaryotic telomeres normally terminate with a 3′ single-stranded DNA overhang, ranging from 75 to 300 bases, which is essential for telomere maintenance and capping. Multiple proteins binding single- and double-stranded telomere DNA have been identified. These function in both telomere maintenance and capping. Telomeres form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle, stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop. Shortening Oxidative damage Apart from the end replication problem, in vitro studies have shown that telomeres accumulate damage due to oxidative stress and that oxidative stress-mediated DNA damage has a major influence on telomere shortening in vivo. There is a multitude of ways in which oxidative stress, mediated by reactive oxygen species (ROS), can lead to DNA damage; however, it is still unclear whether the elevated rate in telomeres is brought about by their inherent susceptibility or by a diminished activity of DNA repair systems in these regions. Despite widespread agreement with the findings, methodological flaws regarding measurement and sampling have been pointed out; for example, a suspected species and tissue dependency of oxidative damage to telomeres is said to be insufficiently accounted for. Population-based studies have indicated an interaction between anti-oxidant intake and telomere length. In the Long Island Breast Cancer Study Project (LIBCSP), authors found a moderate increase in breast cancer risk among women with the shortest telomeres and lower dietary intake of beta carotene, vitamin C or E. These results suggest that cancer risk due to telomere shortening may interact with other mechanisms of DNA damage, specifically oxidative stress. Association with aging Although telomeres shorten during the lifetime of an individual, it is the telomere shortening rate, rather than telomere length, that is associated with the lifespan of a species. Critically short telomeres trigger a DNA damage response and cellular senescence.
Mice have much longer telomeres, but a greatly accelerated telomere shortening rate and greatly reduced lifespan compared to humans and elephants. Telomere shortening is associated with aging, mortality, and aging-related diseases in experimental animals. Although many factors can affect human lifespan, such as smoking, diet, and exercise, as persons approach the upper limit of human life expectancy, longer telomeres may be associated with lifespan. Potential effect of psychological stress Meta-analyses found that increased perceived psychological stress was associated with a small decrease in telomere length, but that these associations attenuate to no significant association when accounting for publication bias. The literature concerning telomeres as integrative biomarkers of exposure to stress and adversity is dominated by cross-sectional and correlational studies, which makes causal interpretation problematic. A 2020 review argued that the relationship between psychosocial stress and telomere length appears strongest for stress experienced in utero or early life. Lengthening The phenomenon of limited cellular division was first observed by Leonard Hayflick, and is now referred to as the Hayflick limit. Significant discoveries were subsequently made by a group of scientists organized at Geron Corporation by Geron's founder Michael D. West, who tied telomere shortening to the Hayflick limit. The cloning of the catalytic component of telomerase enabled experiments to test whether the expression of telomerase at levels sufficient to prevent telomere shortening was capable of immortalizing human cells. Telomerase was demonstrated in a 1998 publication in Science to be capable of extending cell lifespan, and is now well recognized as capable of immortalizing human somatic cells. Two studies on long-lived seabirds demonstrate that the role of telomeres is far from being understood. In 2003, scientists observed that the telomeres of Leach's storm-petrel (Oceanodroma leucorhoa) seem to lengthen with chronological age, the first observed instance of such behaviour of telomeres. A study reported that telomere length of different mammalian species correlates inversely rather than directly with lifespan, and concluded that the contribution of telomere length to lifespan remains controversial. There is little evidence that, in humans, telomere length is a significant biomarker of normal aging with respect to important cognitive and physical abilities. Sequences Experimentally verified and predicted telomere sequence motifs from more than 9000 species are collected in the research-community-curated database TeloBase. Some of the experimentally verified telomere nucleotide sequences are also listed in the Telomerase Database website (see nucleic acid notation for letter representations). Research on disease risk Preliminary research indicates that disease risk in aging may be associated with telomere shortening, senescent cells, or SASP (senescence-associated secretory phenotype). Measurement Several techniques are currently employed to assess average telomere length in eukaryotic cells. One method is the Terminal Restriction Fragment (TRF) Southern blot. There is a Web-based Analyser of the Length of Telomeres (WALTER), a software tool for processing TRF pictures. A real-time PCR assay for telomere length involves determining the Telomere-to-Single Copy Gene (T/S) ratio, which has been demonstrated to be proportional to the average telomere length in a cell.
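As an illustration of the T/S calculation described above, the sketch below applies the common 2^-ΔΔCt convention, assuming perfect amplification efficiency; the Ct values are hypothetical, and a real assay would follow the published qPCR protocol, use replicates, and normalize to a reference DNA run on the same plate.

```python
# Minimal sketch of a relative T/S (telomere / single-copy gene) calculation from qPCR
# Ct values, assuming 100% amplification efficiency (the 2^-ddCt convention).
# All Ct numbers below are hypothetical illustrations, not measured data.

def t_s_ratio(ct_telomere, ct_single_copy):
    """Relative telomere signal per genome copy: 2^-(Ct_telomere - Ct_single_copy)."""
    return 2 ** -(ct_telomere - ct_single_copy)

def relative_t_s(sample_cts, reference_cts):
    """T/S of a sample normalized to a reference DNA measured under the same conditions."""
    return t_s_ratio(*sample_cts) / t_s_ratio(*reference_cts)

if __name__ == "__main__":
    reference = (15.2, 18.4)   # (telomere Ct, single-copy gene Ct) for the reference DNA
    sample = (16.0, 18.5)      # the same pair of Ct values for the test sample
    print(f"relative T/S = {relative_t_s(sample, reference):.2f}")
```

Because the T/S ratio is only proportional to telomere length, such values are interpreted relative to a calibrator rather than as absolute base-pair lengths.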
Tools have also been developed to estimate telomere length from whole-genome sequencing (WGS) experiments. Amongst these are TelSeq, Telomerecat and telomereHunter. Length estimation from WGS typically works by identifying telomeric sequencing reads and then inferring the telomere length that would have produced that number of reads. These methods have been shown to correlate with preexisting methods of estimation such as PCR and TRF. Flow-FISH is used to quantify the length of telomeres in human white blood cells. A semi-automated method for measuring the average length of telomeres with Flow FISH was published in Nature Protocols in 2006. While multiple companies offer telomere length measurement services, the utility of these measurements for widespread clinical or personal use has been questioned. Nobel Prize winner Elizabeth Blackburn, who was co-founder of one company, promoted the clinical utility of telomere length measures. In wildlife During the last two decades, eco-evolutionary studies have investigated the relevance of life-history traits and environmental conditions for the telomeres of wildlife. Most of these studies have been conducted in endotherms, i.e. birds and mammals. They have provided evidence for the inheritance of telomere length; however, heritability estimates vary greatly within and among species. Age and telomere length often negatively correlate in vertebrates, but this decline is variable among taxa and linked to the method used for estimating telomere length. In contrast, the available information shows no sex differences in telomere length across vertebrates. Phylogeny and life-history traits such as body size or the pace of life can also affect telomere dynamics; for example, such effects have been described across species of birds and mammals. In 2019, a meta-analysis confirmed that exposure to stressors (e.g. pathogen infection, competition, reproductive effort and high activity level) was associated with shorter telomeres across different animal taxa. Studies on ectotherms, and other non-mammalian organisms, show that there is no single universal model of telomere erosion; rather, there is wide variation in relevant dynamics across Metazoa, and even within smaller taxonomic groups these patterns appear diverse.
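The WGS-based estimation described above can be illustrated with a minimal sketch that counts reads dominated by the canonical vertebrate repeat TTAGGG and reports them as a fraction of all reads. This mimics only the general idea behind tools such as TelSeq, not their actual algorithms; the file name and the repeat-count threshold are placeholders.

```python
# Minimal sketch of WGS telomere-content estimation: flag reads containing many copies of
# the canonical repeat TTAGGG (or its reverse complement) and report their fraction.
# "reads.fastq" and the min_repeats threshold are illustrative placeholders only.

REPEATS = ("TTAGGG", "CCCTAA")  # forward repeat and its reverse complement

def is_telomeric(read, min_repeats=7):
    """Call a read telomeric if it contains at least min_repeats copies of the repeat."""
    return any(read.count(r) >= min_repeats for r in REPEATS)

def telomeric_fraction(fastq_path):
    total = telomeric = 0
    with open(fastq_path) as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:               # sequence lines occupy every fourth FASTQ record line
                total += 1
                telomeric += is_telomeric(line.strip().upper())
    return telomeric / total if total else 0.0

if __name__ == "__main__":
    print(f"telomeric read fraction: {telomeric_fraction('reads.fastq'):.2e}")
```

Published tools go further by correcting for sequencing coverage, GC bias and the number of chromosome ends before converting such read counts into an average length in base pairs.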
Biology and health sciences
Molecular biology
Biology
54910
https://en.wikipedia.org/wiki/Chlorofluorocarbon
Chlorofluorocarbon
Chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) are fully or partly halogenated hydrocarbons that contain carbon (C), hydrogen (H), chlorine (Cl), and fluorine (F), produced as volatile derivatives of methane, ethane, and propane. The most common example is dichlorodifluoromethane (R-12). R-12 is also commonly called Freon and was used as a refrigerant. Many CFCs have been widely used as refrigerants, propellants (in aerosol applications), gaseous fire suppression systems, and solvents. As a result of CFCs contributing to ozone depletion in the upper atmosphere, the manufacture of such compounds has been phased out under the Montreal Protocol, and they are being replaced with other products such as hydrofluorocarbons (HFCs) and hydrofluoroolefins (HFOs) including R-410A, R-134a and R-1234yf. Structure, properties and production As in simpler alkanes, the carbon atoms in CFCs bond with tetrahedral symmetry. Because the fluorine and chlorine atoms differ greatly in size and effective charge from hydrogen and from each other, the methane-derived CFCs deviate from perfect tetrahedral symmetry. The physical properties of CFCs and HCFCs are tunable by changes in the number and identity of the halogen atoms. In general, they are volatile but less so than their parent alkanes. The decreased volatility is attributed to the molecular polarity induced by the halides, which induces intermolecular interactions. Thus, methane boils at −161 °C whereas the fluoromethanes boil between −51.7 °C (CF2H2) and −128 °C (CF4). The CFCs have still higher boiling points because the chloride is even more polarizable than fluoride. Because of their polarity, the CFCs are useful solvents, and their boiling points make them suitable as refrigerants. The CFCs are far less flammable than methane, in part because they contain fewer C-H bonds and in part because, in the case of the chlorides and bromides, the released halides quench the free radicals that sustain flames. The densities of CFCs are higher than those of their corresponding alkanes. In general, the density of these compounds correlates with the number of chlorides. CFCs and HCFCs are usually produced by halogen exchange starting from chlorinated methanes and ethanes. Illustrative is the synthesis of chlorodifluoromethane from chloroform: HCCl3 + 2 HF → HCF2Cl + 2 HCl Brominated derivatives are generated by free-radical reactions of hydrochlorofluorocarbons, replacing C-H bonds with C-Br bonds. The production of the anesthetic 2-bromo-2-chloro-1,1,1-trifluoroethane ("halothane") is illustrative: CF3CH2Cl + Br2 → CF3CHBrCl + HBr Applications CFCs and HCFCs are used in various applications because of their low toxicity, reactivity and flammability. Every permutation of fluorine, chlorine and hydrogen based on methane and ethane has been examined and most have been commercialized. Furthermore, many examples are known for higher numbers of carbon as well as related compounds containing bromine. Uses include refrigerants, blowing agents, aerosol propellants in medicinal applications, and degreasing solvents. Billions of kilograms of chlorodifluoromethane are produced annually as a precursor to tetrafluoroethylene, the monomer that is converted into Teflon. Classes of compounds and numbering system Chlorofluorocarbons (CFCs): when derived from methane and ethane these compounds have the formulae CClmF4−m and C2ClmF6−m, where m is nonzero.
Hydrochlorofluorocarbons (HCFCs): when derived from methane and ethane these compounds have the formulae CClmFnH4−m−n and C2ClxFyH6−x−y, where m, n, x, and y are nonzero. Bromofluorocarbons have formulae similar to the CFCs and HCFCs but also include bromine. Hydrofluorocarbons (HFCs): when derived from methane, ethane, propane, and butane, these compounds have the respective formulae CFmH4−m, C2FmH6−m, C3FmH8−m, and C4FmH10−m, where m is nonzero. Numbering system A special numbering system is used for fluorinated alkanes, prefixed with Freon-, R-, CFC- and HCFC-, where the rightmost digit indicates the number of fluorine atoms, the next digit to the left is the number of hydrogen atoms plus 1, and the next digit to the left is the number of carbon atoms less one (zeroes are not stated); the remaining atoms are chlorine. Freon-12, for example, indicates a methane derivative (only two numbers) containing two fluorine atoms (the second 2) and no hydrogen (1−1=0). It is therefore CCl2F2. Another method for obtaining the correct molecular formula of the CFC/R/Freon class compounds is to take the number and add 90 to it. The resulting value gives the number of carbon atoms as the first numeral, the second numeral gives the number of hydrogen atoms, and the third numeral gives the number of fluorine atoms. The rest of the unaccounted carbon bonds are occupied by chlorine atoms. The result is always a three-figure number. An easy example is that of CFC-12, which gives: 90+12=102 -> 1 carbon, 0 hydrogens, 2 fluorine atoms, and hence 2 chlorine atoms, resulting in CCl2F2. The main advantage of this method of deducing the molecular composition, compared with the method described above, is that it gives the number of carbon atoms of the molecule. Freons containing bromine are signified by four numbers. Isomers, which are common for ethane and propane derivatives, are indicated by letters following the numbers. Reactions The reaction of CFCs that is responsible for the depletion of ozone is the photo-induced scission of a C-Cl bond: CCl3F → CCl2F. + Cl. The chlorine atom, often written as Cl., behaves very differently from the chlorine molecule (Cl2). The radical Cl. is long-lived in the upper atmosphere, where it catalyzes the conversion of ozone into O2. Ozone absorbs UV-B radiation, so its depletion allows more of this high-energy radiation to reach the Earth's surface. Bromine atoms are even more efficient catalysts; hence brominated CFCs are also regulated. Impact as greenhouse gases CFCs were phased out via the Montreal Protocol due to their part in ozone depletion. The atmospheric impacts of CFCs are not limited to their role as ozone-depleting chemicals. Infrared absorption bands prevent heat at those wavelengths from escaping Earth's atmosphere. CFCs have their strongest absorption bands from C-F and C-Cl bonds in the spectral region of 7.8–15.3 μm, referred to as the "atmospheric window" due to the relative transparency of the atmosphere within this region. The strength of CFC absorption bands and the unique susceptibility of the atmosphere at wavelengths where CFCs (indeed all covalent fluorine compounds) absorb radiation creates a "super" greenhouse effect from CFCs and other unreactive fluorine-containing gases such as perfluorocarbons, HFCs, HCFCs, bromofluorocarbons, SF6, and NF3. This "atmospheric window" absorption is intensified by the low concentration of each individual CFC.
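The "add 90" rule described under the numbering system above is mechanical enough to capture in a few lines. The sketch below assumes a saturated, acyclic carbon skeleton and does not handle isomer letters or the four-digit bromine-containing designations.

```python
# Minimal sketch of the "add 90" rule: number + 90 yields a three-digit value whose digits
# are the counts of carbon, hydrogen and fluorine; remaining positions on a saturated
# carbon skeleton are filled by chlorine. Isomer suffixes (a, b, ...) are not handled.

def cfc_formula(number):
    value = number + 90
    carbons = value // 100
    hydrogens = (value // 10) % 10
    fluorines = value % 10
    # A saturated acyclic skeleton with n carbons has 2n + 2 substituent positions.
    chlorines = 2 * carbons + 2 - hydrogens - fluorines
    parts = []
    for symbol, count in (("C", carbons), ("H", hydrogens), ("Cl", chlorines), ("F", fluorines)):
        if count > 0:
            parts.append(symbol + (str(count) if count > 1 else ""))
    return "".join(parts)

print(cfc_formula(12))   # CFC-12  -> CCl2F2
print(cfc_formula(11))   # CFC-11  -> CCl3F
print(cfc_formula(134))  # R-134 backbone -> C2H2F4 (the "a" isomer letter is ignored)
```

Running the sketch on CFC-12 reproduces CCl2F2, matching the worked example in the text.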
Because CO2 is close to saturation, with high concentrations and few infrared absorption bands, the radiation budget and hence the greenhouse effect have low sensitivity to changes in CO2 concentration; the increase in temperature is roughly logarithmic. Conversely, the low concentration of CFCs allows their effects to increase linearly with mass, so that chlorofluorocarbons are greenhouse gases with a much higher potential to enhance the greenhouse effect than CO2. Groups are actively disposing of legacy CFCs to reduce their impact on the atmosphere. According to NASA in 2018, the hole in the ozone layer has begun to recover as a result of CFC bans. However, research released in 2019 reported an alarming increase in CFCs, pointing to unregulated use in China. History Prior to, and during, the 1920s, refrigerators used toxic gases as refrigerants, including ammonia, sulphur dioxide, and chloromethane. Later in the 1920s, after a series of fatal accidents involving the leaking of chloromethane from refrigerators, a major collaborative effort began between the American corporations Frigidaire, General Motors, and DuPont to develop a safer, non-toxic alternative. Thomas Midgley Jr. of General Motors is credited with synthesizing the first chlorofluorocarbons. The Frigidaire corporation was issued the first patent, number 1,886,339, for the formula for CFCs on December 31, 1928. In a 1930 demonstration for the American Chemical Society, Midgley flamboyantly demonstrated all these properties by inhaling a breath of the gas and using it to blow out a candle. By 1930, General Motors and Du Pont had formed the Kinetic Chemical Company to produce Freon, and by 1935, over 8 million refrigerators utilizing R-12 had been sold by Frigidaire and its competitors. In 1932, Carrier began using R-11 in the world's first self-contained home air conditioning unit, known as the "atmospheric cabinet". Because CFCs are largely non-toxic, they quickly became the coolant of choice in large air-conditioning systems. Public health codes in cities were revised to designate chlorofluorocarbons as the only gases that could be used as refrigerants in public buildings. Growth in CFCs continued over the following decades, leading to peak annual sales of over 1 billion USD, with more than 1 million metric tonnes being produced annually. It was not until 1974 that two University of California chemists, Professor F. Sherwood Rowland and Dr. Mario Molina, discovered that the use of chlorofluorocarbons was causing a significant depletion of atmospheric ozone concentrations. This initiated the environmental effort which eventually resulted in the enactment of the Montreal Protocol. Commercial development and use in fire extinguishing During World War II, various chloroalkanes were in standard use in military aircraft, although these early halons suffered from excessive toxicity. Nevertheless, after the war they slowly became more common in civil aviation as well. In the 1960s, fluoroalkanes and bromofluoroalkanes became available and were quickly recognized as being highly effective fire-fighting materials. Much early research with Halon 1301 was conducted under the auspices of the US Armed Forces, while Halon 1211 was, initially, mainly developed in the UK. By the late 1960s they were standard in many applications where water and dry-powder extinguishers posed a threat of damage to the protected property, including computer rooms, telecommunications switches, laboratories, museums and art collections.
Beginning with warships in the 1970s, bromofluoroalkanes also progressively came to be associated with rapid knockdown of severe fires in confined spaces with minimal risk to personnel. By the early 1980s, bromofluoroalkanes were in common use on aircraft, ships, and large vehicles, as well as in computer facilities and galleries. However, concern was beginning to be expressed about the impact of chloroalkanes and bromoalkanes on the ozone layer. The Vienna Convention for the Protection of the Ozone Layer did not cover bromofluoroalkanes under the same restrictions; instead, the consumption of bromofluoroalkanes was frozen at 1986 levels. This was because emergency discharge of extinguishing systems was thought to be too small in volume to produce a significant impact, and too important to human safety to restrict. Regulation Since the late 1970s, the use of CFCs has been heavily regulated because of their destructive effects on the ozone layer. After the development of his electron capture detector, James Lovelock was the first to detect the widespread presence of CFCs in the air, finding a mole fraction of 60 ppt of CFC-11 over Ireland. In a self-funded research expedition ending in 1973, Lovelock went on to measure CFC-11 in both the Arctic and Antarctic, finding the presence of the gas in each of 50 air samples collected, and concluding that CFCs were not hazardous to the environment. The experiment did however provide the first useful data on the presence of CFCs in the atmosphere. The damage caused by CFCs was discovered by Sherry Rowland and Mario Molina who, after hearing a lecture on the subject of Lovelock's work, embarked on research resulting in the first publication suggesting the connection in 1974. It turns out that one of CFCs' most attractive features, their low reactivity, is key to their most destructive effects. CFCs' lack of reactivity gives them a lifespan that can exceed 100 years, giving them time to diffuse into the upper stratosphere. Once in the stratosphere, the sun's ultraviolet radiation is strong enough to cause the homolytic cleavage of the C-Cl bond. In 1976, under the Toxic Substances Control Act, the EPA banned the commercial manufacturing and use of CFCs as aerosol propellants. This was later superseded in the 1990 amendments to the Clean Air Act to address stratospheric ozone depletion. By 1987, in response to a dramatic seasonal depletion of the ozone layer over Antarctica, diplomats in Montreal forged a treaty, the Montreal Protocol, which called for drastic reductions in the production of CFCs. On 2 March 1989, 12 European Community nations agreed to ban the production of all CFCs by the end of the century. In 1990, diplomats met in London and voted to significantly strengthen the Montreal Protocol by calling for a complete elimination of CFCs by 2000. By 2010, CFCs should have been completely eliminated from developing countries as well. Because the only CFCs available to countries adhering to the treaty are from recycling, their prices have increased considerably. A worldwide end to production should also terminate the smuggling of this material. However, there are current CFC smuggling issues, as recognized by the United Nations Environment Programme (UNEP) in a 2006 report titled "Illegal Trade in Ozone Depleting Substances". UNEP estimates that between 16,000 and 38,000 tonnes of CFCs passed through the black market in the mid-1990s.
The report estimated that between 7,000 and 14,000 tonnes of CFCs are smuggled annually into developing countries. Asian countries are those with the most smuggling; as of 2007, China, India and South Korea were found to account for around 70% of global CFC production, with South Korea later banning CFC production in 2010. Possible reasons for continued CFC smuggling were also examined: the report noted that many of the refrigeration systems designed to operate on the banned CFC products have long lifespans and continue to operate. The cost of replacing the equipment of these items is sometimes cheaper than outfitting them with a more ozone-friendly appliance. Additionally, CFC smuggling is not considered a significant issue, so the perceived penalties for smuggling are low. In 2018, public attention was drawn to the finding that, at an unknown location in East Asia, an estimated 13,000 metric tonnes of CFCs had been produced annually since about 2012 in violation of the protocol. While the eventual phaseout of CFCs is likely, efforts are being taken to stem these current non-compliance problems. By the time of the Montreal Protocol, it was realised that deliberate and accidental discharges during system tests and maintenance accounted for substantially larger volumes than emergency discharges, and consequently halons were brought into the treaty, albeit with many exceptions. Regulatory gap While the production and consumption of CFCs are regulated under the Montreal Protocol, emissions from existing banks of CFCs are not regulated under the agreement. In 2002, there were an estimated 5,791 kilotons of CFCs in existing products such as refrigerators, air conditioners, aerosol cans and others. Approximately one-third of these CFCs are projected to be emitted over the next decade if action is not taken, posing a threat to both the ozone layer and the climate. A proportion of these CFCs can be safely captured and destroyed by means of high-temperature, controlled incineration, which destroys the CFC molecule. Regulation and DuPont In 1978 the United States banned the use of CFCs such as Freon in aerosol cans, the beginning of a long series of regulatory actions against their use. The critical DuPont manufacturing patent for Freon ("Process for Fluorinating Halohydrocarbons", U.S. Patent #3258500) was set to expire in 1979. In conjunction with other industrial peers, DuPont formed a lobbying group, the "Alliance for Responsible CFC Policy", to combat regulations of ozone-depleting compounds. In 1986 DuPont, with new patents in hand, reversed its previous stance and publicly condemned CFCs. DuPont representatives appeared before the Montreal Protocol negotiations urging that CFCs be banned worldwide and stated that their new HCFCs would meet the worldwide demand for refrigerants. Phasing-out of CFCs Use of certain chloroalkanes as solvents for large-scale applications, such as dry cleaning, has been phased out, for example, by the IPPC directive on greenhouse gases in 1994 and by the volatile organic compounds (VOC) directive of the EU in 1997. Permitted chlorofluoroalkane uses are medicinal only. Bromofluoroalkanes have been largely phased out and the possession of equipment for their use is prohibited in some countries, such as the Netherlands and Belgium, from 1 January 2004, based on the Montreal Protocol and guidelines of the European Union. Production of new stocks ceased in most (probably all) countries in 1994.
However, many countries still require aircraft to be fitted with halon fire suppression systems because no safe and completely satisfactory alternative has been discovered for this application. There are also a few other, highly specialized uses. These programs recycle halon through "halon banks" coordinated by the Halon Recycling Corporation to ensure that discharge to the atmosphere occurs only in a genuine emergency and to conserve remaining stocks. The interim replacements for CFCs are hydrochlorofluorocarbons (HCFCs), which deplete stratospheric ozone, but to a much lesser extent than CFCs. Ultimately, hydrofluorocarbons (HFCs) will replace HCFCs. Unlike CFCs and HCFCs, HFCs have an ozone depletion potential (ODP) of 0. DuPont began producing hydrofluorocarbons as alternatives to Freon in the 1980s. These included Suva refrigerants and Dymel propellants. Natural refrigerants are climate-friendly solutions that are enjoying increasing support from large companies and governments interested in reducing global warming emissions from refrigeration and air conditioning. Phasing-out of HFCs and HCFCs Hydrofluorocarbons are included in the Kyoto Protocol and are regulated under the Kigali Amendment to the Montreal Protocol due to their very high global warming potential (GWP) and the recognition of halocarbon contributions to climate change. On September 21, 2007, approximately 200 countries agreed to accelerate the elimination of hydrochlorofluorocarbons entirely by 2020 at a United Nations-sponsored summit in Montreal. Developing nations were given until 2030. Many nations, such as the United States and China, which had previously resisted such efforts, agreed to the accelerated phase-out schedule. India successfully achieved the complete phase-out of HCFC-141b in 2020. It was reported that levels of HCFCs in the atmosphere had started to fall in 2021 due to their phase-out under the Montreal Protocol. Properly collecting, controlling, and destroying CFCs and HCFCs While new production of these refrigerants has been banned, large volumes still exist in older systems and have been said to pose an immediate threat to the environment. Preventing the release of these harmful refrigerants has been ranked as one of the most effective actions that can be taken to mitigate catastrophic climate change. Development of alternatives for CFCs Work on alternatives for chlorofluorocarbons in refrigerants began in the late 1970s after the first warnings of damage to stratospheric ozone were published. The hydrochlorofluorocarbons (HCFCs) are less stable in the lower atmosphere, enabling them to break down before reaching the ozone layer. Nevertheless, a significant fraction of the HCFCs do break down in the stratosphere, and they have contributed to more chlorine buildup there than originally predicted. Later alternatives lacking chlorine, the hydrofluorocarbons (HFCs), have even shorter lifetimes in the lower atmosphere. One of these compounds, HFC-134a, was used in place of CFC-12 in automobile air conditioners. Hydrocarbon refrigerants (a propane/isobutane blend) were also used extensively in mobile air conditioning systems in Australia, the US and many other countries, as they had excellent thermodynamic properties and performed particularly well in high ambient temperatures. 1,1-Dichloro-1-fluoroethane (HCFC-141b) has replaced HFC-134a, due to its low ODP and GWP values.
According to the Montreal Protocol, HCFC-141b was supposed to be phased out completely and replaced with zero-ODP substances such as cyclopentane, HFOs, and HFC-345a before January 2020. Among the natural refrigerants (along with ammonia and carbon dioxide), hydrocarbons have negligible environmental impacts and are also used worldwide in domestic and commercial refrigeration applications, and are becoming available in new split-system air conditioners. Various other solvents and methods have replaced the use of CFCs in laboratory analytics. In metered-dose inhalers (MDIs), a substitute that does not affect the ozone layer was developed as a propellant, known as a "hydrofluoroalkane". Development of hydrofluoroolefins as alternatives to CFCs and HCFCs The development of hydrofluoroolefins (HFOs) as replacements for hydrochlorofluorocarbons and hydrofluorocarbons began after the Kigali Amendment to the Montreal Protocol in 2016, which called for the phase-out of high global warming potential (GWP) refrigerants and their replacement with refrigerants of lower GWP, closer to that of carbon dioxide. HFOs have an ozone depletion potential of 0.0, compared with 1.0 for the principal CFC-11, and a low GWP, which makes them environmentally safer alternatives to CFCs, HCFCs and HFCs. Hydrofluoroolefins serve as functional replacements for applications where high-GWP hydrofluorocarbons were once used. In April 2022, the EPA signed a pre-published final rule, Listing of HFO-1234yf under the Significant New Alternatives Policy (SNAP) Program for Motor Vehicle Air Conditioning in Nonroad Vehicles and Servicing Fittings for Small Refrigerant Cans. This ruling allows HFO-1234yf to take over in applications where ozone-depleting CFCs such as R-12, and high-GWP HFCs such as R-134a, were once used. The phaseout and replacement of CFCs and HFCs in the automotive industry will ultimately reduce the release of these gases to the atmosphere and in turn contribute positively to the mitigation of climate change. Tracer of ocean circulation Since the time history of CFC concentrations in the atmosphere is relatively well known, they have provided an important constraint on ocean circulation. CFCs dissolve in seawater at the ocean surface and are subsequently transported into the ocean interior. Because CFCs are inert, their concentration in the ocean interior reflects simply the convolution of their atmospheric time evolution with ocean circulation and mixing. CFC and SF6 tracer-derived age of ocean water Chlorofluorocarbons (CFCs) are anthropogenic compounds that have been released into the atmosphere since the 1930s in various applications such as air-conditioning, refrigeration, blowing agents in foams, insulation and packing materials, propellants in aerosol cans, and solvents. The entry of CFCs into the ocean makes them extremely useful as transient tracers to estimate rates and pathways of ocean circulation and mixing processes. However, due to production restrictions on CFCs in the 1980s, atmospheric concentrations of CFC-11 and CFC-12 have stopped increasing, and the CFC-11 to CFC-12 ratio in the atmosphere has been steadily decreasing, making the dating of water masses more problematic. Meanwhile, production and release of sulfur hexafluoride (SF6) have rapidly increased in the atmosphere since the 1970s. Similar to CFCs, SF6 is an inert gas and is not affected by oceanic chemical or biological activities.
Thus, using CFCs in concert with SF6 as tracers resolves the water dating issues caused by decreasing CFC concentrations. Using CFCs or SF6 as a tracer of ocean circulation allows for the derivation of rates for ocean processes due to their time-dependent source functions. The elapsed time since a subsurface water mass was last in contact with the atmosphere is the tracer-derived age. Estimates of age can be derived based on the partial pressure of an individual compound and on the ratio of the partial pressures of CFCs to each other (or to SF6). Partial pressure and ratio dating techniques The age of a water parcel can be estimated by the CFC partial pressure (pCFC) age or the SF6 partial pressure (pSF6) age. The pCFC age of a water sample is derived from its pCFC, defined as pCFC = [CFC] / F, where [CFC] is the measured CFC concentration (pmol kg−1) and F is the solubility of the CFC gas in seawater as a function of temperature and salinity. The CFC partial pressure is expressed in units of 10−12 atmospheres, or parts-per-trillion (ppt). The solubilities of CFC-11 and CFC-12 were measured by Warner and Weiss; additionally, the solubility of CFC-113 was measured by Bu and Warner, and that of SF6 by Wanninkhof et al. and Bullister et al. These authors expressed the solubility (F) at a total pressure of 1 atm as ln F = a1 + a2 (100/T) + a3 ln(T/100) + S [b1 + b2 (T/100) + b3 (T/100)²], where F is the solubility expressed in either mol l−1 atm−1 or mol kg−1 atm−1, T is the absolute temperature, S is the salinity in parts per thousand (ppt), and a1, a2, a3, b1, b2, and b3 are constants determined from a least-squares fit to the solubility measurements. This equation is derived from the integrated Van 't Hoff equation and the logarithmic Setchenow salinity dependence. It can be noted that the solubility of CFCs increases with decreasing temperature, at approximately 1% per degree Celsius. Once the partial pressure of the CFC (or SF6) is derived, it is then compared to the atmospheric time histories for CFC-11, CFC-12, or SF6, in which the pCFC directly corresponds to the year with the same atmospheric value. The difference between the corresponding date and the collection date of the seawater sample is the average age of the water parcel. The age of a parcel of water can also be calculated using the ratio of two CFC partial pressures or the ratio of the SF6 partial pressure to a CFC partial pressure. Safety According to their material safety data sheets, CFCs and HCFCs are colorless, volatile, non-toxic liquids and gases with a faintly sweet ethereal odor. Overexposure at concentrations of 11% or more may cause dizziness, loss of concentration, central nervous system depression or cardiac arrhythmia. Vapors displace air and can cause asphyxiation in confined spaces. Dermal absorption of chlorofluorocarbons is possible but low, whereas pulmonary uptake of inhaled chlorofluorocarbons occurs quickly, with peak blood concentrations occurring in as little as 15 seconds and steady concentrations evening out after 20 minutes. Absorption of orally ingested chlorofluorocarbons is 35 to 48 times lower compared to inhalation. Although non-flammable, their combustion products include hydrofluoric acid and related species. Normal occupational exposure is rated at 0.07% and does not pose any serious health risks.
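A minimal sketch of the pCFC dating procedure described above is given below. The solubility value and the atmospheric history table are illustrative placeholders only; real work would use the Warner and Weiss solubility function with its published coefficients and observed atmospheric records.

```python
# Minimal sketch of pCFC tracer dating: convert a dissolved CFC concentration to an
# equivalent atmospheric partial pressure, pCFC = [CFC]/F, then match it against an
# atmospheric time history to assign a year. The solubility value and history table
# below are hypothetical placeholders, not measured Warner & Weiss or observed values.

import bisect

def pcfc_ppt(conc_pmol_per_kg, solubility_mol_per_kg_atm):
    """pCFC = [CFC] / F; with [CFC] in pmol/kg and F in mol kg^-1 atm^-1 the result is in ppt."""
    return conc_pmol_per_kg / solubility_mol_per_kg_atm

def tracer_age(pcfc, history, sample_year):
    """Match pCFC to the first year whose atmospheric value reaches it (history must be increasing)."""
    years, values = zip(*history)
    i = min(bisect.bisect_left(values, pcfc), len(years) - 1)
    return sample_year - years[i]

# Illustrative numbers only:
history = [(1950, 5.0), (1960, 30.0), (1970, 110.0), (1980, 280.0), (1990, 480.0)]
p = pcfc_ppt(conc_pmol_per_kg=1.2, solubility_mol_per_kg_atm=0.011)
print(f"pCFC = {p:.0f} ppt, tracer age = {tracer_age(p, history, sample_year=1995)} years")
```

The ratio-dating variant mentioned in the text works the same way, except that the ratio of two partial pressures, rather than a single value, is matched against its atmospheric history.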
Physical sciences
Halocarbons
Chemistry
54911
https://en.wikipedia.org/wiki/Dew
Dew
Dew is water in the form of droplets that appears on thin, exposed objects in the morning or evening due to condensation. As the exposed surface cools by radiating its heat, atmospheric moisture condenses at a rate greater than that at which it can evaporate, resulting in the formation of water droplets. When temperatures are low enough, dew takes the form of ice, called frost. Because dew is related to the temperature of surfaces, in late summer it forms most easily on surfaces that are not warmed by conducted heat from deep ground, such as grass, leaves, railings, car roofs, and bridges. Formation Water vapor condenses into droplets depending on the temperature. The temperature at which droplets form is called the dew point. When surface temperature drops, eventually reaching the dew point, atmospheric water vapor condenses to form small droplets on the surface. This process distinguishes dew from hydrometeors (meteorological occurrences of water) that form directly in air that has cooled to its dew point (typically around condensation nuclei), such as fog or clouds. The thermodynamic principles of formation, however, are the same. Dew is commonly formed during select times of the day: nights, early mornings, and early evenings are all times during which dew is likely to be found. Occurrence Adequate cooling of the surface typically takes place when it loses more energy by infrared radiation than it receives as solar radiation from the Sun, which is especially the case on clear nights. Poor thermal conductivity restricts the replacement of such losses from deeper ground layers, which are typically warmer at night. Preferred objects of dew formation are thus poorly conducting or well insulated from the ground, and non-metallic, since shiny metal-coated surfaces are poor infrared radiators. Preferred weather conditions include the absence of clouds and little water vapor in the higher atmosphere, to minimize greenhouse effects, together with sufficient humidity of the air near the ground. Typical dew nights are classically considered calm, because wind would transport nocturnally warmer air from higher levels down to the cold surface. However, if the atmosphere is the major source of moisture (this type is called dewfall), a certain amount of ventilation is needed to replace the vapor that has already condensed. The highest optimum wind speeds are found on arid islands. Wind always seems adverse, however, if the wet soil beneath is the major source of vapor (in which case dew is said to form by distillation). The processes of dew formation do not restrict its occurrence to the night and the outdoors. They are also at work when eyeglasses get steamy in a warm, wet room, or in industrial processes. However, the term condensation is preferred in these cases. Measurement A classical device for dew measurement is the drosometer. A small (artificial) condenser surface is suspended from an arm attached to a pointer or a pen that records the weight changes of the condenser on a drum. Besides being very sensitive to wind, this device, like all artificial-surface devices, provides only a measure of the meteorological potential for dew formation. The actual amount of dew in a specific place is strongly dependent on surface properties. For its measurement, plants, leaves, or whole soil columns are placed on a balance with their surface at the same height and in the same surroundings as would occur naturally, thus providing a small lysimeter.
Further methods include estimation by means of comparing the droplets to standardized photographs or volumetric measurement of the amount of water wiped from the surface. Some of these methods include guttation, while others only measure dewfall and/or distillation. Significance Due to its dependence on radiation balance, dew amounts can reach a theoretical maximum of about 0.8 mm per night; measured values, however, rarely exceed 0.5 mm. In most climates of the world, the annual average is too small to compete with rain. In regions with considerable dry seasons, adapted plants like lichen or pine seedlings benefit from dew. Large-scale, natural irrigation without rainfall, such as in the Atacama and Namib deserts, however, is mostly attributed to fog water. In the Negev Desert in Israel, dew has been found to account for almost half of the water found in three dominant desert species: Salsola inermis, Artemisia sieberi and Haloxylon scoparium. Another effect of dew is its hydration of fungal substrates and the mycelia of species such as pleated inkcaps, often found on lawns, and Phytophthora infestans which causes blight on potato plants. Historic The book On the Universe (De Mundo) (composed before 250 BC or between 350 and 200 BC) stated: "Dew is moisture minute in composition falling from a clear sky; ice is water congealed in a condensed form from a clear sky; hoar-frost is congealed dew, and 'dew-frost' is dew which is half congealed". In Greek mythology, Ersa is the goddess and personification of dew. Also, according to the myth, the dew in the morning was created when Eos (Ersa's aunt), goddess of the dawn, cried for her son's death, although later he received immortality. Dew, known in Hebrew as טל (tal), is significant in the Jewish religion for agricultural and theological purposes. On the first day of Passover, the Chazan, dressed in a white kittel, leads a service in which he prays for dew between that point and Sukkot. During the rainy season between December and Passover there are also additions in the Amidah for blessed dew to come together with rain. There are many midrashim that refer to dew as being the tool for ultimate resurrection. "Dewy" or "my father is the morning dew" are approximate etymologies of the Hebrew given name, Avital. In the Biblical Torah or Old Testament, dew is used symbolically in : "My doctrine shall drop as the rain, my speech shall distill as the dew, as the small rain upon the tender herb, and as the showers upon the grass." In the Catholic Mass in the Western Rite, whenever the Second Eucharistic Prayer is used, the priest prays over bread and wine, to God the Father; ‘Make holy, therefore, these gifts, we pray, by sending down your Spirit upon them like the dewfall, so that they may become for us the Body and Blood of our Lord Jesus Christ.’ The idea that the Holy Spirit enters the world and our lives in a quiet, undramatic way, ‘like the dewfall’, has great appeal for many Christians. Artificial harvesting The harvesting of dew potentially allows water availability in areas where supporting weather conditions, such as rain, are lacking. Several man-made devices such as antique big stone piles in Ukraine, medieval dew ponds in Southern England, and volcanic stone covers on the fields of Lanzarote have been thought to be dew-catching devices, but could be shown to work on other principles. 
At present, the International Organization for Dew Utilization (OPUR) is working on effective, foil-based condensers for regions where rain or fog cannot cover water needs throughout the year. Large-scale dew harvesting systems have been made by the Indian Institute of Management Ahmedabad (IIMA) with the participation of OPUR in the coastal, semiarid region of Kutch. These condensers can harvest more than 200 liters (on average) of dew water per night for about 90 nights in the October-to-May dew season. The IIMA research laboratory has shown that dew can serve as a supplementary source of water in coastal arid areas. A large-scale dew harvesting scheme envisages circulating cold sea water in EPDM collectors near the seashore. These condense dew and fog to supply clean drinking water. Other, more recent, studies display possible roof integration for dew harvesting devices.
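The dew point described in the Formation section can be estimated from air temperature and relative humidity. The sketch below uses the Magnus approximation, a standard empirical formula that is not taken from this article; the constants shown are one commonly used parameter set, and the function name is illustrative.

```python
import math

def dew_point_celsius(temp_c, relative_humidity_pct, a=17.27, b=237.7):
    """Approximate dew point (deg C) from air temperature (deg C) and relative
    humidity (%), using the Magnus formula with one common set of constants."""
    gamma = math.log(relative_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

if __name__ == "__main__":
    # A mild evening at 20 degC and 50% relative humidity gives a dew point of
    # roughly 9 degC, so any surface radiating below that will start to collect dew.
    print(round(dew_point_celsius(20.0, 50.0), 1))
```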
Physical sciences
Precipitation
null
54929
https://en.wikipedia.org/wiki/Ginger
Ginger
Ginger (Zingiber officinale) is a flowering plant whose rhizome, ginger root or ginger, is widely used as a spice and a folk medicine. It is an herbaceous perennial that grows annual pseudostems (false stems made of the rolled bases of leaves) about one meter tall, bearing narrow leaf blades. The inflorescences bear flowers having pale yellow petals with purple edges, and arise directly from the rhizome on separate shoots. Ginger is in the family Zingiberaceae, which also includes turmeric (Curcuma longa), cardamom (Elettaria cardamomum), and galangal. Ginger originated in Maritime Southeast Asia and was likely domesticated first by the Austronesian peoples. It was transported with them throughout the Indo-Pacific during the Austronesian expansion ( BP), reaching as far as Hawaii. Ginger is one of the first spices to have been exported from Asia, arriving in Europe with the spice trade, and was used by ancient Greeks and Romans. The distantly related dicots in the genus Asarum are commonly called wild ginger because of their similar taste. Ginger has been used in traditional medicine in China, India and Japan for centuries, and as a dietary supplement. There is no good evidence that ginger helps alleviate nausea and vomiting associated with pregnancy or chemotherapy, and its safety has not been demonstrated. It remains uncertain whether ginger is effective for treating any disease, and use of ginger as a drug has not been approved by the FDA. In 2020, world production of ginger was 4.3 million tonnes, led by India with 43% of the world total. Etymology The English origin of the word "ginger" is from the mid-14th century, from Old English , which derives in turn from the Medieval Latin , from the Greek from the Prakrit (Middle Indic) , and from the Sanskrit . The Sanskrit word is thought to come from an ancient Dravidian word that also produced the Tamil and Malayalam term (from , "root"); an alternative explanation is that the Sanskrit word comes from , meaning "horn", and , meaning "body" (describing the shape of its root), but that may be folk etymology. The word probably was readopted in Middle English from the Old French (modern French ). Origin and distribution Ginger originated from Maritime Southeast Asia. It is a true cultigen and does not exist in its wild state. The most ancient evidence of its domestication is among the Austronesian peoples where it was among several species of ginger cultivated and exploited since ancient times. They cultivated other gingers including turmeric (Curcuma longa), white turmeric (Curcuma zedoaria), and bitter ginger (Zingiber zerumbet). The rhizomes and the leaves were used to flavour food or eaten directly. The leaves were also used to weave mats. Aside from these uses, ginger had religious significance among Austronesians, being used in rituals for healing and for asking protection from spirits. It was also used in the blessing of Austronesian ships. Ginger was carried with them in their voyages as canoe plants during the Austronesian expansion, starting from around 5,000 BP. They introduced it to the Pacific Islands in prehistory, long before any contact with other civilizations. Reflexes of the Proto-Malayo-Polynesian word * are found in Austronesian languages all the way to Hawaii. They also presumably introduced it to India along with other Southeast Asian food plants and Austronesian sailing technologies, during early contact by Austronesian sailors with the Dravidian-speaking peoples of Sri Lanka and South India at around 3,500 BP. 
It was also carried by Austronesian voyagers into Madagascar and the Comoros in the 1st millennium CE. From India, it was carried by traders into the Middle East and the Mediterranean by around the 1st century CE. It was primarily grown in southern India and the Greater Sunda Islands during the spice trade, along with peppers, cloves, and numerous other spices. History The first written record of ginger comes from the Analects, written by the Disciples of Confucius in China during the Warring States period (475–221 BCE). In it, Confucius was said to eat ginger with every meal. In 406, the monk Faxian wrote that ginger was grown in pots and carried on Chinese ships to prevent scurvy. During the Song dynasty (960–1279), ginger was being imported into China from southern countries. Ginger spice was introduced to the Mediterranean by the Arabs, and described by writers like Dioscorides (40–90) and Pliny the Elder (24–79). In 150, Ptolemy noted that ginger was produced in Ceylon (Sri Lanka). Ginger—along with its relative, galangal—was imported into the Roman Empire as part of very expensive herbal remedies that only the wealthy could afford, e.g. for the kidneys. Aëtius of Amida describes both ginger and galangal as ingredients in his complex herbal prescriptions. Raw and preserved ginger were imported into Europe in increased quantity during the Middle Ages after European tastes shifted favorably towards its culinary properties; during this time, ginger was described in the official pharmacopeias of several countries. In 14th century England, a pound of ginger cost as much as a sheep. Archaeological evidence of ginger in northwest Europe comes from the wreck of the Danish-Norwegian flagship, Gribshunden. The ship sank off the southern coast of Sweden in the summer of 1495 while conveying King Hans to a summit with the Swedish Council. Among the luxuries carried on the ship were ginger, cloves, saffron, and pepper. The ginger plant was smuggled onto the Caribbean islands from Asia sometime in the 16th century, along with black pepper, cloves, and cinnamon, at the encouragement of the Spanish Crown, though only ginger thrived. It eventually displaced sugar to become the leading export crop on both Hispaniola and Puerto Rico by the end of the century, until the introduction of slave labour from Africa made sugar more economical to produce in the 17th century. Horticulture Ginger produces clusters of white and pink flower buds that bloom into yellow flowers. Because of its aesthetic appeal and the adaptation of the plant to warm climates, it is often used as landscaping around subtropical homes. It is a perennial reed-like plant with annual leafy stems, about a meter (3 to 4 feet) tall. Traditionally, the rhizome is gathered when the stalk withers; it is immediately scalded, or washed and scraped, to kill it and prevent sprouting. The fragrant perisperm of the Zingiberaceae is used as sweetmeats by Bantu, and also as a condiment and sialogogue. Production In 2020, global production of ginger was 4.3 million tonnes, led by India with 43% of the world total. Nigeria, China, and Nepal also had substantial production. Production in India Though it is grown in many areas across the globe, ginger is "among the earliest recorded spices to be cultivated and exported from southwest India". India holds the seventh position in ginger export worldwide, however is the "largest producer of ginger in the world". 
Regions in southwest and northeast India are most suitable for ginger production due to their warm and humid climate, average rainfall and land space. Ginger can grow in a wide variety of land types and areas, but is best produced in a warm, humid environment, at an elevation between , and in well-drained soils at least 30 cm deep. A period of low rainfall prior to growing and well-distributed rainfall during growing are also essential for the ginger to thrive in the soil. Ginger produced in India is most often farmed through homestead farming, with work adaptively shared by available family and community members. Ginger farming The size of the ginger rhizome is essential to the production of ginger. The larger the rhizome piece, the faster ginger will be produced and therefore the faster it will be sold onto the market. Before planting the seed rhizomes, farmers must treat the seeds to prevent pests, rhizome rot, and other seed-borne diseases. Indian farmers treat the seeds in various ways, including dipping them in cow dung emulsion, smoking them before storage, and treating them with hot water. Once the seeds are properly treated, the farmland in which they are to be planted must be thoroughly dug or ploughed by the farmer to break up the soil. After the soil is sufficiently ploughed (at least 3–5 times), water channels are made apart to irrigate the crop. The next step is planting the rhizome seed. In India, planting the irrigated ginger crop is usually done in the months between March and June, as those months account for the beginning of the monsoon, or rainy season. Once the planting stage is done, farmers go on to mulch the crop to conserve moisture and check weed growth, as well as check surface run-off to conserve soil. Mulching is done by applying mulch (green leaves, for example) to the plant beds directly after planting and again 45 and 90 days into growth. After mulching comes hilling, which is the stirring and breaking up of soil to check weed growth, break the firmness of the soil from rain, and conserve soil moisture. Farmers must ensure that their ginger crops receive supplemental irrigation if rainfall is low in their region. In India, farmers must irrigate their ginger crops at least every two weeks between September and November (when the monsoon is over) to ensure maximum yield and a high-quality product. The final farming stage for ginger is the harvesting stage. When the rhizome is planted for products such as vegetable, soda, and candy, harvesting should be done four to five months after planting, whereas when the rhizome is planted for products such as dried ginger or ginger oil, harvesting must be done eight to ten months after planting. Dry ginger is one of the most popular forms of ginger in commerce. Ginger rhizomes for dry ginger are harvested at full maturity (8–10 months). After the rhizomes are soaked in water, the outer skin is scraped off by hand with a bamboo splinter or wooden knife, as the process is too delicate to be done by machinery. The whole dried rhizomes are ground in the consuming centres. Fresh ginger does not need further processing after harvest, and it is harvested much younger. Transportation and export of ginger Ginger is sent through various stages to be transported to its final destination either domestically or internationally. The journey begins when farmers sell a portion of their produce to village traders who collect produce right at the farm gate.
Once the produce is collected, it is transported to the closest assembly market, from which it is then taken to main regional or district-level marketing centres. Farmers with a large yield of produce will directly take their produce to local or regional markets. Once the produce has "reached [the] regional level markets, they are cleaned, graded, and packed in sacks of about 60 kg". They are then moved to terminal markets such as in New Delhi, Kochi, and Bombay. States from which ginger is exported follow the marketing channels of vegetable marketing in India, and the steps are similar to those when transported domestically. However, instead of reaching a terminal market after the regional forwarding centres, the produce will reach an export market and then be sent off by vehicle, plane or boat to reach its final international destination, where it will arrive at a local retail market and finally reach the consumer once purchased. Dry ginger is most popularly traded between Asian countries through a unique distribution system involving a network of small retail outlets. Fresh and preserved ginger are often sold directly to supermarket chains, and in some countries fresh ginger is seen exclusively in small shops unique to certain ethnic communities. India frequently exports its ginger and other vegetable produce to nearby Pakistan and Bangladesh, as well as "Saudi Arabia, the United Arab Emirates, Morocco, the United States, Yemen Republic, the United Kingdom, and Netherlands". Though India is the largest ginger producer in the world, it does not play a major role as an exporter and accounts for only about 1.17% of total ginger exports. Ginger farming in India is a costly and risky business, as farmers do not gain much money from exports and "more than 65% of the total cost incurred is toward labor and seed material purchase". The farm owner may benefit provided there are no production losses or price decreases, which are not easily avoided. Production of dry ginger has a higher benefit-cost ratio, as does ginger cultivated in intercropping systems rather than as a pure crop. Uses Culinary Ginger is a common spice used worldwide, whether for meals or as a folk medicine. Ginger can be used for a variety of food items such as vegetables, candy, soda, pickles, and alcoholic beverages. Ginger is a fragrant kitchen spice. Young ginger rhizomes are juicy and fleshy with a mild taste. They are often pickled in vinegar or sherry as a snack or cooked as an ingredient in many dishes. They can be steeped in boiling water to make ginger herb tea, to which honey may be added. Ginger can be made into candy or ginger wine. Asia Mature ginger rhizomes are fibrous and nearly dry. The juice from ginger roots is often used as a seasoning in Indian recipes and is a common ingredient of Chinese, Korean, Japanese, Vietnamese, and many South Asian cuisines for flavoring dishes such as seafood, meat, and vegetarian dishes. In Indian cuisine, ginger is a key ingredient, especially in thicker gravies, as well as in many other dishes, both vegetarian and meat-based. Ginger has a role in traditional Ayurvedic medicine. It is an ingredient in traditional Indian drinks, both cold and hot, including spiced masala chai. Fresh ginger is one of the main spices used for making pulse and lentil curries and other vegetable preparations. Fresh ginger together with peeled garlic cloves is crushed or ground to form ginger garlic masala.
Fresh, as well as dried, ginger is used to spice tea and coffee, especially in winter. In south India, "sambharam" is a summer yogurt drink made with ginger as a key ingredient, along with green chillies, salt and curry leaves. Ginger powder is used in food preparations intended primarily for pregnant or nursing women, the most popular one being katlu, which is a mixture of gum resin, ghee, nuts, and sugar. Ginger is also consumed in candied and pickled form. In Japan, ginger is pickled to make beni shōga and gari or grated and used raw on tofu or noodles. It is made into a candy called shoga no sato zuke. In the traditional Korean kimchi, ginger is either finely minced or just juiced to avoid the fibrous texture and added to the ingredients of the spicy paste just before the fermenting process. In Myanmar, ginger is called gyin. It is widely used in cooking and as a main ingredient in traditional medicines. It is consumed as a salad dish called gyin-thot, which consists of shredded ginger preserved in oil, with a variety of nuts and seeds. In Thailand' where it is called ขิง khing, it is used to make a ginger garlic paste in cooking. In Indonesia, a beverage called wedang jahe is made from ginger and palm sugar. Indonesians also use ground ginger root, called jahe, as a common ingredient in local recipes. In Malaysia, ginger is called halia and used in many kinds of dishes, especially soups. Called luya in the Philippines, ginger is a common ingredient in local dishes and is brewed as a tea called salabat. In Vietnam, the fresh leaves, finely chopped, can be added to shrimp-and-yam soup (canh khoai mỡ) as a top garnish and spice to add a much subtler flavor of ginger than the chopped root. In China, sliced or whole ginger root is often paired with savory dishes such as fish, and chopped ginger root is commonly paired with meat, when it is cooked. Candied ginger is sometimes a component of Chinese candy boxes, and a herbal tea can be prepared from ginger. Raw ginger juice can be used to set milk and make a dessert, ginger milk curd. North America In the Caribbean, ginger is a popular spice for cooking and for making drinks such as sorrel, a drink made during the Christmas season. Jamaicans make ginger beer both as a carbonated beverage and also fresh in their homes. Ginger tea is often made from fresh ginger, as well as the famous regional specialty Jamaican ginger cake. Western countries In Western cuisine, ginger is traditionally used mainly in sweet foods such as ginger ale, gingerbread, ginger snaps, parkin, and speculaas. A ginger-flavored liqueur called Canton is produced in Jarnac, France. Ginger wine is a ginger-flavoured wine produced in the United Kingdom, traditionally sold in a green glass bottle. Ginger is also used as a spice added to hot coffee and tea. On the island of Corfu, Greece, a traditional drink called τσιτσιμπύρα (tsitsibira), a type of ginger beer, is made. The people of Corfu and the rest of the Ionian islands adopted the drink from the British, during the period of the United States of the Ionian Islands. Fresh ginger can be substituted for ground ginger at a ratio of six to one, although the flavours of fresh and dried ginger are somewhat different. Powdered dry ginger root is typically used as a flavouring for recipes such as gingerbread, cookies, crackers and cakes, ginger ale, and ginger beer. Candied or crystallized ginger, known in the UK as "stem ginger", is the root cooked in sugar until soft, and is a type of confectionery. 
Fresh ginger may be peeled before eating. For longer-term storage, the ginger can be placed in a plastic bag and refrigerated or frozen. Middle East Ginger is used in Iranian cuisine. Ginger bread is a kind of cookie traditionally prepared in the city of Gorgan on the holiday of Nowruz (New Year's Day). Similar ingredients Other members of the family Zingiberaceae are used in similar ways. They include the myoga (Zingiber mioga), the several types of galangal, the fingerroot (Boesenbergia rotunda), and the bitter ginger (Zingiber zerumbet). A dicotyledonous native species of eastern North America, Asarum canadense, is also known as "wild ginger", and its root has similar aromatic properties, but it is not related to true ginger. The plant contains aristolochic acid, a carcinogenic compound. The United States Food and Drug Administration warns that consumption of aristolochic acid-containing products is associated with "permanent kidney damage, sometimes resulting in kidney failure that has required kidney dialysis or kidney transplantation. In addition, some patients have developed certain types of cancers, most often occurring in the urinary tract." Nutrition Raw ginger is 79% water, 18% carbohydrates, 2% protein, and 1% fat (table). In a reference amount of , raw ginger supplies of food energy and moderate amounts of potassium (14% of the Daily Value, DV), magnesium (10% DV) and manganese (10% DV), but otherwise is low in micronutrient content (table). Composition and safety If consumed in reasonable quantities, ginger has few negative side effects, although large amounts may cause adverse events, such as gastrointestinal discomfort, and undesirable interactions with prescription drugs. It is on the FDA's "generally recognized as safe" list, though it does interact with some medications, including the anticoagulant drug warfarin and the cardiovascular drug nifedipine. Chemistry The characteristic fragrance and flavor of ginger result from volatile oils that compose 1–3% of the weight of fresh ginger, primarily consisting of sesquiterpenes, such as beta-bisabolene and zingiberene, zingerone, shogaols, and gingerols with [6]-gingerol (1-[4'-hydroxy-3'-methoxyphenyl]-5-hydroxy-3-decanone) as the major pungent compound. Some 400 chemical compounds exist in raw ginger. Zingerone is produced from gingerols during drying, having lower pungency and a spicy-sweet aroma. Shogaols are more pungent, and are formed from gingerols during heating, storage or via acidity. Numerous monoterpenes, amino acids, dietary fiber, protein, phytosterols, vitamins, and dietary minerals are other constituents. Fresh ginger also contains an enzyme zingibain which is a cysteine protease and has similar properties to rennet. Research Evidence that ginger use is associated with reduced nausea during pregnancy is of low quality. There is no good evidence ginger helps alleviate chemotherapy-induced nausea and vomiting. There is no clear evidence that taking ginger to treat nausea during pregnancy is safe. Ginger is not effective for treating dysmenorrhea. There is some evidence for it having an anti-inflammatory effect, and improving digestive function, but insufficient evidence for it affecting pain in osteoarthritis. The evidence that ginger retards blood clotting is mixed. A 2018 review found evidence that ginger could decrease body weight in obese subjects and increase HDL-cholesterol. 
Adverse effects Although generally recognized as safe, ginger can cause heartburn and other side effects, particularly if taken in powdered form. It may adversely affect individuals with gallstones, and may interfere with the effects of anticoagulants, such as warfarin or aspirin, and other prescription drugs. Gallery
Biology and health sciences
Monocots
null
54930
https://en.wikipedia.org/wiki/Fumarole
Fumarole
A fumarole (or fumerole) is a vent in the surface of the Earth or another rocky planet from which hot volcanic gases and vapors are emitted, without any accompanying liquids or solids. Fumaroles are characteristic of the late stages of volcanic activity, but fumarole activity can also precede a volcanic eruption and has been used for eruption prediction. Most fumaroles die down within a few days or weeks of the end of an eruption, but a few are persistent, lasting for decades or longer. An area containing fumaroles is known as a fumarole field. The predominant vapor emitted by fumaroles is steam, formed by the circulation of groundwater through heated rock. This is typically accompanied by volcanic gases given off by magma cooling deep below the surface. These volcanic gases include sulfur compounds, such as various sulfur oxides and hydrogen sulfide, and sometimes hydrogen chloride, hydrogen fluoride, and other gases. A fumarole that emits significant sulfur compounds is sometimes called a solfatara. Fumarole activity can break down rock around the vent, while simultaneously depositing sulfur and other minerals. Valuable hydrothermal mineral deposits can form beneath fumaroles. However, active fumaroles can be a hazard due to their emission of hot, poisonous gases. Description A fumarole (or fumerole; from French fumerolle, a domed structure with lateral openings, built over a kitchen to permit the escape of smoke) is an opening in a planet's crust which emits steam and gases, but no liquid or solid material. The temperature of the gases leaving the vent ranges from about . The steam forms when groundwater is superheated by hot rock, then flashes (boils due to depressurization) as it approaches the surface. In addition to steam, gases released by fumaroles include carbon dioxide, sulfur oxides, hydrogen sulfide, hydrogen chloride, and hydrogen fluoride. These have their origin in magma cooling underground. Not all these gases are present in all fumaroles; for example, fumaroles of Kilauea in Hawaii, US, contain almost no hydrogen chloride or hydrogen fluoride. The gases may also include traces of carbonyl sulfide, carbon disulfide, hydrogen, methane, or carbon monoxide. A fumarole that emits sulfurous gases can be referred to as a solfatara (from old Italian solfo, "sulfur"). Acid-sulfate hot springs can be formed by fumaroles when some of the steam condenses at the surface. Rising acidic vapors from below, such as CO2 and H2S, will then dissolve, creating steam-heated low-pH hot springs. Fumaroles are normally associated with the late stages of volcanic activity, although they may also precede volcanic activity and have been used to predict volcanic eruptions. In particular, changes in the composition and temperature of fumarole gases may point to an imminent eruption. An increase in sulfur oxide emissions is a particularly robust indication that new magma is rising from the depths, and may be detectable months to years before the eruption. Continued sulfur oxide emissions after an eruption is an indication that magma is continuing to rise towards the surface. Fumaroles may occur along tiny cracks, along long fissures, or in chaotic clusters or fields. They also occur on the surface of lava flows and pyroclastic flows. A fumarole field is an area of thermal springs and gas vents where shallow magma or hot igneous rocks release gases or interact with groundwater. When they occur in freezing environments, fumaroles may cause fumarolic ice towers. 
Fumaroles may persist for decades or centuries if located above a persistent heat source; or they may disappear within weeks to months if they occur atop a fresh volcanic deposit that quickly cools. The Valley of Ten Thousand Smokes, for example, was formed during the 1912 eruption of Novarupta in Alaska. Initially, thousands of fumaroles occurred in the cooling ash from the eruption, but over time most of them have become extinct. Persistent fumaroles are found at Sulfur Bank on the northern edge of the Kilauea caldera, but most fumaroles in Hawaii last no more than a few months. There are still numerous active fumaroles at Yellowstone National Park, US, some 70,000 years after the most recent eruption. Economic resources and hazards The acidic fumes from fumaroles can break down the rock around the vents, producing brightly colored alteration haloes. At Sulphur Banks near Kilauea in Hawaii, mild alteration reduces the rock to gray to white opal and kaolinite with the original texture of the rock still discernible. Alteration begins along joints in the rock and works inwards until the entire joint block is altered. More extreme alteration (at lower pH) reduces the material to clay minerals and iron oxides to produce red to reddish-brown clay. The same process can produce valuable hydrothermal ore deposits at depth. Fumaroles emitting sulfurous vapors form surface deposits of sulfur-rich minerals and of fumarole minerals. Sulfur crystals at Sulfur Banks near Kilauea can grow to in length, and considerable sulfur has been deposited at Sulfur Cone within Mauna Loa caldera. Places in which these deposits have been mined include: Kawah Ijen and Arjuno-Welirang, Indonesia Purico Complex near San Pedro de Atacama in Chile Mount Tongariro in the central North Island, New Zealand (mined by Māori until 1950) Whakaari / White Island in the Bay of Plenty, New Zealand (mined from the 1880s to the 1930s) Sicily, which had a near-monopoly on sulfur prior to development of the Frasch process for mining sulfur from salt domes. Sulfur mining in Indonesia is sometimes done for low pay, by hand, without respirators or other protective equipment. In April 2006 fumarole emissions killed three ski-patrol workers east of Chair 3 at Mammoth Mountain Ski Area in California. The workers were overpowered by an accumulation of toxic fumes (a mazuku) in a crevasse they had fallen into. Occurrences Fumaroles are found around the world in areas of volcanic activity. A few notable examples include: Campi Flegrei, Italy, known since ancient times and regarded as the entrance to Hell, which is now closely monitored because of the hazard it poses to nearby urbanization. Central Volcanic Zone, South America Corbetti Caldera, Ethiopia, where a geothermal power station is under construction Taupō Volcanic Zone, New Zealand, where fumaroles support a unique and critically endangered ecosystem Mount Usu, Japan Valley of Desolation in Morne Trois Pitons National Park in Dominica Furnas, São Miguel Island, Azores (Portugal) Yellowstone National Park has thousands of fumaroles, including Black Growler at Norris Geyser Basin and numerous fumaroles dotting Roaring Mountain. On Mars The formation known as Home Plate at Gusev Crater on Mars, which was examined by the Mars Exploration Rover (MER) Spirit, is suspected to be the eroded remains of an ancient and extinct fumarole.
Physical sciences
Volcanic landforms
Earth science
54952
https://en.wikipedia.org/wiki/Technical%20drawing
Technical drawing
Technical drawing, drafting or drawing, is the act and discipline of composing drawings that visually communicate how something functions or is constructed. Technical drawing is essential for communicating ideas in industry and engineering. To make the drawings easier to understand, people use familiar symbols, perspectives, units of measurement, notation systems, visual styles, and page layout. Together, such conventions constitute a visual language and help to ensure that the drawing is unambiguous and relatively easy to understand. Many of the symbols and principles of technical drawing are codified in an international standard called ISO 128. The need for precise communication in the preparation of a functional document distinguishes technical drawing from the expressive drawing of the visual arts. Artistic drawings are subjectively interpreted; their meanings are multiply determined. Technical drawings are understood to have one intended meaning. A draftsman is a person who makes a drawing (technical or expressive). A professional drafter who makes technical drawings is sometimes called a drafting technician. Methods Sketching A sketch is a quickly executed, freehand drawing that is usually not intended as a finished work. In general, sketching is a quick way to record an idea for later use. Architect's sketches primarily serve as a way to try out different ideas and establish a composition before a more finished work, especially when the finished work is expensive and time-consuming. Architectural sketches, for example, are a kind of diagram. These sketches, like metaphors, are used by architects as a means of communication in aiding design collaboration. This tool helps architects to abstract attributes of hypothetical provisional design solutions and summarize their complex patterns, thereby enhancing the design process. Manual or by instrument The basic drafting procedure is to place a piece of paper (or other material) on a smooth surface with right-angle corners and straight sides—typically a drawing board. A sliding straightedge known as a T-square is then placed on one of the sides, allowing it to be slid across the side of the table, and over the surface of the paper. "Parallel lines" can be drawn by moving the T-square and running a pencil or technical pen along the T-square's edge. The T-square is used to hold other devices such as set squares or triangles. In this case, the drafter places one or more triangles of known angles on the T-square — which is itself at right angles to the edge of the table — and can then draw lines at any chosen angle to others on the page. Modern drafting tables are equipped with a drafting machine that is supported on both sides of the table to slide over a large piece of paper. Because it is secured on both sides, lines drawn along the edge are guaranteed to be parallel. The drafter uses several technical drawing tools to draw curves and circles. Primary among these are the compasses, used for drawing arcs and circles, and the French curve, for drawing curves. A spline is a rubber coated articulated metal that can be manually bent to most curves. Drafting templates assist the drafter with creating recurring objects in a drawing without having to reproduce the object from scratch every time. This is especially useful when using common symbols; i.e. in the context of stagecraft, a lighting designer will draw from the USITT standard library of lighting fixture symbols to indicate the position of a common fixture across multiple positions. 
Templates are sold commercially by a number of vendors, usually customized to a specific task, but it is also not uncommon for a drafter to create his own templates. This basic drafting system requires an accurate table and constant attention to the positioning of the tools. A common error is to allow the triangles to push the top of the T-square down slightly, thereby throwing off all angles. Even tasks as simple as drawing two angled lines meeting at a point require a number of moves of the T-square and triangles, and in general, drafting can be a time-consuming process. A solution to these problems was the introduction of the mechanical "drafting machine", an application of the pantograph (sometimes referred to incorrectly as a "pentagraph" in these situations) which allowed the drafter to have an accurate right angle at any point on the page quickly. These machines often included the ability to change the angle, hence removing the need for the triangles. In addition to the mastery of the mechanics of drawing lines, arcs and circles (and text) onto a piece of paper—with respect to the detailing of physical objects—the drafting effort requires a thorough understanding of geometry, trigonometry and spatial comprehension, and in all cases demands precision and accuracy, and attention to detail of high order. Although drafting is sometimes accomplished by a project engineer, architect, or shop personnel (such as a machinist), skilled drafters (and/or designers) usually accomplish the task, and are always in demand to some degree. Computer aided design Today, the mechanics of the drafting task have largely been automated and accelerated through the use of computer-aided design systems (CAD). There are two types of computer-aided design systems used for the production of technical drawings: two dimensions ("2D") and three dimensions ("3D"). 2D CAD systems such as AutoCAD or MicroStation replace the paper drawing discipline. The lines, circles, arcs, and curves are created within the software. It is down to the technical drawing skill of the user to produce the drawing. There is still much scope for error in the drawing when producing first and third angle orthographic projections, auxiliary projections and cross-section views. A 2D CAD system is merely an electronic drawing board. Its greatest strength over direct to paper technical drawing is in the making of revisions. Whereas in a conventional hand drawn technical drawing, if a mistake is found, or a modification is required, a new drawing must be made from scratch, the 2D CAD system allows a copy of the original to be modified, saving considerable time. 2D CAD systems can be used to create plans for large projects such as buildings and aircraft but provide no way to check the various components will fit together. A 3D CAD system (such as KeyCreator, Autodesk Inventor, or SolidWorks) first produces the geometry of the part; the technical drawing comes from user defined views of that geometry. Any orthographic, projected or sectioned view is created by the software. There is no scope for error in the production of these views. The main scope for error comes in setting the parameter of first or third angle projection and displaying the relevant symbol on the technical drawing. 3D CAD allows individual parts to be assembled together to represent the final product. Buildings, aircraft, ships, and cars are modelled, assembled, and checked in 3D before technical drawings are released for manufacture. 
Both 2D and 3D CAD systems can be used to produce technical drawings for any discipline. The various disciplines (electrical, electronic, pneumatic, hydraulic, etc.) have industry recognized symbols to represent common components. BS and ISO produce standards to show recommended practices but it is up to individuals to produce the drawings to a standard. There is no definitive standard for layout or style. The only standard across engineering workshop drawings is in the creation of orthographic projections and cross-section views. In representing complex, three-dimensional objects in two-dimensional drawings, the objects can be described by at least one view plus material thickness note, 2, 3 or as many views and sections that are required to show all features of object. Applications Architecture The art and design that goes into making buildings is known as architecture. To communicate all aspects of the shape or design, detail drawings are used. In this field, the term plan is often used when referring to the full section view of these drawings as viewed from three feet above finished floor to show the locations of doorways, windows, stairwells, etc. Architectural drawings describe and document an architect's design. Engineering Engineering can be a very broad term. It stems from the Latin ingenerare, meaning "to create". Because this could apply to everything that humans create, it is given a narrower definition in the context of technical drawing. Engineering drawings generally deal with mechanical engineered items, such as manufactured parts and equipment. Engineering drawings are usually created in accordance with standardized conventions for layout, nomenclature, interpretation, appearance (such as typefaces and line styles), size, etc. Its purpose is to accurately and unambiguously capture all the geometric features of a product or a component. The end goal of an engineering drawing is to convey all the required information that will allow a manufacturer to produce that component. Software engineering Software engineering practitioners make use of diagrams for designing software. Formal standards and modelling languages such as Unified Modelling Language (UML) exist but most diagramming happens using informal ad hoc diagrams that illustrate a conceptual model. Practitioners reported that diagramming helped with analysing requirements, design, refactoring, documentation, onboarding, communication with stake holders. Diagrams are often transient or redrawn as required. Redrawn diagrams can act as a form of shared understanding in a team. Related fields Technical illustration Technical illustration is the use of illustration to visually communicate information of a technical nature. Technical illustrations can be component technical drawings or diagrams. The aim of technical illustration is "to generate expressive images that effectively convey certain information via the visual channel to the human observer". The main purpose of technical illustration is to describe or explain these items to a more or less nontechnical audience. The visual image should be accurate in terms of dimensions and proportions, and should provide "an overall impression of what an object is or does, to enhance the viewer's interest and understanding". According to Viola (2005), "illustrative techniques are often designed in a way that even a person with no technical understanding clearly understands the piece of art. 
The use of varying line widths to emphasize mass, proximity, and scale helped to make a simple line drawing more understandable to the lay person. Cross hatching, stippling, and other low abstraction techniques gave greater depth and dimension to the subject matter". Cutaway drawing A cutaway drawing is a technical illustration, in which part of the surface of a three-dimensional model is removed in order to show some of the model's interior in relation to its exterior. The purpose of a cutaway drawing is to "allow the viewer to have a look into an otherwise solid opaque object. Instead of letting the inner object shine through the surrounding surface, parts of outside object are simply removed. This produces a visual appearance as if someone had cutout a piece of the object or sliced it into parts. Cutaway illustrations avoid ambiguities with respect to spatial ordering, provide a sharp contrast between foreground and background objects, and facilitate a good understanding of spatial ordering". Technical drawings Types The two types of technical drawings are based on graphical projection. This is used to create an image of a three-dimensional object onto a two-dimensional surface. Two-dimensional representation Two-dimensional representation uses orthographic projection to create an image where only two of the three dimensions of the object are seen. Three-dimensional representation In a three-dimensional representation, also referred to as a pictorial, all three dimensions of an object are visible. Views Multiview Multiview is a type of orthographic projection. There are two conventions for using multiview, first-angle and third-angle. In both cases, the front or main side of the object is the same. First-angle is drawing the object sides based on where they land. Example, looking at the front side, rotate the object 90 degrees to the right. What is seen will be drawn to the right of the front side. Third-angle is drawing the object sides based on where they are. Example, looking at the front side, rotate the object 90 degrees to the right. What is seen is actually the left side of the object and will be drawn to the left of the front side. Section While multiview relates to external surfaces of an object, section views show an imaginary plane cut through an object. This is often useful to show voids in an object. Auxiliary Auxiliary views utilize an additional projection plane other than the common planes in a multiview. Since the features of an object need to show the true shape and size of the object, the projection plane must be parallel to the object surface. Therefore, any surface that is not in line with the three major axis needs its own projection plane to show the features correctly. Pattern Patterns, sometimes called developments, show the size and shape of a flat piece of material needed for later bending or folding into a three-dimensional shape. Exploded An exploded-view drawing is a technical drawing of an object that shows the relationship or order of assembly of the various parts. It shows the components of an object slightly separated by distance or suspended in surrounding space in the case of a three-dimensional exploded diagram. An object is represented as if there had been a small controlled explosion emanating from the middle of the object, causing the object's parts to be separated relative distances away from their original locations. An exploded view drawing (EVD) can show the intended assembly of mechanical or other parts. 
In mechanical systems, the component closest to the center is usually assembled first or is the main part inside which the other parts are assembled. The EVD can also help to represent the disassembly of parts, where those on the outside are normally removed first. Standards and conventions Basic drafting paper sizes There have been many standard sizes of paper at different times and in different countries, but today most of the world uses the international standard (A4 and its siblings). North America uses its own sizes. Patent drawing The applicant for a patent will be required by law to furnish a drawing of the invention if or when the nature of the case requires a drawing to understand the invention with the job. This drawing must be filed with the application. This includes practically all inventions except compositions of matter or processes, but a drawing may also be useful in the case of many processes. The drawing must show every feature of the invention specified in the claims and is required by the patent office rules to be in a particular form. The Office specifies the size of the sheet on which the drawing is made, the type of paper, the margins, and other details relating to the making of the drawing. The reason for specifying the standards in detail is that the drawings are printed and published in a uniform style when the patent issues and the drawings must also be such that they can be readily understood by persons using the patent descriptions. Sets of technical drawings Working drawings for production Working drawings are the set of technical drawings used during the manufacturing phase of a product. In architecture, these include civil drawings, architectural drawings, structural drawings, mechanical systems drawings, electrical drawings, and plumbing drawings. Assembly drawings Assembly drawings show how different parts go together, identify those parts by number, and have a parts list, often referred to as a bill of materials. In a technical service manual, this type of drawing may be referred to as an exploded view drawing or diagram. These parts may be used in engineering. As-fitted drawings Also called As-Built drawings or As-made drawings. As-fitted drawings represent a record of the completed works, literally 'as fitted'. These are based upon the working drawings and updated to reflect any changes or alterations undertaken during construction or manufacture.
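The multiview projections described above can be illustrated with a small sketch. The axis convention (x for width, y for depth, z for height) and the example block are assumptions made for illustration only; the point is that each principal orthographic view simply discards one coordinate, while the first-angle and third-angle conventions differ only in where the resulting views are placed on the sheet.

```python
def front_view(points):
    """Looking along the depth axis: keep width (x) and height (z)."""
    return [(x, z) for x, y, z in points]

def top_view(points):
    """Looking down the height axis: keep width (x) and depth (y)."""
    return [(x, y) for x, y, z in points]

def side_view(points):
    """Looking along the width axis: keep depth (y) and height (z)."""
    return [(y, z) for x, y, z in points]

if __name__ == "__main__":
    # Eight corners of a 2 x 1 x 1 block (width x depth x height).
    block = [(x, y, z) for x in (0, 2) for y in (0, 1) for z in (0, 1)]
    print("front:", sorted(set(front_view(block))))  # corners of a 2 x 1 rectangle
    print("top:  ", sorted(set(top_view(block))))    # corners of a 2 x 1 rectangle
    print("side: ", sorted(set(side_view(block))))   # corners of a 1 x 1 square
```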
Technology
Basics_5
null
54962
https://en.wikipedia.org/wiki/Geophysics
Geophysics
Geophysics is a subject of natural science concerned with the physical processes and physical properties of the Earth and its surrounding space environment, and the use of quantitative methods for their analysis. Geophysicists, who usually study geophysics, physics, or one of the Earth sciences at the graduate level, complete investigations across a wide range of scientific disciplines. The term geophysics classically refers to solid earth applications: Earth's shape; its gravitational, magnetic, and electromagnetic fields; its internal structure and composition; its dynamics and their surface expression in plate tectonics, the generation of magmas, volcanism, and rock formation. However, modern geophysics organizations and pure scientists use a broader definition that includes the water cycle, including snow and ice; fluid dynamics of the oceans and the atmosphere; electricity and magnetism in the ionosphere and magnetosphere, and solar-terrestrial physics; and analogous problems associated with the Moon and other planets. Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in 132 AD. Isaac Newton applied his theory of mechanics to the tides and the precession of the equinoxes, and instruments were developed to measure the Earth's shape, density and gravity field, as well as the components of the water cycle. In the 20th century, geophysical methods were developed for remote exploration of the solid Earth and the ocean, and geophysics played an essential role in the development of the theory of plate tectonics. Geophysics is applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. In exploration geophysics, geophysical survey data are used to analyze potential petroleum reservoirs and mineral deposits, locate groundwater, find archaeological relics, determine the thickness of glaciers and soils, and assess sites for environmental remediation. Physical phenomena Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences, while some geophysicists conduct research in the planetary sciences. To provide a clearer idea of what constitutes geophysics, this section describes phenomena that are studied in physics and how they relate to the Earth and its surroundings. Geophysicists also investigate the physical processes and properties of the Earth, its fluid layers, and its magnetic field, along with the near-Earth environment in the Solar System, which includes other planetary bodies. Gravity The gravitational pull of the Moon and Sun gives rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes. Therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide. Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits (see gravity anomaly and gravimetry). The surface gravitational field provides information on the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth.
The geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents (such as with very narrow canals). Heat flow The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo and plate tectonics through mantle convection. The main sources of heat are primordial heat left over from the Earth's formation and radioactivity in the planet's upper crust. There are also some contributions from phase transitions. Heat is mostly carried to the surface by thermal convection, although there are two thermal boundary layers – the core–mantle boundary and the lithosphere – in which heat is transported by conduction. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about , and it is a potential source of geothermal energy. Vibrations Seismic waves are vibrations that travel through the Earth's interior or along its surface. The entire Earth can also oscillate in forms that are called normal modes or free oscillations of the Earth. Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection. Recording of seismic waves from controlled sources provides information on the region that the waves travel through. If the density or composition of the rock changes, waves are reflected. Reflections recorded using reflection seismology can provide a wealth of information on the structure of the Earth up to several kilometers deep and are used to increase our understanding of the geology as well as to explore for oil and gas. Changes in the travel direction, called refraction, can be used to infer the deep structure of the Earth. Earthquakes pose a risk to humans. Understanding their mechanisms, which depend on the type of earthquake (e.g., intraplate or deep focus), can lead to better estimates of earthquake risk and improvements in earthquake engineering. Electricity Although we mainly notice electricity during thunderstorms, there is always a downward electric field near the surface that averages 120 volts per meter. The atmosphere is ionized by penetrating galactic cosmic rays, which leaves it with a net positive charge relative to the solid Earth. A current of about 1800 amperes flows in the global circuit. It flows downward from the ionosphere over most of the Earth and back upwards through thunderstorms. The flow is manifested by lightning below the clouds and sprites above. A variety of electric methods are used in geophysical survey. Some measure spontaneous potential, a potential that arises in the ground because of human-made or natural disturbances. Telluric currents flow in Earth and the oceans. They have two causes: electromagnetic induction by the time-varying, external-origin geomagnetic field and motion of conducting bodies (such as seawater) across the Earth's permanent magnetic field. The distribution of telluric current density can be used to detect variations in electrical resistivity of underground structures. Geophysicists can also provide the electric current themselves (see induced polarization and electrical resistivity tomography). Electromagnetic waves Electromagnetic waves occur in the ionosphere and magnetosphere as well as in Earth's outer core.
Dawn chorus is believed to be caused by high-energy electrons that get caught in the Van Allen radiation belt. Whistlers are produced by lightning strikes. Hiss may be generated by both. Electromagnetic waves may also be generated by earthquakes (see seismo-electromagnetics). In the highly conductive liquid iron of the outer core, magnetic fields are generated by electric currents through electromagnetic induction. Alfvén waves are magnetohydrodynamic waves in the magnetosphere or the Earth's core. In the core, they probably have little observable effect on the Earth's magnetic field, but slower waves such as magnetic Rossby waves may be one source of geomagnetic secular variation. Electromagnetic methods that are used for geophysical survey include transient electromagnetics, magnetotellurics, surface nuclear magnetic resonance and electromagnetic seabed logging. Magnetism The Earth's magnetic field protects the Earth from the deadly solar wind and has long been used for navigation. It originates in the fluid motions of the outer core. The magnetic field in the upper atmosphere gives rise to the auroras. The Earth's field is roughly like a tilted dipole, but it changes over time (a phenomenon called geomagnetic secular variation). Mostly the geomagnetic pole stays near the geographic pole, but at random intervals averaging 440,000 to a million years or so, the polarity of the Earth's field reverses. These geomagnetic reversals, analyzed within a Geomagnetic Polarity Time Scale, contain 184 polarity intervals in the last 83 million years, with change in frequency over time, with the most recent brief complete reversal of the Laschamp event occurring 41,000 years ago during the last glacial period. Geologists observed geomagnetic reversal recorded in volcanic rocks, through magnetostratigraphy correlation (see natural remanent magnetization) and their signature can be seen as parallel linear magnetic anomaly stripes on the seafloor. These stripes provide quantitative information on seafloor spreading, a part of plate tectonics. They are the basis of magnetostratigraphy, which correlates magnetic reversals with other stratigraphies to construct geologic time scales. In addition, the magnetization in rocks can be used to measure the motion of continents. Radioactivity Radioactive decay accounts for about 80% of the Earth's internal heat, powering the geodynamo and plate tectonics. The main heat-producing isotopes are potassium-40, uranium-238, uranium-235, and thorium-232. Radioactive elements are used for radiometric dating, the primary method for establishing an absolute time scale in geochronology. Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras. Radiometric mapping using ground and airborne gamma spectrometry can be used to map the concentration and distribution of radioisotopes near the Earth's surface, which is useful for mapping lithology and alteration. Fluid dynamics Fluid motions occur in the magnetosphere, atmosphere, ocean, mantle and core. Even the mantle, though it has an enormous viscosity, flows like a fluid over long time intervals. This flow is reflected in phenomena such as isostasy, post-glacial rebound and mantle plumes. The mantle flow drives plate tectonics and the flow in the Earth's core drives the geodynamo. 
Geophysical fluid dynamics is a primary tool in physical oceanography and meteorology. The rotation of the Earth has profound effects on the Earth's fluid dynamics, often due to the Coriolis effect. In the atmosphere, it gives rise to large-scale patterns like Rossby waves and determines the basic circulation patterns of storms. In the ocean, it drives large-scale circulation patterns as well as Kelvin waves and Ekman spirals at the ocean surface. In the Earth's core, the circulation of the molten iron is structured by Taylor columns. Waves and other phenomena in the magnetosphere can be modeled using magnetohydrodynamics. Mineral physics The physical properties of minerals must be understood to infer the composition of the Earth's interior from seismology, the geothermal gradient and other sources of information. Mineral physicists study the elastic properties of minerals; their high-pressure phase diagrams, melting points and equations of state at high pressure; and the rheological properties of rocks, or their ability to flow. Deformation of rocks by creep makes flow possible, although over short times the rocks are brittle. The viscosity of rocks is affected by temperature and pressure, and in turn, determines the rates at which tectonic plates move. Water is a very complex substance and its unique properties are essential for life. Its physical properties shape the hydrosphere and are an essential part of the water cycle and climate. Its thermodynamic properties determine evaporation and the thermal gradient in the atmosphere. The many types of precipitation involve a complex mixture of processes such as coalescence, supercooling and supersaturation. Some precipitated water becomes groundwater, and groundwater flow includes phenomena such as percolation, while the conductivity of water makes electrical and electromagnetic methods useful for tracking groundwater flow. Physical properties of water such as salinity have a large effect on its motion in the oceans. The many phases of ice form the cryosphere and come in forms like ice sheets, glaciers, sea ice, freshwater ice, snow, and frozen ground (or permafrost). Regions of the Earth Size and form of the Earth Contrary to popular belief, the Earth is not a perfect sphere but instead approximates an ellipsoid, a result of the centrifugal effect of the planet's rotation. This effect causes the planet's diameter to bulge towards the Equator, producing the ellipsoid shape. Earth's shape is constantly changing, and different factors including glacial isostatic rebound (the rebound of the Earth's crust as the pressure of melting ice sheets is released), geological features such as mountains or ocean trenches, tectonic plate dynamics, and natural disasters can further distort the planet's shape. Structure of the interior Evidence from seismology, heat flow at the surface, and mineral physics is combined with the Earth's mass and moment of inertia to infer models of the Earth's interior – its composition, density, temperature, and pressure. For example, the Earth's mean specific gravity is far higher than the typical specific gravity of rocks at the surface, implying that the deeper material is denser. This is also implied by its moment of inertia, which is low compared to that of a sphere of constant density. However, some of the density increase is compression under the enormous pressures inside the Earth. The effect of pressure can be calculated using the Adams–Williamson equation.
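The Adams–Williamson equation relates the rate of density increase with depth to the local gravitational acceleration and to the seismic parameter Φ = V_P² − (4/3)V_S², which is obtained from the observed P- and S-wave speeds. The Python sketch below shows a single step of that calculation; it is illustrative only, and the wave speeds, density and gravity used are rough, hypothetical upper-mantle values rather than figures taken from this article.

```python
# Illustrative sketch (not from the source article): one integration step of the
# Adams-Williamson equation, d(rho)/dz = rho * g / Phi, where the seismic
# parameter Phi = Vp**2 - (4/3) * Vs**2 comes from observed wave speeds.
# The numbers below are rough, hypothetical upper-mantle values.

rho = 3300.0              # density at the starting depth, kg/m^3
g = 9.8                   # gravitational acceleration, m/s^2 (nearly constant here)
vp, vs = 8000.0, 4500.0   # assumed P- and S-wave speeds, m/s
dz = 1000.0               # depth step, m

phi = vp**2 - (4.0 / 3.0) * vs**2   # seismic parameter, m^2/s^2
drho_dz = rho * g / phi             # density gradient due to self-compression
print(f"adiabatic density gradient ~ {drho_dz * 1000:.3f} kg/m^3 per km")
print(f"density after {dz / 1000:.0f} km of self-compression ~ {rho + drho_dz * dz:.1f} kg/m^3")
```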
The conclusion is that pressure alone cannot account for the increase in density. Instead, we know that the Earth's core is composed of an alloy of iron and other minerals. Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear. The outer core is liquid, and the motion of this highly conductive fluid generates the Earth's field. Earth's inner core, however, is solid because of the enormous pressure. Reconstruction of seismic reflections in the deep interior indicates some major discontinuities in seismic velocities that demarcate the major zones of the Earth: inner core, outer core, mantle, lithosphere and crust. The mantle itself is divided into the upper mantle, transition zone, lower mantle and D′′ layer. Between the crust and the mantle is the Mohorovičić discontinuity. The seismic model of the Earth does not by itself determine the composition of the layers. For a complete model of the Earth, mineral physics is needed to interpret seismic velocities in terms of composition. The mineral properties are temperature-dependent, so the geotherm must also be determined. This requires physical theory for thermal conduction and convection and the heat contribution of radioactive elements. The main model for the radial structure of the interior of the Earth is the preliminary reference Earth model (PREM). Some parts of this model have been updated by recent findings in mineral physics (see post-perovskite) and supplemented by seismic tomography. The mantle is mainly composed of silicates, and the boundaries between layers of the mantle are consistent with phase transitions. The mantle acts as a solid for seismic waves, but under high pressures and temperatures, it deforms so that over millions of years it acts like a liquid. This makes plate tectonics possible. Magnetosphere If a planet's magnetic field is strong enough, its interaction with the solar wind forms a magnetosphere. Early space probes mapped out the gross dimensions of the Earth's magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles called the Van Allen radiation belts. Methods Geodesy Geophysical measurements are generally at a particular time and place. Accurate measurements of position, along with earth deformation and gravity, are the province of geodesy. While geodesy and geophysics are separate fields, the two are so closely connected that many scientific organizations such as the American Geophysical Union, the Canadian Geophysical Union and the International Union of Geodesy and Geophysics encompass both. Absolute positions are most frequently determined using the global positioning system (GPS). A three-dimensional position is calculated using messages from four or more visible satellites and referred to the 1980 Geodetic Reference System. An alternative, optical astronomy, combines astronomical coordinates and the local gravity vector to get geodetic coordinates. This method only provides the position in two coordinates and is more difficult to use than GPS. However, it is useful for measuring motions of the Earth such as nutation and Chandler wobble. 
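As a rough illustration of how a three-dimensional position is computed from the signals of four or more satellites, the sketch below solves the standard pseudorange equations (geometric range plus a receiver clock-bias term) by iterative least squares. It is a simplified, hypothetical example: the satellite coordinates, receiver position and clock bias are invented for demonstration and do not represent real ephemeris data or any particular receiver's algorithm.

```python
# Minimal sketch of the idea behind GPS positioning (illustrative; the satellite
# positions and pseudoranges below are made-up numbers, not real ephemeris data).
# Each satellite i gives a pseudorange  p_i = |sat_i - x| + c*b,  where x is the
# receiver position and b its clock bias; four or more such equations are solved
# by iterative least squares (Gauss-Newton).
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pranges, iters=10):
    x = np.zeros(4)  # initial guess: Earth's centre, zero clock bias
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)        # geometric ranges
        resid = pranges - (d + C * x[3])                   # pseudorange residuals
        J = np.hstack([-(sat_pos - x[:3]) / d[:, None],    # Jacobian wrt position
                       np.full((len(d), 1), C)])           # ... and wrt clock bias
        x += np.linalg.lstsq(J, resid, rcond=None)[0]
    return x[:3], x[3]

# Hypothetical example: four satellites at roughly GPS-like altitudes
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
truth = np.array([1111e3, 2222e3, 3333e3])   # invented receiver position
bias = 1e-4                                  # invented 0.1 ms receiver clock error
pr = np.linalg.norm(sats - truth, axis=1) + C * bias
pos, b = solve_position(sats, pr)
print(pos, b)   # recovers approximately the invented position and clock bias
```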
Relative positions of two or more points can be determined using very-long-baseline interferometry. Gravity measurements became part of geodesy because they were needed to relate measurements at the surface of the Earth to the reference coordinate system. Gravity measurements on land can be made using gravimeters deployed either on the surface or in helicopter flyovers. Since the 1960s, the Earth's gravity field has been measured by analyzing the motion of satellites. Sea level can also be measured by satellites using radar altimetry, contributing to a more accurate geoid. In 2002, NASA launched the Gravity Recovery and Climate Experiment (GRACE), in which twin satellites map variations in Earth's gravity field by making measurements of the distance between the two satellites using GPS and a microwave ranging system. Gravity variations detected by GRACE include those caused by changes in ocean currents; runoff and ground water depletion; and melting ice sheets and glaciers. Satellites and space probes Satellites in space have made it possible to collect data not only from the visible light region, but also from other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: their gravity and magnetic fields, which are studied through geophysics and space physics. Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins. Global positioning systems (GPS) and geographical information systems (GIS) Since geophysics is concerned with the shape of the Earth, and by extension the mapping of features around and in the planet, geophysical measurements include high accuracy GPS measurements. These measurements are processed to increase their accuracy through differential GPS processing. Once the geophysical measurements have been processed and inverted, the interpreted results are plotted using GIS. Programs such as ArcGIS and Geosoft were built to meet these needs and include many built-in geophysical functions, such as upward continuation and the calculation of derivatives of the measurements, such as the first vertical derivative. Many geophysics companies have designed in-house geophysics programs that pre-date ArcGIS and Geosoft in order to meet the visualization requirements of a geophysical dataset. Remote sensing Exploration geophysics is a branch of applied geophysics that involves the development and utilization of different seismic or electromagnetic methods with the aim of investigating different energy, mineral and water resources. This is done through the use of various remote sensing platforms such as satellites, aircraft, boats, drones, borehole sensing equipment and seismic receivers. This equipment is often used in conjunction with different geophysical methods, such as magnetic, gravimetric, electromagnetic, radiometric and barometric methods, in order to gather the data. The remote sensing platforms used in exploration geophysics are not perfect and require corrections to accurately account for the effects that the platform itself may have on the collected data.
For example, when gathering aeromagnetic data (magnetic data gathered from aircraft) using a conventional fixed-wing aircraft, the platform has to be adjusted to account for the electromagnetic currents that it may generate as it passes through Earth's magnetic field. There are also corrections related to changes in measured potential field intensity as the Earth rotates, as the Earth orbits the Sun, and as the Moon orbits the Earth. Signal processing Geophysical measurements are often recorded as time-series with GPS location. Signal processing involves the correction of time-series data for unwanted noise or errors introduced by the measurement platform, such as aircraft vibrations in gravity data. It also involves the reduction of sources of noise, such as diurnal corrections in magnetic data. In seismic data, electromagnetic data, and gravity data, processing continues after error corrections to include computational geophysics, which results in the final interpretation of the geophysical measurements in terms of geology. History Geophysics emerged as a separate discipline only in the 19th century, from the intersection of physical geography, geology, astronomy, meteorology, and physics. The first known use of the word geophysics was in German ("Geophysik") by Julius Fröbel in 1834. However, many geophysical phenomena – such as the Earth's magnetic field and earthquakes – have been investigated since the ancient era. Ancient and classical eras The magnetic compass existed in China back as far as the fourth century BC. It was used as much for feng shui as for navigation on land. It was not until good steel needles could be forged that compasses were used for navigation at sea; before that, they could not retain their magnetism long enough to be useful. The first mention of a compass in Europe was in 1190 AD. Around 240 BC, Eratosthenes of Cyrene deduced that the Earth was round and measured the circumference of Earth with great precision. He developed a system of latitude and longitude. Perhaps the earliest contribution to seismology was the invention of a seismoscope by the prolific inventor Zhang Heng in 132 AD. This instrument was designed to drop a bronze ball from the mouth of a dragon into the mouth of a toad. By looking at which of eight toads had the ball, one could determine the direction of the earthquake. It was 1571 years before the first design for a seismoscope was published in Europe, by Jean de la Hautefeuille. It was never built. Beginnings of modern science The 17th century had major milestones that marked the beginning of modern science. In 1600, William Gilbert released De Magnete, in which he described a series of experiments on both natural magnets (lodestones) and artificially magnetized iron. His experiments showed that a small compass needle (versorium) reproduced magnetic behaviour when brought near a spherical magnet, and that it experienced 'magnetic dip' when pivoted on a horizontal axis. His findings led to the deduction that compasses point north because the Earth itself is a giant magnet. In 1687 Isaac Newton published his work titled Principia, which was pivotal in the development of modern scientific fields such as astronomy and physics.
In it, Newton laid the foundations for classical mechanics and gravitation and explained geophysical phenomena such as the precession of the equinoxes (the slow change in the orientation of Earth's rotation axis, which gradually shifts the apparent positions of the star patterns along the ecliptic). Newton's theory of gravity had gained so much success that it changed the main objective of physics in that era to the unravelling of nature's fundamental forces and their characterization in laws. The first seismometer, an instrument capable of keeping a continuous record of seismic activity, was built by James Forbes in 1844.
Physical sciences
Geophysics
null
54969
https://en.wikipedia.org/wiki/Snail
Snail
A snail is a shelled gastropod. The name is most often applied to land snails, terrestrial pulmonate gastropod molluscs. However, the common name snail is also used for most of the members of the molluscan class Gastropoda that have a coiled shell that is large enough for the animal to retract completely into. When the word "snail" is used in this most general sense, it includes not just land snails but also numerous species of sea snails and freshwater snails. Gastropods that naturally lack a shell, or have only an internal shell, are mostly called slugs, and land snails that have only a very small shell (that they cannot retract into) are often called semi-slugs. Snails have considerable human relevance, including as food items, as pests, and as vectors of disease, and their shells are used as decorative objects and are incorporated into jewellery. The snail has also had some cultural significance, tending to be associated with lethargy. The snail has also been used as a figure of speech in reference to slow-moving things. Overview Snails that respire using a lung belong to the group Pulmonata. As traditionally defined, the Pulmonata were found to be polyphyletic in a molecular study per Jörger et al., dating from 2010. But snails with gills also form a polyphyletic group; in other words, snails with lungs and snails with gills form a number of taxonomic groups that are not necessarily more closely related to each other than they are related to some other groups. Both snails that have lungs and snails that have gills have diversified so widely over geological time that a few species with gills can be found on land and numerous species with lungs can be found in freshwater. Even a few marine species have lungs. Snails can be found in a very wide range of environments, including ditches, deserts, and the abyssal depths of the sea. Although land snails may be more familiar to laymen, marine snails constitute the majority of snail species, and have much greater diversity and a greater biomass. Numerous kinds of snail can also be found in fresh water. Most snails have thousands of microscopic tooth-like structures located on a banded ribbon-like tongue called a radula. The radula works like a file, ripping food into small pieces. Many snails are herbivorous, eating plants or rasping algae from surfaces with their radulae, though a few land species and many marine species are omnivores or predatory carnivores. Snails cannot absorb colored pigments when eating paper or cardboard so their feces are also colored. Several species of the genus Achatina and related genera are known as giant African land snails; some grow to from snout to tail, and weigh . The largest living species of sea snail is Syrinx aruanus; its shell can measure up to in length, and the whole animal with the shell can weigh up to . The smallest land snail, Angustopila psammion, was discovered in 2022 and measures 0.6 mm in diameter. The largest known land gastropod is the African giant snail Achatina achatina, the largest recorded specimen of which measured from snout to tail when fully extended, with a shell length of in December 1978. It weighed exactly 900 g (about 2 lb). Named Gee Geronimo, this snail was owned by Christopher Hudson (1955–79) of Hove, East Sussex, UK, and was collected in Sierra Leone in June 1976. Snails are protostomes. That means during development, in the gastrulation phase, the blastopore forms the mouth first. Cleavage in snails is spiral holoblastic patterning. 
In spiral holoblastic cleavage, the cleavage plane rotates each division and the cell divisions are complete. Snails do not undergo metamorphosis after hatching. Snails hatch in the form of small adults. The only additional development they will undergo is to consume calcium to strengthen their shell. Snails can be male, female, hermaphroditic, or parthenogenetic so there are many different systems of sexual determination. Anatomy Snails have complex organ systems and anatomies that differ greatly from most animals. Snails and most other Mollusca share three anatomical features; the foot, the mantle, and the radula. Foot: The foot is a muscular organ used by Gastropods for locomotion. Gastropods' stomachs are located within their foot. Both land and sea snails travel by contracting foot muscles to deform the mucus layer beneath it into different wave-like patterns. Mantle: The mantle is the organ that produces shells for most species of mollusca. In snails, the mantle secretes the shell along the snail shell opening, continuously growing and producing the shell for the entirety of the snail’s life. The mantle creates a compartment known as the mantle cavity and is used by many mollusca as the surface where gas exchange occurs. Snails that use the mantle cavity as a lung are known as Pulmonate snails. Other snails may only have a gill. Snails in the Caenogastropoda families like Ampullariidae, have both a gill and a lung. Shell: Snail shells are mainly composed of a mixture of proteins called conchin, and calcium carbonate. Conchin is the main component in the outer layer of the shell, known as the periostracum. The inner layers of the shell are composed of a network of calcium carbonate, conchin, and different mineral salts. The mantle produces the shell through addition around a central axis called the columella, causing a spiraling pattern. The spiraling patterns on a snail’s shell are known as coils or whorls. Whorl size generally increases as the snail ages. Size differences in shell size are believed to be mainly influenced by genetic and environmental components. Moister conditions often correlate with larger snails. In larger populations, adult snails attain smaller shell sizes due to the effects of pheromones on growth rate. Radula: The radula is an anatomical structure used by most species of Mollusca for feeding. Gastropods are morphologically highly variable and have diverse feeding strategies. Snails can be herbivores, detritivores, scavengers, parasites, ciliary feeders, or have highly specialized predation. Nearly all snails utilize a feeding apparatus including the oral structures of one or more jaws and the radula. The radula comprises a chitinous ribbon with teeth arranged in transverse and longitudinal rows. The radula continually renews itself during the entire lifespan of a mollusk. The teeth and membrane are continuously synthesized in the radular sac and then shifted forward towards the working zone of the radula. The teeth harden and mineralize during their travel to the working zone. The presence of the radula is common throughout most snail species, but often differs in many characteristics, like the shape, size, and number of odontoblasts that form a tooth. Diet The average snail's diet varies greatly depending on the species, including different feeding styles from herbivores to highly specialized feeders and parasites. Some snails like the Euglandina rosea, or rosy wolfsnail, are carnivorous and prey on other snails. 
However, most land snails are herbivores or omnivores. Among land snails, there is also a large variation in preference for specific food. For example, Cepaea nemoralis, or the grove snail, prefers dead plant material over fresh herbs or grasses. Age may also impact food preference, with adult grove snails showing a significantly larger preference for dead plant material than juvenile grove snails. Other snails, like the generalist herbivore Arianta arbustorum, or copse snail, choose their meals based on availability, consuming a mix of arthropods, wilted flowers, fresh and decayed plant material, and soil. Generally, land snails are most active at night due to the damp weather. The humid nighttime air minimizes water evaporation and is beneficial to land snails because their movement requires mucus, which is mostly composed of water. In addition to aiding movement, mucus plays a vital role in transporting food from the gill to the mouth, cleansing the mantle cavity, and trapping food before ingestion. Types of snails by habitat Slugs Gastropods that lack a conspicuous shell are commonly called slugs rather than snails. Some species of slug have a maroon-brown shell, some have only an internal vestige that serves mainly as a calcium lactate repository, and others have some to no shell at all. Other than that there is little morphological difference between slugs and snails. There are however important differences in habitats and behavior. A shell-less animal is much more maneuverable and compressible, so even quite large land slugs can take advantage of habitats or retreats with very little space, retreats that would be inaccessible to a similar-sized snail. Slugs squeeze themselves into confined spaces such as under loose bark on trees or under stone slabs, logs or wooden boards lying on the ground. In such retreats they are in less danger from either predators or desiccation. Those are often suitable places for laying their eggs. Slugs as a group are far from monophyletic; scientifically speaking "slug" is a term of convenience with little taxonomic significance. The reduction or loss of the shell has evolved many times independently within several very different lineages of gastropods. The various taxa of land and sea gastropods with slug morphology occur within numerous higher taxonomic groups of shelled species; such independent slug taxa are not in general closely related to one another. Parasitic diseases Snails can also be associated with parasitic diseases such as schistosomiasis, angiostrongyliasis, fasciolopsiasis, opisthorchiasis, fascioliasis, paragonimiasis and clonorchiasis, which can be transmitted to humans. Human relevance Land snails are known as an agricultural and garden pest but some species are an edible delicacy and occasionally household pets. In addition, their mucus can also be used for skin care products. In agriculture There is a variety of snail-control measures that gardeners and farmers use in an attempt to reduce damage to valuable plants. Traditional pesticides are still used, as are many less toxic control options such as concentrated garlic or wormwood solutions. Copper metal is also a snail repellent, and thus a copper band around the trunk of a tree will prevent snails from climbing up and reaching the foliage and fruit. A layer of a dry, finely ground, and scratchy substance such as diatomaceous earth can also deter snails. 
The decollate snail (Rumina decollata) will capture and eat garden snails, and because of this it has sometimes been introduced as a biological pest control agent. However, this is not without problems, as the decollate snail is just as likely to attack and devour other gastropods that may represent a valuable part of the native fauna of the region. Textiles Certain varieties of snails, notably the family Muricidae, produce a secretion that is a color-fast natural dye. The ancient Tyrian purple was made in this way as were other purple and blue dyes. The extreme expense of extracting this secretion is sufficient quantities limited its use to the very wealthy. It is such dyes as these that led to certain shades of purple and blue being associated with royalty and wealth. As pets Throughout history, snails have been kept as pets. There are many famous snails such as Lefty (Born Jeremy) and within fiction, Gary and Brian the snail. Culinary use In French cuisine, edible snails are served for instance in Escargot à la Bourguignonne. The practice of rearing snails for food is known as heliciculture. For purposes of cultivation, the snails are kept in a dark place in a wired cage with dry straw or dry wood. Coppiced wine-grape vines are often used for this purpose. During the rainy period, the snails come out of hibernation and release most of their mucus onto the dry wood/straw. The snails are then prepared for cooking. Their texture when cooked is slightly chewy and tender. As well as being eaten as gourmet food, several species of land snails provide an easily harvested source of protein to many people in poor communities around the world. Many land snails are valuable because they can feed on a wide range of agricultural wastes, such as shed leaves in banana plantations. In some countries, giant African land snails are produced commercially for food. Land snails, freshwater snails and sea snails are all eaten in many countries. In certain parts of the world snails are fried. For example, in Indonesia, they are fried as satay, a dish known as sate kakul. The eggs of certain snail species are eaten in a fashion similar to the way caviar is eaten. In Bulgaria, snails are traditionally cooked in an oven with rice or fried in a pan with vegetable oil and red paprika powder. Before they are used for those dishes, however, they are thoroughly boiled in hot water (for up to 90 minutes) and manually extracted from their shells. The two species most commonly used for food in the country are Helix lucorum and Helix pomatia. Snails and slug species that are not normally eaten in certain areas have occasionally been used as famine food in historical times. A history of Scotland written in the 1800s recounts a description of various snails and their use as food items in times of plague. Cultural depictions Because of its slowness, the snail has traditionally been seen as a symbol of laziness. In Christian culture, it has been used as a symbol of the deadly sin of sloth. In Mayan mythology, the snail is associated with sexual desire, being personified by the god Uayeb. Snails were widely noted and used in divination. The Greek poet Hesiod wrote that snails signified the time to harvest by climbing the stalks, while the Aztec moon god Tecciztecatl bore a snail shell on his back. This symbolised rebirth; the snail's penchant for appearing and disappearing was analogised with the moon. 
Keong Emas (Javanese and Indonesian for Golden Snail) is a popular Javanese folklore about a princess magically transformed and contained in a golden snail shell. The folklore is a part of popular Javanese Panji cycle telling the stories about the prince Panji Asmoro Bangun (also known as Raden Inu Kertapati) and his consort, princess Dewi Sekartaji (also known as Dewi Chandra Kirana). In contemporary speech, the expression "a snail's pace" is often used to describe a slow, inefficient process. The phrase "snail mail" is used to mean regular postal service delivery of paper messages as opposed to the delivery of email, which can be virtually instantaneous.
Biology and health sciences
Mollusks
null
55017
https://en.wikipedia.org/wiki/Fusion%20power
Fusion power
Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as fusion reactors. Research into fusion reactors began in the 1940s, but as of 2024, no device has reached net power, although net positive reactions have been achieved. Fusion processes require fuel and a confined environment with sufficient temperature, pressure, and confinement time to create a plasma in which fusion can occur. The combination of these figures that results in a power-producing system is known as the Lawson criterion. In stars the most common fuel is hydrogen, and gravity provides extremely long confinement times that reach the conditions needed for fusion energy production. Proposed fusion reactors generally use heavy hydrogen isotopes such as deuterium and tritium (and especially a mixture of the two), which react more easily than protium (the most common hydrogen isotope) and produce a helium nucleus and an energized neutron, to allow them to reach the Lawson criterion requirements with less extreme conditions. Most designs aim to heat their fuel to around 100 million kelvins, which presents a major challenge in producing a successful design. Tritium is extremely rare on Earth, having a half life of only ~12.3 years. Consequently, during the operation of envisioned fusion reactors, known as breeder reactors, helium cooled pebble beds (HCPBs) are subjected to neutron fluxes to generate tritium to complete the fuel cycle. As a source of power, nuclear fusion has a number of potential advantages compared to fission. These include reduced radioactivity in operation, little high-level nuclear waste, ample fuel supplies (assuming tritium breeding or some forms of aneutronic fuels), and increased safety. However, the necessary combination of temperature, pressure, and duration has proven to be difficult to produce in a practical and economical manner. A second issue that affects common reactions is managing neutrons that are released during the reaction, which over time degrade many common materials used within the reaction chamber. Fusion researchers have investigated various confinement concepts. The early emphasis was on three main systems: z-pinch, stellarator, and magnetic mirror. The current leading designs are the tokamak and inertial confinement (ICF) by laser. Both designs are under research at very large scales, most notably the ITER tokamak in France and the National Ignition Facility (NIF) laser in the United States. Researchers are also studying other designs that may offer less expensive approaches. Among these alternatives, there is increasing interest in magnetized target fusion and inertial electrostatic confinement, and new variations of the stellarator. Background Mechanism Fusion reactions occur when two or more atomic nuclei come close enough for long enough that the nuclear force pulling them together exceeds the electrostatic force pushing them apart, fusing them into heavier nuclei. For nuclei heavier than iron-56, the reaction is endothermic, requiring an input of energy. The heavy nuclei bigger than iron have many more protons resulting in a greater repulsive force. For nuclei lighter than iron-56, the reaction is exothermic, releasing energy when they fuse. 
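As a worked example of the energy released by an exothermic fusion reaction, the short calculation below applies the mass defect, E = Δm·c², to the deuterium–tritium reaction discussed later in this article, using standard published atomic masses (rounded).

```python
# Worked example (for illustration): the energy released when light nuclei fuse
# follows from the mass defect, E = (mass of reactants - mass of products) * c^2.
# The atomic mass values below are standard published values, rounded.

U_TO_MEV = 931.494                 # energy equivalent of one atomic mass unit, MeV
m_D, m_T = 2.014102, 3.016049      # deuterium and tritium, atomic mass units
m_He4, m_n = 4.002602, 1.008665    # helium-4 and the neutron, atomic mass units

mass_defect = (m_D + m_T) - (m_He4 + m_n)
print(f"D + T -> He-4 + n releases about {mass_defect * U_TO_MEV:.1f} MeV")  # ~17.6 MeV
```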
Since hydrogen has a single proton in its nucleus, it requires the least effort to attain fusion, and yields the most net energy output. Also since it has one electron, hydrogen is the easiest fuel to fully ionize. The repulsive electrostatic interaction between nuclei operates across larger distances than the strong force, which has a range of roughly one femtometer—the diameter of a proton or neutron. The fuel atoms must be supplied enough kinetic energy to approach one another closely enough for the strong force to overcome the electrostatic repulsion in order to initiate fusion. The "Coulomb barrier" is the quantity of kinetic energy required to move the fuel atoms near enough. Atoms can be heated to extremely high temperatures or accelerated in a particle accelerator to produce this energy. An atom loses its electrons once it is heated past its ionization energy. An ion is the name for the resultant bare nucleus. The result of this ionization is plasma, which is a heated cloud of ions and free electrons that were formerly bound to them. Plasmas are electrically conducting and magnetically controlled because the charges are separated. This is used by several fusion devices to confine the hot particles. Cross section A reaction's cross section, denoted σ, measures the probability that a fusion reaction will happen. This depends on the relative velocity of the two nuclei. Higher relative velocities generally increase the probability, but the probability begins to decrease again at very high energies. In a plasma, particle velocity can be characterized using a probability distribution. If the plasma is thermalized, the distribution looks like a Gaussian curve, or Maxwell–Boltzmann distribution. In this case, it is useful to use the average cross section over the velocity distribution, written ⟨σv⟩. This enters the volumetric fusion rate, P_fusion = n_A n_B ⟨σv⟩ E_fusion, where: P_fusion is the energy made by fusion, per time and volume; n_A and n_B are the number densities of species A and B in the volume; ⟨σv⟩ is the cross section of that reaction, averaged over all the relative velocities of the two species; and E_fusion is the energy released by each fusion reaction. Lawson criterion The Lawson criterion considers the energy balance between the energy produced in fusion reactions and the energy being lost to the environment. In order to generate usable energy, a system would have to produce more energy than it loses. Lawson assumed an energy balance of the form P_net = η_capture (P_fusion − P_conduction − P_radiation), where: P_net is the net power from fusion; η_capture is the efficiency of capturing the output of the fusion; P_fusion is the rate of energy generated by the fusion reactions; P_conduction is the conduction loss as energetic mass leaves the plasma; and P_radiation is the radiation loss as energy leaves as light. The rate of fusion, and thus P_fusion, depends on the temperature and density of the plasma. The plasma loses energy through conduction and radiation. Conduction occurs when ions, electrons, or neutrals impact other substances, typically a surface of the device, and transfer a portion of their kinetic energy to the other atoms. The rate of conduction is also based on the temperature and density. Radiation is energy that leaves the cloud as light. Radiation also increases with temperature as well as the mass of the ions. Fusion power systems must operate in a region where the rate of fusion is higher than the losses. Triple product: density, temperature, time The Lawson criterion argues that a machine holding a thermalized and quasi-neutral plasma has to generate enough energy to overcome its energy losses.
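A toy numerical reading of the Lawson energy balance described above is sketched below. All of the numbers are hypothetical, chosen only to show how the capture efficiency and the loss terms combine; they do not describe any real or planned device.

```python
# Toy numerical check of the Lawson-style power balance (all numbers hypothetical,
# chosen only to illustrate how the terms combine; they describe no real device).
eta_capture = 0.4      # assumed fraction of fusion output converted to useful power
P_fusion = 500.0       # assumed power generated by fusion reactions, MW
P_conduction = 150.0   # assumed loss as energetic particles leave the plasma, MW
P_radiation = 100.0    # assumed loss as light (e.g. bremsstrahlung), MW

P_net = eta_capture * (P_fusion - P_conduction - P_radiation)
print(f"net power ~ {P_net:.0f} MW")   # positive, so this hypothetical plasma is a net producer
```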
The amount of energy released in a given volume is a function of the temperature, and thus the reaction rate on a per-particle basis, the density of particles within that volume, and finally the confinement time, the length of time that energy stays within the volume. This is known as the "triple product": the plasma density, temperature, and confinement time. In magnetic confinement, the density is low, on the order of a "good vacuum". For instance, in the ITER device the fuel density is about one-millionth of atmospheric density. This means that the temperature and/or confinement time must increase. Fusion-relevant temperatures have been achieved using a variety of heating methods that were developed in the early 1970s. In modern machines, the major remaining issue has been the confinement time. Plasmas in strong magnetic fields are subject to a number of inherent instabilities, which must be suppressed to reach useful durations. One way to do this is to simply make the reactor volume larger, which reduces the rate of leakage due to classical diffusion. This is why ITER is so large. In contrast, inertial confinement systems approach useful triple product values via higher density, and have short confinement intervals. In NIF, the initial frozen hydrogen fuel load has a density less than that of water, which is increased to about 100 times the density of lead. In these conditions, the rate of fusion is so high that the fuel fuses in the microseconds it takes for the heat generated by the reactions to blow the fuel apart. Although NIF is also large, this is a function of its "driver" design, not inherent to the fusion process. Energy capture Multiple approaches have been proposed to capture the energy that fusion produces. The simplest is to heat a fluid. The commonly targeted D-T reaction releases much of its energy as fast-moving neutrons. Electrically neutral, the neutron is unaffected by the confinement scheme. In most designs, it is captured in a thick "blanket" of lithium surrounding the reactor core. When struck by a high-energy neutron, the blanket heats up. It is then actively cooled with a working fluid that drives a turbine to produce power. Another design proposed to use the neutrons to breed fission fuel in a blanket of nuclear waste, a concept known as a fission-fusion hybrid. In these systems, the power output is enhanced by the fission events, and power is extracted using systems like those in conventional fission reactors. Designs that use other fuels, notably the proton-boron aneutronic fusion reaction, release much more of their energy in the form of charged particles. In these cases, power extraction systems based on the movement of these charges are possible. Direct energy conversion was developed at Lawrence Livermore National Laboratory (LLNL) in the 1980s as a method to maintain a voltage directly using fusion reaction products. This has demonstrated energy capture efficiency of 48 percent. Plasma behavior Plasma is an ionized gas that conducts electricity. In bulk, it is modeled using magnetohydrodynamics, which is a combination of the Navier–Stokes equations governing fluids and Maxwell's equations governing how magnetic and electric fields behave. Fusion exploits several plasma properties, including: Self-organizing plasma conducts electric and magnetic fields. Its motions generate fields that can in turn contain it. Diamagnetic plasma can generate its own internal magnetic field. This can reject an externally applied magnetic field, making it diamagnetic.
Magnetic mirrors can reflect plasma when it moves from a low to high density field.:24 Methods Magnetic confinement Tokamak: the most well-developed and well-funded approach. This method drives hot plasma around in a magnetically confined torus, with an internal current. When completed, ITER will become the world's largest tokamak. As of September 2018 an estimated 226 experimental tokamaks were either planned, decommissioned or operating (50) worldwide. Spherical tokamak: also known as spherical torus. A variation on the tokamak with a spherical shape. Stellarator: Twisted rings of hot plasma. The stellarator attempts to create a natural twisted plasma path, using external magnets. Stellarators were developed by Lyman Spitzer in 1950 and evolved into four designs: Torsatron, Heliotron, Heliac and Helias. One example is Wendelstein 7-X, a German device. It is the world's largest stellarator. Internal rings: Stellarators create a twisted plasma using external magnets, while tokamaks do so using a current induced in the plasma. Several classes of designs provide this twist using conductors inside the plasma. Early calculations showed that collisions between the plasma and the supports for the conductors would remove energy faster than fusion reactions could replace it. Modern variations, including the Levitated Dipole Experiment (LDX), use a solid superconducting torus that is magnetically levitated inside the reactor chamber. Magnetic mirror: Developed by Richard F. Post and teams at Lawrence Livermore National Laboratory (LLNL) in the 1960s. Magnetic mirrors reflect plasma back and forth in a line. Variations included the Tandem Mirror, magnetic bottle and the biconic cusp. A series of mirror machines were built by the US government in the 1970s and 1980s, principally at LLNL. However, calculations in the 1970s estimated it was unlikely these would ever be commercially useful. Bumpy torus: A number of magnetic mirrors are arranged end-to-end in a toroidal ring. Any fuel ions that leak out of one are confined in a neighboring mirror, permitting the plasma pressure to be raised arbitrarily high without loss. An experimental facility, the ELMO Bumpy Torus or EBT was built and tested at Oak Ridge National Laboratory (ORNL) in the 1970s. Field-reversed configuration: This device traps plasma in a self-organized quasi-stable structure; where the particle motion makes an internal magnetic field which then traps itself. Spheromak: Similar to a field-reversed configuration, a semi-stable plasma structure made by using the plasmas' self-generated magnetic field. A spheromak has both toroidal and poloidal fields, while a field-reversed configuration has no toroidal field. Dynomak is a spheromak that is formed and sustained using continuous magnetic flux injection. Reversed field pinch: Here the plasma moves inside a ring. It has an internal magnetic field. Moving out from the center of this ring, the magnetic field reverses direction. Inertial confinement Indirect drive: Lasers heat a structure known as a Hohlraum that becomes so hot it begins to radiate x-ray light. These x-rays heat a fuel pellet, causing it to collapse inward to compress the fuel. The largest system using this method is the National Ignition Facility, followed closely by Laser Mégajoule. Direct drive: Lasers directly heat the fuel pellet. Notable direct drive experiments have been conducted at the Laboratory for Laser Energetics (LLE) and the GEKKO XII facilities. 
Good implosions require fuel pellets with close to a perfect shape in order to generate a symmetrical inward shock wave that produces the high-density plasma. Fast ignition: This method uses two laser blasts. The first blast compresses the fusion fuel, while the second ignites it. this technique had lost favor for energy production. Magneto-inertial fusion or Magnetized Liner Inertial Fusion: This combines a laser pulse with a magnetic pinch. The pinch community refers to it as magnetized liner inertial fusion while the ICF community refers to it as magneto-inertial fusion. Ion Beams: Ion beams replace laser beams to heat the fuel. The main difference is that the beam has momentum due to mass, whereas lasers do not. As of 2019 it appears unlikely that ion beams can be sufficiently focused spatially and in time. Z-machine: Sends an electric current through thin tungsten wires, heating them sufficiently to generate x-rays. Like the indirect drive approach, these x-rays then compress a fuel capsule. Magnetic or electric pinches Z-pinch: A current travels in the z-direction through the plasma. The current generates a magnetic field that compresses the plasma. Pinches were the first method for human-made controlled fusion. The z-pinch has inherent instabilities that limit its compression and heating to values too low for practical fusion. The largest such machine, the UK's ZETA, was the last major experiment of the sort. The problems in z-pinch led to the tokamak design. The dense plasma focus is a possibly superior variation. Theta-pinch: A current circles around the outside of a plasma column, in the theta direction. This induces a magnetic field running down the center of the plasma, as opposed to around it. The early theta-pinch device Scylla was the first to conclusively demonstrate fusion, but later work demonstrated it had inherent limits that made it uninteresting for power production. Sheared Flow Stabilized Z-Pinch: Research at the University of Washington under Uri Shumlak investigated the use of sheared-flow stabilization to smooth out the instabilities of Z-pinch reactors. This involves accelerating neutral gas along the axis of the pinch. Experimental machines included the FuZE and Zap Flow Z-Pinch experimental reactors. In 2017, British technology investor and entrepreneur Benj Conway, together with physicists Brian Nelson and Uri Shumlak, co-founded Zap Energy to attempt to commercialize the technology for power production. Screw Pinch: This method combines a theta and z-pinch for improved stabilization. Inertial electrostatic confinement Fusor: An electric field heats ions to fusion conditions. The machine typically uses two spherical cages, a cathode inside the anode, inside a vacuum. These machines are not considered a viable approach to net power because of their high conduction and radiation losses. They are simple enough to build that amateurs have fused atoms using them. Polywell: Attempts to combine magnetic confinement with electrostatic fields, to avoid the conduction losses generated by the cage. Other Magnetized target fusion: Confines hot plasma using a magnetic field and squeezes it using inertia. Examples include LANL FRX-L machine, General Fusion (piston compression with liquid metal liner), HyperJet Fusion (plasma jet compression with plasma liner). Uncontrolled: Fusion has been initiated by man, using uncontrolled fission explosions to stimulate fusion. Early proposals for fusion power included using bombs to initiate reactions. See Project PACER. 
Colliding beam fusion: A beam of high energy particles fired at another beam or target can initiate fusion. This was used in the 1970s and 1980s to study the cross sections of fusion reactions. However beam systems cannot be used for power because keeping a beam coherent takes more energy than comes from fusion. Muon-catalyzed fusion: This approach replaces electrons in diatomic molecules of isotopes of hydrogen with muons—more massive particles with the same electric charge. Their greater mass compresses the nuclei enough such that the strong interaction can cause fusion. As of 2007 producing muons required more energy than can be obtained from muon-catalyzed fusion. Lattice confinement fusion: Lattice confinement fusion (LCF) is a type of nuclear fusion in which deuteron-saturated metals are exposed to gamma radiation or ion beams, such as in an IEC fusor, avoiding the confined high-temperature plasmas used in other methods of fusion. Common tools Many approaches, equipment, and mechanisms are employed across multiple projects to address fusion heating, measurement, and power production. Machine learning A deep reinforcement learning system has been used to control a tokamak-based reactor. The system was able to manipulate the magnetic coils to manage the plasma. The system was able to continuously adjust to maintain appropriate behavior (more complex than step-based systems). In 2014, Google began working with California-based fusion company TAE Technologies to control the Joint European Torus (JET) to predict plasma behavior. DeepMind has also developed a control scheme with TCV. Heating Electrostatic heating: an electric field can do work on charged ions or electrons, heating them. Neutral beam injection: hydrogen is ionized and accelerated by an electric field to form a charged beam that is shone through a source of neutral hydrogen gas towards the plasma which itself is ionized and contained by a magnetic field. Some of the intermediate hydrogen gas is accelerated towards the plasma by collisions with the charged beam while remaining neutral: this neutral beam is thus unaffected by the magnetic field and so reaches the plasma. Once inside the plasma the neutral beam transmits energy to the plasma by collisions which ionize it and allow it to be contained by the magnetic field, thereby both heating and refueling the reactor in one operation. The remainder of the charged beam is diverted by magnetic fields onto cooled beam dumps. Radio frequency heating: a radio wave causes the plasma to oscillate (i.e., microwave oven). This is also known as electron cyclotron resonance heating, using for example gyrotrons, or dielectric heating. Magnetic reconnection: when plasma gets dense, its electromagnetic properties can change, which can lead to magnetic reconnection. Reconnection helps fusion because it instantly dumps energy into a plasma, heating it quickly. Up to 45% of the magnetic field energy can heat the ions. Magnetic oscillations: varying electric currents can be supplied to magnetic coils that heat plasma confined within a magnetic wall. Antiproton annihilation: antiprotons injected into a mass of fusion fuel can induce thermonuclear reactions. This possibility as a method of spacecraft propulsion, known as antimatter-catalyzed nuclear pulse propulsion, was investigated at Pennsylvania State University in connection with the proposed AIMStar project. Measurement The diagnostics of a fusion scientific reactor are extremely complex and varied. 
The diagnostics required for a fusion power reactor will be various but less complicated than those of a scientific reactor as by the time of commercialization, many real-time feedback and control diagnostics will have been perfected. However, the operating environment of a commercial fusion reactor will be harsher for diagnostic systems than in a scientific reactor because continuous operations may involve higher plasma temperatures and higher levels of neutron irradiation. In many proposed approaches, commercialization will require the additional ability to measure and separate diverter gases, for example helium and impurities, and to monitor fuel breeding, for instance the state of a tritium breeding liquid lithium liner. The following are some basic techniques. Flux loop: a loop of wire is inserted into the magnetic field. As the field passes through the loop, a current is made. The current measures the total magnetic flux through that loop. This has been used on the National Compact Stellarator Experiment, the polywell, and the LDX machines. A Langmuir probe, a metal object placed in a plasma, can be employed. A potential is applied to it, giving it a voltage against the surrounding plasma. The metal collects charged particles, drawing a current. As the voltage changes, the current changes. This makes an IV Curve. The IV-curve can be used to determine the local plasma density, potential and temperature. Thomson scattering: "Light scatters" from plasma can be used to reconstruct plasma behavior, including density and temperature. It is common in Inertial confinement fusion, Tokamaks, and fusors. In ICF systems, firing a second beam into a gold foil adjacent to the target makes x-rays that traverse the plasma. In tokamaks, this can be done using mirrors and detectors to reflect light. Neutron detectors: Several types of neutron detectors can record the rate at which neutrons are produced. X-ray detectors Visible, IR, UV, and X-rays are emitted anytime a particle changes velocity. If the reason is deflection by a magnetic field, the radiation is cyclotron radiation at low speeds and synchrotron radiation at high speeds. If the reason is deflection by another particle, plasma radiates X-rays, known as Bremsstrahlung radiation. Power production Neutron blankets absorb neutrons, which heats the blanket. Power can be extracted from the blanket in various ways: Steam turbines can be driven by heat transferred into a working fluid that turns into steam, driving electric generators. Neutron blankets: These neutrons can regenerate spent fission fuel. Tritium can be produced using a breeder blanket of liquid lithium or a helium cooled pebble bed made of lithium-bearing ceramic pebbles. Direct conversion: The kinetic energy of a particle can be converted into voltage. It was first suggested by Richard F. Post in conjunction with magnetic mirrors, in the late 1960s. It has been proposed for Field-Reversed Configurations as well as Dense Plasma Focus devices. The process converts a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. This method has demonstrated an experimental efficiency of 48 percent. Traveling-wave tubes pass charged helium atoms at several megavolts and just coming off the fusion reaction through a tube with a coil of wire around the outside. This passing charge at high voltage pulls electricity through the wire. 
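The following sketch tallies the power-production chain just described for a hypothetical D-T plant. Every figure in it is an assumption made for illustration (the 80% neutron share is the value quoted for D-T fuel later in the article, while the fusion power, thermal efficiency and recirculating power are invented), and heat from charged fusion products is ignored for simplicity.

```python
# Rough bookkeeping sketch of the D-T power-production chain described above.
# All figures are hypothetical and rounded; real plant studies differ in detail.
P_fusion_MW = 1000.0            # assumed fusion power
neutron_fraction = 0.8          # ~80% of D-T energy is carried by 14.1 MeV neutrons
thermal_efficiency = 0.35       # assumed steam-cycle conversion efficiency
recirculating_MW = 100.0        # assumed power needed for heating, magnets, pumps

blanket_heat = P_fusion_MW * neutron_fraction        # heat deposited in the neutron blanket
gross_electric = blanket_heat * thermal_efficiency   # electricity from the steam turbines
net_electric = gross_electric - recirculating_MW     # what is left for the grid
# (heat from the charged fusion products is neglected in this simple tally)
print(f"gross ~ {gross_electric:.0f} MWe, net ~ {net_electric:.0f} MWe")
```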
Confinement Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. General principles: Equilibrium: the forces acting on the plasma must be balanced. One exception is inertial confinement, where the fusion must occur faster than the dispersal time. Stability: the plasma must be constructed so that disturbances will not lead to the plasma dispersing. Transport or conduction: the loss of material must be sufficiently slow. The plasma carries energy off with it, so rapid loss of material will disrupt fusion. Material can be lost by transport into different regions or by conduction through a solid or liquid. To produce self-sustaining fusion, part of the energy released by the reaction must be used to heat new reactants and maintain the conditions for fusion. Magnetic confinement Magnetic mirror: if a particle follows a field line and enters a region of higher field strength, it can be reflected. Several devices apply this effect. The most famous were the magnetic mirror machines, a series of devices built at LLNL from the 1960s to the 1980s. Other examples include magnetic bottles and biconic cusps. Because the mirror machines were straight, they had some advantages over ring-shaped designs: the mirrors were easier to construct and maintain, and direct-conversion energy capture was easier to implement. Poor confinement led this approach to be abandoned, except in the polywell design. Magnetic loops Magnetic loops bend the field lines back on themselves, either in circles or, more commonly, in nested toroidal surfaces. The most highly developed systems of this type are the tokamak, the stellarator, and the reversed field pinch. Compact toroids, especially the field-reversed configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area. Inertial confinement Inertial confinement is the use of rapid implosion to heat and confine plasma. A shell surrounding the fuel is imploded using a direct laser blast (direct drive), a secondary x-ray blast (indirect drive), or heavy ion beams. The fuel must be compressed to about 30 times solid density with energetic beams. Direct drive can in principle be efficient, but insufficient uniformity has prevented success. Indirect drive uses beams to heat a shell, driving the shell to radiate x-rays, which then implode the pellet. The beams are commonly laser beams, but ion and electron beams have been investigated. Electrostatic confinement Electrostatic confinement fusion devices use electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Fusion rates in fusors are low because of competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage by generating the field using a non-neutral cloud. These include a plasma oscillating device, a magnetically shielded grid, a Penning trap, the polywell, and the F1 cathode driver concept.
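The confinement conditions described above are commonly summarized by the Lawson triple product n·T·τ_E, which for deuterium-tritium fuel must exceed roughly 3×10^21 keV·s/m³ for ignition. The following sketch checks an assumed, purely illustrative set of plasma parameters against that benchmark; the numbers are not drawn from any particular machine.

```python
# Lawson triple-product check for D-T fuel (illustrative parameter values).
TRIPLE_PRODUCT_THRESHOLD = 3.0e21   # keV * s / m^3, approximate D-T ignition requirement

def triple_product(density_m3: float, temperature_keV: float, tau_E_s: float) -> float:
    """Return n * T * tau_E for a plasma."""
    return density_m3 * temperature_keV * tau_E_s

# Assumed example: tokamak-like density and temperature, 3-second energy confinement time.
n = 1.0e20        # particles per cubic metre
T = 15.0          # keV, roughly the optimum range quoted in this article
tau = 3.0         # seconds

ntt = triple_product(n, T, tau)
verdict = "meets" if ntt >= TRIPLE_PRODUCT_THRESHOLD else "falls short of"
print(f"n*T*tau = {ntt:.2e} keV s/m^3, {verdict} the ~3e21 ignition benchmark")
```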
Fuels The fuels considered for fusion power have all been light elements like the isotopes of hydrogen: protium, deuterium, and tritium. The deuterium and helium-3 reaction requires helium-3, an isotope of helium so scarce on Earth that it would have to be mined extraterrestrially or produced by other nuclear reactions. Ultimately, researchers hope to adopt the protium–boron-11 reaction, because it does not directly produce neutrons, although side reactions can. Deuterium, tritium The easiest nuclear reaction, at the lowest energy, is D+T:
D + T → 4He (3.5 MeV) + n (14.1 MeV)
This reaction is common in research, industrial and military applications, usually as a neutron source. Deuterium is a naturally occurring isotope of hydrogen and is commonly available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the uranium enrichment process. Tritium is a natural isotope of hydrogen, but because it has a short half-life of 12.32 years, it is hard to find, store, and produce, and is expensive. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions:
6Li + n → T + 4He
7Li + n → T + 4He + n
The reactant neutron is supplied by the D-T fusion reaction shown above, which is also the reaction with the greatest energy yield. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic, but does not consume the neutron. Neutron multiplication reactions are required to replace the neutrons lost to absorption by other elements. Leading candidate neutron multiplication materials are beryllium and lead, but the 7Li reaction helps to keep the neutron population high. Natural lithium is mainly 7Li, which has a low tritium production cross section compared to 6Li, so most reactor designs use breeding blankets with enriched 6Li. Drawbacks commonly attributed to D-T fusion power include: The supply of neutrons results in neutron activation of the reactor materials. 80% of the resultant energy is carried off by neutrons, which limits the use of direct energy conversion. It requires the radioisotope tritium; tritium may leak from reactors, and some estimates suggest that this would represent a substantial environmental radioactivity release. The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of fission power reactors, posing problems for material design. After a series of D-T tests at JET, the vacuum vessel was sufficiently radioactive that it required remote handling for the year following the tests. In a production setting, the neutrons would react with lithium in the breeding blanket, composed of lithium ceramic pebbles or liquid lithium, yielding tritium. The energy of the neutrons ends up in the lithium, from which it would then be transferred to drive electrical production. The lithium blanket also protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, use lithium inside the reactor core as a design element. The plasma interacts directly with the lithium, preventing a problem known as "recycling". The advantage of this design was demonstrated in the Lithium Tokamak Experiment. Deuterium Fusing two deuterium nuclei is the second easiest fusion reaction. The reaction has two branches that occur with nearly equal probability:
D + D → T + p
D + D → 3He + n
This reaction is also common in research. The optimum energy to initiate this reaction is 15 keV, only slightly higher than that for the D-T reaction.
The first branch produces tritium, so that a D-D reactor is not tritium-free, even though it does not require an input of tritium or lithium. Unless the tritons are quickly removed, most of the tritium produced is burned in the reactor, which reduces the handling of tritium, with the disadvantage of producing more, and higher-energy, neutrons. The neutron from the second branch of the D-D reaction has an energy of only about 2.45 MeV, while the neutron from the D-T reaction has an energy of 14.1 MeV, so the D-T reaction results in greater isotope production and material damage. When the tritons are removed quickly while allowing the 3He to react, the fuel cycle is called "tritium-suppressed fusion". The removed tritium decays to 3He with a 12.32-year half-life. By recycling the 3He decay product into the reactor, the fusion reactor does not require materials resistant to fast neutrons. Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons would be only about 18%, so the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from lithium resources and a somewhat softer neutron spectrum. The disadvantage of D-D compared to D-T is that the energy confinement time (at a given pressure) must be 30 times longer and the power produced (at a given pressure and volume) is 68 times less. Assuming complete removal of tritium and recycling of 3He, only 6% of the fusion energy is carried by neutrons. Tritium-suppressed D-D fusion requires an energy confinement time 10 times longer than D-T and double the plasma temperature. Deuterium, helium-3 A second-generation approach to controlled fusion power involves combining helium-3 (3He) and deuterium (2H):
D + 3He → 4He + p
This reaction produces 4He and a high-energy proton. As with the p-11B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several pathways). In practice, D-D side reactions produce a significant number of neutrons, leaving p-11B as the preferred cycle for aneutronic fusion. Proton, boron-11 Both materials science problems and non-proliferation concerns are greatly diminished by aneutronic fusion. Theoretically, the most reactive aneutronic fuel is 3He. However, obtaining reasonable quantities of 3He implies large-scale extraterrestrial mining on the Moon or in the atmosphere of Uranus or Saturn. Therefore, the most promising candidate for such fusion is the reaction between readily available protium (i.e., a proton) and boron. Their fusion releases no neutrons, but produces energetic charged alpha (helium) particles whose energy can be converted directly to electrical power:
p + 11B → 3 4He
Side reactions are likely to yield neutrons that carry only about 0.1% of the power, which means that neutron scattering is not used for energy transfer and material activation is reduced several thousand-fold. The optimum temperature for this reaction, 123 keV, is nearly ten times higher than that for pure hydrogen reactions, and energy confinement must be 500 times better than that required for the D-T reaction. In addition, the power density is 2500 times lower than for D-T, although per unit mass of fuel this is still considerably higher than for fission reactors.
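The reaction energies quoted in this section follow directly from tabulated atomic masses via E = Δm·c². The sketch below recomputes them for the principal fuel cycles discussed above; the mass values are standard nuclide masses in unified atomic mass units.

```python
# Reaction Q-values from nuclide rest masses, E = delta_m * c^2.
U_TO_MEV = 931.494            # MeV per unified atomic mass unit

MASS_U = {                    # atomic masses in u (standard tabulated values)
    "p":   1.007825, "n":   1.008665,
    "D":   2.014102, "T":   3.016049,
    "He3": 3.016029, "He4": 4.002602,
    "B11": 11.009305,
}

def q_value(reactants, products):
    """Energy released (MeV) when `reactants` fuse into `products`."""
    dm = sum(MASS_U[s] for s in reactants) - sum(MASS_U[s] for s in products)
    return dm * U_TO_MEV

reactions = {
    "D + T   -> He4 + n": (["D", "T"],   ["He4", "n"]),
    "D + D   -> T + p":   (["D", "D"],   ["T", "p"]),
    "D + D   -> He3 + n": (["D", "D"],   ["He3", "n"]),
    "D + He3 -> He4 + p": (["D", "He3"], ["He4", "p"]),
    "p + B11 -> 3 He4":   (["p", "B11"], ["He4", "He4", "He4"]),
}

for label, (reagents, products) in reactions.items():
    print(f"{label}: Q = {q_value(reagents, products):5.2f} MeV")
# For D-T, momentum conservation gives the neutron about 14.1 MeV of the 17.6 MeV
# total, i.e. roughly 80% of the energy, consistent with the drawback noted above.
```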
Because the confinement properties of the tokamak and of laser pellet fusion are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the polywell and the dense plasma focus. In 2013, a research team led by Christine Labaune at École Polytechnique reported a new fusion-rate record for proton-boron fusion, with an estimated 80 million fusion reactions during a 1.5-nanosecond laser pulse, 100 times greater than reported in previous experiments. Material selection Structural material stability is a critical issue. Materials that can survive the high temperatures and neutron bombardment experienced in a fusion reactor are considered key to success. The principal issues are the conditions generated by the plasma, neutron degradation of wall surfaces, and the related issue of plasma-wall surface conditions. Reducing hydrogen permeability is seen as crucial to hydrogen recycling and to control of the tritium inventory. Materials with the lowest bulk hydrogen solubility and diffusivity provide the optimal candidates for stable barriers. A few pure metals, including tungsten and beryllium, and compounds such as carbides, dense oxides, and nitrides have been investigated. Research has highlighted that coating techniques for preparing well-adhered and defect-free barriers are of equal importance. The most attractive techniques are those in which an ad-layer is formed by oxidation alone. Alternative methods utilize specific gas environments with strong magnetic and electric fields. Assessment of barrier performance represents an additional challenge. Gas permeation through classical coated membranes continues to be the most reliable method of determining hydrogen permeation barrier (HPB) efficiency. In 2021, in response to increasing numbers of designs for fusion power reactors targeting 2040, the United Kingdom Atomic Energy Authority published the UK Fusion Materials Roadmap 2021–2040, identifying five priority areas, with an emphasis on tokamak-family reactors: novel materials to minimize the amount of activation in the structure of the fusion power plant; compounds that can be used within the power plant to optimise breeding of tritium fuel to sustain the fusion process; magnets and insulators that are resistant to irradiation from fusion reactions, especially under cryogenic conditions; structural materials able to retain their strength under neutron bombardment at high operating temperatures (over 550 °C); and engineering assurance for fusion materials, providing irradiated sample data and modelled predictions such that plant designers, operators and regulators can be confident that materials are suitable for use in future commercial power stations. Superconducting materials In a plasma that is embedded in a magnetic field (known as a magnetized plasma), the fusion rate scales as the magnetic field strength to the 4th power. For this reason, many fusion companies that rely on magnetic fields to control their plasma are trying to develop high-temperature superconducting devices. In 2021, SuperOx, a Russian and Japanese company, developed a new manufacturing process for making superconducting YBCO wire for fusion reactors. This new wire was shown to conduct between 700 and 2000 amperes per square millimeter. The company was able to produce 186 miles of wire in nine months.
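The fourth-power field scaling stated above is the quantitative reason the jump to high-temperature superconducting magnets matters so much. A minimal sketch of that scaling follows; the field values are purely illustrative assumptions, not parameters of any specific device.

```python
# Fusion power density scaling with magnetic field strength, P ~ B^4
# (at fixed plasma beta, as stated above). Field values are illustrative.
def relative_power_density(b_new: float, b_ref: float) -> float:
    """Power density of a device at field b_new relative to one at b_ref."""
    return (b_new / b_ref) ** 4

b_reference = 5.0    # tesla, assumed reference field
for b in (5.0, 8.0, 12.0):
    gain = relative_power_density(b, b_reference)
    print(f"B = {b:>4.1f} T  ->  {gain:6.1f}x the reference power density")
# Doubling the field gives roughly a sixteen-fold gain, which is why compact
# high-field designs pursue REBCO-type conductors such as the wire described above.
```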
Containment considerations Even on smaller production scales, the containment apparatus is blasted with matter and energy. Designs for plasma containment must consider: a heating and cooling cycle, with up to a 10 MW/m² thermal load; neutron radiation, which over time leads to neutron activation and embrittlement; high-energy ions leaving at tens to hundreds of electronvolts; alpha particles leaving at millions of electronvolts; electrons leaving at high energy; and light radiation (IR, visible, UV, X-ray). Depending on the approach, these effects may be higher or lower than in fission reactors. One estimate put the radiation at 100 times that of a typical pressurized water reactor. Depending on the approach, other considerations such as electrical conductivity, magnetic permeability, and mechanical strength also matter. Materials must also not end up as long-lived radioactive waste. Plasma-wall surface conditions For long-term use, each atom in the wall is expected to be hit by a neutron and displaced about 100 times before the material is replaced. High-energy neutron collisions with the atoms in the wall can also result in the absorption of neutrons, forming unstable isotopes of the atoms. When such an isotope decays, it may emit alpha particles, protons, or gamma rays. Alpha particles, once stabilized by capturing electrons, form helium atoms, which accumulate at grain boundaries and may result in swelling, blistering, or embrittlement of the material. Selection of materials Tungsten is widely regarded as the optimal material for plasma-facing components in next-generation fusion devices due to its unique properties and potential for enhancements. Its low sputtering rates and high melting point make it particularly suitable for the high-stress environments of fusion reactors, allowing it to withstand intense conditions without rapid degradation. Additionally, tungsten's low tritium retention through co-deposition and implantation is essential in fusion contexts, as it helps to minimize the accumulation of this radioactive isotope. Liquid metals (lithium, gallium, tin) have also been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s over solid substrates. Graphite has a gross erosion rate, due to physical and chemical sputtering, amounting to many meters per year, requiring redeposition of the sputtered material. The redeposition site generally does not exactly match the sputter site, allowing net erosion that may be prohibitive. An even larger problem is that tritium is redeposited along with the redeposited graphite. The tritium inventory in the wall and dust could build up to many kilograms, representing a waste of resources and a radiological hazard in case of an accident. Graphite found favor as a material for short-lived experiments, but appears unlikely to become the primary plasma-facing material (PFM) in a commercial reactor. Ceramic materials such as silicon carbide (SiC) have issues similar to graphite's. Tritium retention in silicon carbide plasma-facing components is approximately 1.5–2 times higher than in graphite, resulting in reduced fuel efficiency and heightened safety risks in fusion reactors. SiC tends to trap more tritium, limiting its availability for fusion and increasing the risk of hazardous accumulation, complicating tritium management. Furthermore, the chemical and physical sputtering of SiC remains significant, contributing to tritium buildup through co-deposition over time and with increasing particle fluence. As a result, carbon-based materials have been excluded from ITER, DEMO, and similar devices.
Tungsten's sputtering rate is orders of magnitude smaller than carbon's, and tritium is much less readily incorporated into redeposited tungsten. However, tungsten plasma impurities are much more damaging than carbon impurities, and self-sputtering can be high, requiring that the plasma in contact with the tungsten not be too hot (a few tens of eV rather than hundreds of eV). Tungsten also has issues around eddy currents and melting in off-normal events, as well as some radiological issues. Recent research on containment materials has found that certain ceramics can improve the longevity of the containment apparatus. Studies on MAX phases, such as titanium silicon carbide, show that under the high operating temperatures of nuclear fusion the material undergoes a phase transformation from a hexagonal structure to a face-centered-cubic (FCC) structure, driven by helium bubble growth. Helium atoms preferentially accumulate in the Si layer of the hexagonal structure, as the Si atoms are more mobile than the Ti-C slabs. As more atoms are trapped, the Ti-C slab is peeled off, causing the Si atoms to become highly mobile interstitial atoms in the new FCC structure. Lattice strain induced by the He bubbles causes Si atoms to diffuse out of compressive areas, typically towards the surface of the material, forming a protective silicon dioxide layer. Doping vessel materials with iron silicate has also emerged as a promising approach to enhancing containment materials in fusion reactors. This method targets helium embrittlement at grain boundaries, a common issue that arises as helium atoms accumulate and form bubbles. Over time, these bubbles coalesce at grain boundaries, causing them to expand and degrade the material's structural integrity. By contrast, introducing iron silicate creates nucleation sites within the metal matrix that are more thermodynamically favorable for helium aggregation. This localized congregation around iron silicate nanoparticles induces matrix strain rather than weakening grain boundaries, preserving the material's strength and longevity. Safety and the environment Accident potential Accident potential and effects on the environment are critical to social acceptance of nuclear fusion, also known as a social license. Fusion reactors are not subject to catastrophic meltdown: producing net energy requires precise and controlled temperature, pressure and magnetic field parameters, and any damage or loss of the required control would rapidly quench the reaction. Fusion reactors operate with only seconds' or even microseconds' worth of fuel at any moment; without active refueling, the reactions immediately quench. The same constraints prevent runaway reactions. Although the plasma is expected to be large in volume, it typically contains only a few grams of fuel. By comparison, a fission reactor is typically loaded with enough fuel for months or years, and no additional fuel is necessary to continue the reaction. This large fuel supply is what makes a meltdown possible. In magnetic containment, strong fields develop in coils that are mechanically held in place by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to other industrial accidents or the quench of an MRI machine magnet, and could be effectively contained within a containment building similar to those used in fission reactors.
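The magnet-failure scenario just described can be put in rough perspective by estimating the energy stored in a large superconducting coil set, E = ½LI², and expressing it in TNT equivalent. The inductance and current below are illustrative assumptions, not the parameters of any specific reactor, and the comparison ignores how and where the energy would actually be released.

```python
# Rough scale of the energy stored in a large superconducting magnet system.
# E = 0.5 * L * I^2; the inductance and current are assumed, illustrative values.
TNT_JOULES_PER_TON = 4.184e9

def stored_energy_joules(inductance_henries: float, current_amperes: float) -> float:
    return 0.5 * inductance_henries * current_amperes ** 2

L_coil = 20.0        # henries (assumed)
I_coil = 60_000.0    # amperes (assumed)

E = stored_energy_joules(L_coil, I_coil)
print(f"Stored energy ~ {E / 1e9:.0f} GJ "
      f"(~{E / TNT_JOULES_PER_TON:.0f} tons of TNT equivalent)")
# Tens of gigajoules released mechanically or as heat is a serious industrial
# hazard, but it is bounded and local, unlike a fission meltdown.
```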
In laser-driven inertial containment, the larger size of the reaction chamber reduces the stress on materials. Although failure of the reaction chamber is possible, stopping fuel delivery prevents catastrophic failure. Most reactor designs rely on liquid hydrogen as a coolant and to convert stray neutrons into tritium, which is fed back into the reactor as fuel. Hydrogen is flammable, and it is possible that hydrogen stored on-site could ignite. In this case, the tritium fraction of the hydrogen would enter the atmosphere, posing a radiation risk. Calculations suggest that only a small quantity of tritium and other radioactive gases would be present in a typical power station, small enough that it would dilute to legally acceptable limits by the time it reached the station's perimeter fence. The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, is estimated to be minor compared to fission. Such accidents would include accidental releases of lithium or tritium or mishandling of radioactive reactor components. Magnet quench A magnet quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil exits the superconducting state (becomes normal). This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two. More rarely, a magnet defect can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal over several seconds, depending on the size of the superconducting coil. This is accompanied by a loud bang as the energy in the magnetic field is converted to heat and the cryogenic fluid boils away. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces. In practice, magnets usually have safety devices to stop or limit the current when a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air. A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, destroying multiple magnets. In order to prevent a recurrence, the LHC's superconducting magnets are equipped with fast-ramping heaters that are activated when a quench event is detected. The dipole bending magnets are connected in series, and each power circuit includes 154 individual magnets; should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into massive blocks of metal that heat up to several hundred degrees Celsius within seconds because of resistive heating. A magnet quench is a "fairly routine event" during the operation of a particle accelerator. Effluents The natural product of the fusion reaction is a small amount of helium, which is harmless to life. Hazardous tritium, however, is difficult to retain completely. Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, because of tritium's short half-life (12.32 years) and very low decay energy (~14.95 keV), and because it does not bioaccumulate (it cycles out of the body as water, with a biological half-life of 7 to 14 days). ITER incorporates total containment facilities for tritium.
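The radiological argument above rests on two time scales: tritium's 12.32-year radioactive half-life and its 7 to 14 day biological half-life in the body. A short sketch combining them through the standard relation 1/T_eff = 1/T_rad + 1/T_bio shows that clearance from the body, not radioactive decay, dominates the dose from an ingested release; the decay fractions below use only the half-life quoted above.

```python
# Effective half-life of tritium in the body: 1/T_eff = 1/T_rad + 1/T_bio.
T_RAD_DAYS = 12.32 * 365.25        # radioactive half-life, in days
for t_bio_days in (7.0, 14.0):     # biological half-life range quoted above
    t_eff = 1.0 / (1.0 / T_RAD_DAYS + 1.0 / t_bio_days)
    print(f"T_bio = {t_bio_days:4.1f} d  ->  T_eff = {t_eff:5.2f} d")

# Fraction of a tritium inventory (outside the body) still undecayed after a
# given number of years, from the 12.32-year radioactive half-life alone.
for years in (12.32, 50, 100):
    print(f"after {years:6.2f} y: {0.5 ** (years / 12.32):.3f} remains")
```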
Radioactive waste Fusion reactors create far less radioactive material than fission reactors. Further, the material they create is less damaging biologically, and the radioactivity dissipates within a time period that is well within existing engineering capabilities for safe long-term waste storage. In specific terms, except in the case of aneutronic fusion, the neutron flux turns the structural materials radioactive. The amount of radioactive material at shut-down may be comparable to that of a fission reactor, with important differences. The half-lives of fusion and neutron-activation radioisotopes tend to be shorter than those from fission, so the hazard decreases more rapidly. Whereas fission reactors produce waste that remains radioactive for thousands of years, the radioactive material in a fusion reactor (other than tritium) would be the reactor core itself, most of which would be radioactive for about 50 years, with other low-level waste remaining radioactive for another 100 years or so thereafter. The fusion waste's short half-life largely eliminates the challenge of long-term storage. By 500 years, the material would have the same radiotoxicity as coal ash. Nonetheless, classification as intermediate-level waste rather than low-level waste may complicate safety discussions. The choice of materials is less constrained than in conventional fission, where many materials are required for their specific neutron cross sections. Fusion reactors can be designed using "low-activation" materials, which do not easily become radioactive. Vanadium, for example, becomes much less radioactive than stainless steel. Carbon-fiber materials are also low-activation, strong and light, and are promising for laser-inertial reactors, where a magnetic field is not required. Nuclear proliferation In some scenarios, fusion power technology could be adapted to produce materials for military purposes. A huge amount of tritium could be produced by a fusion power station; tritium is used in the trigger of hydrogen bombs and in modern boosted fission weapons, but it can be produced in other ways. The energetic neutrons from a fusion reactor could be used to breed weapons-grade plutonium or uranium for an atomic bomb (for example by transmutation of 238U to 239Pu, or of 232Th to 233U). A study conducted in 2011 assessed three scenarios: Small-scale fusion station: as a result of much higher power consumption, heat dissipation and a more recognizable design compared to enrichment gas centrifuges, this choice would be much easier to detect and therefore implausible. Commercial facility: the production potential is significant, but no fertile or fissile substances necessary for the production of weapon-usable materials need to be present at a civil fusion system at all. If not shielded, these materials can be detected by their characteristic gamma radiation, and the underlying redesign could be detected by regular design-information verification.
In the (technically more feasible) case of solid breeder blanket modules, incoming components would need to be inspected for the presence of fertile material, otherwise plutonium for several weapons could be produced each year. Prioritizing weapon-grade material regardless of secrecy: the fastest way to produce weapon-usable material was judged to be modifying a civil fusion power station. No weapons-compatible material is required during civil use. Even without the need for covert action, such a modification would take about two months to start production and at least an additional week to generate a significant amount. This was considered to be enough time to detect a military use and to react with diplomatic or military means. To stop the production, military destruction of parts of the facility, while sparing the reactor itself, would be sufficient. Another study concluded that "...large fusion reactors—even if not designed for fissile material breeding—could easily produce several hundred kg Pu per year with high weapon quality and very low source material requirements." It was emphasized that the implementation of features for intrinsic proliferation resistance might only be possible at an early phase of research and development. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with magnetic confinement fusion. Fuel reserves Fusion power commonly proposes the use of deuterium as fuel, and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (= 1 × 10^20 J/yr), and that this does not increase in the future, which is unlikely, known current lithium reserves would last 3000 years. Lithium from sea water would last 60 million years, however, and a more complicated fusion process using only deuterium would have fuel for 150 billion years. To put this in context, 150 billion years is close to 30 times the remaining lifespan of the Sun, and more than 10 times the estimated age of the universe. Economics The EU spent heavily on fusion research through the 1990s. ITER represents an investment of over twenty billion dollars, and possibly tens of billions more, including in-kind contributions. Under the European Union's Sixth Framework Programme, nuclear fusion research received more funding (in addition to ITER funding) than sustainable energy research, putting research into fusion power well ahead of that of any single rival technology. The United States Department of Energy has allocated $US367M–$US671M every year since 2010, peaking in 2020, with plans to reduce investment to $US425M in its FY2021 Budget Request. About a quarter of this budget is directed to support ITER. The size of the investments and the length of the time lines meant that fusion research was traditionally almost exclusively publicly funded. However, starting in the 2010s, the promise of commercializing a paradigm-changing low-carbon energy source began to attract a raft of companies and investors. Over two dozen start-up companies attracted over one billion dollars from roughly 2000 to 2020, mainly from 2015 onward, and a further three billion dollars in funding and milestone-related commitments in 2021, with investors including Jeff Bezos, Peter Thiel and Bill Gates, as well as institutional investors including Legal & General, and energy companies including Equinor, Eni, Chevron, and the Chinese ENN Group.
In 2021, Commonwealth Fusion Systems (CFS) obtained $1.8 billion in scale-up funding, and Helion Energy obtained half a billion dollars with an additional $1.7 billion contingent on meeting milestones. Scenarios developed in the 2000s and early 2010s discussed the effects of the commercialization of fusion power on the future of human civilization. Using nuclear fission as a guide, these saw ITER and later DEMO as bringing the first commercial reactors online around 2050, with rapid expansion after mid-century. Some scenarios emphasized "fusion nuclear science facilities" as a step beyond ITER. However, the economic obstacles to tokamak-based fusion power remain immense, requiring investment to fund prototype tokamak reactors and the development of new supply chains, a problem which will affect any kind of fusion reactor. Tokamak designs appear to be labour-intensive, while the commercialization risk of alternatives like inertial fusion energy is high due to the lack of government resources. Scenarios since 2010 note computing and material-science advances enabling multi-phase national or cost-sharing "Fusion Pilot Plants" (FPPs) along various technology pathways, such as the UK Spherical Tokamak for Energy Production, within the 2030–2040 time frame. Notably, in June 2021, General Fusion announced it would accept the UK government's offer to host the world's first substantial public-private partnership fusion demonstration plant, at Culham Centre for Fusion Energy. The plant is to be constructed from 2022 to 2025 and is intended to lead the way for commercial pilot plants in the late 2020s. The plant will be 70% of full scale and is expected to attain a stable plasma of 150 million degrees. In the United States, cost-sharing public-private partnership FPPs appear likely, and in 2022 the DOE announced a new Milestone-Based Fusion Development Program as the centerpiece of its Bold Decadal Vision for Commercial Fusion Energy, which envisages private sector-led teams delivering FPP pre-conceptual designs, defining technology roadmaps, and pursuing the R&D necessary to resolve critical-path scientific and technical issues towards an FPP design. Compact reactor technology based on such demonstration plants may enable commercialization via a fleet approach from the 2030s, if early markets can be located. The widespread adoption of non-nuclear renewable energy has transformed the energy landscape; such renewables are projected to supply 74% of global energy by 2050. The steady fall of renewable energy prices challenges the economic competitiveness of fusion power. Some economists suggest fusion power is unlikely to match other renewable energy costs. Fusion plants are expected to face large start-up and capital costs, and operation and maintenance are likely to be costly. While the costs of the China Fusion Engineering Test Reactor are not well known, an EU DEMO fusion concept was projected to have a levelized cost of energy (LCOE) of $121/MWh. Fuel costs are low, but economists suggest that the energy cost for a one-gigawatt plant would increase by $16.5 per MWh for every $1 billion increase in the capital investment in construction. There is also the risk that easily obtained lithium will be used up making batteries; obtaining it from seawater would be very costly and might require more energy than the energy that would be generated.
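The capital-cost sensitivity just quoted (roughly $16.5/MWh of additional levelized cost per extra $1 billion of construction capital for a one-gigawatt plant, on top of the ~$121/MWh DEMO-concept estimate) can be turned into a one-line sensitivity sketch. The capital overrun values below are purely illustrative.

```python
# Levelized-cost sensitivity to construction capital, using the figures quoted
# above: a ~$121/MWh baseline and ~$16.5/MWh per extra $1B of capital (1 GW plant).
BASELINE_LCOE = 121.0          # $/MWh, EU DEMO concept estimate cited above
SENSITIVITY = 16.5             # $/MWh per additional $1B of construction capital

def lcoe_estimate(extra_capital_billions: float) -> float:
    return BASELINE_LCOE + SENSITIVITY * extra_capital_billions

for overrun in (0, 2, 5, 10):  # illustrative capital overruns, in $ billions
    print(f"+${overrun:>2}B capital -> ~${lcoe_estimate(overrun):6.1f}/MWh")
```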
In contrast, renewable levelized cost of energy estimates are substantially lower. For instance, the 2019 levelized cost of energy of solar power was estimated to be $40–$46/MWh, onshore wind at $29–$56/MWh, and offshore wind at approximately $92/MWh. However, fusion power may still have a role filling energy gaps left by renewables, depending on how administration priorities for energy and environmental justice influence the market. In the 2020s, socioeconomic studies of fusion that began to consider these factors emerged, and in 2022 EUROfusion launched its Socio-Economic Studies and Prospective Research and Development strands to investigate how such factors might affect commercialization pathways and timetables. Similarly, in April 2023 Japan announced a national strategy to industrialise fusion. Thus, fusion power may work in tandem with other renewable energy sources rather than becoming the primary energy source. In some applications, fusion power could provide the base load, especially if it includes integrated thermal storage and cogeneration and considering the potential for retrofitting coal plants. Regulation As fusion pilot plants move within reach, legal and regulatory issues must be addressed. In September 2020, the United States National Academy of Sciences consulted with private fusion companies to consider a national pilot plant. The following month, the United States Department of Energy, the Nuclear Regulatory Commission (NRC) and the Fusion Industry Association co-hosted a public forum to begin the process. In November 2020, the International Atomic Energy Agency (IAEA) began working with various nations to create safety standards, such as dose regulations and radioactive waste handling. In January and March 2021, the NRC hosted two public meetings on regulatory frameworks. A public-private cost-sharing approach was endorsed in the 27 December H.R.133 Consolidated Appropriations Act, 2021, which authorized $325 million over five years for a partnership program to build fusion demonstration facilities, with a 100% match from private industry. Subsequently, the UK Regulatory Horizons Council published a report calling for a fusion regulatory framework by early 2022 in order to position the UK as a global leader in commercializing fusion power. This call was met by the UK government publishing in October 2021 both its Fusion Green Paper and its Fusion Strategy, to regulate and commercialize fusion respectively. In April 2023, in a decision likely to influence other nuclear regulators, the NRC announced in a unanimous vote that fusion energy would be regulated not as fission but under the same regulatory regime as particle accelerators. In October 2023 the UK government, in enacting the Energy Act 2023, made the UK the first country to legislate for fusion separately from fission, in order to support planning and investment, including the UK's planned prototype fusion power plant for 2040, STEP; the UK is working with Canada and Japan in this regard. Meanwhile, in February 2024 the US House of Representatives passed the Atomic Energy Advancement Act, which includes the Fusion Energy Act establishing a regulatory framework for fusion energy systems. Geopolitics Given the potential of fusion to transform the world's energy industry and to mitigate climate change, fusion science has traditionally been seen as an integral part of peace-building science diplomacy.
However, technological developments and private-sector involvement have raised concerns over intellectual property, regulatory administration, global leadership, equity, and potential weaponization. These concerns challenge ITER's peace-building role and have led to calls for a global commission. A significant contribution from fusion power to climate change mitigation by 2050 seems unlikely without substantial breakthroughs and a space-race mentality emerging, but a contribution by 2100 appears possible, with the extent depending on the type and particularly the cost of the technology pathways. Developments from late 2020 onwards have led to talk of a "new space race" with multiple entrants, pitting the US against China and the UK's STEP FPP, with China now outspending the US and threatening to leapfrog US technology. On 24 September 2020, the United States House of Representatives approved a research and commercialization program. The Fusion Energy Research section incorporated a milestone-based, cost-sharing, public-private partnership program modeled on NASA's COTS program, which launched the commercial space industry. In February 2021, the National Academies published Bringing Fusion to the U.S. Grid, recommending a market-driven, cost-sharing plant for 2035–2040, and the launch of the Congressional Bipartisan Fusion Caucus followed. In December 2020, an independent expert panel reviewed EUROfusion's design and R&D work on DEMO, and EUROfusion confirmed it was proceeding with its Roadmap to Fusion Energy, beginning the conceptual design of DEMO in partnership with the European fusion community, suggesting that an EU-backed machine had entered the race. In October 2023, the UK-oriented Agile Nations group announced a fusion working group. One month later, the UK and the US announced a bilateral partnership to accelerate fusion energy. In December 2023, at COP28, the US announced a global strategy to commercialize fusion energy. In April 2024, Japan and the US announced a similar partnership, and in May of the same year the G7 announced a G7 Working Group on Fusion Energy to promote international collaborations to accelerate the development of commercial fusion energy, promote R&D between countries, and rationalize fusion regulation. Later the same year, the US partnered with the IAEA to launch the Fusion Energy Solutions Taskforce, to collaboratively crowdsource ideas to accelerate commercial fusion energy, in line with the US COP28 statement. Specifically to resolve the tritium supply problem, in February 2024 the UK (UKAEA) and Canada (Canadian Nuclear Laboratories) announced an agreement by which Canada could refurbish its tritium-generating CANDU (Canada deuterium uranium) heavy-water nuclear plants and even build new ones, guaranteeing a supply of tritium into the 2070s, while the UKAEA would test breeder materials and simulate how tritium could be captured, purified, and injected back into the fusion reaction. In 2024, both South Korea and Japan announced major initiatives to accelerate their national fusion strategies by building electricity-generating public-private fusion plants in the 2030s, aiming to begin operations in the 2040s and the 2030s respectively. Advantages Fusion power promises to provide more energy for a given weight of fuel than any fuel-consuming energy source currently in use. The fuel (primarily deuterium) exists abundantly in the ocean: about 1 in 6500 hydrogen atoms in seawater is deuterium. Although this is only about 0.015%, seawater is plentiful and easy to access, implying that fusion could supply the world's energy needs for millions of years.
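The two claims above, that fusion offers more energy per unit fuel mass than other fuel-consuming sources and that deuterium is abundant in seawater, can both be checked with back-of-the-envelope arithmetic. In the sketch below, the D-T figure follows from the 17.6 MeV reaction energy quoted earlier, while the fission and coal heating values are rounded, commonly cited comparison figures rather than values taken from this article.

```python
# Energy per kilogram of D-T fuel versus other fuels, and deuterium content of seawater.
MEV_TO_J = 1.602e-13
U_TO_KG = 1.660539e-27

# D-T releases ~17.6 MeV per reaction and consumes ~5 u of fuel (one D plus one T).
e_per_kg_dt = 17.6 * MEV_TO_J / (5.03 * U_TO_KG)          # joules per kg of D-T mix
print(f"D-T fusion:        ~{e_per_kg_dt:.1e} J/kg")
print("U-235 fission:     ~8e13 J/kg   (rounded, commonly cited)")
print("coal combustion:   ~3e7  J/kg   (rounded, commonly cited)")

# Deuterium in seawater, from the 1-in-6500 hydrogen-atom abundance quoted above.
water_mol_per_m3 = 1.0e6 / 18.015            # ~55,500 mol of H2O per cubic metre
d_mol_per_m3 = 2 * water_mol_per_m3 / 6500   # two hydrogen atoms per water molecule
print(f"deuterium content: ~{d_mol_per_m3 * 2.014:.0f} g per cubic metre of water")
```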
First-generation fusion plants are expected to use the deuterium-tritium fuel cycle. This will require the use of lithium for breeding of the tritium. It is not known for how long global lithium supplies will suffice to supply this need as well as those of the battery and metallurgical industries. It is expected that second-generation plants will move on to the more formidable deuterium-deuterium reaction. The deuterium-helium-3 reaction is also of interest, but the light helium isotope is practically non-existent on Earth; it is thought to exist in useful quantities in the lunar regolith, and it is abundant in the atmospheres of the gas giant planets. Fusion power could also be used for so-called "deep space" propulsion within the solar system and for interstellar space exploration where solar energy is not available, including via antimatter-fusion hybrid drives. Helium production Deuterium–tritium fusion produces helium as a by-product. Disadvantages Fusion power has a number of disadvantages. Because 80 percent of the energy in any reactor fueled by deuterium and tritium appears in the form of neutron streams, such reactors share many of the drawbacks of fission reactors, including the production of large quantities of radioactive waste and serious radiation damage to reactor components. Additionally, naturally occurring tritium is extremely rare. While the hope is that fusion reactors can breed their own tritium, tritium self-sufficiency is extremely challenging, not least because tritium is difficult to contain (tritium has leaked from 48 of 65 nuclear sites in the US). In any case, the reserve and start-up tritium inventory requirements are likely to be unacceptably large. If reactors can be made to operate using only deuterium fuel, then the tritium replenishment issue is eliminated and neutron radiation damage may be reduced. However, the probabilities of deuterium-deuterium reactions are about 20 times lower than for deuterium-tritium, and the temperature needed is about 3 times higher than for deuterium-tritium (see cross section). The higher temperatures and lower reaction rates thus significantly complicate the engineering challenges. In any case, other drawbacks remain; for instance, reactors requiring only deuterium fueling will have greatly enhanced nuclear-weapons proliferation potential. History Early experiments The first machine to achieve controlled thermonuclear fusion was a pinch machine at Los Alamos National Laboratory called Scylla I, at the start of 1958. The team that achieved it was led by a British scientist named James Tuck and included a young Marshall Rosenbluth. Tuck had been involved in the Manhattan Project, but had switched to working on fusion in the early 1950s. He applied for funding for the project as part of a White House-sponsored contest to develop a fusion reactor, along with Lyman Spitzer. The previous year, 1957, the British had claimed that they had achieved thermonuclear fusion reactions on the ZETA pinch machine. However, it turned out that the neutrons they had detected were from beam-target interactions, not fusion, and they withdrew the claim. Scylla I was a classified machine at the time, so the achievement was hidden from the public.
A traditional Z-pinch passes a current down the center of a plasma, which creates a magnetic force around the outside that squeezes the plasma to fusion conditions. Scylla I was a θ-pinch, which used deuterium and passed a current around the outside of its cylinder to create a magnetic force in the center. After the success of Scylla I, Los Alamos went on to build multiple pinch machines over the next few years. Spitzer continued his stellarator research at Princeton. While fusion did not immediately transpire, the effort led to the creation of the Princeton Plasma Physics Laboratory. First tokamak In the early 1950s, Soviet physicists I.E. Tamm and A.D. Sakharov developed the concept of the tokamak, combining a low-power pinch device with a low-power stellarator. A.D. Sakharov's group constructed the first tokamaks, achieving the first quasistationary fusion reaction. Over time, the "advanced tokamak" concept emerged, which included non-circular plasma, internal divertors and limiters, superconducting magnets, operation in the "H-mode" island of increased stability, and the compact tokamak, with the magnets on the inside of the vacuum chamber. First inertial confinement experiments Laser fusion was suggested in 1962 by scientists at Lawrence Livermore National Laboratory (LLNL), shortly after the invention of the laser in 1960. Inertial confinement fusion experiments using lasers began as early as 1965. Several laser systems were built at LLNL, including the Argus, the Cyclops, the Janus, the Long Path, the Shiva laser, and the Nova. Laser advances included frequency-tripling crystals that transformed infrared laser beams into ultraviolet beams, and "chirping", which changed a single wavelength into a full spectrum that could be amplified and then reconstituted into one frequency. Laser research cost over one billion dollars in the 1980s. 1980s The Tore Supra, JET, T-15, and JT-60 tokamaks were built in the 1980s. In 1984, Martin Peng of ORNL proposed the spherical tokamak, with a much smaller radius. It used a single large conductor in the center, with magnets as half-rings off this conductor. The aspect ratio fell to as low as 1.2. Peng's advocacy caught the interest of Derek Robinson, who built the Small Tight Aspect Ratio Tokamak (START). 1990s In 1991, the Preliminary Tritium Experiment at the Joint European Torus achieved the world's first controlled release of fusion power. In 1996, Tore Supra created a plasma for two minutes with a current of almost 1 million amperes, totaling 280 MJ of injected and extracted energy. In 1997, JET produced a peak of 16.1 MW of fusion power (65% of the heating power delivered to the plasma), with fusion power of over 10 MW sustained for over 0.5 seconds. 2000s "Fast ignition" saved power and moved ICF into the race for energy production. In 2006, China's Experimental Advanced Superconducting Tokamak (EAST) test reactor was completed. It was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields. In March 2009, the laser-driven ICF facility NIF became operational. In the 2000s, privately backed fusion companies entered the race, including TAE Technologies, General Fusion, and Tokamak Energy. 2010s Private and public research accelerated in the 2010s. General Fusion developed plasma injector technology and Tri Alpha Energy tested its C-2U device. The French Laser Mégajoule began operation. In 2013, NIF achieved net energy gain as defined in the very limited sense of the hot spot at the core of the collapsed target, rather than the whole target.
In 2014, Phoenix Nuclear Labs sold a high-yield neutron generator that could sustain 5×10^11 deuterium fusion reactions per second over a 24-hour period. In 2015, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium copper oxide (REBCO) superconducting tapes to produce high-magnetic-field coils that it claimed could produce a comparable magnetic field strength in a smaller configuration than other designs. In October, researchers at the Max Planck Institute of Plasma Physics in Greifswald, Germany, completed building the largest stellarator to date, the Wendelstein 7-X (W7-X). The W7-X stellarator began Operational phase 1 (OP1.1) on 10 December 2015, successfully producing helium plasma. The objective was to test vital systems and understand the machine's physics. By February 2016, hydrogen plasma was achieved, with temperatures reaching up to 100 million kelvin. The initial tests used five graphite limiters. After over 2,000 pulses and the achievement of significant milestones, OP1.1 concluded on 10 March 2016. An upgrade followed, and OP1.2, in 2017, aimed to test an uncooled divertor. By June 2018, record temperatures were reached. W7-X concluded its first campaigns with limiter and island-divertor tests, achieving notable advancements by the end of 2018. It soon produced helium and hydrogen plasmas lasting up to 30 minutes. In 2017, Helion Energy's fifth-generation plasma machine went into operation. The UK's Tokamak Energy ST40 generated "first plasma". The next year, Eni announced a $50 million investment in Commonwealth Fusion Systems to attempt to commercialize MIT's ARC technology. 2020s In January 2021, SuperOx announced the commercialization of a new superconducting wire with a current capability of more than 700 A/mm². TAE Technologies announced results for its Norman device, holding a temperature of about 60 MK for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices. In October, Oxford-based First Light Fusion revealed its projectile fusion project, which fires an aluminum disc at a fusion target, accelerated by a 9-mega-amp electrical pulse to very high velocity. The resulting fusion generates neutrons whose energy is captured as heat. On November 8, in an invited talk to the 63rd Annual Meeting of the APS Division of Plasma Physics, the National Ignition Facility claimed to have triggered fusion ignition in the laboratory on August 8, 2021, for the first time in the 60+ year history of the ICF program. The shot yielded 1.3 MJ of fusion energy, an improvement of more than 8 times over tests done in the spring of 2021. NIF estimated that 230 kJ of energy reached the fuel capsule, which resulted in an almost 6-fold energy output from the capsule. A researcher from Imperial College London stated that the majority of the field agreed that ignition had been demonstrated. In November 2021, Helion Energy reported receiving $500 million in Series E funding for its seventh-generation Polaris device, designed to demonstrate net electricity production, with an additional $1.7 billion of commitments tied to specific milestones, while Commonwealth Fusion Systems raised an additional $1.8 billion in Series B funding to construct and operate its SPARC tokamak, the single largest investment in any private fusion company. In April 2022, First Light announced that its hypersonic projectile fusion prototype had produced neutrons compatible with fusion. The technique electromagnetically fires projectiles at Mach 19 at a caged fuel pellet.
The deuterium fuel is compressed at Mach 204, reaching pressure levels of 100 TPa. On December 13, 2022, the US Department of Energy reported that researchers at the National Ignition Facility had achieved a net energy gain from a fusion reaction. The reaction of hydrogen fuel at the facility produced about 3.15 MJ of energy while consuming 2.05 MJ of input. However, while the fusion reactions may have produced more than 3 megajoules of energy, more than was delivered to the target, NIF's 192 lasers consumed 322 MJ of grid energy in the conversion process. In May 2023, the United States Department of Energy (DOE) provided a grant of $46 million to eight companies across seven states to support fusion power plant design and research efforts. This funding, under the Milestone-Based Fusion Development Program, aligns with objectives to demonstrate pilot-scale fusion within a decade and to develop fusion as a carbon-neutral energy source by 2050. The granted companies are tasked with addressing the scientific and technical challenges of creating viable fusion pilot plant designs in the next 5–10 years. The recipient firms are Commonwealth Fusion Systems, Focused Energy Inc., Princeton Stellarators Inc., Realta Fusion Inc., Tokamak Energy Inc., Type One Energy Group, Xcimer Energy Inc., and Zap Energy Inc. In December 2023, the largest and most advanced tokamak, JT-60SA, was inaugurated in Naka, Japan. The reactor is a joint project between Japan and the European Union; it had achieved its first plasma in October 2023. Subsequently, South Korea's fusion reactor project, the Korea Superconducting Tokamak Advanced Research (KSTAR), operated for 102 seconds in high-confinement mode (H-mode) with ion temperatures of more than 100 million degrees in plasma tests conducted from December 2023 to February 2024. In January 2025, the EAST fusion reactor in China was reported to have maintained steady-state high-confinement plasma operation for 1066 seconds. Future development Claims of commercially viable fusion power being relatively imminent have often attracted ridicule within the scientific community. A common joke is that human-engineered fusion has always been promised as being 30 years away since the concept was first discussed, or that it has been "20 years away for 50 years". In 2024, Commonwealth Fusion Systems announced plans to build the world's first grid-scale commercial nuclear fusion power plant at the James River Industrial Center in Chesterfield County, Virginia, which is part of the Greater Richmond Region; the plant is designed to produce about 400 MW of electric power and is intended to come online in the early 2030s. Records Fusion records continue to advance.
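One frequently cited record is the December 2022 NIF shot described above. The following arithmetic distinguishes the scientific (target) gain from the wall-plug picture, using only the figures quoted in this article:

```python
# Gain bookkeeping for the December 2022 NIF shot, using the figures cited above.
fusion_yield_MJ = 3.15      # fusion energy produced
laser_on_target_MJ = 2.05   # laser energy delivered to the target
grid_energy_MJ = 322.0      # electrical energy drawn by the 192 laser beamlines

target_gain = fusion_yield_MJ / laser_on_target_MJ
wall_plug_ratio = fusion_yield_MJ / grid_energy_MJ
print(f"target gain     Q ~ {target_gain:.2f}   (>1: scientific breakeven)")
print(f"wall-plug ratio   ~ {wall_plug_ratio:.3f} (about 1%, far from engineering breakeven)")
```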
Slaughterhouse
In livestock agriculture and the meat industry, a slaughterhouse, also called an abattoir, is a facility where livestock animals are slaughtered to provide food. Slaughterhouses supply meat, which then becomes the responsibility of a meat-packing facility. Slaughterhouses that produce meat not intended for human consumption are sometimes referred to as knacker's yards or knackeries; this is where animals are slaughtered that are not fit for human consumption or that can no longer work on a farm, such as retired work horses. Slaughtering animals on a large scale poses significant issues in terms of logistics, animal welfare, and the environment, and the process must meet public health requirements. Due to public aversion in different cultures, determining where to build slaughterhouses is also a matter of some consideration. Frequently, animal rights groups raise concerns about the methods of transport to and from slaughterhouses, preparation prior to slaughter, animal herding, stunning methods, and the killing itself. History Until modern times, the slaughter of animals generally took place in a haphazard and unregulated manner in diverse places. Early maps of London show numerous stockyards on the periphery of the city, where slaughter occurred in the open air or under cover, such as in wet markets. A term for such open-air slaughterhouses was shambles, and there are streets named "The Shambles" in some English and Irish towns (e.g., Worcester, York, Bandon) which got their name from having been the site on which butchers killed and prepared animals for consumption. Fishamble Street, Dublin was formerly a fish-shambles. Sheffield had 183 slaughterhouses in 1910, and it was estimated that there were 20,000 in England and Wales. Reform movement The slaughterhouse emerged as a coherent institution in the 19th century. A combination of health and social concerns, exacerbated by the rapid urbanisation experienced during the Industrial Revolution, led social reformers to call for the isolation, sequestering and regulation of animal slaughter. As well as the concerns raised regarding hygiene and disease, there were also criticisms of the practice on the grounds of the effect that killing had, both on the butchers and on the observers, which "educate[d] the men in the practice of violence and cruelty, so that they seem to have no restraint on the use of it." An additional motivation for eliminating private slaughter was to impose a careful system of regulation on the "morally dangerous" task of putting animals to death. As a result of this tension, meat markets within the city were closed and abattoirs built outside city limits. An early framework for the establishment of public slaughterhouses was put in place in Paris in 1810, under the reign of the Emperor Napoleon. Five areas were set aside on the outskirts of the city, and the feudal privileges of the guilds were curtailed. As the meat requirements of the growing number of residents in London steadily expanded, the meat markets both within the city and beyond attracted increasing levels of public disapproval. Meat had been traded at Smithfield Market as early as the 10th century. By 1726, it was regarded by Daniel Defoe as "without question, the greatest in the world". By the middle of the 19th century, in the course of a single year 220,000 head of cattle and 1,500,000 sheep would be "violently forced into an area of five acres, in the very heart of London, through its narrowest and most crowded thoroughfares".
By the early 19th century, pamphlets were being circulated arguing in favor of the removal of the livestock market and its relocation outside of the city due to the extremely low hygienic conditions as well as the brutal treatment of the cattle. In 1843, the Farmer's Magazine published a petition signed by bankers, salesmen, aldermen, butchers and local residents against the expansion of the livestock market. The Town Police Clauses Act 1847 created a licensing and registration system, though few slaughter houses were closed. An Act of Parliament was eventually passed in 1852. Under its provisions, a new cattle-market was constructed in Copenhagen Fields, Islington. The new Metropolitan Cattle Market was also opened in 1855, and West Smithfield was left as waste ground for about a decade, until the construction of the new market began in the 1860s under the authority of the 1860 Metropolitan Meat and Poultry Market Act. The market was designed by architect Sir Horace Jones and was completed in 1868. A cut and cover railway tunnel was constructed beneath the market to create a triangular junction with the railway between Blackfriars and King's Cross. This allowed animals to be transported into the slaughterhouse by train and the subsequent transfer of animal carcasses to the Cold Store building, or direct to the meat market via lifts. At the same time, the first large and centralized slaughterhouse in Paris was constructed in 1867 under the orders of Napoleon III at the Parc de la Villette and heavily influenced the subsequent development of the institution throughout Europe. Regulation and expansion These slaughterhouses were regulated by law to ensure good standards of hygiene, the prevention of the spread of disease and the minimization of needless animal cruelty. The slaughterhouse had to be equipped with a specialized water supply system to effectively clean the operating area of blood and offal. Veterinary scientists, notably George Fleming and John Gamgee, campaigned for stringent levels of inspection to ensure that epizootics such as rinderpest (a devastating outbreak of the disease covered all of Britain in 1865) would not be able to spread. By 1874, three meat inspectors were appointed for the London area, and the Public Health Act 1875 required local authorities to provide central slaughterhouses (they were only given powers to close unsanitary slaughterhouses in 1890). Yet the appointment of slaughterhouse inspectors and the establishment of centralised abattoirs took place much earlier in the British colonies, such as the colonies of New South Wales and Victoria, and in Scotland where 80% of cattle were slaughtered in public abattoirs by 1930. In Victoria the Melbourne Abattoirs Act 1850 (NSW) "confined the slaughtering of animals to prescribed public abattoirs, while at the same time prohibiting the killing of sheep, lamb, pigs or goats at any other place within the city limits". Animals were shipped alive to British ports from Ireland, from Europe and from the colonies and slaughtered in large abattoirs at the ports. Conditions were often very poor. Attempts were also made throughout the British Empire to reform the practice of slaughter itself, as the methods used came under increasing criticism for causing undue pain to the animals. The eminent physician, Benjamin Ward Richardson, spent many years in developing more humane methods of slaughter. 
He brought no fewer than fourteen possible anesthetics into use in the slaughterhouse and even experimented with the use of electric current at the Royal Polytechnic Institution. As early as 1853, he designed a lethal chamber that would gas animals to death relatively painlessly, and he founded the Model Abattoir Society in 1882 to investigate and campaign for humane methods of slaughter. The invention of refrigeration and the expansion of transportation networks by sea and rail allowed for the safe exportation of meat around the world. Additionally, meat-packing millionaire Philip Danforth Armour's invention of the "disassembly line" greatly increased the productivity and profit margin of the meat packing industry: "according to some, animal slaughtering became the first mass-production industry in the United States." This expansion has been accompanied by increased concern about the physical and mental conditions of the workers along with controversy over the ethical and environmental implications of slaughtering animals for meat. The Edinburgh abattoir, which was built in 1910, had well-lit laboratories, hot and cold water, gas, microscopes and equipment for cultivating organisms. The English 1924 Public Health (Meat) Regulations required notification of slaughter to enable inspection of carcasses and enabled inspected carcasses to be marked. The development of slaughterhouses was linked with the industrial expansion of by-products. By 1932 the British by-product industry was worth about £97 million a year, employing 310,000 people. The Aberdeen slaughterhouse sent hooves to Lancashire to make glue, intestines to Glasgow for sausages and hides to the Midland tanneries. In January 1940 the British government took over the 16,000 slaughterhouses and by 1942 there were only 779. Design In the latter part of the 20th century, the layout and design of most U.S. slaughterhouses was influenced by the work of Temple Grandin. She suggested that reducing the stress of animals being led to slaughter may help slaughterhouse operators improve efficiency and profit. In particular, she applied an understanding of animal psychology to design pens and corrals which funnel a herd of animals arriving at a slaughterhouse into a single file ready for slaughter. Her corrals employ long sweeping curves so that each animal is prevented from seeing what lies ahead and just concentrates on the hind quarters of the animal in front of it. This design – along with the design elements of solid sides, a solid crowd gate, and reduced noise at the end point – works to encourage animals forward in the chute and to keep them from reversing direction. Mobile design Beginning in 2008, the Local Infrastructure for Local Agriculture, a non-profit committed to revitalizing opportunities for "small farmers and strengthening the connection between local supply and demand", constructed a mobile slaughterhouse facility so that small farmers could process meat quickly and cost-effectively. Named the Modular Harvest System, or M.H.S., it received USDA approval in 2010. The M.H.S. consists of three separate trailers: one for slaughtering, one for consumable body parts, and one for other body parts. Preparation of individual cuts is done at a butchery or other meat preparation facility. International variations The standards and regulations governing slaughterhouses vary considerably around the world. In many countries the slaughter of animals is regulated by custom and tradition rather than by law.
In the non-Western world, including the Arab world, the Indian sub-continent, etc., two forms of meat are available: one produced in modern mechanized slaughterhouses and the other from local butcher shops. In some communities, animal slaughter and permitted species may be controlled by religious laws, most notably halal for Muslims and kashrut for Jewish communities. In some Western countries this can cause conflicts with national regulations when a slaughterhouse adheres to the rules of religious preparation. In Jewish law, captive bolts and other methods of pre-slaughter paralysis are generally not permissible, because an animal may not be stunned prior to slaughter. Various halal food authorities have more recently permitted the use of a fail-safe system of head-only stunning in which the shock is non-fatal and the animal can be revived after the procedure. The use of electronarcosis and other methods of dulling the senses has been approved by the Egyptian Fatwa Committee. This allows these communities to continue their religious slaughter techniques while remaining in accordance with national regulations. In some societies, traditional cultural and religious aversion to slaughter led to prejudice against the people involved. In Japan, where the ban on slaughter of livestock for food was lifted in the late 19th century, the newly founded slaughter industry drew workers primarily from villages of burakumin, who traditionally worked in occupations relating to death (such as executioners and undertakers). In some parts of western Japan, prejudice faced by current and former residents of such areas (burakumin "hamlet people") is still a sensitive issue. Because of this, even the Japanese word for "slaughter" (屠殺 tosatsu) is deemed politically incorrect by some pressure groups as its inclusion of the kanji for "kill" (殺) supposedly portrays those who practise it in a negative manner. Some countries have laws that exclude specific animal species or grades of animal from being slaughtered for human consumption, especially those that are taboo food. The former Indian Prime Minister Atal Bihari Vajpayee suggested in 2004 introducing legislation banning the slaughter of cows throughout India, as Hinduism holds cows as sacred and considers their slaughter unthinkable and offensive. This was often opposed on grounds of religious freedom. The slaughter of cows and the importation of beef into the nation of Nepal are strictly forbidden. Freezing works Refrigeration technology allowed meat from the slaughterhouse to be preserved for longer periods. This led to the concept of the slaughterhouse as a freezing works. Prior to this, canning was an option. Freezing works are common in New Zealand, Australia and South Africa. In countries where meat is exported for a substantial profit, freezing works were built near docks or near transport infrastructure. Mobile poultry processing units (MPPUs) follow the same principles, but typically require only one trailer and, in much of the United States, may legally operate under USDA exemptions not available to red meat processors. Several MPPUs have been in operation since before 2010, under various models of operation and ownership. Law Most countries have laws regarding the treatment of animals in slaughterhouses.
In the United States, there is the Humane Slaughter Act of 1958, a law requiring that all swine, sheep, cattle, and horses be stunned unconscious with application of a stunning device by a trained person before being hoisted up on the line. There is some debate over the enforcement of this act. This act, like those in many countries, exempts slaughter in accordance with religious law, such as kosher shechita and dhabiha halal. Most strict interpretations of kashrut require that the animal be fully sensible when its carotid artery is cut. The novel The Jungle presented a fictionalized account of unsanitary conditions in slaughterhouses and the meatpacking industry during the early 1900s. This led to an investigation commissioned directly by President Theodore Roosevelt, and to the passage of the Meat Inspection Act and the Pure Food and Drug Act of 1906, which established the Food and Drug Administration. A much larger body of regulation deals with public health and worker safety regulation and inspection. Animal welfare concerns In 1997, Gail Eisnitz, chief investigator for the Humane Farming Association (HFA), released the book Slaughterhouse. It includes interviews with slaughterhouse workers in the U.S. who say that, because of the speed with which they are required to work, animals are routinely skinned while apparently alive and still blinking, kicking and shrieking. Eisnitz argues that this is not only cruel to the animals but also dangerous for the human workers, as cows weighing several thousand pounds thrashing around in pain are likely to kick out and debilitate anyone working near them. This would imply that certain slaughterhouses throughout the country are not following the guidelines and regulations spelled out by the Humane Slaughter Act, which requires that all animals be rendered insensible to pain, typically by electronarcosis, before being subjected to any form of violent action. According to the HFA, Eisnitz interviewed slaughterhouse workers representing over two million hours of experience, who, without exception, told her that they have beaten, strangled, boiled and dismembered animals alive or have failed to report those who do. The workers described the effects the violence has had on their personal lives, with several admitting to being physically abusive or taking to alcohol and other drugs. The HFA alleges that workers are required to kill up to 1,100 hogs an hour and end up taking their frustration out on the animals. Eisnitz also interviewed one worker about pig production who had worked in ten slaughterhouses. Animal rights activists, anti-speciesists, vegetarians and vegans are prominent critics of slaughterhouses and have created events such as the march to close all slaughterhouses to voice concerns about the conditions in slaughterhouses and ask for their abolition. Some have argued that humane animal slaughter is impossible. Worker exploitation concerns Slaughterhouse mortality impact American slaughterhouse workers are three times more likely to suffer serious injury than the average American worker. NPR reports that pig and cattle slaughterhouse workers are nearly seven times more likely to suffer repetitive strain injuries than average. The Guardian reports that on average there are two amputations a week involving slaughterhouse workers in the United States. On average, one employee of Tyson Foods, the largest meat producer in America, suffers an injury resulting in the amputation of a finger or limb each month.
The Bureau of Investigative Journalism reported that over a period of six years in the UK, 78 slaughterhouse workers lost fingers, parts of fingers or limbs, more than 800 workers had serious injuries, and at least 4,500 had to take more than three days off after accidents. According to a 2018 study in the Italian Journal of Food Safety, slaughterhouse workers are instructed to wear ear protectors to protect their hearing from the loud noise in the facility. A 2004 study in the Journal of Occupational and Environmental Medicine found that "excess risks were observed for mortality from all causes, all cancers, and lung cancer" in workers employed in the New Zealand meat processing industry. Psychological impact Slaughterhouse workers have a higher prevalence of mental health distress, including anxiety, detachment, depression, emotional numbing, perpetrator trauma, psychosocial distress and PTSD, as well as violence-supportive attitudes and increased crime levels. Slaughterhouse workers adopt both adaptive and maladaptive strategies to cope with the workplace environment and its associated stressors. Working in slaughterhouses often leads to significant psychological trauma. A 2016 study in Organization indicates, "Regression analyses of data from 10,605 Danish workers across 44 occupations suggest that slaughterhouse workers consistently experience lower physical and psychological well-being along with increased incidences of negative coping behavior." A 2009 study by criminologist Amy Fitzgerald indicates, "slaughterhouse employment increases total arrest rates, arrests for violent crimes, arrests for rape, and arrests for other sex offenses in comparison with other industries." As authors from the PTSD Journal explain, "These employees are hired to kill animals, such as pigs and cows that are largely gentle creatures. Carrying out this action requires workers to disconnect from what they are doing and from the creature standing before them. This emotional dissonance can lead to consequences such as domestic violence, social withdrawal, anxiety, drug and alcohol abuse, and PTSD." Working conditions Starting in the 1980s, Cargill, Conagra Brands, Tyson Foods and other large food companies moved most slaughterhouse operations to rural areas of the Southern United States, which were more hostile to unionization efforts. Slaughterhouses in the United States commonly and illegally employ and exploit underage workers and undocumented immigrants. In 2010, Human Rights Watch described slaughterhouse line work in the United States as a human rights crime. A report by Oxfam America observed that slaughterhouse workers were not allowed breaks, were often required to wear diapers, and were paid below the minimum wage.
Technology
Animal husbandry
null
55115
https://en.wikipedia.org/wiki/Cabbage
Cabbage
Cabbage, comprising several cultivars of Brassica oleracea, is a leafy green, red (purple), or white (pale green) biennial plant grown as an annual vegetable crop for its dense-leaved heads. It is descended from the wild cabbage (B. oleracea var. oleracea), and belongs to the "cole crops" or brassicas, meaning it is closely related to broccoli and cauliflower (var. botrytis); Brussels sprouts (var. gemmifera); and Savoy cabbage (var. sabauda). A cabbage generally weighs between . Smooth-leafed, firm-headed green cabbages are the most common, with smooth-leafed purple cabbages and crinkle-leafed savoy cabbages of both colours being rarer. Under conditions of long sunny days, such as those found at high northern latitudes in summer, cabbages can grow quite large. , the heaviest cabbage was . Cabbage heads are generally picked during the first year of the plant's life cycle, but plants intended for seed are allowed to grow a second year and must be kept separate from other cole crops to prevent cross-pollination. Cabbage is prone to several nutrient deficiencies, as well as to multiple pests, and bacterial and fungal diseases. Cabbage was most likely domesticated somewhere in Europe in ancient history before 1000 BC. Cabbage use in cuisine has been documented since Antiquity. It was described as a table luxury in the Roman Empire. By the Middle Ages, cabbage had become a prominent part of European cuisine, as indicated by manuscript illuminations. New varieties were introduced from the Renaissance onward, mostly by Germanic-speaking peoples. Savoy cabbage was developed in the 16th century. By the 17th and 18th centuries, cabbage was popularised as a staple food in central, northern, and Eastern Europe. It was also employed by European sailors to prevent scurvy during long ship voyages at sea. Starting in the early modern era, cabbage was exported to the Americas, Asia, and around the world. Cabbage can be prepared in many different ways for eating: it can be pickled, fermented (for dishes such as sauerkraut and kimchi), steamed, stewed, roasted, sautéed, braised, or eaten raw. Raw cabbage is a rich source of vitamin K, vitamin C, and dietary fiber. World production of cabbage and other brassicas in 2020 was 71 million tonnes, led by China with 48% of the total. Description Cabbage seedlings have a thin taproot and cordate (heart-shaped) cotyledons. The first leaves produced are ovate (egg-shaped) with a lobed petiole. Plants are tall in their first year at the mature vegetative stage, and tall when flowering in the second year. Heads average between , with fast-growing, earlier-maturing varieties producing smaller heads. Most cabbages have thick, alternating leaves, with margins that range from wavy or lobed to highly dissected; some varieties have a waxy bloom on the leaves. Plants have root systems that are fibrous and shallow. About 90% of the root mass is in the upper of soil; some lateral roots can penetrate up to deep. The inflorescence is an unbranched and indeterminate terminal raceme measuring tall, with flowers that are yellow or white. Each flower has four petals set in a perpendicular pattern, as well as four sepals, six stamens, and a superior ovary that is two-celled and contains a single stigma and style. Two of the six stamens have shorter filaments. The fruit is a silique that opens at maturity through dehiscence to reveal brown or black seeds that are small and round in shape. Self-pollination is impossible, and plants are cross-pollinated by insects.
The initial leaves form a rosette shape comprising 7 to 15 leaves, each measuring by ; after this, leaves with shorter petioles develop and heads form through the leaves cupping inward. Many shapes, colors and leaf textures are found in various cultivated varieties of cabbage. Leaf types are generally divided between crinkled-leaf, loose-head savoys and smooth-leaf firm-head cabbages, while the color spectrum includes white and a range of greens and purples. Oblate, round and pointed shapes are found. Cabbage has been selectively bred for head weight and morphological characteristics, frost hardiness, fast growth and storage ability. The appearance of the cabbage head has been given importance in selective breeding, with varieties being chosen for shape, color, firmness and other physical characteristics. Breeding objectives are now focused on increasing resistance to various insects and diseases and improving the nutritional content of cabbage. Scientific research into the genetic modification of B. oleracea crops, including cabbage, has included European Union and United States explorations of greater insect and herbicide resistance. There are several Guinness Book of World Records entries related to cabbage. These include the heaviest cabbage, at , heaviest red cabbage, at , longest cabbage roll, at , and the largest cabbage dish, at . Taxonomy Cabbage (Brassica oleracea or B. oleracea var. capitata, var. tuba, var. sabauda or var. acephala) is a member of the genus Brassica and the mustard family Brassicaceae. Several other cruciferous vegetables (sometimes known as cole crops) are cultivars of B. oleracea, including broccoli, collard greens, brussels sprouts, kohlrabi and sprouting broccoli. All of these developed from the wild cabbage B. oleracea var. oleracea, also called colewort or field cabbage. This original species evolved over thousands of years into those seen today, as selection resulted in cultivars having different characteristics, such as large heads for cabbage, large leaves for kale and thick stems with flower buds for broccoli. "Cabbage" was originally used to refer to multiple forms of B. oleracea, including those with loose or non-existent heads. A related species, Brassica rapa, is commonly named Chinese, napa or celery cabbage, and has many of the same uses. It is also a part of common names for several unrelated species. These include cabbage bark or cabbage tree (a member of the genus Andira) and cabbage palms, which include several genera of palms such as Mauritia, Roystonea oleracea, Acrocomia and Euterpe oenocarpus. Etymology The original family name of brassicas was Cruciferae, which derived from the flower petal pattern thought by medieval Europeans to resemble a crucifix. The word brassica derives from bresic, a Celtic word for cabbage. The varietal epithet capitata is derived from the Latin word for 'having a head'. Many European and Asiatic names for cabbage are derived from the Celto-Slavic root cap or kap, meaning "head". The late Middle English word cabbage derives from the word caboche ("head"), from the Picard dialect of Old French. This in turn is a variant of the Old French caboce. Cultivation History Although cabbage has an extensive history, it is difficult to trace its exact origins owing to the many varieties of leafy greens classified as "brassicas". 
A possible wild ancestor of cabbage, Brassica oleracea, originally found in Britain and continental Europe, is tolerant of salt but not encroachment by other plants and consequently inhabits rocky cliffs in cool damp coastal habitats, retaining water and nutrients in its slightly thickened, turgid leaves. However, genetic analysis is consistent with feral origin of this population, deriving from plants escaped from field and gardens. According to the triangle of U theory of the evolution and relationships between Brassica species, B. oleracea and other closely related kale vegetables (cabbages, kale, broccoli, Brussels sprouts, and cauliflower) represent one of three ancestral lines from which all other brassicas originated. Cabbage was probably domesticated later in history than Near Eastern crops such as lentils and summer wheat. Because of the wide range of crops developed from the wild B. oleracea, multiple broadly contemporaneous domestications of cabbage may have occurred throughout Europe. Nonheading cabbages and kale were probably the first to be domesticated, before 1000 BC, perhaps by the Celts of central and western Europe, although recent linguistic and genetic evidence enforces a Mediterranean origin of cultivated brassicas. While unidentified brassicas were part of the highly conservative unchanging Mesopotamian garden repertory, it is believed that the ancient Egyptians did not cultivate cabbage, which is not native to the Nile valley, though the word shaw't in Papyrus Harris of the time of Ramesses III has been interpreted as "cabbage". The ancient Greeks had some varieties of cabbage, as mentioned by Theophrastus, although whether they were more closely related to today's cabbage or to one of the other Brassica crops is unknown. The headed cabbage variety was known to the Greeks as krambe and to the Romans as brassica or olus; the open, leafy variety (kale) was known in Greek as raphanos and in Latin as caulis. Ptolemaic Egyptians knew the cole crops as gramb, under the influence of Greek krambe, which had been a familiar plant to the Macedonian antecedents of the Ptolemies. By early Roman times, Egyptian artisans and children were eating cabbage and turnips among a wide variety of other vegetables and pulses. Chrysippus of Cnidos wrote a treatise on cabbage, which Pliny knew, but it has not survived. The Greeks were convinced that cabbages and grapevines were inimical, and that cabbage planted too near the vine would impart its unwelcome odor to the grapes; this Mediterranean sense of antipathy survives today. Brassica was considered by some Romans a table luxury, although Lucullus considered it unfit for the senatorial table. The more traditionalist Cato the Elder, espousing a simple Republican life, ate his cabbage cooked or raw and dressed with vinegar; he said it surpassed all other vegetables, and approvingly distinguished three varieties; he also gave directions for its medicinal use, which extended to the cabbage-eater's urine, in which infants might be rinsed. Pliny the Elder listed seven varieties, including Pompeii cabbage, Cumae cabbage and Sabellian cabbage. According to Pliny, the Pompeii cabbage, which could not stand cold, is "taller, and has a thick stock near the root, but grows thicker between the leaves, these being scantier and narrower, but their tenderness is a valuable quality". The Pompeii cabbage was also mentioned by Columella in De Re Rustica. Apicius gives several recipes for cauliculi, tender cabbage shoots. 
The Greeks and Romans claimed medicinal usages for their cabbage varieties that included relief from gout, headaches and the symptoms of poisonous mushroom ingestion. The antipathy towards the vine made it seem that eating cabbage would enable one to avoid drunkenness. Cabbage continued to figure in the materia medica of antiquity as well as at table: in the first century AD Dioscorides mentions two kinds of coleworts with medical uses, the cultivated and the wild, and his opinions continued to be paraphrased in herbals right through the 17th century. At the end of Antiquity cabbage is mentioned in De observatione ciborum ("On the Observance of Foods") by Anthimus, a Greek doctor at the court of Theodoric the Great. Cabbage appears among vegetables directed to be cultivated in the Capitulare de villis, composed in 771–800 AD, that guided the governance of the royal estates of Charlemagne. In Britain, the Anglo-Saxons cultivated cawel. When round-headed cabbages appeared in 14th-century England they were called cabaches and caboches, words drawn from Old French and applied at first to refer to the ball of unopened leaves, the contemporaneous recipe that commences "Take cabbages and quarter them, and seethe them in good broth", also suggests the tightly headed cabbage. Manuscript illuminations show the prominence of cabbage in the cuisine of the High Middle Ages, and cabbage seeds feature among the seed list of purchases for the use of King John II of France when captive in England in 1360, but cabbages were also a familiar staple of the poor: in the lean year of 1420 the "Bourgeois of Paris" noted that "poor people ate no bread, nothing but cabbages and turnips and such dishes, without any bread or salt". French naturalist Jean Ruel made what is considered the first explicit mention of head cabbage in his 1536 botanical treatise De Natura Stirpium, referring to it as capucos coles ("head-coles"). In Istanbul, Sultan Selim III penned a tongue-in-cheek ode to cabbage: without cabbage, the halva feast was not complete. In India, cabbage was one of several vegetable crops introduced by colonizing traders from Portugal, who established trade routes from the 14th to 17th centuries. Carl Peter Thunberg reported that cabbage was not yet known in Japan in 1775. Many cabbage varieties—including some still commonly grown—were introduced in Germany, France, and the Low Countries. During the 16th century, German gardeners developed the savoy cabbage. During the 17th and 18th centuries, cabbage was a food staple in such countries as Germany, England, Ireland and Russia, and pickled cabbage was frequently eaten. Sauerkraut was used by Dutch, Scandinavian and German sailors to prevent scurvy during long ship voyages. Jacques Cartier first brought cabbage to the Americas in 1541–42, and it was probably planted by the early English colonists, despite the lack of written evidence of its existence there until the mid-17th century. By the 18th century, it was commonly planted by both colonists and native American Indians. Cabbage seeds traveled to Australia in 1788 with the First Fleet, and were planted the same year on Norfolk Island. It became a favorite vegetable of Australians by the 1830s and was frequently seen at the Sydney Markets. In Brno, Czech Republic there is an open-air market named after cabbage which has been in operation since 1325, the Zelný trh. Modern cultivation Cabbage is generally grown for its densely leaved heads, produced during the first year of its biennial cycle. 
Plants perform best when grown in well-drained soil in a location that receives full sun. Different varieties prefer different soil types, ranging from lighter sand to heavier clay, but all prefer fertile ground with a pH between 6.0 and 6.8. For optimal growth, there must be adequate levels of nitrogen in the soil, especially during the early head formation stage, and sufficient phosphorus and potassium during the early stages of expansion of the outer leaves. Temperatures between prompt the best growth, and extended periods of higher or lower temperatures may result in premature bolting (flowering). Flowering induced by periods of low temperatures (a process called vernalization) only occurs if the plant is past the juvenile period. The transition from a juvenile to adult state happens when the stem diameter is about . Vernalization allows the plant to grow to an adequate size before flowering. In certain climates, cabbage can be planted at the beginning of the cold period and survive until a later warm period without being induced to flower, a practice that was common in the eastern US. Plants are generally started in protected locations early in the growing season before being transplanted outside, although some are seeded directly into the ground from which they will be harvested. Seedlings typically emerge in about 4–6 days from seeds planted deep at a soil temperature between . Growers normally place plants apart. Closer spacing reduces the resources available to each plant (especially the amount of light) and increases the time taken to reach maturity. Some varieties of cabbage have been developed for ornamental use; these are generally called "flowering cabbage". They do not produce heads and feature purple or green outer leaves surrounding an inner grouping of smaller leaves in white, red, or pink. Early varieties of cabbage take about 70 days from planting to reach maturity, while late varieties take about 120 days. Cabbages are mature when they are firm and solid to the touch. They are harvested by cutting the stalk just below the bottom leaves with a blade. The outer leaves are trimmed, and any diseased, damaged, or necrotic leaves are removed. Delays in harvest can result in the head splitting as a result of expansion of the inner leaves and continued stem growth. When being grown for seed, cabbages must be isolated from other B. oleracea subspecies, including the wild varieties, by to prevent cross-pollination. Other Brassica species, such as B. rapa, B. juncea, B. nigra, B. napus and Raphanus sativus, do not readily cross-pollinate. Cultivars There are several cultivar groups of cabbage, each including many cultivars: Savoy – Characterized by crimped or curly leaves, mild flavor and tender texture Spring greens (Brassica oleracea) – Loose-headed, commonly sliced and steamed Green – Light to dark green, slightly pointed heads. Red – Smooth red leaves, often used for pickling or stewing White, also called Dutch – Smooth, pale green leaves Some sources only delineate three cultivars: savoy, red and white, with spring greens and green cabbage being subsumed under the last. Cultivation problems Due to its high level of nutrient requirements, cabbage is prone to nutrient deficiencies, including boron, calcium, phosphorus and potassium. There are several physiological disorders that can affect the postharvest appearance of cabbage. Internal tip burn occurs when the margins of inside leaves turn brown, but the outer leaves look normal. 
Necrotic spot is where there are oval sunken spots a few millimeters across that are often grouped around the midrib. In pepper spot, tiny black spots occur on the areas between the veins, which can increase during storage. Fungal diseases include wirestem, which causes weak or dying transplants; Fusarium yellows, which result in stunted and twisted plants with yellow leaves; and blackleg (see Leptosphaeria maculans), which leads to sunken areas on stems and gray-brown spotted leaves. The fungi Alternaria brassicae and A. brassicicola cause dark leaf spots in affected plants. They are both seedborne and airborne, and typically propagate from spores in infected plant debris left on the soil surface for up to twelve weeks after harvest. Rhizoctonia solani causes the post-emergence disease wirestem, resulting in killed seedlings ("damping-off"), root rot or stunted growth and smaller heads. One of the most common bacterial diseases to affect cabbage is black rot, caused by Xanthomonas campestris, which causes chlorotic and necrotic lesions that start at the leaf margins, and wilting of plants. Clubroot, caused by the soilborne slime mold-like organism Plasmodiophora brassicae, results in swollen, club-like roots. Downy mildew, a parasitic disease caused by the oomycete Peronospora parasitica, produces pale leaves with white, brownish or olive mildew on the lower leaf surfaces; this is often confused with the fungal disease powdery mildew. Pests include root-knot nematodes and cabbage maggots, which produce stunted and wilted plants with yellow leaves; aphids, which induce stunted plants with curled and yellow leaves; harlequin cabbage bugs, which cause white and yellow leaves; thrips, which lead to leaves with white-bronze spots; striped flea beetles, which riddle leaves with small holes; and caterpillars, which leave behind large, ragged holes in leaves. The caterpillar stage of the "small cabbage white butterfly" (Pieris rapae), commonly known in the United States as the "imported cabbage worm", is a major cabbage pest in most countries. The large white butterfly (Pieris brassicae) is prevalent in eastern European countries. The diamondback moth (Plutella xylostella) and the cabbage moth (Mamestra brassicae) thrive in the higher summer temperatures of continental Europe, where they cause considerable damage to cabbage crops. The mustard leaf beetle (Phaedon cochleariae), is a common pest of cabbage plants. The mustard leaf beetle will often choose to feed on cabbage over their natural host plants as cabbage is more abundant in palatable compounds such as glucosinolates that encourage higher consumption. The cabbage looper (Trichoplusia ni) is infamous in North America for its voracious appetite and for producing frass that contaminates plants. In India, the diamondback moth has caused losses up to 90 percent in crops that were not treated with insecticide. Destructive soil insects such as the cabbage root fly (Delia radicum) has larvae can burrow into the part of plant consumed by humans. Planting near other members of the cabbage family, or where these plants have been placed in previous years, can prompt the spread of pests and disease. Excessive water and excessive heat can also cause cultivation problems. Factors that contribute to reduced head weight include: growth in the compacted soils that result from no-till farming practices, drought, waterlogging, insect and disease incidence, and shading and nutrient stress caused by weeds. 
Production In 2020, world production of cabbages (combined with other brassicas) was 71 million tonnes, led by China with 48% of the world total (table). Other substantial producers were India, Russia, and South Korea. Toxicity When overcooked, toxic hydrogen sulfide gas is produced. Excessive consumption of cabbage may lead to increased intestinal gas which causes bloating and flatulence due to the trisaccharide raffinose, which the human small intestine cannot digest, but is digested by bacteria in the large intestine. Cabbage has been linked to outbreaks of some food-borne illnesses, including Listeria monocytogenes and Clostridium botulinum. The latter toxin has been traced to pre-made, packaged coleslaw mixes, while the spores were found on whole cabbages that were otherwise acceptable in appearance. Shigella species are able to survive in shredded cabbage. Two outbreaks of E. coli in the United States have been linked to cabbage consumption. Biological risk assessments have concluded that there is the potential for further outbreaks linked to uncooked cabbage, due to contamination at many stages of the growing, harvesting and packaging processes. Contaminants from water, humans, animals and soil have the potential to be transferred to cabbage, and from there to the end consumer. Whilst not a toxic vegetable in its natural state, an increase in intestinal gas can lead to the death of many small animals like rabbits due to gastrointestinal stasis. Cabbage and other cruciferous vegetables contain small amounts of thiocyanate, a compound associated with goiter formation when iodine intake is deficient. Uses Culinary The characteristic flavor of cabbage is caused by glucosinolates, a class of sulfur-containing glucosides. Although found throughout the plant, these compounds are concentrated in the highest quantities in the seeds; lesser quantities are found in young vegetative tissue, and they decrease as the tissue ages. Cooked cabbage is often criticized for its pungent, unpleasant odor and taste. These develop when cabbage is overcooked and hydrogen sulfide gas is produced. Cabbage consumption varies widely around the world: Russia has the highest annual per capita consumption at , followed by Belgium at and the Netherlands at . Americans consume annually per capita. Nutrition Raw cabbage is 92% water, 6% carbohydrates, 1% protein, and contains negligible fat. In a 100-gram reference amount, raw cabbage is a rich source of vitamin C and vitamin K, containing 44% and 72%, respectively, of the Daily Value (DV). Cabbage is also a moderate source (10–19% DV) of vitamin B6 and folate, with no other nutrients having significant content per 100-gram serving. Local market and storage Cabbages sold for market are generally smaller, and different varieties are used for those sold immediately upon harvest and those stored before sale. Those used for processing, especially sauerkraut, are larger and have a lower percentage of water. Both hand and mechanical harvesting are used, and hand-harvesting is generally used for cabbages destined for market sales. In commercial-scale operations, hand-harvested cabbages are trimmed, sorted, and packed directly in the field to increase efficiency. Vacuum cooling rapidly refrigerates the vegetable, allowing for earlier shipping and a fresher product. Cabbage can be stored the longest at with a humidity of 90–100%; these conditions will result in up to six months of longevity. When stored under less ideal conditions, cabbage can still last up to four months. 
Food preparation Cabbage is prepared and consumed in many ways. The simplest options include eating the vegetable raw or steaming it, though many cuisines pickle, stew, sautée or braise cabbage. Pickling is a common way of preserving cabbage, creating dishes such as sauerkraut and kimchi, although kimchi is more often made from Napa cabbage. Savoy cabbages are usually used in salads, while smooth-leaf types are utilized for both market sales and processing. Tofu and cabbage is a staple of Chinese cooking, while the British dish bubble and squeak is made primarily with leftover potato and boiled cabbage and eaten with cold meat. In Poland, cabbage is one of the main food crops, and it features prominently in Polish cuisine. It is frequently eaten, either cooked or as sauerkraut, as a side dish or as an ingredient in such dishes as bigos (cabbage, sauerkraut, meat, and wild mushrooms, among other ingredients) gołąbki (stuffed cabbage) and pierogi (filled dumplings). Other eastern European countries, such as Hungary and Romania, also have traditional dishes that feature cabbage as a main ingredient. In India and Ethiopia, cabbage is often included in spicy salads and braises. In the United States, cabbage is used primarily for the production of coleslaw, followed by market use and sauerkraut production. Phytochemicals Basic research on cabbage phytochemicals is ongoing to discern if certain cabbage compounds may affect health or have potential for anti-disease effects, such as sulforaphane and other glucosinolates. Studies on cruciferous vegetables, including cabbage, include whether they may lower the risk against colon cancer. Cabbage is a source of indole-3-carbinol, a chemical under basic research for its possible properties. Herbalism In addition to its usual purpose as an edible vegetable, cabbage has been used historically in herbalism. The Ancient Greeks recommended consuming the vegetable as a laxative, and used cabbage juice as an antidote for mushroom poisoning, for eye salves, and for liniments for bruises. The ancient Roman, Pliny the Elder, described both culinary and medicinal properties of the vegetable. Ancient Egyptians ate cooked cabbage at the beginning of meals to reduce the intoxicating effects of wine. This traditional usage persisted in European literature until the mid-20th century. The supposed cooling properties of the leaves were used in Britain as a treatment for trench foot in World War I, and as compresses for ulcers and breast abscesses. Other medicinal uses recorded in European folk medicine include treatments for rheumatism, sore throat, hoarseness, colic, and melancholy. Both mashed cabbage and cabbage juice have been used in poultices to remove boils and treat warts, pneumonia, appendicitis, and ulcers.
Biology and health sciences
Brassicales
null
55170
https://en.wikipedia.org/wiki/Genomics
Genomics
Genomics is an interdisciplinary field of molecular biology focusing on the structure, function, evolution, mapping, and editing of genomes. A genome is an organism's complete set of DNA, including all of its genes as well as its hierarchical, three-dimensional structural configuration. In contrast to genetics, which refers to the study of individual genes and their roles in inheritance, genomics aims at the collective characterization and quantification of all of an organism's genes, their interrelations and influence on the organism. Genes may direct the production of proteins with the assistance of enzymes and messenger molecules. In turn, proteins make up body structures such as organs and tissues as well as control chemical reactions and carry signals between cells. Genomics also involves the sequencing and analysis of genomes through the use of high-throughput DNA sequencing and bioinformatics to assemble and analyze the function and structure of entire genomes. Advances in genomics have triggered a revolution in discovery-based research and systems biology to facilitate understanding of even the most complex biological systems such as the brain. The field also includes studies of intragenomic (within the genome) phenomena such as epistasis (effect of one gene on another), pleiotropy (one gene affecting more than one trait), heterosis (hybrid vigour), and other interactions between loci and alleles within the genome. History Etymology From the Greek ΓΕΝ gen, "gene" (gamma, epsilon, nu) meaning "become, create, creation, birth", and subsequent variants: genealogy, genesis, genetics, genic, genomere, genotype, genus, etc. While the word genome (from the German Genom, attributed to Hans Winkler) was in use in English as early as 1926, the term genomics was coined by Tom Roderick, a geneticist at the Jackson Laboratory (Bar Harbor, Maine), over beers with Jim Womack, Tom Shows and Stephen O’Brien at a meeting held in Maryland on the mapping of the human genome in 1986, first as the name for a new journal and then as the name of a whole new scientific discipline. Early sequencing efforts Following Rosalind Franklin's confirmation of the helical structure of DNA, James D. Watson and Francis Crick's publication of the structure of DNA in 1953 and Fred Sanger's publication of the amino acid sequence of insulin in 1955, nucleic acid sequencing became a major target of early molecular biologists. In 1964, Robert W. Holley and colleagues published the first nucleic acid sequence ever determined, the ribonucleotide sequence of alanine transfer RNA. Extending this work, Marshall Nirenberg and Philip Leder revealed the triplet nature of the genetic code and were able to determine the sequences of 54 out of 64 codons in their experiments. In 1972, Walter Fiers and his team at the Laboratory of Molecular Biology of the University of Ghent (Ghent, Belgium) were the first to determine the sequence of a gene: the gene for the bacteriophage MS2 coat protein. Fiers' group expanded on their MS2 coat protein work, determining the complete nucleotide sequence of bacteriophage MS2-RNA (whose genome encodes just four genes in 3569 base pairs [bp]) and Simian virus 40 in 1976 and 1978, respectively. DNA-sequencing technology developed In addition to his seminal work on the amino acid sequence of insulin, Frederick Sanger and his colleagues played a key role in the development of DNA sequencing techniques that enabled the establishment of comprehensive genome sequencing projects.
In 1975, he and Alan Coulson published a sequencing procedure using DNA polymerase with radiolabelled nucleotides that he called the Plus and Minus technique. This involved two closely related methods that generated short oligonucleotides with defined 3' termini. These could be fractionated by electrophoresis on a polyacrylamide gel (called polyacrylamide gel electrophoresis) and visualised using autoradiography. The procedure could sequence up to 80 nucleotides in one go and was a big improvement, but was still very laborious. Nevertheless, in 1977 his group was able to sequence most of the 5,386 nucleotides of the single-stranded bacteriophage φX174, completing the first fully sequenced DNA-based genome. The refinement of the Plus and Minus method resulted in the chain-termination, or Sanger method (see below), which formed the basis of the techniques of DNA sequencing, genome mapping, data storage, and bioinformatic analysis most widely used in the following quarter-century of research. In the same year Walter Gilbert and Allan Maxam of Harvard University independently developed the Maxam-Gilbert method (also known as the chemical method) of DNA sequencing, involving the preferential cleavage of DNA at known bases, a less efficient method. For their groundbreaking work in the sequencing of nucleic acids, Gilbert and Sanger shared half the 1980 Nobel Prize in chemistry with Paul Berg (recombinant DNA). Complete genomes The advent of these technologies resulted in a rapid intensification in the scope and speed of completion of genome sequencing projects. The first complete genome sequence of a eukaryotic organelle, the human mitochondrion (16,568 bp, about 16.6 kb [kilobase]), was reported in 1981, and the first chloroplast genomes followed in 1986. In 1992, the first eukaryotic chromosome, chromosome III of brewer's yeast Saccharomyces cerevisiae (315 kb) was sequenced. The first free-living organism to be sequenced was that of Haemophilus influenzae (1.8 Mb [megabase]) in 1995. The following year a consortium of researchers from laboratories across North America, Europe, and Japan announced the completion of the first complete genome sequence of a eukaryote, S. cerevisiae (12.1 Mb), and since then genomes have continued being sequenced at an exponentially growing pace. , the complete sequences are available for: 2,719 viruses, 1,115 archaea and bacteria, and 36 eukaryotes, of which about half are fungi. Most of the microorganisms whose genomes have been completely sequenced are problematic pathogens, such as Haemophilus influenzae, which has resulted in a pronounced bias in their phylogenetic distribution compared to the breadth of microbial diversity. Of the other sequenced species, most were chosen because they were well-studied model organisms or promised to become good models. Yeast (Saccharomyces cerevisiae) has long been an important model organism for the eukaryotic cell, while the fruit fly Drosophila melanogaster has been a very important tool (notably in early pre-molecular genetics). The worm Caenorhabditis elegans is an often used simple model for multicellular organisms. The zebrafish Brachydanio rerio is used for many developmental studies on the molecular level, and the plant Arabidopsis thaliana is a model organism for flowering plants. The Japanese pufferfish (Takifugu rubripes) and the spotted green pufferfish (Tetraodon nigroviridis) are interesting because of their small and compact genomes, which contain very little noncoding DNA compared to most species. 
The mammals dog (Canis familiaris), brown rat (Rattus norvegicus), mouse (Mus musculus), and chimpanzee (Pan troglodytes) are all important model animals in medical research. A rough draft of the human genome was completed by the Human Genome Project in early 2001, creating much fanfare. This project, completed in 2003, sequenced the entire genome for one specific person, and by 2007 this sequence was declared "finished" (less than one error in 20,000 bases and all chromosomes assembled). In the years since then, the genomes of many other individuals have been sequenced, partly under the auspices of the 1000 Genomes Project, which announced the sequencing of 1,092 genomes in October 2012. Completion of this project was made possible by the development of dramatically more efficient sequencing technologies and required the commitment of significant bioinformatics resources from a large international collaboration. The continued analysis of human genomic data has profound political and social repercussions for human societies. The "omics" revolution The English-language neologism omics informally refers to a field of study in biology ending in -omics, such as genomics, proteomics or metabolomics. The related suffix -ome is used to address the objects of study of such fields, such as the genome, proteome, or metabolome (lipidome) respectively. The suffix -ome as used in molecular biology refers to a totality of some sort; similarly omics has come to refer generally to the study of large, comprehensive biological data sets. While the growth in the use of the term has led some scientists (Jonathan Eisen, among others) to claim that it has been oversold, it reflects the change in orientation towards the quantitative analysis of the complete or near-complete assortment of all the constituents of a system. In the study of symbioses, for example, researchers who were once limited to the study of a single gene product can now simultaneously compare the total complement of several types of biological molecules. Genome analysis After an organism has been selected, genome projects involve three components: the sequencing of DNA, the assembly of that sequence to create a representation of the original chromosome, and the annotation and analysis of that representation. Sequencing Historically, sequencing was done in sequencing centers, centralized facilities (ranging from large independent institutions such as the Joint Genome Institute, which sequence dozens of terabases a year, to local molecular biology core facilities) which contain research laboratories with the costly instrumentation and technical support necessary. As sequencing technology continues to improve, however, a new generation of effective, fast-turnaround benchtop sequencers has come within reach of the average academic laboratory. On the whole, genome sequencing approaches fall into two broad categories: shotgun and high-throughput (or next-generation) sequencing. Shotgun sequencing Shotgun sequencing is a sequencing method designed for analysis of DNA sequences longer than 1000 base pairs, up to and including entire chromosomes. It is named by analogy with the rapidly expanding, quasi-random firing pattern of a shotgun. Since gel electrophoresis sequencing can only be used for fairly short sequences (100 to 1000 base pairs), longer DNA sequences must be broken into random small segments which are then sequenced to obtain reads.
Multiple overlapping reads for the target DNA are obtained by performing several rounds of this fragmentation and sequencing. Computer programs then use the overlapping ends of different reads to assemble them into a continuous sequence. Shotgun sequencing is a random sampling process, requiring over-sampling to ensure a given nucleotide is represented in the reconstructed sequence; the average number of reads by which a genome is over-sampled is referred to as coverage. For much of its history, the technology underlying shotgun sequencing was the classical chain-termination method or 'Sanger method', which is based on the selective incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication. Recently, shotgun sequencing has been supplanted by high-throughput sequencing methods, especially for large-scale, automated genome analyses. However, the Sanger method remains in wide use, primarily for smaller-scale projects and for obtaining especially long contiguous DNA sequence reads (>500 nucleotides). Chain-termination methods require a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleosidetriphosphates (dNTPs), and modified nucleotides (dideoxyNTPs) that terminate DNA strand elongation. These chain-terminating nucleotides lack a 3'-OH group required for the formation of a phosphodiester bond between two nucleotides, causing DNA polymerase to cease extension of DNA when a ddNTP is incorporated. The ddNTPs may be radioactively or fluorescently labelled for detection in DNA sequencers. Typically, these machines can sequence up to 96 DNA samples in a single batch (run) in up to 48 runs a day. High-throughput sequencing The high demand for low-cost sequencing has driven the development of high-throughput sequencing technologies that parallelize the sequencing process, producing thousands or millions of sequences at once. High-throughput sequencing is intended to lower the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. In ultra-high-throughput sequencing, as many as 500,000 sequencing-by-synthesis operations may be run in parallel. The Illumina dye sequencing method is based on reversible dye-terminators and was developed in 1996 at the Geneva Biomedical Research Institute, by Pascal Mayer and Laurent Farinelli. In this method, DNA molecules and primers are first attached on a slide and amplified with polymerase so that local clonal colonies, initially coined "DNA colonies", are formed. To determine the sequence, four types of reversible terminator bases (RT-bases) are added and non-incorporated nucleotides are washed away. Unlike pyrosequencing, the DNA chains are extended one nucleotide at a time and image acquisition can be performed at a delayed moment, allowing for very large arrays of DNA colonies to be captured by sequential images taken from a single camera. Decoupling the enzymatic reaction and the image capture allows for optimal throughput and theoretically unlimited sequencing capacity; with an optimal configuration, the ultimate throughput of the instrument depends only on the A/D conversion rate of the camera. The camera takes images of the fluorescently labeled nucleotides, then the dye along with the terminal 3' blocker is chemically removed from the DNA, allowing the next cycle. An alternative approach, ion semiconductor sequencing, is based on standard DNA replication chemistry. This technology measures the release of a hydrogen ion each time a base is incorporated. 
In ion semiconductor sequencing, a microwell containing template DNA is flooded with a single species of nucleotide; if the nucleotide is complementary to the template strand, it will be incorporated and a hydrogen ion will be released. This release triggers an ISFET ion sensor. If a homopolymer is present in the template sequence, multiple nucleotides will be incorporated in a single flood cycle, and the detected electrical signal will be proportionally higher. Assembly Sequence assembly refers to aligning and merging fragments of a much longer DNA sequence in order to reconstruct the original sequence. This is needed because current DNA sequencing technology cannot read whole genomes as a continuous sequence, but rather reads small pieces of between 20 and 1000 bases, depending on the technology used. Third-generation sequencing technologies such as PacBio or Oxford Nanopore routinely generate sequencing reads 10-100 kb in length; however, they have a high error rate of approximately 1 percent. Typically the short fragments, called reads, result from shotgun sequencing of genomic DNA or gene transcripts (ESTs). Assembly approaches Assembly can be broadly categorized into two approaches: de novo assembly, for genomes that are not similar to any sequenced in the past, and comparative assembly, which uses the existing sequence of a closely related organism as a reference during assembly. Relative to comparative assembly, de novo assembly is computationally difficult (NP-hard), making it less favourable for short-read NGS technologies. Within the de novo assembly paradigm there are two primary strategies for assembly: Eulerian path strategies and overlap-layout-consensus (OLC) strategies. OLC strategies ultimately try to create a Hamiltonian path through an overlap graph, which is an NP-hard problem. Eulerian path strategies are computationally more tractable because they try to find an Eulerian path through a de Bruijn graph. Finishing Finished genomes are defined as having a single contiguous sequence with no ambiguities representing each replicon. Annotation The DNA sequence assembly alone is of little value without additional analysis. Genome annotation is the process of attaching biological information to sequences, and consists of three main steps: identifying portions of the genome that do not code for proteins, identifying elements on the genome (a process called gene prediction), and attaching biological information to these elements. Automatic annotation tools try to perform these steps in silico, as opposed to manual annotation (a.k.a. curation), which involves human expertise and potential experimental verification. Ideally, these approaches co-exist and complement each other in the same annotation pipeline (also see below). Traditionally, the basic level of annotation uses BLAST for finding similarities, and then annotates genomes based on homologues. More recently, additional information is added to the annotation platform. The additional information allows manual annotators to deconvolute discrepancies between genes that are given the same annotation. Some databases use genome context information, similarity scores, experimental data, and integration of other resources to provide genome annotations through their Subsystems approach. Other databases (e.g. Ensembl) rely both on curated data sources and on a range of software tools in their automated genome annotation pipeline. Structural annotation consists of the identification of genomic elements, primarily ORFs and their localisation, or gene structure.
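As a toy illustration of the structural-annotation step just described, the sketch below scans a sequence for open reading frames (an ATG start codon followed by an in-frame stop codon). Real gene-prediction tools use far more evidence (codon usage, splice sites, homology, length thresholds of hundreds of codons); the contig and the length cutoff here are invented for illustration.

```python
import re

# A candidate ORF: ATG, any whole number of codons, then the first in-frame stop codon
ORF_PATTERN = re.compile(r"ATG(?:[ACGT]{3})*?(?:TAA|TAG|TGA)")

def find_orfs(sequence: str, min_length: int = 21):
    """Report candidate open reading frames on the forward strand only.
    min_length is in nucleotides; real pipelines use much larger cutoffs."""
    return [(m.start(), m.group())
            for m in ORF_PATTERN.finditer(sequence)
            if len(m.group()) >= min_length]

# Hypothetical fragment of assembled sequence
contig = "CCATGAAATTTGGGCCCAAATAGGTTATGCCCGGGAAATTTCCCTGATT"
for start, orf in find_orfs(contig):
    print(f"ORF at position {start}: {orf}")
```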
Functional annotation consists of attaching biological information to genomic elements. Sequencing pipelines and databases The need for reproducibility and efficient management of the large amount of data associated with genome projects mean that computational pipelines have important applications in genomics. Research areas Functional genomics Functional genomics is a field of molecular biology that attempts to make use of the vast wealth of data produced by genomic projects (such as genome sequencing projects) to describe gene (and protein) functions and interactions. Functional genomics focuses on the dynamic aspects such as gene transcription, translation, and protein–protein interactions, as opposed to the static aspects of the genomic information such as DNA sequence or structures. Functional genomics attempts to answer questions about the function of DNA at the levels of genes, RNA transcripts, and protein products. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "gene-by-gene" approach. A major branch of genomics is still concerned with sequencing the genomes of various organisms, but the knowledge of full genomes has created the possibility for the field of functional genomics, mainly concerned with patterns of gene expression during various conditions. The most important tools here are microarrays and bioinformatics. Structural genomics Structural genomics seeks to describe the 3-dimensional structure of every protein encoded by a given genome. This genome-based approach allows for a high-throughput method of structure determination by a combination of experimental and modeling approaches. The principal difference between structural genomics and traditional structural prediction is that structural genomics attempts to determine the structure of every protein encoded by the genome, rather than focusing on one particular protein. With full-genome sequences available, structure prediction can be done more quickly through a combination of experimental and modeling approaches, especially because the availability of large numbers of sequenced genomes and previously solved protein structures allow scientists to model protein structure on the structures of previously solved homologs. Structural genomics involves taking a large number of approaches to structure determination, including experimental methods using genomic sequences or modeling-based approaches based on sequence or structural homology to a protein of known structure or based on chemical and physical principles for a protein with no homology to any known structure. As opposed to traditional structural biology, the determination of a protein structure through a structural genomics effort often (but not always) comes before anything is known regarding the protein function. This raises new challenges in structural bioinformatics, i.e. determining protein function from its 3D structure. Epigenomics Epigenomics is the study of the complete set of epigenetic modifications on the genetic material of a cell, known as the epigenome. Epigenetic modifications are reversible modifications on a cell's DNA or histones that affect gene expression without altering the DNA sequence (Russell 2010 p. 475). Two of the most characterized epigenetic modifications are DNA methylation and histone modification. 
Epigenetic modifications play an important role in gene expression and regulation, and are involved in numerous cellular processes such as in differentiation/development and tumorigenesis. The study of epigenetics on a global level has been made possible only recently through the adaptation of genomic high-throughput assays. Metagenomics Metagenomics is the study of metagenomes, genetic material recovered directly from environmental samples. The broad field may also be referred to as environmental genomics, ecogenomics or community genomics. While traditional microbiology and microbial genome sequencing rely upon cultivated clonal cultures, early environmental gene sequencing cloned specific genes (often the 16S rRNA gene) to produce a profile of diversity in a natural sample. Such work revealed that the vast majority of microbial biodiversity had been missed by cultivation-based methods. Recent studies use "shotgun" Sanger sequencing or massively parallel pyrosequencing to get largely unbiased samples of all genes from all the members of the sampled communities. Because of its power to reveal the previously hidden diversity of microscopic life, metagenomics offers a powerful lens for viewing the microbial world that has the potential to revolutionize understanding of the entire living world. Model systems Viruses and bacteriophages Bacteriophages have played and continue to play a key role in bacterial genetics and molecular biology. Historically, they were used to define gene structure and gene regulation. Also the first genome to be sequenced was a bacteriophage. However, bacteriophage research did not lead the genomics revolution, which is clearly dominated by bacterial genomics. Only very recently has the study of bacteriophage genomes become prominent, thereby enabling researchers to understand the mechanisms underlying phage evolution. Bacteriophage genome sequences can be obtained through direct sequencing of isolated bacteriophages, but can also be derived as part of microbial genomes. Analysis of bacterial genomes has shown that a substantial amount of microbial DNA consists of prophage sequences and prophage-like elements. A detailed database mining of these sequences offers insights into the role of prophages in shaping the bacterial genome: Overall, this method verified many known bacteriophage groups, making this a useful tool for predicting the relationships of prophages from bacterial genomes. Cyanobacteria At present there are 24 cyanobacteria for which a total genome sequence is available. 15 of these cyanobacteria come from the marine environment. These are six Prochlorococcus strains, seven marine Synechococcus strains, Trichodesmium erythraeum IMS101 and Crocosphaera watsonii WH8501. Several studies have demonstrated how these sequences could be used very successfully to infer important ecological and physiological characteristics of marine cyanobacteria. However, there are many more genome projects currently in progress, amongst those there are further Prochlorococcus and marine Synechococcus isolates, Acaryochloris and Prochloron, the N2-fixing filamentous cyanobacteria Nodularia spumigena, Lyngbya aestuarii and Lyngbya majuscula, as well as bacteriophages infecting marine cyanobaceria. Thus, the growing body of genome information can also be tapped in a more general way to address global problems by applying a comparative approach. 
Some new and exciting examples of progress in this field are the identification of genes for regulatory RNAs, insights into the evolutionary origin of photosynthesis, or estimation of the contribution of horizontal gene transfer to the genomes that have been analyzed. Applications Genomics has provided applications in many fields, including medicine, biotechnology, anthropology and other social sciences. Genomic medicine Next-generation genomic technologies allow clinicians and biomedical researchers to drastically increase the amount of genomic data collected on large study populations. When combined with new informatics approaches that integrate many kinds of data with genomic data in disease research, this allows researchers to better understand the genetic bases of drug response and disease. Early efforts to apply the genome to medicine included those by a Stanford team led by Euan Ashley who developed the first tools for the medical interpretation of a human genome. The Genomes2People research program at Brigham and Women’s Hospital, Broad Institute and Harvard Medical School was established in 2012 to conduct empirical research in translating genomics into health. Brigham and Women's Hospital opened a Preventive Genomics Clinic in August 2019, with Massachusetts General Hospital following a month later. The All of Us research program aims to collect genome sequence data from 1 million participants to become a critical component of the precision medicine research platform and the UK Biobank initiative has studied more than 500.000 individuals with deep genomic and phenotypic data. Synthetic biology and bioengineering The growth of genomic knowledge has enabled increasingly sophisticated applications of synthetic biology. In 2010 researchers at the J. Craig Venter Institute announced the creation of a partially synthetic species of bacterium, Mycoplasma laboratorium, derived from the genome of Mycoplasma genitalium. Population and conservation genomics Population genomics has developed as a popular field of research, where genomic sequencing methods are used to conduct large-scale comparisons of DNA sequences among populations - beyond the limits of genetic markers such as short-range PCR products or microsatellites traditionally used in population genetics. Population genomics studies genome-wide effects to improve our understanding of microevolution so that we may learn the phylogenetic history and demography of a population. Population genomic methods are used for many different fields including evolutionary biology, ecology, biogeography, conservation biology and fisheries management. Similarly, landscape genomics has developed from landscape genetics to use genomic methods to identify relationships between patterns of environmental and genetic variation. Conservationists can use the information gathered by genomic sequencing in order to better evaluate genetic factors key to species conservation, such as the genetic diversity of a population or whether an individual is heterozygous for a recessive inherited genetic disorder. By using genomic data to evaluate the effects of evolutionary processes and to detect patterns in variation throughout a given population, conservationists can formulate plans to aid a given species without as many variables left unknown as those unaddressed by standard genetic approaches.
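Diversity metrics of the kind mentioned above can be computed directly from genome-wide variant data. Below is a minimal sketch for a single locus using the standard expected-heterozygosity (gene diversity) formula; the allele counts are invented and serve only to contrast a large population with a bottlenecked one.

```python
def expected_heterozygosity(allele_counts):
    """Expected heterozygosity at one locus: He = 1 - sum(p_i^2),
    where p_i are the allele frequencies."""
    total = sum(allele_counts.values())
    return 1.0 - sum((count / total) ** 2 for count in allele_counts.values())

# Hypothetical allele counts at one locus in two populations
large_population = {"A1": 480, "A2": 350, "A3": 170}
bottlenecked     = {"A1": 960, "A2": 40}
for name, counts in [("large population", large_population),
                     ("bottlenecked population", bottlenecked)]:
    print(f"{name}: He = {expected_heterozygosity(counts):.3f}")
```

Averaged over many loci, this kind of statistic is one way genomic data translate into the conservation-relevant measures of genetic diversity discussed above.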
Proteomics
Proteomics is the large-scale study of proteins. Proteins are vital macromolecules of all living organisms, with many functions such as the formation of structural fibers of muscle tissue, enzymatic digestion of food, or synthesis and replication of DNA. In addition, other kinds of proteins include antibodies that protect an organism from infection, and hormones that send important signals throughout the body. The proteome is the entire set of proteins produced or modified by an organism or system. Proteomics enables the identification of ever-increasing numbers of proteins. This varies with time and distinct requirements, or stresses, that a cell or organism undergoes. Proteomics is an interdisciplinary domain that has benefited greatly from the genetic information of various genome projects, including the Human Genome Project. It covers the exploration of proteomes from the overall level of protein composition, structure, and activity, and is an important component of functional genomics. Proteomics generally denotes the large-scale experimental analysis of proteins and proteomes, but often refers specifically to protein purification and mass spectrometry. Indeed, mass spectrometry is the most powerful method for analysis of proteomes, both in large samples composed of millions of cells and in single cells. History and etymology The first studies of proteins that could be regarded as proteomics began in 1974, after the introduction of the two-dimensional gel and mapping of the proteins from the bacterium Escherichia coli. Proteome is a blend of the words "protein" and "genome". It was coined in 1994 by then-Ph.D student Marc Wilkins at Macquarie University, which founded the first dedicated proteomics laboratory in 1995. Complexity of the problem After genomics and transcriptomics, proteomics is the next step in the study of biological systems. It is more complicated than genomics because an organism's genome is more or less constant, whereas proteomes differ from cell to cell and from time to time. Distinct genes are expressed in different cell types, which means that even the basic set of proteins produced in a cell must be identified. In the past this phenomenon was assessed by RNA analysis, which was found to lack correlation with protein content. It is now known that mRNA is not always translated into protein, and the amount of protein produced for a given amount of mRNA depends on the gene it is transcribed from and on the cell's physiological state. Proteomics confirms the presence of the protein and provides a direct measure of its quantity. Post-translational modifications Not only does the translation from mRNA cause differences, but many proteins also are subjected to a wide variety of chemical modifications after translation. The most common and widely studied post-translational modifications include phosphorylation and glycosylation. Many of these post-translational modifications are critical to the protein's function. Phosphorylation One such modification is phosphorylation, which happens to many enzymes and structural proteins in the process of cell signaling. The addition of a phosphate to particular amino acids—most commonly serine and threonine mediated by serine-threonine kinases, or more rarely tyrosine mediated by tyrosine kinases—causes a protein to become a target for binding or interacting with a distinct set of other proteins that recognize the phosphorylated domain. 
Because protein phosphorylation is one of the most studied protein modifications, many "proteomic" efforts are geared to determining the set of phosphorylated proteins in a particular cell or tissue-type under particular circumstances. This alerts the scientist to the signaling pathways that may be active in that instance. Ubiquitination Ubiquitin is a small protein that may be affixed to certain protein substrates by enzymes called E3 ubiquitin ligases. Determining which proteins are poly-ubiquitinated helps understand how protein pathways are regulated. This is, therefore, an additional legitimate "proteomic" study. Similarly, once a researcher determines which substrates are ubiquitinated by each ligase, determining the set of ligases expressed in a particular cell type is helpful. Additional modifications In addition to phosphorylation and ubiquitination, proteins may be subjected to (among others) methylation, acetylation, glycosylation, oxidation, and nitrosylation. Some proteins undergo all these modifications, often in time-dependent combinations. This illustrates the potential complexity of studying protein structure and function. Distinct proteins are made under distinct settings A cell may make different sets of proteins at different times or under different conditions, for example during development, cellular differentiation, cell cycle, or carcinogenesis. Further increasing proteome complexity, as mentioned, most proteins are able to undergo a wide range of post-translational modifications. Therefore, a "proteomics" study may become complex very quickly, even if the topic of study is restricted. In more ambitious settings, such as when a biomarker for a specific cancer subtype is sought, the proteomics scientist might elect to study multiple blood serum samples from multiple cancer patients to minimise confounding factors and account for experimental noise. Thus, complicated experimental designs are sometimes necessary to account for the dynamic complexity of the proteome. Limitations of genomics and proteomics studies Proteomics gives a different level of understanding than genomics for many reasons: the level of transcription of a gene gives only a rough estimate of its level of translation into a protein. An mRNA produced in abundance may be degraded rapidly or translated inefficiently, resulting in a small amount of protein. as mentioned above, many proteins experience post-translational modifications that profoundly affect their activities; for example, some proteins are not active until they become phosphorylated. Methods such as phosphoproteomics and glycoproteomics are used to study post-translational modifications. many transcripts give rise to more than one protein, through alternative splicing or alternative post-translational modifications. many proteins form complexes with other proteins or RNA molecules, and only function in the presence of these other molecules. protein degradation rate plays an important role in protein content. Reproducibility. One major factor affecting reproducibility in proteomics experiments is the simultaneous elution of many more peptides than mass spectrometers can measure. This causes stochastic differences between experiments due to data-dependent acquisition of tryptic peptides. 
Although early large-scale shotgun proteomics analyses showed considerable variability between laboratories, presumably due in part to technical and experimental differences between laboratories, reproducibility has been improved in more recent mass spectrometry analysis, particularly on the protein level. Notably, targeted proteomics shows increased reproducibility and repeatability compared with shotgun methods, although at the expense of data density and effectiveness. Data quality. Proteomic analysis is highly amenable to automation and large data sets are created, which are processed by software algorithms. Filter parameters are used to reduce the number of false hits, but they cannot be completely eliminated. Scientists have expressed the need for awareness that proteomics experiments should adhere to the criteria of analytical chemistry (sufficient data quality, sanity check, validation). Methods of studying proteins In proteomics, there are multiple methods to study proteins. Generally, proteins may be detected by using either antibodies (immunoassays), electrophoretic separation or mass spectrometry. If a complex biological sample is analyzed, either a very specific antibody needs to be used in quantitative dot blot analysis (QDB), or biochemical separation then needs to be used before the detection step, as there are too many analytes in the sample to perform accurate detection and quantification. Protein detection with antibodies (immunoassays) Antibodies to particular proteins, or their modified forms, have been used in biochemistry and cell biology studies. These are among the most common tools used by molecular biologists today. There are several specific techniques and protocols that use antibodies for protein detection. The enzyme-linked immunosorbent assay (ELISA) has been used for decades to detect and quantitatively measure proteins in samples. The western blot may be used for detection and quantification of individual proteins, where in an initial step, a complex protein mixture is separated using SDS-PAGE and then the protein of interest is identified using an antibody. Modified proteins may be studied by developing an antibody specific to that modification. For example, some antibodies only recognize certain proteins when they are tyrosine-phosphorylated, they are known as phospho-specific antibodies. Also, there are antibodies specific to other modifications. These may be used to determine the set of proteins that have undergone the modification of interest. Immunoassays can also be carried out using recombinantly generated immunoglobulin derivatives or synthetically designed protein scaffolds that are selected for high antigen specificity. Such binders include single domain antibody fragments (Nanobodies), designed ankyrin repeat proteins (DARPins) and aptamers. Disease detection at the molecular level is driving the emerging revolution of early diagnosis and treatment. A challenge facing the field is that protein biomarkers for early diagnosis may be present in very low abundance. The lower limit of detection with conventional immunoassay technology is the upper femtomolar range (10−13 M). Digital immunoassay technology has improved detection sensitivity three logs, to the attomolar range (10−16 M). This capability has the potential to open new advances in diagnostics and therapeutics, but such technologies have been relegated to manual procedures that are not well suited for efficient routine use. 
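To put the femtomolar and attomolar detection limits quoted above into more tangible units, a molar concentration can be converted to a number of molecules using Avogadro's number. The sample volume in the sketch below is an assumed value chosen for illustration.

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_in_sample(concentration_molar: float, volume_litres: float) -> float:
    """Number of analyte molecules present at a given molar concentration and volume."""
    return concentration_molar * volume_litres * AVOGADRO

volume = 100e-6  # assume a 100 microlitre sample
for label, conc in [("conventional immunoassay limit (~1e-13 M)", 1e-13),
                    ("digital immunoassay limit (~1e-16 M)", 1e-16)]:
    print(f"{label}: ~{molecules_in_sample(conc, volume):.0f} molecules per 100 uL")
```

The three-orders-of-magnitude gain corresponds to moving from millions of molecules per sample down to a few thousand, which is why digital immunoassays matter for low-abundance biomarkers.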
Antibody-free protein detection While protein detection with antibodies is still very common in molecular biology, other methods have been developed as well, that do not rely on an antibody. These methods offer various advantages, for instance they often are able to determine the sequence of a protein or peptide, they may have higher throughput than antibody-based, and they sometimes can identify and quantify proteins for which no antibody exists. Detection methods One of the earliest methods for protein analysis has been Edman degradation (introduced in 1967) where a single peptide is subjected to multiple steps of chemical degradation to resolve its sequence. These early methods have mostly been supplanted by technologies that offer higher throughput. More recently implemented methods use mass spectrometry-based techniques, a development that was made possible by the discovery of "soft ionization" methods developed in the 1980s, such as matrix-assisted laser desorption/ionization (MALDI) and electrospray ionization (ESI). These methods gave rise to the top-down and the bottom-up proteomics workflows where often additional separation is performed before analysis (see below). Separation methods For the analysis of complex biological samples, a reduction of sample complexity is required. This may be performed off-line by one-dimensional or two-dimensional separation. More recently, on-line methods have been developed where individual peptides (in bottom-up proteomics approaches) are separated using reversed-phase chromatography and then, directly ionized using ESI; the direct coupling of separation and analysis explains the term "on-line" analysis. Hybrid technologies Several hybrid technologies use antibody-based purification of individual analytes and then perform mass spectrometric analysis for identification and quantification. Examples of these methods are the MSIA (mass spectrometric immunoassay), developed by Randall Nelson in 1995, and the SISCAPA (Stable Isotope Standard Capture with Anti-Peptide Antibodies) method, introduced by Leigh Anderson in 2004. Current research methodologies Fluorescence two-dimensional differential gel electrophoresis (2-D DIGE) may be used to quantify variation in the 2-D DIGE process and establish statistically valid thresholds for assigning quantitative changes between samples. Comparative proteomic analysis may reveal the role of proteins in complex biological systems, including reproduction. For example, treatment with the insecticide triazophos causes an increase in the content of brown planthopper (Nilaparvata lugens (Stål)) male accessory gland proteins (Acps) that may be transferred to females via mating, causing an increase in fecundity (i.e. birth rate) of females. To identify changes in the types of accessory gland proteins (Acps) and reproductive proteins that mated female planthoppers received from male planthoppers, researchers conducted a comparative proteomic analysis of mated N. lugens females. The results indicated that these proteins participate in the reproductive process of N. lugens adult females and males. Proteome analysis of Arabidopsis peroxisomes has been established as the major unbiased approach for identifying new peroxisomal proteins on a large scale. There are many approaches to characterizing the human proteome, which is estimated to contain between 20,000 and 25,000 non-redundant proteins. 
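As a concrete illustration of the bottom-up workflow described above, the sketch below performs an in silico tryptic digest and computes monoisotopic peptide masses, the values a search engine would compare against observed spectra. The residue masses are standard, but the protein sequence is invented.

```python
import re

# Monoisotopic residue masses (Da) for the standard amino acids
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.010565  # added once per peptide for the terminal H and OH

def tryptic_digest(sequence: str) -> list[str]:
    """Cleave after K or R, except when followed by P (the classic trypsin rule)."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", sequence) if p]

def monoisotopic_mass(peptide: str) -> float:
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

protein = "MKWVTFISLLLLFSSAYSRGVFRR"  # hypothetical sequence
for pep in tryptic_digest(protein):
    print(f"{pep:>20s}  {monoisotopic_mass(pep):10.4f} Da")
```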
The number of unique protein species likely will increase by between 50,000 and 500,000 due to RNA splicing and proteolysis events, and when post-translational modification also are considered, the total number of unique human proteins is estimated to range in the low millions. In addition, the first promising attempts to decipher the proteome of animal tumors have recently been reported. This method was used as a functional method in Macrobrachium rosenbergii protein profiling. High-throughput proteomic technologies Proteomics has steadily gained momentum over the past decade with the evolution of several approaches. Few of these are new, and others build on traditional methods. Mass spectrometry-based methods, affinity proteomics, and micro arrays are the most common technologies for large-scale study of proteins. Mass spectrometry and protein profiling There are two mass spectrometry-based methods currently used for protein profiling. The more established and widespread method uses high resolution, two-dimensional electrophoresis to separate proteins from different samples in parallel, followed by selection and staining of differentially expressed proteins to be identified by mass spectrometry. Despite the advances in 2-DE and its maturity, it has its limits as well. The central concern is the inability to resolve all the proteins within a sample, given their dramatic range in expression level and differing properties. The combination of pore size, and protein charge, size and shape can greatly determine migration rate which leads to other complications. The second quantitative approach uses stable isotope tags to differentially label proteins from two different complex mixtures. Here, the proteins within a complex mixture are labeled isotopically first, and then digested to yield labeled peptides. The labeled mixtures are then combined, the peptides separated by multidimensional liquid chromatography and analyzed by tandem mass spectrometry. Isotope coded affinity tag (ICAT) reagents are the widely used isotope tags. In this method, the cysteine residues of proteins get covalently attached to the ICAT reagent, thereby reducing the complexity of the mixtures omitting the non-cysteine residues. Quantitative proteomics using stable isotopic tagging is an increasingly useful tool in modern development. Firstly, chemical reactions have been used to introduce tags into specific sites or proteins for the purpose of probing specific protein functionalities. The isolation of phosphorylated peptides has been achieved using isotopic labeling and selective chemistries to capture the fraction of protein among the complex mixture. Secondly, the ICAT technology was used to differentiate between partially purified or purified macromolecular complexes such as large RNA polymerase II pre-initiation complex and the proteins complexed with yeast transcription factor. Thirdly, ICAT labeling was recently combined with chromatin isolation to identify and quantify chromatin-associated proteins. Finally ICAT reagents are useful for proteomic profiling of cellular organelles and specific cellular fractions. Another quantitative approach is the accurate mass and time (AMT) tag approach developed by Richard D. Smith and coworkers at Pacific Northwest National Laboratory. 
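Returning to the stable-isotope labelling strategies described above, the quantitative readout is typically a heavy-to-light intensity ratio for each labelled peptide pair. The sketch below uses invented peptide intensities purely to show the arithmetic.

```python
import math

def log2_ratios(pairs):
    """For each peptide, compute the heavy/light intensity ratio and its log2,
    the usual readout of an isotope-tagging (e.g. ICAT-style) experiment."""
    results = {}
    for peptide, (light, heavy) in pairs.items():
        ratio = heavy / light
        results[peptide] = (ratio, math.log2(ratio))
    return results

# Hypothetical intensities: (light-labelled sample, heavy-labelled sample)
intensities = {
    "LVNEVTEFAK": (1.0e6, 2.1e6),
    "AEFVEVTK":   (5.0e5, 4.8e5),
    "QTALVELVK":  (2.0e5, 6.3e4),
}
for pep, (ratio, log2r) in log2_ratios(intensities).items():
    print(f"{pep:>12s}  ratio={ratio:5.2f}  log2={log2r:+.2f}")
```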
In the AMT approach, increased throughput and sensitivity are achieved by avoiding the need for tandem mass spectrometry, and by making use of precisely determined separation time information and highly accurate mass determinations for peptide and protein identifications. Affinity proteomics Affinity proteomics uses antibodies or other affinity reagents (such as oligonucleotide-based aptamers) as protein-specific detection probes. Currently this method can interrogate several thousand proteins, typically from biofluids such as plasma, serum or cerebrospinal fluid (CSF). A key differentiator for this technology is the ability to analyze hundreds or thousands of samples in a reasonable timeframe (a matter of days or weeks); mass spectrometry-based methods are not scalable to this level of sample throughput for proteomics analyses. Protein chips Balancing the use of mass spectrometers in proteomics and in medicine is the use of protein microarrays. The aim behind protein microarrays is to print thousands of protein-detecting features for the interrogation of biological samples. Antibody arrays are an example in which a host of different antibodies are arrayed to detect their respective antigens from a sample of human blood. Another approach is the arraying of multiple protein types for the study of properties like protein-DNA, protein-protein and protein-ligand interactions. Ideally, the functional proteomic arrays would contain the entire complement of the proteins of a given organism. The first version of such arrays consisted of 5000 purified proteins from yeast deposited onto glass microscope slides. Despite the success of the first chip, implementing protein arrays more broadly proved a greater challenge. Proteins are inherently much more difficult to work with than DNA. They have a broad dynamic range, are less stable than DNA, and their structure is difficult to preserve on glass slides, though it is essential for most assays. The global ICAT technology has striking advantages over protein chip technologies. Reverse-phased protein microarrays This is a promising and newer microarray application for the diagnosis, study and treatment of complex diseases such as cancer. The technology merges laser capture microdissection (LCM) with microarray technology to produce reverse-phase protein microarrays. In this type of microarray, the proteins themselves are immobilized, with the intent of capturing various stages of disease within an individual patient. When used with LCM, reverse-phase arrays can monitor the fluctuating state of the proteome among different cell populations within a small area of human tissue. This is useful for profiling the status of cellular signaling molecules among a cross-section of tissue that includes both normal and cancerous cells. This approach is useful in monitoring the status of key factors in normal prostate epithelium and invasive prostate cancer tissues. LCM is used to dissect these tissues, and the protein lysates are arrayed onto nitrocellulose slides, which are probed with specific antibodies. This method can track all kinds of molecular events and can compare diseased and healthy tissues within the same patient, enabling the development of treatment strategies and diagnostics. The ability to acquire proteomic snapshots of neighboring cell populations using reverse-phase microarrays in conjunction with LCM has a number of applications beyond the study of tumors.
The approach can provide insights into normal physiology and pathology of all the tissues and is invaluable for characterizing developmental processes and anomalies. Protein Detection via Bioorthogonal Chemistry Recent advancements in bioorthogonal chemistry have revealed applications in protein analysis. Extending the use of small organic molecules that react with proteins has opened up a wide range of methods for tagging them. Unnatural amino acids and various functional groups represent new and growing technologies in proteomics. Specific biomolecules that are capable of being metabolized in cells or tissues are inserted into proteins or glycans. The molecule carries an affinity tag that modifies the protein and allows it to be detected. Azidohomoalanine (AHA) is incorporated into proteins via the Met-tRNA synthetase and can then be detected through its affinity tag. This has allowed AHA to be used to determine the identity of newly synthesized proteins created in response to perturbations and to identify proteins secreted by cells. Recent studies using ketone and aldehyde condensations show that they are best suited for in vitro or cell-surface labeling. However, using ketones and aldehydes as bioorthogonal reporters revealed slow kinetics, indicating that while they are effective for labeling, high concentrations are required. Certain proteins can be detected via their reactivity to azide groups. Non-proteinogenic amino acids can bear azide groups, which react with phosphines in Staudinger ligations. This reaction has already been used to label other biomolecules in living cells and animals. The bioorthogonal field is expanding and is driving further applications within proteomics. It is worth noting the limitations and benefits: rapid reactions can form bioconjugates in high yield with low amounts of reactants, whereas slower reactions such as aldehyde and ketone condensation, while effective for labeling, require high concentrations, making them less cost-efficient. Practical applications New drug discovery One major development to come from the study of human genes and proteins has been the identification of potential new drugs for the treatment of disease. This relies on genome and proteome information to identify proteins associated with a disease, which computer software can then use as targets for new drugs. For example, if a certain protein is implicated in a disease, its 3D structure provides the information to design drugs to interfere with the action of the protein. A molecule that fits the active site of an enzyme, but cannot be released by the enzyme, inactivates the enzyme. This is the basis of new drug-discovery tools, which aim to find new drugs to inactivate proteins involved in disease. As genetic differences among individuals are found, researchers expect to use these techniques to develop personalized drugs that are more effective for the individual. Proteomics is also used to reveal complex plant-insect interactions that help identify candidate genes involved in the defensive response of plants to herbivory. A branch of proteomics called chemoproteomics provides numerous tools and techniques to detect protein targets of drugs. Interaction proteomics and protein networks Interaction proteomics is the analysis of protein interactions from scales of binary interactions to proteome- or network-wide. Most proteins function via protein–protein interactions, and one goal of interaction proteomics is to identify binary protein interactions, protein complexes, and interactomes.
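As a minimal sketch of that last idea, binary interactions can be assembled into a graph whose connected components serve as very rough candidate complexes or modules; real interactome analysis uses weighted edges and dedicated clustering algorithms. The interaction list below is invented.

```python
from collections import defaultdict

def connected_components(edges):
    """Group proteins into connected components of the interaction graph."""
    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, components = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.add(node)
            stack.extend(adjacency[node] - seen)
        components.append(component)
    return components

# Hypothetical binary interactions, e.g. from yeast two-hybrid or AP-MS experiments
interactions = [("A", "B"), ("B", "C"), ("C", "A"), ("D", "E"), ("F", "D")]
for comp in connected_components(interactions):
    print(sorted(comp))
```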
Several methods are available to probe protein–protein interactions. While the most traditional method is yeast two-hybrid analysis, a powerful emerging method is affinity purification followed by protein mass spectrometry using tagged protein baits. Other methods include surface plasmon resonance (SPR), protein microarrays, dual polarisation interferometry, microscale thermophoresis, kinetic exclusion assay, and experimental methods such as phage display and in silico computational methods. Knowledge of protein-protein interactions is especially useful in regard to biological networks and systems biology, for example in cell signaling cascades and gene regulatory networks (GRNs, where knowledge of protein-DNA interactions is also informative). Proteome-wide analysis of protein interactions, and integration of these interaction patterns into larger biological networks, is crucial towards understanding systems-level biology. Expression proteomics Expression proteomics includes the analysis of protein expression at a larger scale. It helps identify main proteins in a particular sample, and those proteins differentially expressed in related samples—such as diseased vs. healthy tissue. If a protein is found only in a diseased sample then it can be a useful drug target or diagnostic marker. Proteins with the same or similar expression profiles may also be functionally related. There are technologies such as 2D-PAGE and mass spectrometry that are used in expression proteomics. Biomarkers The National Institutes of Health has defined a biomarker as "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention." Understanding the proteome, the structure and function of each protein and the complexities of protein–protein interactions are critical for developing the most effective diagnostic techniques and disease treatments in the future. For example, proteomics is highly useful in the identification of candidate biomarkers (proteins in body fluids that are of value for diagnosis), identification of the bacterial antigens that are targeted by the immune response, and identification of possible immunohistochemistry markers of infectious or neoplastic diseases. An interesting use of proteomics is using specific protein biomarkers to diagnose disease. A number of techniques allow to test for proteins produced during a particular disease, which helps to diagnose the disease quickly. Techniques include western blot, immunohistochemical staining, enzyme linked immunosorbent assay (ELISA) or mass spectrometry. Secretomics, a subfield of proteomics that studies secreted proteins and secretion pathways using proteomic approaches, has recently emerged as an important tool for the discovery of biomarkers of disease. Proteogenomics In proteogenomics, proteomic technologies such as mass spectrometry are used for improving gene annotations. Parallel analysis of the genome and the proteome facilitates discovery of post-translational modifications and proteolytic events, especially when comparing multiple species (comparative proteogenomics). Structural proteomics Structural proteomics includes the analysis of protein structures at large-scale. It compares protein structures and helps identify functions of newly discovered genes. The structural analysis also helps to understand that where drugs bind to proteins and also shows where proteins interact with each other. 
This understanding is achieved using different technologies such as X-ray crystallography and NMR spectroscopy. Bioinformatics for proteomics (proteome informatics) Much proteomics data is collected with the help of high throughput technologies such as mass spectrometry and microarray. It would often take weeks or months to analyze the data and perform comparisons by hand. For this reason, biologists and chemists are collaborating with computer scientists and mathematicians to create programs and pipeline to computationally analyze the protein data. Using bioinformatics techniques, researchers are capable of faster analysis and data storage. A good place to find lists of current programs and databases is on the ExPASy bioinformatics resource portal. The applications of bioinformatics-based proteomics include medicine, disease diagnosis, biomarker identification, and many more. Protein identification Mass spectrometry and microarray produce peptide fragmentation information but do not give identification of specific proteins present in the original sample. Due to the lack of specific protein identification, past researchers were forced to decipher the peptide fragments themselves. However, there are currently programs available for protein identification. These programs take the peptide sequences output from mass spectrometry and microarray and return information about matching or similar proteins. This is done through algorithms implemented by the program which perform alignments with proteins from known databases such as UniProt and PROSITE to predict what proteins are in the sample with a degree of certainty. Protein structure The biomolecular structure forms the 3D configuration of the protein. Understanding the protein's structure aids in the identification of the protein's interactions and function. It used to be that the 3D structure of proteins could only be determined using X-ray crystallography and NMR spectroscopy. As of 2017, Cryo-electron microscopy is a leading technique, solving difficulties with crystallization (in X-ray crystallography) and conformational ambiguity (in NMR); resolution was 2.2Å as of 2015. Now, through bioinformatics, there are computer programs that can in some cases predict and model the structure of proteins. These programs use the chemical properties of amino acids and structural properties of known proteins to predict the 3D model of sample proteins. This also allows scientists to model protein interactions on a larger scale. In addition, biomedical engineers are developing methods to factor in the flexibility of protein structures to make comparisons and predictions. Post-translational modifications Most programs available for protein analysis are not written for proteins that have undergone post-translational modifications. Some programs will accept post-translational modifications to aid in protein identification but then ignore the modification during further protein analysis. It is important to account for these modifications since they can affect the protein's structure. In turn, computational analysis of post-translational modifications has gained the attention of the scientific community. The current post-translational modification programs are only predictive. Chemists, biologists and computer scientists are working together to create and introduce new pipelines that allow for analysis of post-translational modifications that have been experimentally identified for their effect on the protein's structure and function. 
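A deliberately simplified sketch of the database-matching idea is shown below: observed peptide sequences are looked up in a small protein database by exact substring matching. Real search engines score spectra against theoretical peptides and report statistical confidence when querying resources such as UniProt; the database entries and peptides here are invented.

```python
def match_peptides(peptides, database):
    """Map each observed peptide sequence to the database proteins containing it.
    This toy version does exact substring matching only."""
    hits = {}
    for peptide in peptides:
        hits[peptide] = [name for name, seq in database.items() if peptide in seq]
    return hits

# Hypothetical protein database and peptides inferred from MS/MS spectra
database = {
    "PROT_X": "MKLVVVGAGGVGKSALTIQLIQNHFVDE",
    "PROT_Y": "MTEYKLVVVGAGGVGKSA",
    "PROT_Z": "MSDNGPQNQRNAPRITFGGPSDSTGSNQ",
}
peptides = ["LVVVGAGGVGK", "NAPRITFGGPS", "AAAAAAA"]
for pep, proteins in match_peptides(peptides, database).items():
    print(f"{pep}: {proteins or 'no match'}")
```

The first peptide matches two proteins, illustrating the protein-inference problem: shared peptides cannot by themselves distinguish which of the matching proteins was actually present.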
Computational methods in studying protein biomarkers One example of the use of bioinformatics and the use of computational methods is the study of protein biomarkers. Computational predictive models have shown that extensive and diverse feto-maternal protein trafficking occurs during pregnancy and can be readily detected non-invasively in maternal whole blood. This computational approach circumvented a major limitation, the abundance of maternal proteins interfering with the detection of fetal proteins, to fetal proteomic analysis of maternal blood. Computational models can use fetal gene transcripts previously identified in maternal whole blood to create a comprehensive proteomic network of the term neonate. Such work shows that the fetal proteins detected in pregnant woman's blood originate from a diverse group of tissues and organs from the developing fetus. The proteomic networks contain many biomarkers that are proxies for development and illustrate the potential clinical application of this technology as a way to monitor normal and abnormal fetal development. An information-theoretic framework has also been introduced for biomarker discovery, integrating biofluid and tissue information. This new approach takes advantage of functional synergy between certain biofluids and tissues with the potential for clinically significant findings not possible if tissues and biofluids were considered individually. By conceptualizing tissue-biofluid as information channels, significant biofluid proxies can be identified and then used for the guided development of clinical diagnostics. Candidate biomarkers are then predicted based on information transfer criteria across the tissue-biofluid channels. Significant biofluid-tissue relationships can be used to prioritize clinical validation of biomarkers. Emerging trends A number of emerging concepts have the potential to improve the current features of proteomics. Obtaining absolute quantification of proteins and monitoring post-translational modifications are the two tasks that impact the understanding of protein function in healthy and diseased cells. Further, the throughput and sensitivity of proteomic assays, often measured as samples analyzed per day and depth of proteome coverage, respectively, have driven development of cutting-edge instrumentation and methodologies. For many cellular events, the protein concentrations do not change; rather, their function is modulated by post-translational modifications (PTM). Methods of monitoring PTM are an underdeveloped area in proteomics. Selecting a particular subset of protein for analysis substantially reduces protein complexity, making it advantageous for diagnostic purposes where blood is the starting material. Another important aspect of proteomics, yet not addressed, is that proteomics methods should focus on studying proteins in the context of the environment. The increasing use of chemical cross-linkers, introduced into living cells to fix protein-protein, protein-DNA and other interactions, may ameliorate this problem partially. The challenge is to identify suitable methods of preserving relevant interactions. Another goal for studying proteins is development of more sophisticated methods to image proteins and other molecules in living cells and real-time. Systems biology Advances in quantitative proteomics would clearly enable more in-depth analysis of cellular systems. 
Another research frontier is the analysis of single cells, and of protein covariation across single cells, which reflects biological processes such as protein complex formation, immune function, the cell cycle, and the priming of cancer cells for drug resistance. Biological systems are subject to a variety of perturbations (cell cycle, cellular differentiation, carcinogenesis, environment (biophysical), etc.). Transcriptional and translational responses to these perturbations result in functional changes to the proteome implicated in response to the stimulus. Therefore, describing and quantifying proteome-wide changes in protein abundance is crucial towards understanding biological phenomena more holistically, on the level of the entire system. In this way, proteomics can be seen as complementary to genomics, transcriptomics, epigenomics, metabolomics, and other -omics approaches in integrative analyses attempting to define biological phenotypes more comprehensively. As an example, The Cancer Proteome Atlas provides quantitative protein expression data for ~200 proteins in over 4,000 tumor samples with matched transcriptomic and genomic data from The Cancer Genome Atlas. Similar datasets in other cell types, tissue types, and species, particularly using deep shotgun mass spectrometry, will be an immensely important resource for research in fields like cancer biology, developmental and stem cell biology, medicine, and evolutionary biology. Human plasma proteome Characterizing the human plasma proteome has become a major goal in the proteomics arena, but it is also the most challenging proteome of all human tissues. It contains immunoglobulins, cytokines, protein hormones, and secreted proteins indicative of infection, on top of resident hemostatic proteins. It also contains tissue leakage proteins, owing to the circulation of blood through different tissues in the body. The blood thus contains information on the physiological state of all tissues, and this, combined with its accessibility, makes the blood proteome invaluable for medical purposes. Characterizing the proteome of blood plasma is nonetheless a daunting challenge: its dynamic range spans more than ten orders of magnitude between the most abundant protein (albumin) and the least abundant (some cytokines), which is thought to be one of the main challenges for proteomics. Temporal and spatial dynamics further complicate the study of the human plasma proteome. The turnover of some proteins is much faster than that of others, and the protein content of an artery may differ substantially from that of a vein. All these differences make even the simplest proteomic task of cataloging the proteome seem out of reach. To tackle this problem, priorities need to be established. Capturing the most meaningful subset of proteins among the entire proteome to generate a diagnostic tool is one such priority. Secondly, since cancer is associated with enhanced glycosylation of proteins, methods that focus on this part of the proteome will also be useful. Again, multiparameter analysis best reveals a pathological state. As these technologies improve, the disease profiles should be continually related to the respective gene expression changes. Because of the above-mentioned problems, plasma proteomics has remained challenging. However, technological advancements and continuous developments seem to be resulting in a revival of plasma proteomics, as shown recently by a technology called plasma proteome profiling.
Such technologies have allowed researchers to investigate inflammation processes in mice and the heritability of plasma proteomes, as well as to show the effect of a common lifestyle change such as weight loss on the plasma proteome. Journals Numerous journals are dedicated to the field of proteomics and related areas. Note that journals dealing with proteins are usually more focused on structure and function, while proteomics journals are more focused on the large-scale analysis of whole proteomes or at least large sets of proteins. Some relevant proteomics journals are listed below (with their publishers): Molecular and Cellular Proteomics (ASBMB), Journal of Proteome Research (ACS), Journal of Proteomics (Elsevier), and Proteomics (Wiley).
Newton's laws of motion
Newton's laws of motion are three physical laws that describe the relationship between the motion of an object and the forces acting on it. These laws, which provide the basis for Newtonian mechanics, can be paraphrased as follows: A body remains at rest, or in motion at a constant speed in a straight line, except insofar as it is acted upon by a force. At any instant of time, the net force on a body is equal to the body's acceleration multiplied by its mass or, equivalently, the rate at which the body's momentum is changing with time. If two bodies exert forces on each other, these forces have the same magnitude but opposite directions. The three laws of motion were first stated by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), originally published in 1687. Newton used them to investigate and explain the motion of many physical objects and systems. In the time since Newton, new insights, especially around the concept of energy, built the field of classical mechanics on his foundations. Limitations to Newton's laws have also been discovered; new theories are necessary when objects move at very high speeds (special relativity), are very massive (general relativity), or are very small (quantum mechanics). Prerequisites Newton's laws are often stated in terms of point or particle masses, that is, bodies whose volume is negligible. This is a reasonable approximation for real bodies when the motion of internal parts can be neglected, and when the separation between bodies is much larger than the size of each. For instance, the Earth and the Sun can both be approximated as pointlike when considering the orbit of the former around the latter, but the Earth is not pointlike when considering activities on its surface. The mathematical description of motion, or kinematics, is based on the idea of specifying positions using numerical coordinates. Movement is represented by these numbers changing over time: a body's trajectory is represented by a function that assigns to each value of a time variable the values of all the position coordinates. The simplest case is one-dimensional, that is, when a body is constrained to move only along a straight line. Its position can then be given by a single number, indicating where it is relative to some chosen reference point. For example, a body might be free to slide along a track that runs left to right, and so its location can be specified by its distance from a convenient zero point, or origin, with negative numbers indicating positions to the left and positive numbers indicating positions to the right. If the body's location as a function of time is , then its average velocity over the time interval from to is Here, the Greek letter (delta) is used, per tradition, to mean "change in". A positive average velocity means that the position coordinate increases over the interval in question, a negative average velocity indicates a net decrease over that interval, and an average velocity of zero means that the body ends the time interval in the same place as it began. Calculus gives the means to define an instantaneous velocity, a measure of a body's speed and direction of movement at a single moment of time, rather than over an interval. One notation for the instantaneous velocity is to replace with the symbol , for example,This denotes that the instantaneous velocity is the derivative of the position with respect to time. 
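Writing the position coordinate as x(t), these definitions take the standard textbook form; the notation below is supplied for clarity and is not reproduced from the original text.

```latex
% Average velocity over the interval from t_0 to t_1
\bar{v} = \frac{\Delta x}{\Delta t} = \frac{x(t_1) - x(t_0)}{t_1 - t_0}

% Instantaneous velocity: the derivative of position with respect to time
v(t) = \frac{\mathrm{d}x}{\mathrm{d}t}
     = \lim_{\Delta t \to 0} \frac{x(t + \Delta t) - x(t)}{\Delta t}
```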
It can roughly be thought of as the ratio between an infinitesimally small change in position to the infinitesimally small time interval over which it occurs. More carefully, the velocity and all other derivatives can be defined using the concept of a limit. A function has a limit of at a given input value if the difference between and can be made arbitrarily small by choosing an input sufficiently close to . One writes, Instantaneous velocity can be defined as the limit of the average velocity as the time interval shrinks to zero: Acceleration is to velocity as velocity is to position: it is the derivative of the velocity with respect to time. Acceleration can likewise be defined as a limit:Consequently, the acceleration is the second derivative of position, often written . Position, when thought of as a displacement from an origin point, is a vector: a quantity with both magnitude and direction. Velocity and acceleration are vector quantities as well. The mathematical tools of vector algebra provide the means to describe motion in two, three or more dimensions. Vectors are often denoted with an arrow, as in , or in bold typeface, such as . Often, vectors are represented visually as arrows, with the direction of the vector being the direction of the arrow, and the magnitude of the vector indicated by the length of the arrow. Numerically, a vector can be represented as a list; for example, a body's velocity vector might be , indicating that it is moving at 3 metres per second along the horizontal axis and 4 metres per second along the vertical axis. The same motion described in a different coordinate system will be represented by different numbers, and vector algebra can be used to translate between these alternatives. The study of mechanics is complicated by the fact that household words like energy are used with a technical meaning. Moreover, words which are synonymous in everyday speech are not so in physics: force is not the same as power or pressure, for example, and mass has a different meaning than weight. The physics concept of force makes quantitative the everyday idea of a push or a pull. Forces in Newtonian mechanics are often due to strings and ropes, friction, muscle effort, gravity, and so forth. Like displacement, velocity, and acceleration, force is a vector quantity. Laws First law Translated from Latin, Newton's first law reads, Every object perseveres in its state of rest, or of uniform motion in a right line, except insofar as it is compelled to change that state by forces impressed thereon. Newton's first law expresses the principle of inertia: the natural behavior of a body is to move in a straight line at constant speed. A body's motion preserves the status quo, but external forces can perturb this. The modern understanding of Newton's first law is that no inertial observer is privileged over any other. The concept of an inertial observer makes quantitative the everyday idea of feeling no effects of motion. For example, a person standing on the ground watching a train go past is an inertial observer. If the observer on the ground sees the train moving smoothly in a straight line at a constant speed, then a passenger sitting on the train will also be an inertial observer: the train passenger feels no motion. The principle expressed by Newton's first law is that there is no way to say which inertial observer is "really" moving and which is "really" standing still. 
One observer's state of rest is another observer's state of uniform motion in a straight line, and no experiment can deem either point of view to be correct or incorrect. There is no absolute standard of rest. Newton himself believed that absolute space and time existed, but that the only measures of space or time accessible to experiment are relative. Second law The change of motion of an object is proportional to the force impressed; and is made in the direction of the straight line in which the force is impressed. By "motion", Newton meant the quantity now called momentum, which depends upon the amount of matter contained in a body, the speed at which that body is moving, and the direction in which it is moving. In modern notation, the momentum of a body is the product of its mass and its velocity: where all three quantities can change over time. Newton's second law, in modern form, states that the time derivative of the momentum is the force: If the mass does not change with time, then the derivative acts only upon the velocity, and so the force equals the product of the mass and the time derivative of the velocity, which is the acceleration: As the acceleration is the second derivative of position with respect to time, this can also be written The forces acting on a body add as vectors, and so the total force on a body depends upon both the magnitudes and the directions of the individual forces. When the net force on a body is equal to zero, then by Newton's second law, the body does not accelerate, and it is said to be in mechanical equilibrium. A state of mechanical equilibrium is stable if, when the position of the body is changed slightly, the body remains near that equilibrium. Otherwise, the equilibrium is unstable. A common visual representation of forces acting in concert is the free body diagram, which schematically portrays a body of interest and the forces applied to it by outside influences. For example, a free body diagram of a block sitting upon an inclined plane can illustrate the combination of gravitational force, "normal" force, friction, and string tension. Newton's second law is sometimes presented as a definition of force, i.e., a force is that which exists when an inertial observer sees a body accelerating. In order for this to be more than a tautology — acceleration implies force, force implies acceleration — some other statement about force must also be made. For example, an equation detailing the force might be specified, like Newton's law of universal gravitation. By inserting such an expression for into Newton's second law, an equation with predictive power can be written. Newton's second law has also been regarded as setting out a research program for physics, establishing that important goals of the subject are to identify the forces present in nature and to catalogue the constituents of matter. Third law To every action, there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal, and directed to contrary parts. Overly brief paraphrases of the third law, like "action equals reaction" might have caused confusion among generations of students: the "action" and "reaction" apply to different bodies. For example, consider a book at rest on a table. The Earth's gravity pulls down upon the book. The "reaction" to that "action" is not the support force from the table holding up the book, but the gravitational pull of the book acting on the Earth. 
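As a minimal numerical illustration of the second law in the constant-mass form given above, with forces added as vectors and the net force divided by the mass, the sketch below uses invented numbers rather than figures from the text.

# Illustrative sketch: net force and acceleration for a constant-mass body.
# The individual forces and the mass are made-up example values.

forces = [(10.0, 0.0), (-4.0, 0.0), (0.0, -19.6)]   # forces in newtons (x, y components)
mass = 2.0                                           # kilograms

net_force = (sum(f[0] for f in forces), sum(f[1] for f in forces))
acceleration = (net_force[0] / mass, net_force[1] / mass)

print("net force:", net_force, "N")            # (6.0, -19.6) N
print("acceleration:", acceleration, "m/s^2")  # (3.0, -9.8) m/s^2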
Newton's third law relates to a more fundamental principle, the conservation of momentum. The latter remains true even in cases where Newton's statement does not, for instance when force fields as well as material bodies carry momentum, and when momentum is defined properly, in quantum mechanics as well. In Newtonian mechanics, if two bodies have momenta and respectively, then the total momentum of the pair is , and the rate of change of is By Newton's second law, the first term is the total force upon the first body, and the second term is the total force upon the second body. If the two bodies are isolated from outside influences, the only force upon the first body can be that from the second, and vice versa. By Newton's third law, these forces have equal magnitude but opposite direction, so they cancel when added, and is constant. Alternatively, if is known to be constant, it follows that the forces have equal magnitude and opposite direction. Candidates for additional laws Various sources have proposed elevating other ideas used in classical mechanics to the status of Newton's laws. For example, in Newtonian mechanics, the total mass of a body made by bringing together two smaller bodies is the sum of their individual masses. Frank Wilczek has suggested calling attention to this assumption by designating it "Newton's Zeroth Law". Another candidate for a "zeroth law" is the fact that at any instant, a body reacts to the forces applied to it at that instant. Likewise, the idea that forces add like vectors (or in other words obey the superposition principle), and the idea that forces change the energy of a body, have both been described as a "fourth law". Moreover, some texts organize the basic ideas of Newtonian mechanics into different postulates, other than the three laws as commonly phrased, with the goal of being more clear about what is empirically observed and what is true by definition. Examples The study of the behavior of massive bodies using Newton's laws is known as Newtonian mechanics. Some example problems in Newtonian mechanics are particularly noteworthy for conceptual or historical reasons. Uniformly accelerated motion If a body falls from rest near the surface of the Earth, then in the absence of air resistance, it will accelerate at a constant rate. This is known as free fall. The speed attained during free fall is proportional to the elapsed time, and the distance traveled is proportional to the square of the elapsed time. Importantly, the acceleration is the same for all bodies, independently of their mass. This follows from combining Newton's second law of motion with his law of universal gravitation. The latter states that the magnitude of the gravitational force from the Earth upon the body is where is the mass of the falling body, is the mass of the Earth, is Newton's constant, and is the distance from the center of the Earth to the body's location, which is very nearly the radius of the Earth. Setting this equal to , the body's mass cancels from both sides of the equation, leaving an acceleration that depends upon , , and , and can be taken to be constant. This particular value of acceleration is typically denoted : If the body is not released from rest but instead launched upwards and/or horizontally with nonzero velocity, then free fall becomes projectile motion. When air resistance can be neglected, projectiles follow parabola-shaped trajectories, because gravity affects the body's vertical motion and not its horizontal. 
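The cancellation of the falling body's mass can be checked with a rough numerical sketch. The values used below for the gravitational constant and for the Earth's mass and radius are commonly quoted approximations assumed for this example, not figures from the text.

# Illustrative sketch: g = G * M_earth / r**2, independent of the falling body's mass.
# The constants below are approximate values assumed for this example.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
r = 6.371e6          # mean radius of the Earth, m

g = G * M_earth / r**2
print(f"g is about {g:.2f} m/s^2")   # roughly 9.8 m/s^2 for any body, regardless of its mass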
At the peak of the projectile's trajectory, its vertical velocity is zero, but its acceleration is downwards, as it is at all times. Setting the wrong vector equal to zero is a common confusion among physics students. Uniform circular motion When a body is in uniform circular motion, the force on it changes the direction of its motion but not its speed. For a body moving in a circle of radius at a constant speed , its acceleration has a magnitudeand is directed toward the center of the circle. The force required to sustain this acceleration, called the centripetal force, is therefore also directed toward the center of the circle and has magnitude . Many orbits, such as that of the Moon around the Earth, can be approximated by uniform circular motion. In such cases, the centripetal force is gravity, and by Newton's law of universal gravitation has magnitude , where is the mass of the larger body being orbited. Therefore, the mass of a body can be calculated from observations of another body orbiting around it. Newton's cannonball is a thought experiment that interpolates between projectile motion and uniform circular motion. A cannonball that is lobbed weakly off the edge of a tall cliff will hit the ground in the same amount of time as if it were dropped from rest, because the force of gravity only affects the cannonball's momentum in the downward direction, and its effect is not diminished by horizontal movement. If the cannonball is launched with a greater initial horizontal velocity, then it will travel farther before it hits the ground, but it will still hit the ground in the same amount of time. However, if the cannonball is launched with an even larger initial velocity, then the curvature of the Earth becomes significant: the ground itself will curve away from the falling cannonball. A very fast cannonball will fall away from the inertial straight-line trajectory at the same rate that the Earth curves away beneath it; in other words, it will be in orbit (imagining that it is not slowed by air resistance or obstacles). Harmonic motion Consider a body of mass able to move along the axis, and suppose an equilibrium point exists at the position . That is, at , the net force upon the body is the zero vector, and by Newton's second law, the body will not accelerate. If the force upon the body is proportional to the displacement from the equilibrium point, and directed to the equilibrium point, then the body will perform simple harmonic motion. Writing the force as , Newton's second law becomes This differential equation has the solution where the frequency is equal to , and the constants and can be calculated knowing, for example, the position and velocity the body has at a given time, like . One reason that the harmonic oscillator is a conceptually important example is that it is good approximation for many systems near a stable mechanical equilibrium. For example, a pendulum has a stable equilibrium in the vertical position: if motionless there, it will remain there, and if pushed slightly, it will swing back and forth. Neglecting air resistance and friction in the pivot, the force upon the pendulum is gravity, and Newton's second law becomes where is the length of the pendulum and is its angle from the vertical. When the angle is small, the sine of is nearly equal to (see small-angle approximation), and so this expression simplifies to the equation for a simple harmonic oscillator with frequency . 
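A short numerical check of the simple harmonic oscillator described above: stepping Newton's second law forward in small time increments reproduces the oscillatory solution with angular frequency sqrt(k/m). The mass, spring constant, time step, and initial conditions below are arbitrary example values.

import math

# Illustrative sketch: step m * x'' = -k * x forward in small time increments
# (semi-implicit Euler) and compare with the exact cosine solution.
# Mass, spring constant, time step, and initial conditions are made-up values.

m, k = 1.0, 4.0              # kg and N/m, giving angular frequency omega = 2 rad/s
x, v = 1.0, 0.0              # released from rest, displaced 1 m
dt = 0.001                   # time step in seconds

for _ in range(3000):        # 3 seconds of simulated motion
    a = -k * x / m           # acceleration from the spring force
    v += a * dt
    x += v * dt

omega = math.sqrt(k / m)
print(f"numerical x(3 s) = {x:.4f} m,  exact cosine solution = {math.cos(omega * 3.0):.4f} m")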
A harmonic oscillator can be damped, often by friction or viscous drag, in which case energy bleeds out of the oscillator and the amplitude of the oscillations decreases over time. Also, a harmonic oscillator can be driven by an applied force, which can lead to the phenomenon of resonance. Objects with variable mass Newtonian physics treats matter as being neither created nor destroyed, though it may be rearranged. It can be the case that an object of interest gains or loses mass because matter is added to or removed from it. In such a situation, Newton's laws can be applied to the individual pieces of matter, keeping track of which pieces belong to the object of interest over time. For instance, if a rocket of mass , moving at velocity , ejects matter at a velocity relative to the rocket, then where is the net external force (e.g., a planet's gravitational pull). Work and energy The concept of energy was developed after Newton's time, but it has become an inseparable part of what is considered "Newtonian" physics. Energy can broadly be classified into kinetic, due to a body's motion, and potential, due to a body's position relative to others. Thermal energy, the energy carried by heat flow, is a type of kinetic energy not associated with the macroscopic motion of objects but instead with the movements of the atoms and molecules of which they are made. According to the work-energy theorem, when a force acts upon a body while that body moves along the line of the force, the force does work upon the body, and the amount of work done is equal to the change in the body's kinetic energy. In many cases of interest, the net work done by a force when a body moves in a closed loop — starting at a point, moving along some trajectory, and returning to the initial point — is zero. If this is the case, then the force can be written in terms of the gradient of a function called a scalar potential: This is true for many forces including that of gravity, but not for friction; indeed, almost any problem in a mechanics textbook that does not involve friction can be expressed in this way. The fact that the force can be written in this way can be understood from the conservation of energy. Without friction to dissipate a body's energy into heat, the body's energy will trade between potential and (non-thermal) kinetic forms while the total amount remains constant. Any gain of kinetic energy, which occurs when the net force on the body accelerates it to a higher speed, must be accompanied by a loss of potential energy. So, the net force upon the body is determined by the manner in which the potential energy decreases. Rigid-body motion and rotation A rigid body is an object whose size is too large to neglect and which maintains the same shape over time. In Newtonian mechanics, the motion of a rigid body is often understood by separating it into movement of the body's center of mass and movement around the center of mass. Center of mass Significant aspects of the motion of an extended body can be understood by imagining the mass of that body concentrated to a single point, known as the center of mass. The location of a body's center of mass depends upon how that body's material is distributed. For a collection of pointlike objects with masses at positions , the center of mass is located at where is the total mass of the collection. In the absence of a net external force, the center of mass moves at a constant speed in a straight line. This applies, for example, to a collision between two bodies. 
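The center-of-mass formula just described can be applied directly. In the sketch below, three pointlike masses at made-up positions are combined into a single mass-weighted average position.

# Illustrative sketch: center of mass of pointlike objects as a mass-weighted
# average of their positions. Masses and coordinates are made-up example values.

masses = [2.0, 1.0, 3.0]                          # kilograms
positions = [(0.0, 0.0), (4.0, 0.0), (0.0, 2.0)]  # metres (x, y)

M = sum(masses)
x_cm = sum(m * p[0] for m, p in zip(masses, positions)) / M
y_cm = sum(m * p[1] for m, p in zip(masses, positions)) / M

print(f"total mass M = {M} kg, center of mass at ({x_cm:.3f}, {y_cm:.3f}) m")
# gives a center of mass at (0.667, 1.000) m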
If the total external force is not zero, then the center of mass changes velocity as though it were a point body of mass . This follows from the fact that the internal forces within the collection, the forces that the objects exert upon each other, occur in balanced pairs by Newton's third law. In a system of two bodies with one much more massive than the other, the center of mass will approximately coincide with the location of the more massive body. Rotational analogues of Newton's laws When Newton's laws are applied to rotating extended bodies, they lead to new quantities that are analogous to those invoked in the original laws. The analogue of mass is the moment of inertia, the counterpart of momentum is angular momentum, and the counterpart of force is torque. Angular momentum is calculated with respect to a reference point. If the displacement vector from a reference point to a body is and the body has momentum , then the body's angular momentum with respect to that point is, using the vector cross product, Taking the time derivative of the angular momentum gives The first term vanishes because and point in the same direction. The remaining term is the torque, When the torque is zero, the angular momentum is constant, just as when the force is zero, the momentum is constant. The torque can vanish even when the force is non-zero, if the body is located at the reference point () or if the force and the displacement vector are directed along the same line. The angular momentum of a collection of point masses, and thus of an extended body, is found by adding the contributions from each of the points. This provides a means to characterize a body's rotation about an axis, by adding up the angular momenta of its individual pieces. The result depends on the chosen axis, the shape of the body, and the rate of rotation. Multi-body gravitational system Newton's law of universal gravitation states that any body attracts any other body along the straight line connecting them. The size of the attracting force is proportional to the product of their masses, and inversely proportional to the square of the distance between them. Finding the shape of the orbits that an inverse-square force law will produce is known as the Kepler problem. The Kepler problem can be solved in multiple ways, including by demonstrating that the Laplace–Runge–Lenz vector is constant, or by applying a duality transformation to a 2-dimensional harmonic oscillator. However it is solved, the result is that orbits will be conic sections, that is, ellipses (including circles), parabolas, or hyperbolas. The eccentricity of the orbit, and thus the type of conic section, is determined by the energy and the angular momentum of the orbiting body. Planets do not have sufficient energy to escape the Sun, and so their orbits are ellipses, to a good approximation; because the planets pull on one another, actual orbits are not exactly conic sections. If a third mass is added, the Kepler problem becomes the three-body problem, which in general has no exact solution in closed form. That is, there is no way to start from the differential equations implied by Newton's laws and, after a finite sequence of standard mathematical operations, obtain equations that express the three bodies' motions over time. Numerical methods can be applied to obtain useful, albeit approximate, results for the three-body problem. 
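The following paragraph describes the general time-stepping idea in words; the sketch below is a crude illustrative implementation for just two gravitating bodies, with made-up masses, positions, and velocities, the gravitational constant set to 1, and a simple Euler-style step rather than any particular production method.

import math

# Illustrative sketch of step-by-step numerical integration for gravitating bodies,
# as described in the following paragraph. Two bodies, crude time steps, and
# made-up masses and initial conditions; G is set to 1 for simplicity.

G = 1.0
m1, m2 = 1.0, 0.001                       # a heavy body and a light one
r1, v1 = [0.0, 0.0], [0.0, 0.0]           # heavy body starts at rest at the origin
r2, v2 = [1.0, 0.0], [0.0, 1.0]           # light body starts on a roughly circular orbit

dt = 0.0001
for _ in range(100000):                   # total simulated time = 10 units
    dx, dy = r2[0] - r1[0], r2[1] - r1[1]
    dist = math.hypot(dx, dy)
    # gravitational force on body 1 due to body 2 (equal and opposite on body 2)
    fx = G * m1 * m2 * dx / dist**3
    fy = G * m1 * m2 * dy / dist**3
    v1[0] += fx / m1 * dt;  v1[1] += fy / m1 * dt
    v2[0] -= fx / m2 * dt;  v2[1] -= fy / m2 * dt
    r1[0] += v1[0] * dt;    r1[1] += v1[1] * dt
    r2[0] += v2[0] * dt;    r2[1] += v2[1] * dt

print("final separation:", round(math.hypot(r2[0] - r1[0], r2[1] - r1[1]), 3))
# For a small enough time step the separation stays close to 1, a nearly circular orbit.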
The positions and velocities of the bodies can be stored in variables within a computer's memory; Newton's laws are used to calculate how the velocities will change over a short interval of time, and knowing the velocities, the changes of position over that time interval can be computed. This process is looped to calculate, approximately, the bodies' trajectories. Generally speaking, the shorter the time interval, the more accurate the approximation. Chaos and unpredictability Nonlinear dynamics Newton's laws of motion allow the possibility of chaos. That is, qualitatively speaking, physical systems obeying Newton's laws can exhibit sensitive dependence upon their initial conditions: a slight change of the position or velocity of one part of a system can lead to the whole system behaving in a radically different way within a short time. Noteworthy examples include the three-body problem, the double pendulum, dynamical billiards, and the Fermi–Pasta–Ulam–Tsingou problem. Newton's laws can be applied to fluids by considering a fluid as composed of infinitesimal pieces, each exerting forces upon neighboring pieces. The Euler momentum equation is an expression of Newton's second law adapted to fluid dynamics. A fluid is described by a velocity field, i.e., a function that assigns a velocity vector to each point in space and time. A small object being carried along by the fluid flow can change velocity for two reasons: first, because the velocity field at its position is changing over time, and second, because it moves to a new location where the velocity field has a different value. Consequently, when Newton's second law is applied to an infinitesimal portion of fluid, the acceleration has two terms, a combination known as a total or material derivative. The mass of an infinitesimal portion depends upon the fluid density, and there is a net force upon it if the fluid pressure varies from one side of it to another. Accordingly, becomes where is the density, is the pressure, and stands for an external influence like a gravitational pull. Incorporating the effect of viscosity turns the Euler equation into a Navier–Stokes equation: where is the kinematic viscosity. Singularities It is mathematically possible for a collection of point masses, moving in accord with Newton's laws, to launch some of themselves away so forcefully that they fly off to infinity in a finite time. This unphysical behavior, known as a "noncollision singularity", depends upon the masses being pointlike and able to approach one another arbitrarily closely, as well as the lack of a relativistic speed limit in Newtonian physics. It is not yet known whether or not the Euler and Navier–Stokes equations exhibit the analogous behavior of initially smooth solutions "blowing up" in finite time. The question of existence and smoothness of Navier–Stokes solutions is one of the Millennium Prize Problems. Relation to other formulations of classical physics Classical mechanics can be mathematically formulated in multiple different ways, other than the "Newtonian" description (which itself, of course, incorporates contributions from others both before and after Newton). The physical content of these different formulations is the same as the Newtonian, but they provide different insights and facilitate different types of calculations. 
For example, Lagrangian mechanics helps make apparent the connection between symmetries and conservation laws, and it is useful when calculating the motion of constrained bodies, like a mass restricted to move along a curving track or on the surface of a sphere. Hamiltonian mechanics is convenient for statistical physics, leads to further insight about symmetry, and can be developed into sophisticated techniques for perturbation theory. Due to the breadth of these topics, the discussion here will be confined to concise treatments of how they reformulate Newton's laws of motion. Lagrangian Lagrangian mechanics differs from the Newtonian formulation by considering entire trajectories at once rather than predicting a body's motion at a single instant. It is traditional in Lagrangian mechanics to denote position with and velocity with . The simplest example is a massive point particle, the Lagrangian for which can be written as the difference between its kinetic and potential energies: where the kinetic energy is and the potential energy is some function of the position, . The physical path that the particle will take between an initial point and a final point is the path for which the integral of the Lagrangian is "stationary". That is, the physical path has the property that small perturbations of it will, to a first approximation, not change the integral of the Lagrangian. Calculus of variations provides the mathematical tools for finding this path. Applying the calculus of variations to the task of finding the path yields the Euler–Lagrange equation for the particle, Evaluating the partial derivatives of the Lagrangian gives which is a restatement of Newton's second law. The left-hand side is the time derivative of the momentum, and the right-hand side is the force, represented in terms of the potential energy. Landau and Lifshitz argue that the Lagrangian formulation makes the conceptual content of classical mechanics more clear than starting with Newton's laws. Lagrangian mechanics provides a convenient framework in which to prove Noether's theorem, which relates symmetries and conservation laws. The conservation of momentum can be derived by applying Noether's theorem to a Lagrangian for a multi-particle system, and so, Newton's third law is a theorem rather than an assumption. Hamiltonian In Hamiltonian mechanics, the dynamics of a system are represented by a function called the Hamiltonian, which in many cases of interest is equal to the total energy of the system. The Hamiltonian is a function of the positions and the momenta of all the bodies making up the system, and it may also depend explicitly upon time. The time derivatives of the position and momentum variables are given by partial derivatives of the Hamiltonian, via Hamilton's equations. The simplest example is a point mass constrained to move in a straight line, under the effect of a potential. Writing for the position coordinate and for the body's momentum, the Hamiltonian is In this example, Hamilton's equations are and Evaluating these partial derivatives, the former equation becomes which reproduces the familiar statement that a body's momentum is the product of its mass and velocity. The time derivative of the momentum is which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again. 
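The way Hamilton's equations reproduce Newtonian motion can also be checked numerically. The sketch below evolves the position and momentum of a single mass in a quadratic potential, reusing the same made-up spring values as in the earlier harmonic-motion sketch, by stepping the two first-order equations rather than the single second-order one.

import math

# Illustrative sketch: Hamilton's equations dq/dt = dH/dp and dp/dt = -dH/dq
# for a single mass in a quadratic potential V(q) = k q**2 / 2 (made-up values).
# Evolving q and p as a pair of first-order equations reproduces the same
# oscillation that Newton's second law gives directly.

m, k = 1.0, 4.0
q, p = 1.0, 0.0            # initial position (m) and momentum (kg m/s)
dt = 0.001

for _ in range(3000):      # 3 seconds of simulated time
    p -= (k * q) * dt      # dp/dt = -dH/dq = -k q   (the force)
    q += (p / m) * dt      # dq/dt =  dH/dp =  p / m (the velocity)

omega = math.sqrt(k / m)
print(f"q(3 s) = {q:.4f} m, compared with cos(omega * 3) = {math.cos(omega * 3.0):.4f} m")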
As in the Lagrangian formulation, in Hamiltonian mechanics the conservation of momentum can be derived using Noether's theorem, making Newton's third law an idea that is deduced rather than assumed. Among the proposals to reform the standard introductory-physics curriculum is one that teaches the concept of energy before that of force, essentially "introductory Hamiltonian mechanics". Hamilton–Jacobi The Hamilton–Jacobi equation provides yet another formulation of classical mechanics, one which makes it mathematically analogous to wave optics. This formulation also uses Hamiltonian functions, but in a different way than the formulation described above. The paths taken by bodies or collections of bodies are deduced from a function of positions and time . The Hamiltonian is incorporated into the Hamilton–Jacobi equation, a differential equation for . Bodies move over time in such a way that their trajectories are perpendicular to the surfaces of constant , analogously to how a light ray propagates in the direction perpendicular to its wavefront. This is simplest to express for the case of a single point mass, in which is a function , and the point mass moves in the direction along which changes most steeply. In other words, the momentum of the point mass is the gradient of : The Hamilton–Jacobi equation for a point mass is The relation to Newton's laws can be seen by considering a point mass moving in a time-independent potential , in which case the Hamilton–Jacobi equation becomes Taking the gradient of both sides, this becomes Interchanging the order of the partial derivatives on the left-hand side, and using the power and chain rules on the first term on the right-hand side, Gathering together the terms that depend upon the gradient of , This is another re-expression of Newton's second law. The expression in brackets is a total or material derivative as mentioned above, in which the first term indicates how the function being differentiated changes over time at a fixed location, and the second term captures how a moving particle will see different values of that function as it travels from place to place: Relation to other physical theories Thermodynamics and statistical physics In statistical physics, the kinetic theory of gases applies Newton's laws of motion to large numbers (typically on the order of the Avogadro number) of particles. Kinetic theory can explain, for example, the pressure that a gas exerts upon the container holding it as the aggregate of many impacts of atoms, each imparting a tiny amount of momentum. The Langevin equation is a special case of Newton's second law, adapted for the case of describing a small object bombarded stochastically by even smaller ones. It can be writtenwhere is a drag coefficient and is a force that varies randomly from instant to instant, representing the net effect of collisions with the surrounding particles. This is used to model Brownian motion. Electromagnetism Newton's three laws can be applied to phenomena involving electricity and magnetism, though subtleties and caveats exist. Coulomb's law for the electric force between two stationary, electrically charged bodies has much the same mathematical form as Newton's law of universal gravitation: the force is proportional to the product of the charges, inversely proportional to the square of the distance between them, and directed along the straight line between them. 
The Coulomb force that a charge exerts upon a charge is equal in magnitude to the force that exerts upon , and it points in the exact opposite direction. Coulomb's law is thus consistent with Newton's third law. Electromagnetism treats forces as produced by fields acting upon charges. The Lorentz force law provides an expression for the force upon a charged body that can be plugged into Newton's second law in order to calculate its acceleration. According to the Lorentz force law, a charged body in an electric field experiences a force in the direction of that field, a force proportional to its charge and to the strength of the electric field. In addition, a moving charged body in a magnetic field experiences a force that is also proportional to its charge, in a direction perpendicular to both the field and the body's direction of motion. Using the vector cross product, If the electric field vanishes (), then the force will be perpendicular to the charge's motion, just as in the case of uniform circular motion studied above, and the charge will circle (or more generally move in a helix) around the magnetic field lines at the cyclotron frequency . Mass spectrometry works by applying electric and/or magnetic fields to moving charges and measuring the resulting acceleration, which by the Lorentz force law yields the mass-to-charge ratio. Collections of charged bodies do not always obey Newton's third law: there can be a change of one body's momentum without a compensatory change in the momentum of another. The discrepancy is accounted for by momentum carried by the electromagnetic field itself. The momentum per unit volume of the electromagnetic field is proportional to the Poynting vector. There is subtle conceptual conflict between electromagnetism and Newton's first law: Maxwell's theory of electromagnetism predicts that electromagnetic waves will travel through empty space at a constant, definite speed. Thus, some inertial observers seemingly have a privileged status over the others, namely those who measure the speed of light and find it to be the value predicted by the Maxwell equations. In other words, light provides an absolute standard for speed, yet the principle of inertia holds that there should be no such standard. This tension is resolved in the theory of special relativity, which revises the notions of space and time in such a way that all inertial observers will agree upon the speed of light in vacuum. Special relativity In special relativity, the rule that Wilczek called "Newton's Zeroth Law" breaks down: the mass of a composite object is not merely the sum of the masses of the individual pieces. Newton's first law, inertial motion, remains true. A form of Newton's second law, that force is the rate of change of momentum, also holds, as does the conservation of momentum. However, the definition of momentum is modified. Among the consequences of this is the fact that the more quickly a body moves, the harder it is to accelerate, and so, no matter how much force is applied, a body cannot be accelerated to the speed of light. Depending on the problem at hand, momentum in special relativity can be represented as a three-dimensional vector, , where is the body's rest mass and is the Lorentz factor, which depends upon the body's speed. Alternatively, momentum and force can be represented as four-vectors. Newton's third law must be modified in special relativity. 
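Before turning to the third law, the quantitative difference between relativistic and Newtonian momentum can be made concrete. The sketch below compares m times v with gamma times m times v for a 1 kg body (an arbitrary example mass) at several fractions of the speed of light.

import math

# Illustrative sketch: relativistic momentum gamma * m * v versus the Newtonian m * v.
# The 1 kg mass is an arbitrary example value; c is the speed of light in m/s.

c = 299_792_458.0
m = 1.0

for fraction in (0.1, 0.5, 0.9, 0.99):
    v = fraction * c
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(f"v = {fraction:4.2f} c   gamma = {gamma:6.3f}   "
          f"p_newton = {m * v:.3e}   p_relativistic = {gamma * m * v:.3e} kg m/s")

# As v approaches c, gamma grows without bound, so ever larger forces are needed
# for ever smaller gains in speed.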
The third law refers to the forces between two bodies at the same moment in time, and a key feature of special relativity is that simultaneity is relative. Events that happen at the same time relative to one observer can happen at different times relative to another. So, in a given observer's frame of reference, action and reaction may not be exactly opposite, and the total momentum of interacting bodies may not be conserved. The conservation of momentum is restored by including the momentum stored in the field that describes the bodies' interaction. Newtonian mechanics is a good approximation to special relativity when the speeds involved are small compared to that of light. General relativity General relativity is a theory of gravity that advances beyond that of Newton. In general relativity, the gravitational force of Newtonian mechanics is reimagined as curvature of spacetime. A curved path like an orbit, attributed to a gravitational force in Newtonian mechanics, is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "Spacetime tells matter how to move; matter tells spacetime how to curve." Wheeler himself thought of this reciprocal relationship as a modern, generalized form of Newton's third law. The relation between matter distribution and spacetime curvature is given by the Einstein field equations, which require tensor calculus to express. The Newtonian theory of gravity is a good approximation to the predictions of general relativity when gravitational effects are weak and objects are moving slowly compared to the speed of light. Quantum mechanics Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model will require understanding quantum effects. The conceptual underpinning of quantum physics is very different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object has, one considers what result might appear when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence. The Ehrenfest theorem provides a connection between quantum expectation values and Newton's second law, a connection that is necessarily inexact, as quantum physics is fundamentally different from classical. In quantum physics, position and momentum are represented by mathematical entities known as Hermitian operators, and the Born rule is used to calculate the expectation values of a position measurement or a momentum measurement. These expectation values will generally change over time; that is, depending on the time at which (for example) a position measurement is performed, the probabilities for its different possible outcomes will vary. The Ehrenfest theorem says, roughly speaking, that the equations describing how these expectation values change over time have a form reminiscent of Newton's second law. 
However, the more pronounced quantum effects are in a given situation, the more difficult it is to derive meaningful conclusions from this resemblance. History The concepts invoked in Newton's laws of motion — mass, velocity, momentum, force — have predecessors in earlier work, and the content of Newtonian physics was further developed after Newton's time. Newton combined knowledge of celestial motions with the study of events on Earth and showed that one theory of mechanics could encompass both. As noted by scholar I. Bernard Cohen, Newton's work was more than a mere synthesis of previous results, as he selected certain ideas and further transformed them, with each in a new form that was useful to him, while at the same time proving false of certain basic or fundamental principles of scientists such as Galileo Galilei, Johannes Kepler, René Descartes, and Nicolaus Copernicus. He approached natural philosophy with mathematics in a completely novel way, in that instead of a preconceived natural philosophy, his style was to begin with a mathematical construct, and build on from there, comparing it to the real world to show that his system accurately accounted for it. Antiquity and medieval background Aristotle and "violent" motion The subject of physics is often traced back to Aristotle, but the history of the concepts involved is obscured by multiple factors. An exact correspondence between Aristotelian and modern concepts is not simple to establish: Aristotle did not clearly distinguish what we would call speed and force, used the same term for density and viscosity, and conceived of motion as always through a medium, rather than through space. In addition, some concepts often termed "Aristotelian" might better be attributed to his followers and commentators upon him. These commentators found that Aristotelian physics had difficulty explaining projectile motion. Aristotle divided motion into two types: "natural" and "violent". The "natural" motion of terrestrial solid matter was to fall downwards, whereas a "violent" motion could push a body sideways. Moreover, in Aristotelian physics, a "violent" motion requires an immediate cause; separated from the cause of its "violent" motion, a body would revert to its "natural" behavior. Yet, a javelin continues moving after it leaves the thrower's hand. Aristotle concluded that the air around the javelin must be imparted with the ability to move the javelin forward. Philoponus and impetus John Philoponus, a Byzantine Greek thinker active during the sixth century, found this absurd: the same medium, air, was somehow responsible both for sustaining motion and for impeding it. If Aristotle's idea were true, Philoponus said, armies would launch weapons by blowing upon them with bellows. Philoponus argued that setting a body into motion imparted a quality, impetus, that would be contained within the body itself. As long as its impetus was sustained, the body would continue to move. In the following centuries, versions of impetus theory were advanced by individuals including Nur ad-Din al-Bitruji, Avicenna, Abu'l-Barakāt al-Baghdādī, John Buridan, and Albert of Saxony. In retrospect, the idea of impetus can be seen as a forerunner of the modern concept of momentum. The intuition that objects move according to some kind of impetus persists in many students of introductory physics. 
Inertia and the first law The French philosopher René Descartes introduced the concept of inertia by way of his "laws of nature" in The World (Traité du monde et de la lumière) written 1629–33. However, The World purported a heliocentric worldview, and in 1633 this view had given rise a great conflict between Galileo Galilei and the Roman Catholic Inquisition. Descartes knew about this controversy and did not wish to get involved. The World was not published until 1664, ten years after his death. The modern concept of inertia is credited to Galileo. Based on his experiments, Galileo concluded that the "natural" behavior of a moving body was to keep moving, until something else interfered with it. In Two New Sciences (1638) Galileo wrote:Galileo recognized that in projectile motion, the Earth's gravity affects vertical but not horizontal motion. However, Galileo's idea of inertia was not exactly the one that would be codified into Newton's first law. Galileo thought that a body moving a long distance inertially would follow the curve of the Earth. This idea was corrected by Isaac Beeckman, Descartes, and Pierre Gassendi, who recognized that inertial motion should be motion in a straight line. Descartes published his laws of nature (laws of motion) with this correction in Principles of Philosophy (Principia Philosophiae) in 1644, with the heliocentric part toned down. According to American philosopher Richard J. Blackwell, Dutch scientist Christiaan Huygens had worked out his own, concise version of the law in 1656. It was not published until 1703, eight years after his death, in the opening paragraph of De Motu Corporum ex Percussione. According to Huygens, this law was already known by Galileo and Descartes among others. Force and the second law Christiaan Huygens, in his Horologium Oscillatorium (1673), put forth the hypothesis that "By the action of gravity, whatever its sources, it happens that bodies are moved by a motion composed both of a uniform motion in one direction or another and of a motion downward due to gravity." Newton's second law generalized this hypothesis from gravity to all forces. One important characteristic of Newtonian physics is that forces can act at a distance without requiring physical contact. For example, the Sun and the Earth pull on each other gravitationally, despite being separated by millions of kilometres. This contrasts with the idea, championed by Descartes among others, that the Sun's gravity held planets in orbit by swirling them in a vortex of transparent matter, aether. Newton considered aetherial explanations of force but ultimately rejected them. The study of magnetism by William Gilbert and others created a precedent for thinking of immaterial forces, and unable to find a quantitatively satisfactory explanation of his law of gravity in terms of an aetherial model, Newton eventually declared, "I feign no hypotheses": whether or not a model like Descartes's vortices could be found to underlie the Principia's theories of motion and gravity, the first grounds for judging them must be the successful predictions they made. And indeed, since Newton's time every attempt at such a model has failed. Momentum conservation and the third law Johannes Kepler suggested that gravitational attractions were reciprocal — that, for example, the Moon pulls on the Earth while the Earth pulls on the Moon — but he did not argue that such pairs are equal and opposite. 
In his Principles of Philosophy (1644), Descartes introduced the idea that during a collision between bodies, a "quantity of motion" remains unchanged. Descartes defined this quantity somewhat imprecisely by adding up the products of the speed and "size" of each body, where "size" for him incorporated both volume and surface area. Moreover, Descartes thought of the universe as a plenum, that is, filled with matter, so all motion required a body to displace a medium as it moved. During the 1650s, Huygens studied collisions between hard spheres and deduced a principle that is now identified as the conservation of momentum. Christopher Wren would later deduce the same rules for elastic collisions that Huygens had, and John Wallis would apply momentum conservation to study inelastic collisions. Newton cited the work of Huygens, Wren, and Wallis to support the validity of his third law. Newton arrived at his set of three laws incrementally. In a 1684 manuscript written to Huygens, he listed four laws: the principle of inertia, the change of motion by force, a statement about relative motion that would today be called Galilean invariance, and the rule that interactions between bodies do not change the motion of their center of mass. In a later manuscript, Newton added a law of action and reaction, while saying that this law and the law regarding the center of mass implied one another. Newton probably settled on the presentation in the Principia, with three primary laws and then other statements reduced to corollaries, during 1685. After the Principia Newton expressed his second law by saying that the force on a body is proportional to its change of motion, or momentum. By the time he wrote the Principia, he had already developed calculus (which he called "the science of fluxions"), but in the Principia he made no explicit use of it, perhaps because he believed geometrical arguments in the tradition of Euclid to be more rigorous. Consequently, the Principia does not express acceleration as the second derivative of position, and so it does not give the second law as . This form of the second law was written (for the special case of constant force) at least as early as 1716, by Jakob Hermann; Leonhard Euler would employ it as a basic premise in the 1740s. Euler pioneered the study of rigid bodies and established the basic theory of fluid dynamics. Pierre-Simon Laplace's five-volume Traité de mécanique céleste (1798–1825) forsook geometry and developed mechanics purely through algebraic expressions, while resolving questions that the Principia had left open, like a full theory of the tides. The concept of energy became a key part of Newtonian mechanics in the post-Newton period. Huygens' solution of the collision of hard spheres showed that in that case, not only is momentum conserved, but kinetic energy is as well (or, rather, a quantity that in retrospect we can identify as one-half the total kinetic energy). The question of what is conserved during all other processes, like inelastic collisions and motion slowed by friction, was not resolved until the 19th century. Debates on this topic overlapped with philosophical disputes between the metaphysical views of Newton and Leibniz, and variants of the term "force" were sometimes used to denote what we would call types of energy. For example, in 1742, Émilie du Châtelet wrote, "Dead force consists of a simple tendency to motion: such is that of a spring ready to relax; living force is that which a body has when it is in actual motion." 
In modern terminology, "dead force" and "living force" correspond to potential energy and kinetic energy respectively. Conservation of energy was not established as a universal principle until it was understood that the energy of mechanical work can be dissipated into heat. With the concept of energy given a solid grounding, Newton's laws could then be derived within formulations of classical mechanics that put energy first, as in the Lagrangian and Hamiltonian formulations described above. Modern presentations of Newton's laws use the mathematics of vectors, a topic that was not developed until the late 19th and early 20th centuries. Vector algebra, pioneered by Josiah Willard Gibbs and Oliver Heaviside, stemmed from and largely supplanted the earlier system of quaternions invented by William Rowan Hamilton.
Buckminsterfullerene
Buckminsterfullerene is a type of fullerene with the formula C60. It has a cage-like fused-ring structure (truncated icosahedron) made of twenty hexagons and twelve pentagons, and resembles a soccer ball. Each of its 60 carbon atoms is bonded to its three neighbors. Buckminsterfullerene is a black solid that dissolves in hydrocarbon solvents to produce a violet solution. The substance was discovered in 1985 and has received intense study, although few real world applications have been found. Molecules of buckminsterfullerene (or of fullerenes in general) are commonly nicknamed buckyballs. Occurrence Buckminsterfullerene is the most common naturally occurring fullerene. Small quantities of it can be found in soot. It also exists in space. Neutral C60 has been observed in planetary nebulae and several types of star. The ionised form, C60+, has been identified in the interstellar medium, where it is the cause of several absorption features known as diffuse interstellar bands in the near-infrared. History Theoretical predictions of buckminsterfullerene molecules appeared in the late 1960s and early 1970s. It was first generated in 1984 by Eric Rohlfing, Donald Cox, and Andrew Kaldor using a laser to vaporize carbon in a supersonic helium beam, although the group did not realize that buckminsterfullerene had been produced. In 1985 their work was repeated by Harold Kroto, James R. Heath, Sean C. O'Brien, Robert Curl, and Richard Smalley at Rice University, who recognized the structure of C60 as buckminsterfullerene. Concurrent but unconnected to the Kroto-Smalley work, astrophysicists were working with spectroscopists to study infrared emissions from giant red carbon stars. Smalley and team were able to use a laser vaporization technique to create carbon clusters which could potentially emit infrared at the same wavelength as had been emitted by the red carbon star. Hence, the inspiration came to Smalley and team to use the laser technique on graphite to generate fullerenes. Using laser evaporation of graphite the Smalley team found Cn clusters (where and even) of which the most common were C60 and C70. A solid rotating graphite disk was used as the surface from which carbon was vaporized using a laser beam creating hot plasma that was then passed through a stream of high-density helium gas. The carbon species were subsequently cooled and ionized resulting in the formation of clusters. Clusters ranged in molecular masses, but Kroto and Smalley found predominance in a C60 cluster that could be enhanced further by allowing the plasma to react longer. They also discovered that C60 is a cage-like molecule, a regular truncated icosahedron. The experimental evidence, a strong peak at 720 atomic mass units, indicated that a carbon molecule with 60 carbon atoms was forming, but provided no structural information. The research group concluded after reactivity experiments, that the most likely structure was a spheroidal molecule. The idea was quickly rationalized as the basis of an icosahedral symmetry closed cage structure. Kroto, Curl, and Smalley were awarded the 1996 Nobel Prize in Chemistry for their roles in the discovery of buckminsterfullerene and the related class of molecules, the fullerenes. In 1989 physicists Wolfgang Krätschmer, Konstantinos Fostiropoulos, and Donald R. Huffman observed unusual optical absorptions in thin films of carbon dust (soot). 
The soot had been generated by an arc-process between two graphite electrodes in a helium atmosphere where the electrode material evaporates and condenses forming soot in the quenching atmosphere. Among other features, the IR spectra of the soot showed four discrete bands in close agreement to those proposed for C60. Another paper on the characterization and verification of the molecular structure followed on in the same year (1990) from their thin film experiments, and detailed also the extraction of an evaporable as well as benzene-soluble material from the arc-generated soot. This extract had TEM and X-ray crystal analysis consistent with arrays of spherical C60 molecules, approximately 1.0 nm in van der Waals diameter as well as the expected molecular mass of 720 Da for C60 (and 840 Da for C70) in their mass spectra. The method was simple and efficient to prepare the material in gram amounts per day (1990) which has boosted the fullerene research and is even today applied for the commercial production of fullerenes. The discovery of practical routes to C60 led to the exploration of a new field of chemistry involving the study of fullerenes. Etymology The discoverers of the allotrope named the newfound molecule after American architect R. Buckminster Fuller, who designed many geodesic dome structures that look similar to C60 and who had died in 1983, the year before discovery. Another common name for buckminsterfullerene is "buckyballs". Synthesis Soot is produced by laser ablation of graphite or pyrolysis of aromatic hydrocarbons. Fullerenes are extracted from the soot with organic solvents using a Soxhlet extractor. This step yields a solution containing up to 75% of C60, as well as other fullerenes. These fractions are separated using chromatography. Generally, the fullerenes are dissolved in a hydrocarbon or halogenated hydrocarbon and separated using alumina columns. Structure Buckminsterfullerene is a truncated icosahedron with 60 vertices, 32 faces (20 hexagons and 12 pentagons where no pentagons share a vertex), and 90 edges (60 edges between 5-membered & 6-membered rings and 30 edges are shared between 6-membered & 6-membered rings), with a carbon atom at the vertices of each polygon and a bond along each polygon edge. The van der Waals diameter of a molecule is about 1.01 nanometers (nm). The nucleus to nucleus diameter of a molecule is about 0.71 nm. The molecule has two bond lengths. The 6:6 ring bonds (between two hexagons) can be considered "double bonds" and are shorter than the 6:5 bonds (between a hexagon and a pentagon). Its average bond length is 0.14 nm. Each carbon atom in the structure is bonded covalently with 3 others. A carbon atom in the can be substituted by a nitrogen or boron atom yielding a or C59B respectively. Properties For a time buckminsterfullerene was the largest known molecule observed to exhibit wave–particle duality. In 2020 the dye molecule phthalocyanine exhibited the duality that is more famously attributed to light, electrons and other small particles and molecules. Solution Fullerenes are sparingly soluble in aromatic solvents and carbon disulfide, but insoluble in water. Solutions of pure C60 have a deep purple color which leaves a brown residue upon evaporation. The reason for this color change is the relatively narrow energy width of the band of molecular levels responsible for green light absorption by individual C60 molecules. Thus individual molecules transmit some blue and red light resulting in a purple color. 
Upon drying, intermolecular interaction results in the overlap and broadening of the energy bands, thereby eliminating the blue light transmittance and causing the purple to brown color change. crystallises with some solvents in the lattice ("solvates"). For example, crystallization of C60 from benzene solution yields triclinic crystals with the formula C60·4C6H6. Like other solvates, this one readily releases benzene to give the usual face-centred cubic C60. Millimeter-sized crystals of C60 and can be grown from solution both for solvates and for pure fullerenes. Solid In solid buckminsterfullerene, the C60 molecules adopt the fcc (face-centered cubic) motif. They start rotating at about −20 °C. This change is associated with a first-order phase transition to an fcc structure and a small, yet abrupt increase in the lattice constant from 1.411 to 1.4154 nm. solid is as soft as graphite, but when compressed to less than 70% of its volume it transforms into a superhard form of diamond (see aggregated diamond nanorod). films and solution have strong non-linear optical properties; in particular, their optical absorption increases with light intensity (saturable absorption). forms a brownish solid with an optical absorption threshold at ≈1.6 eV. It is an n-type semiconductor with a low activation energy of 0.1–0.3 eV; this conductivity is attributed to intrinsic or oxygen-related defects. Fcc C60 contains voids at its octahedral and tetrahedral sites which are sufficiently large (0.6 and 0.2 nm respectively) to accommodate impurity atoms. When alkali metals are doped into these voids, C60 converts from a semiconductor into a conductor or even superconductor. Chemical reactions and properties Redox (electron-transfer reactions) undergoes six reversible, one-electron reductions, ultimately generating . Its oxidation is irreversible. The first reduction occurs at ≈-1.0 V (Fc/), showing that C60 is a reluctant electron acceptor. tends to avoid having double bonds in the pentagonal rings, which makes electron delocalization poor, and results in not being "superaromatic". C60 behaves like an electron deficient alkene. For example, it reacts with some nucleophiles. Hydrogenation C60 exhibits a small degree of aromatic character, but it still reflects localized double and single C–C bond characters. Therefore, C60 can undergo addition with hydrogen to give polyhydrofullerenes. C60 also undergoes Birch reduction. For example, C60 reacts with lithium in liquid ammonia, followed by tert-butanol to give a mixture of polyhydrofullerenes such as C60H18, C60H32, C60H36, with C60H32 being the dominating product. This mixture of polyhydrofullerenes can be re-oxidized by 2,3-dichloro-5,6-dicyano-1,4-benzoquinone to give C60 again. A selective hydrogenation method exists. Reaction of C60 with 9,9′,10,10′-dihydroanthracene under the same conditions, depending on the time of reaction, gives C60H32 and C60H18 respectively and selectively. Halogenation Addition of fluorine, chlorine, and bromine occurs for C60. Fluorine atoms are small enough for a 1,2-addition, while Cl2 and Br2 add to remote C atoms due to steric factors. For example, in C60Br8 and C60Br24, the Br atoms are in 1,3- or 1,4-positions with respect to each other. Under various conditions a vast number of halogenated derivatives of C60 can be produced, some with an extraordinary selectivity on one or two isomers over the other possible ones. 
Addition of fluorine and chlorine usually results in a flattening of the C60 framework into a drum-shaped molecule. Addition of oxygen atoms Solutions of C60 can be oxygenated to the epoxide C60O. Ozonation of C60 in 1,2-xylene at 257K gives an intermediate ozonide C60O3, which can be decomposed into 2 forms of C60O. Decomposition of C60O3 at 296 K gives the epoxide, but photolysis gives a product in which the O atom bridges a 5,6-edge. Cycloadditions The Diels–Alder reaction is commonly employed to functionalize C60. Reaction of C60 with appropriate substituted diene gives the corresponding adduct. The Diels–Alder reaction between C60 and 3,6-diaryl-1,2,4,5-tetrazines affords C62. The C62 has the structure in which a four-membered ring is surrounded by four six-membered rings. The C60 molecules can also be coupled through a [2+2] cycloaddition, giving the dumbbell-shaped compound C120. The coupling is achieved by high-speed vibrating milling of C60 with a catalytic amount of KCN. The reaction is reversible as C120 dissociates back to two C60 molecules when heated at . Under high pressure and temperature, repeated [2+2] cycloaddition between C60 results in polymerized fullerene chains and networks. These polymers remain stable at ambient pressure and temperature once formed, and have remarkably interesting electronic and magnetic properties, such as being ferromagnetic above room temperature. Free radical reactions Reactions of C60 with free radicals readily occur. When C60 is mixed with a disulfide RSSR, the radical C60SR• forms spontaneously upon irradiation of the mixture. Stability of the radical species C60Y• depends largely on steric factors of Y. When tert-butyl halide is photolyzed and allowed to react with C60, a reversible inter-cage C–C bond is formed: Cyclopropanation (Bingel reaction) Cyclopropanation (the Bingel reaction) is another common method for functionalizing C60. Cyclopropanation of C60 mostly occurs at the junction of 2 hexagons due to steric factors. The first cyclopropanation was carried out by treating the β-bromomalonate with C60 in the presence of a base. Cyclopropanation also occur readily with diazomethanes. For example, diphenyldiazomethane reacts readily with C60 to give the compound C61Ph2. Phenyl-C61-butyric acid methyl ester derivative prepared through cyclopropanation has been studied for use in organic solar cells. Redox reactions – C60 anions and cations C60 anions The LUMO in C60 is triply degenerate, with the HOMO–LUMO separation relatively small. This small gap suggests that reduction of C60 should occur at mild potentials leading to fulleride anions, [C60]n− (n = 1–6). The midpoint potentials of 1-electron reduction of buckminsterfullerene and its anions is given in the table below: C60 forms a variety of charge-transfer complexes, for example with tetrakis(dimethylamino)ethylene: C60 + C2(NMe2)4 → [C2(NMe2)4]+[C60]− This salt exhibits ferromagnetism at 16 K. C60 cations C60 oxidizes with difficulty. Three reversible oxidation processes have been observed by using cyclic voltammetry with ultra-dry methylene chloride and a supporting electrolyte with extremely high oxidation resistance and low nucleophilicity, such as [nBu4N] [AsF6]. Metal complexes C60 forms complexes akin to the more common alkenes. Complexes have been reported molybdenum, tungsten, platinum, palladium, iridium, and titanium. The pentacarbonyl species are produced by photochemical reactions. 
M(CO)6 + C60 → M(η2-C60)(CO)5 + CO (M = Mo, W) In the case of the platinum complex, the labile ethylene ligand is the leaving group in a thermal reaction: Pt(η2-C2H4)(PPh3)2 + C60 → Pt(η2-C60)(PPh3)2 + C2H4 Titanocene complexes have also been reported: (η5-Cp)2Ti(η2-(CH3)3SiC≡CSi(CH3)3) + C60 → (η5-Cp)2Ti(η2-C60) + (CH3)3SiC≡CSi(CH3)3 Coordinatively unsaturated precursors, such as Vaska's complex, form adducts with C60: trans-Ir(CO)Cl(PPh3)2 + C60 → Ir(CO)Cl(η2-C60)(PPh3)2 One such iridium complex, [Ir(η2-C60)(CO)Cl(Ph2CH2C6H4OCH2Ph)2], has been prepared in which the metal center projects two electron-rich 'arms' that embrace the C60 guest. Endohedral fullerenes Metal atoms or certain small molecules such as H2 and noble gas atoms can be encapsulated inside the C60 cage. These endohedral fullerenes are usually synthesized by doping the metal atoms into the cage in an arc reactor or by laser evaporation. These methods give low yields of endohedral fullerenes, and a better method involves opening the cage, packing in the atoms or molecules, and resealing the opening using certain organic reactions. This method, however, is still immature and only a few species have been synthesized this way. Endohedral fullerenes show distinct and intriguing chemical properties that can be completely different from those of the encapsulated atom or molecule, as well as of the fullerene itself. The encapsulated atoms have been shown to perform circular motions inside the C60 cage, and their motion has been followed using NMR spectroscopy. Potential applications in technology The optical absorption properties of C60 match the solar spectrum in a way that suggests that C60-based films could be useful for photovoltaic applications. Because of its high electron affinity, it is one of the most common electron acceptors used in donor/acceptor-based solar cells. Conversion efficiencies up to 5.7% have been reported in C60–polymer cells. Potential applications in health Ingestion and risks C60 is sensitive to light, so leaving C60 exposed to light causes it to degrade into products that may be hazardous. The ingestion of C60 solutions that have been exposed to light could therefore lead to the development of cancer (tumors). The handling of C60 products intended for human ingestion consequently requires cautionary measures, such as preparation in very dark environments, packaging in highly opaque bottles, storage in dark places, consumption under low-light conditions, and labeling that warns about the problems caused by light. Solutions of C60 dissolved in olive oil or water, as long as they are protected from light, have been found to be nontoxic to rodents. However, one study found that C60 remains in the body for a longer time than usual, especially in the liver, where it tends to accumulate, and therefore has the potential to induce detrimental health effects. Oils with C60 and risks An experiment in 2011–2012 administered a solution of C60 in olive oil to rats and reported a major prolongation of their lifespan. Since then, many oils with C60 have been sold as antioxidant products, but this does not avoid the problem of their sensitivity to light, which can turn them toxic. Later research confirmed that exposure to light degrades solutions of C60 in oil, making them toxic and leading to a "massive" increase in the risk of developing cancer (tumors) after their consumption.
To avoid light-induced degradation, C60 oils must be prepared in very dark environments, packaged in highly opaque bottles, kept in darkness, consumed under low-light conditions, and labeled to warn about the dangers of exposing C60 to light. Some producers dissolve C60 in water instead, to avoid possible problems with oils, but this does not protect the C60 from light, so the same precautions are needed.
Physical sciences
Group 14
Chemistry
55233
https://en.wikipedia.org/wiki/High-energy%20astronomy
High-energy astronomy
High-energy astronomy is the study of astronomical objects that release electromagnetic radiation at highly energetic (short) wavelengths. It includes X-ray astronomy, gamma-ray astronomy, extreme-UV astronomy, neutrino astronomy, and studies of cosmic rays. The physical study of these phenomena is referred to as high-energy astrophysics. Astronomical objects commonly studied in this field include black holes, neutron stars, active galactic nuclei, supernovae, kilonovae, supernova remnants, and gamma-ray bursts. Missions Some space- and ground-based observatories that have been used for high-energy astronomy include AGILE, AMS-02, AUGER, CALET, Chandra, Fermi, HAWC, H.E.S.S., IceCube, INTEGRAL, MAGIC, NuSTAR, Proton, Swift, TA, XMM-Newton, and VERITAS.
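To put "highly energetic wavelengths" in perspective, photon energy and wavelength are related by E = hc/λ. The short sketch below converts a few representative photon energies into wavelengths; the band labels are rough, conventional ranges assumed for illustration only, not definitions taken from this article.

```python
# Convert photon energies to wavelengths using E = hc / lambda.
# The band labels are rough conventional ranges, assumed for illustration only.
HC_EV_NM = 1239.841984  # Planck constant times speed of light, in eV*nm

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometres for a given energy in electronvolts."""
    return HC_EV_NM / energy_ev

examples = [
    ("visible light (~2 eV)", 2.0),
    ("extreme UV (~100 eV)", 100.0),
    ("soft X-ray (~1 keV)", 1.0e3),
    ("hard X-ray (~100 keV)", 1.0e5),
    ("gamma ray (~1 MeV)", 1.0e6),
]

for label, e_ev in examples:
    print(f"{label:>22}: {wavelength_nm(e_ev):.4g} nm")
# A ~1 MeV gamma ray has a wavelength of about 0.0012 nm, several orders of
# magnitude shorter than visible light (hundreds of nm).
```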
Physical sciences
High-energy astronomy
Astronomy
55236
https://en.wikipedia.org/wiki/Compton%20scattering
Compton scattering
Compton scattering (or the Compton effect) is the quantum theory of high frequency photons scattering following an interaction with a charged particle, usually an electron. Specifically, when the photon hits electrons, it releases loosely bound electrons from the outer valence shells of atoms or molecules. The effect was discovered in 1923 by Arthur Holly Compton while researching the scattering of X-rays by light elements, and earned him the Nobel Prize for Physics in 1927. The Compton effect significantly deviated from dominating classical theories, using both special relativity and quantum mechanics to explain the interaction between high frequency photons and charged particles. Photons can interact with matter at the atomic level (e.g. photoelectric effect and Rayleigh scattering), at the nucleus, or with just an electron. Pair production and the Compton effect occur at the level of the electron. When a high frequency photon scatters due to an interaction with a charged particle, there is a decrease in the energy of the photon and thus, an increase in its wavelength. This tradeoff between wavelength and energy in response to the collision is the Compton effect. Because of conservation of energy, the lost energy from the photon is transferred to the recoiling particle (such an electron would be called a "Compton Recoil electron"). This implies that if the recoiling particle initially carried more energy than the photon, the reverse would occur. This is known as inverse Compton scattering, in which the scattered photon increases in energy. Introduction In Compton's original experiment (see Fig. 1), the energy of the X ray photon (≈ 17 keV) was significantly larger than the binding energy of the atomic electron, so the electrons could be treated as being free after scattering. The amount by which the light's wavelength changes is called the Compton shift. Although nucleus Compton scattering exists, Compton scattering usually refers to the interaction involving only the electrons of an atom. The Compton effect was observed by Arthur Holly Compton in 1923 at Washington University in St. Louis and further verified by his graduate student Y. H. Woo in the years following. Compton was awarded the 1927 Nobel Prize in Physics for the discovery. The effect is significant because it demonstrates that light cannot be explained purely as a wave phenomenon. Thomson scattering, the classical theory of an electromagnetic wave scattered by charged particles, cannot explain shifts in wavelength at low intensity: classically, light of sufficient intensity for the electric field to accelerate a charged particle to a relativistic speed will cause radiation-pressure recoil and an associated Doppler shift of the scattered light, but the effect would become arbitrarily small at sufficiently low light intensities regardless of wavelength. Thus, if we are to explain low-intensity Compton scattering, light must behave as if it consists of particles. Or the assumption that the electron can be treated as free is invalid resulting in the effectively infinite electron mass equal to the nuclear mass (see e.g. the comment below on elastic scattering of X-rays being from that effect). Compton's experiment convinced physicists that light can be treated as a stream of particle-like objects (quanta called photons), whose energy is proportional to the light wave's frequency. As shown in Fig. 
2, the interaction between an electron and a photon results in the electron being given part of the energy (making it recoil), and a photon of the remaining energy being emitted in a different direction from the original, so that the overall momentum of the system is also conserved. If the scattered photon still has enough energy, the process may be repeated. In this scenario, the electron is treated as free or loosely bound. Experimental verification of momentum conservation in individual Compton scattering processes by Bothe and Geiger, as well as by Compton and Simon, has been important in disproving the BKS theory. Compton scattering is commonly described as inelastic scattering. This is because, unlike the more common Thomson scattering that happens at the low-energy limit, the energy in the scattered photon in Compton scattering is less than the energy of the incident photon. As the electron is typically weakly bound to the atom, the scattering can be viewed from either the perspective of an electron in a potential well, or as an atom with a small ionization energy. In the former perspective, the energy of the incident photon is transferred to the recoil particle, but only as kinetic energy. The electron gains no internal energy and the respective masses remain the same, the mark of an elastic collision. From this perspective, Compton scattering could be considered elastic because the internal state of the electron does not change during the scattering process. In the latter perspective, the atom's state is changed, constituting an inelastic collision. Whether Compton scattering is considered elastic or inelastic depends on which perspective is being used, as well as the context. Compton scattering is one of four competing processes when photons interact with matter. At energies of a few eV to a few keV, corresponding to visible light through soft X-rays, a photon can be completely absorbed and its energy can eject an electron from its host atom, a process known as the photoelectric effect. High-energy photons of 1.022 MeV and above may bombard the nucleus and cause an electron and a positron to be formed, a process called pair production; even-higher-energy photons (beyond a threshold energy that depends on the nuclei involved) can eject a nucleon or alpha particle from the nucleus in a process called photodisintegration. Compton scattering is the most important interaction in the intervening energy region, at photon energies greater than those typical of the photoelectric effect but less than the pair-production threshold. Description of the phenomenon By the early 20th century, research into the interaction of X-rays with matter was well under way. It was observed that when X-rays of a known wavelength interact with atoms, the X-rays are scattered through an angle θ and emerge at a different wavelength related to θ. Although classical electromagnetism predicted that the wavelength of scattered rays should be equal to the initial wavelength, multiple experiments had found that the wavelength of the scattered rays was longer (corresponding to lower energy) than the initial wavelength. In 1923, Compton published a paper in the Physical Review that explained the X-ray shift by attributing particle-like momentum to light quanta (Albert Einstein had proposed light quanta in 1905 in explaining the photo-electric effect, but Compton did not build on Einstein's work). The energy of light quanta depends only on the frequency of the light.
In his paper, Compton derived the mathematical relationship between the shift in wavelength and the scattering angle of the X-rays by assuming that each scattered X-ray photon interacted with only one electron. His paper concludes by reporting on experiments which verified his derived relation:
λ′ − λ = (h / m_e c)(1 − cos θ),
where λ is the initial wavelength, λ′ is the wavelength after scattering, h is the Planck constant, m_e is the electron rest mass, c is the speed of light, and θ is the scattering angle. The quantity h/(m_e c) is known as the Compton wavelength of the electron; it is equal to 2.43×10−12 m. The wavelength shift λ′ − λ is at least zero (for θ = 0°) and at most twice the Compton wavelength of the electron (for θ = 180°). Compton found that some X-rays experienced no wavelength shift despite being scattered through large angles; in each of these cases the photon failed to eject an electron. Thus the magnitude of the shift is related not to the Compton wavelength of the electron, but to the Compton wavelength of the entire atom, which can be upwards of 10000 times smaller. This is known as "coherent" scattering off the entire atom since the atom remains intact, gaining no internal excitation. In Compton's original experiments the wavelength shift given above was the directly measurable observable. In modern experiments it is conventional to measure the energies, not the wavelengths, of the scattered photons. For a given incident energy E_γ = hc/λ, the outgoing final-state photon energy, E_γ′, is given by
E_γ′ = E_γ / [1 + (E_γ / m_e c²)(1 − cos θ)].
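As a quick numerical check on the two relations just given, the minimal sketch below evaluates the wavelength shift and the scattered photon energy at a few angles; the constants are standard values, and the 17 keV incident energy echoes the figure quoted above for Compton's original experiment.

```python
import math

# Compton wavelength shift and scattered photon energy versus scattering angle,
# evaluated for a 17 keV incident X-ray (the energy quoted above for Compton's
# original experiment).  Constants are standard values.
H_C_EV_NM = 1239.841984                  # h*c in eV*nm
M_E_C2_EV = 510_998.95                   # electron rest energy m_e c^2 in eV
COMPTON_WL_NM = H_C_EV_NM / M_E_C2_EV    # h/(m_e c), about 2.43e-3 nm (2.43 pm)

def compton_shift_nm(theta_deg):
    """Wavelength shift lambda' - lambda = (h / m_e c)(1 - cos theta)."""
    return COMPTON_WL_NM * (1.0 - math.cos(math.radians(theta_deg)))

def scattered_energy_ev(e_in_ev, theta_deg):
    """Scattered photon energy E' = E / [1 + (E / m_e c^2)(1 - cos theta)]."""
    return e_in_ev / (1.0 + (e_in_ev / M_E_C2_EV) * (1.0 - math.cos(math.radians(theta_deg))))

e_in = 17_000.0  # eV
for theta in (0, 45, 90, 135, 180):
    shift_pm = 1000.0 * compton_shift_nm(theta)
    e_out = scattered_energy_ev(e_in, theta)
    print(f"theta = {theta:3d} deg   shift = {shift_pm:5.3f} pm   "
          f"E' = {e_out / 1000.0:6.3f} keV   electron recoil = {e_in - e_out:6.1f} eV")
# At 90 degrees the shift equals one Compton wavelength (~2.43 pm); at 180 degrees, two.
```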
Derivation of the scattering formula A photon γ with wavelength λ collides with an electron e in an atom, which is treated as being at rest. The collision causes the electron to recoil, and a new photon γ′ with wavelength λ′ emerges at angle θ from the photon's incoming path. Let e′ denote the electron after the collision. Compton allowed for the possibility that the interaction would sometimes accelerate the electron to speeds sufficiently close to the velocity of light as to require the application of Einstein's special relativity theory to properly describe its energy and momentum. At the conclusion of Compton's 1923 paper, he reported results of experiments confirming the predictions of his scattering formula, thus supporting the assumption that photons carry momentum as well as quantized energy. At the start of his derivation, he had postulated an expression for the momentum of a photon from equating Einstein's already established mass–energy relationship of E = mc² to the quantized photon energies of hf, which Einstein had separately postulated. If mc² = hf, the equivalent photon mass must be hf/c². The photon's momentum is then simply this effective mass times the photon's frame-invariant velocity c. For a photon, its momentum p = hf/c, and thus hf can be substituted for pc for all photon momentum terms which arise in the course of the derivation below. The derivation which appears in Compton's paper is more terse, but follows the same logic in the same sequence as the following derivation. The conservation of energy merely equates the sum of energies before and after scattering,
E_γ + E_e = E_γ′ + E_e′.
Compton postulated that photons carry momentum; thus from the conservation of momentum, the momenta of the particles should be similarly related by
p_γ = p_γ′ + p_e′,
in which the initial electron momentum p_e is omitted on the assumption it is effectively zero. The photon energies are related to the frequencies by
E_γ = hf and E_γ′ = hf′,
where h is the Planck constant. Before the scattering event, the electron is treated as sufficiently close to being at rest that its total energy consists entirely of the mass–energy equivalence of its (rest) mass m_e,
E_e = m_e c².
After scattering, the possibility that the electron might be accelerated to a significant fraction of the speed of light requires that its total energy be represented using the relativistic energy–momentum relation
E_e′ = √[(p_e′ c)² + (m_e c²)²].
Substituting these quantities into the expression for the conservation of energy gives
hf + m_e c² = hf′ + √[(p_e′ c)² + (m_e c²)²].
This expression can be used to find the magnitude of the momentum of the scattered electron,
|p_e′| c = √[(hf − hf′ + m_e c²)² − (m_e c²)²] = √[(hf − hf′)² + 2(hf − hf′) m_e c²].   (1)
Note that this magnitude of the momentum gained by the electron (formerly zero) exceeds the energy/c lost by the photon,
(1/c) √[(hf − hf′)² + 2(hf − hf′) m_e c²] > (hf − hf′)/c.
Equation (1) relates the various energies associated with the collision. The electron's momentum change involves a relativistic change in the energy of the electron, so it is not simply related to the change in energy occurring in classical physics. The change of the magnitude of the momentum of the photon is not just related to the change of its energy; it also involves a change in direction. Solving the conservation of momentum expression for the scattered electron's momentum gives
p_e′ = p_γ − p_γ′.
Making use of the scalar product yields the square of its magnitude,
|p_e′|² = (p_γ − p_γ′)·(p_γ − p_γ′) = |p_γ|² + |p_γ′|² − 2 |p_γ||p_γ′| cos θ.
In anticipation of p_γ c being replaced with hf, multiply both sides by c²,
|p_e′|² c² = |p_γ|² c² + |p_γ′|² c² − 2 c² |p_γ||p_γ′| cos θ.
After replacing the photon momentum terms with hf/c, we get a second expression for the magnitude of the momentum of the scattered electron,
|p_e′|² c² = (hf)² + (hf′)² − 2 (hf)(hf′) cos θ.   (2)
Equating the alternate expressions for this momentum gives
(hf − hf′)² + 2(hf − hf′) m_e c² = (hf)² + (hf′)² − 2 (hf)(hf′) cos θ,
which, after evaluating the square and canceling and rearranging terms, further yields
2 hf m_e c² − 2 hf′ m_e c² = 2 (hf)(hf′)(1 − cos θ).
Dividing both sides by 2 h f f′ c yields
m_e c / f′ − m_e c / f = (h/c)(1 − cos θ).
Finally, since fλ = f′λ′ = c,
λ′ − λ = (h / m_e c)(1 − cos θ).
It can further be seen that the angle φ of the outgoing electron with the direction of the incoming photon is specified by
cot φ = (1 + hf / m_e c²) tan(θ/2).
Applications Compton scattering Compton scattering is of prime importance to radiobiology, as it is the most probable interaction of gamma rays and high energy X-rays with atoms in living beings and is applied in radiation therapy. Compton scattering is an important effect in gamma spectroscopy which gives rise to the Compton edge, as it is possible for the gamma rays to scatter out of the detectors used. Compton suppression is used to detect stray scatter gamma rays to counteract this effect. Magnetic Compton scattering Magnetic Compton scattering is an extension of the previously mentioned technique which involves the magnetisation of a crystal sample hit with high energy, circularly polarised photons. By measuring the scattered photons' energy and reversing the magnetisation of the sample, two different Compton profiles are generated (one for spin up momenta and one for spin down momenta). Taking the difference between these two profiles gives the magnetic Compton profile (MCP), a one-dimensional projection of the electron spin density, given by
J_mag(p_z) = (1/μ) ∬ [n↑(p) − n↓(p)] dp_x dp_y,
where μ is the number of spin-unpaired electrons in the system, and n↑(p) and n↓(p) are the three-dimensional electron momentum distributions for the majority-spin and minority-spin electrons respectively. Since this scattering process is incoherent (there is no phase relationship between the scattered photons), the MCP is representative of the bulk properties of the sample and is a probe of the ground state. This means that the MCP is ideal for comparison with theoretical techniques such as density functional theory.
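To make the projection in the MCP expression concrete, here is a minimal numerical sketch; the Gaussian spin-resolved momentum densities, their widths, and the electron counts are invented purely for illustration and are not taken from any measurement.

```python
import numpy as np

# Toy magnetic Compton profile: project a model spin-resolved momentum density
# n_up(p) - n_down(p) onto the p_z axis.  All numbers here are illustrative.
n_up_electrons, n_down_electrons = 3.0, 1.0   # two unpaired spins in this toy model
sigma_up, sigma_down = 1.0, 1.4               # Gaussian widths (arbitrary units)

p = np.linspace(-6.0, 6.0, 121)
dp = p[1] - p[0]
px, py, pz = np.meshgrid(p, p, p, indexing="ij")

def gaussian_density(n_electrons, sigma):
    """Isotropic 3D Gaussian momentum density normalised to n_electrons."""
    norm = n_electrons / (sigma * np.sqrt(2.0 * np.pi)) ** 3
    return norm * np.exp(-(px**2 + py**2 + pz**2) / (2.0 * sigma**2))

n_up = gaussian_density(n_up_electrons, sigma_up)
n_down = gaussian_density(n_down_electrons, sigma_down)

# The double integral over p_x and p_y in the MCP definition, done as a sum.
raw_profile = (n_up - n_down).sum(axis=(0, 1)) * dp * dp
mu = n_up_electrons - n_down_electrons
j_mag = raw_profile / mu                      # J_mag(p_z) with the 1/mu normalisation

print(f"area of raw profile = {raw_profile.sum() * dp:.3f}  (~ number of unpaired spins)")
print(f"area of J_mag       = {j_mag.sum() * dp:.3f}  (~ 1 after dividing by mu)")
```

The raw (unnormalised) area reproduces the number of unpaired spins in the toy model, which is the property the following sentence refers to.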
The area under the MCP is directly proportional to the spin moment of the system and so, when combined with total moment measurements methods (such as SQUID magnetometry), can be used to isolate both the spin and orbital contributions to the total moment of a system. The shape of the MCP also yields insight into the origin of the magnetism in the system. Inverse Compton scattering Inverse Compton scattering is important in astrophysics. In X-ray astronomy, the accretion disk surrounding a black hole is presumed to produce a thermal spectrum. The lower energy photons produced from this spectrum are scattered to higher energies by relativistic electrons in the surrounding corona. This is surmised to cause the power law component in the X-ray spectra (0.2–10 keV) of accreting black holes. The effect is also observed when photons from the cosmic microwave background (CMB) move through the hot gas surrounding a galaxy cluster. The CMB photons are scattered to higher energies by the electrons in this gas, resulting in the Sunyaev–Zel'dovich effect. Observations of the Sunyaev–Zel'dovich effect provide a nearly redshift-independent means of detecting galaxy clusters. Some synchrotron radiation facilities scatter laser light off the stored electron beam. This Compton backscattering produces high energy photons in the MeV to GeV range subsequently used for nuclear physics experiments. Non-linear inverse Compton scattering Non-linear inverse Compton scattering (NICS) is the scattering of multiple low-energy photons, given by an intense electromagnetic field, in a high-energy photon (X-ray or gamma ray) during the interaction with a charged particle, such as an electron. It is also called non-linear Compton scattering and multiphoton Compton scattering. It is the non-linear version of inverse Compton scattering in which the conditions for multiphoton absorption by the charged particle are reached due to a very intense electromagnetic field, for example the one produced by a laser. Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons since NICS is capable of producing photons with energy comparable to the charged particle rest energy and higher. As a consequence NICS photons can be used to trigger other phenomena such as pair production, Compton scattering, nuclear reactions, and can be used to probe non-linear quantum effects and non-linear QED.
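As a rough numerical illustration of the energy boost in inverse Compton scattering, the sketch below uses the standard Thomson-regime result that an isotropic photon field is up-scattered by an average factor of roughly 4γ²/3; the electron Lorentz factors are arbitrary example values, not parameters from any particular source.

```python
# Average inverse-Compton boost in the Thomson regime: E_out ~ (4/3) * gamma^2 * E_in,
# valid while gamma * E_in << m_e c^2.  Seed photons here are CMB photons; the
# electron Lorentz factors are arbitrary illustrative values.
K_B_EV_PER_K = 8.617333e-5      # Boltzmann constant in eV/K
M_E_C2_EV = 510_998.95          # electron rest energy in eV
T_CMB_K = 2.725
E_SEED_EV = 2.70 * K_B_EV_PER_K * T_CMB_K   # mean CMB photon energy, ~6.3e-4 eV

for gamma in (1.0e2, 1.0e3, 1.0e4, 1.0e5):
    e_out_ev = (4.0 / 3.0) * gamma**2 * E_SEED_EV
    in_thomson_regime = gamma * E_SEED_EV < 0.01 * M_E_C2_EV
    print(f"gamma = {gamma:8.0f}   E_out ~ {e_out_ev:12.4g} eV   "
          f"Thomson regime: {'yes' if in_thomson_regime else 'no'}")
# Electrons with gamma of a few thousand boost ~6e-4 eV CMB photons into the
# keV X-ray band; this is the same kind of up-scattering that underlies the
# Sunyaev-Zel'dovich and accretion-corona effects described above, although the
# seed photons and electron populations differ in each case.
```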
Physical sciences
Electromagnetic radiation
Physics
55245
https://en.wikipedia.org/wiki/Lockheed%20SR-71%20Blackbird
Lockheed SR-71 Blackbird
The Lockheed SR-71 "Blackbird" is a retired long-range, high-altitude, Mach 3+ strategic reconnaissance aircraft developed and manufactured by the American aerospace company Lockheed Corporation. Its nicknames include "Blackbird" and "Habu". The SR-71 was developed in the 1960s as a black project by Lockheed's Skunk Works division. American aerospace engineer Clarence "Kelly" Johnson was responsible for many of the SR-71's innovative concepts. Its shape was based on the Lockheed A-12, a pioneer in stealth technology with its reduced radar cross section, but the SR-71 was longer and heavier to carry more fuel and a crew of two in tandem cockpits. The SR-71 was revealed to the public in July 1964 and entered service in the United States Air Force (USAF) in January 1966. During missions, the SR-71 operated at high speeds and altitudes (Mach 3.2 and ), allowing it to evade or outrace threats. If a surface-to-air missile launch was detected, the standard evasive action was to accelerate and outpace the missile. Equipment for the plane's aerial reconnaissance missions included signals-intelligence sensors, side-looking airborne radar, and a camera. On average, an SR-71 could fly just once per week because of the lengthy preparations needed. A total of 32 aircraft were built; 12 were lost in accidents, none to enemy action. In 1974, a pair of SR-71 flights set the records for highest sustained flight and quickest flight between London and New York. In 1976, it became the fastest airbreathing manned aircraft, previously held by its predecessor, the closely related Lockheed YF-12. , the Blackbird still holds all three world records. In 1989, the USAF retired the SR-71, largely for political reasons, although several were briefly reactivated before their second retirement in 1998. NASA was the final operator of the Blackbird, using it as a research platform, until it was retired again in 1999. Since its retirement, the SR-71's role has been taken up by a combination of reconnaissance satellites and unmanned aerial vehicles (UAVs). As of 2018, Lockheed Martin was developing a proposed UAV successor, the SR-72, with plans to fly it in 2025. Development Background Lockheed's previous reconnaissance aircraft was the relatively slow U-2, designed for the Central Intelligence Agency (CIA). In late 1957, the CIA approached the defense contractor Lockheed to build an undetectable spy plane. The project, named Archangel, was led by Kelly Johnson, head of Lockheed's Skunk Works unit in Burbank, California. The work on project Archangel began in the second quarter of 1958, with aim of flying higher and faster than the U-2. Of 11 successive designs drafted in a span of 10 months, "A-10" was the front-runner, although its shape made it vulnerable to radar detection. After a meeting with the CIA in March 1959, the design was modified to reduce its radar cross-section by 90%. On 11 February 1960, the CIA approved a US$96 million (~$ in ) contract for Skunk Works to build a dozen A-12 spy planes. Three months later, the May 1960 downing of Francis Gary Powers's U-2 underscored the need for less vulnerable reconnaissance aircraft. The A-12 first flew at Groom Lake (Area 51), Nevada, on 25 April 1962. Thirteen were built, plus five more of two variants: three of the YF-12 interceptor prototype and two of the M-21 drone carrier. 
The aircraft was to be powered by the Pratt & Whitney J58 engine, but J58 development was taking longer than scheduled, so it was initially equipped with the lower-thrust Pratt & Whitney J75 to enable flight testing to begin. The J58s were retrofitted as they became available, and became the standard engine for all subsequent aircraft in the series (A-12, YF-12, M-21), as well as the SR-71. The A-12 flew missions over Vietnam and North Korea before its retirement in 1968. The program's cancellation was announced on 28 December 1966, due both to budget concerns and because of the forthcoming SR-71, a derivative of the A-12. Designation as SR-71 The SR-71 designation is a continuation of the pre-1962 bomber series; the last aircraft built using the series was the XB-70 Valkyrie. However, a bomber variant of the Blackbird was briefly given the B-71 designator, which was retained when the type was changed to SR-71. During the later stages of its testing, the B-70 was proposed for a reconnaissance/strike role, with an "RS-70" designation. When the A-12's performance potential was clearly found to be much greater, the USAF ordered a variant of the A-12 in December 1962, which was originally named R-12 by Lockheed. This USAF version was longer and heavier than the original A-12 because it had a longer fuselage to hold more fuel. The R-12 also had a crew of two in tandem cockpits, and reshaped fuselage chines. Reconnaissance equipment included signals intelligence sensors, a side-looking airborne radar, and a photo camera. The CIA's A-12 was a better photo-reconnaissance platform than the USAF's R-12: since the A-12 flew higher and faster, and with only a pilot, it had room to carry a better camera and more instruments. The A-12 flew covert missions while the SR-71 flew overt missions; the latter had USAF markings and pilots carried Geneva Conventions Identification Cards. During the 1964 campaign, Republican presidential nominee Barry Goldwater repeatedly criticized President Lyndon B. Johnson and his administration for falling behind the Soviet Union in developing new weapons. Johnson decided to counter this criticism by revealing the existence of the YF-12A USAF interceptor, which also served as cover for the still-secret A-12 and the USAF reconnaissance model since July 1964. USAF Chief of Staff General Curtis LeMay preferred the SR (Strategic Reconnaissance) designation and wanted the RS-71 to be named SR-71. Before the July speech, LeMay lobbied to modify Johnson's speech to read "SR-71" instead of "RS-71". The media transcript given to the press at the time still had the earlier RS-71 designation in places, creating the story that the president had misread the aircraft's designation. To conceal the A-12's existence, Johnson referred only to the A-11, while revealing the existence of a high-speed, high-altitude reconnaissance aircraft. In 1968, Secretary of Defense Robert McNamara canceled the F-12 interceptor program. The specialized tooling used to manufacture both the YF-12 and the SR-71 was also ordered destroyed. Production of the SR-71 totaled 32 aircraft: 29 SR-71As, two SR-71Bs, and one SR-71C. Design Overview The SR-71 was designed for flight at over Mach 3 with tandem cockpits for a crew of two: a pilot; and a reconnaissance systems officer who navigated and operated the surveillance systems. It was extremely important for the pilot and RSO to work well together as a crew. 
The SR-71 was designed with the smallest radar cross-section that Lockheed could achieve, an early attempt at stealth design. Aircraft were painted black. This color radiated heat from the surface more effectively than the bare metal, reducing the temperature of the skin and thermal stresses on the airframe. The appearance of the painted aircraft gave it the nickname "Blackbird". Airframe, canopy, and landing gear Titanium was used for 85% of the structure, with much of the rest being polymer composite materials. To control costs, Lockheed used a more easily worked titanium alloy, which softened at a lower temperature. The challenges posed led Lockheed to develop new fabrication methods, which have since been used in the manufacture of other aircraft. Lockheed found that washing welded titanium requires distilled water, as the chlorine present in tap water is corrosive; cadmium-plated tools could not be used, as they also caused corrosion. Metallurgical contamination was another problem; at one point, 80% of the delivered titanium for manufacture was rejected on these grounds. The high temperatures generated in flight required special design and operating techniques. Major sections of the skin of the inboard wings were corrugated, not smooth. Aerodynamicists initially opposed the concept, disparagingly referring to the aircraft as a Mach 3 variant of the 1920s-era Ford Trimotor, which was known for its corrugated aluminum skin. But high heat would have caused a smooth skin to split or curl, whereas the corrugated skin could expand vertically and horizontally and had increased longitudinal strength. Fuselage panels were manufactured to fit only loosely with the aircraft on the ground. Proper alignment was achieved as the airframe heated up, with thermal expansion of several inches. Because of this, and the lack of a fuel-sealing system that could remain leak-free with the extreme temperature cycles during flight, the aircraft leaked JP-7 fuel on the ground prior to takeoff, annoying ground crews. The outer windscreen of the cockpit was made of three layers of glass with cooling sections between them. The ANS navigation window was made of solid quartz and was fused ultrasonically to the titanium frame. The temperature of the exterior of the windscreen could reach during a mission. The Blackbird's tires, manufactured by B.F. Goodrich, contained aluminum and were inflated with nitrogen. They cost $2,300 each and generally required replacing within 20 missions. The Blackbird landed at more than and deployed a drag parachute to reduce landing roll and brake and tire wear. Shape and threat avoidance The SR-71 was the second operational aircraft, after the Lockheed A-12,, designed to be hard to spot on radar. Early studies in stealth technology indicated that a shape with flattened, tapering sides would reflect most radar energy away from a beam's place of origin, so Lockheed's engineers added chines and canted the vertical control surfaces inward. Special radar-absorbing materials were incorporated into sawtooth-shaped sections of the aircraft's skin. Cesium-based fuel additives were used to somewhat reduce the visibility of exhaust plumes to radar, although exhaust streams remained quite apparent. Ultimately, engineers produced an aircraft with a wing area of about but a radar cross-section (RCS) of around . Johnson later conceded that Soviet radar technology advanced faster than the stealth technology employed against it. 
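The value of shaving down the radar cross-section can be gauged from the standard radar range equation, in which, for a fixed radar, detection range scales as the fourth root of the target's RCS. The baseline range and reduction factors in this sketch are generic illustrations, not actual SR-71 or Soviet radar figures.

```python
# Radar range equation scaling: for a fixed radar, maximum detection range
# is proportional to RCS**0.25.  The baseline range and the RCS reduction
# factors below are arbitrary illustrative values, not SR-71 data.
BASELINE_RANGE_KM = 400.0     # assumed detection range against a reference target

def detection_range_km(rcs_reduction_factor):
    """Detection range after the target's RCS is divided by the given factor."""
    return BASELINE_RANGE_KM * rcs_reduction_factor ** -0.25

for factor in (1, 10, 100, 1000):
    r = detection_range_km(factor)
    print(f"RCS reduced {factor:5d}x -> detection range ~{r:6.1f} km "
          f"({100.0 * r / BASELINE_RANGE_KM:.0f}% of baseline)")
# Even a 1000-fold RCS reduction only shrinks detection range to ~18% of the
# baseline, which is why the design also leaned heavily on speed and altitude.
```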
While the SR-71 carried radar countermeasures to evade interception efforts, its greatest protection was its combination of high altitude and very high speed, which made it invulnerable at the time. Along with its low radar cross-section, these qualities gave a very short time for an enemy surface-to-air missile (SAM) site to acquire and track the aircraft on radar. By the time the SAM site could track the SR-71, it was often too late to launch a SAM, and the SR-71 would be out of range before the SAM could catch up to it. If the SAM site could track the SR-71 and fire a SAM in time, the SAM would expend nearly all of the delta-v of its boost and sustainer phases just reaching the SR-71's altitude; at this point, out of thrust, it could do little more than follow its ballistic arc. Merely accelerating would typically be enough for an SR-71 to evade a SAM; changes by the pilots in the SR-71's speed, altitude, and heading were also often enough to spoil any radar lock on the plane by SAM sites or enemy fighters. At sustained speeds of more than Mach 3.2, the plane was faster than the Soviet Union's fastest interceptor, the Mikoyan-Gurevich MiG-25, which also could not reach the SR-71's altitude. No SR-71 was ever shot down. The SR-71 featured chines, a pair of sharp edges leading aft from either side of the nose along the fuselage. These were not a feature on the early A-3 design; Frank Rodgers, a doctor at the Scientific Engineering Institute, a CIA front organization, discovered that a cross-section of a sphere had a greatly reduced radar reflection, and adapted a cylindrical-shaped fuselage by stretching out the sides of the fuselage. After the advisory panel provisionally selected Convair's FISH design over the A-3 on the basis of RCS, Lockheed adopted chines for its A-4 through A-6 designs. Aerodynamicists discovered that the chines generated powerful vortices and created additional lift, leading to unexpected aerodynamic performance improvements. For example, they allowed a reduction in the wings' angle of incidence, which added stability and reduced drag at high speeds, allowing more weight to be carried, such as fuel. Landing speeds were also reduced, as the chines' vortices created turbulent flow over the wings at high angles of attack, making it harder to stall. The chines also acted like leading-edge extensions, which increase the agility of fighters such as the F-5, F-16, F/A-18, MiG-29, and Su-27. The addition of chines also allowed the removal of the planned canard foreplanes. Propulsion system or powerplant Complete powerplant The SR-71 used the same powerplant as the A-12 and YF-12. It consists of three main parts: inlet, J58 engine and its nacelle, and ejector nozzle. "Typical for any supersonic powerplant the engine cannot be considered separately from the rest of the powerplant. Rather, it may be regarded as the heat pump in the over-all system of inlet, engine, and nozzle. The net thrust available to propel the aircraft may be to a large extent controlled by the performance of the inlet and nozzle rather than by the physical potentialities of the engine alone." This is illustrated for the Blackbird by the thrust contributions from each component at Mach 3+ with maximum afterburner: inlet 54%, engine 17.6%, ejector nozzle 28.4%. When stationary and at low speeds the inlet caused a loss in engine thrust. This was due to the flow restriction through the inlet when stationary. 
Thrust was recovered with ram pressure as flight speed increased (uninstalled thrust 34,000 lb, installed at zero airspeed 25,500 lb rising through 30,000 lb at 210 knots, unstick speed). At supersonic speeds not all the airflow approaching the inlet capture area entered the inlet. At supersonic speeds an intake always adapts to the engine requirements, rather than forcing air into the engine, and the unwanted air flows around the outside of the cowl, causing spillage drag. More than half the air approaching the capture area had to be spilled at low supersonic speeds and the amount reduced as the design speed was approached because the inlet airflow had been designed to match the engine demand at that speed and the chosen design point ambient temperature. At this speed the spike shock touched the cowl lip and there was minimal spillage (with its attendant drag) as shown by Campbell. The inlet and engine matching was also shown by Brown, who emphasized the benefit of increased engine airflow at higher Mach numbers that came with the introduction of the bleed bypass cycle. These two authors show the disparity between inlet and engine for the Blackbird in terms of airflow and it is further explained in more general terms by Oates. Engine operation was adversely affected when operating behind an unstarted inlet. In this condition the inlet behaved like a subsonic inlet design (known as a pitot type) at high supersonic speeds, with very low airflow to the engine. Fuel was automatically diverted, by the fuel derich system, from the combustor to prevent turbine over-temperature. All three parts were linked by the secondary airflow. The inlet needed the boundary layers removed from its spike and cowl surfaces. The one with the higher pressure recovery, the cowl shock-trap bleed, was chosen as secondary air to ventilate and cool the outside of the engine. It was assisted from the inlet by the pumping action of the engine exhaust in the ejector nozzle, cushioning the engine exhaust as it expanded over a wide range of pressure ratios which increased with flight speed. Mach 3.2 in a standard day atmosphere was the design point for the aircraft. However, in practice the SR-71 was more efficient at even faster speeds and colder temperatures. The specific range charts showed for a standard day temperature, and a particular weight, that Mach 3.0 cruise used 38,000 lb per hour of fuel. At 3.15 Mach the fuel flow was 36,000 lb/hr. Flying in colder temperatures (known as temperature deviations from the standard day) would also reduce the fuel used, e.g. with a -10 degC temperature the fuel flow was 35,000 lb/hr. During one mission, SR-71 pilot Brian Shul flew faster than usual to avoid multiple interception attempts. It was discovered after the flight that this had reduced the fuel consumption. It is possible to match the powerplant for optimum performance at only one ambient temperature because the airflows for a supersonic inlet and engine vary differently with ambient temperature. For an inlet, the airflow varies inversely with the square root of the temperature, and for the engine, it varies with the direct inverse. Inlet The inlet needed internal supersonic diffusion since external compression used on slower aircraft caused too high a drag at Blackbird speeds. The aerodynamic features and functioning of the inlet are the subject of a patent, "Supersonic Inlet For Jet Engines" by the inlet designer, David Campbell. 
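Returning to the single-temperature matching point noted above: the mismatch between what the inlet supplies and what the engine demands can be sketched numerically using the scalings stated in the text (inlet airflow varying as 1/√T, engine airflow demand as 1/T). The design-point temperature and the deviations below are illustrative assumptions, not Lockheed figures.

```python
# Inlet airflow supply versus engine airflow demand as the ambient temperature
# departs from the matched design point.  The scalings are those stated in the
# text; the design temperature and deviations are illustrative assumptions.
T_DESIGN_K = 216.65   # assumed design-point ambient temperature (standard-day stratosphere)

def inlet_flow_ratio(t_ambient_k):
    """Inlet airflow relative to the design point, scaling as 1/sqrt(T)."""
    return (T_DESIGN_K / t_ambient_k) ** 0.5

def engine_demand_ratio(t_ambient_k):
    """Engine airflow demand relative to the design point, scaling as 1/T."""
    return T_DESIGN_K / t_ambient_k

for dev_c in (-15, -10, -5, 0, 5, 10, 15):
    t = T_DESIGN_K + dev_c
    supply = inlet_flow_ratio(t)
    demand = engine_demand_ratio(t)
    mismatch_pct = 100.0 * (supply - demand) / demand
    print(f"dT = {dev_c:+3d} C   inlet {supply:.3f}   engine {demand:.3f}   "
          f"mismatch {mismatch_pct:+.1f}%")
# On days colder than the design point the inlet supplies relatively less air
# than the engine wants (and vice versa), so an exact match exists at only one
# ambient temperature; the bypass system absorbs the difference elsewhere.
```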
When operating as an efficient supersonic compressor (known as started), supersonic diffusion takes place in front of the cowl and internally in a converging passage as far as a terminal shock where the passage area starts increasing and subsonic diffusion takes place. The inlet may also operate very inefficiently if the terminal shock is not held in position by a control system. In this instance, if the shock moves forward of the minimum area (throat) it will be in an unstable position and shoots forward in an instant to a stable position outside the cowl (known as unstarted). The features of the inlet and what they do are also explained in the "A-12 Utility Flight Manual" and in a presentation by Lockheed Technical Fellow Emeritus Tom Anderson. All features are visible in varying degrees in Figures 1, 4 and 5. They are 1) centerbody or spike in fully forward position, 2) spike boundary layer bleed slots where normal shock is located, 3) cowl boundary layer bleed 'shock trap' entrance, 4) streamlined bodies known as 'mice' in subsonic flow, 5) forward bypass bleed ports between each of the 'mice', 6) rear bypass ring, 7) louvers on external surface for spike boundary layer overboard, 8) louvers on external surface for front bypass overboard. Venting this bypass overboard could affect the aircraft flying qualities because it produced high drag, 6,000 lb at cruise with 50% door opening, compared to the total aircraft drag of 14,000 lb. In the early years of operation, the analog computers would not always keep up with rapidly changing inputs from the nose boom. If the duct back pressure became too great and the spike was incorrectly positioned, the shock wave would suddenly blow out the front of the inlet, causing an "inlet unstart". During unstarts, afterburner extinctions were common. The remaining engine's asymmetrical thrust would cause the aircraft to yaw violently to one side. SAS, autopilot, and manual control inputs would attempt to regain controlled flight, but often extreme yaw would reduce airflow in the opposite engine and stimulate "sympathetic stalls". This generated a rapid counter-yawing, often coupled with loud "banging" noises, and a rough ride during which crews' helmets would sometimes strike their cockpit canopies. One response to a single unstart was unstarting both inlets to prevent yawing, then restarting them both. After wind tunnel testing and computer modeling by NASA Dryden test center, Lockheed installed an electronic control to detect unstart conditions and perform this reset action without pilot intervention. During troubleshooting of the unstart issue, NASA also discovered the vortices from the nose chines were entering the engine and interfering with engine efficiency. NASA developed a computer to control the engine bypass doors which countered this issue and improved efficiency. Beginning in 1980, the analog inlet control system was replaced by a digital system, Digital Automatic Flight and Inlet Control System (DAFICS), which reduced unstart instances. Engine and nacelle The engine was an extensively re-designed version of the J58-P2, an existing supersonic engine which had run 700 development hours in support of proposals to power various aircraft for the U.S. Navy. Only the compressor and turbine aerodynamics were retained. 
New design requirements for cruise at Mach 3.2 included: operating with very high ram temperature air entering the compressor, at a continuous turbine temperature capability hotter than previous experience (Pratt & Whitney J75) continuous use of maximum afterburning the use of new, more expensive, materials and fluids required to withstand unprecedented high temperatures The engine was an afterburning turbojet for take-off and transonic flight (bleed bypass closed) and a low bypass augmented turbofan for supersonic acceleration (bleed bypass open). It approximated a ramjet during high speed supersonic cruise (with a pressure loss, compressor to exhaust, of 80% which was typical of a ramjet). It was a low bypass turbofan for subsonic loiter (bleed bypass open). Analysis of the J58-P2 supersonic performance showed the high compressor inlet temperature would have caused stalling, choking and blade breakages in the compressor as a result of operating at low corrected speeds on the compressor map. These problems were resolved by Pratt & Whitney engineer Robert Abernethy and are explained in his patent, "Recover Bleed Air Turbojet". His solution was to 1) incorporate six air-bleed tubes, prominent on the outside of the engine, to transfer 20% of the compressor air to the afterburner, and 2) to modify the inlet guide vanes with a 2-position, trailing edge flap. The compressor bleed enabled the compressor to operate more efficiently and with the resulting increase in engine airflow matched the inlet design flow with an installed thrust increase of 47%. A continuous turbine temperature of was enabled with air-cooled first stage turbine vane and blades. Continuous operation of maximum afterburning was enabled by passing relatively cool air from the compressor along the inner surface of the duct and nozzle. Ceramic thermal barrier coatings were also used. The secondary airflow through the nacelle comes from the cowl boundary layer bleed system which is oversized (flows more than boundary layer) to give a high enough pressure recovery to support the ejector pumping action. Additional air comes from the rear bypass doors and, for low speed operation with negligible inlet ram, from suck-in doors by the compressor case. Ejector Nozzle The nozzle had to operate efficiently over a wide range of pressure ratios from low, with no inlet ram with a stationary aircraft, to 31 times the external pressure at . A blow-in door ejector nozzle had been invented by Pratt & Whitney engineer Stuart Hamilton in the late 1950s and described in his patent "Variable Area Exhaust Nozzle". In this description the nozzle is an integral part of the engine (as it was in the contemporary Mach 3 General Electric YJ93. For the Blackbird powerplant the nozzle was more efficient structurally (lighter) by incorporating it as part of the airframe because it carried fin and wing loads through the ejector shroud. The nozzle used secondary air from two sources, the inlet cowl boundary layer and rear bypass from immediately in front of the compressor. It used external flow on the nacelle through the tertiary blow-in doors until ram closed them at Mach 1.5. Only secondary air was used at higher speeds with the blow-in doors closed. At low flight speeds the engine exhaust pressure at the primary nozzle exit was greater than ambient so tended to over-expand to lower than ambient in the shroud causing impingement shocks. Secondary and blow-in door air surrounding the exhaust cushioned it preventing over-expansion. 
Inlet ram pressure increased with flight speed and the higher pressure in the exhaust system closed, first the blow-in doors and then started to open the nozzle flaps until they were fully open at Mach 2.4. The final nozzle area did not increase with further increase in flight speed (for complete expansion to ambient and greater internal thrust) because its external diameter, greater than nacelle diameter would cause too much drag. Fuel JP-7 fuel was used. It was difficult to ignite. To start the engines, triethylborane (TEB), which ignites on contact with air, was injected to produce temperatures high enough to ignite the JP-7. The TEB produced a characteristic green flame, which could often be seen during engine ignition. The fuel was used as a heat sink for the rest of the aircraft to cool the pilot and the electronics. An electric starting system was not possible due to the limited capacity of the cooling system, so the chemical ignition system was used. On a typical mission, the SR-71 took off with a partial fuel load to reduce stress on the brakes and tires during takeoff and also ensure it could successfully take off should one engine fail. Within 20 seconds, the aircraft traveled , reached , and lifted off. It reached of altitude in less than two minutes, and the typical cruising altitude in another 17 minutes, having used one third of its fuel. It is a common misconception that the planes refueled shortly after takeoff because the fuel tanks, which formed the outer skin of the aircraft, leaked on the ground. It was not possible to prevent leaks when the aircraft skin was cold and the tanks only sealed when the skin warmed as the aircraft speed increased. The ability of the sealant to prevent leaks was compromised by the expansion and contraction of the skin with each flight. However, the amount of fuel that leaked, measured as drops per minute on the ground from specific locations, was not enough to make refueling necessary. The SR-71 also required in-flight refueling to replenish fuel during long-duration missions. Supersonic flights generally lasted no more than 90 minutes before the pilot had to find a tanker. Specialized KC-135Q tankers were required to refuel the SR-71. The KC-135Q had a modified high-speed boom, which would allow refueling of the Blackbird at near the tanker's maximum airspeed. The tanker also had special fuel systems for moving JP-4 (for the KC-135Q itself) and JP-7 (for the SR-71) between different tanks. As an aid to the pilot when refueling, the cockpit was fitted with a peripheral vision horizon display. This unusual instrument projected a barely visible artificial horizon line across the top of the entire instrument panel, which gave the pilot subliminal cues on aircraft attitude. If a KC-135Q was not available any tanker with JP-4 or JP-5 could be used in an emergency to avoid losing the aircraft, but with a Mach 1.5 speed limit. On hot days, when approaching the maximum fuel load of , the left engine had to be run with minimum afterburner to maintain probe contact. Astro-inertial navigation system Nortronics, Northrop Corporation's electronics development division, had developed an astro-inertial guidance system (ANS), which could correct inertial navigation system errors with celestial observations, for the SM-62 Snark missile, and a separate system for the ill-fated AGM-48 Skybolt missile, the latter of which was adapted for the SR-71. Before takeoff, a primary alignment brought the ANS's inertial components to a high degree of accuracy. 
In flight, the ANS, which sat behind the reconnaissance systems officer's (RSO's), position, tracked stars through a circular quartz glass window on the upper fuselage. Its "blue light" source star tracker, which could see stars during both day and night, would continuously track a variety of stars as the aircraft's changing position brought them into view. The system's digital computer ephemeris contained data on a list of stars used for celestial navigation: the list first included 56 stars and was later expanded to 61. The ANS could supply altitude and position to flight controls and other systems, including the mission data recorder, automatic navigation to preset destination points, automatic pointing and control of cameras and sensors, and optical or SLR sighting of fixed points loaded into the ANS before takeoff. According to Richard Graham, a former SR-71 pilot, the navigation system was good enough to limit drift to off the direction of travel at Mach 3. Sensors and payloads The SR-71 originally included optical/infrared imagery systems; side-looking airborne radar (SLAR); electronic intelligence (ELINT) gathering systems; defensive systems for countering missile and airborne fighters; and recorders for SLAR, ELINT, and maintenance data. The SR-71 carried a Fairchild tracking camera and an infrared camera, both of which ran during the entire mission. As the SR-71 had a second cockpit behind the pilot for the RSO, it could not carry the A-12's principal sensor, a single large-focal-length optical camera that sat in the "Q-Bay" behind the A-12's single cockpit. Instead, the SR-71's camera systems could be located either in the fuselage chines or the removable nose/chine section. Wide-area imaging was provided by two of Itek's Operational Objective Cameras, which provided stereo imagery across the width of the flight track, or an Itek Optical Bar Camera, which gave continuous horizon-to-horizon coverage. A closer view of the target area was given by the HYCON Technical Objective Camera (TEOC), which could be directed up to 45° left or right of the centerline. Initially, the TEOCs could not match the resolution of the A-12's larger camera, but rapid improvements in both the camera and film improved this performance. SLAR, built by Goodyear Aerospace, could be carried in the removable nose. In later life, the radar was replaced by Loral's Advanced Synthetic Aperture Radar System (ASARS-1). Both the first SLAR and ASARS-1 were ground-mapping imaging systems, collecting data either in fixed swaths left or right of centerline or from a spot location for higher resolution. ELINT-gathering systems, called the Electro Magnetic Reconnaissance System, built by AIL could be carried in the chine bays to analyze electronic signal fields being passed through, and were programmed to identify items of interest. Over its operational life, the Blackbird carried various electronic countermeasures (ECMs), including warning and active electronic systems built by several ECM companies and called Systems A, A2, A2C, B, C, C2, E, G, H, and M. On a given mission, an aircraft carried several of these frequency/purpose payloads to meet the expected threats. Major Jerry Crew, an RSO, told Air & Space/Smithsonian that he used a jammer to try to confuse surface-to-air missile sites as their crews tracked his airplane, but once his threat-warning receiver told him a missile had been launched, he switched off the jammer to prevent the missile from homing in on its signal. 
After landing, information from the SLAR, ELINT gathering systems, and the maintenance data recorder were subjected to postflight ground analysis. In the later years of its operational life, a datalink system could send ASARS-1 and ELINT data from about of track coverage to a suitably equipped ground station. Life support Flying at meant that crews could not use standard masks, which could not provide enough oxygen above . Specialized protective pressurized suits were produced for crew members by the David Clark Company for the A-12, YF-12, M-21 and SR-71. Furthermore, an emergency ejection at Mach 3.2 would subject crews to temperatures of about ; thus, during a high-altitude ejection scenario, an onboard oxygen supply would keep the suit pressurized during the descent. The cockpit could be pressurized to an altitude of during flight. The cabin needed a heavy-duty cooling system, as cruising at Mach 3.2 would heat the aircraft's external surface well beyond and the inside of the windshield to . An air conditioner used a heat exchanger to dump heat from the cockpit into the fuel prior to combustion. The same air-conditioning system was also used to keep the front (nose) landing gear bay cool, thereby eliminating the need for the special aluminum-impregnated tires similar to those used on the main landing gear. Blackbird pilots and RSOs were provided with food and drink for the long reconnaissance flights. Water bottles had long straws which crewmembers guided into an opening in the helmet by looking in a mirror. Food was contained in sealed containers similar to toothpaste tubes which delivered food to the crewmember's mouth through the helmet opening. Operational history Main era The first flight of an SR-71 took place on 22 December 1964, at USAF Plant 42 in Palmdale, California, piloted by Bob Gilliland. The SR-71 reached a top speed of Mach 3.4 during flight testing, with pilot Major Brian Shul reporting a speed in excess of Mach 3.5 on an operational sortie while evading a missile over Libya. The first SR-71 to enter service was delivered to the 4200th (later, 9th) Strategic Reconnaissance Wing at Beale Air Force Base, California, in January 1966. SR-71s first arrived at the 9th SRW's Operating Location (OL-8) at Kadena Air Base, Okinawa, Japan on 8 March 1968. These deployments were code-named "Glowing Heat", while the program as a whole was code-named "Senior Crown". Reconnaissance missions over North Vietnam were code-named "Black Shield" and then renamed "Giant Scale" in late 1968. On 21 March 1968, Major (later General) Jerome F. O'Malley and Major Edward D. Payne flew the first operational SR-71 sortie in SR-71 serial number from Kadena AFB, Okinawa. During its career, this aircraft (976) accumulated 2,981 flying hours and flew 942 total sorties (more than any other SR-71), including 257 operational missions, from Beale AFB; Palmdale, California; Kadena Air Base, Okinawa, Japan; and RAF Mildenhall, UK. The aircraft was flown to the National Museum of the United States Air Force near Dayton, Ohio in March 1990. The USAF could fly each SR-71, on average, once per week, because of the extended turnaround required after mission recovery. Very often an aircraft would return with rivets missing, delaminated panels or other broken parts such as inlets requiring repair or replacement. There were cases of the aircraft not being ready to fly again for a month due to the repairs needed. 
Rob Vermeland, Lockheed Martin's manager of Advanced Development Program, said in an interview in 2015 that high-tempo operations were not realistic for the SR-71. "If we had one sitting in the hangar here and the crew chief was told there was a mission planned right now, then 19 hours later it would be safely ready to take off." From the beginning of the Blackbird's reconnaissance missions over North Vietnam and Laos in 1968, the SR-71s averaged approximately one sortie a week for nearly two years. By 1970, the SR-71s were averaging two sorties per week, and by 1972, they were flying nearly one sortie every day. Two SR-71s were lost during these missions, one in 1970 and the second aircraft in 1972, both due to mechanical malfunctions. Over the course of its reconnaissance missions during the Vietnam War, the North Vietnamese fired approximately 800 SAMs at SR-71s, none of which managed to score a hit. Pilots did report that missiles launched without radar guidance and no launch detection, had passed as close as from the aircraft. While deployed at Okinawa, the SR-71s and their aircrew members gained the nickname Habu (as did the A-12s preceding them) after a pit viper indigenous to Japan, which the Okinawans thought the plane resembled. Operational highlights for the entire Blackbird family (YF-12, A-12, and SR-71) as of about 1990 included: 3,551 mission sorties flown 17,300 total sorties flown 11,008 mission flight hours 53,490 total flight hours 2,752 hours Mach 3 time (missions) 11,675 hours Mach 3 time (total) Only one crew member, Jim Zwayer, a Lockheed flight-test reconnaissance and navigation systems specialist, was killed in a flight accident. The rest of the crew members ejected safely or evacuated their aircraft on the ground. An SR-71 was used domestically in 1971 to assist the FBI in their manhunt for the skyjacker D.B. Cooper. The Blackbird was to retrace and photograph the flightpath of the hijacked 727 from Seattle to Reno and attempt to locate any of the items that Cooper was known to have parachuted with from the aircraft. Five flights were attempted but on each occasion no photographs of the flight path were obtained due to low visibility. European flights European operations were flown from RAF Mildenhall, England, with two weekly routes. One was along the Norwegian west coast and up the Kola Peninsula, monitoring several large naval bases belonging to the Soviet Navy's Northern Fleet. Over the years, there were several emergency landings in Norway, four in Bodø and two of them in 1981, flying from Beale, in 1985. Rescue parties were sent in to repair the planes before leaving. On one occasion, one complete wing with engine was replaced as the easiest way to get the plane airborne again. The other route was known as the Baltic Express, which started from Mildenhall and went through Jutland and the Danish straits before going out over the Baltic Sea. At the time, the USSR controlled the airspace from the DDR to the Gulf of Finland, with Finland and Sweden pursuing neutrality in the Cold War. This meant that NATO aircraft entering the Baltic Sea had to fly through a narrow corridor of international airspace between Scania and Western Pomerania, which was monitored by both the Swedish and Soviet Air Forces. Starting a counter-clockwise 30 minute loop, the Blackbirds would then reconnoiter along the Soviet Union's coastal border, before slowing down to Mach 2.54 to make a left turn south of Åland, and then follow the Swedish coast back towards Denmark. 
If the SR-71s attempted the turn at Mach 3, they could end up violating Swedish airspace, and the Swedes would direct Viggens to intercept the offending aircraft. The combination of a monitored entry point and a fixed route allowed the Swedes and the Soviets a chance to scramble interceptors. Swedish radar stations would observe the 15th Air Army dispatch Su-15s from Latvia, and MiG-21s and MiG-23s from Estonia, although only the Sukhois would have even a slim chance of successfully intercepting the American aircraft. The greater Soviet threat came from the MiG-25s stationed at Finow-Eberswalde in the DDR. The Swedes noted that the Soviets would usually send a single MiG-25 "Foxbat" from Finow to intercept the SR-71 on its way back out of the Baltic Sea. With the Blackbird flying at , the Foxbat would regularly close to an altitude of , precisely behind the SR-71, before disengaging. The Swedes interpreted this regularity as a sign that the MiG-25 had successfully simulated a shoot-down. The Swedes themselves would typically assert their neutrality by dispatching Saab 37 Viggens from Ängelholm, Norrköping or Ronneby. Limited by a top speed of Mach 2.1 and a service ceiling of , the Viggen pilots would line up for a frontal attack and rely on their state-of-the-art avionics to climb at the right time and attain a missile lock on the SR-71. Precise timing and target illumination would be maintained with target location data supplied to the Viggen's fire-control computer from ground-based radars, with the most common site for the lock-on being the thin stretch of international airspace between Öland and Gotland. Out of 322 recorded Baltic Express sorties between 1977 and 1988, the Swedish Air Force claims that it succeeded in attaining missile lock on the SR-71 in 51 of them. However, with a combined closing speed of Mach 5, the Swedes were reliant on the Blackbird not changing course. On 29 June 1987, an SR-71 was on a mission around the Baltic Sea to spy on Soviet postings when one of the engines exploded. The aircraft, which was at altitude, rapidly lost height, turned 180° to the left, and headed over Gotland in search of the Swedish coast. Thus, Swedish airspace was violated, whereupon two unarmed Saab JA 37 Viggens, on an exercise near Västervik, were ordered to the area. Their mission was to carry out an incident-preparedness check and identify an aircraft of high interest. It was found that the plane was in obvious distress, and a decision was made that the Swedish Air Force would escort the plane out of the Baltic Sea. A second pair of armed JA 37s from Ängelholm replaced the first pair and completed the escort to Danish airspace. The event had been classified for over 30 years; when the report was unsealed, data from the NSA showed that multiple MiG-25s, under orders to shoot down the SR-71 or force it to land, had taken off right after the engine failure. A MiG-25 had achieved a missile lock on the damaged SR-71, but as the aircraft was under escort, no missiles were fired. On 28 November 2018, the four Swedish pilots involved were awarded medals from the USAF. Initial retirement The two most widely proposed reasons for the SR-71's retirement in 1989, offered by the Air Force to Congress, were that the plane was too expensive to build and maintain, and had been rendered redundant by other evolving reconnaissance methods, such as unmanned vehicles (UAVs) and satellites. Another view held by officers and legislators is that the SR-71 was terminated due to Pentagon politics. 
In 1996, a former 1st-SRS and 9th-SRW commander, Graham, presented a strongly supported opinion that the SR-71 provided some intelligence capabilities that none of its alternatives could provide in the 1990s, when the SR-71 was retired. Opinion remained divided as to how crucial, or disposable, those unique advantages properly were. Graham noted that in the 1970s and early 1980s, in order to be selected into the SR-71 program, a pilot or navigator (RSO) had to be a top-quality USAF officer, so SR-71 squadron and wing commanders often pursued career advancement with promotion into higher positions within the USAF and the Pentagon. These generals were adept at communicating the value of the SR-71 to a USAF command staff and a Congress who often lacked a basic understanding of how the SR-71 worked and what it did. However, by the mid-1980s, these "SR-71 generals" all had retired, and a new generation of USAF generals had come to believe that the SR-71 had become redundant, and wanted to pursue newer, top secret programs like the new B-2 Spirit strategic bomber program. Graham said that the last-mentioned one was only a sales pitch, not a fact, at the time in the 1990s. The USAF may have seen the SR-71 as a bargaining chip to ensure the survival of other priorities. Also, the SR-71 program's "product", which was operational and strategic intelligence, was not seen by these generals as being very valuable to the USAF. The primary consumers of this intelligence were the CIA, NSA, and DIA. A general misunderstanding of the nature of aerial reconnaissance and a lack of knowledge about the SR-71 in particular (due to its secretive development and operations) was used by detractors to discredit the aircraft, with the assurance given that a replacement was under development. Dick Cheney told the Senate Appropriations Committee that the SR-71 cost $85,000 per hour to operate. Opponents estimated the aircraft's support cost at $400 to $700 million per year, though the cost was actually closer to $300 million. The SR-71, while much more capable than the Lockheed U-2 in terms of range, speed, and survivability, suffered the lack of a data link, which the U-2 had been upgraded to carry. This meant that much of the SR-71's imagery and radar data could not be used in real time, but had to wait until the aircraft returned to base. This lack of immediate real-time capability was used as one of the justifications to close down the program. The counterargument was that the longer the SR-71 was not upgraded as aggressively as it ought to have been, the more people could say that it was obsolescent, which was in their interest as champions of other programs (a self-fulfilling bias). Attempts to add a datalink to the SR-71 were stymied early on by the same factions in the Pentagon and Congress who were already set on the program's demise, even in the early 1980s. These same factions also forced expensive sensor upgrades to the SR-71, which did little to increase its mission capabilities, but could be used as justification for complaining about the cost of the program. In 1988, Congress was convinced to allocate $160,000 to keep six SR-71s and a trainer model in flyable storage that could become flightworthy within 60 days. However, the USAF refused to spend the money. 
While the SR-71 survived attempts to retire it in 1988, partly due to the unmatched ability to provide high-quality coverage of the Kola Peninsula for the US Navy, the decision to retire the SR-71 from active duty came in 1989, with the last missions flown in October that year. Four months after the plane's retirement, General Norman Schwarzkopf Jr., was told that the expedited reconnaissance, which the SR-71 could have provided, was unavailable during Operation Desert Storm. The SR-71 program's main operational capabilities came to a close at the end of fiscal year 1989 (October 1989). The 1st Strategic Reconnaissance Squadron (1 SRS) kept its pilots and aircraft operational and active, and flew some operational reconnaissance missions through the end of 1989 and into 1990, due to uncertainty over the timing of the final termination of funding for the program. The squadron finally closed in mid-1990, and the aircraft were distributed to static display locations, with a number kept in reserve storage. Reactivation Due to unease over political situations in the Middle East and North Korea, the U.S. Congress re-examined the SR-71 beginning in 1993. Rear Admiral Thomas F. Hall addressed the question of why the SR-71 was retired, saying it was under "the belief that, given the time delay associated with mounting a mission, conducting a reconnaissance, retrieving the data, processing it, and getting it out to a field commander, that you had a problem in timelines that was not going to meet the tactical requirements on the modern battlefield. And the determination was that if one could take advantage of technology and develop a system that could get that data back real time... that would be able to meet the unique requirements of the tactical commander." Hall also stated they were "looking at alternative means of doing [the job of the SR-71]." Macke told the committee that they were "flying U-2s, RC-135s, [and] other strategic and tactical assets" to collect information in some areas. Senator Robert Byrd and other senators complained that the "better than" successor to the SR-71 had yet to be developed at the cost of the "good enough" serviceable aircraft. They maintained that, in a time of constrained military budgets, designing, building, and testing an aircraft with the same capabilities as the SR-71 would be impossible. Congress's disappointment with the lack of a suitable replacement for the Blackbird was cited concerning whether to continue funding imaging sensors on the U-2. Congressional conferees stated the "experience with the SR-71 serves as a reminder of the pitfalls of failing to keep existing systems up-to-date and capable in the hope of acquiring other capabilities." It was agreed to add $100 million to the budget to return three SR-71s to service, but it was emphasized that this "would not prejudice support for long-endurance UAVs" [such as the Global Hawk]. The funding was later cut to $72.5 million. The Skunk Works was able to return the aircraft to service under budget at $72 million. Retired USAF Colonel Jay Murphy was made the Program Manager for Lockheed's reactivation plans. Retired USAF Colonels Don Emmons and Barry MacKean were put under government contract to remake the plane's logistic and support structure. Still-active USAF pilots and Reconnaissance Systems Officers (RSOs) who had worked with the aircraft were asked to volunteer to fly the reactivated planes. 
The aircraft was under the command and control of the 9th Reconnaissance Wing at Beale Air Force Base and flew out of a renovated hangar at Edwards Air Force Base. Modifications were made to provide a data-link with "near real-time" transmission of the Advanced Synthetic Aperture Radar's imagery to sites on the ground. Final retirement The reactivation met much resistance: the USAF had not budgeted for the aircraft, and UAV developers worried that their programs would suffer if money was shifted to support the SR-71s. Also, with the allocation requiring yearly reaffirmation by Congress, long-term planning for the SR-71 was difficult. In 1996, the USAF claimed that specific funding had not been authorized, and moved to ground the program. Congress reauthorized the funds, but, in October 1997, President Bill Clinton attempted to use the line-item veto to cancel the $39 million (~$ in ) allocated for the SR-71. In June 1998, the U.S. Supreme Court ruled that the line-item veto was unconstitutional. All this left the SR-71's status uncertain until September 1998, when the USAF called for the funds to be redistributed; the USAF permanently retired it in 1998. NASA operated the two last airworthy Blackbirds until 1999. All other Blackbirds have been moved to museums except for the two SR-71s and a few D-21 drones retained by the NASA Dryden Flight Research Center (later renamed the Armstrong Flight Research Center). Timeline 1950s–1960s 24 December 1957: First J58 engine run 1 May 1960: Francis Gary Powers is shot down in a Lockheed U-2 over the Soviet Union 13 June 1962: SR-71 mock-up reviewed by the USAF 30 July 1962: J58 completes pre-flight testing 28 December 1962: Lockheed signs contract to build six SR-71 aircraft 25 July 1964: President Johnson makes public announcement of SR-71 29 October 1964: SR-71 prototype (AF Ser. No. 61-7950) delivered to Air Force Plant 42 at Palmdale, California 7 December 1964: Beale AFB, California, announced as base for SR-71 22 December 1964: First flight of the SR-71, with Lockheed test pilot Robert J "Bob" Gilliland at Palmdale, California 21 July 1967: Jim Watkins and Dave Dempster fly first international sortie in SR-71A, AF Ser. No. 61-7972, when the Astro-Inertial Navigation System (ANS) fails on a training mission and they accidentally fly into Mexican airspace 5 February 1968: Lockheed ordered to destroy A-12, YF-12, and SR-71 tooling 8 March 1968: First SR-71A (AF Ser. No. 61-7978) arrives at Kadena AB, Okinawa to replace A-12s 21 March 1968: First SR-71 (AF Ser. No. 61-7976) operational mission flown from Kadena AB over Vietnam 29 May 1968: CMSgt Bill Gornik begins the tie-cutting tradition of Habu crews' neckties 13 December 1969: Two SR-71s deployed to Taiwan. 1970s–1980s 3 December 1975: First flight of SR-71A (AF Ser. No. 61-7959) in "big tail" configuration 20 April 1976: TDY operations started at RAF Mildenhall, United Kingdom with SR-71A, AF Ser. No. 61-7972 27–28 July 1976: SR-71A sets speed and altitude records (altitude in horizontal flight: and speed over a straight course: ) August 1980: Honeywell starts conversion of AFICS to DAFICS 15 January 1982: SR-71B, AF Ser. No. 61-7956, flies its 1,000th sortie 21 April 1989: SR-71, AF Ser. No. 
61-7974, is lost due to an engine explosion after taking off from Kadena AB, the last Blackbird to be lost 22 November 1989: USAF SR-71 program officially terminated 1990s 6 March 1990: Last SR-71 flight under Senior Crown program, setting four speed records en route to the Smithsonian Institution 25 July 1991: SR-71B, AF Ser. No. 61-7956/NASA No. 831 officially delivered to NASA Dryden Flight Research Center at Edwards AFB, California October 1991: NASA engineer Marta Bohn-Meyer becomes the first female SR-71 crew member 28 September 1994: Congress votes to allocate $100 million for reactivation of three SR-71s 28 June 1995: First reactivated SR-71 returns to USAF as Detachment 2 9 October 1999: The last flight of the SR-71 (AF Serial No. 61-7980/NASA 844) Records The SR-71 was the world's fastest and highest-flying air-breathing operational manned aircraft throughout its career and it still holds that record. On 28 July 1976, SR-71 serial number , piloted by then Captain Robert Helt, broke the world record: an "absolute altitude record" of . Several aircraft have exceeded this altitude in zoom climbs, but not in sustained flight. That same day SR-71 serial number set an absolute speed record of , approximately Mach 3.3. SR-71 pilot Brian Shul states in his book The Untouchables that he flew in excess of Mach 3.5 on 15 April 1986 over Libya to evade a missile. The SR-71 also holds the "speed over a recognized course" record for flying from New York to London—distance , , and an elapsed time of 1 hour 54 minutes and 56.4 seconds—set on 1 September 1974, while flown by USAF pilot James V. Sullivan and Noel F. Widdifield, reconnaissance systems officer (RSO). This equates to an average speed of about Mach 2.72, including deceleration for in-flight refueling. Peak speeds during this flight were likely closer to the declassified top speed of over Mach 3.2. For comparison, the best commercial Concorde flight time was 2 hours 52 minutes and the Boeing 747 averages 6 hours 15 minutes. On 26 April 1971, 61–7968, flown by majors Thomas B. Estes and Dewain C. Vick, flew over in 10 hours and 30 minutes. This flight was awarded the 1971 Mackay Trophy for the "most meritorious flight of the year" and the 1972 Harmon Trophy for "most outstanding international achievement in the art/science of aeronautics". When the SR-71 was retired in 1990, one Blackbird was flown from its birthplace at USAF Plant 42 in Palmdale, California, to go on exhibit at what is now the Smithsonian Institution's Steven F. Udvar-Hazy Center in Chantilly, Virginia. On 6 March 1990, Lt. Col. Raymond E. and Lt. Col. Joseph T. Vida piloted SR-71 S/N on its final Senior Crown flight and set four new speed records in the process: Los Angeles, California, to Washington, D.C., distance , average speed , and an elapsed time of 64 minutes 20 seconds. West Coast to East Coast, distance , average speed , and an elapsed time of 67 minutes 54 seconds. Kansas City, Missouri, to Washington, D.C., distance , average speed , and an elapsed time of 25 minutes 59 seconds. St. Louis, Missouri, to Cincinnati, Ohio, distance , average speed , and an elapsed time of 8 minutes 32 seconds. These four speed records were accepted by the National Aeronautic Association (NAA), the recognized body for aviation records in the United States. Additionally, Air & Space/Smithsonian reported that the USAF clocked the SR-71 at one point in its flight reaching . 
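The arithmetic behind such records is simple: average speed is distance divided by elapsed time, and the equivalent Mach number depends on the local speed of sound at cruise altitude. The minimal sketch below runs that calculation for the New York–London record, using the elapsed time quoted above together with an assumed course distance and an assumed high-altitude speed of sound (both are assumptions for illustration only, since those figures are not preserved in this text); with these inputs the result lands close to the quoted average of about Mach 2.72.

# Illustrative average-speed calculation for the New York-London record.
# The elapsed time is taken from the article; the course distance and the
# speed of sound at cruise altitude are assumed values for illustration only.
elapsed_s = 1 * 3600 + 54 * 60 + 56.4   # 1 h 54 min 56.4 s, as stated above

course_distance_km = 5570.0             # assumed great-circle course distance
speed_of_sound_ms = 295.0               # assumed speed of sound near cruise altitude

avg_speed_ms = course_distance_km * 1000 / elapsed_s
avg_speed_kmh = avg_speed_ms * 3.6

print(f"average speed: {avg_speed_kmh:.0f} km/h")                           # ~2900 km/h
print(f"approximate average Mach: {avg_speed_ms / speed_of_sound_ms:.2f}")  # ~2.7

The true record distance and the ambient speed of sound on the day would shift the exact number slightly, which is why the result is only an approximation of the quoted Mach 2.72 average.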
After the Los Angeles–Washington flight, on 6 March 1990, Senator John Glenn addressed the United States Senate, chastising the Department of Defense for not using the SR-71 to its full potential. Successor Speculation existed regarding a replacement for the SR-71, including a rumored aircraft codenamed Aurora. The limitations of reconnaissance satellites, which take up to 24 hours to arrive in the proper orbit to photograph a particular target, make them slower to respond to demand than reconnaissance planes. The fly-over orbit of spy satellites may also be predicted and can allow assets to be hidden when the satellite passes, a drawback not shared by aircraft. Thus, there are doubts that the US has abandoned the concept of spy planes to complement reconnaissance satellites. Unmanned aerial vehicles (UAVs) are also used for aerial reconnaissance in the 21st century, being able to overfly hostile territory without putting human pilots at risk, as well as being smaller and harder to detect than manned aircraft. On 1 November 2013, media outlets reported that Skunk Works had been working on an unmanned reconnaissance airplane it had named the SR-72, which would fly twice as fast as the SR-71, at Mach 6. However, the USAF is officially pursuing the Northrop Grumman RQ-180 UAV to assume the SR-71's strategic ISR role. Variants SR-71A was the main production variant. SR-71B was a trainer variant. SR-71C was a hybrid trainer aircraft composed of the rear fuselage of the first YF-12A (S/N ) and the forward fuselage from an SR-71 static test unit. The YF-12 had been wrecked in a 1966 landing accident. It has been reported that this Blackbird was seemingly not quite straight and had a yaw at supersonic speeds. However, this was caused by a mis-aligned pitot tube reporting a 4° yaw that was not actually present. It was soon corrected and then flew normally. It was nicknamed "The Bastard". Operators United States Air Force Air Force Systems Command Air Force Flight Test Center – Edwards AFB, California 4786th Test Squadron 1965–1970 SR-71 Flight Test Group 1970–1990 Strategic Air Command 9th Strategic Reconnaissance Wing – Beale AFB, California 1st Strategic Reconnaissance Squadron 1966–1990 99th Strategic Reconnaissance Squadron 1966–1971 Detachment 1, Kadena Air Base, Japan 1968–1990 Detachment 4, RAF Mildenhall, England 1976–1990 Air Combat Command Detachment 2, 9th Reconnaissance Wing – Edwards AFB, California 1995–1997 (Forward Operating Locations at Eielson AFB, Alaska; Griffiss AFB, New York; Seymour Johnson AFB, North Carolina; Diego Garcia and Bodø, Norway 1973–1990) National Aeronautics and Space Administration (NASA) Dryden Flight Research Center – Edwards AFB, California 1991–1999 Accidents and aircraft disposition Twelve SR-71s were lost and one pilot died in accidents during the aircraft's service career. Eleven of these accidents happened between 1966 and 1972. Some secondary references use incorrect 64-series aircraft serial numbers (e.g. SR-71C ). After completion of all USAF and NASA SR-71 operations at Edwards AFB, the SR-71 Flight Simulator was moved in July 2006 to the Frontiers of Flight Museum at Love Field Airport in Dallas, Texas. Specifications (SR-71A)
Technology
Specific aircraft
null
55309
https://en.wikipedia.org/wiki/Blood%20type
Blood type
A blood type (also known as a blood group) is a classification of blood, based on the presence and absence of antibodies and inherited antigenic substances on the surface of red blood cells (RBCs). These antigens may be proteins, carbohydrates, glycoproteins, or glycolipids, depending on the blood group system. Some of these antigens are also present on the surface of other types of cells of various tissues. Several of these red blood cell surface antigens can stem from one allele (or an alternative version of a gene) and collectively form a blood group system. Blood types are inherited and represent contributions from both parents of an individual. A total of 45 human blood group systems are recognized by the International Society of Blood Transfusion (ISBT). The two most important blood group systems are ABO and Rh; they determine someone's blood type (A, B, AB, and O, with + or − denoting RhD status) for suitability in blood transfusion. Blood group systems A complete blood type would describe each of the 45 blood groups, and an individual's blood type is one of many possible combinations of blood-group antigens. Almost always, an individual has the same blood group for life, but very rarely an individual's blood type changes through addition or suppression of an antigen in infection, malignancy, or autoimmune disease. Another more common cause of blood type change is a bone marrow transplant. Bone-marrow transplants are performed for many leukemias and lymphomas, among other diseases. If a person receives bone marrow from someone of a different ABO type (e.g., a type O patient receives type A bone marrow), the patient's blood type should eventually become the donor's type, as the patient's hematopoietic stem cells (HSCs) are destroyed, either by ablation of the bone marrow or by the donor's T-cells. Once all the patient's original red blood cells have died, they will have been fully replaced by new cells derived from the donor HSCs. Provided the donor had a different ABO type, the new cells' surface antigens will be different from those on the surface of the patient's original red blood cells. Some blood types are associated with inheritance of other diseases; for example, the Kell antigen is sometimes associated with McLeod syndrome. For another example, Von Willebrand disease may be more severe or apparent in people with blood type O. Certain blood types may affect susceptibility to infections. For example, people with blood type O may be less susceptible to pro-thrombotic events induced by COVID-19 or long COVID. Another example is the resistance to specific malaria species seen in individuals lacking the Duffy antigen. The Duffy antigen, presumably as a result of natural selection, is less common in population groups from areas having a high incidence of malaria. ABO blood group system The ABO blood group system involves two antigens and two antibodies found in human blood. The two antigens are antigen A and antigen B. The two antibodies are antibody A and antibody B. The antigens are present on the red blood cells and the antibodies in the serum. Regarding the antigen property of the blood, all human beings can be classified into four groups: those with antigen A (group A), those with antigen B (group B), those with both antigen A and B (group AB), and those with neither antigen (group O). 
The antibodies present together with the antigens are found as follows: Antigen A with antibody B Antigen B with antibody A Antigen AB with neither antibody A nor B Antigen null (group O) with both antibody A and B There is an agglutination reaction between similar antigen and antibody (for example, antigen A agglutinates the antibody A and antigen B agglutinates the antibody B). Thus, transfusion can be considered safe as long as the serum of the recipient does not contain antibodies for the blood cell antigens of the donor. The ABO system is the most important blood-group system in human-blood transfusion. The associated anti-A and anti-B antibodies are usually immunoglobulin M, abbreviated IgM, antibodies. It has been hypothesized that ABO IgM antibodies are produced in the first years of life by sensitization to environmental substances such as food, bacteria, and viruses, although blood group compatibility rules are applied to newborn and infants as a matter of practice. The original terminology used by Karl Landsteiner in 1901 for the classification was A/B/C; in later publications "C" became "O". Type O is often called 0 (zero, or null) in other languages. Rh blood group system The Rh system (Rh meaning Rhesus) is the second most significant blood-group system in human-blood transfusion with currently 50 antigens. The most significant Rh antigen is the D antigen, because it is the most likely to provoke an immune system response of the five main Rh antigens. It is common for D-negative individuals not to have any anti-D IgG or IgM antibodies, because anti-D antibodies are not usually produced by sensitization against environmental substances. However, D-negative individuals can produce IgG anti-D antibodies following a sensitizing event: possibly a fetomaternal transfusion of blood from a fetus in pregnancy or occasionally a blood transfusion with D positive RBCs. Rh disease can develop in these cases. Rh negative blood types are much less common in Asian populations (0.3%) than they are in European populations (15%). The presence or absence of the Rh(D) antigen is signified by the + or − sign, so that, for example, the A− group is ABO type A and does not have the Rh (D) antigen. ABO and Rh distribution by country As with many other genetic traits, the distribution of ABO and Rh blood groups varies significantly between populations. While theories are still debated in the scientific community as to why blood types vary geographically and why they emerged in the first place, evidence suggests that the evolution of blood types may be driven by genetic selection for those types whose antigens confer resistance to particular diseases in certain regions – such as the prevalence of blood type O in malaria-endemic countries where individuals of blood type O exhibit the highest rates of survival. Other blood group systems 42 blood-group systems have been identified by the International Society for Blood Transfusion in addition to the ABO and Rh systems. Thus, in addition to the ABO antigens and Rh antigens, many other antigens are expressed on the RBC surface membrane. For example, an individual can be AB, D positive, and at the same time M and N positive (MNS system), K positive (Kell system), Lea or Leb negative (Lewis system), and so on, being positive or negative for each blood group system antigen. Many of the blood group systems were named after the patients in whom the corresponding antibodies were initially encountered. 
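Because an individual is positive or negative for the antigens of each blood group system independently, an extended blood type is naturally described as a mapping from system to phenotype rather than as a single label. The following minimal sketch shows one possible in-memory representation; the structure and the example values are purely illustrative and do not follow any particular laboratory or ISBT data standard.

# Illustrative representation of an extended blood type across several systems.
# The system names follow the article; the concrete values are an invented example.
extended_phenotype = {
    "ABO": "AB",
    "RhD": "positive",
    "MNS": {"M": True, "N": True},
    "Kell": {"K": True},
    "Lewis": {"Le(a)": False, "Le(b)": False},
}

def summarize(phenotype):
    """Return a short human-readable summary of the phenotype."""
    return "; ".join(f"{system}: {value}" for system, value in phenotype.items())

print(summarize(extended_phenotype))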
Blood group systems other than ABO and Rh pose a potential, yet relatively low, risk of complications upon mixing of blood from different people. Following is a comparison of clinically relevant characteristics of antibodies against the main human blood group systems: Clinical significance Blood transfusion Transfusion medicine is a specialized branch of hematology that is concerned with the study of blood groups, along with the work of a blood bank to provide a transfusion service for blood and other blood products. Across the world, blood products must be prescribed by a medical doctor (licensed physician or surgeon) in a similar way as medicines. Much of the routine work of a blood bank involves testing blood from both donors and recipients to ensure that every individual recipient is given blood that is compatible and as safe as possible. If a unit of incompatible blood is transfused between a donor and recipient, a severe acute hemolytic reaction with hemolysis (RBC destruction), kidney failure and shock is likely to occur, and death is a possibility. Antibodies can be highly active and can attack RBCs and bind components of the complement system to cause massive hemolysis of the transfused blood. Patients should ideally receive their own blood or type-specific blood products to minimize the chance of a transfusion reaction. It is also possible to use the patient's own blood for transfusion. This is called autotransfusion, which is always compatible with the patient. The procedure of washing a patient's own red blood cells goes as follows: The patient's lost blood is collected and washed with a saline solution. The washing procedure yields concentrated washed red blood cells. The last step is reinfusing the packed red blood cells into the patient. There are multiple ways to wash red blood cells. The two main ways are centrifugation and filtration methods. This procedure can be performed with microfiltration devices like the Hemoclear filter. Risks can be further reduced by cross-matching blood, but this may be skipped when blood is required for an emergency. Cross-matching involves mixing a sample of the recipient's serum with a sample of the donor's red blood cells and checking if the mixture agglutinates, or forms clumps. If agglutination is not obvious by direct vision, blood bank technologist usually check for agglutination with a microscope. If agglutination occurs, that particular donor's blood cannot be transfused to that particular recipient. In a blood bank it is vital that all blood specimens are correctly identified, so labelling has been standardized using a barcode system known as ISBT 128. The blood group may be included on identification tags or on tattoos worn by military personnel, in case they should need an emergency blood transfusion. Frontline German Waffen-SS had blood group tattoos during World War II. Rare blood types can cause supply problems for blood banks and hospitals. For example, Duffy-negative blood occurs much more frequently in people of African origin, and the rarity of this blood type in the rest of the population can result in a shortage of Duffy-negative blood for these patients. Similarly, for RhD negative people there is a risk associated with travelling to parts of the world where supplies of RhD negative blood are rare, particularly East Asia, where blood services may endeavor to encourage Westerners to donate blood. Hemolytic disease of the newborn (HDN) A pregnant woman may carry a fetus with a blood type which is different from her own. 
Typically, this is an issue if a Rh- mother has a child with a Rh+ father, and the fetus ends up being Rh+ like the father. In those cases, the mother can make IgG blood group antibodies. This can happen if some of the fetus' blood cells pass into the mother's blood circulation (e.g. a small fetomaternal hemorrhage at the time of childbirth or obstetric intervention), or sometimes after a therapeutic blood transfusion. This can cause Rh disease or other forms of hemolytic disease of the newborn (HDN) in the current pregnancy and/or subsequent pregnancies. Sometimes this is lethal for the fetus; in these cases it is called hydrops fetalis. If a pregnant woman is known to have anti-D antibodies, the Rh blood type of a fetus can be tested by analysis of fetal DNA in maternal plasma to assess the risk to the fetus of Rh disease. One of the major advances of twentieth century medicine was to prevent this disease by stopping the formation of Anti-D antibodies by D negative mothers with an injectable medication called Rho(D) immune globulin. Antibodies associated with some blood groups can cause severe HDN, others can only cause mild HDN and others are not known to cause HDN. Blood products To provide maximum benefit from each blood donation and to extend shelf-life, blood banks fractionate some whole blood into several products. The most common of these products are packed RBCs, plasma, platelets, cryoprecipitate, and fresh frozen plasma (FFP). FFP is quick-frozen to retain the labile clotting factors V and VIII, which are usually administered to patients who have a potentially fatal clotting problem caused by a condition such as advanced liver disease, overdose of anticoagulant, or disseminated intravascular coagulation (DIC). Units of packed red cells are made by removing as much of the plasma as possible from whole blood units. Clotting factors synthesized by modern recombinant methods are now in routine clinical use for hemophilia, as the risks of infection transmission that occur with pooled blood products are avoided. Red blood cell compatibility Blood group AB individuals have both A and B antigens on the surface of their RBCs, and their blood plasma does not contain any antibodies against either A or B antigen. Therefore, an individual with type AB blood can receive blood from any group (with AB being preferable), but cannot donate blood to any group other than AB. They are known as universal recipients. Blood group A individuals have the A antigen on the surface of their RBCs, and blood serum containing IgM antibodies against the B antigen. Therefore, a group A individual can receive blood only from individuals of groups A or O (with A being preferable), and can donate blood to individuals with type A or AB. Blood group B individuals have the B antigen on the surface of their RBCs, and blood serum containing IgM antibodies against the A antigen. Therefore, a group B individual can receive blood only from individuals of groups B or O (with B being preferable), and can donate blood to individuals with type B or AB. Blood group O (or blood group zero in some countries) individuals do not have either A or B antigens on the surface of their RBCs, and their blood serum contains IgM anti-A and anti-B antibodies. Therefore, a group O individual can receive blood only from a group O individual, but can donate blood to individuals of any ABO blood group (i.e., A, B, O or AB). 
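The donor and recipient rules listed above reduce to a simple check: donor red cells must not carry any ABO antigen that the recipient lacks (and therefore has antibodies against), and, as discussed in the following paragraphs for the D antigen, RhD-positive cells are generally not given to RhD-negative recipients. The sketch below encodes only this simplified ABO/RhD rule; it ignores the other blood group systems, atypical antibodies, and cross-matching described elsewhere in this article, and the function name is illustrative rather than any standard API.

# Simplified red-cell compatibility check covering only the ABO and RhD rules
# described in the text. Illustrative only; real transfusion practice also
# relies on antibody screening and cross-matching.

ABO_ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def red_cells_compatible(donor_abo, donor_rhd_pos, recipient_abo, recipient_rhd_pos):
    # A recipient has antibodies against every ABO antigen their own cells lack,
    # so donor cells must not carry any antigen the recipient lacks.
    if ABO_ANTIGENS[donor_abo] - ABO_ANTIGENS[recipient_abo]:
        return False
    # RhD-positive red cells are generally withheld from RhD-negative recipients
    # to avoid sensitization to the D antigen.
    if donor_rhd_pos and not recipient_rhd_pos:
        return False
    return True

# Reproduces the "universal donor" and "universal recipient" behaviour:
assert red_cells_compatible("O", False, "AB", True)      # O negative to anyone
assert not red_cells_compatible("A", False, "O", False)  # A cells carry a foreign antigen for O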
If a patient needs an urgent blood transfusion, and if the time taken to process the recipient's blood would cause a detrimental delay, O negative blood can be used. Because it is compatible with anyone, there are some concerns that O negative blood is often overused and consequently is always in short supply. According to the American Association of Blood Banks and the British Chief Medical Officer's National Blood Transfusion Committee, the use of group O RhD negative red cells should be restricted to persons with O negative blood, women who might be pregnant, and emergency cases in which blood-group testing is genuinely impracticable. Table note 1. Assumes absence of atypical antibodies that would cause an incompatibility between donor and recipient blood, as is usual for blood selected by cross matching. An Rh D-negative patient who does not have any anti-D antibodies (never being previously sensitized to D-positive RBCs) can receive a transfusion of D-positive blood once, but this would cause sensitization to the D antigen, and a female patient would become at risk for hemolytic disease of the newborn. If a D-negative patient has developed anti-D antibodies, a subsequent exposure to D-positive blood would lead to a potentially dangerous transfusion reaction. Rh D-positive blood should never be given to D-negative women of child-bearing age or to patients with D antibodies, so blood banks must conserve Rh-negative blood for these patients. In extreme circumstances, such as for a major bleed when stocks of D-negative blood units are very low at the blood bank, D-positive blood might be given to D-negative females above child-bearing age or to Rh-negative males, providing that they did not have anti-D antibodies, to conserve D-negative blood stock in the blood bank. The converse is not true; Rh D-positive patients do not react to D negative blood. This same matching is done for other antigens of the Rh system as C, c, E and e and for other blood group systems with a known risk for immunization such as the Kell system in particular for females of child-bearing age or patients with known need for many transfusions. Plasma compatibility Blood plasma compatibility is the inverse of red blood cell compatibility. Type AB plasma carries neither anti-A nor anti-B antibodies and can be transfused to individuals of any blood group; but type AB patients can only receive type AB plasma. Type O carries both antibodies, so individuals of blood group O can receive plasma from any blood group, but type O plasma can be used only by type O recipients. Table note 1. Assuming absence of strong atypical antibodies in donor plasma Rh D antibodies are uncommon, so generally neither D negative nor D positive blood contain anti-D antibodies. If a potential donor is found to have anti-D antibodies or any strong atypical blood group antibody by antibody screening in the blood bank, they would not be accepted as a donor (or in some blood banks the blood would be drawn but the product would need to be appropriately labeled); therefore, donor blood plasma issued by a blood bank can be selected to be free of D antibodies and free of other atypical antibodies, and such donor plasma issued from a blood bank would be suitable for a recipient who may be D positive or D negative, as long as blood plasma and the recipient are ABO compatible. Universal donors and universal recipients In transfusions of packed red blood cells, individuals with type O Rh D negative blood are often called universal donors. 
Those with type AB Rh D positive blood are called universal recipients. However, these terms are only generally true with respect to possible reactions of the recipient's anti-A and anti-B antibodies to transfused red blood cells, and also possible sensitization to Rh D antigens. One exception is individuals with hh antigen system (also known as the Bombay phenotype) who can only receive blood safely from other hh donors, because they form antibodies against the H antigen present on all red blood cells. Blood donors with exceptionally strong anti-A, anti-B or any atypical blood group antibody may be excluded from blood donation. In general, while the plasma fraction of a blood transfusion may carry donor antibodies not found in the recipient, a significant reaction is unlikely because of dilution. Additionally, red blood cell surface antigens other than A, B and Rh D, might cause adverse reactions and sensitization, if they can bind to the corresponding antibodies to generate an immune response. Transfusions are further complicated because platelets and white blood cells (WBCs) have their own systems of surface antigens, and sensitization to platelet or WBC antigens can occur as a result of transfusion. For transfusions of plasma, this situation is reversed. Type O plasma, containing both anti-A and anti-B antibodies, can only be given to O recipients. The antibodies will attack the antigens on any other blood type. Conversely, AB plasma can be given to patients of any ABO blood group, because it does not contain any anti-A or anti-B antibodies. Blood typing Typically, blood type tests are performed through addition of a blood sample to a solution containing antibodies corresponding to each antigen. The presence of an antigen on the surface of the blood cells is indicated by agglutination. Blood group genotyping In addition to the current practice of serologic testing of blood types, the progress in molecular diagnostics allows the increasing use of blood group genotyping. In contrast to serologic tests reporting a direct blood type phenotype, genotyping allows the prediction of a phenotype based on the knowledge of the molecular basis of the currently known antigens. This allows a more detailed determination of the blood type and therefore a better match for transfusion, which can be crucial in particular for patients with needs for many transfusions to prevent allo-immunization. History Blood types were first discovered by an Austrian physician, Karl Landsteiner, working at the Pathological-Anatomical Institute of the University of Vienna (now Medical University of Vienna). In 1900, he found that blood sera from different persons would clump together (agglutinate) when mixed in test tubes, and not only that, some human blood also agglutinated with animal blood. He wrote a two-sentence footnote: This was the first evidence that blood variation exists in humans. The next year, in 1901, he made a definitive observation that blood serum of an individual would agglutinate with only those of certain individuals. Based on this he classified human bloods into three groups, namely group A, group B, and group C. He defined that group A blood agglutinates with group B, but never with its own type. Similarly, group B blood agglutinates with group A. Group C blood is different in that it agglutinates with both A and B. This was the discovery of blood groups for which Landsteiner was awarded the Nobel Prize in Physiology or Medicine in 1930. 
(C was later renamed to O after the German Ohne, meaning without, or zero, or null.) Another group (later named AB) was discovered a year later by Landsteiner's students Adriano Sturli and Alfred von Decastello, who did not designate a name for it (simply referring to it as "no particular type"). Thus, after Landsteiner, three blood types were initially recognised, namely A, B, and C. Czech serologist Jan Janský was the first to recognise and designate four blood types, in 1907, which he published in a local journal using the Roman numerals I, II, III, and IV (corresponding to modern O, A, B, and AB respectively). Unknown to Janský, American physician William L. Moss introduced an almost identical classification in 1910, but with Moss's I and IV corresponding to Janský's IV and I. Thus the existence of two systems immediately created confusion and potential danger in medical practice. Moss's system was adopted in Britain, France, and the US, while Janský's was preferred in most other European countries and some parts of the US. It was reported that "The practically universal use of the Moss classification at that time was completely and purposely cast aside. Therefore in place of bringing order out of chaos, chaos was increased in the larger cities." To resolve the confusion, the American Association of Immunologists, the Society of American Bacteriologists, and the Association of Pathologists and Bacteriologists made a joint recommendation in 1921 that the Jansky classification be adopted based on priority. But it was not followed, particularly where Moss's system had been used. In 1927, Landsteiner, who had moved to the Rockefeller Institute for Medical Research in New York, suggested, as a member of a committee of the National Research Council concerned with blood grouping, substituting Janský's and Moss's systems with the letters O, A, B, and AB, first introduced by Polish physician Ludwik Hirszfeld and German physician Emil von Dungern. There was further confusion over the use of O, which had been introduced in 1910. It was never clear whether it was meant to be the figure 0, German null for zero, or the upper-case letter O for ohne, meaning without; Landsteiner chose the latter. In 1928 the Permanent Commission on Biological Standardization adopted Landsteiner's proposal. This classification became widely accepted, and after the early 1950s it was universally followed. Hirszfeld and Dungern discovered the inheritance of blood types as Mendelian genetics in 1910 and the existence of sub-types of A in 1911. In 1927, Landsteiner, with Philip Levine, discovered the MN blood group system and the P system. Development of the Coombs test in 1945, the advent of transfusion medicine, and the understanding of ABO hemolytic disease of the newborn led to the discovery of more blood groups. The International Society of Blood Transfusion (ISBT) now recognizes 47 blood groups. Society and culture A popular pseudoscientific belief in Eastern Asian countries (especially in Japan and South Korea), known as 血液型 ketsuekigata / hyeoraekhyeong, is that a person's ABO blood type is predictive of their personality, character, and compatibility with others. Researchers have established that no scientific basis exists for blood type personality categorization, and studies have found no "significant relationship between personality and blood type, rendering the theory 'obsolete' and concluding that no basis exists to assume that personality is anything more than randomly associated with blood type."
Biology and health sciences
Fields of medicine
null
55313
https://en.wikipedia.org/wiki/Allergy
Allergy
Allergies, also known as allergic diseases, are various conditions caused by hypersensitivity of the immune system to typically harmless substances in the environment. These diseases include hay fever, food allergies, atopic dermatitis, allergic asthma, and anaphylaxis. Symptoms may include red eyes, an itchy rash, sneezing, coughing, a runny nose, shortness of breath, or swelling. Note that food intolerances and food poisoning are separate conditions. Common allergens include pollen and certain foods. Metals and other substances may also cause such problems. Food, insect stings, and medications are common causes of severe reactions. Their development is due to both genetic and environmental factors. The underlying mechanism involves immunoglobulin E antibodies (IgE), part of the body's immune system, binding to an allergen and then to a receptor on mast cells or basophils where it triggers the release of inflammatory chemicals such as histamine. Diagnosis is typically based on a person's medical history. Further testing of the skin or blood may be useful in certain cases. Positive tests, however, may not necessarily mean there is a significant allergy to the substance in question. Early exposure of children to potential allergens may be protective. Treatments for allergies include avoidance of known allergens and the use of medications such as steroids and antihistamines. In severe reactions, injectable adrenaline (epinephrine) is recommended. Allergen immunotherapy, which gradually exposes people to larger and larger amounts of allergen, is useful for some types of allergies such as hay fever and reactions to insect bites. Its use in food allergies is unclear. Allergies are common. In the developed world, about 20% of people are affected by allergic rhinitis, food allergy affects 10% of adults and 8% of children, and about 20% have or have had atopic dermatitis at some point in time. Depending on the country, about 1–18% of people have asthma. Anaphylaxis occurs in between 0.05–2% of people. Rates of many allergic diseases appear to be increasing. The word "allergy" was first used by Clemens von Pirquet in 1906. Signs and symptoms Many allergens such as dust or pollen are airborne particles. In these cases, symptoms arise in areas in contact with air, such as the eyes, nose, and lungs. For instance, allergic rhinitis, also known as hay fever, causes irritation of the nose, sneezing, itching, and redness of the eyes. Inhaled allergens can also lead to increased production of mucus in the lungs, shortness of breath, coughing, and wheezing. Aside from these ambient allergens, allergic reactions can result from foods, insect stings, and reactions to medications like aspirin and antibiotics such as penicillin. Symptoms of food allergy include abdominal pain, bloating, vomiting, diarrhea, itchy skin, and hives. Food allergies rarely cause respiratory (asthmatic) reactions, or rhinitis. Insect stings, food, antibiotics, and certain medicines may produce a systemic allergic response that is also called anaphylaxis; multiple organ systems can be affected, including the digestive system, the respiratory system, and the circulatory system. Depending on the severity, anaphylaxis can include skin reactions, bronchoconstriction, swelling, low blood pressure, coma, and death. This type of reaction can be triggered suddenly, or the onset can be delayed. The nature of anaphylaxis is such that the reaction can seem to be subsiding but may recur throughout a period of time. 
Skin Substances that come into contact with the skin, such as latex, are also common causes of allergic reactions, known as contact dermatitis or eczema. Skin allergies frequently cause rashes, or swelling and inflammation within the skin, in what is known as a "weal and flare" reaction characteristic of hives and angioedema. With insect stings, a large local reaction may occur in the form of an area of skin redness greater than 10 cm in size that can last one to two days. This reaction may also occur after immunotherapy. At the molecular level, the skin handles allergens much as the body handles other foreign material. The skin forms an effective barrier to the entry of most allergens, but the barrier can be breached; an insect sting, for example, can inject allergen directly into the affected spot. When an allergen enters the epidermis or dermis, it triggers a localized allergic reaction that activates the mast cells in the skin, resulting in an immediate increase in vascular permeability, leading to fluid leakage and swelling in the affected area. Mast-cell activation also stimulates a skin lesion called the wheal-and-flare reaction. In this reaction, the release of chemicals from local nerve endings via a nerve axon reflex causes vasodilation of surrounding cutaneous blood vessels, which produces redness of the surrounding skin. In some individuals, the allergic response also includes a secondary, more widespread and sustained edematous reaction. This usually occurs about 8 hours after the allergen originally comes in contact with the skin. When an allergen is ingested, a dispersed form of wheal-and-flare reaction, known as urticaria or hives, will appear when the allergen enters the bloodstream and eventually reaches the skin. Because the skin reacts to allergens in this predictable way, allergists can test for allergies by injecting a very small amount of an allergen into the skin. Even though these injections are very small and localized, they still carry a risk of causing systemic anaphylaxis. Cause Risk factors for allergies can be placed in two broad categories, namely host and environmental factors. Host factors include heredity, sex, race, and age, with heredity being by far the most significant. However, there has been a recent increase in the incidence of allergic disorders that cannot be explained by genetic factors alone. Four major environmental candidates are alterations in exposure to infectious diseases during early childhood, environmental pollution, allergen levels, and dietary changes. Dust mites Dust mite allergy, also known as house dust allergy, is a sensitization and allergic reaction to the droppings of house dust mites. The allergy is common and can trigger allergic reactions such as asthma, eczema, or itching. The mite's gut contains potent digestive enzymes (notably peptidase 1) that persist in their feces and are major inducers of allergic reactions such as wheezing. The mite's exoskeleton can also contribute to allergic reactions. Unlike scabies mites or skin follicle mites, house dust mites do not burrow under the skin and are not parasitic. Foods A wide variety of foods can cause allergic reactions, but 90% of allergic responses to foods are caused by cow's milk, soy, eggs, wheat, peanuts, tree nuts, fish, and shellfish. 
Other food allergies, affecting less than 1 person per 10,000 population, may be considered "rare". The most common food allergy in the US population is a sensitivity to crustacea. Although peanut allergies are notorious for their severity, peanut allergies are not the most common food allergy in adults or children. Severe or life-threatening reactions may be triggered by other allergens and are more common when combined with asthma. Rates of allergies differ between adults and children. Children can sometimes outgrow peanut allergies. Egg allergies affect one to two percent of children but are outgrown by about two-thirds of children by the age of 5. The sensitivity is usually to proteins in the white, rather than the yolk. Milk-protein allergies—distinct from lactose intolerance—are most common in children. Approximately 60% of milk-protein reactions are immunoglobulin E–mediated, with the remaining usually attributable to inflammation of the colon. Some people are unable to tolerate milk from goats or sheep as well as from cows, and many are also unable to tolerate dairy products such as cheese. Roughly 10% of children with a milk allergy will have a reaction to beef. Lactose intolerance, a common reaction to milk, is not a form of allergy at all, but due to the absence of an enzyme in the digestive tract. Those with tree nut allergies may be allergic to one or many tree nuts, including pecans, pistachios, and walnuts. In addition, seeds, including sesame seeds and poppy seeds, contain oils in which protein is present, which may elicit an allergic reaction. Allergens can be transferred from one food to another through genetic engineering; however, genetic modification can also remove allergens. Little research has been done on the natural variation of allergen concentrations in unmodified crops. Latex Latex can trigger an IgE-mediated cutaneous, respiratory, and systemic reaction. The prevalence of latex allergy in the general population is believed to be less than one percent. In a hospital study, 1 in 800 surgical patients (0.125 percent) reported latex sensitivity, although the sensitivity among healthcare workers is higher, between seven and ten percent. Researchers attribute this higher level to the exposure of healthcare workers to areas with significant airborne latex allergens, such as operating rooms, intensive-care units, and dental suites. These latex-rich environments may sensitize healthcare workers who regularly inhale allergenic proteins. The most prevalent response to latex is an allergic contact dermatitis, a delayed hypersensitive reaction appearing as dry, crusted lesions. This reaction usually lasts 48–96 hours. Sweating or rubbing the area under the glove aggravates the lesions, possibly leading to ulcerations. Anaphylactic reactions occur most often in sensitive patients who have been exposed to a surgeon's latex gloves during abdominal surgery, but other mucosal exposures, such as dental procedures, can also produce systemic reactions. Latex and banana sensitivity may cross-react. Furthermore, those with latex allergy may also have sensitivities to avocado, kiwifruit, and chestnut. These people often have perioral itching and local urticaria. Only occasionally have these food-induced allergies induced systemic responses. Researchers suspect that the cross-reactivity of latex with banana, avocado, kiwifruit, and chestnut occurs because latex proteins are structurally homologous with some other plant proteins. 
Medications About 10% of people report that they are allergic to penicillin; however, of that 10%, 90% turn out not to be. Serious allergies only occur in about 0.03%. Insect stings One of the main sources of human allergies is insects. An allergy to insects can be brought on by bites, stings, ingestion, and inhalation. Toxins interacting with proteins Another non-food protein reaction, urushiol-induced contact dermatitis, originates after contact with poison ivy, eastern poison oak, western poison oak, or poison sumac. Urushiol, which is not itself a protein, acts as a hapten and chemically reacts with, binds to, and changes the shape of integral membrane proteins on exposed skin cells. The immune system does not recognize the affected cells as normal parts of the body, causing a T-cell-mediated immune response. Of these poisonous plants, sumac is the most virulent. The resulting dermatological response to the reaction between urushiol and membrane proteins includes redness, swelling, papules, vesicles, blisters, and streaking. Estimates vary on the population fraction that will have an immune system response. Approximately 25% of the population will have a strong allergic response to urushiol. In general, approximately 80–90% of adults will develop a rash if they are exposed to of purified urushiol, but some people are so sensitive that it takes only a molecular trace on the skin to initiate an allergic reaction. Genetics Allergic diseases are strongly familial; identical twins are likely to have the same allergic diseases about 70% of the time; the same allergy occurs about 40% of the time in non-identical twins. Allergic parents are more likely to have allergic children, and those children's allergies are likely to be more severe than those in children of non-allergic parents. Some allergies, however, are not consistent along genealogies; parents who are allergic to peanuts may have children who are allergic to ragweed. The likelihood of developing allergies is inherited and related to an irregularity in the immune system, but the specific allergen is not. The risk of allergic sensitization and the development of allergies varies with age, with young children most at risk. Several studies have shown that IgE levels are highest in childhood and fall rapidly between the ages of 10 and 30 years. The peak prevalence of hay fever is highest in children and young adults and the incidence of asthma is highest in children under 10. Ethnicity may play a role in some allergies; however, racial factors have been difficult to separate from environmental influences and changes due to migration. It has been suggested that different genetic loci are responsible for asthma in people of European, Hispanic, Asian, and African origins. Individuals also differ considerably at the molecular level, including in how they recognize and respond to foreign substances; these differences are rooted in genetic makeup, since DNA contains the genes that encode specific molecules or whole molecular complexes. Due to the variability in responses and the different ways the disease manifests in individuals, a clear genetic basis for the predisposition and severity of allergic diseases has not yet been fully established. 
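The twin concordance figures quoted at the start of this section (roughly 70% in identical twins versus 40% in fraternal twins) are sometimes summarized with Falconer's rough approximation, which doubles the difference between the two rates. The sketch below applies that approximation purely for illustration; it is a shortcut under strong assumptions, since proper heritability estimation works with liability-scale correlations and more careful models, so the output should not be read as an established figure for allergic disease.

# Rough illustration of Falconer's approximation, h2 ~= 2 * (c_mz - c_dz),
# applied to the approximate twin concordances quoted above. Illustrative only.
concordance_identical = 0.70   # approximate concordance in identical (MZ) twins
concordance_fraternal = 0.40   # approximate concordance in fraternal (DZ) twins

rough_heritability = 2 * (concordance_identical - concordance_fraternal)
print(f"rough heritability estimate: {rough_heritability:.2f}")   # 0.60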
Much of allergic disease reflects an exaggerated reaction of the body to its environment, so the genes implicated are largely those that regulate the molecules mediating that reaction. Researchers have worked to characterize genes involved in inflammation and the maintenance of mucosal integrity. The identified genes associated with allergic disease severity, progression, and development primarily function in four areas: regulating inflammatory responses (IFN-α, TLR-1, IL-13, IL-4, IL-5, HLA-G, iNOS), maintaining vascular endothelium and mucosal lining (FLG, PLAUR, CTNNA3, PDCH1, COL29A1), mediating immune cell function (PHF11, H1R, HDC, TSLP, STAT6, RERE, PPP2R3C), and influencing susceptibility to allergic sensitization (e.g., ORMDL3, CHI3L1). Multiple studies have investigated the genetic profiles of individuals with predispositions to and experiences of allergic diseases, revealing a complex polygenic architecture. Specific genetic loci, such as MIIP, CXCR4, SCML4, CYP1B1, ICOS, and LINC00824, have been directly associated with allergic disorders. Additionally, some loci show pleiotropic effects, linking them to both autoimmune and allergic conditions, including PRDM2, G3BP1, HBS1L, and POU2AF1. These genes engage in shared inflammatory pathways across various epithelial tissues—such as the skin, esophagus, vagina, and lung—highlighting common genetic factors that contribute to the pathogenesis of asthma and other allergic diseases. In atopic patients, transcriptome studies have identified IL-13-related pathways as key for eosinophilic airway inflammation and remodeling, which produce the airflow restriction characteristic of allergic asthma. Expression of genes was quite variable: genes associated with inflammation were found almost exclusively in superficial airways, while genes related to airway remodeling were mainly present in endobronchial biopsy specimens. This gene expression profile was similar across multiple sample types – nasal brushings, sputum, endobronchial brushings – demonstrating the importance of eosinophilic inflammation, mast cell degranulation and group 3 innate lymphoid cells in severe adult-onset asthma. IL-13 is an immunoregulatory cytokine that is made mostly by activated T-helper 2 (Th2) cells. It is an important cytokine for many steps in B-cell maturation and differentiation, since it increases CD23 and MHC class II molecules, and aids in B-cell isotype switching to IgE. IL-13 also suppresses macrophage function by reducing the release of pro-inflammatory cytokines and chemokines. More strikingly, IL-13 is the prime mover in allergen-induced asthma via pathways that are independent of IgE and eosinophils. Hygiene hypothesis Allergic diseases are caused by inappropriate immunological responses to harmless antigens driven by a TH2-mediated immune response. Many bacteria and viruses elicit a TH1-mediated immune response, which down-regulates TH2 responses. The first proposed mechanism of action of the hygiene hypothesis was that insufficient stimulation of the TH1 arm of the immune system leads to an overactive TH2 arm, which in turn leads to allergic disease. In other words, individuals living in too sterile an environment are not exposed to enough pathogens to keep the immune system busy. Since our bodies evolved to deal with a certain level of such pathogens, when they are not exposed to this level, the immune system will attack harmless antigens, and thus normally benign substances—like pollen—will trigger an immune response. 
The hygiene hypothesis was developed to explain the observation that hay fever and eczema, both allergic diseases, were less common in children from larger families, which were, it is presumed, exposed to more infectious agents through their siblings, than in children from families with only one child. It is used to explain the increase in allergic diseases that have been seen since industrialization, and the higher incidence of allergic diseases in more developed countries. The hygiene hypothesis has now expanded to include exposure to symbiotic bacteria and parasites as important modulators of immune system development, along with infectious agents. Epidemiological data support the hygiene hypothesis. Studies have shown that various immunological and autoimmune diseases are much less common in the developing world than the industrialized world, and that immigrants to the industrialized world from the developing world increasingly develop immunological disorders in relation to the length of time since arrival in the industrialized world. Longitudinal studies in the third world demonstrate an increase in immunological disorders as a country grows more affluent and, it is presumed, cleaner. The use of antibiotics in the first year of life has been linked to asthma and other allergic diseases. The use of antibacterial cleaning products has also been associated with higher incidence of asthma, as has birth by caesarean section rather than vaginal birth. Stress Chronic stress can aggravate allergic conditions. This has been attributed to a T helper 2 (TH2)-predominant response driven by suppression of interleukin 12 by both the autonomic nervous system and the hypothalamic–pituitary–adrenal axis. Stress management in highly susceptible individuals may improve symptoms. Other environmental factors Allergic diseases are more common in industrialized countries than in countries that are more traditional or agricultural, and there is a higher rate of allergic disease in urban populations versus rural populations, although these differences are becoming less defined. Historically, the trees planted in urban areas were predominantly male to prevent litter from seeds and fruits, but the high ratio of male trees causes high pollen counts, a phenomenon that horticulturist Tom Ogren has called "botanical sexism". Alterations in exposure to microorganisms is another plausible explanation, at present, for the increase in atopic allergy. Endotoxin exposure reduces release of inflammatory cytokines such as TNF-α, IFNγ, interleukin-10, and interleukin-12 from white blood cells (leukocytes) that circulate in the blood. Certain microbe-sensing proteins, known as Toll-like receptors, found on the surface of cells in the body are also thought to be involved in these processes. Parasitic worms and similar parasites are present in untreated drinking water in developing countries, and were present in the water of developed countries until the routine chlorination and purification of drinking water supplies. Recent research has shown that some common parasites, such as intestinal worms (e.g., hookworms), secrete chemicals into the gut wall (and, hence, the bloodstream) that suppress the immune system and prevent the body from attacking the parasite. This gives rise to a new slant on the hygiene hypothesis theory—that co-evolution of humans and parasites has led to an immune system that functions correctly only in the presence of the parasites. Without them, the immune system becomes unbalanced and oversensitive. 
In particular, research suggests that allergies may coincide with the delayed establishment of gut flora in infants. However, the research to support this theory is conflicting, with some studies performed in China and Ethiopia showing an increase in allergy in people infected with intestinal worms. Clinical trials have been initiated to test the effectiveness of certain worms in treating some allergies. It may be that the term 'parasite' could turn out to be inappropriate, and in fact a hitherto unsuspected symbiosis is at work. For more information on this topic, see Helminthic therapy. Pathophysiology Acute response In the initial stages of allergy, a type I hypersensitivity reaction against an allergen encountered for the first time and presented by a professional antigen-presenting cell causes a response in a type of immune cell called a TH2 lymphocyte, a subset of T cells that produce a cytokine called interleukin-4 (IL-4). These TH2 cells interact with other lymphocytes called B cells, whose role is production of antibodies. Coupled with signals provided by IL-4, this interaction stimulates the B cell to begin production of a large amount of a particular type of antibody known as IgE. Secreted IgE circulates in the blood and binds to an IgE-specific receptor (a kind of Fc receptor called FcεRI) on the surface of other kinds of immune cells called mast cells and basophils, which are both involved in the acute inflammatory response. The IgE-coated cells, at this stage, are sensitized to the allergen. If later exposure to the same allergen occurs, the allergen can bind to the IgE molecules held on the surface of the mast cells or basophils. Cross-linking of the IgE and Fc receptors occurs when more than one IgE-receptor complex interacts with the same allergenic molecule and activates the sensitized cell. Activated mast cells and basophils undergo a process called degranulation, during which they release histamine and other inflammatory chemical mediators (cytokines, interleukins, leukotrienes, and prostaglandins) from their granules into the surrounding tissue causing several systemic effects, such as vasodilation, mucous secretion, nerve stimulation, and smooth muscle contraction. This results in rhinorrhea, itchiness, dyspnea, and anaphylaxis. Depending on the individual, allergen, and mode of introduction, the symptoms can be system-wide (classical anaphylaxis) or localized to specific body systems. Asthma is localized to the respiratory system and eczema is localized to the dermis. Late-phase response After the chemical mediators of the acute response subside, late-phase responses can often occur. This is due to the migration of other leukocytes such as neutrophils, lymphocytes, eosinophils, and macrophages to the initial site. The reaction is usually seen 2–24 hours after the original reaction. Cytokines from mast cells may play a role in the persistence of long-term effects. Late-phase responses seen in asthma are slightly different from those seen in other allergic responses, although they are still caused by release of mediators from eosinophils and are still dependent on activity of TH2 cells. Allergic contact dermatitis Although allergic contact dermatitis is termed an "allergic" reaction (which usually refers to type I hypersensitivity), its pathophysiology involves a reaction that more correctly corresponds to a type IV hypersensitivity reaction. 
In type IV hypersensitivity, there is activation of certain types of T cells (CD8+) that destroy target cells on contact, as well as activated macrophages that produce hydrolytic enzymes. Diagnosis Effective management of allergic diseases relies on the ability to make an accurate diagnosis. Allergy testing can help confirm or rule out allergies. Correct diagnosis, counseling, and avoidance advice based on valid allergy test results reduce the incidence of symptoms and need for medications, and improve quality of life. To assess the presence of allergen-specific IgE antibodies, two different methods can be used: a skin prick test, or an allergy blood test. Both methods are recommended, and they have similar diagnostic value. Skin prick tests and blood tests are equally cost-effective, and health economic evidence shows that both tests were cost-effective compared with no test. Early and more accurate diagnoses save cost due to reduced consultations, referrals to secondary care, misdiagnosis, and emergency admissions. Allergy undergoes dynamic changes over time. Regular allergy testing of relevant allergens provides information on if and how patient management can be changed to improve health and quality of life. Annual testing is often the practice for determining whether allergy to milk, egg, soy, and wheat have been outgrown, and the testing interval is extended to 2–3 years for allergy to peanut, tree nuts, fish, and crustacean shellfish. Results of follow-up testing can guide decision-making regarding whether and when it is safe to introduce or re-introduce allergenic food into the diet. Skin prick testing Skin testing is also known as "puncture testing" and "prick testing" due to the series of tiny punctures or pricks made into the patient's skin. Tiny amounts of suspected allergens and/or their extracts (e.g., pollen, grass, mite proteins, peanut extract) are introduced to sites on the skin marked with pen or dye (the ink/dye should be carefully selected, lest it cause an allergic response itself). A negative and positive control are also included for comparison (eg, negative is saline or glycerin; positive is histamine). A small plastic or metal device is used to puncture or prick the skin. Sometimes, the allergens are injected "intradermally" into the patient's skin, with a needle and syringe. Common areas for testing include the inside forearm and the back. If the patient is allergic to the substance, then a visible inflammatory reaction will usually occur within 30 minutes. This response will range from slight reddening of the skin to a full-blown hive (called "wheal and flare") in more sensitive patients similar to a mosquito bite. Interpretation of the results of the skin prick test is normally done by allergists on a scale of severity, with +/− meaning borderline reactivity, and 4+ being a large reaction. Increasingly, allergists are measuring and recording the diameter of the wheal and flare reaction. Interpretation by well-trained allergists is often guided by relevant literature. In general, a positive response is interpreted when the wheal of an antigen is ≥3mm larger than the wheal of the negative control (eg, saline or glycerin). Some patients may believe they have determined their own allergic sensitivity from observation, but a skin test has been shown to be much better than patient observation to detect allergy. 
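The wheal-size criterion described above is a simple threshold rule. The sketch below expresses it in Python purely for illustration; the function and parameter names are hypothetical and not part of any clinical software or guideline, and real interpretation is done by a trained allergist taking the whole clinical picture into account.

```python
# Illustrative sketch of the skin prick test reading rule mentioned above:
# a test is commonly read as positive when the allergen wheal is at least
# 3 mm larger than the negative (saline or glycerin) control wheal.
# Names and defaults are hypothetical, for illustration only.

def is_positive_prick_test(allergen_wheal_mm: float,
                           negative_control_wheal_mm: float,
                           threshold_mm: float = 3.0) -> bool:
    """Return True if the allergen wheal exceeds the negative control by >= threshold."""
    return (allergen_wheal_mm - negative_control_wheal_mm) >= threshold_mm

# Example: a 7 mm wheal against a 2 mm saline control would be read as positive.
print(is_positive_prick_test(7.0, 2.0))  # True
```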
If a serious life-threatening anaphylactic reaction has brought a patient in for evaluation, some allergists will prefer an initial blood test prior to performing the skin prick test. Skin tests may not be an option if the patient has widespread skin disease or has taken antihistamines in the last several days. Patch testing Patch testing is a method used to determine if a specific substance causes allergic inflammation of the skin. It tests for delayed reactions. It is used to help ascertain the cause of skin contact allergy or contact dermatitis. Adhesive patches, usually treated with several common allergic chemicals or skin sensitizers, are applied to the back. The skin is then examined for possible local reactions at least twice, usually at 48 hours after application of the patch, and again two or three days later. Blood testing An allergy blood test is quick and simple and can be ordered by a licensed health care provider (e.g., an allergy specialist) or general practitioner. Unlike skin-prick testing, a blood test can be performed irrespective of age, skin condition, medication, symptom, disease activity, and pregnancy. Adults and children of any age can get an allergy blood test. For babies and very young children, a single needle stick for allergy blood testing is often gentler than several skin pricks. An allergy blood test is available through most laboratories. A sample of the patient's blood is sent to a laboratory for analysis, and the results are sent back a few days later. Multiple allergens can be detected with a single blood sample. Allergy blood tests are very safe since the person is not exposed to any allergens during the testing procedure. After the onset of anaphylaxis or a severe allergic reaction, guidelines recommend emergency departments obtain a time-sensitive blood test to determine blood tryptase levels and assess for mast cell activation. The test measures the concentration of specific IgE antibodies in the blood. Quantitative IgE test results increase the possibility of ranking how different substances may affect symptoms. A rule of thumb is that the higher the IgE antibody value, the greater the likelihood of symptoms. Allergens found at low levels that today do not result in symptoms cannot help predict future symptom development. The quantitative allergy blood result can help determine what a patient is allergic to, help predict and follow the disease development, estimate the risk of a severe reaction, and explain cross-reactivity. A low total IgE level is not adequate to rule out sensitization to commonly inhaled allergens. Statistical methods, such as ROC curves, predictive value calculations, and likelihood ratios, have been used to examine the relationship of various testing methods to each other. These methods have shown that patients with a high total IgE have a high probability of allergic sensitization, but further investigation with allergy tests for specific IgE antibodies for a carefully chosen set of allergens is often warranted. Laboratory methods to measure specific IgE antibodies for allergy testing include enzyme-linked immunosorbent assay (ELISA, or EIA), radioallergosorbent test (RAST), fluorescent enzyme immunoassay (FEIA), and chemiluminescence immunoassay (CLIA). Other testing Challenge testing: In challenge testing, tiny amounts of a suspected allergen are introduced to the body orally, through inhalation, or via other routes. Except for testing food and medication allergies, challenges are rarely performed. 
When this type of testing is chosen, it must be closely supervised by an allergist. Elimination/challenge tests: This testing method is used most often with foods or medicines. A patient with a suspected allergen is instructed to modify his diet to totally avoid that allergen for a set time. If the patient experiences significant improvement, he may then be "challenged" by reintroducing the allergen, to see if symptoms are reproduced. Unreliable tests: There are other types of allergy testing methods that are unreliable, including applied kinesiology (allergy testing through muscle relaxation), cytotoxicity testing, urine autoinjection, skin titration (Rinkel method), and provocative and neutralization (subcutaneous) testing or sublingual provocation. Differential diagnosis Before a diagnosis of allergic disease can be confirmed, other plausible causes of the presenting symptoms must be considered. Vasomotor rhinitis, for example, is one of many illnesses that share symptoms with allergic rhinitis, underscoring the need for professional differential diagnosis. Once a diagnosis of asthma, rhinitis, anaphylaxis, or other allergic disease has been made, there are several methods for discovering the causative agent of that allergy. Prevention Giving peanut products early in childhood may decrease the risk of allergies, and exclusive breastfeeding during at least the first few months of life may decrease the risk of allergic dermatitis. There is little evidence that a mother's diet during pregnancy or breastfeeding affects the risk of allergies, although there has been some research to show that irregular cow's milk exposure might increase the risk of cow's milk allergy. There is some evidence that delayed introduction of certain foods is not useful, and that early exposure to potential allergens may actually be protective. Fish oil supplementation during pregnancy is associated with a lower risk of food sensitivities. Probiotic supplements during pregnancy or infancy may help to prevent atopic dermatitis. Management Management of allergies typically involves avoiding the allergy trigger and taking medications to improve the symptoms. Allergen immunotherapy may be useful for some types of allergies. Medication Several medications may be used to block the action of allergic mediators, or to prevent activation of cells and degranulation processes. Antihistamines, glucocorticoids, epinephrine (adrenaline), mast cell stabilizers, and antileukotriene agents are common treatments of allergic diseases. Anticholinergics, decongestants, and other compounds thought to impair eosinophil chemotaxis are also commonly used. Although anaphylaxis is rare, its severity often requires epinephrine injection, and where medical care is unavailable, a device known as an epinephrine autoinjector may be used. Immunotherapy Allergen immunotherapy is useful for environmental allergies, allergies to insect bites, and asthma. Its benefit for food allergies is unclear, and it is thus not recommended for them. Immunotherapy involves exposing people to larger and larger amounts of allergen in an effort to change the immune system's response. Meta-analyses have found that injections of allergens under the skin are effective in the treatment of allergic rhinitis in children and of asthma. The benefits may last for years after treatment is stopped. It is generally safe and effective for allergic rhinitis and conjunctivitis, allergic forms of asthma, and stinging insects. 
To a lesser extent, the evidence also supports the use of sublingual immunotherapy for rhinitis and asthma. For seasonal allergies the benefit is small. In this form the allergen is given under the tongue and people often prefer it to injections. Immunotherapy is not recommended as a stand-alone treatment for asthma. Alternative medicine An experimental treatment, enzyme potentiated desensitization (EPD), has been tried for decades but is not generally accepted as effective. EPD uses dilutions of allergen and an enzyme, beta-glucuronidase, to which T-regulatory lymphocytes are supposed to respond by favoring desensitization, or down-regulation, rather than sensitization. EPD has also been tried for the treatment of autoimmune diseases, but evidence does not show effectiveness. A review found no effectiveness of homeopathic treatments and no difference compared with placebo. The authors concluded that based on rigorous clinical trials of all types of homeopathy for childhood and adolescence ailments, there is no convincing evidence that supports the use of homeopathic treatments. According to the National Center for Complementary and Integrative Health, U.S., the evidence is relatively strong that saline nasal irrigation and butterbur are effective, when compared to other alternative medicine treatments, for which the scientific evidence is weak, negative, or nonexistent, such as honey, acupuncture, omega 3's, probiotics, astragalus, capsaicin, grape seed extract, Pycnogenol, quercetin, spirulina, stinging nettle, tinospora, or guduchi. Epidemiology The allergic diseases—hay fever and asthma—have increased in the Western world over the past 2–3 decades. Increases in allergic asthma and other atopic disorders in industrialized nations, it is estimated, began in the 1960s and 1970s, with further increases occurring during the 1980s and 1990s, although some suggest that a steady rise in sensitization has been occurring since the 1920s. The number of new cases per year of atopy in developing countries has, in general, remained much lower. Changing frequency Although genetic factors govern susceptibility to atopic disease, increases in atopy have occurred within too short a period to be explained by a genetic change in the population, thus pointing to environmental or lifestyle changes. Several hypotheses have been identified to explain this increased rate. Increased exposure to perennial allergens may be due to housing changes and increased time spent indoors, and a decreased activation of a common immune control mechanism may be caused by changes in cleanliness or hygiene, and exacerbated by dietary changes, obesity, and decline in physical exercise. The hygiene hypothesis maintains that high living standards and hygienic conditions exposes children to fewer infections. It is thought that reduced bacterial and viral infections early in life direct the maturing immune system away from TH1 type responses, leading to unrestrained TH2 responses that allow for an increase in allergy. Changes in rates and types of infection alone, however, have been unable to explain the observed increase in allergic disease, and recent evidence has focused attention on the importance of the gastrointestinal microbial environment. 
Evidence has shown that exposure to food and fecal-oral pathogens, such as hepatitis A, Toxoplasma gondii, and Helicobacter pylori (which also tend to be more prevalent in developing countries), can reduce the overall risk of atopy by more than 60%, and an increased rate of parasitic infections has been associated with a decreased prevalence of asthma. It is speculated that these infections exert their effect by critically altering TH1/TH2 regulation. Important elements of newer hygiene hypotheses also include exposure to endotoxins, exposure to pets and growing up on a farm. History Some symptoms attributable to allergic diseases are mentioned in ancient sources. Particularly, three members of the Roman Julio-Claudian dynasty (Augustus, Claudius and Britannicus) are suspected to have a family history of atopy. The concept of "allergy" was originally introduced in 1906 by the Viennese pediatrician Clemens von Pirquet, after he noticed that patients who had received injections of horse serum or smallpox vaccine usually had quicker, more severe reactions to second injections. Pirquet called this phenomenon "allergy" from the Ancient Greek words ἄλλος allos meaning "other" and ἔργον ergon meaning "work". All forms of hypersensitivity used to be classified as allergies, and all were thought to be caused by an improper activation of the immune system. Later, it became clear that several different disease mechanisms were implicated, with a common link to a disordered activation of the immune system. In 1963, a new classification scheme was designed by Philip Gell and Robin Coombs that described four types of hypersensitivity reactions, known as Type I to Type IV hypersensitivity. With this new classification, the word allergy, sometimes clarified as a true allergy, was restricted to type I hypersensitivities (also called immediate hypersensitivity), which are characterized as rapidly developing reactions involving IgE antibodies. A major breakthrough in understanding the mechanisms of allergy was the discovery of the antibody class labeled immunoglobulin E (IgE). IgE was simultaneously discovered in 1966–67 by two independent groups: Ishizaka's team at the Children's Asthma Research Institute and Hospital in Denver, USA, and by Gunnar Johansson and Hans Bennich in Uppsala, Sweden. Their joint paper was published in April 1969. Diagnosis Radiometric assays include the radioallergosorbent test (RAST test) method, which uses IgE-binding (anti-IgE) antibodies labeled with radioactive isotopes for quantifying the levels of IgE antibody in the blood. The RAST methodology was invented and marketed in 1974 by Pharmacia Diagnostics AB, Uppsala, Sweden, and the acronym RAST is actually a brand name. In 1989, Pharmacia Diagnostics AB replaced it with a superior test named the ImmunoCAP Specific IgE blood test, which uses the newer fluorescence-labeled technology. American College of Allergy Asthma and Immunology (ACAAI) and the American Academy of Allergy Asthma and Immunology (AAAAI) issued the Joint Task Force Report "Pearls and pitfalls of allergy diagnostic testing" in 2008, and is firm in its statement that the term RAST is now obsolete: The updated version, the ImmunoCAP Specific IgE blood test, is the only specific IgE assay to receive Food and Drug Administration approval to quantitatively report to its detection limit of 0.1kU/L. Medical specialty The medical speciality that studies, diagnoses and treats diseases caused by allergies is called allergology. 
An allergist is a physician specially trained to manage and treat allergies, asthma, and the other allergic diseases. In the United States physicians holding certification by the American Board of Allergy and Immunology (ABAI) have successfully completed an accredited educational program and evaluation process, including a proctored examination to demonstrate knowledge, skills, and experience in patient care in allergy and immunology. Becoming an allergist/immunologist requires completion of at least nine years of training. After completing medical school and graduating with a medical degree, a physician will undergo three years of training in internal medicine (to become an internist) or pediatrics (to become a pediatrician). Once physicians have finished training in one of these specialties, they must pass the exam of either the American Board of Pediatrics (ABP), the American Osteopathic Board of Pediatrics (AOBP), the American Board of Internal Medicine (ABIM), or the American Osteopathic Board of Internal Medicine (AOBIM). Internists or pediatricians wishing to focus on the sub-specialty of allergy-immunology then complete at least an additional two years of study, called a fellowship, in an allergy/immunology training program. Allergist/immunologists listed as ABAI-certified have successfully passed the certifying examination of the ABAI following their fellowship. In the United Kingdom, allergy is a subspecialty of general medicine or pediatrics. After obtaining postgraduate exams (MRCP or MRCPCH), a doctor works for several years as a specialist registrar before qualifying for the General Medical Council specialist register. Allergy services may also be delivered by immunologists. A 2003 Royal College of Physicians report presented a case for improvement of what were felt to be inadequate allergy services in the UK. In 2006, the House of Lords convened a subcommittee. It concluded likewise in 2007 that allergy services were insufficient to deal with what the Lords referred to as an "allergy epidemic" and its social cost; it made several recommendations. Research Low-allergen foods are being developed, as are improvements in skin prick test predictions; evaluation of the atopy patch test, wasp sting outcomes predictions, a rapidly disintegrating epinephrine tablet, and anti-IL-5 for eosinophilic diseases.
Biology and health sciences
Illness and injury
null
55380
https://en.wikipedia.org/wiki/Disk%20partitioning
Disk partitioning
Disk partitioning or disk slicing is the creation of one or more regions on secondary storage, so that each region can be managed separately. These regions are called partitions. It is typically the first step of preparing a newly installed disk: a partitioning scheme is chosen for the new disk before any file system is created. The disk stores the information about the partitions' locations and sizes in an area known as the partition table that the operating system reads before any other part of the disk. Each partition then appears to the operating system as a distinct "logical" disk that uses part of the actual disk. System administrators use a program called a partition editor to create, resize, delete, and manipulate the partitions. Partitioning allows different filesystems to be used for different kinds of files. Separating user data from system data can prevent the system partition from becoming full and rendering the system unusable. Partitioning can also make backing up easier. A disadvantage is that it can be difficult to properly size partitions, resulting in having one partition with too much free space and another nearly totally allocated. History IBM's 1983 release of PC DOS version 2.0 contained an early, if not the first, use of the term partition to describe dividing a block storage device such as an HDD into physical segments. The term's usage is now ubiquitous. Other terms used include logical disk, minidisk, portions, pseudo-disk, section, slice and virtual drive. One of the earliest such segmentations of a disk drive was IBM's 1966 usage, in its CP-67 operating system, of the minidisk as a separate segment of a hard disk drive. Partitioning schemes DOS, Windows, and OS/2 With DOS, Microsoft Windows, and OS/2, a common practice is to use one primary partition for the active file system that will contain the operating system, the page/swap file, all utilities, applications, and user data. On most Windows consumer computers, the drive letter C: is routinely assigned to this primary partition. Other partitions may exist on the HDD that may or may not be visible as drives, such as recovery partitions or partitions with diagnostic tools or data. (Windows drive letters do not correspond to partitions in a one-to-one fashion, so there may be more or fewer drive letters than partitions.) Microsoft Windows 2000, XP, Vista, and Windows 7 include a 'Disk Management' program which allows for the creation, deletion and resizing of FAT and NTFS partitions. The Windows Disk Manager in Windows Vista and Windows 7 utilizes a 1 MB partition alignment scheme which is fundamentally incompatible with Windows 2000, XP, OS/2, DOS as well as many other operating systems. Unix-like systems On Unix-based and Unix-like operating systems such as Linux, macOS, BSD, and Solaris, it is possible to use multiple partitions on a disk device. Each partition can be formatted with a file system or as a swap partition. Multiple partitions allow directories such as /boot, /tmp, /usr, /var, or /home to be allocated their own filesystems. Such a scheme has a number of advantages: If one file system gets corrupted, the data outside that filesystem/partition may stay intact, minimizing data loss. Specific file systems can be mounted with different parameters, e.g., read-only, or with the execution of setuid files disabled. A runaway program that uses up all available space on a non-system filesystem does not fill up critical filesystems. 
Keeping user data such as documents separate from system files allows the system to be updated with lessened risk of disturbing the data. A common minimal configuration for Linux systems is to use three partitions: one holding the system files mounted on "/" (the root directory), one holding user configuration files and data mounted on /home (home directory), and a swap partition. By default, macOS systems also use a single partition for the entire filesystem and use a swap file inside the file system (like Windows) rather than a swap partition. In Solaris, partitions are sometimes known as slices. This is a conceptual reference to the slicing of a cake into several pieces. The term "slice" is used in the FreeBSD operating system to refer to Master Boot Record partitions, to avoid confusion with FreeBSD's own disklabel-based partitioning scheme. However, GUID Partition Table partitions are referred to as "partitions" worldwide. Multi-boot systems Multi-boot systems are computers where the user can boot into more than one distinct operating system (OS) stored in separate storage devices or in separate partitions of the same storage device. In such systems a menu at startup gives a choice of which OS to boot/start (and only one OS at a time is loaded). This is distinct from virtual operating systems, in which one operating system is run as a self-contained virtual "program" within another already-running operating system. (An example is a Windows OS "virtual machine" running from within a Linux OS.) GUID Partition Table The GUID Partition Table (Globally Unique IDentifier) is a part of the Unified Extensible Firmware Interface (UEFI) standard for the layout of the partition table on a physical hard disk. Many operating systems now support this standard. However, Windows does not support this on BIOS-based computers. Partition recovery When a partition is deleted, its entry is removed from a table and the data is no longer accessible. The data remains on the disk until it is overwritten. Specialized recovery utilities may be able to locate lost file systems and recreate a partition table which includes entries for these recovered file systems. Some disk utilities may overwrite a number of beginning sectors of a partition they delete. For example, if Windows Disk Management (Windows 2000/XP, etc.) is used to delete a partition, it will overwrite the first sector (relative sector 0) of the partition before removing it. It still may be possible to restore a FAT or NTFS partition if a backup boot sector is available. Compressed disks HDDs can be compressed to create additional space. In DOS and early Microsoft Windows, programs such as Stacker (DR-DOS except 6.0), SuperStor (DR DOS 6.0), DoubleSpace (MS-DOS 6.0–6.2), or DriveSpace (MS-DOS 6.22, Windows 9x) were used. This compression was done by creating a very large file on the partition, then storing the disk's data in this file. At startup, device drivers opened this file and assigned it a separate letter. Frequently, to avoid confusion, the original partition and the compressed drive had their letters swapped, so that the compressed disk was C:, and the uncompressed area (often containing system files) was given a higher drive letter. Versions of Windows using the NT kernel, including recent versions such as Windows 10, contain intrinsic disk compression capability. The use of separate disk compression utilities has declined sharply. 
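As a rough illustration of how the recovery utilities mentioned above can locate lost file systems, the Python sketch below scans a raw disk image for sectors that look like FAT or NTFS boot sectors, using the 0x55AA boot signature plus a recognizable OEM or file-system type string. This is a simplified, hypothetical heuristic under assumed conditions (512-byte sectors, an image file named disk.img), not the algorithm of any particular recovery tool; real utilities apply many more consistency checks before rebuilding a partition table.

```python
# Minimal sketch: scan a raw disk image for candidate FAT/NTFS boot sectors,
# the kind of heuristic a partition-recovery utility might start from.
# Assumes 512-byte sectors; real tools perform far more validation.
SECTOR = 512

def find_boot_sector_candidates(image_path):
    candidates = []
    with open(image_path, "rb") as f:
        lba = 0
        while True:
            sector = f.read(SECTOR)
            if len(sector) < SECTOR:
                break
            # Valid boot sectors end with the 0x55 0xAA signature.
            if sector[510:512] == b"\x55\xAA":
                oem = sector[3:11]          # OEM ID ("NTFS    " for NTFS volumes)
                fat32_type = sector[82:90]  # FS type string in FAT32 boot sectors
                fat16_type = sector[54:62]  # FS type string in FAT12/16 boot sectors
                if oem == b"NTFS    ":
                    candidates.append((lba, "NTFS"))
                elif fat32_type.startswith(b"FAT32"):
                    candidates.append((lba, "FAT32"))
                elif fat16_type.startswith(b"FAT"):
                    candidates.append((lba, "FAT12/16"))
            lba += 1
    return candidates

# Usage (hypothetical image file):
# for lba, fstype in find_boot_sector_candidates("disk.img"):
#     print(f"possible {fstype} boot sector at LBA {lba}")
```

A candidate found at a plausible offset could then be used, together with a backup boot sector, to recreate the deleted partition entry.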
Partition table A partition table is a table maintained on a disk by the operating system that outlines and describes the partitions on that disk. The terms partition table and partition map can be used interchangeably. The term is most commonly associated with the MBR partition table of a Master Boot Record (MBR) in PCs, but it may be used generically to refer to other formats that divide a disk drive into partitions, such as the GUID Partition Table (GPT), Apple partition map (APM), or BSD disklabel. PC partition types MBR This section describes the master boot record (MBR) partitioning scheme, as used historically in DOS, Microsoft Windows and Linux (among others) on PC-compatible computer systems. As of the mid-2010s, most new computers use the GUID Partition Table (GPT) partitioning scheme instead. For examples of other partitioning schemes, see the general article on partition tables. The total data storage space of a PC HDD on which MBR partitioning is implemented can contain at most four primary partitions, or alternatively three primary partitions and an extended partition. The Partition Table, located in the master boot record, contains 16-byte entries, each of which describes a partition. The partition type is identified by a 1-byte code found in its partition table entry. Some of these codes (such as 0x05 and 0x0F) may be used to indicate the presence of an extended partition. Most are used by an operating system's bootloader (that examines partition tables) to decide if a partition contains a file system that can be mounted / accessed for reading or writing data. Primary partition A primary partition contains one file system. In DOS and all early versions of Microsoft Windows systems, Microsoft required what it called the system partition to be the first partition. All Windows operating systems from Windows 95 onwards can be located on (almost) any partition, but the boot files (io.sys, bootmgr, ntldr, etc.) must reside on a primary partition. However, other factors, such as a PC's BIOS (see Boot sequence on standard PC) may also impose specific requirements as to which partition must contain the primary OS. The partition type code for a primary partition can either correspond to a file system contained within (e.g., 0x07 means either an NTFS or an OS/2 HPFS file system) or indicate that the partition has a special use (e.g., code 0x82 usually indicates a Linux swap partition). The FAT16 and FAT32 file systems have made use of a number of partition type codes due to the limits of various DOS and Windows OS versions. Though a Linux operating system may recognize a number of different file systems (ext4, ext3, ext2, ReiserFS, etc.), they have all consistently used the same partition type code: 0x83 (Linux native file system). Extended partition An HDD may contain only one extended partition, but that extended partition can be subdivided into multiple logical partitions. DOS/Windows systems may then assign a unique drive letter to each logical partition. The GUID Partition Table (GPT) scheme has only primary partitions; it has no extended or logical partitions. Boot partitions BIOS boot partition A BIOS boot partition (BIOS BP) is a portion of the storage device used to keep software that boots the operating system, a bootloader. It may hold an operating system kernel image, a bootloader, or a completely separate piece of software. EFI system partition The EFI system partition serves the same role as the BIOS BP, but is loaded by EFI firmware instead of the BIOS.
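To make the 16-byte entry layout described above concrete, the following Python sketch parses the four primary-partition entries from the first sector of a disk or disk image. The field offsets (partition table at byte 446, boot flag, 1-byte type code, little-endian LBA start and sector count) follow the classic MBR layout; the small table of type-code names is only a sample of common values, and the image path in the usage note is hypothetical.

```python
# Sketch: parse the classic MBR partition table from the first 512-byte sector.
# The table of four 16-byte entries starts at offset 446 (0x1BE); the sector
# must end with the 0x55 0xAA boot signature.
import struct

# A few common 1-byte partition type codes (sample only; many more exist).
TYPE_NAMES = {
    0x05: "Extended (CHS)",
    0x0F: "Extended (LBA)",
    0x07: "NTFS / HPFS / exFAT",
    0x0B: "FAT32 (CHS)",
    0x0C: "FAT32 (LBA)",
    0x82: "Linux swap",
    0x83: "Linux native",
    0xEE: "GPT protective",
}

def parse_mbr(sector: bytes):
    if len(sector) < 512 or sector[510:512] != b"\x55\xAA":
        raise ValueError("not a valid MBR sector")
    partitions = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        boot_flag = entry[0]                       # 0x80 = active/bootable
        ptype = entry[4]                           # 1-byte partition type code
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0x00:                          # 0x00 marks an unused slot
            partitions.append({
                "index": i,
                "bootable": boot_flag == 0x80,
                "type": TYPE_NAMES.get(ptype, f"0x{ptype:02X}"),
                "lba_start": lba_start,
                "sectors": num_sectors,
            })
    return partitions

# Usage (hypothetical image path):
# with open("disk.img", "rb") as f:
#     for p in parse_mbr(f.read(512)):
#         print(p)
```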
Technology
Data storage and memory
null
55397
https://en.wikipedia.org/wiki/Hadean
Hadean
The Hadean is the first and oldest of the four known geologic eons of Earth's history, starting with the planet's formation about 4.6 billion years ago (estimated at 4567.30 ± 0.16 million years ago, set by the age of the oldest solid material in the Solar System — protoplanetary disk dust particles found as chondrules and calcium–aluminium-rich inclusions in some meteorites) and ending 4.031 billion years ago. The interplanetary collision that created the Moon occurred early in this eon. The Hadean eon was succeeded by the Archean eon, with the Late Heavy Bombardment hypothesized to have occurred at the Hadean-Archean boundary. Hadean rocks are very rare, largely consisting of granular zircons from one locality (Jack Hills) in Western Australia. Hadean geophysical models remain controversial among geologists: plate tectonics and the growth of cratons into continents may have started in the Hadean, but there is still uncertainty. Earth in the early Hadean had a very thick hydride-rich atmosphere whose composition likely resembled the solar nebula and the gas giants, with mostly water vapor, methane and ammonia. As the Earth's surface cooled, vaporized atmospheric water condensed into liquid water and eventually a superocean covering nearly all of the planet was formed, turning Earth into an ocean planet. Volcanic outgassing and asteroid bombardments further altered the Hadean atmosphere eventually into the nitrogen- and carbon dioxide-rich, weakly reducing Paleoarchean atmosphere. Etymology The eon's name "Hadean" comes from Hades, the Greek god of the underworld (whose name is also used to describe the underworld itself), referring to the hellish conditions then prevailing on early Earth: the planet had just formed by accretion, and its surface was still molten with superheated lava owing to that recent accretion, the abundance of short-lived radioactive elements, and frequent impact events with other Solar System bodies. The term was coined by American geologist Preston Cloud, originally to label the period before the earliest known rocks on Earth. W.B. Harland later coined an almost synonymous term, the Priscoan period, from priscus, a Latin word for 'ancient'. Other, older texts refer to the eon as the Pre-Archean. Rock dating Prior to the 1980s and the discovery of Hadean lithic fragments, scientific accounts of the early Earth were almost entirely in the hands of geodynamic modelers. In the last decades of the 20th century, geologists identified a few Hadean rocks from western Greenland, northwestern Canada, and Western Australia. In 2015, traces of carbon minerals interpreted as "remains of biotic life" were found in 4.1-billion-year-old rocks in Western Australia. The oldest dated zircon crystals, enclosed in a metamorphosed sandstone conglomerate in the Jack Hills of the Narryer Gneiss Terrane of Western Australia, date to 4.404 ± 0.008 Ga. This zircon is a slight outlier, with the oldest consistently dated zircon falling closer to 4.35 Ga—around 200 million years after the hypothesized time of Earth's formation. In many other areas, xenocryst (or relict) Hadean zircons enclosed in older rocks indicate that younger rocks have formed on older terranes and have incorporated some of the older material. One example occurs in the Guiana shield from the Iwokrama Formation of southern Guyana where zircon cores have been dated at 4.22 Ga. Atmosphere A sizable quantity of water would have been in the material that formed Earth. 
Water molecules would have escaped Earth's gravity more easily when the planet was less massive during its formation. Photodissociation by short-wave ultraviolet in sunlight could split surface water molecules into oxygen and hydrogen, the former of which would readily react to form compounds in the then-reducing atmosphere, while the latter (along with the similarly light helium) would be expected to continually leave the atmosphere (as it does to the present day) due to atmospheric escape. Part of the ancient planet is theorized to have been disrupted by the impact that created the Moon, which should have caused the melting of one or two large regions of Earth. Earth's present composition suggests that there was not complete remelting as it is difficult to completely melt and mix huge rock masses. However, a fair fraction of material should have been vaporized by this impact. The material would have condensed within 2,000 years. The initial magma ocean solidified within 5 million years, leaving behind hot volatiles which probably resulted in a heavy atmosphere with hydrogen and water vapor. This initial heavy atmosphere produced a high surface temperature and an atmospheric pressure above 27 standard atmospheres. Oceans Studies of zircons have found that liquid water may have existed between 4.0 and 4.4 billion years ago, very soon after the formation of Earth. Liquid water oceans existed despite the high surface temperature, because at an atmospheric pressure of 27 atmospheres, water remains liquid even at those high temperatures. The most likely source of the water in the Hadean ocean was outgassing from the Earth's mantle. Bombardment origin of a substantial amount of water is unlikely, due to the incompatibility of isotope ratios between the Earth and comets. Asteroid impacts during the Hadean and into the Archean would have periodically disrupted the ocean. The geological record from 3.2 Gya contains evidence of multiple impacts of very large objects. Each such impact would have boiled off part of a global ocean and temporarily raised the atmospheric temperature. However, the frequency of meteorite impacts is still under study: the Earth may have gone through long periods when liquid oceans and life were possible. The liquid water would absorb the carbon dioxide in the early atmosphere; this would not be enough by itself to substantially reduce the amount of carbon dioxide. Plate tectonics A 2008 study of zircons found that Australian Hadean rock contains minerals pointing to the existence of plate tectonics as early as 4 billion years ago (approximately 600 million years after Earth's formation). However, some geologists suggest that the zircons could have been formed by meteorite impacts. The direct evidence of Hadean geology from zircons is limited, because the zircons are largely gathered in one locality in Australia. Geophysical models are underconstrained, but can paint a general picture of the state of Earth in the Hadean. Mantle convection in the Hadean was likely vigorous, due to lower viscosity. The lower viscosity was due to the high levels of radiogenic heat and the fact that water in the mantle had not yet fully outgassed. Whether the vigorous convection led to plate tectonics in the Hadean or was confined under a rigid lid is still a matter of debate. The presence of Hadean oceans is thought to have triggered plate tectonics. Subduction due to plate tectonics would have removed carbonate from the early oceans, contributing to the removal of the carbon dioxide-rich early atmosphere. 
Removal of this early atmosphere has been taken as evidence of Hadean plate tectonics. If plate tectonics occurred in the Hadean, it would have formed continental crust. Different models predict different amounts of continental crust during the Hadean. The work of Dhuime et al. predicts that by the end of the Hadean, the continental crust had only 25% of today's area. The models of Korenaga et al. predict that the continental crust grew to present-day volume sometime between 4.2 and 4.0 Gya. Continents The amount of exposed land in the Hadean is only loosely dependent on the amount of continental crust: it also depends on the ocean level. In models where plate tectonics started in the Archean, Earth has a global ocean in the Hadean. The high heat of the mantle may have made it difficult to support high elevations in the Hadean. If continents did form in the Hadean, their growth competed with outgassing of water from the mantle. Continents may have appeared in the mid-Hadean, and then disappeared under a thick ocean by the end of the Hadean. The limited amount of land has implications for the origin of life. Possible life Abundant Hadean-like geothermal microenvironments were shown by Salditt et al. to have the potential to support the synthesis and replication of RNA and thus possibly the evolution of a primitive life form. Porous rock systems comprising heated air-water interfaces were shown to allow ribozyme-catalyzed RNA replication of sense and antisense strands followed by subsequent strand dissociation, thus enabling combined synthesis, release and folding of active ribozymes. Such a primitive RNA system also may have been able to undergo template strand switching during replication (genetic recombination) as occurs during the RNA replication of extant coronaviruses. A study published in 2024 inferred the last common ancestor of all current life to have emerged during the Hadean, between 4.09 and 4.33 Gya. Although the early part of the Late Heavy Bombardment happened during the Hadean, the impacts were frequent only on a cosmic scale, with thousands or even millions of years between each event. As Earth already had oceans, life would have been possible, but vulnerable to extinction events caused by those impacts. The risk lay not in the frequency but in the size of the impactors, and the impact record preserved on the Moon suggests impactors bigger than the Chicxulub impactor that caused the extinction of the dinosaurs. A large enough impactor could have erased all life on the planet, although some models suggest that microscopic life might still have survived underground or in the oceanic depths.
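The Oceans discussion above states that at an atmospheric pressure of 27 atmospheres water remains liquid even at high surface temperatures. As a rough check, the Python sketch below estimates the boiling point of water at that pressure with the Clausius–Clapeyron relation; the constants and the assumption of a constant enthalpy of vaporization are mine for illustration, not values from the article, and steam tables give a similar figure of roughly 230 °C near 27 atm.

```python
# Rough Clausius–Clapeyron estimate (assumed constants, order-of-magnitude only):
# boiling point of water at 27 atm, assuming a constant enthalpy of vaporization.
import math

R = 8.314          # J/(mol*K), gas constant
DH_VAP = 40.7e3    # J/mol, approximate enthalpy of vaporization of water
T1 = 373.15        # K, boiling point at 1 atm
P_RATIO = 27.0     # target pressure relative to 1 atm

# ln(P2/P1) = -(DH_VAP/R) * (1/T2 - 1/T1)  =>  solve for T2
inv_T2 = 1.0 / T1 - (R / DH_VAP) * math.log(P_RATIO)
T2 = 1.0 / inv_T2

print(f"Estimated boiling point at 27 atm: {T2 - 273.15:.0f} C")  # about 225 C
```

The estimate of roughly 225 °C is consistent with the idea that a dense early atmosphere could keep surface water liquid well above its familiar 100 °C boiling point.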
Physical sciences
Geological timescale
Earth science
55406
https://en.wikipedia.org/wiki/Banded%20iron%20formation
Banded iron formation
Banded iron formations (BIFs; also called banded ironstone formations) are distinctive units of sedimentary rock consisting of alternating layers of iron oxides and iron-poor chert. They can be up to several hundred meters in thickness and extend laterally for several hundred kilometers. Almost all of these formations are of Precambrian age and are thought to record the oxygenation of the Earth's oceans. Some of the Earth's oldest rock formations, which formed about 3,700 million years ago (Ma), are associated with banded iron formations. Banded iron formations are thought to have formed in sea water as the result of oxygen production by photosynthetic cyanobacteria. The oxygen combined with dissolved iron in Earth's oceans to form insoluble iron oxides, which precipitated out, forming a thin layer on the ocean floor. Each band is similar to a varve, resulting from cyclic variations in oxygen production. Banded iron formations were first discovered in northern Michigan in 1844. Banded iron formations account for more than 60% of global iron reserves and provide most of the iron ore presently mined. Most formations can be found in Australia, Brazil, Canada, India, Russia, South Africa, Ukraine, and the United States. Description A typical banded iron formation consists of repeated, thin layers (a few millimeters to a few centimeters in thickness) of silver to black iron oxides, either magnetite (Fe3O4) or hematite (Fe2O3), alternating with bands of iron-poor chert, often red in color, of similar thickness. A single banded iron formation can be up to several hundred meters in thickness and extend laterally for several hundred kilometers. Banded iron formation is more precisely defined as chemically precipitated sedimentary rock containing greater than 15% iron. However, most BIFs have a higher content of iron, typically around 30% by mass, so that roughly half the rock is iron oxides and the other half is silica. The iron in BIFs is divided roughly equally between the more oxidized ferric form, Fe(III), and the more reduced ferrous form, Fe(II), so that the ratio Fe(III)/Fe(II+III) typically varies from 0.3 to 0.6. This indicates a predominance of magnetite, in which the ratio is 0.67, over hematite, for which the ratio is 1. In addition to the iron oxides (hematite and magnetite), the iron sediment may contain the iron-rich carbonates siderite and ankerite, or the iron-rich silicates minnesotaite and greenalite. Most BIFs are chemically simple, containing little but iron oxides, silica, and minor carbonate, though some contain significant calcium and magnesium, up to 9% and 6.7% as oxides respectively. When used in the singular, the term banded iron formation refers to the sedimentary lithology just described. The plural form, banded iron formations, is used informally to refer to stratigraphic units that consist primarily of banded iron formation. A well-preserved banded iron formation typically consists of macrobands several meters thick that are separated by thin shale beds. The macrobands in turn are composed of characteristic alternating layers of chert and iron oxides, called mesobands, that are several millimeters to a few centimeters thick. Many of the chert mesobands contain microbands of iron oxides that are less than a millimeter thick, while the iron mesobands are relatively featureless. 
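The magnetite and hematite ratios quoted above follow directly from stoichiometry; a brief derivation (standard mineral chemistry, not taken from the article) is:

```latex
% Fe(III) fraction for the two end-member iron oxides
\text{Magnetite: } \mathrm{Fe_3O_4} = \mathrm{Fe^{2+}Fe^{3+}_2O_4}
\;\Rightarrow\; \frac{\mathrm{Fe(III)}}{\mathrm{Fe(II)}+\mathrm{Fe(III)}} = \frac{2}{3} \approx 0.67
\qquad
\text{Hematite: } \mathrm{Fe_2O_3}\ \text{(all ferric)}
\;\Rightarrow\; \frac{\mathrm{Fe(III)}}{\mathrm{Fe(II)}+\mathrm{Fe(III)}} = 1
```

A measured bulk ratio of 0.3 to 0.6, at or below the magnetite value, is therefore consistent with magnetite-dominated iron oxide, possibly accompanied by ferrous carbonates or silicates such as the siderite and greenalite mentioned above, rather than with abundant hematite.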
BIFs tend to be extremely hard, tough, and dense, making them highly resistant to erosion, and they show fine details of stratification over great distances, suggesting they were deposited in a very low-energy environment; that is, in relatively deep water, undisturbed by wave motion or currents. BIFs only rarely interfinger with other rock types, tending to form sharply bounded discrete units that never grade laterally into other rock types. Banded iron formations of the Great Lakes region and the Frere Formation of western Australia are somewhat different in character and are sometimes described as granular iron formations or GIFs. Their iron sediments are granular to oolitic in character, forming discrete grains about a millimeter in diameter, and they lack microbanding in their chert mesobands. They also show more irregular mesobanding, with indications of ripples and other sedimentary structures, and their mesobands cannot be traced out any great distance. Though they form well-defined, discrete units, these are commonly interbedded with coarse to medium-grained epiclastic sediments (sediments formed by weathering of rock). These features suggest a higher energy depositional environment, in shallower water disturbed by wave motions. However, they otherwise resemble other banded iron formations. The great majority of banded iron formations are Archean or Paleoproterozoic in age. However, a small number of BIFs are Neoproterozoic in age, and are frequently, if not universally, associated with glacial deposits, often containing glacial dropstones. They also tend to show a higher level of oxidation, with hematite prevailing over magnetite, and they typically contain a small amount of phosphate, about 1% by mass. Mesobanding is often poor to nonexistent and soft-sediment deformation structures are common. This suggests very rapid deposition. However, like the granular iron formations of the Great Lakes, the Neoproterozoic occurrences are widely described as banded iron formations. Banded iron formations are distinct from most Phanerozoic ironstones. Ironstones are relatively rare and are thought to have been deposited in marine anoxic events, in which the depositional basin became depleted in free oxygen. They are composed of iron silicates and oxides without appreciable chert but with significant phosphorus content, which is lacking in BIFs. No classification scheme for banded iron formations has gained complete acceptance. In 1954, Harold Lloyd James advocated a classification based on four lithological facies (oxide, carbonate, silicate, and sulfide) assumed to represent different depths of deposition, but this speculative model did not hold up. In 1980, Gordon A. Gross advocated a twofold division of BIFs into an Algoma type and a Lake Superior type, based on the character of the depositional basin. Algoma BIFs are found in relatively small basins in association with greywackes and other volcanic rocks and are assumed to be associated with volcanic centers. Lake Superior BIFs are found in larger basins in association with black shales, quartzites, and dolomites, with relatively minor tuffs or other volcanic rocks, and are assumed to have formed on a continental shelf. This classification has been more widely accepted, but the failure to appreciate that it is strictly based on the characteristics of the depositional basin and not the lithology of the BIF itself has led to confusion, and some geologists have advocated for its abandonment. 
However, the classification into Algoma versus Lake Superior types continues to be used. Occurrence Banded iron formations are almost exclusively Precambrian in age, with most deposits dating to the late Archean (2800–2500 Ma) with a secondary peak of deposition in the Orosirian period of the Paleoproterozoic (1850 Ma). Minor amounts were deposited in the early Archean and in the Neoproterozoic (750 Ma). The youngest known banded iron formation is an Early Cambrian formation in western China. Because the processes by which BIFs are formed appear to be restricted to early geologic time, and may reflect unique conditions of the Precambrian world, they have been intensively studied by geologists. Banded iron formations are found worldwide, in every continental shield of every continent. The oldest BIFs are associated with greenstone belts and include the BIFs of the Isua Greenstone Belt, the oldest known, which have an estimated age of 3700 to 3800 Ma. The Temagami banded iron deposits formed over a 50-million-year period, from 2736 to 2687 Ma, and reached a thickness of . Other examples of early Archean BIFs are found in the Abitibi greenstone belts, the greenstone belts of the Yilgarn and Pilbara cratons, the Baltic shield, and the cratons of the Amazon, north China, and south and west Africa. The most extensive banded iron formations belong to what A.F. Trendall calls the Great Gondwana BIFs. These are late Archean in age and are not associated with greenstone belts. They are relatively undeformed and form extensive topographic plateaus, such as the Hamersley Range. The banded iron formations here were deposited from 2470 to 2450 Ma and are the thickest and most extensive in the world, with a maximum thickness in excess of . Similar BIFs are found in the Carajás Formation of the Amazon craton, the Cauê Itabirite of the São Francisco craton, the Kuruman Iron Formation and Penge Iron Formation of South Africa, and the Mulaingiri Formation of India. Paleoproterozoic banded iron formations are found in the Iron Range and other parts of the Canadian Shield. The Iron Range is a group of four major deposits: the Mesabi Range, the Vermilion Range, the Gunflint Range, and the Cuyuna Range. All are part of the Animikie Group and were deposited between 2500 and 1800 Ma. These BIFs are predominantly granular iron formations. Neoproterozoic banded iron formations include the Urucum in Brazil, Rapitan in the Yukon, and the Damara Belt in southern Africa. They are relatively limited in size, with horizontal extents not more than a few tens of kilometers and thicknesses not more than about . These are widely thought to have been deposited under unusual anoxic oceanic conditions associated with the "Snowball Earth." Origins Banded iron formation provided some of the first evidence for the timing of the Great Oxidation Event, 2,400 Ma. With his 1968 paper on the early atmosphere and oceans of the Earth, Preston Cloud established the general framework that has been widely, if not universally, accepted for understanding the deposition of BIFs. Cloud postulated that banded iron formations were a consequence of anoxic, iron-rich waters from the deep ocean welling up into a photic zone inhabited by cyanobacteria that had evolved the capacity to carry out oxygen-producing photosynthesis, but which had not yet evolved enzymes (such as superoxide dismutase) for living in an oxygenated environment. 
Such organisms would have been protected from their own oxygen waste through its rapid removal via the reservoir of reduced ferrous iron, Fe(II), in the early ocean. The oxygen released by photosynthesis oxidized the Fe(II) to ferric iron, Fe(III), which precipitated out of the sea water as insoluble iron oxides that settled to the ocean floor. Cloud suggested that banding resulted from fluctuations in the population of cyanobacteria due to free radical damage by oxygen. This also explained the relatively limited extent of early Archean deposits. The great peak in BIF deposition at the end of the Archean was thought to be the result of the evolution of mechanisms for living with oxygen. This ended self-poisoning and produced a population explosion in the cyanobacteria that rapidly depleted the remaining supply of reduced iron and ended most BIF deposition. Oxygen then began to accumulate in the atmosphere. Some details of Cloud's original model were abandoned. For example, improved dating of Precambrian strata has shown that the late Archean peak of BIF deposition was spread out over tens of millions of years, rather than taking place in a very short interval of time following the evolution of oxygen-coping mechanisms. However, his general concepts continue to shape thinking about the origins of banded iron formations. In particular, the concept of the upwelling of deep ocean water, rich in reduced iron, into an oxygenated surface layer poor in iron remains a key element of most theories of deposition. The few formations deposited after 1,800 Ma may point to intermittent low levels of free atmospheric oxygen, while the small peak at may be associated with the hypothetical Snowball Earth. Formation processes The microbands within chert layers are most likely varves produced by annual variations in oxygen production. Diurnal microbanding would require a very high rate of deposition of 2 meters per year or 5 km/Ma. Estimates of deposition rate based on various models of deposition and sensitive high-resolution ion microprobe (SHRIMP) estimates of the age of associated tuff beds suggest a deposition rate in typical BIFs of 19 to 270 m/Ma, which are consistent either with annual varves or rhythmites produced by tidal cycles. Preston Cloud proposed that mesobanding was a result of self-poisoning by early cyanobacteria as the supply of reduced iron was periodically depleted. Mesobanding has also been interpreted as a secondary structure, not present in the sediments as originally laid down, but produced during compaction of the sediments. Another theory is that mesobands are primary structures resulting from pulses of activity along mid-ocean ridges that change the availability of reduced iron on time scales of decades. In the case of granular iron formations, the mesobands are attributed to winnowing of sediments in shallow water, in which wave action tended to segregate particles of different size and composition. For banded iron formations to be deposited, several preconditions must be met. The deposition basin must contain waters that are ferruginous (rich in iron). This implies they are also anoxic, since ferrous iron oxidizes to ferric iron within hours or days in the presence of dissolved oxygen. This would prevent transport of large quantities of iron from its sources to the deposition basin. The waters must not be euxinic (rich in hydrogen sulfide), since this would cause the ferrous iron to precipitate out as pyrite. 
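The deposition rates quoted earlier in this section follow from a simple unit conversion between band thickness and banding period. The sketch below is illustrative only: the 0.2 mm microband thickness is an assumed value chosen to fall within the "less than a millimeter" description above, not a measurement.

```python
# Convert band thickness per depositional cycle into a deposition rate,
# assuming one band per cycle and ignoring compaction.
def deposition_rate_m_per_ma(band_thickness_mm, bands_per_year):
    metres_per_year = (band_thickness_mm / 1000) * bands_per_year
    return metres_per_year * 1_000_000  # metres per million years

# Annual varves ~0.2 mm thick give ~200 m/Ma, within the 19 to 270 m/Ma
# range estimated from SHRIMP dating of associated tuff beds.
print(deposition_rate_m_per_ma(0.2, 1))    # 200.0

# Bands of the same thickness laid down daily would imply a rate some 365
# times higher, illustrating why diurnal microbanding requires a very high
# rate of deposition.
print(deposition_rate_m_per_ma(0.2, 365))  # 73000.0
```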
Finally, there must be an oxidation mechanism active within the depositional basin that steadily converts the reservoir of ferrous iron to ferric iron. Source of reduced iron There must be an ample source of reduced iron that can circulate freely into the deposition basin. Plausible sources of iron include hydrothermal vents along mid-ocean ridges, windblown dust, rivers, glacial ice, and seepage from continental margins. The importance of various sources of reduced iron has likely changed dramatically across geologic time. This is reflected in the division of BIFs into Algoma and Lake Superior-type deposits. Algoma-type BIFs formed primarily in the Archean. These older BIFs tend to show a positive europium anomaly consistent with a hydrothermal source of iron. By contrast, Lake Superior-type banded iron formations primarily formed during the Paleoproterozoic era, and lack the europium anomalies of the older Algoma-type BIFs, suggesting a much greater input of iron weathered from continents. Absence of oxygen or hydrogen sulfide The absence of hydrogen sulfide in anoxic ocean water can be explained either by reduced sulfur flux into the deep ocean or a lack of dissimilatory sulfate reduction (DSR), the process by which microorganisms use sulfate in place of oxygen for respiration. The product of DSR is hydrogen sulfide, which readily precipitates iron out of solution as pyrite. The requirement of an anoxic, but not euxinic, deep ocean for deposition of banded iron formation suggests two models to explain the end of BIF deposition 1.8 billion years ago. The "Holland ocean" model proposes that the deep ocean became sufficiently oxygenated at that time to end transport of reduced iron. Heinrich Holland argues that the absence of manganese deposits during the pause between Paleoproterozoic and Neoproterozoic BIFs is evidence that the deep ocean had become at least slightly oxygenated. The "Canfield ocean" model proposes that, to the contrary, the deep ocean became euxinic and transport of reduced iron was blocked by precipitation as pyrite. Banded iron formations in northern Minnesota are overlain by a thick layer of ejecta from the Sudbury Basin impact. An asteroid (estimated at across) impacted into waters about deep 1.849 billion years ago, coincident with the pause in BIF deposition. Computer models suggest that the impact would have generated a tsunami at least high at the point of impact, and high about away. It has been suggested that the immense waves and large underwater landslides triggered by the impact caused the mixing of a previously stratified ocean, oxygenated the deep ocean, and ended BIF deposition shortly after the impact. Oxidation Although Cloud argued that microbial activity was a key process in the deposition of banded iron formation, the role of oxygenic versus anoxygenic photosynthesis continues to be debated, and nonbiogenic processes have also been proposed. Oxygenic photosynthesis Cloud's original hypothesis was that ferrous iron was oxidized in a straightforward manner by molecular oxygen present in the water: 4Fe2+ + O2 + 10H2O → 4Fe(OH)3 + 8H+. The oxygen comes from the photosynthetic activities of cyanobacteria. Oxidation of ferrous iron may have been hastened by aerobic iron-oxidizing bacteria, which can increase rates of oxidation by a factor of 50 under conditions of low oxygen. Anoxygenic photosynthesis Oxygenic photosynthesis is not the only biogenic mechanism for deposition of banded iron formations. 
Some geochemists have suggested that banded iron formations could form by direct oxidation of iron by microbial anoxygenic phototrophs. The concentrations of phosphorus and trace metals in BIFs are consistent with precipitation through the activities of iron-oxidizing bacteria. Iron isotope ratios in the oldest banded iron formations (3700-3800 Ma), at Isua, Greenland, are best explained by assuming extremely low oxygen levels (<0.001% of modern O2 levels in the photic zone) and anoxygenic photosynthetic oxidation of Fe(II): 4Fe2+ + CO2 + 11H2O + light → CH2O + 4Fe(OH)3 + 8H+. This requires that dissimilatory iron reduction, the biological process in which microorganisms substitute Fe(III) for oxygen in respiration, was not yet widespread. By contrast, Lake Superior-type banded iron formations show iron isotope ratios that suggest that dissimilatory iron reduction expanded greatly during this period. An alternate route is oxidation by anaerobic denitrifying bacteria. This requires that nitrogen fixation by microorganisms is also active. Abiogenic mechanisms The lack of organic carbon in banded iron formation argues against microbial control of BIF deposition. On the other hand, there is fossil evidence for abundant photosynthesizing cyanobacteria at the start of BIF deposition and of hydrocarbon markers in shales within banded iron formation of the Pilbara craton. The carbon that is present in banded iron formations is enriched in the light isotope, 12C, an indicator of a biological origin. If a substantial part of the original iron oxides was in the form of hematite, then any carbon in the sediments might have been oxidized by the decarbonization reaction: 6Fe2O3 + C → 4Fe3O4 + CO2. Trendall and J.G. Blockley proposed, but later rejected, the hypothesis that banded iron formation might be a peculiar kind of Precambrian evaporite. Other proposed abiogenic processes include radiolysis by the radioactive isotope of potassium, 40K, or annual turnover of basin water combined with upwelling of iron-rich water in a stratified ocean. Another abiogenic mechanism is photooxidation of iron by sunlight. Laboratory experiments suggest that this could produce a sufficiently high deposition rate under likely conditions of pH and sunlight. However, if the iron came from a shallow hydrothermal source, other laboratory experiments suggest that precipitation of ferrous iron as carbonates or silicates could seriously compete with photooxidation. Diagenesis Regardless of the precise mechanism of oxidation, the oxidation of ferrous to ferric iron likely caused the iron to precipitate out as a ferric hydroxide gel. Similarly, the silica component of the banded iron formations likely precipitated as a hydrous silica gel. The conversion of iron hydroxide and silica gels to banded iron formation is an example of diagenesis, the conversion of sediments into solid rock. There is evidence that banded iron formations formed from sediments with nearly the same chemical composition as is found in the BIFs today. The BIFs of the Hamersley Range show great chemical homogeneity and lateral uniformity, with no indication of any precursor rock that might have been altered to the current composition. This suggests that, other than dehydration and decarbonization of the original ferric hydroxide and silica gels, diagenesis likely left the composition unaltered and consisted of crystallization of the original gels. Decarbonization may account for the lack of carbon and preponderance of magnetite in older banded iron formations. 
The relatively high content of hematite in Neoproterozoic BIFs suggests they were deposited very quickly and via a process that did not produce great quantities of biomass, so that little carbon was present to reduce hematite to magnetite. However, it is possible that BIF was altered from carbonate rock or from hydrothermal mud during late stages of diagenesis. A 2018 study found no evidence that magnetite in BIF formed by decarbonization, and suggests that it formed from thermal decomposition of siderite via the reaction 3FeCO3 + H2O → Fe3O4 + 3CO2 + H2. The iron may have originally precipitated as greenalite and other iron silicates. Macrobanding is then interpreted as a product of compaction of the original iron silicate mud. This produced siderite-rich bands that served as pathways for fluid flow and formation of magnetite. The Great Oxidation Event The peak of deposition of banded iron formations in the late Archean, and the end of deposition in the Orosirian, have been interpreted as markers for the Great Oxygenation Event. Prior to 2.45 billion years ago, the high degree of mass-independent fractionation of sulfur (MIF-S) indicates an extremely oxygen-poor atmosphere. The peak of banded iron formation deposition coincides with the disappearance of the MIF-S signal, which is interpreted as the permanent appearance of oxygen in the atmosphere between 2.41 and 2.35 billion years ago. This was accompanied by the development of a stratified ocean with a deep anoxic layer and a shallow oxidized layer. The end of deposition of BIF at 1.85 billion years ago is attributed to the oxidation of the deep ocean. Snowball Earth hypothesis Until 1992 it was assumed that the rare, later (younger) banded iron deposits represented unusual conditions where oxygen was depleted locally. Iron-rich waters would then form in isolation and subsequently come into contact with oxygenated water. The Snowball Earth hypothesis provided an alternative explanation for these younger deposits. In a Snowball Earth state the continents, and possibly seas at low latitudes, were subject to a severe ice age circa 750 to 580 Ma that nearly or totally depleted free oxygen. Dissolved iron then accumulated in the oxygen-poor oceans (possibly from seafloor hydrothermal vents). Following the thawing of the Earth, the seas became oxygenated once more, causing the precipitation of the iron. Banded iron formations of this period are predominantly associated with the Sturtian glaciation. An alternative mechanism for banded iron formations in the Snowball Earth era suggests the iron was deposited from metal-rich brines in the vicinity of hydrothermally active rift zones due to glacially-driven thermal overturn. The limited extent of these BIFs compared with the associated glacial deposits, their association with volcanic formations, and variation in thickness and facies favor this hypothesis. Such a mode of formation does not require a global anoxic ocean, but is consistent with either a Snowball Earth or Slushball Earth model. Economic geology Banded iron formations provide most of the iron ore presently mined. More than 60% of global iron reserves are in the form of banded iron formation, most of which can be found in Australia, Brazil, Canada, India, Russia, South Africa, Ukraine, and the United States. Different mining districts coined their own names for BIFs. 
The term "banded iron formation" was coined in the iron districts of Lake Superior, where the ore deposits of the Mesabi, Marquette, Cuyuna, Gogebic, and Menominee iron ranges were also variously known as "jasper", "jaspilite", "iron-bearing formation", or taconite. Banded iron formations were described as "itabirite" in Brazil, as "ironstone" in South Africa, and as "BHQ" (banded hematite quartzite) in India. Banded iron formation was first discovered in northern Michigan in 1844, and mining of these deposits prompted the earliest studies of BIFs, such as those of Charles R. Van Hise and Charles Kenneth Leith. Iron mining operations on the Mesabi and Cuyuna Ranges evolved into enormous open pit mines, where steam shovels and other industrial machines could remove massive amounts of ore. Initially the mines exploited large beds of hematite and goethite weathered out of the banded iron formations, and some of this "natural ore" had been extracted by 1980. In 1956, large-scale commercial production from the BIF itself began at the Peter Mitchell Mine near Babbitt, Minnesota. Production in Minnesota was of ore concentrate per year in 2016, which is about 75% of total U.S. production. Magnetite-rich banded iron formation, known locally as taconite, is ground to a powder, and the magnetite is separated with powerful magnets and pelletized for shipment and smelting. Iron ore became a global commodity after the Second World War, and with the end of the embargo against exporting iron ore from Australia in 1960, the Hamersley Range became a major mining district. The banded iron formations here are the thickest and most extensive in the world, originally covering an area of and containing about of iron. The range contains 80 percent of all identified iron ore reserves in Australia. Over of iron ore is removed from the range every year. The itabirite banded iron formations of Brazil cover at least and are up to thick. These form the Quadrilatero Ferrifero or Iron Quadrangle, which resembles the Iron Range mines of the United States in that the favored ore is hematite weathered out of the BIFs. Production from the Iron Quadrangle helps make Brazil the second largest producer of iron ore after Australia, with monthly exports averaging from December 2007 to May 2018. Mining of ore from banded iron formations at Anshan in north China began in 1918. When Japan occupied Northeast China in 1931, these mills were turned into a Japanese-owned monopoly, and the city became a significant strategic industrial hub during the Second World War. Total production of processed iron in Manchuria reached in 1931–1932. By 1942, Anshan's Shōwa Steel Works total production capacity reached per annum, making it one of the major iron and steel centers in the world. Production was severely disrupted during the Soviet occupation of Manchuria in 1945 and the subsequent Chinese Civil War. However, from 1948 to 2001, the steel works produced 290 million tons of steel, of pig iron and of rolled steel. Annual production capacity is of pig iron, of steel and of rolled steel. A quarter of China's total iron ore reserves, about , are located in Anshan.
https://en.wikipedia.org/wiki/Donkey
Donkey
The donkey or ass is a domesticated equine. It derives from the African wild ass, Equus africanus, and may be classified either as a subspecies thereof, Equus africanus asinus, or as a separate species, Equus asinus. It was domesticated in Africa some years ago, and has been used mainly as a working animal since that time. There are more than 40 million donkeys in the world, mostly in underdeveloped countries, where they are used principally as draught or pack animals. While working donkeys are often associated with those living at or below subsistence, small numbers of donkeys or asses are kept for breeding, as pets, and for livestock protection in developed countries. An adult male donkey is a jack or jackass, an adult female is a jenny or jennet, and an immature donkey of either sex is a foal. Jacks are often mated with female horses (mares) to produce mules; the less common hybrid of a male horse (stallion) and jenny is a hinny. Nomenclature Traditionally, the scientific name for the donkey is Equus asinus asinus, on the basis of the principle of priority used for scientific names of animals. However, the International Commission on Zoological Nomenclature ruled in 2003 that if the domestic and the wild species are considered subspecies of a common species, the scientific name of the wild species has priority, even when that subspecies was described after the domestic subspecies. This means that the proper scientific name for the donkey is Equus africanus asinus when it is considered a subspecies and Equus asinus when it is considered a species. At one time, the synonym ass was the more common term for the donkey. The first recorded use of donkey was in either 1784 or 1785. While the word ass has cognates in most other Indo-European languages, donkey is an etymologically obscure word for which no credible cognate has been identified. Hypotheses on its derivation include the following: perhaps from Spanish for its don-like gravity; the donkey was also known as "the King of Spain's trumpeter". perhaps a diminutive of dun (dull grayish-brown), a typical donkey colour. perhaps from the name Duncan. perhaps of imitative origin. From the 18th century, donkey gradually replaced ass and jenny replaced she-ass, which is now considered archaic. The change may have come about through a tendency to avoid pejorative terms in speech and may be comparable to the substitution in North American English of rooster for cock, or that of rabbit for coney, which was formerly homophonic with cunny (a variation of the word cunt). By the end of the 17th century, changes in pronunciation of both ass and arse had caused them to become homophones in some varieties of English. Other words used for the ass in English from this time include cuddy in Scotland, neddy in southwestern England and dicky in southeastern England; moke is documented in the 19th century and may be of Welsh or Romani origin. Burro is a word for donkey in both Spanish and Portuguese. In the United States, it is commonly applied to the feral donkeys that live west of the Rocky Mountains; it may also refer to any small donkey. History The genus Equus, which includes all extant equines, is believed to have evolved from Dinohippus, via the intermediate form Plesippus. One of the oldest species is Equus simplicidens, described as zebra-like with a donkey-shaped head. The oldest fossil to date is approximately 3.5 million years old, and was located in the US state of Idaho. 
The genus appears to have spread quickly into the Old World, with the similarly aged Equus livenzovensis documented from western Europe and Russia. Molecular phylogenies indicate the most recent common ancestor of all modern equids (members of the genus Equus) lived ~5.6 (3.9–7.8) mya. Direct paleogenomic sequencing of a 700,000-year-old middle Pleistocene horse metapodial bone from Canada implies a more recent 4.07 Myr before present date for the most recent common ancestor (MRCA) within the range of 4.0 to 4.5 Myr BP. The oldest divergencies are the Asian hemiones (subgenus E. (Asinus), including the kulan, onager, and kiang), followed by the African zebras (subgenera E. (Dolichohippus), and E. (Hippotigris)). All other modern forms including the domesticated horse (and many fossil Pliocene and Pleistocene forms) belong to the subgenus E. (Equus) which diverged ~4.8 (3.2–6.5) million years ago. The ancestors of the modern donkey are the Nubian and Somalian subspecies of African wild ass. Remains of domestic donkeys dating to the fourth millennium BC have been found in Ma'adi in Lower Egypt, and it is believed that the domestication of the donkey was accomplished long after the domestication of cattle, sheep and goats in the seventh and eighth millennia BC. Donkeys were probably first domesticated by pastoral people in Nubia, and they supplanted the ox as the chief pack animal of that culture. The domestication of donkeys served to increase the mobility of pastoral cultures, having the advantage over ruminants of not needing time to chew their cud, and were vital in the development of long-distance trade across Egypt. In the Dynasty IV era of Egypt, between 2675 and 2565 BC, wealthy members of society were known to own over 1,000 donkeys, employed in agriculture, as dairy and meat animals and as pack animals. In 2003, the tomb of either King Narmer or King Hor-Aha (two of the first Egyptian pharaohs) was excavated and the skeletons of ten donkeys were found buried in a manner usually used with high ranking humans. These burials show the importance of donkeys to the early Egyptian state and its ruler. By the end of the fourth millennium BC, the donkey had spread to Southwest Asia, and the main breeding centre had shifted to Mesopotamia by 1800 BC. The breeding of large, white riding asses made Damascus famous, while Syrian breeders developed at least three other breeds, including one preferred by women for its easy gait. The Muscat or Yemen ass was developed in Arabia. By the second millennium BC, the donkey was brought to Europe, possibly at the same time as viticulture was introduced, as the donkey is associated with the Syrian god of wine, Dionysus. Greeks spread both of these to many of their colonies, including those in what are now Italy, France and Spain; Romans dispersed them throughout their empire. The first donkeys came to the Americas on ships of the Second Voyage of Christopher Columbus, and were landed at Hispaniola in 1495. The first to reach North America may have been two animals taken to Mexico by Juan de Zumárraga, the first bishop of Mexico, who arrived there on 6 December 1528, while the first donkeys to reach what is now the United States may have crossed the Rio Grande with Juan de Oñate in April 1598. From that time on they spread northward, finding use in missions and mines. Donkeys were documented as present in what today is Arizona in 1679. 
By the Gold Rush years of the 19th century, the burro was the beast of burden of choice of early prospectors in the western United States. By the end of the placer mining boom, many of them escaped or were abandoned, and a feral population established itself. Conservation status About 41 million donkeys were reported worldwide in 2006. China had the most with 11 million, followed by Pakistan, Ethiopia and Mexico. As of 2017, however, the Chinese population was reported to have dropped to 3 million, with African populations under pressure as well, due to increasing trade and demand for donkey products in China. Some researchers believe the actual number may be somewhat higher since many donkeys go uncounted. The number of breeds and percentage of world population for each of the FAO's world regions was in 2006: In 1997, the number of donkeys in the world was reported to be continuing to grow, as it had steadily done throughout most of history; factors cited as contributing to this were increasing human population, progress in economic development and social stability in some poorer nations, conversion of forests to farm and range land, rising prices of motor vehicles and fuel, and the popularity of donkeys as pets. Since then, the world population of donkeys is reported to be rapidly shrinking, falling from 43.7 million to 43.5 million between 1995 and 2000, and to only 41 million in 2006. The fall in population is pronounced in developed countries; in Europe, the total number of donkeys fell from 3 million in 1944 to just over 1 million in 1994. The Domestic Animal Diversity Information System (DAD-IS) of the FAO listed 189 breeds of ass in June 2011. In 2000 the number of breeds of donkey recorded worldwide was 97, and in 1995 it was 77. The rapid increase is attributed to attention paid to identification and recognition of donkey breeds by the FAO's Animal Genetic Resources project. The rate of recognition of new breeds has been particularly high in some developed countries. In France only one breed, the Baudet du Poitou, was recognised until the early 1990s; by 2005, a further six donkey breeds had official recognition. In developed countries, the welfare of donkeys both at home and abroad has become a concern, and a number of sanctuaries for retired and rescued donkeys have been set up. The largest is The Donkey Sanctuary near Sidmouth, England, which also supports donkey welfare projects in Egypt, Ethiopia, India, Kenya, and Mexico. In 2017, a drop in the number of Chinese donkeys, combined with the fact that they are slow to reproduce, meant that Chinese suppliers began to look to Africa. As a result of the increase in demand, and the price that could be charged, Kenya opened three donkey abattoirs. Concerns for donkeys' well-being, however, have resulted in a number of African countries (including Uganda, Tanzania, Botswana, Niger, Burkina Faso, Mali, and Senegal) banning China from buying their donkey products. In 2019, The Donkey Sanctuary warned that the global donkey population could be reduced by half over the next half decade as the demand for ejiao increases in China. Characteristics Donkeys vary considerably in size, depending on both breed and environmental conditions, and heights at the withers range from less than to approximately . Working donkeys in the poorest countries have a life expectancy of 12 to 15 years; in more prosperous countries, they may have a lifespan of 30 to 50 years. Donkeys are adapted to marginal desert lands. 
Unlike wild and feral horses, wild donkeys in dry areas are solitary and do not form harems. Each adult donkey establishes a home range; breeding over a large area may be dominated by one jack. The loud call or bray of the donkey, which typically lasts for twenty seconds and can be heard for over three kilometres, may help keep in contact with other donkeys over the wide spaces of the desert. Donkeys have large ears, which may pick up more distant sounds, and may help cool the donkey's blood. Donkeys can defend themselves by biting, striking with the front hooves or kicking with the hind legs. Their vocalization, called a bray, is often represented in English as "hee haw". Cross on back Most donkeys have dorsal and shoulder stripes, primitive markings which form a distinctive cross pattern on their backs. Breeding A jenny is normally pregnant for about 12 months, though the gestation period varies from 11 to 14 months, and usually gives birth to a single foal. Births of twins are rare, though less so than in horses. About 1.7 percent of donkey pregnancies result in twins; both foals survive in about 14 percent of those. In general jennies have a conception rate that is lower than that of horses (i.e., less than the 60–65% rate for mares). Although jennies come into heat within 9 or 10 days of giving birth, their fertility remains low, and it is likely the reproductive tract has not returned to normal. Thus it is usual to wait one or two further oestrous cycles before rebreeding, unlike the practice with mares. Jennies are usually very protective of their foals, and some will not come into estrus while they have a foal at side. The time lapse involved in rebreeding, and the length of a jenny's gestation, means that a jenny will have fewer than one foal per year. Because of this and the longer gestation period, donkey breeders do not expect to obtain a foal every year, as horse breeders often do, but may plan for three foals in four years. Donkeys can interbreed with other members of the family Equidae, and are commonly interbred with horses. The hybrid between a jack and a mare is a mule, valued as a working and riding animal in many countries. Some large donkey breeds such as the Asino di Martina Franca, the Baudet du Poitou and the Mammoth Jack are raised only for mule production. The hybrid between a stallion and a jenny is a hinny, and is less common. Like other inter-species hybrids, mules and hinnies are usually sterile. Donkeys can also breed with zebras, in which case the offspring is called a zonkey (among other names). Behaviour Donkeys have a notorious reputation for stubbornness, but this has been attributed to a much stronger sense of self-preservation than exhibited by horses. Likely based on a stronger prey instinct and a weaker connection with humans, it is considerably more difficult to force or frighten a donkey into doing something it perceives to be dangerous for whatever reason. Once a person has earned their confidence they can be willing and companionable partners and very dependable in work. Although formal studies of their behaviour and cognition are rather limited, donkeys appear to be quite intelligent, cautious, friendly, playful, and eager to learn. Use The donkey has been used as a working animal for at least years. Of the more than 40 million donkeys in the world, about 96% are in underdeveloped countries, where they are used principally as pack animals or for draught work in transport or agriculture. 
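Combining the two twin figures given under "Breeding" above gives a rough sense of how rare a surviving pair of twin foals is. This is a minimal sketch using only the percentages quoted there.

```python
# Roughly 1.7% of donkey pregnancies produce twins, and both foals
# survive in roughly 14% of those twin pregnancies.
p_twins = 0.017
p_both_survive_given_twins = 0.14

p_surviving_pair = p_twins * p_both_survive_given_twins
print(round(p_surviving_pair, 4))  # ~0.0024, i.e. roughly 1 pregnancy in 400
```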
After human labour, the donkey is the cheapest form of agricultural power. They may also be ridden, or used for threshing, raising water, milling and other work. Some cultures that prohibit women from working with oxen in agriculture do not extend this taboo to donkeys. In developed countries where their use as beasts of burden has disappeared, donkeys are used to sire mules, to guard sheep, for donkey rides for children or tourists, and as pets. Donkeys may be pastured or stabled with horses and ponies, and are thought to have a calming effect on nervous horses. If a donkey is introduced to a mare and foal, the foal may turn to the donkey for support after it has been weaned from its mother. In the United States, Canada, and Australia, donkeys are used as livestock guard animals for smaller livestock such as sheep. When working as livestock guard animals, also called predator control animals or mobile flock protectors, donkeys will bray loudly and attack potential predators by kicking out with their front hooves. In 2019, donkeys comprised 14.2% of livestock guard animals in the United States. A few donkeys are milked or raised for meat. Approximately 3.5 million donkeys and mules are slaughtered each year for meat worldwide. In Italy, which has the highest consumption of equine meat in Europe and where donkey meat is the main ingredient of several regional dishes, about 1,000 donkeys were slaughtered in 2010, yielding approximately of meat. Asses' milk may command good prices: the average price in Italy in 2009 was €15 per litre, and a price of €6 per 100 ml was reported from Croatia in 2008; it is used for soaps and cosmetics as well as dietary purposes. The niche markets for both milk and meat are expanding. In the past, donkey skin was used in the production of parchment. In 2017, the UK based charity The Donkey Sanctuary estimated that 1.8 million skins were traded every year, but the demand could be as high as 10 million. In China, donkey meat is considered a delicacy with some restaurants specializing in such dishes, and Guo Li Zhuang restaurants offer the genitals of donkeys in dishes. Donkey-hide gelatin is produced by soaking and stewing the hide to make a traditional Chinese medicine product. Ejiao, the gelatine produced by boiling donkey skins, can sell for up to $388 per kilogram, at October 2017 prices. In warfare During World War I John Simpson Kirkpatrick, a British stretcher bearer serving with the Australian and New Zealand Army Corps, and Richard Alexander "Dick" Henderson of the New Zealand Medical Corps used donkeys to rescue wounded soldiers from the battlefield at Gallipoli. According to British food writer Matthew Fort, donkeys were used in the Italian Army. The Mountain Fusiliers each had a donkey to carry their gear, and in extreme circumstances the animal could be eaten. Donkeys have also been used to carry explosives in conflicts that include the war in Afghanistan and others. Care Shoeing Donkey hooves are more elastic than those of horses, and do not naturally wear down as fast. Regular clipping may be required; neglect can lead to permanent damage. Working donkeys may need to be shod. Donkey shoes are similar to horseshoes, but usually smaller and without toe-clips. Nutrition In their native arid and semi-arid climates, donkeys spend more than half of each day foraging and feeding, often on poor quality scrub. 
The donkey has a tough digestive system in which roughage is efficiently broken down by hind gut fermentation, microbial action in the caecum and large intestine. While there is no marked structural difference between the gastro-intestinal tract of a donkey and that of a horse, the digestion of the donkey is more efficient. It needs less food than a horse or pony of comparable height and weight, approximately 1.5 percent of body weight per day in dry matter, compared to the 2–2.5 percent consumption rate possible for a horse. Donkeys are also less prone to colic. The reasons for this difference are not fully understood; the donkey may have different intestinal flora to the horse, or a longer gut retention time. Donkeys obtain most of their energy from structural carbohydrates. Some suggest that a donkey needs to be fed only straw (preferably barley straw), supplemented with controlled grazing in the summer or hay in the winter, to get all the energy, protein, fat and vitamins it requires; others recommend some grain to be fed, particularly to working animals, and others advise against feeding straw. They do best when allowed to consume small amounts of food over long periods. They can meet their nutritional needs on 6 to 7 hours of grazing per day on average dryland pasture that is not stressed by drought. If they are worked long hours or do not have access to pasture, they require hay or a similar dried forage, with no more than a 1:4 ratio of legumes to grass. They also require salt and mineral supplements, and access to clean, fresh water. In temperate climates the forage available is often too abundant and too rich; over-feeding may cause weight gain and obesity, and lead to metabolic disorders such as founder (laminitis) and hyperlipaemia, or to gastric ulcers. Throughout the world, working donkeys are associated with the very poor, with those living at or below subsistence level. Few receive adequate food, and in general donkeys throughout the Third World are under-nourished and over-worked. Feral populations In some areas domestic donkeys have returned to the wild and established feral populations such as those of the burro of North America and the Asinara donkey of Sardinia, Italy, both of which have protected status. Feral donkeys can also cause problems, notably in environments that have evolved free of any form of equid, such as Hawaii. There is a small community of feral donkeys on St. John, U.S. Virgin Islands, that descend from the animals brought by Danish colonists for agricultural work. While they add to the island's charm, they also cause issues like vegetation damage and road hazards, leading to population management efforts. In Australia, where there may be 5 million feral donkeys, they are regarded as an invasive pest and have a serious impact on the environment. They may compete with livestock and native animals for resources, spread weeds and diseases, foul or damage watering holes and cause erosion. Donkey hybrids The earliest documented donkey hybrid was the kunga, which was used as a draft animal in the Syrian and Mesopotamian kingdoms of the second half of the 3rd millennium BCE. A cross between a captive male Syrian wild ass and a female domesticated donkey (jenny), they represent the earliest known example of human-directed animal hybridization. 
They were produced at a breeding center at Nagar (modern Tell Brak) and were sold or given as gifts throughout the region, where they became significant status symbols, pulling battle wagons and the chariots of kings, and also being sacrificed to bury with high-status people. They fell out of favor following the introduction of the domestic horse and its donkey hybrid, the mule, into the region at the end of the 3rd millennium BCE. A male donkey (jack) crossed with a female horse produces a mule, while a male horse crossed with a jenny produces a hinny. Horse–donkey hybrids are almost always sterile because of a failure of their developing gametes to complete meiosis. The lower progesterone production of the jenny may also lead to early embryonic loss. In addition, there are reasons not directly related to reproductive biology. Due to different mating behavior, jacks are often more willing to cover mares than stallions are to breed jennies. Further, mares are usually larger than jennies and thus have more room for the ensuing foal to grow in the womb, resulting in a larger animal at birth. It is commonly believed that mules are more easily handled and also physically stronger than hinnies, making them more desirable for breeders to produce. The offspring of a zebra–donkey cross is called a zonkey, zebroid, zebrass, or zedonk; zebra mule is an older term, but still used in some regions today. The foregoing terms generally refer to hybrids produced by breeding a male zebra to a female donkey. Zebra hinny, zebret and zebrinny all refer to the cross of a female zebra with a male donkey. Zebrinnies are rarer than zedonkies because female zebras in captivity are most valuable when used to produce full-blooded zebras. There are not enough female zebras breeding in captivity to spare them for hybridizing; there is no such limitation on the number of female donkeys breeding.
https://en.wikipedia.org/wiki/Clam
Clam
Clam is a common name for several kinds of bivalve mollusc. The word is often applied only to those that are edible and live as infauna, spending most of their lives halfway buried in the sand of the sea floor or riverbeds. Clams have two shells of equal size connected by two adductor muscles and have a powerful burrowing foot. They live in both freshwater and marine environments; in salt water they prefer to burrow down into the mud and the turbidity of the water required varies with species and location; the greatest diversity of these is in North America. Clams in the culinary sense do not live attached to a substrate (whereas oysters and mussels do) and do not live near the bottom (whereas scallops do). In culinary usage, clams are commonly eaten marine bivalves, as in clam digging and the resulting soup, clam chowder. Many edible clams such as palourde clams are ovoid or triangular; however razor clams have an elongated parallel-sided shell, suggesting an old-fashioned straight razor. Some clams have life cycles of only one year, whilst at least one has been aged to more than 500 years. All clams have two calcareous shells or valves joined near a hinge with a flexible ligament and all are filter feeders. Anatomy A clam's shell consists of two (usually equal) valves, which are connected by a hinge joint and a ligament that can be internal or external. The ligament provides tension to bring the valves apart, whilst one or two adductor muscles can contract to close the valves. Clams also have kidneys, a heart, a mouth, a stomach, and a nervous system. Many have a siphon. Food source and ecology Clams are shellfish that make up an important part of the web of life that keeps the seas functioning, both as filter feeders and as a food source for many different animals. Extant mammals that eat clams include both the Pacific and Atlantic species of walrus, all known subspecies of harbour seals in both the Atlantic and Pacific, most species of sea lions, including the California sea lion, bearded seals and even species of river otters that will consume the freshwater species found in Asia and North America. Birds of all kinds will also eat clams if they can catch them in the littoral zone: roseate spoonbills of North and South America, the Eurasian oystercatcher, whooping crane and common crane, the American flamingo of Florida and the Caribbean Sea, and the common sandpiper are just a handful of the numerous birds that feast on clams all over the world. Most species of octopus have clams as a staple of their diet, up to and including the giants like the Giant Pacific octopus. Culinary Cultures around the world eat clams along with many other types of shellfish. North America In culinary use, within the eastern coast of the United States and large swathes of the Maritimes of Canada, the term "clam" most often refers to the hard clam, Mercenaria mercenaria. It may also refer to a few other common edible species, such as the soft-shell clam, Mya arenaria, and the ocean quahog, Arctica islandica. Another species commercially exploited on the Atlantic Coast of the United States is the surf clam, Spisula solidissima. Scallops are also used for food nationwide, but not cockles: they are more difficult to get than in Europe because of their habit of being further out in the tide than European species on the West Coast, and on the East Coast they are often found in salt marshes and mudflats where mosquitoes are abundant. 
There are several edible species in the Eastern United States: Americardia media, also known as the strawberry cockle, is found from Cape Hatteras down into the Caribbean Sea and all of Florida; Trachycardium muricatum has a similar range to the strawberry cockle; and Dinocardium robustum, which grows to be many times the size of the European cockle. Historically, they were caught on a small scale on the Outer Banks, barrier islands off North Carolina, and put in soups, steamed or pickled. Up and down the coast of the Eastern U.S., the bamboo clam, Ensis directus, is prized by Americans for making clam strips, although because of its nature of burrowing into the sand very close to the beach, it cannot be harvested by mechanical means without damaging the beaches. The bamboo clam is also notorious for having a very sharp edge of its shell, and when harvested by hand must be handled with great care. On the U.S. West Coast, there are several species that have been consumed for thousands of years, evidenced by middens full of clamshells near the shore and their consumption by nations including the Chumash of California, the Nisqually of Washington state and the Tsawwassen of British Columbia. The butter clam, Saxidomus gigantea, the Pacific razor clam, Siliqua patula, gaper clams Tresus capax, the geoduck clam, Panopea generosa and the Pismo clam, Tivela stultorum are all eaten as delicacies. Clams can be eaten raw, steamed, boiled, baked or fried. They can also be made into clam chowder, clams casino, clam cakes, or stuffies, or they can be cooked using hot rocks and seaweed in a New England clam bake. On the West Coast, they are an ingredient in making cioppino and local variants of ceviche. Asia India Clams are eaten more in the coastal regions of India, especially in the Konkan, Kerala, Bengal and coastal regions of Karnataka, Tamil Nadu regions. In Kerala, clams are used to make curries and fried with coconut. In the Malabar region it is known as "elambakka" and in middle kerala it is known as "kakka". Clam curry made with coconut is a dish from Malabar especially in the Thalassery region. On the southwestern coast of India, also known as the Konkan region of Maharashtra, clams are used in curries and side dishes, like Tisaryachi Ekshipi, which is clams with one shell on. Beary Muslim households in the Mangalore region prepare a main dish with clams called Kowldo Pinde. In Udupi and Mangalore regions, it is called in the local Tulu language. It is used to prepare many dishes like , , and . Japan In Japan, clams are often an ingredient of mixed seafood dishes. They can also be made into hot pot, miso soup or tsukudani. The more commonly used varieties of clams in Japanese cooking are the Shijimi (Corbicula japonica), the Asari (Venerupis philippinarum) and the Hamaguri (Meretrix lusoria). Europe Great Britain The rocky terrain and pebbly shores of the seacoast that surrounds the entire island provide ample habitat for shellfish, and clams are most definitely included in that description. The oddity here is that for a nation whose fortunes have been tied to the sea for hundreds of years, 70% of the seafood cultivated for aquaculture or commercial harvesting is exported to the continent. Historically, Britain has been an island most famous for its passion for beef and dairy products, although there is evidence going back to before most recorded history of coastal shell middens near Weymouth and present day York. 
(There is also evidence of more thriving local trade in sea products in general by noting the Worshipful Company of Fishmongers was founded in 1272 in London.) Present-day younger populations are eating more of the catch than a generation ago, and there is a prevalence of YouTube videos of locavore scavenging. Shellfish have provided a staple of the British diet since the earliest occupations of the British Isles, as evidenced by the large numbers of remains found in midden mounds near occupied sites. Staple favourites of the British public and local scavengers include the razorfish, Ensis siliqua, a slightly smaller cousin of the bamboo clam of eastern North America. These can be found for sale in open-air markets like Billingsgate Market in London; they have a similar taste to their North American cousin. Cockles, specifically the common cockle, are a staple find on beaches in western Wales and further north in the Dee Estuary. The accidentally introduced hard-shell quahog is also found in British waters, mainly those near England, and does see some use in British cuisine. The Palourde clam by far is the most common native clam and it is both commercially harvested as well as locally collected, and Spisula solida, a relative of the Atlantic surf clam on the other side of the Atlantic, is seeing increased interest as a food source and aquaculture candidate; it is mainly found in the British Isles in Europe. Italy In Italy, clams are often an ingredient of mixed seafood dishes or are eaten together with pasta. The more commonly used varieties of clams in Italian cooking are the vongola (Venerupis decussata), the cozza (Mytilus galloprovincialis) and the tellina (Donax trunculus). Though dattero di mare (Lithophaga lithophaga) was once eaten, overfishing drove it to the verge of extinction (it takes 15 to 35 years to reach adult size and could only be harvested by smashing the calcarean rocks that form its habitat) and the Italian government has declared it an endangered species since 1998 and its harvest and sale are forbidden. Religion In Islam, clams are halal to eat as per three Sunni sects, but not in Hanafi, as only fish are considered halal in Hanafi jurisprudence, not other aquatic animals. In Judaism, clams are treif, i.e. non-kosher. As currency Some species of clam, particularly Mercenaria mercenaria, were in the past used by the Algonquians of Eastern North America to manufacture wampum, a type of sacred jewellery; and to make shell money. 
Species
Edible:
Ark clams, family Arcidae (most popular in Indonesia and Singapore)
Atlantic jackknife clam: Ensis directus
Atlantic surf clam: Spisula solidissima
Common cockle: Cerastoderma edule (Native to most of Europe, with very large populations in Ireland and Great Britain)
Atlantic Giant Cockle: Dinocardium robustum
Geoduck: Panopea abrupta or Panope generosa (largest burrowing clam in the world)
Gould's razor shell: Solen strictus (popular in Korea, Japan, and Taiwan)
Grooved carpet shell: Ruditapes decussatus
Hard clam or Northern Quahog: Mercenaria mercenaria (Native to Eastern USA and Maritime Canada)
Lyrate Asiatic hard clam: Meretrix lyrata
Manila clam: Venerupis philippinarum
Ocean quahog: Arctica islandica
Pacific razor clam: Siliqua patula
Pipis: Plebidonax deltoides and Paphies australis
Pismo clam: Tivela stultorum
Pod razor clam: Ensis siliqua
Spoot: Ensis magnus
Soft clam: Mya arenaria
Not usually considered edible:
Nut clams or pointed nut clams, family Nuculidae
Duck clams or trough shells, family Mactridae
Marsh clams, family Corbiculidae
File clams, family Limidae
Giant clam: Tridacna gigas (native to East Asia; edible, but should be avoided because of slow reproduction)
Asian or Asiatic clam: genus Corbicula
Peppery furrow shell: Scrobicularia plana
https://en.wikipedia.org/wiki/Personal%20protective%20equipment
Personal protective equipment
Personal protective equipment (PPE) is protective clothing, helmets, goggles, or other garments or equipment designed to protect the wearer's body from injury or infection. The hazards addressed by protective equipment include physical, electrical, heat, chemical, biohazards, and airborne particulate matter. Protective equipment may be worn for job-related occupational safety and health purposes, as well as for sports and other recreational activities. Protective clothing is applied to traditional categories of clothing, and protective gear applies to items such as pads, guards, shields, or masks, and others. PPE suits can be similar in appearance to a cleanroom suit. The purpose of personal protective equipment is to reduce employee exposure to hazards when engineering controls and administrative controls are not feasible or effective to reduce these risks to acceptable levels. PPE is needed when there are hazards present. PPE has the serious limitation that it does not eliminate the hazard at the source and may result in employees being exposed to the hazard if the equipment fails. Any item of PPE imposes a barrier between the wearer/user and the working environment. This can create additional strains on the wearer, impair their ability to carry out their work and create significant levels of discomfort. Any of these can discourage wearers from using PPE correctly, therefore placing them at risk of injury, ill-health or, under extreme circumstances, death. Good ergonomic design can help to minimise these barriers and can therefore help to ensure safe and healthy working conditions through the correct use of PPE. Practices of occupational safety and health can use hazard controls and interventions to mitigate workplace hazards, which pose a threat to the safety and quality of life of workers. The hierarchy of hazard controls provides a policy framework which ranks the types of hazard controls in terms of absolute risk reduction. At the top of the hierarchy are elimination and substitution, which remove the hazard entirely or replace the hazard with a safer alternative. If elimination or substitution measures cannot be applied, engineering controls and administrative controls, which seek to design safer mechanisms and coach safer human behavior, are implemented. Personal protective equipment ranks last on the hierarchy of controls, as the workers are regularly exposed to the hazard, with a barrier of protection. The hierarchy of controls is important in acknowledging that, while personal protective equipment has tremendous utility, it is not the desired mechanism of control in terms of worker safety. History Early PPE such as body armor, boots and gloves focused on protecting the wearer's body from physical injury. The plague doctors of sixteenth-century Europe also wore protective uniforms consisting of a full-length gown, helmet, glass eye coverings, gloves and boots (see Plague doctor costume) to prevent contagion when dealing with plague victims. These were made of thick material which was then covered in wax to make it water-resistant. A mask with a beak-like structure was filled with pleasant-smelling flowers, herbs and spices to prevent the spread of miasma, the prescientific belief that bad smells spread disease through the air. 
In more recent years, scientific personal protective equipment is generally believed to have begun with the cloth facemasks promoted by Wu Lien-teh in the 1910–11 Manchurian pneumonic plague outbreak, although some doctors and scientists of the time doubted the efficacy of facemasks in preventing the spread of that disease because they did not believe it was transmitted through the air. Types Personal protective equipment can be categorized by the area of the body protected, by the type of hazard, and by the type of garment or accessory. A single item, for example boots, may provide multiple forms of protection: a steel toe cap and steel insoles for protection of the feet from crushing or puncture injuries, impervious rubber and lining for protection from water and chemicals, high reflectivity and heat resistance for protection from radiant heat, and high electrical resistivity for protection from electric shock. The protective attributes of each piece of equipment must be compared with the hazards expected to be found in the workplace. More breathable types of personal protective equipment may not lead to more contamination but do result in greater user satisfaction. Respirators Respirators are protective breathing equipment, which protect the user from inhaling contaminants in the air, thus preserving the health of their respiratory tract. There are two main types of respirators. One type functions by filtering out chemicals and gases, or airborne particles, from the air breathed by the user. The filtration may be either passive or active (powered). Gas masks and particulate respirators (like N95 masks) are examples of this type of respirator. A second type protects users by providing clean, respirable air from another source. This type includes airline respirators and self-contained breathing apparatus (SCBA). In work environments, respirators are relied upon when adequate ventilation is not available or other engineering control systems are not feasible or are inadequate. In the United Kingdom, an organization with extensive expertise in respiratory protective equipment is the Institute of Occupational Medicine. This expertise has been built on a long-standing and varied research programme, ranging from the setting of workplace protection factors to the assessment of the efficacy of masks available through high street retail outlets. The Health and Safety Executive (HSE), NHS Health Scotland and Healthy Working Lives (HWL) have jointly developed the web-based RPE (Respiratory Protective Equipment) Selector Tool. This interactive tool provides descriptions of different types of respirators and breathing apparatuses, as well as "dos and don'ts" for each type. In the United States, the National Institute for Occupational Safety and Health (NIOSH) provides recommendations on respirator use, in accordance with NIOSH federal respiratory regulations 42 CFR Part 84. The National Personal Protective Technology Laboratory (NPPTL) of NIOSH is tasked with conducting studies on respirators and providing recommendations. Surgical masks Surgical masks are sometimes considered PPE but are not respirators, as they are unable to stop submicron particles from passing through and have unrestricted air flow at their edges. Surgical masks are not certified for the prevention of tuberculosis. 
Skin protection Occupational skin diseases such as contact dermatitis, skin cancers, and other skin injuries and infections are the second-most common type of occupational disease and can be very costly. Skin hazards, which lead to occupational skin disease, can be classified into four groups. Chemical agents can come into contact with the skin through direct contact with contaminated surfaces, deposition of aerosols, immersion or splashes. Physical agents such as extreme temperatures and ultraviolet or solar radiation can be damaging to the skin over prolonged exposure. Mechanical trauma occurs in the form of friction, pressure, abrasions, lacerations and contusions. Biological agents such as parasites, microorganisms, plants and animals can have varied effects when exposed to the skin. Any form of PPE that acts as a barrier between the skin and the agent of exposure can be considered skin protection. Because much work is done with the hands, gloves are an essential item in providing skin protection. Some examples of gloves commonly used as PPE include rubber gloves, cut-resistant gloves, chainsaw gloves and heat-resistant gloves. For sports and other recreational activities, many different gloves are used for protection, generally against mechanical trauma. Beyond gloves, any other article of clothing or protection worn for this purpose serves to protect the skin. Lab coats, for example, are worn to protect against potential splashes of chemicals. Face shields serve to protect one's face from potential impact hazards, chemical splashes or possible infectious fluid. Many migrant workers need training in PPE for the prevention of heat-related illness (HRI). Research based on study results has identified some potential gaps in heat safety education. While some farm workers reported receiving limited training on pesticide safety, others did not. This could be remedied by incoming groups of farm workers receiving video and in-person training on HRI prevention. These educational programs for farm workers are most effective when they are based on health behavior theories, use adult learning principles and employ train-the-trainer approaches. Eye protection Each day, about 2,000 US workers have a job-related eye injury that requires medical attention. Eye injuries can happen through a variety of means. Most eye injuries occur when solid particles such as metal slivers, wood chips, sand or cement chips get into the eye. Smaller particles in smoke and larger particles such as broken glass also account for particulate-matter eye injuries. Blunt force trauma can occur to the eye when excessive force comes into contact with it. Chemical burns, biological agents, and thermal agents, from sources such as welding torches and UV light, also contribute to occupational eye injury. While the required eye protection varies by occupation, the safety provided can be generalized. Safety glasses provide protection from external debris, and should provide side protection via a wrap-around design or side shields. Goggles provide better protection than safety glasses, and are effective in preventing eye injury from chemical splashes, impact, dusty environments and welding. Goggles with high air flow should be used to prevent fogging. Face shields provide additional protection and are worn over the standard eyewear; they also provide protection from impact, chemical, and blood-borne hazards. 
Full-facepiece respirators are considered the best form of eye protection when respiratory protection is needed as well, but may be less effective against potential impact hazards to the eye. Eye protection for welding is shaded to different degrees, depending on the specific operation. Hearing protection Industrial noise is often overlooked as an occupational hazard, as it is not visible to the eye. Overall, about 22 million workers in the United States are exposed to potentially damaging noise levels each year. Occupational hearing loss accounted for 14% of all occupational illnesses in 2007, with about 23,000 cases significant enough to cause permanent hearing impairment. About 82% of occupational hearing loss cases occurred among workers in the manufacturing sector. In the US, the Occupational Safety and Health Administration establishes occupational noise exposure standards. The National Institute for Occupational Safety and Health recommends that worker exposures to noise be reduced to a level equivalent to 85 dBA for eight hours to reduce occupational noise-induced hearing loss. PPE for hearing protection consists of earplugs and earmuffs. Workers who are regularly exposed to noise levels above the NIOSH recommendation should be provided with hearing protection by their employers, as it is a low-cost intervention. A personal attenuation rating can be objectively measured through a hearing protection fit-testing system. The effectiveness of hearing protection varies with the training offered on its use. Protective clothing and ensembles This form of PPE is all-encompassing and refers to the various suits and uniforms worn to protect the user from harm. Lab coats worn by scientists and ballistic vests worn by law enforcement officials, which are worn on a regular basis, would fall into this category. Entire sets of PPE, worn together in a combined suit, are also in this category. Ensembles Below are some examples of ensembles of personal protective equipment, worn together for a specific occupation or task, to provide maximum protection for the user: PPE gowns are used by medical personnel such as doctors and nurses. Chainsaw protection (especially a helmet with face guard, hearing protection, kevlar chaps, anti-vibration gloves, and chainsaw safety boots). Beekeepers wear various levels of protection depending on the temperament of their bees and the reaction of the bees to nectar availability. At minimum, most beekeepers wear a brimmed hat and a veil made of fine mesh netting. The next level of protection involves leather gloves with long gauntlets and some way of keeping bees from crawling up one's trouser legs. In extreme cases, specially fabricated shirts and trousers can serve as barriers to the bees' stingers. Diving equipment, for underwater diving, constitutes equipment such as a diving helmet or diving mask, an underwater breathing apparatus, and a diving suit. Firefighters wear PPE designed to provide protection against fires and various fumes and gases. PPE worn by firefighters includes bunker gear, self-contained breathing apparatus, a helmet, safety boots, and a PASS device. In sports Participants in sports often wear protective equipment. Studies performed on the injuries of professional athletes, such as those on NFL players, question the effectiveness of existing personal protective equipment. Limits of the definition The definition of what constitutes personal protective equipment varies by country. In the United States, the laws regarding PPE also vary by state. 
In 2011, workplace safety complaints were brought against Hustler and other adult film production companies by the AIDS Healthcare Foundation, leading to several citations brought by Cal/OSHA. The failure of adult film stars to use condoms was a violation of Cal/OSHA's Bloodborne Pathogens Program, Personal Protective Equipment. This example shows that personal protective equipment can cover a variety of occupations in the United States and has a wide-ranging definition. Legislation United States The National Defense Authorization Act for 2022 provides a statutory definition of personal protective equipment. Under this Act, US military services are prohibited from purchasing PPE from suppliers in North Korea, China, Russia or Iran, unless there are problems with the supply or cost of PPE of "satisfactory quality and quantity". European Union At the European Union level, personal protective equipment is governed by Directive 89/686/EEC on personal protective equipment (PPE). The Directive is designed to ensure that PPE meets common quality and safety standards by setting out basic safety requirements for personal protective equipment, as well as conditions for its placement on the market and free movement within the EU single market. It covers "any device or appliance designed to be worn or held by an individual for protection against one or more health and safety hazards". The directive was adopted on 21 January 1989 and came into force on 1 July 1992. The European Commission additionally allowed for a transition period until 30 June 1995 to give companies sufficient time to adapt to the legislation. After this date, all PPE placed on the market in EU Member States was required to comply with the requirements of Directive 89/686/EEC and carry the CE Marking. Article 1 of Directive 89/686/EEC defines personal protective equipment as any device or appliance designed to be worn or held by an individual for protection against one or more health and safety hazards. PPE which falls under the scope of the Directive is divided into three categories: Category I: simple design (e.g. gardening gloves, footwear, ski goggles); Category II: PPE not falling into category I or III (e.g. personal flotation devices, dry and wet suits, motorcycle personal protective equipment); Category III: complex design (e.g. respiratory equipment, harnesses). Directive 89/686/EEC does not distinguish between PPE for professional use and PPE for leisure purposes. Personal protective equipment falling within the scope of the Directive must comply with the basic health and safety requirements set out in Annex II of the Directive. To facilitate conformity with these requirements, harmonized standards are developed at the European or international level by the European Committee for Standardization (CEN, CENELEC) and the International Organization for Standardization in relation to the design and manufacture of the product. Use of the harmonized standards is voluntary and provides a presumption of conformity. However, manufacturers may choose an alternative method of complying with the requirements of the Directive. Personal protective equipment excluded from the scope of the Directive includes: PPE designed for and used by the armed forces or in the maintenance of law and order; PPE for self-defence (e.g. aerosol canisters, personal deterrent weapons); PPE designed and manufactured for personal use against adverse atmospheric conditions (e.g. seasonal clothing, umbrellas), damp and water (e.g. 
dish-washing gloves) and heat; PPE used on vessels and aircraft but not worn at all times; helmets and visors intended for users of two- or three-wheeled motor vehicles. The European Commission is currently working to revise Directive 89/686/EEC. The revision will look at the scope of the Directive, the conformity assessment procedures and technical requirements regarding market surveillance. It will also align the Directive with the New Legislative Framework. The European Commission is likely to publish its proposal in 2013. It will then be discussed by the European Parliament and Council of the European Union under the ordinary legislative procedure before being published in the Official Journal of the European Union and becoming law. Research Research studies in the form of randomized controlled trials and simulation studies are needed to determine the most effective types of PPE for preventing the transmission of infectious diseases to healthcare workers. There is low certainty evidence that supports making improvements or modifications to PPE in order to help decrease contamination. Examples of modifications include adding tabs to masks or gloves to ease removal and designing protective gowns so that gloves are removed at the same time. In addition, there is low certainty evidence that the following PPE approaches or techniques may lead to reduced contamination and improved compliance with PPE protocols: Wearing double gloves, following specific doffing (removal) procedures such as those from the CDC, and providing people with spoken instructions while removing PPE.
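The hearing-protection passage above cites NIOSH's recommended exposure limit of 85 dBA averaged over eight hours. As a rough illustration of how such a limit is applied to other noise levels, the Python sketch below assumes NIOSH's 3-dB exchange rate (each 3 dB increase halves the allowable exposure time); the exchange rate and the function name are assumptions not stated in this article.

```python
def allowable_hours(level_dba: float, rel_dba: float = 85.0, exchange_db: float = 3.0) -> float:
    """Hours of exposure at a constant noise level equivalent to the 85 dBA / 8 h limit.

    Assumes a 3-dB exchange rate: every increase of exchange_db above the
    recommended exposure limit halves the allowable duration.
    """
    return 8.0 / (2 ** ((level_dba - rel_dba) / exchange_db))

if __name__ == "__main__":
    for level in (85, 88, 94, 100):
        print(f"{level} dBA -> {allowable_hours(level):.2f} h allowed")
```

Under these assumptions, 88 dBA would be limited to about four hours and 100 dBA to roughly 15 minutes, which is why hearing protection is recommended for workers regularly exposed above the limit.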
Biology and health sciences
Health and fitness
null
55557
https://en.wikipedia.org/wiki/Timeline
Timeline
A timeline is a list of events displayed in chronological order. It is typically a graphic design showing a long bar labelled with dates paralleling it, and usually contemporaneous events. Timelines can use any suitable scale representing time, suiting the subject and data; many use a linear scale, in which a unit of distance is equal to a set amount of time. This timescale is dependent on the events in the timeline. A timeline of evolution can span millions of years, whereas a timeline for the day of the September 11 attacks can take place over minutes, and that of an explosion over milliseconds. While many timelines use a linear timescale, especially where very large or small timespans are relevant, logarithmic timelines entail a logarithmic scale of time; some "hurry up and wait" chronologies are depicted with zoom lens metaphors. More usually, "timeline" refers merely to a data set which could be displayed as described above. For example, this meaning is used in the titles of many Wikipedia articles starting "Timeline of ..." History Time and space (particularly the line) are intertwined concepts in human thought. The line is ubiquitous in clocks in the form of a circle; time is spoken of in terms of length and intervals, a before and an after. The idea of orderly, segmented time is also represented in almanacs, calendars, charts, graphs, and genealogical and evolutionary trees, where the line is central. Originally, chronological events were arranged in a mostly textual form. This took form in annals, like king lists. Alongside them, the table was used, as in the Greek tables of Olympiads and the Roman lists of consuls and triumphs. Annals had little narrative and noted what happened to people, making no distinction between natural and human actions. In Europe, from the 4th century, the dominant chronological notation was the table. This can be partially credited to Eusebius, who laid out the relations between Jewish, pagan, and Christian histories in parallel columns, culminating in the Roman Empire, into which, according to the Christian view, Christ was born so that salvation could spread as far as possible. His work was widely copied and was among the first printed books. This served the idea of Christian world history and providential time. The table is easy to produce, append, and read with indices, so it also fit the Renaissance scholars' absorption of a wide variety of sources, with its focus on commonalities. These uses made the table, with years in one column and the places of events (kingdoms) across the top, the dominant visual structure of time. By the 17th century, historians had started to claim that chronology and geography were the two sources of precise information which bring order to the chaos of history. In geography, Renaissance mapmakers updated Ptolemy's maps, and the map became a symbol of the power of monarchs and of knowledge. Likewise, the idea that a singular chronology of world history from contemporary sources is possible affected historians. The desire for precision in chronology gave rise to the addition of historical eclipses to tables, as in the case of Gerardus Mercator. Various graphical experiments emerged, from fitting the whole of history onto a calendar year to series of historical drawings, in the hopes of making a metaphorical map of time. Developments in printing and engraving that made larger and more detailed book illustrations practical allowed these changes, but in the 17th century the table, with some modifications, continued to dominate. 
The modern timeline emerged in Joseph Priestley's A Chart of Biography, published in 1765. It presented dates simply and provided an analogue for the concept of historical progress that was becoming popular in the 18th century. However, as Priestley recognized, history is not totally linear. The table has the advantage that it can present many of these intersections and branching paths. For Priestley, its main use was a "mechanical help to the knowledge of history", not an image of history. Regardless, the timeline became very popular during the 18th and 19th centuries. Positivism emerged in the 19th century, and the development of chronophotography and tree-ring analysis made visible time passing at various speeds. This encouraged people to think that events might be truly objectively recorded. However, in some cases, filling in a timeline with more data only pushed it towards impracticality. Jacques Barbeu-Dubourg's 1753 Chronologie Universelle was mounted on a 54-foot-long (16.5 m) scroll. Charles Joseph Minard's 1869 thematic map of casualties of the French army in its Russian campaign put much less focus on the one-directional line. Charles Renouvier's 1876 Uchronie, a branching map of the history of Europe, depicted both the actual course of history and counterfactual paths. At the end of the 19th century, Henri Bergson declared the metaphor of the timeline to be deceiving in Time and Free Will. The question of big history and deep time engendered estranging forms of the timeline, as in Olaf Stapledon's 1930 work Last and First Men, where timelines are drawn on scales from the historical to the cosmological. Similar techniques are used by the Long Now Foundation, and the difficulties of chronological representation have been presented by visual artists including Francis Picabia, On Kawara, J. J. Grandville, and Saul Steinberg. Types There are different types of timelines: text timelines, labeled with text; number timelines, in which the labels are numbers, commonly line graphs; interactive, clickable, zoomable timelines; and video timelines. There are many methods to visualize timelines. Historically, timelines were static images and were generally drawn or printed on paper. Timelines relied heavily on graphic design and the ability of the artist to visualize the data. Uses Timelines are often used in education to help students and researchers understand the order or chronology of historical events and trends for a subject. By showing time on a specific scale on an axis, a timeline can visualize time lapses between events, durations (such as lifetimes or wars), and the simultaneity or the overlap of spans and events. In historical studies Timelines are particularly useful for studying history, as they convey a sense of change over time. Wars and social movements are often shown as timelines. Timelines are also useful for biographies. Examples include the Timeline of the civil rights movement, Timeline of European exploration, Timeline of European imperialism, Timeline of Solar System exploration, Timeline of United States history, Timeline of World War I, List of timelines of World War II, and Timeline of religion. In natural sciences Timelines are also used in the natural world and sciences, such as astronomy, biology, chemistry, and geology; examples include the 2009 swine flu pandemic timeline, the chronology of the universe, the geologic time scale, the timeline of the evolutionary history of life, and the timeline of crystallography. In project management Another type of timeline is used for project management. 
Timelines help team members know what milestones need to be achieved and under what time schedule. An example is establishing a project timeline in the implementation phase of the life cycle of a computer system. Software Timelines, no longer constrained by previous space and functional limitations, are now digital and interactive, generally created with computer software. The Microsoft Encarta encyclopedia provided one of the earliest multimedia timelines intended for students and the general public. ChronoZoom is another example of interactive timeline software.
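As a concrete illustration of the linear scale described above, where a unit of distance corresponds to a set amount of time, the following Python sketch maps event dates to horizontal positions. The event labels, dates, and the pixels-per-day factor are illustrative placeholders, not data from this article.

```python
from datetime import date

def linear_positions(events, px_per_day=0.05):
    """Map (label, date) pairs to x positions measured from the earliest event.

    On a linear timescale, distance is proportional to elapsed time, so each
    event's position is its offset in days multiplied by a fixed scale factor.
    """
    origin = min(d for _, d in events)
    return [(label, (d - origin).days * px_per_day) for label, d in events]

# Placeholder events, for illustration only.
events = [
    ("Event A", date(1900, 1, 1)),
    ("Event B", date(1950, 6, 15)),
    ("Event C", date(2000, 12, 31)),
]

for label, x in linear_positions(events):
    print(f"{label}: {x:.1f} px from the left edge")
```

A logarithmic timeline would replace the proportional offset with a logarithm of the elapsed time, compressing distant intervals relative to recent ones.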
Technology
Timekeeping
null
55584
https://en.wikipedia.org/wiki/Gout
Gout
Gout is a form of inflammatory arthritis characterized by recurrent attacks of pain in a red, tender, hot, and swollen joint, caused by the deposition of needle-like crystals of uric acid known as monosodium urate crystals. Pain typically comes on rapidly, reaching maximal intensity in less than 12 hours. The joint at the base of the big toe is affected (podagra) in about half of cases. It may also result in tophi, kidney stones, or kidney damage. Gout is due to persistently elevated levels of uric acid (urate) in the blood (hyperuricemia). This occurs from a combination of diet, other health problems, and genetic factors. At high levels, uric acid crystallizes and the crystals deposit in joints, tendons, and surrounding tissues, resulting in an attack of gout. Gout occurs more commonly in those who regularly drink beer or sugar-sweetened beverages; eat foods that are high in purines such as liver, shellfish, or anchovies; or are overweight. Diagnosis of gout may be confirmed by the presence of crystals in the joint fluid or in a deposit outside the joint. Blood uric acid levels may be normal during an attack. Treatment with nonsteroidal anti-inflammatory drugs (NSAIDs), glucocorticoids, or colchicine improves symptoms. Once the acute attack subsides, levels of uric acid can be lowered via lifestyle changes, and in those with frequent attacks, allopurinol or probenecid provides long-term prevention. Taking vitamin C and having a diet high in low-fat dairy products may be preventive. Gout affects about 1–2% of adults in the developed world at some point in their lives. It has become more common in recent decades. This is believed to be due to increasing risk factors in the population, such as metabolic syndrome, longer life expectancy, and changes in diet. Older males are most commonly affected. Gout was historically known as "the disease of kings" or "rich man's disease". It has been recognized at least since the time of the ancient Egyptians. Signs and symptoms Gout can present in several ways, although the most common is a recurrent attack of acute inflammatory arthritis (a red, tender, hot, swollen joint). The metatarsophalangeal joint at the base of the big toe is affected most often, accounting for half of cases. Other joints, such as the heels, knees, wrists, and fingers, may also be affected. Joint pain usually begins during the night and peaks within 24 hours of onset. This is mainly due to lower body temperature. Other symptoms may rarely occur along with the joint pain, including fatigue and high fever. Long-standing elevated uric acid levels (hyperuricemia) may result in other symptoms, including hard, painless deposits of uric acid crystals called tophi. Extensive tophi may lead to chronic arthritis due to bone erosion. Elevated levels of uric acid may also lead to crystals precipitating in the kidneys, resulting in kidney stone formation and subsequent acute uric acid nephropathy. Cause The crystallization of uric acid, often related to relatively high levels in the blood, is the underlying cause of gout. This can occur because of diet, genetic predisposition, or underexcretion of urate, the salts of uric acid. Underexcretion of uric acid by the kidney is the primary cause of hyperuricemia in about 90% of cases, while overproduction is the cause in less than 10%. About 10% of people with hyperuricemia develop gout at some point in their lifetimes. The risk, however, varies depending on the degree of hyperuricemia. 
When levels are between 415 and 530 μmol/L (7 and 8.9 mg/dL), the risk is 0.5% per year, while in those with a level greater than 535 μmol/L (9 mg/dL), the risk is 4.5% per year. Lifestyle Dietary causes account for about 12% of gout, and include a strong association with the consumption of alcohol, sugar-sweetened beverages, meat, and seafood. The dietary mechanisms and nutritional factors involved in gout provide evidence for strategies of prevention and improvement, and dietary modifications based on effective regulatory mechanisms may be a promising strategy to reduce its high prevalence. Among the foods richest in purines yielding high amounts of uric acid are dried anchovies, shrimp, organ meat, dried mushrooms, seaweed, and beer yeast. Consumption of chicken and potatoes also appears to be related. Other triggers include physical trauma and surgery. Studies in the early 2000s found that other dietary factors are not relevant. Specifically, a diet with moderate purine-rich vegetables (e.g., beans, peas, lentils, and spinach) is not associated with gout. Neither is total dietary protein. Alcohol consumption is strongly associated with increased risk, with wine presenting somewhat less of a risk than beer or spirits. Eating skim milk powder enriched with glycomacropeptide (GMP) and G600 milk fat extract may reduce pain but may result in diarrhea and nausea. Physical fitness, healthy weight, low-fat dairy products, and to a lesser extent, coffee and taking vitamin C, appear to decrease the risk of gout; however, taking vitamin C supplements does not appear to have a significant effect in people who already have established gout. Peanuts, brown bread, and fruit also appear protective. This is believed to be partly due to their effect in reducing insulin resistance. Other than dietary and lifestyle choices, the recurrence of gout attacks is also linked to the weather. High ambient temperature and low relative humidity may increase the risk of a gout attack. Genetics Gout is partly genetic; genetic factors contribute to about 60% of the variability in uric acid level. The SLC2A9, SLC22A12, and ABCG2 genes have been found to be commonly associated with gout, and variations in them can approximately double the risk. Loss-of-function mutations in SLC2A9 and SLC22A12 cause low blood uric acid levels by reducing urate absorption and unopposed urate secretion. The rare genetic disorders familial juvenile hyperuricemic nephropathy, medullary cystic kidney disease, phosphoribosylpyrophosphate synthetase superactivity, and hypoxanthine-guanine phosphoribosyltransferase deficiency (as seen in Lesch–Nyhan syndrome) are complicated by gout. Medical conditions Gout frequently occurs in combination with other medical problems. Metabolic syndrome, a combination of abdominal obesity, hypertension, insulin resistance, and abnormal lipid levels, occurs in nearly 75% of cases. Other conditions commonly complicated by gout include lead poisoning, kidney failure, hemolytic anemia, psoriasis, solid organ transplants, and myeloproliferative disorders such as polycythemia. A body mass index greater than or equal to 35 increases male risk of gout threefold. Chronic lead exposure and lead-contaminated alcohol are risk factors for gout due to the harmful effect of lead on kidney function. Medication Diuretics have been associated with attacks of gout, but a low dose of hydrochlorothiazide does not seem to increase risk. 
Other medications that increase the risk include niacin, aspirin (acetylsalicylic acid), ACE inhibitors, angiotensin receptor blockers, beta blockers, ritonavir, and pyrazinamide. The immunosuppressive drugs ciclosporin and tacrolimus are also associated with gout, the former more so when used in combination with hydrochlorothiazide. Hyperuricemia may be induced by excessive use of vitamin D supplements. Levels of serum uric acid have been positively associated with 25(OH)D; the incidence of hyperuricemia increased 9.4% for every 10 nmol/L increase in 25(OH)D (P < 0.001). Pathophysiology Gout is a disorder of purine metabolism, and occurs when its final metabolite, uric acid, crystallizes in the form of monosodium urate, precipitating and forming deposits (tophi) in joints, on tendons, and in the surrounding tissues. Microscopic tophi may be walled off by a ring of proteins, which blocks interaction of the crystals with cells and therefore avoids inflammation. Naked crystals may break out of walled-off tophi due to minor physical damage to the joint, medical or surgical stress, or rapid changes in uric acid levels. When they break through the tophi, they trigger a local immune-mediated inflammatory reaction in macrophages, which is initiated by the NLRP3 inflammasome protein complex. Activation of the NLRP3 inflammasome recruits the enzyme caspase 1, which converts pro-interleukin 1β into active interleukin 1β, one of the key proteins in the inflammatory cascade. An evolutionary loss of urate oxidase (uricase), which breaks down uric acid, in humans and higher primates has made this condition common. The triggers for precipitation of uric acid are not well understood. While it may crystallize at normal levels, it is more likely to do so as levels increase. Other triggers believed to be important in acute episodes of arthritis include cool temperatures, rapid changes in uric acid levels, acidosis, articular hydration, and extracellular matrix proteins. The increased precipitation at low temperatures partly explains why the joints in the feet are most commonly affected. Rapid changes in uric acid may occur due to factors including trauma, surgery, chemotherapy, and diuretics. Starting or increasing urate-lowering medications can lead to an acute attack of gout, with febuxostat carrying a particularly high risk. Calcium channel blockers and losartan are associated with a lower risk of gout compared to other medications for hypertension. Diagnosis Gout may be diagnosed and treated without further investigations in someone with hyperuricemia and the classic acute arthritis of the base of the great toe (known as podagra). Synovial fluid analysis should be done if the diagnosis is in doubt. Plain X-rays are usually normal and are not useful for confirming a diagnosis of early gout. They may show signs of chronic gout such as bone erosion. Synovial fluid A definitive diagnosis of gout is based upon the identification of monosodium urate crystals in synovial fluid or a tophus. All synovial fluid samples obtained from undiagnosed inflamed joints by arthrocentesis should be examined for these crystals. Under polarized light microscopy, they have a needle-like morphology and strong negative birefringence. This test is difficult to perform and requires a trained observer. The fluid must be examined relatively soon after aspiration, as temperature and pH affect solubility. 
Blood tests Hyperuricemia is a classic feature of gout, but nearly half of the time gout occurs without hyperuricemia, and most people with raised uric acid levels never develop gout. Thus, the diagnostic utility of measuring uric acid levels is limited. Hyperuricemia is defined as a plasma urate level greater than 420 μmol/L (7.0 mg/dL) in males and 360 μmol/L (6.0 mg/dL) in females. Other blood tests commonly performed are white blood cell count, electrolytes, kidney function, and erythrocyte sedimentation rate (ESR). However, both the white blood cell count and the ESR may be elevated due to gout in the absence of infection. A white blood cell count as high as 40.0×10⁹/L (40,000/mm³) has been documented. Differential diagnosis The most important differential diagnosis in gout is septic arthritis. This should be considered in those with signs of infection or those who do not improve with treatment. To help with diagnosis, a synovial fluid Gram stain and culture may be performed. Other conditions that can look similar include CPPD (pseudogout), rheumatoid arthritis, psoriatic arthritis, palindromic rheumatism, and reactive arthritis. Gouty tophi, in particular when not located in a joint, can be mistaken for basal cell carcinoma or other neoplasms. Prevention The risk of gout attacks can be lowered by complete abstinence from alcoholic beverages and by reducing the intake of fructose (e.g. high fructose corn syrup), sucrose, and purine-rich foods of animal origin, such as organ meats and seafood. Eating dairy products, vitamin C-rich foods, coffee, and cherries may help prevent gout attacks, as does losing weight. Gout may be secondary to sleep apnea via the release of purines from oxygen-starved cells. Treatment of apnea can lessen the occurrence of attacks. Medications As of 2020, allopurinol is generally the recommended preventative treatment if medications are used. A number of other medications may occasionally be considered to prevent further episodes of gout, including probenecid, febuxostat, benzbromarone, and colchicine. Long-term medications are not recommended until a person has had two attacks of gout, unless destructive joint changes, tophi, or urate nephropathy exist; it is not until this point that medications are cost-effective. They are not usually started until one to two weeks after an acute flare has resolved, due to theoretical concerns of worsening the attack. They are often used in combination with either an NSAID or colchicine for the first three to six months. While it has been recommended that urate-lowering measures be increased until serum uric acid levels are below 300–360 μmol/L (5.0–6.0 mg/dL), there is little evidence to support this practice over simply putting people on a standard dose of allopurinol. If these medications are in chronic use at the time of an attack, it is recommended that they be continued. Levels that cannot be brought below 6.0 mg/dL while attacks continue indicate refractory gout. While historically it has not been recommended to start allopurinol during an acute attack of gout, this practice appears acceptable. Allopurinol blocks uric acid production and is the most commonly used agent. Long-term therapy is safe and well tolerated and can be used in people with renal impairment or urate stones, although hypersensitivity occurs in a small number of individuals. 
The HLA-B*58:01 allele of human leukocyte antigen B (HLA-B) is strongly associated with severe cutaneous adverse reactions during treatment with allopurinol and is most common among Asian subpopulations, notably those of Korean, Han Chinese, or Thai descent. Febuxostat is only recommended in those who cannot tolerate allopurinol. There are concerns about more deaths with febuxostat compared to allopurinol. Febuxostat may also increase the rate of gout flares during early treatment. However, there is tentative evidence that febuxostat may bring down urate levels more than allopurinol. Probenecid appears to be less effective than allopurinol and is a second-line agent. Probenecid may be used if undersecretion of uric acid is present (24-hour urine uric acid less than 800 mg). It is, however, not recommended if a person has a history of kidney stones. Combined therapy of probenecid with allopurinol is more effective than allopurinol monotherapy. Pegloticase is an option for the 3% of people who are intolerant to other medications. It is a third-line agent. Pegloticase is given as an intravenous infusion every two weeks, and reduces uric acid levels. Pegloticase is useful for decreasing tophi but has a high rate of side effects, and many people develop resistance to it. Using lesinurad plus febuxostat is more beneficial for tophi resolution than febuxostat alone, with similar side effects. Lesinurad plus allopurinol is not effective for tophi resolution. Potential side effects include kidney stones, anemia, and joint pain. In 2016, it was withdrawn from the European market. Lesinurad reduces blood uric acid levels by preventing uric acid absorption in the kidneys. It was approved in the United States for use together with allopurinol, among those who were unable to reach their uric acid level targets. Side effects include kidney problems and kidney stones. Treatment The initial aim of treatment is to settle the symptoms of an acute attack. Repeated attacks can be prevented by medications that reduce serum uric acid levels. Tentative evidence supports the application of ice for 20 to 30 minutes several times a day to decrease pain. Options for acute treatment include nonsteroidal anti-inflammatory drugs (NSAIDs), colchicine, and glucocorticoids. While glucocorticoids and NSAIDs work equally well, glucocorticoids may be safer. Options for prevention include allopurinol, febuxostat, and probenecid. Lowering uric acid levels can cure the disease. Treatment of associated health problems is also important. Lifestyle interventions have been poorly studied. It is unclear whether dietary supplements have an effect in people with gout. NSAIDs NSAIDs are the usual first-line treatment for gout. No specific agent is significantly more or less effective than any other. Improvement may be seen within four hours, and treatment is recommended for one to two weeks. They are not recommended for those with certain other health problems, such as gastrointestinal bleeding, kidney failure, or heart failure. While indometacin has historically been the most commonly used NSAID, an alternative, such as ibuprofen, may be preferred due to its better side effect profile in the absence of superior effectiveness. For those at risk of gastric side effects from NSAIDs, an additional proton pump inhibitor may be given. There is some evidence that COX-2 inhibitors may work as well as nonselective NSAIDs for acute gout attacks with fewer side effects. 
Colchicine Colchicine is an alternative for those unable to tolerate NSAIDs. At high doses, side effects (primarily gastrointestinal upset) limit its usage. At lower doses, which are still effective, it is well tolerated. Colchicine may interact with other commonly prescribed drugs, such as atorvastatin and erythromycin, among others. Glucocorticoids Glucocorticoids have been found to be as effective as NSAIDs and may be used if contraindications exist for NSAIDs. They also lead to improvement when injected into the joint. A joint infection must be excluded, however, as glucocorticoids worsen this condition. There were no short-term adverse effects reported. Others Interleukin-1 inhibitors, such as canakinumab, showed moderate effectiveness for pain relief and reduction of joint swelling, but have an increased risk of adverse events, such as back pain, headache, and increased blood pressure. They may, however, work less well than usual doses of NSAIDs. The high cost of this class of drugs may also discourage their use for treating gout. Prognosis Without treatment, an acute attack of gout usually resolves in five to seven days; however, 60% of people have a second attack within one year. Those with gout are at increased risk of hypertension, diabetes mellitus, metabolic syndrome, and kidney and cardiovascular disease, and thus are at increased risk of death. It is unclear whether medications that lower urate affect cardiovascular disease risks. This may be partly due to its association with insulin resistance and obesity, but some of the increased risk appears to be independent. Without treatment, episodes of acute gout may develop into chronic gout, with destruction of joint surfaces, joint deformity, and painless tophi. These tophi occur in 30% of those who are untreated for five years, often in the helix of the ear, over the olecranon processes, or on the Achilles tendons. With aggressive treatment, they may dissolve. Kidney stones also frequently complicate gout, affecting between 10 and 40% of people, and occur due to low urine pH promoting the precipitation of uric acid. Other forms of chronic kidney dysfunction may occur. Epidemiology Gout affects around 1–2% of people in the Western world at some point in their lifetimes and is becoming more common. Some 5.8 million people were affected in 2013. Rates of gout approximately doubled between 1990 and 2010. This rise is believed to be due to increasing life expectancy, changes in diet, and an increase in diseases associated with gout, such as metabolic syndrome and high blood pressure. Factors that influence rates of gout include age, race, and the season of the year. In men over 30 and women over 50, rates are 2%. In the United States, gout is twice as likely in males of African descent as in those of European descent. Rates are high among Polynesians, but the disease is rare in Aboriginal Australians, despite a higher mean serum uric acid concentration in the latter group. It has become common in China, Polynesia, and urban Sub-Saharan Africa. Some studies found that attacks of gout occur more frequently in the spring. This has been attributed to seasonal changes in diet, alcohol consumption, physical activity, and temperature. Taiwan, Hong Kong, and Singapore have a relatively high prevalence of gout. A study based on the National Health Insurance Research Database (NHIRD) estimated that 4.92% of Taiwanese residents had gout in 2004. 
A survey conducted by the Hong Kong government found that 5.1% of Hong Kong residents between 45 and 59 years of age, and 6.1% of those older than 60 years, have gout. A study conducted in Singapore found that 2,117 of 52,322 people between 45 and 74 years of age had gout, roughly 4.1%. History The English term "gout" first occurs in the work of Randolphus of Bocking, around 1200 AD. It derives from the Latin word gutta, meaning "a drop" (of liquid). According to the Oxford English Dictionary, this originates from humorism and "the notion of the 'dropping' of a morbid material from the blood in and around the joints". Gout has been known since antiquity. Historically, wits have referred to it as "the king of diseases and the disease of kings" or as "rich man's disease". The Ebers papyrus and the Edwin Smith papyrus each mention arthritis of the first metacarpophalangeal joint as a distinct type of arthritis. These ancient manuscripts cite (now missing) Egyptian texts about gout that are claimed to have been written 1,000 years earlier and ascribed to Imhotep. The Greek physician Hippocrates, around 400 BC, commented on it in his Aphorisms, noting its absence in eunuchs and premenopausal women. Aulus Cornelius Celsus (30 AD) described the linkage with alcohol, later onset in women, and associated kidney problems. Benjamin Welles, an English physician, authored the first medical book on gout, A Treatise of the Gout, or Joint Evil, in 1669. In 1683, Thomas Sydenham, an English physician, described its occurrence in the early hours of the morning and its predilection for older males. In the 18th century, Thomas Marryat distinguished different manifestations of gout, describing it as a chronic disease most commonly affecting the feet, with distinct names for attacks of the knees, the hands, the elbow (onagra), the shoulder, and the back or loins (lumbago). Dutch scientist Antonie van Leeuwenhoek first described the microscopic appearance of urate crystals in 1679. In 1848, English physician Alfred Baring Garrod identified excess uric acid in the blood as the cause of gout. Other animals Gout is rare in most other animals due to their ability to produce uricase, which breaks down uric acid. Humans and other great apes do not have this ability; thus, gout is common. Other animals with uricase include fish, amphibians, and most non-primate mammals. The Tyrannosaurus rex specimen known as "Sue" is believed to have had gout. Research A number of new medications are under study for treating gout, including anakinra, canakinumab, and rilonacept. Canakinumab may result in better outcomes than a low dose of a glucocorticoid, but costs five thousand times more. A recombinant uricase enzyme (rasburicase) is available but its use is limited, as it triggers an immune response. Less antigenic versions are in development.
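The serum urate figures in this article are quoted in both μmol/L and mg/dL (for example, the hyperuricemia cut-offs of 420 μmol/L or 7.0 mg/dL in males and 360 μmol/L or 6.0 mg/dL in females). The following Python sketch shows the unit conversion and applies those cut-offs; the conversion factor is derived from the molar mass of uric acid (about 168.1 g/mol), which is not stated in the article, and the function names are illustrative.

```python
# Approximate conversion: 1 mg/dL of uric acid is about 59.5 umol/L
# (10 mg per litre divided by a molar mass of roughly 168.1 g/mol).
UMOL_PER_MG_DL = 10_000 / 168.1

def mg_dl_to_umol_l(mg_dl: float) -> float:
    """Convert a serum urate concentration from mg/dL to umol/L."""
    return mg_dl * UMOL_PER_MG_DL

def is_hyperuricemic(urate_mg_dl: float, male: bool) -> bool:
    """Apply the cut-offs quoted in the article: >7.0 mg/dL (males), >6.0 mg/dL (females)."""
    return urate_mg_dl > (7.0 if male else 6.0)

if __name__ == "__main__":
    for mg_dl in (6.0, 7.5, 9.0):
        print(f"{mg_dl} mg/dL ~= {mg_dl_to_umol_l(mg_dl):.0f} umol/L, "
              f"hyperuricemic (male): {is_hyperuricemic(mg_dl, male=True)}")
```

With this factor, 9 mg/dL corresponds to roughly 535 μmol/L, matching the threshold quoted in the Cause section.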
Biology and health sciences
Non-infectious disease
null
55602
https://en.wikipedia.org/wiki/Peanut
Peanut
The peanut (Arachis hypogaea), also known as the groundnut, goober (US), goober pea, pindar (US) or monkey nut (UK), is a legume crop grown mainly for its edible seeds. It is widely grown in the tropics and subtropics by small and large commercial producers, both as a grain legume and as an oil crop. Atypically among legumes, peanut pods develop underground, leading botanist Carl Linnaeus to name peanuts hypogaea, which means "under the earth". The peanut belongs to the botanical family Fabaceae (or Leguminosae), commonly known as the legume, bean, or pea family. Like most other legumes, peanuts harbor symbiotic nitrogen-fixing bacteria in root nodules, which improve soil fertility, making them valuable in crop rotations. Despite not meeting the botanical definition of a nut as "a fruit whose ovary wall becomes hard at maturity," peanuts are usually categorized as nuts for culinary purposes and in common English. Some people are allergic to peanuts; peanut allergy can trigger a potentially fatal reaction and is distinct from tree nut allergies. Peanuts are similar in taste and nutritional profile to tree nuts such as walnuts and almonds, and, as a culinary nut, are often served in similar ways in Western cuisines. World production of shelled peanuts in 2020 was 54 million tonnes, led by China with 34% of the total. Botanical description The peanut is an annual herbaceous plant growing tall. As a legume, it belongs to the botanical family Fabaceae, also known as Leguminosae, and commonly known as the legume, bean, or pea family. Like most other legumes, peanuts harbor symbiotic nitrogen-fixing bacteria in their root nodules. The leaves are opposite and pinnate with four leaflets (two opposite pairs; no terminal leaflet); each leaflet is long and across. Like those of many other legumes, the leaves are nyctinastic; that is, they have "sleep" movements, closing at night. The flowers are across, and yellowish orange with reddish veining. They are borne in axillary clusters on the stems above ground and last for just one day. The ovary is located at the base of what appears to be the flower stem but is a highly elongated floral cup. Peanut fruits develop underground, an unusual feature known as geocarpy. After fertilization, a short stalk at the base of the ovary, often termed a gynophore though it appears to be part of the ovary, elongates to form a thread-like structure known as a "peg". This peg grows into the soil, allowing the fruit to develop underground. These pods, technically called legumes, are long, normally containing one to four seeds. The shell of the peanut fruit consists primarily of a mesocarp with several large veins traversing its length. Parts of the peanut include the shell, the outer covering in contact with the soil; the two cotyledons, the main edible part; the seed coat, the brown paper-like covering of the edible part; the radicle, the embryonic root at the bottom of the cotyledons, which can be snapped off; and the plumule, the embryonic shoot emerging from the top of the radicle. Peanut phytochemistry Peanuts contain polyphenols, polyunsaturated and monounsaturated fats, phytosterols and dietary fiber in amounts similar to several tree nuts. Peanut skins contain resveratrol. History The Arachis genus is native to South America, east of the Andes, around Peru, Bolivia, Argentina, and Brazil. Cultivated peanuts (A. hypogaea) arose from a hybrid between two wild species of peanut, thought to be A. duranensis and A. ipaensis. 
The initial hybrid would have been sterile, but spontaneous chromosome doubling restored its fertility, forming what is termed an amphidiploid or allotetraploid. Genetic analysis suggests the hybridization may have occurred only once and gave rise to A. monticola, a wild form of peanut that occurs in a few limited locations in northwestern Argentina or southeastern Bolivia, where the peanut landraces with the most wild-like features are grown today, and, by artificial selection, to A. hypogaea. The process of domestication through artificial selection made A. hypogaea dramatically different from its wild relatives. The domesticated plants are bushier, more compact, and have a different pod structure and larger seeds. From this primary center of origin, cultivation spread and formed secondary and tertiary centers of diversity in Peru, Ecuador, Brazil, Paraguay, and Uruguay. Over time, thousands of peanut landraces evolved; these are classified into six botanical varieties and two subspecies (as listed in the peanut scientific classification table). Subspecies A. h. fastigiata types are more upright in their growth habit and have shorter crop cycles. Subspecies A. h. hypogaea types spread more on the ground and have longer crop cycles. The oldest known archeological remains of pods have been dated at about 7,600 years old, possibly a wild species that was in cultivation, or A. hypogaea in the early phase of domestication. They were found in Peru, where dry climatic conditions are favorable for the preservation of organic material. Almost certainly, peanut cultivation antedated this at the center of origin, where the climate is moister. Many pre-Columbian cultures, such as the Moche, depicted peanuts in their art. Cultivation was well established in Mesoamerica before the Spanish arrived. There, the conquistadors found the peanut (whose Nahuatl name gives the Spanish cacahuate) offered for sale in the marketplace of Tenochtitlan. Its cultivation was introduced to Europe in the 19th century through Spain, particularly Valencia, where it is still produced, albeit marginally. European traders later spread the peanut worldwide, and cultivation is now widespread in tropical and subtropical regions. In West Africa, it substantially replaced a crop plant from the same family, the Bambara groundnut, whose seed pods also develop underground. In Asia, it became an agricultural mainstay, and this region is now the largest producer in the world. Peanuts were introduced to the US during the colonial period and grown as a garden crop. According to Bernard Romans, groundnuts were introduced into colonial East Florida by Black people from Guinea, where the plant is also endemic. Starting in 1870 they were used as an animal feedstock until human consumption grew in the 1930s. George Washington Carver (1864-1943) championed the peanut as part of his efforts for agricultural extension in the American South, where soils were depleted after repeated plantings of cotton. He invented and promulgated hundreds of peanut-based products, including cosmetics, paints, plastics, gasoline and nitroglycerin. The US Department of Agriculture initiated a program to encourage agricultural production and human consumption of peanuts in the late 19th and early 20th centuries. Peanut butter was developed in the 1890s in the US. It became well known after the Beech-Nut company began selling peanut butter at the St. Louis World's Fair of 1904. 
Varieties Cultivars in the United States There are many peanut cultivars grown around the world. The market classes grown in the United States are Spanish, Runner, Virginia, and Valencia. Peanut production in the US is divided into three major areas: the southeastern US region, which includes Alabama, Georgia, and Florida; the southwestern US region, which includes New Mexico, Oklahoma, and Texas; and a third region in the general eastern US, which includes Virginia, North Carolina, and South Carolina. In Georgia, Naomi Chapman Woodroof developed the peanut breeding program, resulting in harvests almost five times greater. Certain cultivar groups are preferred for particular characteristics, such as differences in flavor, oil content, size, shape, and disease resistance. Most peanuts marketed in the shell are of the Virginia type, along with some Valencias selected for large size and the attractive appearance of the shell. Spanish peanuts are used mostly for peanut candy, salted nuts, and peanut butter. Spanish group The small Spanish types are grown in South Africa and the southwestern and southeastern United States. Until 1940, 90% of the peanuts grown in the US state of Georgia were Spanish types, but the trend since then has been toward larger-seeded, higher-yielding, more disease-resistant cultivars. Spanish peanuts have a higher oil content than other types of peanuts. In the US, the Spanish group is primarily grown in New Mexico, Oklahoma, and Texas. Cultivars of the Spanish group include 'Dixie Spanish', 'Improved Spanish 2B', 'GFA Spanish', 'Argentine', 'Spantex', 'Spanette', 'Shaffers Spanish', 'Natal Common (Spanish)', 'White Kernel Varieties', 'Starr', 'Comet', 'Florispan', 'Spanhoma', 'Spancross', 'OLin', 'Tamspan 90', 'AT 9899–14', 'Spanco', 'Wilco I', 'GG 2', 'GG 4', 'TMV 2', and 'Tamnut 06'. Runner group Since 1940, the southeastern US region has seen a shift to producing Runner group peanuts. This shift is due to their good flavor, better roasting characteristics, and higher yields compared to Spanish types, leading food manufacturers to prefer them for peanut butter and salted nuts. Georgia's production is now almost 100% Runner type. Cultivars of Runners include 'Southeastern Runner 56-15', 'Dixie Runner', 'Early Runner', 'Virginia Bunch 67', 'Bradford Runner', 'Egyptian Giant' (also known as 'Virginia Bunch' and 'Giant'), 'Rhodesian Spanish Bunch' (Valencia and Virginia Bunch), 'North Carolina Runner 56-15', 'Florunner', 'Virugard', 'Georgia Green', 'Tamrun 96', 'Flavor Runner 458', 'Tamrun OL01', 'Tamrun OL02', 'AT-120', 'Andru-93', 'Southern Runner', 'AT1-1', 'Georgia Brown', 'GK-7', and 'AT-108'. Virginia group The large-seeded Virginia group peanuts are grown in the US states of Virginia, North Carolina, Tennessee, Texas, New Mexico, Oklahoma, and parts of Georgia. They are increasing in popularity due to the demand for large peanuts for processing, particularly for salting, confections, and roasting in shells. Virginia group peanuts are either bunch or running in growth habit. The bunch type is upright to spreading. It attains a height of , and a spread of , with rows that seldom cover the ground. The pods are borne within of the base of the plant. Cultivars of Virginia-type peanuts include 'NC 7', 'NC 9', 'NC 10C', 'NC-V 11', 'VA 93B', 'NC 12C', 'VA-C 92R', 'Gregory', 'VA 98R', 'Perry', 'Wilson', 'Hull', 'AT VC-2' and 'Shulamit'. Valencia group Valencia group peanuts are coarse and have heavy reddish stems and large foliage. 
In the United States, large commercial production is primarily in the South Plains of West Texas and in eastern New Mexico near and south of Portales, but they are grown on a small scale elsewhere in the South as the best-flavored and preferred type for boiled peanuts. They are comparatively tall, reaching a height of and a spread of . Peanut pods are borne on pegs arising from the main stem and the side branches. Most pods are clustered around the base of the plant, and only a few are found several inches away. Valencia types are three- to five-seeded and smooth, with no constriction of the shell between the seeds. Seeds are oval and tightly crowded into the pods. Typical seed weight is 0.4 to 0.5 g. This type is used heavily for selling roasted and salted in-shell peanuts and peanut butter. Varieties include 'Valencia A' and 'Valencia C'. Tennessee Red and Tennessee White groups These are alike except for the color of the seed. Sometimes known also as Texas Red or White, the plants are similar to Valencia types, except the stems are green to greenish brown, and the pods are rough, irregular, and have a smaller proportion of kernels. Cultivation Peanuts grow best in light, sandy loam soil with a pH of 5.9–7. Their capacity to fix nitrogen means that, provided they nodulate properly, peanuts benefit little or not at all from nitrogen-containing fertilizer, and they improve soil fertility. Therefore, they are valuable in crop rotations. Also, the yield of the peanut crop itself is increased in rotations through reduced diseases, pests, and weeds. For example, in Texas, peanuts in a three-year rotation with corn yield 50% more than nonrotated peanuts. Adequate levels of phosphorus, potassium, calcium, magnesium, and micronutrients are also necessary for good yields. Peanuts need warm weather throughout the growing season to develop well. They can be grown with as little as of water, but for best yields need at least . Depending on growing conditions and the cultivar of peanut, harvest is usually 90 to 130 days after planting for subspecies A. h. fastigiata types, and 120 to 150 days after planting for subspecies A. h. hypogaea types. Subspecies A. h. hypogaea types yield more and are usually preferred where the growing seasons are sufficiently long. Peanut plants continue to produce flowers while pods are developing; therefore, some pods are immature even when the rest are ready for harvest. To maximize yield, the timing of harvest is important. If it is too early, too many pods will be unripe; if too late, the pods will snap off at the stalk and remain in the soil. For harvesting, the entire plant, including most of the roots, is removed from the soil. The pods are covered with a network of raised veins and are constricted between seeds. The main yield-limiting factors in semi-arid regions are drought and high-temperature stress. The stages of reproductive development before flowering, at flowering, and at early pod development are particularly sensitive to these constraints. Apart from nitrogen, phosphorus and potassium, other nutrient deficiencies causing significant yield losses are calcium, iron and boron. Biotic stresses mainly include pests, diseases, and weeds. Among insect pests, pod borers, aphids, and mites are of importance. The most important diseases are leaf spots, rusts, and the toxin-producing fungus Aspergillus. Harvesting occurs in two stages. 
In mechanized systems, a machine is used to cut off the main root of the peanut plant by cutting through the soil just below the level of the peanut pods. The machine lifts the "bush" from the ground, shakes it, then inverts it, leaving the plant upside down to keep the peanuts out of the soil. This allows the peanuts to dry slowly to a little less than a third of their original moisture level over three to four days. Traditionally, peanuts were pulled and inverted by hand. After the peanuts have dried sufficiently, they are threshed, removing the peanut pods from the rest of the bush. Peanuts must be dried properly and stored in dry conditions. If they are too high in moisture, or if storage conditions are poor, they may become infected by the mold fungus Aspergillus flavus. Many strains of this fungus release toxic and highly carcinogenic substances called aflatoxins. Pests and diseases If peanut plants are subjected to severe drought during pod formation, or if pods are not properly stored, they may become contaminated with the mold Aspergillus flavus, which may produce carcinogenic substances called aflatoxins. Lower-quality peanuts, particularly where mold is evident, are more likely to be contaminated. The USDA tests every truckload of raw peanuts for aflatoxin; any containing aflatoxin levels of more than 15 parts per billion are destroyed. The peanut industry has manufacturing steps to ensure all peanuts are inspected for aflatoxin. Peanuts tested to have high aflatoxin are used to make peanut oil, where the mold can be removed. The plant leaves can also be affected by a fungus, Alternaria arachidis. Toxicity Allergies Some people (1.4–2% in Europe and the United States) report that they experience allergic reactions to peanut exposure; symptoms can be especially severe, ranging from watery eyes to anaphylactic shock, which can be fatal if untreated. Eating even a small amount of peanuts can cause a reaction. Because of their widespread use in prepared and packaged foods, avoiding peanuts can be difficult. Reading ingredients and warnings on product packaging is necessary to avoid this allergen. Foods processed in facilities that also handle peanuts on the same equipment as other foods are required to carry such warnings on their labels. Avoiding cross-contamination with peanuts and peanut products (along with other severe allergens like shellfish) is a promoted and common practice of which chefs and restaurants worldwide are becoming aware. The hygiene hypothesis of allergy states that a lack of early childhood exposure to infectious agents like germs and parasites could be causing the increase in food allergies. Studies comparing the age of peanut introduction in Great Britain with that in Israel showed that delaying exposure to peanuts in childhood can dramatically increase the risk of developing peanut allergies. Peanut allergy has been associated with the use of skin preparations containing peanut oil among children, but the evidence is not regarded as conclusive. Peanut allergies have also been associated with family history and intake of soy products. Some school districts in the US and elsewhere have banned peanuts or products containing peanuts. However, the efficacy of the bans in reducing allergic reactions is uncertain. A 2015 study in Canada found no difference in the percentage of accidental exposures occurring in schools prohibiting peanuts compared to schools allowing them. 
Refined peanut oil will not cause allergic reactions in most people with peanut allergies. However, crude (unrefined) peanut oils have been shown to contain protein, which may cause allergic reactions. In a randomized, double-blind crossover study, 60 people with proven peanut allergy were challenged with both crude peanut oil and refined peanut oil. The authors concluded, "Crude peanut oil caused allergic reactions in 10% of allergic subjects studied and should continue to be avoided." They also stated, "Refined peanut oil does not seem to pose a risk to most people with peanut allergy." However, they point out that refined peanut oil can still pose a risk to peanut-allergic individuals if the oil that has previously been used for cooking foods containing peanuts is reused. Uses Nutrition Raw Valencia peanuts are 4% water, 48% fat, 25% protein, and 21% carbohydrates, including 9% dietary fiber (USDA nutrient data). Peanuts are rich in essential nutrients. In a reference amount of , peanuts provide of food energy, and are an excellent source (defined as more than 20% of the Daily Value, DV) of several B vitamins, vitamin E, several dietary minerals, such as manganese (95% DV), magnesium (52% DV) and phosphorus (48% DV), and dietary fiber. The fats are mainly polyunsaturated and monounsaturated (83% of total fats when combined). Some studies show that regular consumption of peanuts is associated with a lower specific risk of mortality from certain diseases. However, the study designs do not allow cause and effect to be inferred. According to the US Food and Drug Administration, "Scientific evidence suggests but does not prove that eating 1.5 ounces per day of most nuts (such as peanuts) as part of a diet low in saturated fat and cholesterol may reduce the risk of heart disease." Culinary Whole peanuts Dry-roasting peanuts is a common form of preparation. Dry peanuts can be roasted in the shell or shelled in a home oven if spread out one layer deep in a pan and baked at a temperature of for 15 to 20 min (shelled) and 20 to 25 min (in shell). Boiled peanuts are a popular snack in India, China, West Africa, and the southern United States. In the US South, boiled peanuts are often prepared in briny water and sold in streetside stands. A distinction can be drawn between raw and green peanuts. A green peanut is a term to describe farm-fresh harvested peanuts that have not been dehydrated. They are available from grocery stores, food distributors, and farmers markets during the growing season. Raw peanuts are also uncooked but have been dried/dehydrated and must be rehydrated before boiling (usually in a bowl full of water overnight). Once rehydrated, the raw peanuts are ready to be boiled. Peanut oil Peanut oil is often used in cooking because it has a mild flavor and a relatively high smoke point. Due to its high monounsaturated content, it is considered more healthful than saturated oils and is resistant to rancidity. The several types of peanut oil include aromatic roasted peanut oil, refined peanut oil, extra virgin or cold-pressed peanut oil, and peanut extract. Refined peanut oil is exempt from allergen labeling laws in the US. A common cooking and salad oil, peanut oil is 46% monounsaturated fats (primarily oleic acid), 32% polyunsaturated fats (primarily linoleic acid), and 17% saturated fats (primarily palmitic acid). 
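As a rough check on the whole-peanut composition quoted above, the energy density per 100 g can be estimated from the macronutrient percentages using generic Atwater factors (about 9 kcal/g for fat, 4 kcal/g for protein and digestible carbohydrate, and roughly 2 kcal/g for fiber); these factors are a standard nutrition-labelling assumption, not values taken from this article, and they tend to slightly overestimate the energy of nuts. A minimal Python sketch of the arithmetic:

# Rough sketch: estimating the energy density of raw peanuts per 100 g
# from the macronutrient percentages quoted above. The Atwater factors below
# are a generic assumption, not figures from this article.
FAT_KCAL_PER_G, PROTEIN_KCAL_PER_G, CARB_KCAL_PER_G, FIBER_KCAL_PER_G = 9, 4, 4, 2

fat_g = 48          # grams per 100 g of raw peanuts
protein_g = 25
carb_g = 21 - 9     # digestible carbohydrate (total carbohydrate minus fiber)
fiber_g = 9

energy_kcal = (fat_g * FAT_KCAL_PER_G + protein_g * PROTEIN_KCAL_PER_G
               + carb_g * CARB_KCAL_PER_G + fiber_g * FIBER_KCAL_PER_G)
print(energy_kcal)  # 598 kcal per 100 g with these generic factors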
Extractable from whole peanuts using a simple water and centrifugation method, the oil is being considered by NASA's Advanced Life Support program for future long-duration human space missions. Peanut butter Peanut butter is a food paste or spread made from ground dry roasted peanuts. It often contains additional ingredients that modify the taste or texture, such as salt, sweeteners, or emulsifiers. Many companies have added twists on traditionally plain peanut butter by adding various flavor varieties, such as chocolate, birthday cake, and cinnamon raisin. Peanut butter is served as a spread on bread, toast or crackers, and used to make sandwiches (notably the peanut butter and jelly sandwich). It is also used in a number of confections, such as peanut-flavored granola bars or croissants and other pastries. The United States is a leading exporter of peanut butter, and itself consumes $800 million of peanut butter annually. Peanut flour Peanut flour is used in gluten-free cooking. Peanut proteins Peanut protein concentrates and isolates are commercially produced from defatted peanut flour using several methods. Peanut flour concentrates (about 70% protein) are produced from dehulled kernels by removing most of the oil and the water-soluble, non-protein components. Hydraulic pressing, screw pressing, solvent extraction, and pre-pressing followed by solvent extraction may be used for oil removal, after which protein isolation and purification are implemented. Latin America Peanuts are particularly common in Peruvian and Mexican cuisine, both of which marry indigenous and European ingredients. For instance, in Peru, a popular traditional dish is picante de cuy, a roasted guinea pig served in a sauce of ground peanuts (ingredients native to South America) with roasted onions and garlic (ingredients from European cuisine). Also, in the Peruvian city of Arequipa, a dish called ocopa consists of a smooth sauce of roasted peanuts and hot peppers (both native to the region) with roasted onions, garlic, and oil, poured over meat or potatoes. Another example is a fricassee combining a similar mixture with sautéed seafood or boiled and shredded chicken. These dishes are generally known as ajíes, meaning "hot peppers", such as ají de pollo and ají de mariscos (seafood ajíes may omit peanuts). In Mexico, it is also used to prepare different traditional dishes, such as chicken in peanut sauce (encacahuatado), and is used as the main ingredient for the preparation of other famous dishes such as red pipián, mole poblano and oaxacan mole negro. Likewise, during colonial times in Peru, the Spanish used peanuts to replace nuts unavailable locally but used extensively in Spanish cuisine, such as almonds and pine nuts, typically ground or as a paste mixed with rice, meats, and vegetables for dishes like rice pilaf. Throughout the region, many candies and snacks are made using peanuts. In Mexico, it is common to find them in different presentations as a snack or candy: salty, "Japanese" peanuts, praline, enchilados or in the form of a traditional sweet made with peanuts and honey called palanqueta, and even as peanut marzipan. There is a similar form of peanut candy in Brazil, called pé-de-moleque, made with peanuts and molasses, which resembles the Indian chikki in form. West Asia Crunchy coated peanuts, called kabukim in Hebrew, are a popular snack in Israel. Kabukim are commonly sold by weight at corner stores where fresh nuts and seeds are sold, though they are also available packaged. 
The coating typically consists of flour, salt, starch, lecithin, and sometimes sesame seeds. The origin of the name is obscure (it may be derived from kabuk, which means nutshell or husk in Turkish). An additional variety of crunchy coated peanuts popular in Israel is "American peanuts". The coating of this variety is thinner but harder to crack. Bamba puffs are a popular snack in Israel. Their shape is similar to Cheez Doodles, but they are made of peanuts and corn. Southeast Asia Peanuts are also widely used in Southeast Asian cuisine, such as in Malaysia, Vietnam, and Indonesia, where they are typically made into a spicy sauce. Peanuts came to Indonesia from the Philippines, where the legume had been introduced from Mexico during the Spanish colonial period. One Philippine dish using peanuts is kare-kare, a mixture of meat and peanut butter. Apart from being used in dishes, fried shelled peanuts are a common inexpensive snack in the Philippines. The peanuts are commonly served plain salted or with garlic chips, with variants including adobo and chili flavors. Common Indonesian peanut-based dishes include gado-gado, pecel, karedok, and ketoprak, vegetable salads mixed with peanut sauce, and the peanut-based sauce, satay. Indian subcontinent In the Indian subcontinent, peanuts are a light snack, usually roasted and salted (sometimes with the addition of chilli powder), and often sold roasted in pods or boiled with salt. They are also made into a dessert or sweet snack, peanut brittle, by processing with refined sugar and jaggery. Indian cuisine uses roasted, crushed peanuts to give a crunchy body to salads; they are added whole (without pods) to leafy vegetable stews for the same reason. Another use is peanut oil for cooking. Most Indians use mustard, sunflower, and peanut oil for cooking. In South India, groundnut chutney is eaten with dosa and idli as breakfast. Peanuts are also used in sweets and savory items in South India and also as a flavor in tamarind rice. Kovilpatti is known for its sweet peanut chikki or peanut brittle, which is also used in savory and sweet mixtures, such as Bombay mix. West Africa Peanuts grow well in southern Mali and adjacent regions of the Ivory Coast, Burkina Faso, Ghana, Nigeria, and Senegal; peanuts are similar in both agricultural and culinary qualities to the Bambara groundnut native to the region, and West Africans have adopted the crop as a staple. Peanut sauce, prepared with onions, garlic, peanut butter/paste, and vegetables such as carrots, cabbage, and cauliflower, can be vegetarian (the peanuts supplying ample protein) or prepared with meat, usually chicken. Peanuts are used in the Malian meat stew maafe. In Ghana, peanut butter is used for peanut butter soup nkate nkwan. Crushed peanuts may also be used for peanut candies nkate cake and kuli-kuli, as well as other local foods such as oto. Peanut butter is an ingredient in Nigeria's "African salad". Peanut powder is an important ingredient in the spicy coating for kebabs (Suya) in Nigeria and Ghana. East Africa Peanuts are a common ingredient of several types of relishes (dishes which accompany nshima) eaten in Malawi, and in the eastern part of Zambia, and these dishes are common throughout both countries. Thick peanut butter sauces are also made in Uganda to serve with rice and other starchy foods. Groundnut stew, called ebinyebwa in Luganda-speaking areas of Uganda, is made by boiling ground peanut flour with other ingredients, such as cabbage, mushrooms, dried fish, meat or other vegetables. 
Across East Africa, roasted peanuts, often in cones of newspaper, are obtained from street vendors. North America The state of Georgia leads the US in peanut production, with 49 percent of the nation's peanut acreage and output. In 2014, farmers cultivated 591,000 acres of peanuts, yielding 2.4 billion pounds. The most famous peanut farmer was Jimmy Carter of Sumter County, Georgia, who was elected US president in 1976. In the US and Canada, peanuts are used in candies, cakes, cookies, and other sweets. Individually, they are eaten dry-roasted with or without salt. Ninety-five percent of Canadians eat peanuts or peanut butter, with the average consumption of of peanuts per person annually, and 79% of Canadians consume peanut butter weekly. In the United States, peanuts and peanut butter are central to American dietary practices, and are typically considered comfort foods. Peanuts were sold at fairs or by pushcart operators through the 19th century. Peanut butter is a common peanut-based food, representing half of the American total peanut consumption and $850 million in annual retail sales. Peanut soup is found on restaurant menus in the southeastern states. In some southern portions of the US, peanuts are boiled for several hours until soft and moist. Peanuts are also deep-fried, sometimes within the shell. Per person, Americans eat of peanut products annually, spending a total of $2 billion in peanut retail purchases. Manufacturing Production In 2020, world production of peanuts (reported as groundnuts in shells) was 54 million tonnes, an 8% increase over 2019 production. China had 34% of global production, followed by India (19%). Other significant producers were Nigeria, the US, and Sudan. Industrial Peanuts have a variety of industrial end uses. Paint, varnish, lubricating oil, leather dressings, furniture polish, insecticides, and nitroglycerin are made from peanut oil. Soap is made from saponified peanut oil, and many cosmetics contain peanut oil and its derivatives. The protein portion is used in the manufacture of some textile fibers. Peanut shells are used in the manufacture of plastic, wallboard, abrasives, fuel, cellulose (used in rayon and paper), and mucilage (glue). Malnutrition With their high protein concentration, peanuts are used to help fight malnutrition. Plumpy'Nut, MANA Nutrition, and Medika Mamba are high-protein, high-energy, and high-nutrient peanut-based pastes developed to be used as a therapeutic food to aid in famine relief. The World Health Organization, UNICEF, Project Peanut Butter, and Doctors Without Borders have used these products to help save malnourished children in developing countries. Peanuts can be used like other legumes and grains to make a lactose-free, milk-like beverage, peanut milk, which is promoted in Africa as a way to reduce malnutrition among children. Animal feed Peanut plant tops and crop residues can be used for silage. The protein cake (oilcake meal) residue from oil processing is used as animal feed and soil fertilizer. Groundnut cake is a livestock feed, mostly fed to cattle as a protein supplement. It is one of the most important and valuable feeds for all types of livestock and one of the most active ingredients for poultry rations. Poor storage of the cake may sometimes result in its contamination by aflatoxin, a naturally occurring mycotoxin that is produced by Aspergillus flavus and Aspergillus parasiticus. The major constituents of the cake include amino acids such as lysine and glutamine. 
Other components are crude fiber, crude protein, and fat. Some peanuts can also be fed whole to livestock, for example, those over the peanut quota in the US or those with a higher aflatoxin content than that permitted by the food regulations. Peanut processing often requires dehulling: the hulls generated in large amounts by the peanut industries can feed livestock, particularly ruminants.
Biology and health sciences
Fabales
null
55623
https://en.wikipedia.org/wiki/Cotyledon
Cotyledon
A cotyledon (from Ancient Greek kotylēdṓn, "a cavity, small cup, any cup-shaped hollow") is a "seed leaf" – a significant part of the embryo within the seed of a plant – and is formally defined as "the embryonic leaf in seed-bearing plants, one or more of which are the first to appear from a germinating seed." Botanists use the number of cotyledons present as one characteristic to classify the flowering plants (angiosperms): species with one cotyledon are called monocotyledonous ("monocots"); plants with two embryonic leaves are termed dicotyledonous ("dicots"). In the case of dicot seedlings whose cotyledons are photosynthetic, the cotyledons are functionally similar to leaves. However, true leaves and cotyledons are developmentally distinct. Cotyledons form during embryogenesis, along with the root and shoot meristems, and are therefore present in the seed prior to germination. True leaves, however, form post-embryonically (i.e. after germination) from the shoot apical meristem, which generates subsequent aerial portions of the plant. The cotyledon of grasses and many other monocotyledons is a highly modified leaf composed of a scutellum and a coleoptile. The scutellum is a tissue within the seed that is specialized to absorb stored food from the adjacent endosperm. The coleoptile is a protective cap that covers the plumule (precursor to the stem and leaves of the plant). Gymnosperm seedlings also have cotyledons. Gnetophytes, cycads, and ginkgos all have two, whereas in conifers they are often variable in number (multicotyledonous), with 2 to 24 cotyledons forming a whorl at the top of the hypocotyl (the embryonic stem) surrounding the plumule. Within each species, there is often still some variation in cotyledon numbers, e.g. Monterey pine (Pinus radiata) seedlings have between 5 and 9, and Jeffrey pine (Pinus jeffreyi) 7 to 13 (Mirov 1967), but other species are more fixed, with e.g. Mediterranean cypress always having just two cotyledons. The highest number reported is for big-cone pinyon (Pinus maximartinezii), with 24 (Farjon & Styles 1997). Cotyledons may be ephemeral, lasting only days after emergence, or persistent, enduring at least a year on the plant. The cotyledons contain (or in the case of gymnosperms and monocotyledons, have access to) the stored food reserves of the seed. As these reserves are used up, the cotyledons may turn green and begin photosynthesis, or may wither as the first true leaves take over food production for the seedling. Epigeal versus hypogeal development Cotyledons may be either epigeal, expanding on the germination of the seed, throwing off the seed shell, rising above the ground, and perhaps becoming photosynthetic; or hypogeal, not expanding, remaining below ground and not becoming photosynthetic. The latter is typically the case where the cotyledons act as a storage organ, as in many nuts and acorns. Hypogeal plants have (on average) significantly larger seeds than epigeal ones. They are also capable of surviving if the seedling is clipped off, as meristem buds remain underground (with epigeal plants, the meristem is clipped off if the seedling is grazed). The tradeoff is whether the plant should produce a large number of small seeds, or a smaller number of seeds which are more likely to survive. The ultimate development of the epigeal habit is represented by a few plants, mostly in the family Gesneriaceae, in which the cotyledon persists for a lifetime. 
Such a plant is Streptocarpus wendlandii of South Africa, in which one cotyledon grows to be up to 75 centimeters (2.5 feet) in length and up to 61 cm (two feet) in width (the largest cotyledon of any dicot, and exceeded only by Lodoicea). Adventitious flower clusters form along the midrib of the cotyledon. The second cotyledon is much smaller and ephemeral. Related plants may show a mixture of hypogeal and epigeal development, even within the same plant family. Groups which contain both hypogeal and epigeal species include, for example, the Southern Hemisphere conifer family Araucariaceae, the pea family, Fabaceae, and the genus Lilium (see Lily seed germination types). The frequently garden-grown common bean, Phaseolus vulgaris, is epigeal, while the closely related runner bean, Phaseolus coccineus, is hypogeal. History The term cotyledon was coined by Marcello Malpighi (1628–1694). John Ray was the first botanist to recognize that some plants have two and others only one, and eventually the first to recognize the immense importance of this fact to systematics, in Methodus plantarum (1682). Theophrastus (3rd or 4th century BC) and Albertus Magnus (13th century) may also have recognized the distinction between the dicotyledons and monocotyledons.
Biology and health sciences
Plant anatomy and morphology: General
Biology
55625
https://en.wikipedia.org/wiki/Monocotyledon
Monocotyledon
Monocotyledons, commonly referred to as monocots (Lilianae sensu Chase & Reveal), are grass and grass-like flowering plants (angiosperms), the seeds of which typically contain only one embryonic leaf, or cotyledon. They constitute one of the major groups into which the flowering plants have traditionally been divided; the rest of the flowering plants have two cotyledons and were classified as dicotyledons, or dicots. Monocotyledons have almost always been recognized as a group, but with various taxonomic ranks and under several different names. The APG III system of 2009 recognises a clade called "monocots" but does not assign it to a taxonomic rank. The monocotyledons include about 70,000 species, about a quarter of all angiosperms. The largest family in this group (and in the flowering plants as a whole) by number of species is the orchids (family Orchidaceae), with more than 20,000 species. About 12,000 species belong to the true grasses (Poaceae), which are economically the most important family of monocotyledons. Often mistaken for grasses, sedges are also monocots. In agriculture the majority of the biomass produced comes from monocotyledons. These include not only major grains (rice, wheat, maize, etc.), but also forage grasses, sugar cane, the bamboos, and many other common food and decorative crops. Description General The monocots or monocotyledons have, as the name implies, a single (mono-) cotyledon, or embryonic leaf, in their seeds. Historically, this feature was used to contrast the monocots with the dicotyledons or dicots, which typically have two cotyledons; however, modern research has shown that the dicots are not a natural group, and the term can only be used to indicate all angiosperms that are not monocots and is used in that respect here. From a diagnostic point of view the number of cotyledons is neither a particularly useful characteristic (as they are only present for a very short period in a plant's life), nor is it completely reliable. The single cotyledon is only one of a number of modifications of the body plan of the ancestral monocotyledons, whose adaptive advantages are poorly understood, but which may have been related to adaptation to aquatic habitats, prior to radiation to terrestrial habitats. Nevertheless, monocots are sufficiently distinctive that there has rarely been disagreement as to membership of this group, despite considerable diversity in terms of external morphology. However, morphological features that reliably characterise major clades are rare. Thus monocots are distinguishable from other angiosperms both in terms of their uniformity and diversity. On the one hand, the organization of the shoots, leaf structure, and floral configuration are more uniform than in the remaining angiosperms, yet within these constraints a wealth of diversity exists, indicating a high degree of evolutionary success. 
Monocot diversity includes perennial geophytes such as ornamental flowers including orchids (Asparagales); tulips and lilies (Liliales); rosette and succulent epiphytes (Asparagales); mycoheterotrophs (Liliales, Dioscoreales, Pandanales), all in the lilioid monocots; major cereal grains (maize, rice, barley, rye, oats, millet, sorghum and wheat) in the grass family; forage grasses (Poales) as well as woody tree-like palm trees (Arecales), bamboo, reeds and bromeliads (Poales), bananas and ginger (Zingiberales) in the commelinid monocots; and aquatic plants, both emergent (Poales, Acorales) and floating or submerged, such as the aroids and seagrasses (Alismatales). Vegetative Organisation, growth and life forms The most important distinction is their growth pattern, lacking a lateral meristem (cambium) that allows for continual growth in diameter with height (secondary growth), and therefore this characteristic is a basic limitation in shoot construction. Although largely herbaceous, some arboraceous monocots reach great height, length and mass. The latter include agaves, palms, pandans, and bamboos. This creates challenges in water transport that monocots deal with in various ways. Some, such as species of Yucca, develop anomalous secondary growth, while palm trees utilise an anomalous primary growth form described as establishment growth (see Vascular system). The axis undergoes primary thickening that progresses from internode to internode, resulting in a typical inverted conical shape of the basal primary axis (see Tillich, Figure 1). The limited conductivity also contributes to limited branching of the stems. Despite these limitations a wide variety of adaptive growth forms has resulted (Tillich, Figure 2), from epiphytic orchids (Asparagales) and bromeliads (Poales) to submarine Alismatales (including the reduced Lemnoideae) and mycotrophic Burmanniaceae (Dioscoreales) and Triuridaceae (Pandanales). Other forms of adaptation include the climbing vines of Araceae (Alismatales), which use negative phototropism (skototropism) to locate host trees (i.e. the darkest area), while some palms such as Calamus manan (Arecales) produce the longest shoots in the plant kingdom, up to 185 m long. Other monocots, particularly Poales, have adopted a therophyte life form. Leaves The cotyledon, the primordial angiosperm leaf, consists of a proximal leaf base or hypophyll and a distal hyperphyll. In monocots the hypophyll tends to be the dominant part, in contrast to other angiosperms. From these, considerable diversity arises. Mature monocot leaves are generally narrow and linear, forming a sheath around the stem at its base, although there are many exceptions. Leaf venation is of the striate type, mainly arcuate-striate or longitudinally striate (parallel), less often palmate-striate or pinnate-striate with the leaf veins emerging at the leaf base and then running together at the apices. There is usually only one leaf per node because the leaf base encompasses more than half the circumference. The evolution of this monocot characteristic has been attributed to developmental differences in early zonal differentiation rather than meristem activity (leaf base theory). Roots and underground organs The lack of cambium in the primary root limits its ability to grow sufficiently to maintain the plant. This necessitates early development of roots derived from the shoot (adventitious roots). In addition to roots, monocots develop runners and rhizomes, which are creeping shoots. 
Runners serve vegetative propagation, have elongated internodes, run on or just below the surface of the soil and in most cases bear scale leaves. Rhizomes frequently have an additional storage function, and rhizome-producing plants are considered geophytes (Tillich, Figure 11). Other geophytes develop bulbs, a short axial body bearing leaves whose bases store food. Additional outer non-storage leaves may form a protective function (Tillich, Figure 12). Other storage organs may be tubers or corms, swollen axes. Tubers may form at the end of underground runners and persist. Corms are short-lived vertical shoots with terminal inflorescences and shrivel once flowering has occurred. However, intermediate forms may occur, such as in Crocosmia (Asparagales). Some monocots may also produce shoots that grow directly down into the soil; these are geophilous shoots (Tillich, Figure 11) that help overcome the limited trunk stability of large woody monocots. Reproductive Flowers In nearly all cases the perigone consists of two alternating trimerous whorls of tepals, being homochlamydeous, without differentiation between calyx and corolla. In zoophilous (pollinated by animals) taxa, both whorls are corolline (petal-like). Anthesis (the period of flower opening) is usually fugacious (short lived). Some of the more persistent perigones demonstrate thermonastic opening and closing (responsive to changes in temperature). About two thirds of monocots are zoophilous, predominantly pollinated by insects. These plants need to advertise to pollinators and do so by way of phaneranthous (showy) flowers. Such optical signalling is usually a function of the tepal whorls but may also be provided by semaphylls (other structures such as filaments, staminodes or stylodia which have become modified to attract pollinators). However, some monocot plants may have aphananthous (inconspicuous) flowers and still be pollinated by animals. In these, the plants either rely on chemical attraction or have other structures, such as coloured bracts, that fulfil the role of optical attraction. In some phaneranthous plants such structures may reinforce floral structures. The production of fragrances for olfactory signalling is common in monocots. The perigone also functions as a landing platform for pollinating insects. Fruit and seed The embryo consists of a single cotyledon, usually with two vascular bundles. Comparison with dicots The traditionally listed differences between monocots and dicots are as follows. This is a broad sketch only, not invariably applicable, as there are a number of exceptions. The differences indicated are more true for monocots versus eudicots. A number of these differences are not unique to the monocots, and, while still useful, no one single feature will infallibly identify a plant as a monocot. For example, trimerous flowers and monosulcate pollen are also found in magnoliids, and exclusively adventitious roots are found in some of the Piperaceae. Similarly, at least one of these traits, parallel leaf veins, is far from universal among the monocots. Broad leaves and reticulate leaf veins, features typical of dicots, are found in a wide variety of monocot families: for example, Trillium, Smilax (greenbriar), Pogonia (an orchid), and the Dioscoreales (yams). Potamogeton and Paris quadrifolia (herb-paris) are examples of monocots with tetramerous flowers. Other plants exhibit a mixture of characteristics. Nymphaeaceae (water lilies) have reticulate veins, a single cotyledon, adventitious roots, and a monocot-like vascular bundle. 
These examples reflect their shared ancestry. Nevertheless, this list of traits is generally valid, especially when contrasting monocots with eudicots, rather than non-monocot flowering plants in general. Apomorphies Monocot apomorphies (characteristics derived during radiation rather than inherited from an ancestral form) include herbaceous habit, leaves with parallel venation and sheathed base, an embryo with a single cotyledon, an atactostele, numerous adventitious roots, sympodial growth, and trimerous (3 parts per whorl) flowers that are pentacyclic (5 whorled) with 3 sepals, 3 petals, 2 whorls of 3 stamens each, and 3 carpels. In contrast, monosulcate pollen is considered an ancestral trait, probably plesiomorphic. Synapomorphies The distinctive features of the monocots have contributed to the relative taxonomic stability of the group. Douglas E. Soltis and others identify thirteen synapomorphies (shared characteristics that unite monophyletic groups of taxa): calcium oxalate raphides; absence of vessels in leaves; monocotyledonous anther wall formation*; successive microsporogenesis; syncarpous gynoecium; parietal placentation; monocotyledonous seedling; persistent radicle; haustorial cotyledon tip; open cotyledon sheath; steroidal saponins*; fly pollination*; and diffuse vascular bundles and absence of secondary growth. Vascular system Monocots have a distinctive arrangement of vascular tissue known as an atactostele in which the vascular tissue is scattered rather than arranged in concentric rings. Collenchyma is absent in monocot stems, roots and leaves. Many monocots are herbaceous and do not have the ability to increase the width of a stem (secondary growth) via the same kind of vascular cambium found in non-monocot woody plants. However, some monocots do have secondary growth; because this does not arise from a single vascular cambium producing xylem inwards and phloem outwards, it is termed "anomalous secondary growth". Examples of large monocots which either exhibit secondary growth, or can reach large sizes without it, are palms (Arecaceae), screwpines (Pandanaceae), bananas (Musaceae), Yucca, Aloe, Dracaena, and Cordyline. Taxonomy The monocots form one of five major lineages of mesangiosperms (core angiosperms), which in themselves form 99.95% of all angiosperms. The monocots and the eudicots are the largest and most diversified angiosperm radiations, accounting for 22.8% and 74.2% of all angiosperm species respectively. Of these, the grass family (Poaceae) is the most economically important; together with the orchids (Orchidaceae) it accounts for half of monocot species diversity, the orchids and grasses making up about 34% and 17% of all monocots respectively, and both are among the largest families of angiosperms. They are also among the dominant members of many plant communities. Early history Pre-Linnean The monocots are one of the major divisions of the flowering plants or angiosperms. They have been recognized as a natural group since the sixteenth century when Lobelius (1571), searching for a characteristic to group plants by, decided on leaf form and their venation. He observed that the majority had broad leaves with net-like venation, but a smaller group were grass-like plants with long straight parallel veins. In doing so he distinguished between the dicotyledons and the latter (grass-like) monocotyledon group, although he had no formal names for the two groups. Formal description dates from John Ray's studies of seed structure in the 17th century. 
Ray, who is often considered the first botanical systematist, observed the dichotomy of cotyledon structure in his examination of seeds. He reported his findings in a paper read to the Royal Society on 17 December 1674, entitled "A Discourse on the Seeds of Plants". Since this paper appeared a year before the publication of Malpighi's Anatome Plantarum (1675–1679), Ray has the priority. At the time, Ray did not fully realise the importance of his discovery but progressively developed this over successive publications. And since these were in Latin, "seed leaves" became folia seminalia and then cotyledon, following Malpighi. Malpighi and Ray were familiar with each other's work, and Malpighi in describing the same structures had introduced the term cotyledon, which Ray adopted in his subsequent writing. In his experiments, Malpighi also showed that the cotyledons were critical to the development of the plant, proof that Ray required for his theory. In his Methodus plantarum nova Ray also developed and justified the "natural" or pre-evolutionary approach to classification, based on characteristics selected a posteriori in order to group together taxa that have the greatest number of shared characteristics. This approach, also referred to as polythetic, would last until evolutionary theory enabled Eichler to develop the phyletic system that superseded it in the late nineteenth century, based on an understanding of the acquisition of characteristics. He also made the crucial observation: Ex hac seminum divisione sumum potest generalis plantarum distinctio, eaque meo judicio omnium prima et longe optima, in eas sci. quae plantula seminali sunt bifolia aut διλόβω, et quae plantula sem. adulta analoga. (From this division of the seeds derives a general distinction amongst plants, that in my judgement is first and by far the best, into those seed plants which are bifoliate, or bilobed, and those that are analogous to the adult), that is, between monocots and dicots. He illustrated this by quoting from Malpighi and including reproductions of Malpighi's drawings of cotyledons (see figure). Initially Ray did not develop a classification of flowering plants (florifera) based on a division by the number of cotyledons, but developed his ideas over successive publications, coining the terms Monocotyledones and Dicotyledones in 1703, in the revised version of his Methodus (Methodus plantarum emendata), as a primary method for dividing them: Herbae floriferae, dividi possunt, ut diximus, in Monocotyledones & Dicotyledones (Flowering plants can be divided, as we have said, into Monocotyledons & Dicotyledons). Post Linnean Although Linnaeus (1707–1778) did not utilise Ray's discovery, basing his own classification solely on floral reproductive morphology, the term was used shortly after his classification appeared (1753) by Scopoli, who is credited with its introduction. Every taxonomist since then, starting with De Jussieu and De Candolle, has used Ray's distinction as a major classification characteristic. In De Jussieu's system (1789), he followed Ray, arranging his Monocotyledones into three classes based on stamen position and placing them between Acotyledones and Dicotyledones. De Candolle's system (1813), which was to dominate thinking through much of the 19th century, used a similar general arrangement, with two subgroups of his Monocotylédonés (Monocotyledoneae). Lindley (1830) followed De Candolle in using the terms Monocotyledon and Endogenae interchangeably. 
They considered the monocotyledons to be a group of vascular plants (Vasculares) whose vascular bundles were thought to arise from within (Endogènes or endogenous). Monocotyledons remained in a similar position as a major division of the flowering plants throughout the nineteenth century, with minor variations. George Bentham and Hooker (1862–1883) used Monocotyledones, as would Wettstein, while August Eichler used Monocotyleae and Engler, following de Candolle, Monocotyledoneae. In the twentieth century, some authors used alternative names such as Bessey's (1915) Alternifoliae and Cronquist's (1966) Liliatae. Later (1981) Cronquist changed Liliatae to Liliopsida, usages also adopted by Takhtajan simultaneously. Thorne (1992) and Dahlgren (1985) also used Liliidae as a synonym. Taxonomists had considerable latitude in naming this group, as the Monocotyledons were a group above the rank of family. Article 16 of the ICBN allows either a descriptive botanical name or a name formed from the name of an included family. In summary, they have been variously named as follows: class Monocotyledoneae in the de Candolle system and the Engler system; class Monocotyledones in the Bentham & Hooker system and the Wettstein system; class Monocotyleae in the Eichler system; class Liliatae, then Liliopsida, in the Takhtajan system and the Cronquist system; and subclass Liliidae in the Dahlgren system and the Thorne system. Modern era Over the 1980s, a more general review of the classification of angiosperms was undertaken. The 1990s saw considerable progress in plant phylogenetics and cladistic theory, initially based on rbcL gene sequencing and cladistic analysis, enabling a phylogenetic tree to be constructed for the flowering plants. The establishment of major new clades necessitated a departure from the older but widely used classifications such as Cronquist and Thorne, based largely on morphology rather than genetic data. These developments complicated discussions on plant evolution and necessitated a major taxonomic restructuring. This DNA-based molecular phylogenetic research confirmed on the one hand that the monocots remained a well-defined monophyletic group or clade, in contrast to the other historical divisions of the flowering plants, which had to be substantially reorganized. No longer could the angiosperms be simply divided into monocotyledons and dicotyledons; it was apparent that the monocotyledons were but one of a relatively large number of defined groups within the angiosperms. Correlation with morphological criteria showed that the defining feature was not cotyledon number but the separation of angiosperms into two major pollen types, uniaperturate (monosulcate and monosulcate-derived) and triaperturate (tricolpate and tricolpate-derived), with the monocots situated within the uniaperturate groups. The formal taxonomic ranking of Monocotyledons was thus replaced by monocots as an informal clade. This is the name that has been most commonly used since the publication of the Angiosperm Phylogeny Group (APG) system in 1998 and regularly updated since. Within the angiosperms, there are two major grades, a small early branching basal grade, the basal angiosperms (ANA grade) with three lineages and a larger late branching grade, the core angiosperms (mesangiosperms) with five lineages, as shown in the cladogram. 
Subdivision While the monocotyledons have remained extremely stable in their outer borders as a well-defined and coherent monophyletic group, the deeper internal relationships have undergone considerable flux, with many competing classification systems over time. Historically, Bentham (1877) considered the monocots to consist of four alliances, Epigynae, Coronariae, Nudiflorae and Glumales, based on floral characteristics. He describes the attempts to subdivide the group since the days of Lindley as largely unsuccessful. Like most subsequent classification systems it failed to distinguish between two major orders, Liliales and Asparagales, now recognised as quite separate. A major advance in this respect was the work of Rolf Dahlgren (1980), which would form the basis of the Angiosperm Phylogeny Group's (APG) subsequent modern classification of monocot families. Dahlgren, who used the alternate name Liliidae, considered the monocots a subclass of angiosperms characterised by a single cotyledon and the presence of triangular protein bodies in the sieve tube plastids. He divided the monocots into seven superorders, Alismatiflorae, Ariflorae, Triuridiflorae, Liliiflorae, Zingiberiflorae, Commeliniflorae and Areciflorae. With respect to the specific issue regarding Liliales and Asparagales, Dahlgren followed Huber (1969) in adopting a splitter approach, in contrast to the longstanding tendency to view Liliaceae as a very broad sensu lato family. Following Dahlgren's untimely death in 1987, his work was continued by his widow, Gertrud Dahlgren, who published a revised version of the classification in 1989. In this scheme the suffix -florae was replaced with -anae (e.g. Alismatanae) and the number of superorders expanded to ten with the addition of Bromelianae, Cyclanthanae and Pandananae. Molecular studies have both confirmed the monophyly of the monocots and helped elucidate relationships within this group. The APG system does not assign the monocots to a taxonomic rank, instead recognizing a monocots clade. However, there has remained some uncertainty regarding the exact relationships between the major lineages, with a number of competing models (including APG). The APG system establishes eleven orders of monocots. These form three grades, the alismatid monocots, lilioid monocots and the commelinid monocots, by order of branching from early to late. In the following cladogram numbers indicate crown group (most recent common ancestor of the sampled species of the clade of interest) divergence times in mya (million years ago). Of some 70,000 species, by far the largest number (65%) are found in two families, the orchids and grasses. The orchids (Orchidaceae, Asparagales) contain about 25,000 species and the grasses (Poaceae, Poales) about 11,000. Other well known groups within the Poales order include the Cyperaceae (sedges) and Juncaceae (rushes), and the monocots also include familiar families such as the palms (Arecaceae, Arecales) and lilies (Liliaceae, Liliales). Evolution In prephyletic classification systems monocots were generally positioned between plants other than angiosperms and dicots, implying that monocots were more primitive. 
With the introduction of phyletic thinking in taxonomy (from the system of Eichler 1875–1878 onwards) the predominant theory of monocot origins was the ranalean (ranalian) theory, particularly in the work of Bessey (1915), which traced the origin of all flowering plants to a Ranalean type, and reversed the sequence making dicots the more primitive group. The monocots form a monophyletic group arising early in the history of the flowering plants, but the fossil record is meagre. The earliest fossils presumed to be monocot remains date from the early Cretaceous period. For a very long time, fossils of palm trees were believed to be the oldest monocots, first appearing 90 million years ago (mya), but this estimate may not be accurate. At least some putative monocot fossils have been found in strata as old as the eudicots. The oldest fossils that are unequivocally monocots are pollen from the Late Barremian–Aptian (Early Cretaceous), about 120–110 million years ago, and are assignable to the Araceae clade Pothoideae–Monstereae; the Araceae are sister to the other Alismatales. Flower fossils of Triuridaceae (Pandanales) have also been found in Upper Cretaceous rocks in New Jersey; these represent the oldest known occurrence of saprophytic/mycotrophic habits in angiosperm plants and are among the oldest known fossils of monocotyledons. Topology of the angiosperm phylogenetic tree could imply that the monocots are among the oldest lineages of angiosperms, which would support the theory that they are just as old as the eudicots. The pollen of the eudicots dates back 125 million years, so the lineage of monocots should be that old too. Molecular clock estimates Kåre Bremer, using rbcL sequences and the mean path length method for estimating divergence times, estimated the age of the monocot crown group (i.e. the time at which the ancestor of today's Acorus diverged from the rest of the group) as 134 million years. Similarly, Wikström et al., using Sanderson's non-parametric rate smoothing approach, obtained ages of 127–141 million years for the crown group of monocots. All these estimates have large error ranges (usually 15–20%), and Wikström et al. used only a single calibration point, namely the split between Fagales and Cucurbitales, which was set to 84 Ma, in the late Santonian period. Early molecular clock studies using strict clock models had estimated the monocot crown age at 200 ± 20 million years or 160 ± 16 million years, while studies using relaxed clocks have obtained 135–131 million years or 133.8 to 124 million years. Bremer's estimate of 134 million years has been used as a secondary calibration point in other analyses. Some estimates place the diversification of the monocots as far back as 150 mya in the Jurassic period. The lineage that led to monocots (stem group) split from other plants about 136 million years ago or 165–170 million years ago. Core group The age of the core group of so-called 'nuclear monocots' or 'core monocots', which correspond to all orders except Acorales and Alismatales, is about 131 million years to present, and crown group age is about 126 million years to the present. The subsequent branching in this part of the tree (i.e. the appearance of the Petrosaviaceae, Dioscoreales + Pandanales and Liliales clades), including the crown Petrosaviaceae group, may have occurred around 125–120 million years ago (the crown Petrosaviaceae about 111 million years ago), and stem groups of all other orders, including Commelinidae, would have diverged about or shortly after 115 million years ago. 
These and many clades within these orders may have originated in southern Gondwana, i.e. Antarctica, Australasia, and southern South America. Aquatic monocots The aquatic monocots of Alismatales have commonly been regarded as "primitive". They have also been considered to have the most primitive foliage, cross-linked as in Dioscoreales and Melanthiales. However, the "most primitive" monocot is not necessarily "the sister of everyone else". This is because the ancestral or primitive characters are inferred by means of the reconstruction of character states, with the help of the phylogenetic tree. So primitive characters of monocots may be present in some derived groups. On the other hand, the basal taxa may exhibit many morphological autapomorphies. So although Acoraceae is the sister group to the remaining monocotyledons, the result does not imply that Acoraceae is "the most primitive monocot" in terms of its character states. In fact, Acoraceae is highly derived in many morphological characters, and that is precisely why Acoraceae and Alismatales occupied relatively derived positions in the trees produced by Chase et al. and others. Some authors support the idea of an aquatic phase as the origin of monocots. The phylogenetic position of the Alismatales (many of which are aquatic), which are sister to all the remaining monocots except the Acoraceae, does not rule out the idea, because they could be "the most primitive monocots" without being "the most basal". The atactostele, the long and linear leaves, the absence of secondary growth (consistent with the biomechanics of living in water), roots in groups instead of a single branching root (related to the nature of the substrate), and sympodial growth are all consistent with an aquatic origin. However, even if monocots were sister to the aquatic Ceratophyllales, or their origin were related to the adoption of some form of aquatic habit, this would not help much in understanding how they evolved their distinctive anatomical features: the monocots seem so different from the rest of the angiosperms that it is difficult to relate their morphology, anatomy and development to those of broad-leaved angiosperms. Other taxa In the past, taxa which had petiolate leaves with reticulate venation were considered "primitive" within the monocots, because of the superficial resemblance to the leaves of dicotyledons. Recent work suggests that these taxa are scattered across the phylogenetic tree of monocots, as are fleshy-fruited taxa (excluding taxa with aril seeds dispersed by ants), and that the two features are adaptations to conditions that evolved together regardless. Among the taxa involved were Smilax, Trillium (Liliales), Dioscorea (Dioscoreales), etc. A number of these plants are vines that tend to live in shaded habitats for at least part of their lives, and this fact may also relate to their shapeless stomata. Reticulate venation seems to have appeared at least 26 times in monocots, and fleshy fruits have appeared 21 times (sometimes lost later); the two characteristics, though different, showed strong signs of a tendency to be gained or lost in tandem, a phenomenon described as "concerted convergence" ("coordinated convergence"). Etymology The name monocotyledons is derived from the traditional botanical name "Monocotyledones" or Monocotyledoneae in Latin, which refers to the fact that most members of this group have one cotyledon, or embryonic leaf, in their seeds. 
Ecology Emergence Some monocots, such as grasses, have hypogeal emergence, where the mesocotyl elongates and pushes the coleoptile (which encloses and protects the shoot tip) toward the soil surface. Since elongation occurs above the cotyledon, it is left in place in the soil where it was planted. Many dicots have epigeal emergence, in which the hypocotyl elongates and becomes arched in the soil. As the hypocotyl continues to elongate, it pulls the cotyledons upward, above the soil surface. Conservation The IUCN Red List describes four species as extinct, four as extinct in the wild, 626 as possibly extinct, 423 as critically endangered, 632 as endangered, 621 as vulnerable, and 269 as near threatened, of 4,492 whose status is known. Uses Monocots are among the most important plants economically and culturally, accounting for most of the staple foods of the world, such as cereal grains and starchy root crops, as well as palms, orchids and lilies, building materials, and many medicines. Of the monocots, the grasses are of enormous economic importance as a source of animal and human food, and form the largest component of agricultural species in terms of biomass produced. Other economically important monocotyledon crops include various palms (Arecaceae), bananas and plantains (Musaceae), gingers and their relatives, turmeric and cardamom (Zingiberaceae), asparagus (Asparagaceae), pineapple (Bromeliaceae), sedges (Cyperaceae) and rushes (Juncaceae), vanilla (Orchidaceae), yam (Dioscoreaceae), taro (Araceae), and leeks, onion and garlic (Amaryllidaceae). Many houseplants are monocotyledon epiphytes. Most of the horticultural bulbs, plants cultivated for their blooms, such as lilies, daffodils, irises, amaryllis, cannas, bluebells and tulips, are monocotyledons.
Biology and health sciences
Monocots
null
55632
https://en.wikipedia.org/wiki/Linear%20combination
Linear combination
In mathematics, a linear combination or superposition is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants). The concept of linear combinations is central to linear algebra and related fields of mathematics. Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article. Definition Let V be a vector space over the field K. As usual, we call elements of V vectors and call elements of K scalars. If v1,...,vn are vectors and a1,...,an are scalars, then the linear combination of those vectors with those scalars as coefficients is a1v1 + a2v2 + ... + anvn. There is some ambiguity in the use of the term "linear combination" as to whether it refers to the expression or to its value. In most cases the value is emphasized, as in the assertion "the set of all linear combinations of v1,...,vn always forms a subspace". However, one could also say "two different linear combinations can have the same value", in which case the reference is to the expression. The subtle difference between these uses is the essence of the notion of linear dependence: a family F of vectors is linearly independent precisely if any linear combination of the vectors in F (as value) is uniquely so (as expression). In any case, even when viewed as expressions, all that matters about a linear combination is the coefficient of each vi; trivial modifications such as permuting the terms or adding terms with zero coefficient do not produce distinct linear combinations. In a given situation, K and V may be specified explicitly, or they may be obvious from context. In that case, we often speak of a linear combination of the vectors v1,...,vn, with the coefficients unspecified (except that they must belong to K). Or, if S is a subset of V, we may speak of a linear combination of vectors in S, where both the coefficients and the vectors are unspecified, except that the vectors must belong to the set S (and the coefficients must belong to K). Finally, we may speak simply of a linear combination, where nothing is specified (except that the vectors must belong to V and the coefficients must belong to K); in this case one is probably referring to the expression, since every vector in V is certainly the value of some linear combination. Note that by definition, a linear combination involves only finitely many vectors (except as described in the section on generalizations below). However, the set S that the vectors are taken from (if one is mentioned) can still be infinite; each individual linear combination will only involve finitely many vectors. Also, there is no reason that n cannot be zero; in that case, we declare by convention that the result of the linear combination is the zero vector in V. Examples and counterexamples Euclidean vectors Let the field K be the set R of real numbers, and let the vector space V be the Euclidean space R3. Consider the vectors e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1). Then any vector in R3 is a linear combination of e1, e2, and e3. To see that this is so, take an arbitrary vector (a1, a2, a3) in R3, and write: (a1, a2, a3) = a1(1, 0, 0) + a2(0, 1, 0) + a3(0, 0, 1) = a1e1 + a2e2 + a3e3. Functions Let K be the set C of all complex numbers, and let V be the set CC(R) of all continuous functions from the real line R to the complex plane C. Consider the vectors (functions) f and g defined by f(t) := eit and g(t) := e−it. 
(Here, e is the base of the natural logarithm, about 2.71828..., and i is the imaginary unit, a square root of −1.) Some linear combinations of f and g are: cos t = (1/2)eit + (1/2)e−it and 2 sin t = (−i)eit + (i)e−it. On the other hand, the constant function 3 is not a linear combination of f and g. To see this, suppose that 3 could be written as a linear combination of eit and e−it. This means that there would exist complex scalars a and b such that aeit + be−it = 3 for all real numbers t. Setting t = 0 and t = π gives the equations a + b = 3 and a + b = −3, and clearly this cannot happen. See Euler's identity. Polynomials Let K be R, C, or any field, and let V be the set P of all polynomials with coefficients taken from the field K. Consider the vectors (polynomials) p1 := 1, p2 := x + 1, and p3 := x2 + x + 1. Is the polynomial x2 − 1 a linear combination of p1, p2, and p3? To find out, consider an arbitrary linear combination of these vectors and try to see when it equals the desired vector x2 − 1. Picking arbitrary coefficients a1, a2, and a3, we want a1(1) + a2(x + 1) + a3(x2 + x + 1) = x2 − 1. Multiplying the polynomials out, this means a1 + a2x + a2 + a3x2 + a3x + a3 = x2 − 1, and collecting like powers of x, we get a3x2 + (a2 + a3)x + (a1 + a2 + a3) = 1x2 + 0x + (−1). Two polynomials are equal if and only if their corresponding coefficients are equal, so we can conclude a3 = 1, a2 + a3 = 0, and a1 + a2 + a3 = −1. This system of linear equations can easily be solved. First, the first equation simply says that a3 is 1. Knowing that, we can solve the second equation for a2, which comes out to −1. Finally, the last equation tells us that a1 is also −1. Therefore, the only possible way to get a linear combination is with these coefficients. Indeed, x2 − 1 = −1 − (x + 1) + (x2 + x + 1), so x2 − 1 is a linear combination of p1, p2, and p3. On the other hand, what about the polynomial x3 − 1? If we try to make this vector a linear combination of p1, p2, and p3, then following the same process as before, we get the equation a3x2 + (a2 + a3)x + (a1 + a2 + a3) = x3 − 1. However, when we set corresponding coefficients equal in this case, the equation for x3 is 0 = 1, which is always false. Therefore, there is no way for this to work, and x3 − 1 is not a linear combination of p1, p2, and p3. The linear span Take an arbitrary field K, an arbitrary vector space V, and let v1,...,vn be vectors (in V). It is interesting to consider the set of all linear combinations of these vectors. This set is called the linear span (or just span) of the vectors, say S = {v1, ..., vn}. We write the span of S as span(S) or sp(S): span(S) = {a1v1 + ... + anvn : a1, ..., an ∈ K}. Linear independence Suppose that, for some sets of vectors v1,...,vn, a single vector can be written in two different ways as a linear combination of them: v = a1v1 + ... + anvn = b1v1 + ... + bnvn, where the coefficient lists (a1,...,an) and (b1,...,bn) are not identical. This is equivalent, by subtracting these (ci := ai − bi), to saying a non-trivial combination is zero: c1v1 + ... + cnvn = 0 with the ci not all zero. If that is possible, then v1,...,vn are called linearly dependent; otherwise, they are linearly independent. Similarly, we can speak of linear dependence or independence of an arbitrary set S of vectors. If S is linearly independent and the span of S equals V, then S is a basis for V. Affine, conical, and convex combinations By restricting the coefficients used in linear combinations, one can define the related concepts of affine combination, conical combination, and convex combination, and the associated notions of sets closed under these operations. Because these are more restricted operations, more subsets will be closed under them, so affine subsets, convex cones, and convex sets are generalizations of vector subspaces: a vector subspace is also an affine subspace, a convex cone, and a convex set, but a convex set need not be a vector subspace, affine, or a convex cone. 
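As a small illustration of these restricted notions, the following Python sketch (not part of the original article; the function name and tolerance are illustrative choices) classifies a tuple of coefficients according to whether it defines a linear, affine, conical, or convex combination.

```python
# Illustrative sketch: given the coefficients of a combination, report which of
# the restricted notions (affine, conical, convex) from the discussion above it satisfies.
def classify(coefficients, tol=1e-12):
    total = sum(coefficients)
    nonneg = all(c >= -tol for c in coefficients)
    kinds = ["linear"]                        # any choice of scalars
    if abs(total - 1.0) <= tol:
        kinds.append("affine")                # coefficients sum to 1
    if nonneg:
        kinds.append("conical")               # coefficients are non-negative
    if nonneg and abs(total - 1.0) <= tol:
        kinds.append("convex")                # both restrictions at once
    return kinds

print(classify([0.25, 0.75]))   # ['linear', 'affine', 'conical', 'convex']
print(classify([2.0, -1.0]))    # ['linear', 'affine']
print(classify([3.0, 4.0]))     # ['linear', 'conical']
```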
These concepts often arise when one can take certain linear combinations of objects, but not arbitrary ones: for example, probability distributions are closed under convex combination (they form a convex set), but not under conical or affine combinations (or linear ones), and positive measures are closed under conical combination but not under affine or linear combinations – hence one defines signed measures as the linear closure. Linear and affine combinations can be defined over any field (or ring), but conical and convex combinations require a notion of "positive", and hence can only be defined over an ordered field (or ordered ring), generally the real numbers. If one allows only scalar multiplication, not addition, one obtains a (not necessarily convex) cone; one often restricts the definition to only allowing multiplication by positive scalars. All of these concepts are usually defined as subsets of an ambient vector space (except for affine spaces, which are also considered as "vector spaces forgetting the origin"), rather than being axiomatized independently. Operad theory More abstractly, in the language of operad theory, one can consider vector spaces to be algebras over the operad R∞ (the infinite direct sum, so only finitely many terms are non-zero; this corresponds to only taking finite sums), which parametrizes linear combinations: a vector such as (a1, a2, a3, 0, 0, ...) for instance corresponds to the linear combination a1v1 + a2v2 + a3v3. Similarly, one can consider affine combinations, conical combinations, and convex combinations to correspond to the sub-operads where the terms sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by Rn being, or the standard simplex being, model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories. From this point of view, we can think of linear combinations as the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations. The basic operations of addition and scalar multiplication, together with the existence of an additive identity and additive inverses, cannot be combined in any more complicated way than the generic linear combination: the basic operations are a generating set for the operad of all linear combinations. Ultimately, this fact lies at the heart of the usefulness of linear combinations in the study of vector spaces. Generalizations If V is a topological vector space, then there may be a way to make sense of certain infinite linear combinations, using the topology of V. For example, we might be able to speak of a1v1 + a2v2 + a3v3 + ⋯, going on forever. Such infinite linear combinations do not always make sense; we call them convergent when they do. Allowing more linear combinations in this case can also lead to a different concept of span, linear independence, and basis. The articles on the various flavors of topological vector spaces go into more detail about these. If K is a commutative ring instead of a field, then everything that has been said above about linear combinations generalizes to this case without change. The only difference is that we call spaces like this V modules instead of vector spaces. 
If K is a noncommutative ring, then the concept still generalizes, with one caveat: since modules over noncommutative rings come in left and right versions, our linear combinations may also come in either of these versions, whichever is appropriate for the given module. This is simply a matter of doing scalar multiplication on the correct side. A more complicated twist comes when V is a bimodule over two rings, KL and KR. In that case, the most general linear combination looks like a1v1b1 + ... + anvnbn, where a1,...,an belong to KL, b1,...,bn belong to KR, and v1,…,vn belong to V.
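Returning to the polynomial example in an earlier section, the same coefficient computation can be checked numerically. This is a minimal sketch under the assumption that NumPy is available; the matrices simply encode the polynomials p1, p2, p3 by their coefficients and are not part of the original article.

```python
# Hypothetical sketch: solving for the coefficients in the polynomial example.
# p1 = 1, p2 = x + 1, p3 = x^2 + x + 1 are encoded by their (constant, x, x^2) coefficients,
# and we solve the linear system A a = b, where b encodes x^2 - 1.
import numpy as np

A = np.array([
    [1.0, 1.0, 1.0],   # constant terms of p1, p2, p3
    [0.0, 1.0, 1.0],   # x terms
    [0.0, 0.0, 1.0],   # x^2 terms
])
b = np.array([-1.0, 0.0, 1.0])   # x^2 - 1

a = np.linalg.solve(A, b)
print(a)   # expected: [-1. -1.  1.], i.e. x^2 - 1 = -p1 - p2 + p3

# For x^3 - 1 the target lies outside the span of {p1, p2, p3}: appending its
# coefficient vector (including the x^3 row) raises the rank, so no solution exists.
M = np.array([
    [1.0, 1.0, 1.0],
    [0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0],   # x^3 terms of p1, p2, p3 are all zero
])
t = np.array([-1.0, 0.0, 0.0, 1.0])   # x^3 - 1
in_span = np.linalg.matrix_rank(np.column_stack([M, t])) == np.linalg.matrix_rank(M)
print(in_span)   # False
```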
Mathematics
Linear algebra
null
55693
https://en.wikipedia.org/wiki/Turquoise
Turquoise
Turquoise is an opaque, blue-to-green mineral that is a hydrous phosphate of copper and aluminium, with the chemical formula CuAl6(PO4)4(OH)8·4H2O. It is rare and valuable in finer grades and has been prized as a gemstone for millennia due to its hue. Like most other opaque gems, turquoise has been devalued by the introduction of treatments, imitations, and synthetics into the market. The robin egg blue or sky blue color of the Persian turquoise mined near the modern city of Nishapur, Iran, has been used as a guiding reference for evaluating turquoise quality. Names The word turquoise dates to the 17th century and is derived from the Old French turquois meaning "Turkish", because the mineral was first brought to Europe through the Ottoman Empire. However, according to Etymonline, the word dates to the 14th century with the form turkeis, meaning "Turkish", which was replaced with turqueise from French in the 1560s. According to the same source, the gemstone was first brought to Europe from Turkestan or another Turkic territory. Pliny the Elder referred to the mineral as callais (from Ancient Greek) and the Aztecs knew it as chalchihuitl. In professional mineralogy, until the mid-19th century, the scientific names kalaite and azure spar were also used, names that also suggested views of the mineral's origin. However, these terms did not become widespread and gradually fell out of use. Properties The finest of turquoise reaches a maximum Mohs hardness of just under 6, or slightly more than window glass. Characteristically a cryptocrystalline mineral, turquoise almost never forms single crystals, and all of its properties are highly variable. X-ray diffraction testing shows its crystal system to be triclinic. With lower hardness comes greater porosity. The lustre of turquoise is typically waxy to subvitreous, and it is usually opaque, but may be semitranslucent in thin sections. Colour is as variable as the mineral's other properties, ranging from white to a powder blue to a sky blue and from a blue-green to a yellowish green. The blue is attributed to idiochromatic copper while the green may be the result of iron impurities (replacing copper). The refractive index of turquoise varies from 1.61 to 1.65 on the three crystal axes, with birefringence 0.040, biaxial positive, as measured from rare single crystals. Crushed turquoise is soluble in hot hydrochloric acid. Its streak is white to greenish to blue, and its fracture is smooth to conchoidal. Despite its low hardness relative to other gems, turquoise takes a good polish. Turquoise may also be peppered with flecks of pyrite or interspersed with dark, spidery limonite veining. Turquoise is nearly always cryptocrystalline and massive and assumes no definite external shape. Crystals, even at the microscopic scale, are rare. Typically the form is a vein or fracture filling, nodular, or botryoidal in habit. Stalactite forms have been reported. Turquoise may also pseudomorphously replace feldspar, apatite, other minerals, or even fossils. Odontolite is fossil bone or ivory that has historically been thought to have been altered by turquoise or similar phosphate minerals such as the iron phosphate vivianite. Intergrowth with other secondary copper minerals such as chrysocolla is also common. Turquoise is distinguished from chrysocolla, the only common mineral with similar properties, by its greater hardness. Turquoise forms a complete solid solution series with chalcosiderite, its iron analogue, in which ferric iron replaces aluminium. 
Formation Turquoise deposits probably form in more than one way. However, a typical turquoise deposit begins with hydrothermal deposition of copper sulfides. This takes place when hydrothermal fluids leach copper from a host rock, which is typically an intrusion of calc-alkaline rock with a moderate to high silica content that is relatively oxidized. The copper is redeposited in more concentrated form as a copper porphyry, in which veins of copper sulfide fill joints and fractures in the rock. Deposition takes place mostly in the potassic alteration zone, which is characterized by conversion of existing feldspar to potassium feldspar and deposition of quartz and micas at elevated temperature. Turquoise is a secondary or supergene mineral, not present in the original copper porphyry. It forms when meteoric water (rain or snow melt infiltrating the Earth's surface) percolates through the copper porphyry. Dissolved oxygen in the water oxidizes the copper sulfides to soluble sulfates, and the acidic, copper-laden solution then reacts with aluminium and potassium minerals in the host rock to precipitate turquoise. This typically fills veins in volcanic rock or phosphate-rich sediments. Deposition usually takes place at a relatively low temperature and seems to occur more readily in arid environments. Turquoise in the Sinai Peninsula is found in lower Carboniferous sandstones overlain by basalt flows and upper Carboniferous limestone. The overlying beds were presumably the source of the copper, which precipitated as turquoise in nodules, horizontal seams, or vertical joints in the sandstone beds. The classical Iranian deposits are found in sandstones and limestones of Tertiary age that were intruded by apatite-rich porphyritic trachytes and mafic rock. Supergene alteration fractured the rock and converted some of the minerals in the rock to alunite, which freed aluminium and phosphate to combine with copper from oxidized copper sulfides to form turquoise. This process took place at a relatively shallow depth, and by 1965 the mines had "bottomed out" at only a modest depth below the surface. Turquoise deposits are widespread in North America. Some deposits, such as those of Saguache and Conejos Counties in Colorado or the Cerrillos Hills in New Mexico, are typical supergene deposits formed from copper porphyries. The deposits in Cochise County, Arizona, are found in Cambrian quartzites and geologically young granites and extend to considerable depth. Occurrence Turquoise was among the first gems to be mined, and many historic sites have been depleted, though some are still worked to this day. These are all small-scale operations, often seasonal owing to the limited scope and remoteness of the deposits. Most are worked by hand with little or no mechanization. However, turquoise is often recovered as a byproduct of large-scale copper mining operations, especially in the United States. Deposits typically take the form of small veins in partially decomposed volcanic rock in arid climates. Iran Iran has been an important source of turquoise for at least 2,000 years. It was initially named by Iranians "pērōzah", meaning "victory", and later the Arabs called it "fayrūzah", which is pronounced in Modern Persian as "fīrūzeh". In Iranian architecture, blue turquoise was used to cover the domes of palaces because its intense blue colour was also a symbol of heaven on earth. This deposit is naturally blue and turns green when heated due to dehydration. 
It is restricted to a mine-riddled region in Nishapur, around the mountain peak of Ali-mersai near Mashhad, the capital of Khorasan Province, Iran. Weathered and broken trachyte is host to the turquoise, which is found both in situ between layers of limonite and sandstone and amongst the scree at the mountain's base. These workings are the oldest known, together with those of the Sinai Peninsula. Iran also has turquoise mines in Semnan and Kerman provinces. Sinai Since at least the First Dynasty (3000 BCE) in ancient Egypt, and possibly before then, turquoise was used by the Egyptians and was mined by them in the Sinai Peninsula. This region was known as the Country of Turquoise by the native Monitu. There are six mines in the peninsula, all on its southwest coast. The two most important of these mines, from a historical perspective, are Serabit el-Khadim and Wadi Maghareh, believed to be among the oldest of known mines. The former mine is situated about 4 kilometres from an ancient temple dedicated to the deity Hathor. The turquoise is found in sandstone that is, or was originally, overlain by basalt. Copper and iron workings are present in the area. Large-scale turquoise mining is not profitable today, but the deposits are sporadically quarried by Bedouin peoples using homemade gunpowder. In the rainy winter months, miners face a risk from flash flooding; even in the dry season, death from the collapse of the haphazardly exploited sandstone mine walls may occur. The colour of Sinai material is typically greener than that of Iranian material but is thought to be stable and fairly durable. Often referred to as "Egyptian turquoise", Sinai material is typically the most translucent, and under magnification its surface structure is revealed to be peppered with dark blue discs not seen in material from other localities. United States The Southwest United States is a significant source of turquoise; Arizona, California (San Bernardino, Imperial and Inyo counties), Colorado (Conejos, El Paso, Lake and Saguache counties), New Mexico (Eddy, Grant, Otero and Santa Fe counties) and Nevada (Clark, Elko, Esmeralda, Eureka, Lander, Mineral and Nye counties) are (or were) especially rich. The deposits of California and New Mexico were mined by pre-Columbian Native Americans using stone tools, some local and some from as far away as central Mexico. Cerrillos, New Mexico is thought to be the location of the oldest mines; prior to the 1920s, the state was the country's largest producer; it is more or less exhausted today. Only one mine in California, located at Apache Canyon, operates at a commercial capacity today. The turquoise occurs as vein or seam fillings, and as compact nuggets; these are mostly small in size. While quite fine material is sometimes found, rivalling Iranian material in both colour and durability, most American turquoise is of a low grade (called "chalk turquoise"); high iron levels mean greens and yellows predominate, and a typically friable consistency in the turquoise's untreated state precludes use in jewelry. Arizona is currently the most important producer of turquoise by value. Several mines exist in the state, two of them famous for their unique colour and quality and considered the best in the industry: the Sleeping Beauty Mine in Globe and the Kingman Mine. The Sleeping Beauty Mine ceased turquoise mining in August 2012, when the mine chose to send all ore to the crusher and to concentrate on copper production due to the rising price of copper on the world market. 
The price of natural untreated Sleeping Beauty turquoise has risen dramatically since the mine's closing. As of 2015, the Kingman Mine still operates alongside a copper mine outside of the city of Kingman. Other mines include the Blue Bird mine, Castle Dome, and Ithaca Peak, but they are mostly inactive due to the high cost of operations and federal regulations. The Phelps Dodge Lavender Pit mine at Bisbee ceased operations in 1974 and never had a turquoise contractor; all Bisbee turquoise was "lunch pail" mined, carried out of the copper ore mine in miners' lunch pails. Morenci and Turquoise Peak are either inactive or depleted. Nevada is the country's other major producer, with more than 120 mines which have yielded significant quantities of turquoise. Unlike elsewhere in the US, most Nevada mines have been worked primarily for their gem turquoise, and very little has been recovered as a byproduct of other mining operations. Nevada turquoise is found as nuggets, fracture fillings and in breccias as the cement filling interstices between fragments. Because of the geology of the Nevada deposits, a majority of the material produced is hard and dense, of sufficient quality that no treatment or enhancement is required. While nearly every county in the state has yielded some turquoise, the chief producers are in Lander and Esmeralda counties. Most of the turquoise deposits in Nevada occur along a wide belt of tectonic activity that coincides with the state's zone of thrust faulting. It strikes at a bearing of about 15° and extends from the northern part of Elko County southward down to the California border southwest of Tonopah. Nevada has produced a wide diversity of colours and mixes of different matrix patterns, with turquoise from Nevada coming in various shades of blue, blue-green, and green. Some of this unusually coloured turquoise may contain significant zinc and iron, which is the cause of the bright green to yellow-green shades. Some of the green to green-yellow shades may actually be variscite or faustite, which are secondary phosphate minerals similar in appearance to turquoise. A significant portion of the Nevada material is also noted for its often attractive brown or black limonite veining, producing what is called "spiderweb matrix". While a number of the Nevada deposits were first worked by Native Americans, the total Nevada turquoise production since the 1870s has been very large, including a substantial contribution from the Carico Lake mine. In spite of increased costs, small-scale mining operations continue at a number of turquoise properties in Nevada, including the Godber, Orvil Jack and Carico Lake mines in Lander County, the Pilot Mountain Mine in Mineral County, and several properties in the Royston and Candelaria areas of Esmeralda County. In 1912, the first deposit of distinct, single-crystal turquoise was discovered at Lynch Station in Campbell County, Virginia. The crystals, forming a druse over the mother rock, are very small; even the largest are minute. Until the 1980s Virginia was widely thought to be the only source of distinct crystals; there are now at least 27 other localities. In an attempt to recoup profits and meet demand, some American turquoise is treated or enhanced to a certain degree. These treatments include innocuous waxing and more controversial procedures, such as dyeing and impregnation (see Treatments). There are some American mines which produce materials of high enough quality that no treatment or alterations are required. 
Any such treatments which have been performed should be disclosed to the buyer on sale of the material. Other sources Prehistoric turquoise artifacts (beads) are known from the fifth millennium BCE from sites in the Eastern Rhodopes in Bulgaria – the source of the raw material is possibly related to the nearby Spahievo lead–zinc ore field. In Spain, turquoise has been found as a minor mineral in the variscite deposits exploited during prehistoric times in Palazuelos de las Cuevas (Zamora) and in Can Tintorer, Gavá (Barcelona). China has been a minor source of turquoise for 3,000 years or more. Gem-quality material, in the form of compact nodules, is found in the fractured, silicified limestone of Yunxian and Zhushan, Hubei province. Additionally, Marco Polo reported turquoise found in present-day Sichuan. Most Chinese material is exported, but a few carvings worked in a manner similar to jade exist. In Tibet, gem-quality deposits purportedly exist in the mountains of Derge and Nagari-Khorsum in the east and west of the region respectively. Other notable localities include: Afghanistan; Australia (Victoria and Queensland); north India; northern Chile (Chuquicamata); Cornwall; Saxony; Silesia; and Turkestan. History of use The pastel shades of turquoise have endeared it to many great cultures of antiquity: it has adorned the rulers of Ancient Egypt, the Aztecs (and possibly other Pre-Columbian Mesoamericans), Persia, Mesopotamia and the Indus Valley, and to some extent it was used in ancient China since at least the Shang dynasty. Despite being one of the oldest gems, probably first introduced to Europe (through Turkey) with other Silk Road novelties, turquoise did not become important as an ornamental stone in the West until the 14th century, following a decline in the Roman Catholic Church's influence which allowed the use of turquoise in secular jewellery. It was apparently unknown in India until the Mughal period, and unknown in Japan until the 18th century. A common belief shared by many of these civilizations held that turquoise possessed certain prophylactic qualities; it was thought to change colour with the wearer's health and protect him or her from untoward forces. The Aztecs viewed turquoise as an embodiment of fire and gave it properties such as heat and smokiness. They inlaid turquoise, together with gold, quartz, malachite, jet, jade, coral, and shells, into provocative (and presumably ceremonial) mosaic objects such as masks (some with a human skull as their base), knives, and shields. Natural resins, bitumen and wax were used to bond the turquoise to the objects' base material; this was usually wood, but bone and shell were also used. Like the Aztecs, the Pueblo, Navajo and Apache tribes cherished turquoise for its amuletic use; the Apache believed the stone to afford the archer dead aim. In Navajo culture it is used for "a spiritual protection and blessing." Among these peoples turquoise was used in mosaic inlay, in sculptural works, and was fashioned into toroidal beads and freeform pendants. The Ancestral Puebloans (Anasazi) of the Chaco Canyon and surrounding region are believed to have prospered greatly from their production and trading of turquoise objects. The distinctive silver jewellery produced by the Navajo and other Southwestern Native American tribes today is a rather modern development, thought to date from around 1880 as a result of European influences. 
In Persia, turquoise was the de facto national stone for millennia, extensively used to decorate objects (from turbans to bridles), mosques, and other important buildings both inside and out, such as the Medresseh-i Shah Husein Mosque of Isfahan. The Persian style and use of turquoise was later brought to India following the establishment of the Mughal Empire there, its influence seen in high-purity gold jewellery (together with ruby and diamond) and in such buildings as the Taj Mahal. Persian turquoise was often engraved with devotional words in Arabic script, which was then inlaid with gold. Cabochons of imported turquoise, along with coral, were (and still are) used extensively in the silver and gold jewellery of Tibet and Mongolia, where a greener hue is said to be preferred. Most of the pieces made today, with turquoise usually roughly polished into irregular cabochons set simply in silver, are meant for inexpensive export to Western markets and are probably not accurate representations of the original style. The Ancient Egyptian use of turquoise stretches back as far as the First Dynasty and possibly earlier; however, probably the most well-known pieces incorporating the gem are those recovered from Tutankhamun's tomb, most notably the Pharaoh's iconic burial mask, which was liberally inlaid with the stone. It also adorned rings and great sweeping necklaces called pectorals. Set in gold, the gem was fashioned into beads, used as inlay, and often carved in a scarab motif, accompanied by carnelian, lapis lazuli, and, in later pieces, coloured glass. Turquoise, associated with the goddess Hathor, was so liked by the Ancient Egyptians that it became (arguably) the first gemstone to be imitated, with imitations created from an artificial glazed ceramic product known as faience. The French conducted archaeological excavations of Egypt from the mid-19th century through the early 20th. These excavations, including that of Tutankhamun's tomb, created great public interest in the western world, subsequently influencing jewellery, architecture, and art of the time. Turquoise, already favoured for its pastel shades since around 1810, was a staple of Egyptian Revival pieces. In contemporary Western use, turquoise is most often encountered cut en cabochon in silver rings and bracelets, often in the Native American style, or as tumbled or roughly hewn beads in chunky necklaces. Lesser material may be carved into fetishes, such as those crafted by the Zuni. While strong sky blues remain superior in value, mottled green and yellowish material is popular with artisans. Cultural associations In many cultures of the Old and New Worlds, this gemstone has been esteemed for thousands of years as a holy stone, a bringer of good fortune or a talisman. The oldest evidence for this claim was found in Ancient Egypt, where grave furnishings with turquoise inlay were discovered, dating from approximately 3000 BCE. In the ancient Persian Empire, the sky-blue gemstones were once worn round the neck or wrist as protection against unnatural death. If they changed colour, the wearer was thought to have reason to fear the approach of doom. It has since been discovered that turquoise certainly can change colour, but this is not necessarily a sign of impending danger. The change can be caused by light, or by a chemical reaction brought about by cosmetics, dust or the acidity of the skin. The goddess Hathor was associated with turquoise, as she was the patroness of Serabit el-Khadim, where it was mined. 
Her titles included "Lady of Turquoise", "Mistress of Turquoise", and "Lady of Turquoise Country". In Western culture, turquoise is also the traditional birthstone for those born in the month of December. Turquoise is also a stone in the Jewish High Priest's breastplate, described in Exodus chapter 28. The stone is also considered sacred to the indigenous Zuni and Pueblo peoples of the American Southwest. The pre-Columbian Aztec and Maya also considered it to be a valuable and culturally important stone. Imitations The Egyptians were the first to produce an artificial imitation of turquoise, in the glazed earthenware product faience. Later glass and enamel were also used, and in modern times more sophisticated porcelain, plastics, and various assembled, pressed, bonded, and sintered products (composed of various copper and aluminium compounds) have been developed: examples of the latter include "Viennese turquoise", made from precipitated aluminium phosphate coloured by copper oleate, and "neolith", a mixture of bayerite and copper(II) phosphate. Most of these products differ markedly from natural turquoise in both physical and chemical properties, but in 1972 Pierre Gilson introduced one fairly close to a true synthetic (it does differ in chemical composition owing to a binder used, meaning it is best described as a simulant rather than a synthetic). Gilson turquoise is made in both a uniform colour and with black "spiderweb matrix" veining not unlike the natural Nevada material. The most common imitations of turquoise encountered today are dyed howlite and magnesite, both white in their natural states, the former also having natural (and convincing) black veining similar to that of turquoise. Dyed chalcedony, jasper, and marble are less common, and much less convincing. Other natural materials occasionally confused with or used in lieu of turquoise include: variscite and faustite; chrysocolla (especially when impregnating quartz); lazulite; smithsonite; hemimorphite; wardite; and a fossil bone or tooth called odontolite or "bone turquoise", coloured blue naturally by the mineral vivianite. While rarely encountered today, odontolite was once mined in large quantities—specifically for its use as a substitute for turquoise—in southern France. These fakes are detected by gemologists using a number of tests, relying primarily on non-destructive, close examination of surface structure under magnification; a featureless, pale blue background peppered by flecks or spots of whitish material is the typical surface appearance of natural turquoise, while manufactured imitations will appear radically different in both colour (usually a uniform dark blue) and texture (usually granular or sugary). Glass and plastic will have a much greater translucency, with bubbles or flow lines often visible just below the surface. Staining between grain boundaries may be visible in dyed imitations. Some destructive tests may be necessary; for example, the application of diluted hydrochloric acid will cause the carbonates odontolite and magnesite to effervesce and howlite to turn green, while a heated probe may give rise to the pungent smell so indicative of plastic. Differences in specific gravity, refractive index, light absorption (as evident in a material's absorption spectrum), and other physical and optical properties are also considered as means of separation. Treatments Turquoise is treated to enhance both its colour and durability (increased hardness and decreased porosity). 
As is so often the case with precious stones, full disclosure about treatment is frequently not given. Gemologists can detect these treatments using a variety of testing methods, some of which are destructive, such as the use of a heated probe applied to an inconspicuous spot, which will reveal oil, wax or plastic treatment. Waxing and oiling Historically, light waxing and oiling were the first treatments used, dating back to ancient times; they provide a wetting effect, thereby enhancing the colour and lustre. This treatment is more or less accepted by tradition, especially because such treated turquoise is usually of a higher grade to begin with. Oiled and waxed stones are prone to "sweating" under even gentle heat or if exposed to too much sun, and they may develop a white surface film or bloom over time. (With some skill, oil and wax treatments can be restored.) Backing Since finer turquoise is often found as thin seams, it may be glued to a base of stronger foreign material for reinforcement. These stones are termed "backed", and it is standard practice that all thinly cut turquoise in the Southwestern United States is backed. Indigenous peoples of this region, because of their considerable use and wearing of turquoise, have found that backing increases the durability of thinly cut slabs and cabochons of turquoise, observing that if the stone is not backed it will often crack. Backing of turquoise is not widely known outside of the Native American and Southwestern United States jewellery trade. Backing does not diminish the value of high-quality turquoise, and indeed the process is expected for most thinly cut American commercial gemstones. Zachery treatment A proprietary process was created by electrical engineer and turquoise dealer James E. Zachery in the 1980s to improve the stability of medium- to high-grade turquoise. The process can be applied in several ways: through deep penetration of rough turquoise to decrease porosity, through shallow treatment of finished turquoise to enhance colour, or both. The treatment can enhance colour and improve the turquoise's ability to take a polish. Such treated turquoise can in some cases be distinguished from natural turquoise, without destruction, by energy-dispersive X-ray spectroscopy, which can detect its elevated potassium levels. In some instances, such as with already high-quality, low-porosity turquoise that is treated only for porosity, the treatment is undetectable. Dyeing The use of Prussian blue and other dyes (often in conjunction with bonding treatments) to "enhance" the appearance of turquoise, to make its colour uniform, or to change the colour completely is regarded as fraudulent by some purists, especially since some dyes may fade or rub off on the wearer. Dyes have also been used to darken the veins of turquoise. Stabilization Material treated with plastic or water glass is termed "bonded" or "stabilized" turquoise. This process consists of pressure impregnation of otherwise unsaleable chalky American material with epoxy and plastics (such as polystyrene) and water glass (sodium silicate) to produce a wetting effect and improve durability. Plastic and water glass treatments are far more permanent and stable than waxing and oiling, and can be applied to material too chemically or physically unstable for oil or wax to provide sufficient improvement. Conversely, stabilization and bonding are rejected by some as too radical an alteration. 
The epoxy binding technique was first developed in the 1950s and has been attributed to Colbaugh Processing of Arizona, a company that still operates today. Reconstitution Perhaps the most extreme of treatments is "reconstitution", wherein fragments of fine turquoise material, too small to be used individually, are powdered and then bonded with resin to form a solid mass. Very often the material sold as "reconstituted turquoise" is artificial, with little or no natural stone, made entirely from resins and dyes. In the trade, reconstituted turquoise is often called "block turquoise" or simply "block". Valuation and care Hardness and richness of colour are two of the major factors in determining the value of turquoise; while colour is a matter of individual taste, generally speaking the most desirable is a strong sky to robin egg blue (in reference to the eggs of the American robin). Whatever the colour, for many applications turquoise should not be soft or chalky; even if treated, such lesser material (to which most turquoise belongs) is liable to fade or discolour over time and will not hold up to normal use in jewellery. The mother rock or matrix in which turquoise is found can often be seen as splotches or a network of brown or black veins running through the stone in a netted pattern; this veining may add value to the stone if the result is complementary, but such a result is uncommon. Such material is sometimes described as "spiderweb matrix"; it is most valued in the Southwest United States and Far East, but is not highly appreciated in the Near East, where unblemished and vein-free material is ideal (regardless of how complementary the veining may be). Uniformity of colour is desired, and in finished pieces the quality of workmanship is also a factor; this includes the quality of the polish and the symmetry of the stone. Calibrated stones—that is, stones adhering to standard jewellery setting measurements—may also be more sought after. Like coral and other opaque gems, turquoise is commonly sold at a price according to its physical size in millimetres rather than weight. Turquoise is treated in many different ways, some more permanent and radical than others. Controversy exists as to whether some of these treatments should be acceptable, but one is more or less universally forgiven: the light waxing or oiling applied to most gem turquoise to improve its colour and lustre. If the material is of high quality to begin with, very little of the wax or oil is absorbed, and the turquoise therefore does not rely on this impermanent treatment for its beauty. All other factors being equal, untreated turquoise will always command a higher price. Bonded and reconstituted material is worth considerably less. Being a phosphate mineral, turquoise is inherently fragile and sensitive to solvents; perfume and other cosmetics will attack the finish and may alter the colour of turquoise gems, as will skin oils and most commercial jewellery cleaning fluids. Prolonged exposure to direct sunlight may also discolour or dehydrate turquoise. Care should therefore be taken when wearing such jewels: cosmetics, including sunscreen and hair spray, should be applied before putting on turquoise jewellery, and it should not be worn to a beach or other sun-bathed environment. After use, turquoise should be gently cleaned with a soft cloth to avoid a buildup of residue, and should be stored in its own container to avoid scratching by harder gems. 
Turquoise can also be adversely affected if stored in an airtight container.
Physical sciences
Minerals
Earth science
55695
https://en.wikipedia.org/wiki/Mirage
Mirage
A mirage is a naturally occurring optical phenomenon in which light rays bend via refraction to produce a displaced image of distant objects or the sky. The word comes to English via the French (se) mirer, from the Latin mirari, meaning "to look at, to wonder at". Mirages can be categorized as "inferior" (meaning lower), "superior" (meaning higher) and "Fata Morgana", one kind of superior mirage consisting of a series of unusually elaborate, vertically stacked images, which form one rapidly changing mirage. In contrast to a hallucination, a mirage is a real optical phenomenon that can be captured on camera, since light rays are actually refracted to form the false image at the observer's location. What the image appears to represent, however, is determined by the interpretive faculties of the human mind. For example, inferior images on land are very easily mistaken for reflections from a small body of water. Inferior mirage In an inferior mirage, the mirage image appears below the real object. The real object in an inferior mirage is the (blue) sky or any distant (therefore bluish) object in that same direction. The mirage causes the observer to see a bright and bluish patch on the ground. Light rays coming from a particular distant object all travel through nearly the same layers of air, and all are refracted at about the same angle. Therefore, rays coming from the top of the object will arrive lower than those from the bottom. The image is usually upside-down, enhancing the illusion that the sky image seen in the distance is a specular reflection on a puddle of water or oil acting as a mirror. Although the air near the ground is highly active, the image of an inferior mirage is relatively stable, unlike a Fata Morgana, which can change within seconds. Since warmer air rises while cooler air (being denser) sinks, the layers will mix, causing turbulence. The image will be distorted accordingly; it may vibrate or be stretched vertically (towering) or compressed vertically (stooping). A combination of vibration and extension is also possible. If several temperature layers are present, several mirages may mix, perhaps causing double images. In any case, mirages are usually not larger than about half a degree high (roughly the angular diameter of the Sun and Moon) and are of objects between dozens of meters and a few kilometers away. Heat haze Heat haze, also called heat shimmer, refers to the inferior mirage observed when viewing objects through a mass of heated air. Common instances when heat haze occurs include images of objects viewed across asphalt concrete (also known as tarmac) roads and over masonry rooftops on hot days, above and behind fires (as in burning candles, patio heaters, and campfires), and through exhaust gases from jet engines. When appearing on roads due to the hot asphalt, it is often referred to as a "highway mirage". It also occurs in deserts; in that case, it is referred to as a "desert mirage". Both tarmac and sand can become very hot when exposed to the sun, easily becoming much hotter than the air above, enough to make conditions suitable for a mirage. Convection causes the temperature of the air to vary, and the variation between the hot air at the surface of the road and the denser cool air above it creates a gradient in the refractive index of the air. This produces a blurred shimmering effect, which hinders the ability to resolve the image and increases when the image is magnified through a telescope or telephoto lens. 
Light from the sky at a shallow angle to the road is refracted by the index gradient, making it appear as if the sky is reflected by the road's surface. This might appear as a pool of liquid (usually water, but possibly others, such as oil) on the road, as some types of liquid also reflect the sky. The illusion moves into the distance as the observer approaches the miraged object, giving the observer the same effect as approaching a rainbow. Heat haze is not related to the atmospheric phenomenon of haze. Superior mirage A superior mirage is one in which the mirage image appears to be located above the real object. A superior mirage occurs when the air below the line of sight is colder than the air above it. This unusual arrangement is called a temperature inversion. During daytime, the normal temperature gradient of the atmosphere is cold air above warm air. Passing through the temperature inversion, the light rays are bent down, and so the image appears above the true object, hence the name superior. Superior mirages are quite common in polar regions, especially over large sheets of ice that have a uniform low temperature. Superior mirages also occur at more moderate latitudes, although in those cases they are weaker and tend to be less smooth and stable. For example, a distant shoreline may appear to tower and look higher (and, thus, perhaps closer) than it really is. Because of the turbulence, there appear to be dancing spikes and towers. This type of mirage is also called the Fata Morgana, or hafgerðingar in the Icelandic language. A superior mirage can be right-side up or upside-down, depending on the distance of the true object and the temperature gradient. Often the image appears as a distorted mixture of up and down parts. Since the Earth is round, if the downward bending curvature of light rays is about the same as the curvature of Earth, light rays can travel large distances, including from beyond the horizon. This was observed and documented in 1596, when a ship in search of the Northeast Passage became stuck in the ice at Novaya Zemlya, above the Arctic Circle. The Sun appeared to rise two weeks earlier than expected; the real Sun had still been below the horizon, but its light rays followed the curvature of Earth. This effect is often called a Novaya Zemlya mirage. For every 111 kilometres that light rays travel parallel to Earth's surface, the Sun will appear 1° higher on the horizon. The inversion layer must have just the right temperature gradient over the whole distance to make this possible. In the same way, ships that are so far away that they should not be visible above the geometric horizon may appear on or even above the horizon as superior mirages. This may explain some stories about flying ships or coastal cities in the sky, as described by some polar explorers. These are examples of so-called Arctic mirages, or hillingar in Icelandic. If the vertical temperature gradient is positive and strong enough (the positive sign meaning that the temperature increases at higher altitudes), then horizontal light rays will just follow the curvature of Earth, and the horizon will appear flat. If the gradient is smaller (as it almost always is), the rays are not bent enough and get lost in space, which is the normal situation of a spherical, convex "horizon". In some situations, distant objects can be elevated or lowered, stretched or shortened with no mirage involved. 
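The claim that a sufficiently strong temperature inversion lets horizontal rays follow Earth's curvature can be estimated with a simple model. The sketch below is not from the article: it assumes a dry-air refractivity of roughly 7.76e-7·P/T (P in pascals, T in kelvins), hydrostatic pressure decrease with height, and the approximation that a near-horizontal ray's curvature equals the magnitude of the vertical refractive-index gradient.

```python
# Rough sketch (assumed model, not values from the article): estimate the vertical
# temperature gradient at which near-horizontal rays curve as strongly as Earth's surface.
G = 9.81           # m/s^2, gravitational acceleration
M = 0.02896        # kg/mol, molar mass of dry air
R_GAS = 8.314      # J/(mol K)
R_EARTH = 6.371e6  # m

def dn_dh(dT_dh, T=288.0, P=101325.0):
    """Vertical refractive-index gradient for a given temperature gradient (K/m)."""
    dP_dh = -P * G * M / (R_GAS * T)                   # hydrostatic pressure drop
    return 7.76e-7 * (dP_dh / T - P * dT_dh / T**2)    # differentiate n(P, T) = 1 + 7.76e-7 P/T

# A near-horizontal ray has curvature about -dn/dh (with n close to 1), bending toward
# the denser air below.  Find the inversion strength where that matches 1 / R_EARTH.
lo, hi = 0.0, 1.0                          # K/m, bracket for bisection
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if -dn_dh(mid) > 1.0 / R_EARTH:        # ray bends more sharply than Earth curves
        hi = mid
    else:
        lo = mid

print(f"critical inversion ~ {mid:.3f} K per metre")   # on the order of 0.1 K/m
```

Under these assumptions the critical inversion comes out at roughly a tenth of a kelvin per metre; the exact figure depends on the assumed pressure and temperature.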
Fata Morgana A Fata Morgana (the name comes from the Italian translation of Morgan le Fay, the fairy, shapeshifting half-sister of King Arthur) is a very complex superior mirage. It appears with alternations of compressed and stretched areas, erect images, and inverted images. A Fata Morgana is also a fast-changing mirage. Fata Morgana mirages are most common in polar regions, especially over large sheets of ice with a uniform low temperature, but they can be observed almost anywhere. In polar regions, a Fata Morgana may be observed on cold days; in desert areas and over oceans and lakes, a Fata Morgana may be observed on hot days. For a Fata Morgana, temperature inversion has to be strong enough that light rays' curvatures within the inversion are stronger than the curvature of Earth. The rays will bend and form arcs. An observer needs to be within an atmospheric duct to be able to see a Fata Morgana. Fata Morgana mirages may be observed from any altitude within Earth's atmosphere, including from mountaintops or airplanes. Distortions of image and bending of light can produce spectacular effects. In his book Pursuit: The Chase and Sinking of the "Bismarck", Ludovic Kennedy describes an incident that allegedly took place below the Denmark Strait during 1941, following the sinking of the Hood. The Bismarck, while pursued by the British cruisers Norfolk and Suffolk, passed out of sight into a sea mist. Within a matter of seconds, the ship re-appeared steaming toward the British ships at high speed. In alarm the cruisers separated, anticipating an imminent attack, and observers from both ships watched in astonishment as the German battleship fluttered, grew indistinct and faded away. Radar watch during these events indicated that the Bismarck had in fact made no change to her course. Night-time mirages The conditions for producing a mirage can occur at night as well as during the day. Under some circumstances mirages of astronomical objects and mirages of lights from moving vehicles, aircraft, ships, buildings, etc. can be observed at night. Mirage of astronomical objects A mirage of an astronomical object is a naturally occurring optical phenomenon in which light rays are bent to produce distorted or multiple images of an astronomical object. Mirages can be observed for such astronomical objects as the Sun, the Moon, the planets, bright stars, and very bright comets. The most commonly observed are sunset and sunrise mirages.
Physical sciences
Atmospheric optics
null
55762
https://en.wikipedia.org/wiki/Harmonic%20function
Harmonic function
In mathematics, mathematical physics and the theory of stochastic processes, a harmonic function is a twice continuously differentiable function f : U → R, where U is an open subset of Rn, that satisfies Laplace's equation, that is, ∂²f/∂x₁² + ∂²f/∂x₂² + ⋯ + ∂²f/∂xₙ² = 0 everywhere on U. This is usually written as ∇²f = 0 or Δf = 0. Etymology of the term "harmonic" The descriptor "harmonic" in the name harmonic function originates from a point on a taut string which is undergoing harmonic motion. The solution to the differential equation for this type of motion can be written in terms of sines and cosines, functions which are thus referred to as harmonics. Fourier analysis involves expanding functions on the unit circle in terms of a series of these harmonics. Considering higher-dimensional analogues of the harmonics on the unit n-sphere, one arrives at the spherical harmonics. These functions satisfy Laplace's equation, and over time "harmonic" was used to refer to all functions satisfying Laplace's equation. Examples Examples of harmonic functions of two variables are: The real or imaginary part of any holomorphic function. The function f(x, y) = exp(x) sin(y); this is a special case of the example above, as f(x, y) is the imaginary part of exp(x + iy), and exp(x + iy) is a holomorphic function. The second derivative with respect to x is exp(x) sin(y), while the second derivative with respect to y is −exp(x) sin(y). The function f(x, y) = ln(x² + y²) defined on R² ∖ {0}. This can describe the electric potential due to a line charge or the gravity potential due to a long cylindrical mass. Examples of harmonic functions of three variables are given in the table below with r² = x² + y² + z²: {| class="wikitable" ! Function !! Singularity |- |align=center| 1/r |Unit point charge at origin |- |align=center| x/r³ |x-directed dipole at origin |- |align=center| −ln(r² − z²) |Line of unit charge density on entire z-axis |- |align=center| ln(r + z) |Line of unit charge density on negative z-axis |- |align=center| x/(r² − z²) |Line of x-directed dipoles on entire z axis |- |align=center| x/(r(r + z)) |Line of x-directed dipoles on negative z axis |} Harmonic functions that arise in physics are determined by their singularities and boundary conditions (such as Dirichlet boundary conditions or Neumann boundary conditions). On regions without boundaries, adding the real or imaginary part of any entire function will produce a harmonic function with the same singularity, so in this case the harmonic function is not determined by its singularities; however, we can make the solution unique in physical situations by requiring that the solution approaches 0 as r approaches infinity. In this case, uniqueness follows by Liouville's theorem. The singular points of the harmonic functions above are expressed as "charges" and "charge densities" using the terminology of electrostatics, and so the corresponding harmonic function will be proportional to the electrostatic potential due to these charge distributions. Each function above will yield another harmonic function when multiplied by a constant, rotated, and/or has a constant added. The inversion of each function will yield another harmonic function which has singularities which are the images of the original singularities in a spherical "mirror". Also, the sum of any two harmonic functions will yield another harmonic function. Finally, examples of harmonic functions of n variables are: The constant, linear and affine functions on all of Rn (for example, the electric potential between the plates of a capacitor, and the gravity potential of a slab). The function f(x₁, ..., xₙ) = (x₁² + ⋯ + xₙ²)^(1−n/2) on Rn ∖ {0} for n ≥ 3. 
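The two-variable examples above can be checked numerically with a finite-difference Laplacian. This is an illustrative sketch only, not part of the article; the test point and step size are arbitrary.

```python
# Minimal numerical check: the example functions satisfy Laplace's equation,
# approximated here with a centred five-point finite-difference Laplacian.
import math

def laplacian_2d(f, x, y, h=1e-4):
    """Five-point approximation of f_xx + f_yy at (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

def f1(x, y):               # exp(x) sin(y), harmonic on the whole plane
    return math.exp(x) * math.sin(y)

def f2(x, y):               # ln(x^2 + y^2), harmonic away from the origin
    return math.log(x * x + y * y)

for f in (f1, f2):
    print(laplacian_2d(f, 0.7, -1.3))   # both values should be close to 0
```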
Properties The set of harmonic functions on a given open set U can be seen as the kernel of the Laplace operator Δ and is therefore a vector space over R: linear combinations of harmonic functions are again harmonic. If f is a harmonic function on U, then all partial derivatives of f are also harmonic functions on U. The Laplace operator Δ and the partial derivative operator will commute on this class of functions. In several ways, the harmonic functions are real analogues to holomorphic functions. All harmonic functions are analytic, that is, they can be locally expressed as power series. This is a general fact about elliptic operators, of which the Laplacian is a major example. The uniform limit of a convergent sequence of harmonic functions is still harmonic. This is true because every continuous function satisfying the mean value property is harmonic. Consider the sequence on (−∞, 0) × R defined by fₙ(x, y) = (1/n) exp(nx) cos(ny); this sequence is harmonic and converges uniformly to the zero function; however note that the partial derivatives are not uniformly convergent to the zero function (the derivative of the zero function). This example shows the importance of relying on the mean value property and continuity to argue that the limit is harmonic. Connections with complex function theory The real and imaginary parts of any holomorphic function yield harmonic functions on R² (these are said to be a pair of harmonic conjugate functions). Conversely, any harmonic function u on an open subset Ω of R² is locally the real part of a holomorphic function. This is immediately seen observing that, writing z = x + iy, the complex function g(z) := ∂u/∂x − i ∂u/∂y is holomorphic in Ω, because it satisfies the Cauchy–Riemann equations. Therefore, g locally has a primitive f, and u is the real part of f up to a constant, as ∂u/∂x is the real part of f′ = g. Although the above correspondence with holomorphic functions only holds for functions of two real variables, harmonic functions in n variables still enjoy a number of properties typical of holomorphic functions. They are (real) analytic; they have a maximum principle and a mean-value principle; a theorem of removal of singularities as well as a Liouville theorem hold for them in analogy to the corresponding theorems in complex function theory. Properties of harmonic functions Some important properties of harmonic functions can be deduced from Laplace's equation. Regularity theorem for harmonic functions Harmonic functions are infinitely differentiable in open sets. In fact, harmonic functions are real analytic. Maximum principle Harmonic functions satisfy the following maximum principle: if K is a nonempty compact subset of U, then f restricted to K attains its maximum and minimum on the boundary of K. If U is connected, this means that f cannot have local maxima or minima, other than the exceptional case where f is constant. Similar properties can be shown for subharmonic functions. The mean value property If B(x, r) is a ball with center x and radius r which is completely contained in the open set Ω ⊆ Rn, then the value u(x) of a harmonic function u : Ω → R at the center of the ball is given by the average value of u on the surface of the ball; this average value is also equal to the average value of u in the interior of the ball. In other words, u(x) = (1/(n ωn r^(n−1))) ∫_{∂B(x,r)} u dσ = (1/(ωn r^n)) ∫_{B(x,r)} u dV, where ωn is the volume of the unit ball in n dimensions and σ is the (n − 1)-dimensional surface measure. Conversely, all locally integrable functions satisfying the (volume) mean-value property are both infinitely differentiable and harmonic. 
In terms of convolutions, if denotes the characteristic function of the ball with radius about the origin, normalized so that the function is harmonic on if and only if as soon as Sketch of the proof. The proof of the mean-value property of the harmonic functions and its converse follows immediately observing that the non-homogeneous equation, for any admits an easy explicit solution of class with compact support in . Thus, if is harmonic in holds in the set of all points in with Since is continuous in , converges to as showing the mean value property for in . Conversely, if is any function satisfying the mean-value property in , that is, holds in for all then, iterating times the convolution with one has: so that is because the -fold iterated convolution of is of class with support . Since and are arbitrary, is too. Moreover, for all so that in by the fundamental theorem of the calculus of variations, proving the equivalence between harmonicity and mean-value property. This statement of the mean value property can be generalized as follows: If is any spherically symmetric function supported in such that then In other words, we can take the weighted average of about a point and recover . In particular, by taking to be a function, we can recover the value of at any point even if we only know how acts as a distribution. See Weyl's lemma. Harnack's inequality Let be a connected set in a bounded domain . Then for every non-negative harmonic function , Harnack's inequality holds for some constant that depends only on and . Removal of singularities The following principle of removal of singularities holds for harmonic functions. If is a harmonic function defined on a dotted open subset of , which is less singular at than the fundamental solution (for ), that is then extends to a harmonic function on (compare Riemann's theorem for functions of a complex variable). Liouville's theorem Theorem: If is a harmonic function defined on all of which is bounded above or bounded below, then is constant. (Compare Liouville's theorem for functions of a complex variable). Edward Nelson gave a particularly short proof of this theorem for the case of bounded functions, using the mean value property mentioned above: Given two points, choose two balls with the given points as centers and of equal radius. If the radius is large enough, the two balls will coincide except for an arbitrarily small proportion of their volume. Since is bounded, the averages of it over the two balls are arbitrarily close, and so assumes the same value at any two points. The proof can be adapted to the case where the harmonic function is merely bounded above or below. By adding a constant and possibly multiplying by –1, we may assume that is non-negative. Then for any two points and , and any positive number , we let We then consider the balls and where by the triangle inequality, the first ball is contained in the second. By the averaging property and the monotonicity of the integral, we have (Note that since is independent of , we denote it merely as .) In the last expression, we may multiply and divide by and use the averaging property again, to obtain But as the quantity tends to 1. Thus, The same argument with the roles of and reversed shows that , so that Another proof uses the fact that given a Brownian motion in such that we have for all . In words, it says that a harmonic function defines a martingale for the Brownian motion. Then a probabilistic coupling argument finishes the proof. 
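The key estimate in Nelson's argument above can be written out explicitly. If $|u| \le M$ on all of $\mathbb{R}^n$ and $x$, $y$ are two points, then averaging over the balls $B(x, R)$ and $B(y, R)$ gives
\[
|u(x) - u(y)| = \frac{1}{\omega_n R^n} \left| \int_{B(x,R)} u \, dV - \int_{B(y,R)} u \, dV \right|
\le \frac{M \cdot \operatorname{vol}\bigl(B(x,R) \,\triangle\, B(y,R)\bigr)}{\omega_n R^n},
\]
where, as above, $\omega_n$ is the volume of the unit ball in $n$ dimensions. For fixed $x$ and $y$ the volume of the symmetric difference grows only like $R^{n-1}$, so the right-hand side tends to $0$ as $R \to \infty$ and hence $u(x) = u(y)$.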
Generalizations Weakly harmonic function A function (or, more generally, a distribution) is weakly harmonic if it satisfies Laplace's equation in a weak sense (or, equivalently, in the sense of distributions). A weakly harmonic function coincides almost everywhere with a strongly harmonic function, and is in particular smooth. A weakly harmonic distribution is precisely the distribution associated to a strongly harmonic function, and so also is smooth. This is Weyl's lemma. There are other weak formulations of Laplace's equation that are often useful. One of which is Dirichlet's principle, representing harmonic functions in the Sobolev space as the minimizers of the Dirichlet energy integral with respect to local variations, that is, all functions such that holds for all or equivalently, for all Harmonic functions on manifolds Harmonic functions can be defined on an arbitrary Riemannian manifold, using the Laplace–Beltrami operator . In this context, a function is called harmonic if Many of the properties of harmonic functions on domains in Euclidean space carry over to this more general setting, including the mean value theorem (over geodesic balls), the maximum principle, and the Harnack inequality. With the exception of the mean value theorem, these are easy consequences of the corresponding results for general linear elliptic partial differential equations of the second order. Subharmonic functions A function that satisfies is called subharmonic. This condition guarantees that the maximum principle will hold, although other properties of harmonic functions may fail. More generally, a function is subharmonic if and only if, in the interior of any ball in its domain, its graph lies below that of the harmonic function interpolating its boundary values on the ball. Harmonic forms One generalization of the study of harmonic functions is the study of harmonic forms on Riemannian manifolds, and it is related to the study of cohomology. Also, it is possible to define harmonic vector-valued functions, or harmonic maps of two Riemannian manifolds, which are critical points of a generalized Dirichlet energy functional (this includes harmonic functions as a special case, a result known as Dirichlet principle). This kind of harmonic map appears in the theory of minimal surfaces. For example, a curve, that is, a map from an interval in to a Riemannian manifold, is a harmonic map if and only if it is a geodesic. Harmonic maps between manifolds If and are two Riemannian manifolds, then a harmonic map is defined to be a critical point of the Dirichlet energy in which is the differential of , and the norm is that induced by the metric on and that on on the tensor product bundle Important special cases of harmonic maps between manifolds include minimal surfaces, which are precisely the harmonic immersions of a surface into three-dimensional Euclidean space. More generally, minimal submanifolds are harmonic immersions of one manifold in another. Harmonic coordinates are a harmonic diffeomorphism from a manifold to an open subset of a Euclidean space of the same dimension.
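In symbols, the Dirichlet energy referred to above is, for a real-valued function $u$ on a domain $\Omega \subset \mathbb{R}^n$ and for a map $f \colon M \to N$ between Riemannian manifolds respectively,
\[
E[u] = \frac{1}{2} \int_{\Omega} |\nabla u|^2 \, dV, \qquad
E[f] = \frac{1}{2} \int_{M} \|df\|^2 \, dV_M .
\]
Laplace's equation in the sense of distributions asks that $\int_{\Omega} u \, \Delta\varphi \, dV = 0$ for every smooth, compactly supported test function $\varphi$, while the variational (Dirichlet) formulation for $u$ in the Sobolev space $H^1(\Omega)$ asks that $\int_{\Omega} \nabla u \cdot \nabla v \, dV = 0$ for all such $v$.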
Mathematics
Harmonic analysis
null
55890
https://en.wikipedia.org/wiki/Cocoa%20%28API%29
Cocoa (API)
Cocoa is Apple's native object-oriented application programming interface (API) for its desktop operating system macOS. Cocoa consists of the Foundation Kit, Application Kit, and Core Data frameworks, as included by the Cocoa.h header file, and the libraries and frameworks included by those, such as the C standard library and the Objective-C runtime. Cocoa applications are typically developed using the development tools provided by Apple, specifically Xcode (formerly Project Builder) and Interface Builder (now part of Xcode), using the programming languages Objective-C or Swift. However, the Cocoa programming environment can be accessed using other tools. It is also possible to write Objective-C Cocoa programs in a simple text editor and build them manually with GNU Compiler Collection (GCC) or Clang from the command line or from a makefile. For end users, Cocoa applications are those written using the Cocoa programming environment. Such applications usually have a familiar look and feel, since the Cocoa programming environment provides a lot of common UI elements (such as buttons, scroll bars, etc.), and automates many aspects of an application to comply with Apple's human interface guidelines. For iOS, iPadOS, tvOS, and watchOS, APIs similar to Application Kit, named UIKit and WatchKit, are available; they include gesture recognition, animation, and a different set of graphical control elements that are designed to accommodate the specific platforms they target. Foundation Kit and Core Data are also available on those operating systems, and are used in applications for Apple devices such as the iPhone, the iPod Touch, the iPad, the Apple TV, and the Apple Watch. History Cocoa continues the lineage of several software frameworks (mainly the App Kit and Foundation Kit) from the NeXTSTEP and OpenStep programming environments developed by NeXT in the 1980s and 1990s. Apple acquired NeXT in December 1996, and subsequently went to work on the Rhapsody operating system that was to be the direct successor of OpenStep. It was to have had an emulation base for classic Mac OS applications, named Blue Box. The OpenStep base of libraries and binary support was termed Yellow Box. Rhapsody evolved into Mac OS X, and the Yellow Box became Cocoa. Thus, Cocoa classes begin with the letters NS, such as NSString or NSArray; the prefix stands for NeXTSTEP, the original proprietary term for the OpenStep framework. Much of the work that went into developing OpenStep was applied to developing Mac OS X, Cocoa being the most visible part. However, differences exist. For example, NeXTSTEP and OpenStep used Display PostScript for on-screen display of text and graphics, while Cocoa depends on Apple's Quartz (which uses the Portable Document Format (PDF) imaging model, but not its underlying technology). Cocoa also has a level of Internet support, including the NSURL and WebKit HTML classes, and others, while OpenStep had only rudimentary support for managed network connections via NSFileHandle classes and Berkeley sockets. The API toolbox was originally called "Yellow Box" and was renamed Cocoa, a name that Apple had already trademarked. Apple's Cocoa trademark had originated as the name of a multimedia project design application for children; the name was intended to evoke "Java for kids", as the application ran embedded in web pages. The original "Cocoa" program was discontinued following the return of Steve Jobs to Apple.
At the time, Java was a big focus area for the company, so “Cocoa” was used as the new name for “Yellow Box” because, in addition to the native Objective-C usage, it could also be accessed from Java via a bridging layer. Even though Apple discontinued support for the Cocoa Java bridge, the name continued and was even used for the Cocoa Touch API. Memory management One feature of the Cocoa environment is its facility for managing dynamically allocated memory. Foundation Kit's NSObject class, from which most classes, both vendor and user, are derived, implements a reference counting scheme for memory management. Objects that derive from the NSObject root class respond to a retain and a release message, and keep a retain count. A method titled retainCount exists, but contrary to its name, will usually not return the exact retain count of an object. It is mainly used for system-level purposes. Invoking it manually is not recommended by Apple. A newly allocated object created with alloc or copy has a retain count of one. Sending that object a retain message increments the retain count, while sending it a release message decrements the retain count. When an object's retain count reaches zero, it is deallocated by a procedure similar to a C++ destructor. dealloc is not guaranteed to be invoked. Starting with Objective-C 2.0, the Objective-C runtime implemented an optional garbage collector, which is now obsolete and deprecated in favor of Automatic Reference Counting (ARC). In this model, the runtime turned Cocoa reference counting operations such as "retain" and "release" into no-ops. The garbage collector does not exist on the iOS implementation of Objective-C 2.0. Garbage collection in Objective-C ran on a low-priority background thread, and can halt on Cocoa's user events, with the intention of keeping the user experience responsive. The legacy garbage collector is still available on Mac OS X version 10.13, but no Apple-provided applications use it. In 2011, the LLVM compiler introduced Automatic Reference Counting (ARC), which replaces the conventional garbage collector by performing static analysis of Objective-C source code and inserting retain and release messages as necessary. Main frameworks Cocoa consists of three Objective-C object libraries called frameworks. Frameworks are functionally similar to shared libraries, a compiled object that can be dynamically loaded into a program's address space at runtime, but frameworks add associated resources, header files, and documentation. The Cocoa frameworks are implemented as a type of bundle, containing the aforementioned items in standard locations. Foundation Kit (Foundation), first appeared in Enterprise Objects Framework on NeXTSTEP 3. It was developed as part of the OpenStep work, and subsequently became the basis for OpenStep's AppKit when that system was released in 1994. On macOS, Foundation is based on Core Foundation. Foundation is a generic object-oriented library providing string and value manipulation, containers and iteration, distributed computing, event loops (run loops), and other functions that are not directly tied to the graphical user interface. The "NS" prefix, used for all classes and constants in the framework, comes from Cocoa's OPENSTEP heritage, which was jointly developed by NeXT and Sun Microsystems. Application Kit (AppKit) is directly descended from the original NeXTSTEP Application Kit. It contains code programs can use to create and interact with graphical user interfaces. 
AppKit is built on top of Foundation, and uses the same NS prefix. Core Data is the object persistence framework included with Foundation and Cocoa and found in Cocoa.h. A key part of the Cocoa architecture is its comprehensive views model. This is organized along conventional lines for an application framework, but is based on the Portable Document Format (PDF) drawing model provided by Quartz. This allows creating custom drawing content using PostScript-like drawing commands, which also allows automatic printer support and so forth. Since the Cocoa framework manages all the clipping, scrolling, scaling and other chores of drawing graphics, the programmer is freed from implementing basic infrastructure and can concentrate on the unique aspects of an application's content. Model–view–controller The Smalltalk teams at Xerox PARC eventually settled on a design philosophy that led to easy development and high code reuse. Named model–view–controller (MVC), the concept breaks an application into three sets of interacting object classes: Model classes represent problem domain data and operations (such as lists of people/departments/budgets; documents containing sections/paragraphs/footnotes of stylized text). View classes implement visual representations and affordances for human-computer interaction (such as scrollable grids of captioned icons and pop-up menus of possible operations). Controller classes contain logic that surfaces model data as view representations, maps affordance-initiated user actions to model operations, and maintains state to keep the two synchronized. Cocoa's design is a fairly, but not absolutely strict application of MVC principles. Under OpenStep, most of the classes provided were either high-level View classes (in AppKit) or one of a number of relatively low-level model classes like NSString. Compared to similar MVC systems, OpenStep lacked a strong model layer. No stock class represented a "document," for instance. During the transition to Cocoa, the model layer was expanded greatly, introducing a number of pre-rolled classes to provide functionality common to desktop applications. In Mac OS X 10.3, Apple introduced the NSController family of classes, which provide predefined behavior for the controller layer. These classes are considered part of the Cocoa Bindings system, which also makes extensive use of protocols such as Key-Value Observing and Key-Value Binding. The term 'binding' refers to a relationship between two objects, often between a view and a controller. Bindings allow the developer to focus more on declarative relationships rather than orchestrating fine-grained behavior. With the arrival of Mac OS X 10.4, Apple extended this foundation further by introducing the Core Data framework, which standardizes change tracking and persistence in the model layer. In effect, the framework greatly simplifies the process of making changes to application data, undoing changes when necessary, saving data to disk, and reading it back in. In providing framework support for all three MVC domains, Apple's goal is to reduce the amount of boilerplate or "glue" code that developers have to write, freeing up resources to spend time on application-specific features. Late binding In most object-oriented languages, calls to methods are represented physically by a pointer to the code in memory. This restricts the design of an application since specific command handling classes are needed, usually organized according to the chain-of-responsibility pattern. 
While Cocoa retains this approach for the most part, Objective-C's late binding opens up more flexibility. Under Objective-C, methods are represented by a selector, a string describing the method to call. When a message is sent, the selector is sent into the Objective-C runtime, matched against a list of available methods, and the method's implementation is called. Since the selector is text data, this lets it be saved to a file, transmitted over a network or between processes, or manipulated in other ways. The implementation of the method is looked up at runtime, not compile time. There is a small performance penalty for this, but late binding allows the same selector to reference different implementations. By a similar token, Cocoa provides a pervasive data manipulation method called key-value coding (KVC). This allows a piece of data or property of an object to be looked up or changed at runtime by name. The property name acts as a key to the value. In traditional languages, this late binding is impossible. KVC leads to great design flexibility. An object's type need not be known, yet any property of that object can be discovered using KVC. Also, by extending this system using something Cocoa terms key-value observing (KVO), automatic support for undo-redo is provided. Late static binding is a variant of binding somewhere between static and dynamic binding. The binding of names before the program is run is called static (early); bindings performed as the program runs are dynamic (late or virtual). Rich objects One of the most useful features of Cocoa is the powerful base objects the system supplies. As an example, consider the Foundation classes NSString and NSAttributedString, which provide Unicode strings, and the NSText system in AppKit, which allows the programmer to place string objects in the GUI. NSText and its related classes are used to display and edit strings. The collection of objects involved permit an application to implement anything from a simple single-line text entry field to a complete multi-page, multi-column text layout schema, with full professional typography features such as kerning, ligatures, running text around arbitrary shapes, rotation, full Unicode support, and anti-aliased glyph rendering. Paragraph layout can be controlled automatically or by the user, using a built-in "ruler" object that can be attached to any text view. Spell checking is automatic, using a system-wide set of language dictionaries. Unlimited undo/redo support is built in. Using only the built-in features, one can write a text editor application in as few as 10 lines of code. With new controller objects, this may fall towards zero. When extensions are needed, Cocoa's use of Objective-C makes this a straightforward task. Objective-C includes the concept of "categories," which allows modifying existing class "in-place". Functionality can be accomplished in a category without any changes to the original classes in the framework, or even access to its source. In other common languages, this same task requires deriving a new subclass supporting the added features, and then replacing all instances of the original class with instances of the new subclass. Implementations and bindings The Cocoa frameworks are written in Objective-C. 
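Because selectors and key-value coding are resolved through the Objective-C runtime rather than at compile time, they can be exercised from any language with a Cocoa binding. The short Swift sketch below illustrates both; the Song class and its property are invented for this illustration, while value(forKey:), setValue(_:forKey:), responds(to:) and perform(_:) are the standard NSObject methods described above, and the sketch assumes an Apple platform where the Objective-C runtime is available:

import Foundation

// An NSObject subclass whose members are exposed to the Objective-C runtime.
class Song: NSObject {
    @objc dynamic var title: String = "Untitled"
    @objc func summary() -> String { return "A song called \(title)" }
}

let song = Song()

// Key-value coding: the property is addressed by its name, as a string, at runtime.
song.setValue("Blue in Green", forKey: "title")
print(song.value(forKey: "title") as? String ?? "")       // "Blue in Green"

// Late binding: the selector is plain data; the implementation is looked up when the message is sent.
let selector = NSSelectorFromString("summary")
if song.responds(to: selector) {
    let result = song.perform(selector)?.takeUnretainedValue() as? String
    print(result ?? "")                                    // "A song called Blue in Green"
}

This string-based lookup is what lets nib files, bindings and the responder chain connect objects that were never compiled against one another.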
Java bindings for the Cocoa frameworks (termed the Java bridge) were also made available with the aim of replacing Objective-C with a more popular language but these bindings were unpopular among Cocoa developers and Cocoa's message passing semantics did not translate well to a statically-typed language such as Java. Cocoa's need for runtime binding means many of Cocoa's key features are not available with Java. In 2005, Apple announced that the Java bridge was to be deprecated, meaning that features added to Cocoa in macOS versions later than 10.4 would not be added to the Cocoa-Java programming interface. At Apple Worldwide Developers Conference (WWDC) 2014, Apple introduced a new programming language named Swift, which is intended to replace Objective-C. AppleScriptObjC Originally, AppleScript Studio could be used to develop simpler Cocoa applications. However, as of Snow Leopard, it has been deprecated. It was replaced with AppleScriptObjC, which allows programming in AppleScript, while using Cocoa frameworks. Other bindings The Cocoa programming environment can be accessed using other tools with the aid of bridge mechanisms such as PasCocoa, PyObjC, CamelBones, RubyCocoa, and a D/Objective-C Bridge. Third-party bindings available for other languages include AppleScript, Clozure CL, Monobjc and NObjective (C#), Cocoa# (CLI), Cocodao and D/Objective-C Bridge, LispWorks, Object Pascal, CamelBones (Perl), PyObjC (Python), FPC PasCocoa (Lazarus and Free Pascal), RubyCocoa (Ruby). A Ruby language implementation named MacRuby, which removes the need for a bridge mechanism, was formerly developed by Apple, while Nu is a Lisp-like language that uses the Objective-C object model directly, and thus can use the Cocoa frameworks without needing a binding. Other implementations There are also open source implementations of major parts of the Cocoa framework, such as GNUstep and Cocotron, which allow cross-platform Cocoa application development to target other operating systems, such as Microsoft Windows and Linux.
Technology
Software development: General
null
55924
https://en.wikipedia.org/wiki/Prostate
Prostate
The prostate is an accessory gland of the male reproductive system and a muscle-driven mechanical switch between urination and ejaculation. It is found in all male mammals. It differs between species anatomically, chemically, and physiologically. Anatomically, the prostate is found below the bladder, with the urethra passing through it. It is described in gross anatomy as consisting of lobes and in microanatomy by zone. It is surrounded by an elastic, fibromuscular capsule and contains glandular tissue, as well as connective tissue. The prostate produces and contains fluid that forms part of semen, the substance emitted during ejaculation as part of the male sexual response. This prostatic fluid is slightly alkaline, milky or white in appearance. The alkalinity of semen helps neutralize the acidity of the vaginal tract, prolonging the lifespan of sperm. The prostatic fluid is expelled in the first part of ejaculate, together with most of the sperm, because of the action of smooth muscle tissue within the prostate. In comparison with the few spermatozoa expelled together with mainly seminal vesicular fluid, those in prostatic fluid have better motility, longer survival, and better protection of genetic material. Disorders of the prostate include enlargement, inflammation, infection, and cancer. The word prostate is derived from Ancient Greek προστάτης (prostátēs), meaning "one who stands before", "protector", "guardian", with the term originally used to describe the seminal vesicles. Structure The prostate is an exocrine gland of the male reproductive system. In adults, it is about the size of a walnut, with an average weight of about 11 grams, usually ranging between 7 and 16 grams. The prostate is located in the pelvis. It sits below the urinary bladder and surrounds the urethra. The part of the urethra passing through it is called the prostatic urethra, which joins with the two ejaculatory ducts. The prostate is covered in a surface called the prostatic capsule or prostatic fascia. The internal structure of the prostate has been described using both lobes and zones. Because of the variation in descriptions and definitions of lobes, the zone classification is used more predominantly. The prostate has been described as consisting of three or four zones. Zones are more typically able to be seen on histology, or in medical imaging, such as ultrasound or MRI. The "lobe" classification describes lobes that, while originally defined in the fetus, are also visible in gross anatomy, including dissection and when viewed endoscopically. The five lobes are the anterior lobe or isthmus, the posterior lobe, the right and left lateral lobes, and the middle or median lobe. Inside of the prostate, adjacent and parallel to the prostatic urethra, there are two longitudinal muscle systems: on the front side (ventrally) runs the urethral dilator (musculus dilatator urethrae), and on the back side (dorsally) runs the muscle that switches the urethra into the ejaculatory state (musculus ejaculatorius). Blood and lymphatic vessels The prostate receives blood through the inferior vesical artery, internal pudendal artery, and middle rectal arteries. These vessels enter the prostate on its outer surface where it meets the bladder, and travel forward to the apex of the prostate. Both the inferior vesical and the middle rectal arteries often arise together directly from the internal iliac arteries.
On entering the bladder, the inferior vesical artery splits into a urethral branch, supplying the urethral prostate; and a capsular branch, which travels around the capsule and has smaller branches, which perforate into the prostate. The veins of the prostate form a network – the prostatic venous plexus, primarily around its front and outer surface. This network also receives blood from the deep dorsal vein of the penis, and is connected via branches to the vesical plexus and internal pudendal veins. Veins drain into the vesical and then internal iliac veins. The lymphatic drainage of the prostate depends on the positioning of the area. Vessels surrounding the vas deferens, some of the vessels in the seminal vesicle, and a vessel from the posterior surface of the prostate drain into the external iliac lymph nodes. Some of the seminal vesicle vessels, prostatic vessels, and vessels from the anterior prostate drain into internal iliac lymph nodes. Vessels of the prostate itself also drain into the obturator and sacral lymph nodes. Microanatomy The prostate consists of glandular and connective tissue. Tall column-shaped cells form the lining (the epithelium) of the glands. These form one layer or may be pseudostratified. The epithelium is highly variable and areas of low cuboidal or flat cells can also be present, with transitional epithelium in the outer regions of the longer ducts. Basal cells surround the luminal epithelial cells in benign glands. The glands are formed as many follicles, which drain into canals and subsequently 12–20 main ducts, These in turn drain into the urethra as it passes through the prostate. There are also a small amount of flat cells, which sit next to the basement membranes of glands, and act as stem cells. The connective tissue of the prostate is made up of fibrous tissue and smooth muscle. The fibrous tissue separates the gland into lobules. It also sits between the glands and is composed of randomly orientated smooth-muscle bundles that are continuous with the bladder. Over time, thickened secretions called corpora amylacea accumulate in the gland. Gene and protein expression About 20,000 protein-coding genes are expressed in human cells and almost 75% of these genes are expressed in the normal prostate. About 150 of these genes are more specifically expressed in the prostate, with about 20 genes being highly prostate specific. The corresponding specific proteins are expressed in the glandular and secretory cells of the prostatic gland and have functions that are important for the characteristics of semen, including prostate-specific proteins, such as the prostate specific antigen (PSA), and the prostatic acid phosphatase. Development In the developing embryo, at the hind end lies an inpouching called the cloaca. This, over the fourth to the seventh week, divides into a urogenital sinus and the beginnings of the anal canal, with a wall forming between these two inpouchings called the urorectal septum. The urogenital sinus divides into three parts, with the middle part forming the urethra; the upper part is largest and becomes the urinary bladder, and the lower part then changes depending on the biological sex of the embryo. The prostatic part of the urethra develops from the middle, pelvic, part of the urogenital sinus, which is of endodermal origin. Around the end of the third month of embryonic life, outgrowths arise from the prostatic part of the urethra and grow into the surrounding mesenchyme. 
The cells lining this part of the urethra differentiate into the glandular epithelium of the prostate. The associated mesenchyme differentiates into the dense connective tissue and the smooth muscle of the prostate. Condensation of mesenchyme, urethra, and Wolffian ducts gives rise to the adult prostate gland, a composite organ made up of several tightly fused glandular and non-glandular components. To function properly, the prostate needs male hormones (androgens), which are responsible for male sex characteristics. The main male hormone is testosterone, which is produced mainly by the testicles. It is dihydrotestosterone (DHT), a metabolite of testosterone, that predominantly regulates the prostate. The prostate gland enlarges over time, until the fourth decade of life. Function In ejaculation The prostate secretes fluid, which becomes part of the semen. Its secretion forms up to 30% of the semen. Semen is the fluid emitted (ejaculated) by males during the sexual response. When sperm are emitted, they are transmitted from the vas deferens into the male urethra via the ejaculatory duct, which lies within the prostate gland. Ejaculation is the expulsion of semen from the urethra. Semen is moved into the urethra following contractions of the smooth muscle of the vas deferens and seminal vesicles, following stimulation, primarily of the glans penis. Stimulation sends nerve signals via the internal pudendal nerves to the upper lumbar spine; the nerve signals causing contraction act via the hypogastric nerves. After traveling into the urethra, the seminal fluid is ejaculated by contraction of the bulbocavernosus muscle. The secretions of the prostate include proteolytic enzymes, prostatic acid phosphatase, fibrinolysin, zinc, and prostate-specific antigen. Together with the secretions from the seminal vesicles, these form the major fluid part of semen. The prostate contains various metals, including zinc, and is known to be the primary source of most metals found in semen, which are released during ejaculation. In urination The prostate's changes of shape, which facilitate the mechanical switch between urination and ejaculation, are mainly driven by the two longitudinal muscle systems running along the prostatic urethra. These are the urethral dilator (musculus dilatator urethrae) on the urethra's front side, which contracts during urination and thereby shortens and tilts the prostate in its vertical dimension thus widening the prostatic section of the urethral tube, and the muscle switching the urethra into the ejaculatory state (musculus ejaculatorius) on its backside. In case of an operation, e.g. because of benign prostatic hyperplasia (BPH), damaging or sparing of these two muscle systems varies considerably depending on the choice of operation type and details of the procedure of the chosen technique. The effects on postoperational urination and ejaculation vary correspondingly. In stimulation It is possible for some men to achieve orgasm solely through stimulation of the prostate gland, such as via prostate massage or anal intercourse. This has led to the area of the rectal wall adjacent to the prostate to be popularly referred to as the "male G-spot". Clinical significance Inflammation Prostatitis is inflammation of the prostate gland. It can be caused by infection with bacteria, or other noninfective causes. Inflammation of the prostate can cause painful urination or ejaculation, groin pain, difficulty passing urine, or constitutional symptoms such as fever or tiredness. 
When inflamed, the prostate becomes enlarged and is tender when touched during digital rectal examination. The bacteria responsible for the infection may be detected by a urine culture. Acute prostatitis and chronic bacterial prostatitis are treated with antibiotics. Chronic non-bacterial prostatitis, or male chronic pelvic pain syndrome, is treated by a large variety of modalities including the medications alpha blockers, non-steroidal anti-inflammatories and amitriptyline, antihistamines, and other anxiolytics. Other treatments that are not medications may include physical therapy, psychotherapy, nerve modulators, and surgery. More recently, a combination of trigger point and psychological therapy has proved effective for category III prostatitis as well. Prostate enlargement An enlarged prostate is called prostatomegaly, with benign prostatic hyperplasia (BPH) being the most common cause. BPH refers to an enlargement of the prostate due to an increase in the number of cells that make up the prostate (hyperplasia) from a cause that is not a malignancy. It is very common in older men. It is often diagnosed when the prostate has enlarged to the point where urination becomes difficult. Symptoms include needing to urinate often (urinary frequency) or taking a while to get started (urinary hesitancy). If the prostate grows too large, it may constrict the urethra and impede the flow of urine, making urination painful and difficult, or in extreme cases completely impossible, causing urinary retention. Over time, chronic retention may cause the bladder to become larger and cause a backflow of urine into the kidneys (hydronephrosis). BPH can be treated with medication, a minimally invasive procedure or, in extreme cases, surgery that removes the prostate. In general, treatment often begins with an alpha-1 adrenergic receptor antagonist medication such as tamsulosin, which reduces the tone of the smooth muscle found in the urethra that passes through the prostate, making it easier for urine to pass through. For people with persistent symptoms, procedures may be considered. The surgery most often used in such cases is transurethral resection of the prostate, in which an instrument is inserted through the urethra to remove prostate tissue that is pressing against the upper part of the urethra and restricting the flow of urine. Minimally invasive procedures include transurethral needle ablation of the prostate and transurethral microwave thermotherapy. These outpatient procedures may be followed by the insertion of a temporary stent, to allow normal voluntary urination without exacerbating irritative symptoms. Cancer Prostate cancer is one of the most common cancers affecting older men in the UK, US, Northern Europe and Australia, and a significant cause of death for elderly men worldwide. Often, a person does not have symptoms; when they do occur, symptoms may include urinary frequency, urgency, hesitation and other symptoms associated with BPH. Uncommonly, such cancers may cause weight loss, retention of urine, or symptoms such as back pain due to lesions that have spread outside of the prostate. A digital rectal examination and the measurement of a prostate-specific antigen (PSA) level are usually the first investigations done to check for prostate cancer. PSA values are difficult to interpret, because a high value might be present in a person without cancer, and a low value can be present in someone with cancer.
The next form of testing is often the taking of a prostate biopsy to assess for tumour activity and invasiveness. Because of the significant risk of overdiagnosis with widespread screening in the general population, prostate cancer screening is controversial. If a tumour is confirmed, medical imaging such as an MRI or bone scan may be done to check for the presence of tumour in other parts of the body. Prostate cancer that is only present in the prostate is often treated with either surgical removal of the prostate or with radiotherapy or by the insertion of small radioactive particles of iodine-125 or palladium-103, called brachytherapy. Cancer that has spread to other parts of the body is usually treated also with hormone therapy, to deprive a tumour of sex hormones (androgens) that stimulate proliferation. This is often done through the use of GnRH analogues or agents (such as bicalutamide) that block the receptors that androgens act on; occasionally, surgical removal of the testes may be done instead. Cancer that does not respond to hormonal treatment, or that progresses after treatment, might be treated with chemotherapy such as docetaxel. Radiotherapy may also be used to help with pain associated with bony lesions. Sometimes, the decision may be made not to treat prostate cancer. If a cancer is small and localised, the decision may be made to monitor for cancer activity at intervals ("active surveillance") and defer treatment. If a person, because of frailty or other medical conditions or reasons, has a life expectancy less than ten years, then the impacts of treatment may outweigh any perceived benefits. Surgery Surgery to remove the prostate is called prostatectomy, and is usually done as a treatment for cancer limited to the prostate, or prostatic enlargement. When it is done, it may be done as open surgery or as laparoscopic (keyhole) surgery. These are done under general anaesthetic. Usually the procedure for cancer is a radical prostatectomy, which means that the seminal vesicles are removed and the vasa deferentia are also tied off. Part of the prostate can also be removed from within the urethra, called transurethral resection of the prostate (TURP). Open surgery may involve a cut that is made in the perineum, or via an approach that involves a cut down the midline from the belly button to the pubic bone. Open surgery may be preferred if there is a suspicion that lymph nodes are involved and they need to be removed or biopsied during a procedure. A perineal approach will not involve lymph node removal and may result in less pain and a faster recovery following an operation. A TURP procedure uses a tube inserted into the urethra via the penis and some form of heat, electricity or laser to remove prostate tissue. The whole prostate can be removed. Complications that might develop because of surgery include urinary incontinence and erectile dysfunction because of damage to nerves during the operation, particularly if a cancer is very close to nerves. Ejaculation of semen will not occur during orgasm if the vasa deferentia are tied off and seminal vesicles removed, such as during a radical prosatectomy. This will mean a man becomes infertile. Sometimes, orgasm may not be able to occur or may be painful. The penis length may shorten slightly if the part of the urethra within the prostate is also removed. General complications due to surgery can also develop, such as infections, bleeding, inadvertent damage to nearby organs or within the abdomen, and the formation of blood clots. 
History The prostate was first formally identified by Venetian anatomist Niccolò Massa in Anatomiae libri introductorius (Introduction to Anatomy) in 1536 and illustrated by Flemish anatomist Andreas Vesalius in Tabulae anatomicae sex (six anatomical tables) in 1538. Massa described it as a "glandular flesh upon which rests the neck of the bladder," and Vesalius as a "glandular body". The first use of a word similar to prostate to describe the gland is credited to André du Laurens in 1600, who described it as a term already in use by anatomists at the time. The term was however used at least as early as 1549 by French surgeon Ambroise Paré. At the time, Du Laurens was describing what was considered to be a pair of organs (not the single two-lobed organ), and the Latin term prostatae that was used was a mistranslation of the term for the Ancient Greek word used to describe the seminal vesicles, parastatai; although it has been argued that surgeons in Ancient Greece and Rome must have at least seen the prostate as an anatomical entity. The term prostatae was taken rather than the grammatically correct prostator (singular) and prostatores (plural) because the gender of the Ancient Greek term was taken as female, when it was in fact male. The fact that the prostate was one and not two organs was an idea popularised throughout the early 18th century, as was the English language term used to describe the organ, prostate, attributed to William Cheselden. A monograph, "Practical observations on the treatment of the diseases of the prostate gland" by Everard Home in 1811, was important in the history of the prostate by describing and naming anatomical parts of the prostate, including the median lobe. The idea of the five lobes of the prostate was popularized following anatomical studies conducted by American urologist Oswald Lowsley in 1912. John E. McNeal first proposed the idea of "zones" in 1968; McNeal found that the relatively homogeneous cut surface of an adult prostate in no way resembled "lobes", which led to the description of "zones". Prostate cancer was first described in a speech to the Medical and Chirurgical Society of London in 1853 by surgeon John Adams, and was increasingly described by the late 19th century. Prostate cancer was initially considered a rare disease, probably because of shorter life expectancies and poorer detection methods in the 19th century. The first treatments of prostate cancer were surgeries to relieve urinary obstruction. Samuel David Gross has been credited with the first mention of a prostatectomy, which he dismissed as "too absurd to be seriously entertained". The first removal of the prostate for prostate cancer (radical perineal prostatectomy) was performed in 1904 by Hugh H. Young at Johns Hopkins Hospital; partial removal of the gland had been conducted by Theodore Billroth in 1867. Transurethral resection of the prostate (TURP) replaced radical prostatectomy for symptomatic relief of obstruction in the middle of the 20th century because it could better preserve penile erectile function. Radical retropubic prostatectomy was developed in 1983 by Patrick Walsh. In 1941, Charles B. Huggins published studies in which he used estrogen to oppose testosterone production in men with metastatic prostate cancer. This discovery of "chemical castration" won Huggins the 1966 Nobel Prize in Physiology or Medicine. The role of the gonadotropin-releasing hormone (GnRH) in reproduction was determined by Andrzej W.
Schally and Roger Guillemin, who both won the 1977 Nobel Prize in Physiology or Medicine for this work. GnRH receptor agonists, such as leuprorelin and goserelin, were subsequently developed and used to treat prostate cancer. Radiation therapy for prostate cancer was first developed in the early 20th century and initially consisted of intraprostatic radium implants. External beam radiotherapy became more popular as stronger X-ray radiation sources became available in the middle of the 20th century. Brachytherapy with implanted seeds (for prostate cancer) was first described in 1983. Systemic chemotherapy for prostate cancer was first studied in the 1970s. The initial regimen of cyclophosphamide and 5-fluorouracil was quickly joined by multiple regimens using a host of other systemic chemotherapy drugs. Other animals The prostate is found only in mammals. The prostate glands of male marsupials are proportionally larger than those of placental mammals. The presence of a functional prostate in monotremes is controversial, and if monotremes do possess functional prostates, they may not make the same contribution to semen as in other mammals. The structure of the prostate varies, ranging from tubuloalveolar (as in humans) to branched tubular. The gland is particularly well developed in carnivorans and boars, though in other mammals, such as bulls, it can be small and inconspicuous. In other animals, such as marsupials and small ruminants, the prostate is disseminate, meaning not specifically localisable as a distinct tissue, but present throughout the relevant part of the urethra; in other animals, such as red deer and American elk, it may be present as a specific organ and in a disseminate form. In some marsupial species, the size of the prostate gland changes seasonally. The prostate is the only accessory gland that occurs in male dogs. Dogs can produce in one hour as much prostatic fluid as a human can in a day. They excrete this fluid along with their urine to mark their territory. Additionally, dogs are the only species apart from humans seen to have a significant incidence of prostate cancer. The prostate is the only male accessory gland that occurs in cetaceans, consisting of diffuse urethral glands surrounded by a very powerful compressor muscle. The prostate gland originates with tissues in the urethral wall. This means the urethra, a compressible tube used for urination, runs through the middle of the prostate; enlargement of the prostate can constrict the urethra so that urinating becomes slow and painful. Prostatic secretions vary among species. They are generally composed of simple sugars and are often slightly alkaline. In eutherian mammals, these secretions usually contain fructose. The prostatic secretions of marsupials usually contain N-Acetylglucosamine or glycogen instead of fructose. Skene's gland Because the Skene's gland and the male prostate act similarly by secreting prostate-specific antigen (PSA), which is an ejaculate protein produced in males, and of prostate-specific acid phosphatase, the Skene's gland is sometimes referred to as the "female prostate". Although homologous to the male prostate (developed from the same embryological tissues), various aspects of its development in relation to the male prostate are widely unknown and a matter of research.
Biology and health sciences
Reproductive system
Biology
55927
https://en.wikipedia.org/wiki/Urinary%20system
Urinary system
The human urinary system, also known as the urinary tract or renal system, consists of the kidneys, ureters, bladder, and the urethra. The purpose of the urinary system is to eliminate waste from the body, regulate blood volume and blood pressure, control levels of electrolytes and metabolites, and regulate blood pH. The urinary tract is the body's drainage system for the eventual removal of urine. The kidneys have an extensive blood supply via the renal arteries; blood leaves the kidneys via the renal veins. Each kidney consists of functional units called nephrons. Following filtration of blood and further processing, wastes (in the form of urine) exit the kidney via the ureters, tubes made of smooth muscle fibres that propel urine towards the urinary bladder, where it is stored and subsequently expelled through the urethra during urination. The female and male urinary systems are very similar, differing only in the length of the urethra. 800–2,000 milliliters (mL) of urine are normally produced every day in a healthy human. This amount varies according to fluid intake and kidney function. Structure The urinary system refers to the structures that produce and transport urine to the point of excretion. In the human urinary system there are two kidneys that are located between the dorsal body wall and parietal peritoneum on both the left and right sides. The formation of urine begins within the functional units of the kidney, the nephrons. Urine then flows through the nephrons, through a system of converging tubules called collecting ducts. These collecting ducts then join together to form the minor calyces, followed by the major calyces that ultimately join the renal pelvis. From here, urine continues its flow from the renal pelvis into the ureter, which transports it into the urinary bladder. The anatomy of the human urinary system differs between males and females at the level of the urinary bladder. In males, the urethra begins at the internal urethral orifice in the trigone of the bladder, continues through the prostatic, membranous, bulbar, and penile portions, and ends at the external urethral orifice. Urine exits the male urethra through the urinary meatus in the glans penis. The female urethra is much shorter, beginning at the bladder neck and terminating in the vulval vestibule. Development Microanatomy Under microscopy, the urinary system is covered in a unique lining called urothelium, a type of transitional epithelium. Unlike the epithelial lining of most organs, transitional epithelium can flatten and distend. Urothelium covers most of the urinary system, including the renal pelvis, ureters, and bladder. Function The main functions of the urinary system and its components are to: regulate blood volume and composition (e.g. sodium, potassium and calcium); regulate blood pressure; regulate the pH homeostasis of the blood; contribute to the production of red blood cells by the kidney; help synthesize calcitriol (the active form of vitamin D); and store waste products (mainly urea and uric acid) before they and other products are removed from the body. Urine formation Average urine production in adult humans is about 1–2 litres (L) per day, depending on state of hydration, activity level, environmental factors, weight, and the individual's health. Producing too much or too little urine requires medical attention. Polyuria is a condition of excessive urine production (> 2.5 L/day). Conditions involving low output of urine are oliguria (< 400 mL/day) and anuria (< 100 mL/day).
The first step in urine formation is the filtration of blood in the kidneys. In a healthy human, the kidney receives between 12 and 30% of cardiac output, but it averages about 20%, or about 1.25 L/min. The basic structural and functional unit of the kidney is the nephron. Its chief function is to regulate the concentration of water and soluble substances like sodium by filtering the blood, reabsorbing what is needed and excreting the rest as urine. In the first part of the nephron, Bowman's capsule filters blood from the circulatory system into the tubules. Hydrostatic and osmotic pressure gradients facilitate filtration across a semipermeable membrane. The filtrate includes water, small molecules, and ions that easily pass through the filtration membrane. However, larger molecules such as proteins and blood cells are prevented from passing through the filtration membrane. The amount of filtrate produced every minute is called the glomerular filtration rate (GFR); in a healthy adult it is roughly 125 mL per minute, which amounts to about 180 litres per day. About 99% of this filtrate is reabsorbed as it passes through the nephron and the remaining 1% becomes urine. The urinary system is regulated by the endocrine system through hormones such as antidiuretic hormone, aldosterone, and parathyroid hormone. Regulation of concentration and volume The urinary system is under the influence of the circulatory system, nervous system, and endocrine system. Aldosterone plays a central role in regulating blood pressure through its effects on the kidney. It acts on the distal tubules and collecting ducts of the nephron and increases reabsorption of sodium from the glomerular filtrate. Reabsorption of sodium results in retention of water, which increases blood pressure and blood volume. Antidiuretic hormone (ADH), also known as vasopressin, is a neurohypophysial hormone found in most mammals. Its two primary functions are to retain water in the body and to constrict blood vessels. Vasopressin regulates the body's retention of water by increasing water reabsorption in the collecting ducts of the kidney nephron. Vasopressin increases water permeability of the kidney's collecting duct and distal convoluted tubule by inducing translocation of aquaporin-CD water channels in the kidney nephron collecting duct plasma membrane. Urination Urination, also sometimes referred to as micturition, is the ejection of urine from the urinary bladder to the outside of the body. Urine is ejected through the urethra from the penis or vulva in placental mammals and through the cloaca in other vertebrates. In healthy humans (and many other animals), the process of urination is under voluntary control. In infants, some elderly individuals, and those with neurological injury, urination may occur as an involuntary reflex. Physiologically, micturition involves coordination between the central, autonomic, and somatic nervous systems. Brain centers that regulate urination include the pontine micturition center, periaqueductal gray, and the cerebral cortex. Clinical significance Urologic disease can involve congenital or acquired dysfunction of the urinary system. As an example, urinary tract obstruction is a urologic disease that can cause urinary retention. Diseases of the kidney tissue are normally treated by nephrologists, while diseases of the urinary tract are treated by urologists. Gynecologists may also treat female urinary incontinence. Diseases of other bodily systems also have a direct effect on urogenital function.
For instance, it has been shown that protein released by the kidneys in diabetes mellitus sensitizes the kidney to the damaging effects of hypertension. Diabetes also can have a direct effect on urination due to peripheral neuropathies, which occur in some individuals with poorly controlled blood sugar levels. Urinary incontinence can result from a weakening of the pelvic floor muscles caused by factors such as pregnancy, childbirth, aging, and being overweight. Findings from recent systematic reviews demonstrate that behavioral therapy generally results in better urinary incontinence outcomes than medications alone, especially for stress and urge urinary incontinence. Pelvic floor exercises, known as Kegel exercises, can help in this condition by strengthening the pelvic floor. There can also be underlying medical reasons for urinary incontinence which are often treatable. In children, the condition is called enuresis. Some cancers also target the urinary system, including bladder cancer, kidney cancer, ureteral cancer, and urethral cancer. Due to the role and location of these organs, treatment is often complicated. History Kidney stones have been identified and recorded for about as long as written historical records exist. The urinary tract, including the ureters, as well as their function in draining urine from the kidneys, was described by Galen in the second century AD. The first to examine the ureter through an internal approach, called ureteroscopy, rather than surgery was Hampton Young in 1929. This was improved on by VF Marshall, who published the first use of a flexible endoscope based on fiber optics in 1964. The insertion of a drainage tube into the renal pelvis, bypassing the ureters and urinary tract, called nephrostomy, was first described in 1941. Such an approach differed greatly from the open surgical approaches within the urinary system employed during the preceding two millennia.
Biology and health sciences
Urinary system
null